Autism is a developmental disorder that manifests in early childhood and persists throughout life, profoundly affecting social behavior and hindering the acquisition of learning and social skills. With technological advancements, an increasing array of tools is being used to support the education of students with autism spectrum disorder (ASD), aiming to improve their educational outcomes and social capabilities. Numerous studies on autism intervention have highlighted the effectiveness of social robots in behavioral treatment. However, research on their integration into classroom settings for children with ASD remains sparse. This paper describes the design and implementation of a group experiment in a collective classroom setting mediated by a NAO robot: special education teachers and the NAO robot jointly conducted classroom activities, aiming to foster a dynamic learning environment through interactions among teachers, the robot, and students. The experiment, conducted in a special education school, served as a foundational study in anticipation of introducing extended robot-assisted classroom sessions at a later date. Data from the experiment suggest that ASD students in classrooms equipped with a NAO robot performed notably better than those in regular classrooms. Our preliminary findings indicate that NAO robots substantially enhance focus and classroom engagement among students with ASD, potentially improving educational performance and fostering enhanced social functioning.
Artificial intelligence (AI) is often viewed as a threat to academic integrity, and several questions have arisen about its dark side. While AI offers many opportunities for teaching and learning, questions have also arisen regarding its impact on the future of education. This paper offers a critical perspective on the dark side of AI in education by drawing on critical social theory. It examines how AI can deepen processes of educational inequality, entrench power relations, and erode cultural identity. The findings illustrate that differential levels of economic development, digital infrastructure, cultural norms, and technological capacity significantly influence the integration and impact of AI across global education systems. To counter these risks, this paper calls for AI developers to adopt inclusive design from the outset so that these tools are accessible, adaptable, and affordable for the economies of the Global South. The paper further highlights the urgency of incorporating African voices, values, ethics, and worldviews into global AI governance conversations. Africa and other developing countries must not merely be sites of AI deployment but key actors in shaping the educational futures these technologies enable. Their expertise is essential to confronting colonial legacies in technological innovation and ensuring that AI advances democratic empowerment rather than digital dependency.
The course project report (CPR) is a crucial component for assessing students’ learning outcomes from the courses they are studying. It assesses practical skills, academic writing, and logical thinking. In recent years, researchers have increasingly leveraged large language models (LLMs) for automated essay scoring (AES) in educational intelligence, owing to their strong generalization and reasoning abilities. However, existing LLM-based AES methods are designed solely around writing proficiency and inevitably overlook the assessment of cognitive engagement and practical competencies in CPRs. Additionally, CPR writing is a reflective process that involves knowledge inquiry and cognition through critical thinking (CT), which has rarely been explored in the design of prompts for specific LLMs. To tackle this issue, we propose a novel guided generative AI (GenAI) prompting framework for automated CPR assessment. It integrates the Paul-Elder critical thinking concept into prompt design to enhance domain-specific knowledge transfer and the analytical capabilities of GenAI LLMs. Rather than focusing solely on language structure or writing skills, our approach emphasizes critical thinking evaluation using the Paul-Elder CT framework. Specifically, our framework, PEG-Prompt, scores CPRs across six dimensions (structure, logic, coherence, originality, citation, and knowledge proficiency), evaluating them comprehensively in terms of practical competencies, analytical reasoning, and writing skills. To further improve its assessment performance, we combine PEG-Prompt with key content extracted from reports and representative few-shot scoring examples. Experimental results demonstrate that PEG-Prompt significantly improves the correlation between LLM-generated scores and human scores.
The enhanced framework may enable students to receive helpful feedback and summaries of their CPR results through GenAI once it has been calibrated with human evaluators.
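To make the described pipeline concrete, the following is a minimal sketch of how a Paul-Elder-guided scoring prompt might be assembled from the three ingredients the abstract names: a six-dimension rubric, key content extracted from the report, and few-shot scoring examples. The dimension names come from the abstract; the rubric wording, the example data, and the `build_peg_prompt` function are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a PEG-Prompt-style prompt builder.
# Only the six dimension names are taken from the paper; everything
# else (rubric phrasing, few-shot format, function name) is hypothetical.

DIMENSIONS = [
    "structure", "logic", "coherence",
    "originality", "citation", "knowledge proficiency",
]

def build_peg_prompt(report_key_content, few_shot_examples):
    """Compose a CPR scoring prompt from rubric, examples, and key content."""
    # One rubric line per dimension, framed by Paul-Elder CT standards.
    rubric = "\n".join(
        f"- {dim}: rate 1-5 against Paul-Elder critical-thinking standards"
        for dim in DIMENSIONS
    )
    # Few-shot examples pair a report excerpt with reference scores.
    shots = "\n\n".join(
        f"Example report excerpt:\n{ex['excerpt']}\nScores: {ex['scores']}"
        for ex in few_shot_examples
    )
    return (
        "You are an assessor of course project reports (CPRs).\n"
        f"Score the report on six dimensions:\n{rubric}\n\n"
        f"{shots}\n\n"
        f"Report key content:\n{report_key_content}\n"
        "Return one integer score per dimension with a brief justification."
    )

prompt = build_peg_prompt(
    report_key_content="Methods: survey of 120 students; Results: ...",
    few_shot_examples=[{"excerpt": "We first review prior work on ...",
                        "scores": {"structure": 4, "logic": 3}}],
)
print(prompt)
```

The resulting string would be sent to an LLM; calibrating the returned scores against human evaluators, as the abstract notes, is a separate step outside this sketch.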
The integration of generative artificial intelligence (GenAI) into dissertation writing has sparked debates about where it can legitimately augment a writing process that must keep human intelligence at its core, and about how to write GenAI prompts that produce effective output. The present study explores this topic using qualitative data gathered from a survey of 86 doctoral students and 7 thesis supervisors in the social sciences and humanities. We applied the AI Assessment Scale developed by Perkins et al. (2024) to evaluate GenAI’s role across the stages of doctoral dissertation writing and to explore pedagogical adaptations of GenAI that support dissertation writing in these disciplines. Our findings indicate that GenAI can be fully utilized to improve writing mechanics, including grammar, structure, and coherence, thereby enhancing clarity and efficiency. GenAI also proves beneficial in analyzing larger datasets by defining a coding frame, identifying trends, and conducting sentiment analyses. It can be used in argument structuring by organizing literature, suggesting logical ways to arrange sentences, and generating counterarguments. The participants agreed that these applications saved time and allowed them to focus on deeper intellectual engagement. However, they recommended limiting or prohibiting GenAI use in areas that require critical reasoning, originality, and cultural context. Moreover, they underscored that AI-generated content may lack accuracy and contextual depth, requiring careful human validation to guard against vague or inaccurate expressions. The study focuses on prompt literacy and provides a scale for utilizing GenAI in doctoral dissertation writing.
This paper examines how Chinese secondary and tertiary English as a second language (ESL) learners engage with generative AI (GenAI) tools, such as ChatGPT, Claude, Doubao, and Pigai, not merely as writing aids but as coauthors in the academic writing process. Against the backdrop of an assessment-centered education system that emphasizes memorization and structured learning, GenAI opens new possibilities for dialogic learning and critical thinking. Drawing on case studies and current research, this paper examines how prompt engineering, both as a technical and pedagogical skill, supports digital literacy, rhetorical awareness, and metacognition. It further investigates how GenAI scaffolds language production and supports student agency, while also presenting risks such as epistemic dependency, reduced critical thinking, and ethical ambiguity. The concept of Human–AI co-learning is advanced as a theoretical framework for understanding this interaction. The paper concludes by calling for critical AI literacy, educator mediation, and culturally responsive pedagogy that reconciles traditional Chinese learning practices with reflective engagement in digital environments. By reframing GenAI from a shortcut to a scaffold, this study proposes a pedagogical model that empowers learners to reclaim authorship and engage more deeply in academic inquiry.