Human–AI Co-Learning in Academic Writing Among Chinese ESL Learners

Jason Chan, John Wong, Jack Ieong, Hao Pang

Frontiers of Digital Education ›› 2026, Vol. 3 ›› Issue (2) : 15

DOI: 10.1007/s44366-026-0089-8

REVIEW ARTICLE

Abstract

This paper examines how Chinese secondary and tertiary English as a second language (ESL) learners engage with generative AI (GenAI) tools, such as ChatGPT, Claude, Doubao, and Pigai, not merely as writing aids but as coauthors in the academic writing process. Against the backdrop of an assessment-centered education system that emphasizes memorization and structured learning, GenAI opens new possibilities for dialogic learning and critical thinking. Drawing on case studies and current research, this paper examines how prompt engineering, both as a technical and pedagogical skill, supports digital literacy, rhetorical awareness, and metacognition. It further investigates how GenAI scaffolds language production and supports student agency, while also presenting risks such as epistemic dependency, reduced critical thinking, and ethical ambiguity. The concept of Human–AI co-learning is advanced as a theoretical framework for understanding this interaction. The paper concludes by calling for critical AI literacy, educator mediation, and culturally responsive pedagogy that reconciles traditional Chinese learning practices with reflective engagement in digital environments. By reframing GenAI from a shortcut to a scaffold, this study proposes a pedagogical model that empowers learners to reclaim authorship and engage more deeply in academic inquiry.

Keywords

GenAI / prompt engineering / Human–AI co-learning / Chinese ESL learners / AI literacy / academic writing pedagogy

Cite this article

Jason Chan, John Wong, Jack Ieong, Hao Pang. Human–AI Co-Learning in Academic Writing Among Chinese ESL Learners. Frontiers of Digital Education, 2026, 3(2): 15 DOI:10.1007/s44366-026-0089-8


1 Introduction

As generative AI (GenAI) tools become increasingly embedded in education, they are reshaping not only the process of academic writing but also learners’ identities and agency. This paper investigates how Chinese secondary and tertiary English as a second language (ESL) learners engage with GenAI, not merely as passive users but as active coauthors and learners, through the lenses of prompt engineering, digital literacy, and critical pedagogy. The emergence of GenAI tools, such as ChatGPT, Claude, Doubao, and Pigai, has catalyzed new forms of engagement in academic writing. For Chinese secondary students in international schools, writing in English is a complex cognitive and linguistic endeavor. These learners, navigating dual cultural and educational expectations, are increasingly turning to GenAI for support. While much discourse has centered on university-level use and concerns about academic integrity, younger learners’ conceptualizations of co-learning with GenAI remain underexplored. This paper theorizes GenAI not only as a writing assistant but also as a co-learning partner that both shapes and is shaped by students’ thinking, agency, and identity as writers.

The proliferation of AI technologies has raised concerns about job displacement across various sectors, including education. Frey and Osborne (2013) projected that approximately 47% of jobs might be susceptible to automation within a decade. In language assessment, particularly in Asia, a region rich in linguistic and cultural diversity, the integration of GenAI presents both opportunities and challenges. Educators are now tasked with utilizing GenAI to improve assessment practices while ensuring that these technologies complement rather than replace human judgment. Emphasizing language assessment in this context is essential to equip students with the skills needed to thrive in an AI-augmented world.

The advent of GenAI has further transformed academic writing and digital education. ESL learners and educators are beginning to master prompt engineering to effectively interact with AI for content generation, summarization, and analysis. This paper examines the pedagogical promise and challenges of integrating GenAI into ESL contexts, emphasizing how prompt engineering navigates diverse linguistic and cultural barriers.

Historically, educational technologies have aimed to automate tasks for efficiency. By contrast, the rise of GenAI tools has revolutionized natural language processing, enabling instantaneous evaluations and feedback. These tools can offer personalized, real-time feedback on spoken language nuances and writing coherence, thereby supporting greater linguistic proficiency (Teng & Ieong, 2024). AI technologies also support collaborative assessment environments by encouraging peer interaction and intercultural competence through shared learning experiences. Furthermore, AI-powered systems offer adaptive testing tailored to individual learner profiles, thereby enabling timely adjustments in learning strategies and improving language acquisition. However, the successful implementation of AI-driven assessment depends on reliable infrastructure and equitable access, which remains uneven across many regions, especially in Asia. Educators are essential to this integration, necessitating ongoing professional development to use GenAI tools effectively and interpret their outputs. In addition, a culturally responsive AI design is necessary to avoid algorithmic biases and ensure fair assessments across diverse contexts. While AI can improve efficiency, it must be guided by ethical frameworks and human insight to support holistic language development. Balancing AI’s capabilities with human expertise allows language assessment to evolve in the digital age while preserving the rich nuances of human communication.

While GenAI tools offer new possibilities for academic writing support, they also raise ethical concerns—particularly for learners from diverse educational and cultural backgrounds. In the language learning context of Macao, China, a region rich in linguistic and cultural diversity, the integration of GenAI presents both opportunities and challenges for Chinese ESL learners. This paper critically examines how prompt engineering (hereafter referred to as “prompting”)—the formulation of user inputs to guide AI responses—can either support or hinder the development of critical academic competencies. This paper further explores how prompts, when paired with GenAI pedagogy, can bridge linguistic gaps, encourage critical engagement, and promote ethical academic practices in English academic writing instruction for Chinese ESL students.

2 Educational Context and Practical Challenges

To understand how GenAI affects Chinese ESL learners’ academic writing practices, it is essential to first examine the educational foundations that inform their learning habits. Traditionally, ESL instruction in many Chinese public schools has been exam-oriented, which emphasizes accuracy, memorization of model examples, and teacher-provided structures (Xin et al., 2025). Such a paradigm aligns closely with what Freire (1970) referred to as the “banking model” of education, in which learners function as passive knowledge recipients rather than active participants, with learning outcomes predominantly assessed by the ability to recall and reproduce information accurately (Dehler & Welsh, 2014).

Historically, many Chinese classrooms had large student-teacher ratios, which limited individualized instruction, and writing tasks tended to be formulaic, prioritizing grammar and vocabulary accuracy over argumentation (Tung & Chang, 2009; Zhang, 2017). Such conditions further reduce opportunities for personalized guidance, collaborative learning, and small-group critical discussion (Dehler & Welsh, 2014). Although learners often develop solid grammatical control and lexical accuracy, these strengths stem from a system that values conformity and reproduction over critical engagement and independent thinking. While such model-based teaching can support novice writers’ fundamental development, it may simultaneously constrain originality and critical thinking, both of which are essential for academic writing. Consequently, students may enter higher education with sound grammatical skills yet limited experience with open-ended, critical writing tasks, which are often prioritized in contemporary academic contexts (Chan et al., 2024; Hyland, 2008).

In the past decade, however, China’s English-language education has undergone substantial reform. In 2017, the Ministry of Education of the People’s Republic of China (2017) officially released The English curriculum standards for senior high schools (2017 edition), followed by The English curriculum standards for compulsory education (2022 edition) in 2022 (Ministry of Education of the People’s Republic of China, 2022). Both documents adopt a competency-based framework designed to cultivate students’ language proficiency, cultural awareness, thinking ability, and learning ability. These core competencies are operationalized through theme-based units, which are structured around six interrelated components: theme, text type, linguistic knowledge, cultural knowledge, language skills, and learning strategies. Against this backdrop, the widespread application of GenAI offers new prospects for Chinese ESL education in the cultivation of core competencies, facilitating the emergence of Human–AI co-learning.

3 From Writing Tool to Coauthor: Theorizing Human–AI Co-Learning

Against this backdrop, GenAI introduces a fundamentally different learning paradigm—one that positions AI not simply as a tool for passive use but as a coauthor in a collaborative learning process. This section develops the theoretical framework of Human–AI co-learning to examine how interactions with AI writing tools reflect evolving notions of agency, authorship, and literacy.

The evolution of AI in education parallels a broader shift from automation to augmentation. Early digital writing tools concentrated on grammar correction and formatting. In contrast, GenAI systems today offer contextual suggestions, creative prompts, and real-time scaffolding, which blur the boundary between tool and collaborator. Prompting—the crafting of inputs to elicit targeted outputs—is a metacognitive practice that reflects students’ evolving awareness of genre, audience, and rhetorical intent.

Co-learning refers to an interactive model in which humans and AI systems jointly participate in meaning making. This model resists reductive binary views of dependence versus independence and emphasizes students’ agency in selecting, revising, or rejecting AI-generated content. Teenagers’ iterative experimentation with prompting and their critical filtering of responses signify a new literacy practice—a dialogic relationship with machine intelligence that encourages reflective learning.

GenAI’s most transformative potential lies not in generating text but in prompting students to think differently. For Chinese ESL learners, GenAI outputs provide linguistic exemplars, organizational templates, and lexical alternatives that facilitate engagement with academic discourse. However, this cognitive support carries ethical tensions. Overreliance on AI risks displacing students’ intellectual labor and undermining their confidence in original thinking (Yan, 2023).

Moreover, GenAI introduces epistemological ambiguity. When students cannot discern whether an idea originates from themselves or from the machine, the boundaries of authorship blur. This necessitates pedagogical attention not only to plagiarism policies but also to epistemic agency, empowering students to take ownership of the writing process, even as they collaborate with AI.

4 Prompting in ESL Classrooms: Pedagogical Strategies

Building on the theoretical foundation of coauthorship, this section outlines practical strategies for implementing prompting in ESL classrooms. Prompting is not just a technical skill; it is a pedagogical tool that helps learners develop metacognition, rhetorical awareness, and critical thinking skills through structured interaction with GenAI. Prompting refers to the design of questions or instructions for an AI tool like ChatGPT to elicit a desired response (Brawn, 2024; Cain, 2024; Lo, 2023; Mzwri & Turcsányi-Szabo, 2025; Woo et al., 2023; Woo et al., 2024). To create effective prompts for the teaching and learning of ESL academic writing, students and educators can adopt the following strategies.

4.1 Crafting Clear and Specific Prompts

Effective prompts must be clear, specific, and contextually relevant (Brawn, 2024; Cain, 2024; Huang, 2023; Leung, 2024). For example, instead of asking ChatGPT, “Tell me about social media platforms,” a learner might ask, “What are the advantages and disadvantages of Instagram and TikTok use among Hong Kong secondary students?” In other words, users need to relate their prompts as closely as possible to lesson objectives or topic requirements to maximize the GenAI output. If a prompt is ambiguous or relatively broad, the output may be vague or unfit for the purpose.

4.2 Identifying Objectives and Assigning Roles to the GenAI Tool

To maximize the GenAI output, users can specify the objectives and assign a role to the GenAI tool when crafting a prompt. For example, a user may write, “You are an experienced psychologist. Convince an employer to help their staff achieve a better work-life balance,” or “Discuss the pros and cons of work-life balance from the perspective of an economic policymaker.” (Cain, 2024; Leung, 2024; Lin, 2024). With clear objectives in mind and specific roles to play, an AI tool can aid in the development of critical thinking skills by inviting students to analyze a problem from different perspectives.
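The role-assignment strategy above can also be applied programmatically, for instance when teachers prepare prompt templates for a class. The following Python sketch is a minimal, hypothetical illustration: it only assembles a prompt string from a role, an objective, and a topic, and does not call any AI service; all example values are drawn from the strategies discussed above.

```python
def build_role_prompt(role: str, objective: str, topic: str) -> str:
    """Combine an assigned role, an objective, and a topic into one prompt."""
    return (
        f"You are {role}. "
        f"Your objective is to {objective}. "
        f"Topic: {topic}"
    )

# Hypothetical example values, echoing the strategy described above.
prompt = build_role_prompt(
    role="an experienced psychologist",
    objective="convince an employer to help their staff achieve a better work-life balance",
    topic="work-life balance in the workplace",
)
print(prompt)
```

A template of this kind makes the three components of the strategy explicit, so that students can vary the role or objective systematically and compare the resulting outputs.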

4.3 Applying Constraints

To prevent the GenAI output from being too brief or detailed for the purpose, a user can apply constraints to the prompt, such as “Provide a 200-word summary of the following article,” or “List three major causes of heart diseases and suggest three solutions for people aged 40–50 in Hong Kong. The answer should be no more than 1,000 words.” (Cain, 2024; Leung, 2024; Lin, 2024).

Context is another important constraint. Users can apply in-context learning by guiding an AI tool to learn from new examples in a prompt rather than depending entirely on its previous training data. Users can design a one-shot prompt by providing a single example to draw a more bespoke response from ChatGPT—for instance, “Write an essay about gender equality in a similar fashion as the following example.” If users include two or more examples in a prompt, it is called a few-shot prompt (Brown et al., 2020; Huang, 2023; Woo et al., 2023; Yong et al., 2023), which supplies more contextual information to achieve higher outcome accuracy.

Users may also specify the target audience and the nature of examples. For example, a GenAI tool can be instructed to explain a complicated social system to students using daily examples, similar to adapting a classic novel into a simplified version. In this sense, prompting serves as an efficient means of grasping difficult concepts or abstract theories.
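The constraint and in-context learning strategies above follow a simple, repeatable structure: an instruction, zero or more examples, and optional constraints such as a word limit. The Python sketch below is one hedged way to represent that structure; the function name and example strings are hypothetical, and the code builds prompt text only, without contacting any AI model.

```python
def build_few_shot_prompt(instruction: str, examples: list[str],
                          constraints: str = "") -> str:
    """Assemble a zero-, one-, or few-shot prompt from an instruction,
    optional in-context examples, and optional constraints."""
    parts = [instruction]
    for i, example in enumerate(examples, start=1):
        parts.append(f"Example {i}:\n{example}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n\n".join(parts)

# A one-shot prompt: one example plus a length constraint (hypothetical values).
one_shot = build_few_shot_prompt(
    instruction="Write an essay about gender equality in a similar fashion to the example below.",
    examples=["(a model essay supplied by the teacher would go here)"],
    constraints="The answer should be no more than 1,000 words.",
)
print(one_shot)
```

Passing two or more strings in `examples` would turn the same call into a few-shot prompt, supplying more contextual information in exactly the sense described above.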

4.4 Revising Responses via Multiturn Dialogue

When users take on a large or complex assignment, they can break it down into smaller, more manageable tasks by adopting a chain-of-thought prompting approach. This strategy involves guiding a GenAI tool through a series of questions or instructions, with the output of each stage becoming the input for the next. Such multiturn dialogue can improve outcome accuracy, as complex tasks are prone to more errors than simpler ones (Brawn, 2024; Lee et al., 2024; Leung, 2024).

There are multiple ways to hold a dialogue with a GenAI model. Users may request responses that are more detailed or concise, more technical or jargon free, more formal or colloquial, and so on. They can also instruct the tool to explain an answer to primary school students, convert it into a blog post, or present it in table form, to name a few options (Leung, 2024; Lin, 2024).
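The multiturn, chain-of-thought strategy above can be pictured as a growing message history in which each response becomes context for the next instruction. The sketch below illustrates only the data structure: `fake_ai_reply` is a hypothetical stand-in that echoes the last instruction, where a real system would send the full history to a language model.

```python
def fake_ai_reply(history: list[dict]) -> str:
    """Hypothetical stand-in for a GenAI model: echoes the latest instruction.
    A real implementation would send `history` to a model API here."""
    return f"[response to: {history[-1]['content']}]"

def multiturn(instructions: list[str]) -> list[dict]:
    """Run a chain-of-thought style dialogue: each turn's output stays in
    the history and becomes input context for the next instruction."""
    history: list[dict] = []
    for instruction in instructions:
        history.append({"role": "user", "content": instruction})
        history.append({"role": "assistant", "content": fake_ai_reply(history)})
    return history

# A complex task broken into smaller steps, as described above.
dialogue = multiturn([
    "Summarize the article in 200 words.",
    "Now make the summary more formal.",
    "Convert it into a table.",
])
print(len(dialogue))  # three user turns plus three assistant turns
```

Because the entire history is carried forward, a request such as “Now make the summary more formal” is interpreted against the previous output, which is what allows errors to be caught and corrected stage by stage.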

5 Students’ Engagement with GenAI

To help ESL students develop their writing skills, Huang (2023) proposed integrating an AI-powered feedback system into the classroom. He chose ChatGPT because it can provide personalized feedback for learners by instantly assessing their vocabulary, grammar, and syntax, helping them identify errors, and suggesting alternative phrasings for reference (Brawn, 2024; Pack et al., 2025). It can alleviate teachers’ workloads and promote students’ engagement and autonomy in academic writing (Brawn, 2024), which is particularly important in large classes, where teachers’ individual attention to each student is unavoidably limited (Godwin-Jones, 2022).

Huang (2023) found that a clear and specific prompt, such as the one shown in Figure 1, can generate an effective response.

Employing the chain-of-thought prompting technique (Lee et al., 2024; Leung, 2024), Huang (2023) argued that few-shot prompting can improve the AI model’s accuracy in providing feedback for ESL students (Brown et al., 2020; Woo et al., 2023; Yong et al., 2023), as shown in Figure 2.

6 GenAI Use in a Secondary School and University Setting

The conceptual discussion gains further clarity when situated in real classroom contexts. This section presents case studies of Chinese ESL learners in both secondary and tertiary settings, illustrating how students engage with GenAI for academic writing, the challenges they face, and the diverse learning outcomes that emerge.

6.1 A Case Study of Prompting in a Secondary School in Hong Kong

For secondary ESL students in Hong Kong, multiturn dialogue with ChatGPT is inevitably a trial-and-error process (Brawn, 2024; Woo et al., 2023), given their relative lack of experience with such GenAI tools. After examining 4 students’ prompting pathways in a Hong Kong secondary school, Woo et al. (2023) noted that while the participants used one-shot prompts and natural language instructions, they did not craft few-shot prompts, assign specific roles to ChatGPT, or seek feedback and alternative phrasing from the GenAI tool during the writing process, which points to the importance of teaching prompting in ESL writing classrooms.

6.2 A Case Study of Prompting in a University in the Chinese Mainland

Just as ChatGPT is widely used in the West, Pigai is a well-received automatic writing evaluation (AWE) platform in the Chinese mainland, with more than one million registered users (Chen et al., 2022; Xiao et al., 2024). While Pigai is not very strong at offering feedback on idea development or textual organization, it provides instant corrective feedback on language mechanics, such as vocabulary and grammar, as well as learning tips and positive comments (Chen et al., 2022).

After studying 16 assignments produced by 4 students using Pigai at a university in the Chinese mainland, Chen et al. (2022) reported that two major types of feedback offered to the participants were error feedback and language tips. Error feedback includes “punctuation error,” “verb error,” “noun error,” “spelling error,” “preposition error,” and “sentence error,” whereas language tips refer to alternative vocabulary, including synonyms. The participants tried to correct all language errors based on the feedback, but some “errors” were, in fact, “false alarms,” as they were not incorrect; consequently, participants left them untreated. This demonstrates learners’ critical awareness of inaccurate feedback resulting from systemic errors. However, while one participant was critical of some automated feedback, another reported finding both corrective feedback and suggested synonyms useful, indicating different learner beliefs regarding the AWE software (Chen et al., 2022).

Notably, the AWE system gave rise to a scoring competition among students, motivating them to revise their writing to achieve higher scores. When one participant scored 91 points while a classmate scored 91.5, he/she was dissatisfied and revised his/her work several more times based on the automated feedback to surpass the classmate (Chen et al., 2022). In this sense, the AI-driven feedback system succeeded in enhancing student engagement (Liu & Yu, 2022).

7 Redefined Roles of Educators and Ethical Challenges

These student experiences expose a central tension: While GenAI can improve learning, it also poses significant pedagogical and ethical challenges. Educators play a decisive role in mediating this balance by guiding learners in interpreting GenAI outputs, curating feedback, and navigating authorship and academic integrity in an AI-integrated learning environment.

7.1 Teachers’ Roles and Pedagogical Challenges

As discussed above, the possibilities and benefits of teaching prompting and using AI-driven feedback in ESL classrooms are considerable. However, this does not imply that teachers’ roles have been replaced. Instead, teachers are playing new roles in this changing pedagogical landscape: They are becoming mediators and facilitators between GenAI models and students (Godwin-Jones, 2022).

With the prevalence of AI hallucinations (Lakhani, 2023; O’Brien, 2023; Song et al., 2025; Sun et al., 2024; Weise & Metz, 2023)—that is, fabricated content and references—students must critically filter AI-generated output before selecting information that is both accurate and relevant for their purposes. This is where teachers must play a guiding role. Moreover, linking pieces of GenAI output into a coherent written structure presents a significant challenge for students, and teachers must assist them in this process (Xu & Jumaat, 2024).

Automated feedback may at times be inappropriate or incorrect, further emphasizing the teacher’s guiding role. Even when feedback is correct and appropriate, some students may not understand it and therefore require teacher guidance (Godwin-Jones, 2022; Liu & Yu, 2022). One limitation of AI-generated feedback is that it often fails to explain how or why a particular score is assigned, leaving learners puzzled (Pan & Wang, 2023). Therefore, teacher explanation is necessary. In addition, learners may find it challenging to mediate between human and AI feedback (Ranalli & Yamashita, 2022; Yan, 2023) and to negotiate the boundary between AI-generated content and their own ideas.

Finally, determining how much AI-generated content and feedback should be incorporated into students’ own writing can be an ethical issue that can only be resolved through ongoing discussions among stakeholders, including educators, students, school leaders, and policymakers.

7.2 Opportunities and Ethical Limitations of GenAI as a Pedagogical Scaffold for Chinese ESL Learners

With the rise of GenAI tools, longstanding learning behaviors and habits are being reconfigured, but not necessarily transformed. Chinese ESL students, who have long been trained to rely on memorization and direct translation using traditional tools such as Google Translate to produce assignments, now increasingly use AI systems to produce fluent, grammatically accurate prose—often bypassing the construction of their own arguments or reflection on their own ideas (Chan & Hu, 2023; Kohnke, 2024).

On the one hand, GenAI has considerable potential for supporting Chinese ESL learners who struggle with language production. It functions as a scaffold to help learners navigate English-dominant academic environments by offering instant grammar and vocabulary feedback, generating models of academic writing at various proficiency levels, and assisting with the organization of ideas (Fang et al., 2023; Marengo et al., 2024; Moorhouse et al., 2024). For learners lacking confidence or linguistic precision in such environments, GenAI can make English for academic purposes and ESL learning less intimidating and more accessible (Crompton & Burke, 2023; Su et al., 2023; Wei, 2023). When paired with educator guidance, critical scaffolding, and reflective activities inside and outside the classroom, it may also serve to connect memorized knowledge with applied skills (Gill et al., 2024).

On the other hand, these benefits might be limited if ESL learners rely on GenAI as a shortcut to linguistic fluency without engaging in the cognitive and reflective processes fundamental to deeper learning. The pedagogical potential of GenAI is realized only when learners interact with it as part of a broader learning and inquiry process. When used uncritically or in isolation, GenAI may reproduce—and even amplify—existing issues (Ding et al., 2024). It can reinforce mechanical writing habits, obscure the authentic role of the writer’s voice, and diminish learners’ motivation to engage in higher-order thinking (Barrot, 2023; Chan, 2023; Yan, 2023). These concerns are not merely theoretical. It has been observed that, in some language modules, Chinese ESL learners rely heavily on translation software or GenAI tools to convert entire essays or assignments from Chinese into English—bypassing the writing process, academic rigor, and the higher-order thinking it requires. Similar practices have been observed by teachers in some secondary schools in the Chinese mainland. Such patterns point to deeper issues, including a lack of motivation, confidence, and training in using English writing as a medium for analytical reasoning and creative expression (Moorhouse et al., 2024).

When Chinese ESL learners treat GenAI primarily as a shortcut for producing inauthentic and error-free academic content, their engagement remains confined to the lower levels of Bloom’s cognitive taxonomy—specifically, remembering and understanding (Bloom, 1956). Surface-level language engagement undermines progression toward more advanced cognitive processes, such as analysis, synthesis, and evaluation, which are widely endorsed objectives in higher education and academic writing scholarship (Warschauer et al., 2023).

Another ethical consideration is that Chinese ESL learners often perceive AI-generated feedback intuitively and unconsciously. The opacity of GenAI systems exacerbates this issue, as learners may not fully understand that the content is generated or recognize the cultural and linguistic assumptions embedded in outputs. As a result, learners may gradually adopt passive learning habits and accept AI-generated content uncritically, becoming what some scholars have described as “lazy learners” (Ahmad et al., 2023; Vinter et al., 2010).

It is also worth noting that many dominant GenAI systems are trained on large-scale datasets that reflect mostly Western academic and cultural norms (Giannakos et al., 2025; Golda et al., 2024). Consequently, AI-generated content may not correspond to the rhetorical traditions or lived experiences of Chinese students, particularly when they are asked to engage in prose writing or community-based, reflective, or experiential academic projects. In some cases, AI-generated responses may also be misaligned with the learner’s actual language proficiency (Barrot, 2023); for instance, producing writing output at a Common European Framework of Reference (CEFR) C2 level for an ESL learner assessed at CEFR B1. Researchers further suggest that GenAI tools often generate repetitive writing patterns and language structures distinct from authentic human writing (Barrot, 2023; Liu et al., 2024; Sahari et al., 2023). As a result, the output often feels artificial and generic and lacks the authenticity and personal voice of the learner. This disjunction not only undermines the development of critical thinking but also raises further ethical concerns regarding the validity and meaningfulness of the learning process itself. For instance, some studies have indicated that translated texts produced by GenAI tools tend to exhibit similar writing patterns and language structures that are distinct from those of human-translated passages. In instances where Chinese ESL students encounter AI-generated writing that “sounds academic” but feels disconnected from their context, they may either disengage or accept the output uncritically (Adiguzel et al., 2023). This mirrors the same passive model of knowledge acceptance that Freire (1970) critiques in the concept of “banking model” of education—a model that has historically been reflected in Chinese students’ early formal schooling. 
In such scenarios, academic ownership is diminished and passive learning habits are reinforced—only now facilitated by AI technology rather than by textbooks, rote practices, and the teacher as an authority figure.

8 Toward Critical AI Literacy and Empowerment

Human–AI co-learning requires critical literacy: the ability to question, contextualize, and reframe AI-generated content (Sasson Lazovsky et al., 2025; Siegle, 2025). For Chinese adolescents in academically demanding environments, AI can either amplify educational inequities or serve as a democratizing force. Those with higher digital literacy may utilize GenAI to scaffold learning, while others may succumb to passive consumption. Educational stakeholders must therefore promote not only technical proficiency but also ethical discernment and reflective habits of mind.

Teachers play an important role in modeling responsible AI use, guiding prompt engineering strategies, and framing GenAI as a partner in the learning process. Assessment practices must evolve to recognize process over product, encourage transparency about AI use, and value students’ reflection on their learning journey (Yurchenko & Nalyvaiko, 2025).

To move past surface-level engagement, students must be equipped with the ability to critically assess, adapt, and respond to AI-generated content. This section proposes a pedagogical model informed by critical AI literacy, in which prompt design, reflective writing, and contextual analysis empower students to reclaim authorship and support deeper learning.

Despite the drawbacks mentioned in the preceding section, it would be shortsighted to regard GenAI solely as a threat to learning for Chinese ESL learners. In fact, given the rapid advancement of GenAI and its increasing utilization in the educational field, its potential applications suggest that it will likely become an integral part of higher education in the foreseeable future (Crompton & Burke, 2023). Most higher education institutions in Macao—namely, the University of Macao, the Macao University of Tourism, and the University of Saint Joseph—have either implemented or are actively working toward integrating GenAI into their teaching practices, including general language teaching and learning, as well as related areas such as policy development, ethics, and digital literacy.

Emerging from this research is the recognition that there is both room and value in balancing traditional learning mindsets and practices rooted in Chinese students’ learning norms with the activation and application of knowledge through GenAI. While limited in scope, this paper suggests that rote learning can serve as a foundational skill for Chinese ESL learners—particularly in areas such as vocabulary acquisition and grammatical accuracy. What is needed, therefore, is not the abandonment of traditional methods based on learners’ educational mindsets but rather their thoughtful integration into a wider, more reflective pedagogical approach to engaging with GenAI. Through carefully designed prompts and scaffolded tasks, educators can better guide students to use GenAI as a tool for exploration and skill development rather than for mere replication (Gibbons, 2002; Kim et al., 2018). For instance, prompts that require personal reflection, cultural relevance, or critical comparison can help Chinese ESL learners move past surface-level reproduction. An assignment that asks students to investigate how a local tradition in China relates to values at a societal level or how their personal experience compares with an AI-generated response invites evaluation, critique, and personalization in their writing (Perkins, 2023; Zdanovic et al., 2022). Similarly, collaborative editing tasks, such as reviewing an AI-drafted email requesting academic support, can assist learners in developing functional language use and an appropriate tone for real-world communication. They can also reflect on the ethics of using GenAI for coursework by generating contrasting viewpoints and discussing their implications in relation to institutional academic integrity policies.

Such scaffolded, visual, and locally grounded tasks support the development of critical thinking, communicative competence, and reflective awareness, skills essential for success in higher education. Within a culturally responsive pedagogical framework, educators can guide students to visualize their learning and to engage with GenAI not merely as a tool for replication but as a medium for exploration, critical reflection, and language development (Ma et al., 2024; Yan et al., 2024).

9 Conclusions

As this paper demonstrates, GenAI is not only reshaping how writing is done; it is redefining what it means to learn, author, and think in a digital world. For Chinese ESL learners and educators, embracing Human–AI co-learning means engaging with ethical ambiguities while seizing opportunities for empowerment, transformation, and authentic expression.

As AI systems become ubiquitous in academic writing, the question is no longer whether students will use them but how they will engage with them meaningfully. This conceptual exploration proposes a shift from viewing GenAI as a mere tool to engaging with it as a coauthor in a learning dialogue. For Chinese secondary and tertiary ESL learners straddling linguistic, cultural, and technological boundaries, such engagement holds transformative potential, provided it is guided by critical pedagogy, ethical reflection, and intentional practice. Human–AI co-learning is not the future of education; it is the present imperative.

As China’s universities begin to integrate GenAI into their curricula, there is a growing need to ensure that these tools are used both ethically and pedagogically. This requires not only clear institutional guidelines but also classroom practices that promote transparency, critical reflection, and student agency. Educators must move beyond viewing GenAI merely as a shortcut through the learning process; instead, they should embrace its potential as a catalyst for deeper cognitive engagement and richer linguistic development.

In the context of Chinese ESL students, GenAI should be integrated in ways that respect and reflect learners’ educational backgrounds, cultural values, and identity formation. Learners must be taught not only how to use GenAI effectively but also how to question it, challenge its outputs, and reflect on its role in shaping their thinking and language use. This calls for scaffolded, reflective, and culturally responsive pedagogical approaches that balance traditional learning practices with the affordances of emerging technologies.

Ultimately, advancing critical AI literacy in language education is not only a technical task; it is also an ethical and educational imperative. As institutions continue to explore the role of GenAI in higher education, further research and collaboration will be essential for creating inclusive, reflective, and future-ready learning environments. Through the thoughtful integration of AI capabilities with human insight, education can evolve to meet the demands of the digital age while still valuing the cultural foundations, mindsets, and communicative complexities that shape and connect us.

References

[1] Adiguzel, T., Kaya, M. H., Cansu, F. K. (2023). Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology, 15(3): 429

[2] Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M. K., Irshad, M., Arraño-Muñoz, M., Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10: 311

[3] Barrot, J. S. (2023). Using ChatGPT for second language writing: Pitfalls and potentials. Assessing Writing, 57: 100745

[4] Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals. London: Longman Group.

[5] Brawn, J. R. (2024). In search of the prompt that produces useful written corrective feedback for L2 composition classes. International Journal of Education, 12(4): 17–24

[6] Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver: Curran Associates Inc., 159.

[7] Cain, W. (2024). Prompting change: Exploring prompt engineering in large language model AI and its potential to transform education. TechTrends, 68(1): 47–57

[8] Chan, C. K. Y. (2023). Is AI changing the rules of academic misconduct? An in-depth look at students’ perceptions of ‘AI-giarism’. arXiv Preprint, arXiv:2306.03358.

[9] Chan, C. K. Y., Hu, W. J. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1): 43

[10] Chan, J., Siu, C., Ieong, J. (2024). Are Macau students ready for higher education in English? The perspectives of three USJ lecturers. Macao Education, (1)

[11] Chen, Z. Z., Chen, W. C., Jia, J. Y., Le, H. X. (2022). Exploring AWE-supported writing process: An activity theory perspective. Language Learning & Technology, 26(2): 129–148

[12] Crompton, H., Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1): 22

[13] Dehler, G. E., Welsh, M. A. (2014). Against spoon-feeding. For learning. Reflections on students’ claims to knowledge. Journal of Management Education, 38(6): 875–893

[14] Ding, A. C. E., Shi, L. H., Yang, H. T., Choi, I. (2024). Enhancing teacher AI literacy and integration through different types of cases in teacher professional development. Computers and Education Open, 6: 100178

[15] Fang, T., Yang, S., Lan, K. X., Wong, D. F., Hu, J. P., Chao, L. S., Zhang, Y. (2023). Is ChatGPT a highly fluent grammatical error correction system? A comprehensive evaluation. arXiv Preprint, arXiv:2304.01746.

[16] Freire, P. (1970). Pedagogy of the oppressed. New York: Seabury Press.

[17] Frey, C. B., Osborne, M. A. (2013, September 17). The future of employment: How susceptible are jobs to computerisation? Available from University of Oxford website.

[18] Giannakos, M., Azevedo, R., Brusilovsky, P., Cukurova, M., Dimitriadis, Y., Hernandez-Leo, D., Järvelä, S., Mavrikis, M., Rienties, B. (2025). The promise and challenges of generative AI in education. Behaviour & Information Technology, 44(11): 2518–2544

[19] Gibbons, P. (2002). Scaffolding language, scaffolding learning. Portsmouth: Heinemann.

[20] Gill, S. S., Xu, M., Patros, P., Wu, H., Kaur, R., Kaur, K., Fuller, S., Singh, M., Arora, P., Parlikad, A. K., et al. (2024). Transformative effects of ChatGPT on modern education: Emerging era of AI chatbots. Internet of Things and Cyber-Physical Systems, 4: 19–23

[21] Godwin-Jones, R. (2022). Partnering with AI: Intelligent writing assistance and instructed language learning. Language Learning & Technology, 26(2): 5–24

[22] Golda, A., Mekonen, K., Pandey, A., Singh, A., Hassija, V., Chamola, V., Sikdar, B. (2024). Privacy and security concerns in generative AI: A comprehensive survey. IEEE Access, 12: 48126–48144

[23] Huang, J. (2023). Engineering ChatGPT prompts for EFL writing classes. International Journal of TESOL Studies, 5(4): 73–79

[24] Hyland, K. (2008). Genre and academic writing in the disciplines. Language Teaching, 41(4): 543–562

[25] Kim, N. J., Belland, B. R., Walker, A. E. (2018). Effectiveness of computer-based scaffolding in the context of problem-based learning for STEM education: Bayesian meta-analysis. Educational Psychology Review, 30(2): 397–429

[26] Kohnke, L. (2024). Exploring EAP students’ perceptions of GenAI and traditional grammar-checking tools for language learning. Computers and Education: Artificial Intelligence, 7: 100279

[27] Lakhani, K. (2023, July 17). How can we counteract generative AI’s hallucinations? Available from Harvard Business website.

[28] Lee, A. V. Y., Teo, C. L., Tan, S. C. (2024). Prompt engineering for knowledge creation: Using chain-of-thought to support students’ improvable ideas. AI, 5(3): 1446–1461

[29] Leung, C. H. (2024). Promoting optimal learning with ChatGPT: A comprehensive exploration of prompt engineering in education. Asian Journal of Contemporary Education, 8(2): 104–114

[30] Lin, Z. Q. (2024). Prompt engineering for applied linguistics: Elements, examples, techniques, and strategies. English Language Teaching, 17(9): 14–25

[31] Liu, M. L., Zhang, L. J., Biebricher, C. (2024). Investigating students’ cognitive processes in generative AI-assisted digital multimodal composing and traditional writing. Computers & Education, 211: 104977

[32] Liu, S., Yu, G. X. (2022). L2 learners’ engagement with automated feedback: An eye-tracking study. Language Learning & Technology, 26(2): 78–105

[33] Lo, L. S. (2023). The art and science of prompt engineering: A new literacy in the information age. Internet Reference Services Quarterly, 27(4): 203–210

[34] Ma, Q., Crosthwaite, P., Sun, D. E., Zou, D. (2024). Exploring ChatGPT literacy in language education: A global perspective and comprehensive approach. Computers and Education: Artificial Intelligence, 7: 100278

[35] Marengo, A., Pagano, A., Pange, J., Soomro, K. A. (2024). The educational value of artificial intelligence in higher education: A 10-year systematic literature review. Interactive Technology and Smart Education, 21(4): 625–644

[36] Ministry of Education of the People’s Republic of China. (2017, December 29). The English curriculum standards for senior high schools (2017 edition). Available from Ministry of Education of the People’s Republic of China website. (in Chinese).

[37] Ministry of Education of the People’s Republic of China. (2022, March 25). The English curriculum standards for compulsory education (2022 edition). Available from Ministry of Education of the People’s Republic of China website. (in Chinese).

[38] Moorhouse, B. L., Wan, Y. W., Wu, C. Z., Kohnke, L., Ho, T. Y., Kwong, T. (2024). Developing language teachers’ professional generative AI competence: An intervention study in an initial language teacher education course. System, 125: 103399

[39] Mzwri, K., Turcsányi-Szabo, M. (2025). The impact of prompt engineering and a generative AI-driven tool on autonomous learning: A case study. Education Sciences, 15(2): 199

[40] O’Brien, M. (2023, August 2). Chatbots sometimes make things up. Is AI’s hallucination problem fixable? Available from Associated Press website.

[41] Pack, A., Hartshorn, K. J., Escalante, J., Gillette, N. (2025). How well can GenAI (GPT-4) provide written corrective feedback on English-language learners’ writing. International Journal of English for Academic Purposes: Research and Practice, 5(1): 7–26

[42] Pan, M. W., Wang, Y. (2023). The listening and speaking test of NMET Shanghai. Studies in Language Assessment, 12(1): 93–102

[43] Perkins, M. (2023). Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching and Learning Practice, 20(2): 7

[44] Ranalli, J., Yamashita, T. (2022). Automated written corrective feedback: Error-correction performance and timing of delivery. Language Learning & Technology, 26(1): 1–25

[45] Sahari, Y., Al-Kadi, A. M. T., Ali, J. K. M. (2023). A cross sectional study of ChatGPT in translation: Magnitude of use, attitudes, and uncertainties. Journal of Psycholinguistic Research, 52(6): 2937–2954

[46] Sasson Lazovsky, G., Raz, T., Kenett, Y. N. (2025). The art of creative inquiry—From question asking to prompt engineering. The Journal of Creative Behavior, 59(1): 1–16

[47] Siegle, D. (2025). Using AI prompt engineering to improve gifted students’ questioning. Gifted Child Today, 48(1): 68–72

[48] Song, Y., Cui, M. J., Wan, F., Yu, Z. W., Jiang, J. G. (2025). AI hallucination in crisis self-rescue scenarios: The impact on AI service evaluation and the mitigating effect of human expert advice. International Journal of Human–Computer Interaction, 41(22): 14419–14439

[49] Su, Y. F., Lin, Y., Lai, C. (2023). Collaborating with ChatGPT in argumentative writing classrooms. Assessing Writing, 57: 100752

[50] Sun, Y. J., Sheng, D. F., Zhou, Z. H., Wu, Y. F. (2024). AI hallucination: Towards a comprehensive classification of distorted information in artificial intelligence-generated content. Humanities and Social Sciences Communications, 11(1): 1278

[51] Teng, S. H., Ieong, F. F. (2024). How does artificial intelligence promote teaching innovation in basic education? Teaching experience from Macau, China. Journal of Infrastructure Policy and Development, 8(14): 10133

[52] Tung, C. A., Chang, S. Y. (2009). Developing critical thinking through literature reading. Feng Chia Journal of Humanities and Social Sciences, (19): 287–317

[53] Vinter, A., Pacton, S., Witt, A., Perruchet, P. (2010). Implicit learning, development, and education. In: Didier, J. P., & Bigand, E., eds. Rethinking physical and rehabilitation medicine. Paris: Springer, 111–127.

[54] Warschauer, M., Tseng, W., Yim, S., Webster, T., Jacob, S., Du, Q., Tate, T. (2023). The affordances and contradictions of AI-generated text for writers of English as a second or foreign language. Journal of Second Language Writing, 62: 101071

[55] Wei, L. (2023). Artificial intelligence in language instruction: Impact on English learning achievement, L2 motivation, and self-regulated learning. Frontiers in Psychology, 14: 1261955

[56] Weise, K., Metz, C. (2023, May 1). When AI chatbots hallucinate. Available from The New York Times website.

[57] Woo, D. J., Guo, K., Susanto, H. (2023). Cases of EFL secondary students’ prompt engineering pathways to complete a writing task with ChatGPT. arXiv Preprint, arXiv:2307.05493.

[58] Woo, D. J., Wang, D. L., Guo, K., Susanto, H. (2024). Teaching EFL students to write with ChatGPT: Students’ motivation to learn, cognitive load, and satisfaction with the learning process. Education and Information Technologies, 29(18): 24963–24990

[59] Xiao, D. J., Mohamad, M., Li, W. Y. (2024). Research progress and trends of Pigai.org automated writing evaluation system in English writing: A systematic bibliometric analysis (2011−2023). World Journal of English Language, 14(3): 440

[60] Xin, Q., Alibakhshi, G., Javaheri, R. (2025). A phenomenographic study on Chinese EFL teachers’ cognitions of positive and negative educational, social, and psychological consequences of high-stake tests. Scientific Reports, 15.

[61] Xu, T., Jumaat, N. F. (2024). ChatGPT-empowered writing strategies in EFL students’ academic writing: Calibre, challenges and chances. International Journal of Interactive Mobile Technologies, 18(15): 95–114

[62] Yan, D. (2023). Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investigation. Education and Information Technologies, 28(11): 13943–13967

[63] Yan, L., Zhao, L., Echeverria, V., Jin, Y., Alfredo, R., Li, X., Gašević, D., Martinez-Maldonado, R. (2024). VizChat: Enhancing learning analytics dashboards with contextualised explanations using multimodal generative AI chatbots. In: Olney, A. M., Chounta, I. A., Liu, Z., Santos, O. C., Bittencourt, I. I., eds. Artificial intelligence in education. Recife: Springer, 180–193.

[64] Yong, G., Jeon, K., Gil, D., Lee, G. (2023). Prompt engineering for zero-shot and few-shot defect detection and classification using a visual-language pretrained model. Computer-Aided Civil and Infrastructure Engineering, 38(11): 1536–1554

[65] Yurchenko, V., Nalyvaiko, O. (2025). How ChatGPT shapes a new reality of writing: Is there a place for humans in an artificial world. Educational Challenges, 30(1): 138–155

[66] Zdanovic, D., Lembcke, T. J., Bogers, T. (2022). The influence of data storytelling on the ability to recall information. In: Proceedings of ACM SIGIR Conference on Human Information Interaction and Retrieval. Regensburg: ACM, 67–77.

[67] Zhang, T. (2017). Why do Chinese postgraduates struggle with critical thinking? Some clues from the higher education curriculum in China. Journal of Further and Higher Education, 41(6): 857–871

RIGHTS & PERMISSIONS

The Authors. This article is published with open access at link.springer.com and journal.hep.com.cn
