Evolution of Learning: Assessing the Transformative Impact of Generative AI on Higher Education

Stefanie Krause, Bhumi Hitesh Panchal, Nikhil Ubhe

Frontiers of Digital Education ›› 2025, Vol. 2 ›› Issue (2): 21. DOI: 10.1007/s44366-025-0058-7
RESEARCH ARTICLE


Abstract

Generative artificial intelligence (GenAI) models, such as ChatGPT, have rapidly gained popularity. Despite this widespread usage, there is still a limited understanding of how this emerging technology impacts different stakeholders in higher education. While extensive research exists on the general opportunities and risks in education, there is often a lack of specificity regarding the target audience—namely, students, educators, and institutions—and concrete solution strategies and recommendations are typically absent. Our goal is to address the perspectives of students and educators separately and offer tailored solutions for each of these two stakeholder groups. This study employs a mixed-method approach that integrates a detailed online questionnaire completed by 188 students with a scenario analysis to examine potential benefits and drawbacks introduced by GenAI. The findings indicate that students utilize the technology for tasks such as assignment writing and exam preparation, seeing it as an effective tool for achieving academic goals. Subsequently, the scenario analysis provided insights into possible future scenarios, highlighting both opportunities and challenges of integrating GenAI within higher education for students as well as educators. The primary aim is to offer a clear and precise understanding of the potential implications for students and educators separately while providing recommendations and solution strategies. The results suggest that irresponsible and excessive use of the technology could pose significant challenges. Therefore, educators need to establish clear policies, reevaluate learning objectives, enhance AI skills, update curricula, and reconsider examination methods.

Keywords

generative AI / ChatGPT / higher education / scenario analysis


1 Introduction

Emerging technologies have persistently transformed traditional methods of teaching and learning, leading to significant changes in the educational system (García-Peñalvo, 2023). Large language models (LLMs) have been utilized to develop sophisticated chatbots, enabling them not only to comprehend and interpret human language input but also to generate content (Wei et al., 2023). Recent advancements in transformer models, the foundation for LLMs, have significantly enhanced the capabilities of these models (ChatGPT Generative Pre-trained Transformer & Zhavoronkov, 2022). The generative pre-trained transformer (GPT) is one such model, capable of generating human-like responses (OpenAI, 2023). Launched by OpenAI in November 2022, ChatGPT is capable of managing a wide range of text-based queries. It achieves nearly human-level performance in tasks such as question answering (Krause & Stolzenburg, 2023) and has demonstrated the ability to pass some of the most challenging exams at Wharton Business School with grades ranging from B to B- (Terwiesch, 2023). These examples highlight the impressive capabilities of these new LLMs. While there are other LLMs available, the core reasons for ChatGPT’s success are its public accessibility, ease of use, and exceptional versatility across numerous applications and tasks. Although the foundational language model continues to evolve, our focus is on the free version of ChatGPT.

ChatGPT is both an LLM and an example of generative AI (GenAI). GenAI refers broadly to any AI system that can produce new content. This can include generating text, images, music, or other types of content. LLMs are a specific type of GenAI that focuses on natural language processing.

Following the media attention that ChatGPT received and the high number of users, higher education institutions (HEIs) displayed varied responses to the new AI systems. Among the top 500 universities listed in the 2022 Quacquarelli Symonds World University Rankings, 43 of the foremost universities have implemented policies banning ChatGPT, restricting the use of ChatGPT or any other AI tools during exams unless explicitly permitted (Xiao et al., 2023). Critics argue that such measures could further widen existing equity gaps (Bozkurt & Sharma, 2023; Warschauer et al., 2023). Instead of prohibiting the technology, education should be adapted to integrate it effectively (Meyer et al., 2023; Tlili et al., 2023). While AI technologies have been widely embraced in industry, the higher education sector worldwide has not kept pace with this trend (O’Dea & O’Dea, 2023). It is essential for both educators and students to learn how to use GenAI tools responsibly and effectively (Krause et al., 2025; Sharma & Yadav, 2022).

The integration of GenAI in education raises several ethical concerns for both students and educators. These include the risk of students using it in unethical or dishonest ways (Qadir, 2023), as well as potential misuse for inappropriate monitoring, control, and assessment of educators (Jahn et al., 2019). Though ChatGPT supports conclusions with factual arguments, it can make mistakes by overemphasizing certain events and neglecting meta-text, leading to biased interpretations or hallucinations (Banerjee et al., 2024; Kocoń et al., 2023). While GPT systems excel in languages with abundant resources, like English, providing highly accurate outputs, they still face challenges with less represented languages (Hendy et al., 2023). GenAI tools present both opportunities and risks for students and educators alike. The full extent of how this emerging technology will change education is still unknown.

There are a few bachelor’s and master’s programs in the field of information technology in which AI is part of the curriculum, e.g., the bachelor’s program AI Engineering (Krause et al., 2023). However, students of non-IT disciplines need to benefit from GenAI as well. Therefore, an interdisciplinary approach aims to equip students across various fields with AI literacy and skills relevant to their domains (Thurner & Socher, 2023).

The rest of the paper is organized as follows: Section 2 reviews the related work; Section 3 outlines our research questions; Section 4 details the methodology, including our survey and scenario analysis; Section 5 discusses the main findings and future research directions; and Section 6 summarizes the key conclusions.

2 Related Work

Numerous LLMs, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), LaMDA (Thoppilan et al., 2022), BART (Lewis et al., 2019), and DeepSeek (Liu et al., 2024) have been developed in recent years. However, it was the widespread popularity of ChatGPT that marked a significant influence and transformation in our daily lives. With the substantial impact of this disruptive technology on the educational sector, it is essential to understand the opportunities, risks, and necessary changes that education must confront.

Most research on GenAI in education collectively addresses opportunities and challenges for students, educators, and other stakeholders on various levels. However, O’Dea & O’Dea (2023) examined these aspects at national, institutional, and personal levels separately, while Malik et al. (2023) explored the perspectives of students and staff individually through interviews (15 staff members and 14 students). Students expressed strong support for using GenAI as a learning tool in the classroom, whereas staff members reported feelings of anxiety and vulnerability, particularly regarding assessment with GenAI without sufficient training and support. Additionally, students felt they were in an information vacuum, given the limited GenAI policies and unclear guidance on effectively utilizing GenAI tools. This highlights the need for strategies for educators and the necessity for guidance for students.

Atlas (2023) mentions ways in which ChatGPT can be used in higher education, e.g., for automated essay scoring, research assistance, language translation, helping professors create their syllabi, quizzes, and exams, generating reports, and providing email and chatbot assistance. Rahman and Watanobe (2023) stated that ChatGPT is a versatile tool that serves as an excellent assistant for learners, supporting them in understanding complex programming problems with an average accuracy of 85.42%. In a comparative study between ChatGPT and Google Search on search performance and user experience, Xu et al. (2023) stated that ChatGPT improves work efficiency, excels in answering straightforward questions, and offers a positive user experience, but it may hinder further exploration and might not always outperform traditional search engines.

In addition to the numerous educational opportunities associated with ChatGPT, research has also been conducted on its drawbacks and challenges. Hosseini et al. (2024) queried ChatGPT and suggested that any section written by a natural language processing system like ChatGPT should be checked by a domain expert for accuracy, bias, relevance, and reasoning, and that coauthors must be held accountable for its accuracy, cogency, and integrity. Mhlanga (2023a) analysed the responsible and ethical usage of ChatGPT in his literature review: to start with, educators must inform students about data collection, security measures, and the limitations of AI to foster critical thinking. Liu and Stapleton (2018) observed in their exploratory study comparing two groups of students that conventional prompts in high-stakes English tests led to better performance, whereas experimental prompts in behavioral economics fostered diverse language use and enhanced critical thinking, highlighting the potential trade-off between standardized testing and cognitive skill development. Perera and Lankathilaka (2023) suggested using ChatGPT as a supplementary tool rather than a replacement for human researchers, ensuring it is used under the supervision of experts and reinstating proctored, in-person assessments to reduce reliance on ChatGPT. Cotton et al. (2023) mentioned ways of detecting work written by ChatGPT: first, looking for language irregularities or inconsistencies that can indicate chatbot authorship; second, checking for proper citations and references; and third, watching for a lack of originality, factually inaccurate content, and unusually error-free grammar.

Understanding student perceptions can help educators and policymakers develop effective strategies for integrating GenAI tools into education while addressing concerns and promoting responsible use. However, there has been limited research into students’ viewpoints regarding GenAI tools (Chan & Hu, 2023). A recent study conducted by Ngo (2023) involved a sample of 200 Vietnamese university students who had previous experience utilizing ChatGPT for academic purposes. The findings indicated that, from the students’ perspective, the mean value of barriers to using ChatGPT (3.64) was slightly higher than the mean value of the perceived benefits (3.58). Notably, the simplicity of using ChatGPT was highlighted as its most significant feature. Another study focused on students in Hong Kong, China, examining aspects such as familiarity, willingness to engage, and both potential benefits and challenges of using ChatGPT, alongside strategies for effective integration into academic activities (Chan & Hu, 2023). The study revealed that students acknowledged the potential for personalized learning support, assistance in writing and brainstorming, and capabilities in research and analysis. However, concerns were also raised regarding issues of accuracy, privacy, ethical considerations, and the possible impact on personal development, career prospects, and societal values. Interestingly, in that study, 33.3% of participants had never used GenAI technologies like ChatGPT, while 21.8% reported rare usage, 29.1% used it sometimes, 9.8% often, and 6.0% always. Unfortunately, the study did not clearly define what was meant by terms such as “sometimes,” “often,” or “always.” In the field of business education, a study on the use of chatbots reported positive user feedback, with students citing enhanced learning experiences due to the chatbots’ responsiveness, interactivity, and the confidential support they provided (Chen et al., 2023). In March 2024, a survey on the spread and acceptance of GenAI at schools and universities in Germany was published (Schlude et al., 2024). However, some relevant aspects are not covered: for example, the survey asked about the tasks for which students use GenAI but not how often they use these tools, and it draws very limited conclusions and recommendations from its results. Unlike previous theoretical approaches, our method starts by identifying how university students of different nationalities use GenAI tools such as ChatGPT and how they feel about these tools in their educational experience. Building on this, we develop possible future scenarios to analyse the positive and negative implications of GenAI and provide recommendations for both students and educators.

3 Research Questions

This study explores the effects of GenAI on university students and educators. The first research question (RQ1) investigates potential changes in student behavior and performance stemming from GenAI usage. The objective is to understand the benefits and drawbacks of using the prominent GenAI tool, ChatGPT, from the students’ perspective before considering its effects on educators and students in HEIs (RQ2). We seek to explore how GenAI could potentially revolutionize conventional teaching approaches by examining its current usage in the educational context. This study aims to contribute to the ongoing dialogue on the integration of GenAI in higher education and to inform educators and policymakers about the implications for the learning environment in HEIs. With this consideration, we formulated the following two research questions:

RQ1. What are the potential benefits and drawbacks for students using GenAI for educational purposes?

RQ2. What potential consequences or effects does GenAI have on educators?

4 Methodology

To thoroughly understand the topic, we employed a mixed-method approach, combining a survey for the quantitative analysis and a scenario analysis for the qualitative aspect. Our full methodology is illustrated in Fig.1. During our research project, we incorporated three feedback loops into the validation process to ensure the quality of our concepts, methods, and findings. Our feedback group included approximately 20 students and two university professors who attended our pitch, intermediate, and final presentations. Initially, we introduced the idea, motivation, and proposed method through a pitch to establish a well-defined research objective. The next step involved conducting a literature review and thoughtfully crafting survey questions designed to draw out the most valuable insights from students. It was crucial to ask questions that addressed both positive and negative aspects impartially to gain a comprehensive understanding of the impact of GenAI on higher education. The survey data was collected through structured questionnaires via Google Forms, ensuring participants’ anonymity and confidentiality. The findings from the survey analysis were subsequently shared during the intermediate presentation, where feedback was solicited from the same audience that attended the initial pitch presentation.

Subsequently, we conducted a scenario analysis, utilizing both the survey findings and insights from our literature review to explore potential future paths for integrating GenAI into higher education. The creation of credible scenarios was based on two key dimensions of uncertainty: the frequency and responsibility with which students use GenAI for their education. Responsibility pertains to the ethical use of GenAI, where students verify critical information against reputable sources, maintaining accountability and academic integrity (later defined in detail). Using these two dimensions, we developed best-case, worst-case, and base-case scenarios. These scenarios provide insight into possible outcomes and implications, aiding in the understanding of the opportunities and challenges associated with integrating GenAI technology in higher education. By combining survey data with scenario forecasting, this methodology offers a comprehensive understanding of GenAI’s impact on university students’ education, enabling the exploration of diverse perspectives and potential trends and aiding in the formulation of recommendations for educators.

4.1 Literature Review

We conducted a literature review on student as well as lecturer perspectives on GenAI in higher education. We found 15 research papers focusing on lecturer strategies and clustered and ranked the 15 top-mentioned lecturer strategies from high to low priority according to the number of mentions in the research papers. The top 15 lecturer strategies are listed below:

(1) Integration of AI into the curriculum;

(2) Adjusting exam and assessment strategies;

(3) Policy development for AI usage;

(4) Incorporating ethics and privacy in curriculum;

(5) Tailoring pedagogical strategies;

(6) Teaching AI limitations;

(7) Designing project-oriented tasks;

(8) Testing the efficiency of GenAI in educational activities;

(9) AI-driven feedback and automated grading;

(10) Upskilling for AI proficiency;

(11) Evaluating GenAI’s role in pedagogical practices;

(12) Integrating GenAI responsibly;

(13) Rethinking learning objectives;

(14) Automating routine tasks;

(15) Using GenAI for learning support.

Further, we collected positive as well as negative implications of GenAI for students mentioned in the literature. With the help of these findings, we formulated our survey.

4.2 Survey Design

The questionnaire was divided into three sections. The first section gathered general information about the students’ backgrounds, the second explored the benefits and drawbacks of using ChatGPT (in a mixed order), and the final section elicited students’ expectations from their educators in HEIs. The survey was administered via Google Forms, chosen for its easy-to-use interface. The responses were then exported to an Excel spreadsheet for further evaluation and data visualization. The survey link was shared with students through the university’s weekly newsletters as well as through networking among peers and across various platforms.

4.3 Survey Questions and Responses

A total of 188 students participated in the survey. The study involved students between 17 and 38 years old (with a mean age of 25 years). The majority of participants were men, accounting for 61% of the total, while women constituted 37% of the participants. Participants represented 36 different nationalities, with most coming from Germany and India. Additionally, a significant number of participants were master’s students, and a majority of them had backgrounds in science, technology, engineering, and mathematics (STEM) fields. More details on the demographics of the survey participants are presented in Fig.2.

This study revolved around students’ interaction with the GenAI tool ChatGPT. Hence, the fundamental question was whether the participants had actually used ChatGPT, as this provides the basis for comprehending and interpreting their responses effectively. Later on, we evaluated the usage frequency in more detail. Out of the total pool of 188 participants, the survey focused solely on the subset of 172 participants who affirmed their usage of ChatGPT. This selection allows a consistent and coherent examination of data throughout the subsequent stages. The remaining 16 participants indicated that they had not used ChatGPT and were therefore excluded from this specific study.

By adopting this selective approach, the research aimed to uphold the integrity and uniformity of the data, enabling robust analysis and meaningful interpretations.

Respondents were requested to rate most questions on a Likert scale of 1 to 5 (5 represents the highest approval); treating such ratings numerically assumes that the distances between answer options are equal. It is noteworthy that a substantial majority, accounting for 94% of the participants, indicated a comfort level of 3 or higher for using ChatGPT in their university education. Impressively, 66% of the surveyed students unequivocally acknowledged that they found ChatGPT to be more helpful than alternative resources (e.g., textbooks, professors, and online research tools). A substantial 63% of participants indicated ChatGPT as their primary source for academic research by giving it a rating of at least 3 out of 5. The survey also sought to evaluate the frequency with which participants utilize ChatGPT in their study routines. Notably, the findings showcased a diverse pattern of usage: most participants (42%) use ChatGPT a few times per month, indicating a periodic reliance on the tool, while 34% revealed a more frequent pattern of usage, relying on ChatGPT several times per week (see Fig.3). These results underscore the versatility of ChatGPT, catering to a variety of study habits and preferences among the participants. We can cluster the students into two different groups: frequent GenAI users (several times per week to daily usage) and non-frequent users (up to a few times per month). Interestingly, these two groups are split approximately 50/50 (see green and blue colored parts in Fig.3).

Impressively, 86% of respondents believe ChatGPT can help them prepare for assignments and exams, rating this aspect 3 or higher and highlighting its potential to assist in academic tasks. The impact of ChatGPT on alleviating the workload and stress of university students was assessed as well. Notably, most participants (89%) indicated that ChatGPT holds the potential to reduce their workload, with a rating of 3 or above, suggesting its role in stress reduction. Responses were divided on whether ChatGPT can offer personalized learning experiences: 40% of the surveyed students firmly supported the idea, while 42% expressed uncertainty, responding with a ‘maybe’ regarding the potential for personalized learning.

Participants were asked to select various tasks for which they employed ChatGPT’s assistance. The majority opted for the following tasks:

(1) Basic research or fact-checking;

(2) Generating ideas or brainstorming;

(3) Essay or assignment writing;

(4) Exam preparation;

(5) Studying specific topics or concepts.

The next questions deal with the potential negative impact of ChatGPT. Most participants do not believe that their ability to engage in meaningful discussions and debates with their professors and peers has been reduced by the usage of ChatGPT: 56% of the study participants gave a rating of 2 or less, meaning they do not find this particularly concerning.

The effect on problem-solving skills is not a concern according to 56% of participants; however, 70% of the participants believe it is easier to cheat or take shortcuts in their academic work and gave this a rating of 3 or more. Many participants have experienced that the content provided by ChatGPT can be inaccurate or misleading, with 76% rating this question 3 or more. Exactly half of the participants do not believe that ChatGPT has negatively affected their critical thinking skills, rating its impact as 2 or below.

The potential of ChatGPT to offer superior learning opportunities compared to traditional classroom settings is met with a clear “no” by 49% of respondents, who stated that it is not a better alternative. The majority (71%) of participants firmly expressed that human educators are irreplaceable, dismissing the idea that ChatGPT could take over their role in the future. Regarding the use of ChatGPT in exams and assignments, the participants had different views: 57% of students advocate for its inclusion, supporting the notion that it should be permitted for these purposes. When asked about the responsibility of instructors to educate students about ChatGPT’s functionalities, 67% of respondents strongly agreed or agreed that learning how to use the tool responsibly should be taught by instructors. Many respondents emphasized that ChatGPT should be used primarily for brainstorming and idea generation, with a cautionary note that fact-checking is crucial for the content it generates.

4.4 Analysis of the Survey Results

All survey results based on a 5-point Likert scale are summarized in Fig.4. In contrast to the findings of Ngo (2023), we found that the negative aspects (mean value 2.81) are less important to students than the positive aspects (mean value 3.45) and the general aspects (mean value 3.49). Therefore, in our study, the positive aspects outweigh the negative aspects, whereas in Ngo (2023) the mean value of barriers to using ChatGPT (3.64) is slightly higher than the mean value of its benefits (3.58).

Our survey on students’ user experience of ChatGPT revealed several key insights. Most participants rated their comfort with ChatGPT highly (94%) and found it more helpful than other resources (66%), with many considering it their primary source (63%). In our study, 89% of the students believed it could reduce their workload; this aligns with the results of Schlude et al. (2024), where 62% of their surveyed students mention time savings from using GenAI. However, the study of Schlude et al. (2024) revealed that 58% of the students do not critically reflect on the accuracy of GenAI responses, which is very similar to our study results. The majority of students are not worried that utilizing GenAI diminishes their participation in meaningful discussions or decreases their critical thinking skills. However, 70% of the students expressed concern that it facilitates cheating. Despite this, 86% of the students are willing to incorporate ChatGPT into their exams or coursework. When it comes to the responsible utilization of these tools, 67% of the students expect guidance from their instructors. On another note, the research conducted by Schlude et al. (2024) reveals that 41% of the students surveyed report a lack of guidelines from their HEIs regarding GenAI use, with 57% of them desiring such guidelines.

As shown in Fig.5, frequent and non-frequent users were analysed separately to compare these two user groups. We defined frequent users as those who utilize GenAI several times per week to daily, and non-frequent users as those who utilize GenAI up to a few times per month. These two groups are approximately 50/50 based on our survey results (see Fig.3). We found that these two user groups have different perceptions of positive and general aspects of GenAI. The mean values of all positive effects included in our study, namely help in preparing for assignments and exams, reduction of workload and stress and effectiveness in helping achieve academic goals, are significantly higher for frequent users. Interestingly, the negative aspects of GenAI are viewed rather similarly.
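To make the group comparison above concrete, the sketch below shows how such an analysis could be reproduced in Python with pandas. The file name, column names, and item groupings are purely illustrative assumptions; the actual evaluation in this study was carried out in an Excel spreadsheet (see Section 4.2).

import pandas as pd

# Hypothetical export of the questionnaire: one row per respondent,
# Likert items rated 1-5 plus a usage-frequency column (names are illustrative).
df = pd.read_csv("survey_responses.csv")

# Cluster respondents as described in Section 4.3: frequent users employ GenAI
# several times per week to daily, non-frequent users up to a few times per month.
frequent_levels = {"several times per week", "daily"}
df["user_group"] = df["usage_frequency"].apply(
    lambda x: "frequent" if x in frequent_levels else "non-frequent"
)

# Aspect categories in the spirit of Fig. 4 (the item names are placeholders).
aspects = {
    "positive": ["exam_preparation_help", "workload_reduction", "academic_goal_support"],
    "negative": ["easier_cheating", "inaccurate_content", "reduced_critical_thinking"],
    "general": ["comfort_level", "more_helpful_than_alternatives", "primary_research_source"],
}

# Mean Likert rating per aspect category, overall and per user group.
for name, items in aspects.items():
    overall_mean = df[items].stack().mean()
    group_means = df.groupby("user_group")[items].mean().mean(axis=1)
    print(f"{name}: overall mean = {overall_mean:.2f}")
    print(group_means.round(2))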

4.5 Scenario Analysis

Scenario analysis is a technique used to assess the potential outcomes of different situations or events, allowing individuals or organizations to make informed decisions based on a range of possibilities (Kosow & Gaßner, 2008). It involves creating various scenarios that represent different future states or conditions, each with its own set of assumptions and implications (De Jouvenel, 2000). The goal is to understand the potential risks, opportunities, and impacts associated with each scenario and provide recommendations on how educators and students can prepare for the changing educational environment.

There are various types of scenario analysis, such as single-point scenarios, best-case, worst-case, base-case scenarios, trend analysis, exploratory scenarios, black swan scenarios, and red-flag scenarios (Kosow & Gaßner, 2008). For our research, we chose the best-case, worst-case, and base-case scenarios to gain a well-rounded understanding of the potential outcomes, making us better equipped to improve our decision-making processes.

The best-case scenario is a strategic analysis that focuses on optimistic assumptions and presents a future outcome that represents the most favorable conditions for a particular situation. It envisions a set of circumstances where everything goes exceptionally well, and all factors align to yield the best possible results. This scenario is often used to identify potential opportunities and rewards that can be achieved under ideal circumstances. However, it’s essential to recognize that the best-case scenario may not always be the most realistic outcome and should be balanced with more cautious analyses.

Conversely, the worst-case scenario is a strategic analysis that concentrates on pessimistic assumptions and illustrates a future outcome characterized by adverse conditions and significant challenges. It envisions a set of circumstances where things go terribly wrong, and the situation reaches its most unfavorable state. The purpose of the worst-case scenario is to identify potential risks, threats, and vulnerabilities that could arise under such conditions, allowing organizations to develop contingency plans and risk mitigation strategies. Like the best-case scenario, the worst-case scenario should also be balanced with more likely outcomes to provide a comprehensive perspective.

The base-case scenario is a strategic analysis that represents a more realistic and probable future outcome. It is based on moderate assumptions and does not assume exceptionally favorable or adverse conditions. Instead, it portrays a future where various factors are expected to develop in line with historical trends, current market conditions, and reasonable projections. The base-case scenario serves as a starting point for further analysis and decision-making, as it offers a more practical assessment of what is likely to happen without extreme deviations. It helps organizations establish baseline expectations and evaluate other scenarios against this standard to understand potential risks and opportunities.

Our scenarios are the outcome of a systematic and iterative approach that included literature-based insights, survey analysis via students’ participation, and rigorous evaluation. This strategy made sure that the scenarios covered a wide range of probable outcomes and promoted a thorough grasp of the likely future dynamics in the field of GenAI-enhanced education.

The tree diagram in Fig.6 shows four different implications of GenAI on students and HEIs depending on the two key uncertainties: the frequency of usage and the level of responsibility of students. The tree ends at different future conditions depending on the path and thereby serves as a comprehensive visual representation for the four different scenarios that we found. We utilized four different categories that examine the effects of excessive and low usage together with responsible and irresponsible behavior. We characterize low usage as students employing GenAI tools rarely, typically once or twice a month, whereas excessive usage implies daily or near-daily utilization of a GenAI tool.

We characterize highly responsible behavior of students as:

• Awareness of limitations;

• Trusting human educators over GenAI;

• Maintaining academic integrity, ethical behavior.

Conversely, we define low-responsibility or irresponsible behavior as:

• Lack of awareness of limitations;

• Substituting human lecturers, overreliance;

• Academic dishonesty and unethical practices such as plagiarism and cheating.

After our literature review and survey analysis, it becomes clear that the confluence of usage frequency and responsible behavior has significant effects on the possible future scenarios. The importance of responsible GenAI usage is emphasized, for example, by Cooper (2023), who stressed the important role of educators in fostering responsible use of ChatGPT. Studies by Boxleitner (2023), Chauncey & McKenna (2023), and Mhlanga (2023a) also focus on responsible and ethical usage of AI chatbots in education. The impact of the usage frequency of ChatGPT is studied in Fakhri et al. (2024), which suggests that frequent use of ChatGPT in higher education diminishes the positive impact on student attitudes, satisfaction, and competence. However, research examining the relationship between the frequency of AI use and student perceptions of AI is inconclusive. Yildiz Durak (2023)’s study of university students in Turkey found no correlation between chatbot usage frequency and visual design self-efficacy, course satisfaction, chatbot usage satisfaction, or learner autonomy, suggesting that frequency of use alone is not a meaningful factor. In contrast, Bailey et al. (2021) found that the amount of time spent using a chatbot in second language writing was positively associated with students’ confidence in using the target language and their perception of task value. In conclusion, usage frequency alone is not a sufficient criterion; it must be considered in combination with the responsibility with which students use GenAI tools.

However, there could be further factors (such as government regulations, changes in institutional policies, or the technological development of GenAI tools) that influence future scenarios, and these political and technological developments are difficult to predict. We instead focus on the micro level (van Notten et al., 2003) and consider the most important factors. Therefore, below we describe four scenarios, each of which presents a different perspective on how students might utilize this technology, and we explore the corresponding adjustments that educators and professors may need to make in order to effectively integrate GenAI into their instructional strategies. An overview of our scenarios is presented in Fig.7.

4.5.1 Scenario 1: Transformation

In this scenario, students use GenAI extensively but very responsibly, leading to a range of positive outcomes. Students will have access to personalized learning experiences with AI tutors capable of adapting to individual learning styles, providing instant feedback, and generating tailored educational materials (Sharma & Yadav, 2022). It also serves as a powerful language learning aid, offering translation services and grammar/vocabulary explanations (Loos et al., 2023). With its 24/7 availability, students have access to assistance whenever they need it, even outside of regular school hours (Islam & Islam, 2023). This revolution in education increases accessibility, especially for remote or underserved communities, as internet connectivity becomes more ubiquitous. GenAI, alongside traditional teaching, would play a pivotal role in transforming education (Gill et al., 2024). However, this scenario also raises concerns about the potential for over-reliance on technology (Nah et al., 2023; Sok & Heng, 2023), issues of equity in access (Bozkurt & Sharma, 2023), and the need for robust cybersecurity measures (Wu et al., 2023) to protect sensitive student data. Additionally, the role of human educators shifts towards being facilitators and mentors, focusing on higher-order thinking skills (Iskender, 2023), social and emotional learning, and the integration of technology into the curriculum. Lecturers can generate new material (Atlas, 2023) or use GenAI for learning assessment (Cotton et al., 2023; Gilson et al., 2023) or to provide immediate feedback (Moore et al., 2022) and thereby reduce workload (Sok & Heng, 2023), but adjustments of their material and exams become necessary. Rethinking learning goals and how to measure these in an exam is of great importance, especially since our survey revealed that over half of the students want to be able to use ChatGPT even in exams. Upskilling competencies can become necessary for educators (Tlili et al., 2023). We describe the best-case scenario, representing a future where technology fundamentally reshapes the educational landscape, offering immense potential for accessible, personalized, and globally connected learning experiences. However, for this scenario HEIs need to make adjustments such as focusing on higher-order thinking skills, social and emotional learning, the integration of technology into the curriculum, upskilling lecturers, and adjusting examination strategies.

4.5.2 Scenario 2: Conversation

In this second scenario, students judiciously incorporate GenAI into their educational experiences, striking a harmonious balance between AI-assisted learning and traditional pedagogical methods (Opera et al., 2023). Students use the new technology up to a few times per week, which is currently most realistic. According to the usage frequency in our survey, 46% of students use GenAI tools like ChatGPT occasionally (a few times per week), and 37% use them frequently (several times per week). GenAI acts as a supplementary tool, offering valuable support for tasks such as concept clarification and idea generation. Students exercise a high degree of responsibility, ensuring that the technology is utilized ethically and in adherence to academic integrity standards (Cotton et al., 2023). They engage with GenAI in a manner that complements their existing learning strategies, leveraging its capabilities to enhance productivity and understanding. Human educators remain pivotal in education, offering guidance, mentorship, and critical thinking opportunities. They could utilize GenAI to augment lessons, provide tailored explanations, create practice exercises, and implement personalized learning strategies (Mhlanga, 2023b). This collaborative approach enhances the overall educational experience. This scenario emphasizes the importance of responsible technology integration and encourages students to leverage AI tools as aids rather than replacements for human-driven education. Lecturers should, however, encourage students to get to know the advantages of the AI tools so that no gap arises between students who use the technology and students who do not take advantage of its possibilities (Bozkurt & Sharma, 2023). This is the base-case scenario, underscoring the value of a balanced approach, where both AI and human educators work in tandem to foster holistic and enriched learning experiences. We believe this scenario is the most realistic, considering the survey result that 83% of the participating students currently use GenAI tools like ChatGPT a few times a month or week. Since students use GenAI responsibly, this scenario is assumed to be the base scenario. In this scenario, educators should integrate GenAI tools into their lectures to provide students with use cases of these tools. They could implement feedback mechanisms for students to provide input on their experiences with GenAI tools, allowing for continuous improvement. They need to ensure that AI-powered resources and materials are accessible and promote digital literacy among students. Furthermore, educators need to regularly evaluate the impact and effectiveness of GenAI tools in achieving educational goals and be prepared to adapt and evolve their strategies as GenAI technologies continue to develop and students’ usage behavior changes.

4.5.3 Scenario 3: Survival

In this scenario, students depend extensively on GenAI without exercising appropriate due diligence or responsibility in its utilization. The accessibility and ease of interaction with the AI system leads to over-reliance on its capabilities for various academic tasks (Nah et al., 2023; Sok & Heng, 2023). Instead of actively engaging with course materials or seeking guidance from lecturers, students primarily rely on GenAI as their main source for assignments, research papers, and similar tasks. Additionally, critical thinking and problem-solving skills may erode over time, as students become accustomed to instant answers and automated assistance reducing workload and stress (Limna et al., 2023). Plagiarism becomes a prevalent issue, as students may submit content generated by GenAI tools without proper attribution or original thought (Karthikeyan, 2023). This scenario raises concerns about the erosion of academic integrity, with educational institutions grappling to detect and address instances of irresponsible GenAI use (Yu, 2023). Furthermore, it highlights the potential for missed learning opportunities and diminished engagement in collaborative, interactive learning environments (Tlili et al., 2023). Educators and institutions are prompted to implement stringent policies (Nah et al., 2023), educational campaigns, and technological safeguards to mitigate the negative consequences of this unbridled dependence on GenAI. The responsible use of GenAI needs to be incorporated into the curriculum to guide students on how to use the technology in an appropriate manner. Educators need to teach the limitations of GenAI and punish academic dishonesty and unethical practices. This scenario presents the worst-case scenario, serving as a cautionary tale and emphasizing the need for balanced, responsible use of GenAI tools in education to preserve the integrity and effectiveness of the learning experience.

4.5.4 Scenario 4: Indifferent

In this scenario, despite the availability of GenAI as a valuable educational resource, students opt to use it sparingly and, when in use, in an inappropriate manner. Rather than leveraging the tool for its intended purpose of assisting in the learning process, it is predominantly utilized for shortcuts, such as committing plagiarism or seeking immediate answers without genuine comprehension (Lo, 2023). This minimal engagement with GenAI leads to missed opportunities. It becomes necessary for educators to teach how to use the technology appropriately. Furthermore, irresponsible use may lead to issues of academic integrity and dishonesty, as students may resort to unethical practices in their educational pursuits (Karthikeyan, 2023). This scenario thus highlights the importance of promoting responsible and meaningful engagement with GenAI in the curriculum, emphasizing its true value as an educational tool rather than abandoning it. Educators need to teach the limitations of GenAI, punish academic dishonesty, and promote the advantages of responsibility by showcasing positive examples in class. However, we will not focus on this scenario in more detail as it is not one of the three best-case, worst-case or base scenarios.
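As a compact summary of how the two uncertainty dimensions determine the four scenarios above (cf. Fig.6 and Fig.7), the following illustrative sketch encodes the mapping in Python. It is our own simplification for presentation purposes, not part of the original scenario analysis; the function name and labels are chosen freely.

def classify_scenario(usage: str, responsible: bool) -> str:
    """Map the two uncertainty dimensions to the scenarios of Sections 4.5.1-4.5.4.

    usage: "excessive" (daily or near-daily use) or "low" (up to a few times
    per week or month); responsible: whether the student uses GenAI responsibly
    as characterized in Section 4.5.
    """
    if responsible:
        # Responsible use: extensive use transforms education (best case),
        # moderate use keeps GenAI and human teaching in balance (base case).
        return "Transformation (best case)" if usage == "excessive" else "Conversation (base case)"
    # Irresponsible use: heavy reliance threatens academic integrity (worst case),
    # sparse but inappropriate use wastes the tool's potential.
    return "Survival (worst case)" if usage == "excessive" else "Indifferent"

# Example: a student who uses GenAI daily but responsibly falls into the best case.
print(classify_scenario("excessive", responsible=True))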

4.6 Results of the Scenario Analysis

As we investigate all scenarios, it becomes evident that there are some underlying positive and negative implications. Academic dishonesty, students’ over-reliance on GenAI, and the additional effort educators must invest to distinguish between AI-generated and student work, alongside the advantage of easily generating new material, are elements that cut across multiple scenarios, influencing academic outcomes and interactions. The major positive implications include tailored education, instant feedback, and 24/7 assistance, resulting in an improved learning journey for the students. On the other hand, irresponsible use of the technology would challenge academic integrity, leading to unethical practices like plagiarism and cheating. For HEIs, teaching the right practices becomes vital in order to foster awareness and responsible usage of the technology among students. It is equally important to stress that regular physical interaction between students and educators is necessary to maintain social and emotional learning. Incorporating GenAI tools into the traditional classroom concept is highly recommended, as it brings many advantages.

However, the behavior of students should be monitored as it might change over time. At the moment, students use GenAI rather moderately, and we believe we are currently in the scenario Conversation. This might change over time to a more extreme usage. In that case, HEIs have to prepare for the scenarios Transformation or Survival depending on how responsibly students use GenAI tools.

There are important common themes across all scenarios. Independently of the specific scenarios, we provide recommendations for educators and students in HEIs.

Recommendations for educators:

• Integrate GenAI in classes, especially teaching students responsible GenAI usage;

• Inform about limitations of GenAI;

• Rethink learning goals and materials;

• Revise exams in cases where GenAI usage is possible; decide whether it is permitted and, if not, how to ensure there is no cheating.

Recommendations for students:

• Use GenAI responsibly, including awareness of GenAI limitations; trust human educators over GenAI, maintaining academic integrity and ethical behavior;

• Do not over-rely on GenAI;

• Foster critical thinking;

• Use GenAI for personalized learning.

5 Discussion and Future Research

The investigation of two dimensions of students’ behavior, GenAI usage frequency and responsible behavior, offers a thorough exploration of the positive and negative effects of GenAI in HEIs. This study showcases the benefits students perceive, such as better support with assignments, enriched learning experiences, and reduced stress. Our results underscore the critical need for instructors to guide students, which aligns with the criticisms highlighted in Schlude et al. (2024) concerning the lack of guidelines from HEIs. Additionally, our findings corroborate the observations in Schlude et al. (2024) regarding the reduction of workload when using GenAI and the inadequate critical reflection on the accuracy of GenAI responses by students. However, our study diverges from the results in Ngo (2023), as we observed that the positive aspects of GenAI usage outweigh the negative. In our research, we analyzed two distinct user groups: frequent and non-frequent GenAI users. We discovered that these groups have differing perceptions of the positive and general aspects of GenAI, with frequent users exhibiting a more positive attitude. This novel classification of user types based on usage frequency offers intriguing insights into the varied experiences and perceptions of GenAI.

However, we must recognize certain limitations. The sample size and demographics of the survey participants were limited, making it difficult to generalize the results. Furthermore, the study only considers students’ perspectives, omitting educators’ viewpoints in the survey. Our major findings for our student survey are summarized below:

High usage of GenAI for academic tasks: A majority of students use GenAI tools for tasks like assignment writing, exam preparation, and brainstorming, indicating its integration into academic routines and perceived usefulness for achieving academic goals.

Concerns about academic integrity: Over 70% of students believe that GenAI tools increase the ease of academic dishonesty, such as cheating or plagiarism, highlighting a need for ethical guidance and stricter usage policies.

Students wish to be taught about responsible GenAI usage: Nearly half of the respondents agree that instructors should teach students responsible GenAI usage, emphasizing an educational gap that HEIs need to address.

The paper further outlines best-case, worst-case, and base-case scenarios for GenAI’s impact on higher education, recommending that HEIs prepare for responsible GenAI integration by revising curricula, upskilling educators, and adjusting examination formats to accommodate GenAI tools. However, this research paper only takes two dimensions of GenAI use in education into account, therefore, future studies could include more criteria like technological development or institutional policy changes.

All participants in education, including students, educators, and policy makers, should not fear GenAI but rather view it as a tool that can assist with specific tasks while recognizing that it is not infallible and cannot be entirely relied upon. When GenAI is used responsibly, it holds the potential to produce high-quality output efficiently. However, it also poses threats to academic integrity, the risk of technology overreliance, and the possibility of inaccurate responses. Training courses for educators are crucial to support them and address any gaps in knowledge and concerns they may have.

Understanding GenAI’s influence on education may require an extended period, and longitudinal studies could illuminate the long-term effects of integrating this technology into educational settings. This line of research might reveal the implications of over-reliance on AI, the effects of spreading misinformation and biases, the psychological aspects of human-AI interactions, and the formation of ethical guidelines. These discussions and future studies have the potential to direct responsible GenAI integration, influence policy decisions, and ensure a balanced, empowered, and ethically sound relationship with GenAI technologies.

In conclusion, while the research paper provides valuable insights, further investigation is necessary. By overcoming its limitations through broader and more inclusive studies, incorporating more detailed perspectives of educators, and exploring the long-term effects, researchers can develop a more comprehensive understanding of how to effectively utilize GenAI to enhance education for students.

6 Conclusions

In conclusion, this research paper examined the impact of GenAI on students and educators in HEIs through an extensive survey and scenario analysis. The findings highlight the varied experiences and perspectives of students using GenAI as an educational tool. The relationship between usage frequency and responsible behavior significantly affects the outcomes of integrating GenAI into education. Using GenAI thoughtfully can enhance its benefits while minimizing potential drawbacks. The research emphasizes the need for a balanced approach when utilizing GenAI. While GenAI can improve learning experiences and ease workloads, it is crucial to address concerns related to content reliability, overreliance, and academic dishonesty. Instructors are vital in teaching responsible usage, and many students recognize the importance of this guidance. However, educators must adapt their materials, rethink curricula and learning goals, and revise exams. Moreover, educators must enhance their efforts to detect academic dishonesty and plagiarism, establish strict policies, and address ethical concerns, especially when students use GenAI irresponsibly. As GenAI technology continues to develop, further research and adaptive educational strategies are essential to maximize its benefits while mitigating potential challenges. By understanding and addressing these issues, we can effectively leverage GenAI to enrich students’ learning experiences.

References

[1]

Atlas, S. (2023). ChatGPT for higher education and professional development: A guide to conversational AI. Available from Digitalcommons website.

[2]

Bailey, D., Southam, A., Costley, J. (2021). Digital storytelling with chatbots: Mapping L2 participation and perception patterns.Interactive Technology and Smart Education, 18(1): 85–103

[3]

Banerjee, S., Agarwal, A., Singla, S. (2024). LLMs will always hallucinate, and we need to live with this. arXiv Preprint, arXiv:2409.05746.

[4]

Boxleitner, A. (2023). Integrating AI in education: Opportunities, challenges and responsible use of ChatGPT. SSRN Electronic Journal, 4607308.

[5]

Bozkurt, A., Sharma, R. C. (2023). Challenging the status quo and exploring the new boundaries in the age of algorithms: Reimagining the role of generative AI in distance education and online learning. Asian Journal of Distance Education, 18(1).

[6]

Chan, C. K. Y., Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education.International Journal of Educational Technology in Higher Education, 20(1): 43

[7]

ChatGPT Generative Pre-trained Transformer, Zhavoronkov, A. (2022). Rapamycin in the context of Pascal’s Wager: Generative pre-trained transformer perspective.Oncoscience, 9: 82–84

[8]

Chauncey, S. A., McKenna, H. P. (2023). A framework and exemplars for ethical and responsible use of AI chatbot technology to support teaching and learning.Computers and Education: Artificial Intelligence, 5: 100182

[9]

Chen, Y., Jensen, S., Albert, L. J., Gupta, S., Lee, T. (2023). Artificial intelligence (AI) student assistants in the classroom: Designing chatbots to support student success.Information Systems Frontiers, 25(1): 161–182

[10]

Cooper, G. (2023). Examining science education in ChatGPT: An exploratory study of generative artificial intelligence.Journal of Science Education and Technology, 32(3): 444–452

[11]

Cotton, D. R. E., Cotton, P. A., Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT.Innovations in Education and Teaching International, 61(2): 228–239

[12]

De Jouvenel, H. (2000). A brief methodological guide to scenario building.Technological Forecasting and Social Change, 65(1): 37–48

[13]

Devlin, J., Chang, M.-W., Lee, K., Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. ACL, 4171–4186.

[14]

Fakhri, M. M., Ahmar, A. S., Isma, A., Rosidah, R., Fadhilatunisa, D. (2024). Exploring generative AI tools frequency: Impacts on attitude, satisfaction, and competency in achieving higher education learning goals.EduLine: Journal of Education and Learning Innovation, 4(1): 196–208

[15]

García-Peñalvo, F. J. (2023). The perception of artificial intelligence in educational contexts after the launch of ChatGPT: Disruption or panic.Education in the Knowledge Society, 24: e31279

[16]

Gill, S. S., Xu, M., Patros, P., Wu, H., Kaur, R., Kaur, K., Fuller, S., Singh, M., Arora, P., Parlikad, A. K., (2024). Transformative effects of ChatGPT on modern education: Emerging era of AI chatbots.Internet of Things and Cyber-Physical Systems, 4: 19–23

[17]

Gilson, A., Safranek, C. W., Huang, T., Socrates, V., Chi, L., Taylor, R. A., Chartash, D. (2023). How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment.JMIR Medical Education, 9(1): e45312

[18]

Hendy, A., Abdelrehim, M., Sharaf, A., Raunak, V., Gabr, M., Matsushita, H., Kim, Y. J., Afify, M., Awadalla, H. H. (2023). How good are GPT models at machine translation? A comprehensive evaluation. arXiv Preprint, arXiv:2302.09210.

[19]

Hosseini, M., Rasmussen, L. M., Resnik, D. B. (2024). Using AI to write scholarly publications.Accountability in Research, 31(7): 715–723

[20]

Iskender, A. (2023). Holy or unholy? Interview with Open AI’s ChatGPT.European Journal of Tourism Research, 34: 3414

[21]

Islam, I., Islam, M. N. (2023). Opportunities and challenges of ChatGPT in academia: A conceptual analysis. Authorea Preprints.

[22]

Jahn, S., Kaste, S., März, A., Stühmeier, R. (2019, May 25). Denkimpuls digitale bildung: Einsatz von künstlicher intelligenz im schulunterricht. Available from initiatived21 website. (in German).

[23]

Karthikeyan, C. (2023). Literature review on pros and cons of ChatGPT implications in education.International Journal of Science and Research, 12(3): 283–291

[24]

Kocoń, J., Cichecki, I., Kaszyca, O., Kochanek, M., Szydło, D., Baran, J., Bielaniewicz, J., Gruza, M., Janz, A., Kanclerz, K., (2023). ChatGPT: Jack of all trades, master of none.Information Fusion, 99: 101861

[25]

Kosow, H., Gaßner, R. (2008). Methods of future and scenario analysis: Overview, assessment, and selection criteria (Vol. 39). IDOS.

[26]

Krause, S., Adler, S., Bühl, J., Schenkendorf, R., Schneider, K., Stolzenburg, F., Transchel, F. (2023). Entwicklung interdisziplinärer module in der hochschulbildung. In: Proceedings of INFORMATIK 2023—Designing Futures: Zukünfte Gestalten Bildun. Bonn: Gesellschaft für Informatik e.V.., 461–464. (in German).

[27]

Krause, S., Stolzenburg, F. (2023). Commonsense reasoning and explainable artificial intelligence using large language models. In: Proceedings of European Conference on Artificial Intelligence 2023. Cham: Springer, 302–319.

[28]

Krause, S., Panchal, B.H., Ubhe, N. (2025). The evolution of learning: Assessing the transformative impact of generative AI on higher education. In: Proceedings of the Artificial Intelligence in Education Technologies: New Development and Innovative Practices. Singapore: Springer, 356–371.

[29]

Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., Zettlemoyer, L. (2019). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv Preprint, arXiv:1910.13461.

[30]

Limna, P., Kraiwanit, T., Jangjarat, K., Klayklung, P., Chocksathaporn, P. (2023). The use of ChatGPT in the digital era: Perspectives on chatbot implementation. Journal of Applied Learning and Teaching, 6(1).

[31]

Liu, A., Feng, B., Xue, B., Wang, B., Wu, B., Lu, C., Zhao, C., Deng, C., Zhang, C., Ruan, C., et al. (2024). DeepSeek-v3 technical report. arXiv Preprint, arXiv:2412.19437.

[32]

Liu, F., Stapleton, P. (2018). Connecting writing assessment with critical thinking: An exploratory study of alternative rhetorical functions and objects of enquiry in writing prompts. Assessing Writing, 38: 10–20.

[33]

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv Preprint, arXiv:1907.11692.

[34]

Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4): 410.

[35]

Loos, E., Gröpler, J., Goudeau, M.-L. S. (2023). Using ChatGPT in education: Human reflection on ChatGPT’s self-reflection. Societies, 13(8): 196.

[36]

Malik, T., Hughes, L., Dwivedi, Y. K., Dettmer, S. (2023). Exploring the transformative impact of generative AI on higher education. In: Proceedings of New Sustainable Horizons in Artificial Intelligence and Digital Solutions. Cham: Springer, 69–77.

[37]

Meyer, J. G., Urbanowicz, R. J., Martin, P. C., O’Connor, K., Li, R., Peng, P.-C., Bright, T. J., Tatonetti, N., Won, K. J., Gonzalez-Hernandez, G., Moore, J. H. (2023). ChatGPT and large language models in academia: Opportunities and challenges. BioData Mining, 16(1): 20.

[38]

Mhlanga, D. (2023a). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. In: Proceedings of FinTech and Artificial Intelligence for Sustainable Development: The Role of Smart Technologies in Achieving Development Goals. Cham: Palgrave Macmillan, 387–409.

[39]

Mhlanga, D. (2023b). Digital transformation education, opportunities, and challenges of the application of ChatGPT to emerging economies. SSRN Electronic Journal, 4355758.

[40]

Moore, S., Nguyen, H. A., Bier, N., Domadia, T., Stamper, J. (2022). Assessing the quality of student-generated short answer questions using GPT-3. In: Proceedings of Educating for a New Future: Making Sense of Technology-Enhanced Learning Adoption. Cham: Springer, 243–257.

[41]

Nah, F. F.-H., Zheng, R., Cai, J., Siau, K., Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI‒human collaboration. Journal of Information Technology Case and Application Research, 25(3): 277–304.

[42]

Ngo, T. T. A. (2023). The perception by university students of the use of ChatGPT in education. International Journal of Emerging Technologies in Learning, 18(17): 4.

[43]

O’Dea, X. C., O’Dea, M. (2023). Is artificial intelligence really the next big thing in learning and teaching in higher education? A conceptual paper. Journal of University Teaching and Learning Practice, 20(5): 05.

[44]

Opara, E., Mfon-Ette Theresa, A., Aduke, T.-R. C. (2023). ChatGPT for teaching, learning and research: Prospects and challenges. Global Academic Journal of Humanities and Social Sciences, 5(2): 33–40.

[45]

OpenAI. (2023). GPT-4 technical report. arXiv Preprint, arXiv:2303.08774.

[46]

Perera, P., Lankathilaka, M. (2023). AI in higher education: A literature review of ChatGPT and guidelines for responsible implementation. International Journal of Research and Innovation in Social Science, 7: 306–314.

[47]

Qadir, J. (2023). Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for education. In: Proceedings of 2023 IEEE Global Engineering Education Conference. IEEE, 1–9.

[48]

Rahman, M. M., Watanobe, Y. (2023). ChatGPT for education and research: Opportunities, threats, and strategies. Applied Sciences, 13(9): 5783.

[49]

Schlude, A., Mendel, U., Stürz, R. A., Fischer, M. (2024, June 21). Verbreitung und Akzeptanz generativer KI an Schulen und Hochschulen. Available from BIDT website. (in German).

[50]

Sharma, S., Yadav, R. (2022). ChatGPT—A technological remedy or challenge for education system. Global Journal of Enterprise Information System, 14(4): 46–51.

[51]

Sok, S., Heng, K. (2023). ChatGPT for education and research: A review of benefits and risks. SSRN Electronic Journal, 4378735.

[52]

Terwiesch, C. (2023, January 24). Would ChatGPT3 get a Wharton MBA? A prediction based on its performance in the operations management course. Available from University of Pennsylvania Wharton School website.

[53]

Thoppilan, R., De Freitas, D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng, H. T., Jin, A., Bos, T., Baker, L., Du, Y., et al. (2022). LaMDA: Language models for dialog applications. arXiv Preprint, arXiv:2201.08239.

[54]

Thurner, V., Socher, G. (2023). Disciplines go digital—Developing transdisciplinary study programs that integrate AI with non-IT disciplines. In: Proceedings of 2023 IEEE Global Engineering Education Conference. IEEE, 1–7.

[55]

Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1): 15.

[56]

van Notten, P. W., Rotmans, J., Van Asselt, M. B., Rothman, D. S. (2003). An updated scenario typology. Futures, 35(5): 423–443.

[57]

Warschauer, M., Tseng, W., Yim, S., Webster, T., Jacob, S., Du, Q., Tate, T. (2023). The affordances and contradictions of AI-generated text for second language writers. SSRN Electronic Journal, 4404380.

[58]

Wei, J., Kim, S., Jung, H., Kim, Y.-H. (2023). Leveraging large language models to power chatbots for collecting user self-reported data. arXiv Preprint, arXiv:2301.05843.

[59]

Wu, X., Duan, R., Ni, J. (2023). Unveiling security, privacy, and ethical concerns of ChatGPT. Journal of Information and Intelligence, 2(2): 102–115.

[60]

Xiao, P., Chen, Y., Bao, W. (2023). Waiting, banning, and embracing: An empirical analysis of adapting policies for generative AI in higher education. arXiv Preprint, arXiv:2305.18617.

[61]

Xu, R., Feng, Y., Chen, H. (2023). ChatGPT vs. Google: A comparative study of search performance and user experience. arXiv Preprint, arXiv:2307.01135.

[62]

Yildiz Durak, H. (2023). Conversational agent-based guidance: Examining the effect of chatbot usage frequency and satisfaction on visual design self-efficacy, engagement, satisfaction, and learner autonomy. Education and Information Technologies, 28(1): 471–488.

[63]

Yu, H. (2023). Reflection on whether ChatGPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14: 1181712.

RIGHTS & PERMISSIONS

© The Author(s). This article is published with open access at link.springer.com and journal.hep.com.cn.
