Risk warning technologies and emergency response mechanisms in Sichuan–Tibet Railway construction

Liujiang KANG, Hao LI, Cong LI, Na XIAO, Huijun SUN, Nsabimana BUHIGIRO

Front. Eng ›› 2021, Vol. 8 ›› Issue (4) : 582-594. DOI: 10.1007/s42524-021-0151-7

REVIEW ARTICLE

Abstract

Safety is one of the most critical themes in any large-scale railway construction project. Recognizing the importance of safety in railway engineering, practitioners and researchers have proposed various standards and procedures to ensure safety in construction activities. In this study, we first review four critical research areas of risk warning technologies and emergency response mechanisms in railway construction, namely, (i) risk identification methods of large-scale railway construction projects, (ii) risk management of large-scale railway construction, (iii) emergency response planning and management, and (iv) emergency response and rescue mechanisms. After reviewing the existing studies, we present four corresponding research areas and recommendations on the Sichuan–Tibet Railway construction. This study aims to inject new significant theoretical elements into the decision-making process and construction of this railway project in China.

Keywords

railway construction / risk warning technologies / emergency response mechanisms / Sichuan–Tibet Railway

Cite this article

Liujiang KANG, Hao LI, Cong LI, Na XIAO, Huijun SUN, Nsabimana BUHIGIRO. Risk warning technologies and emergency response mechanisms in Sichuan–Tibet Railway construction. Front. Eng, 2021, 8(4): 582‒594 https://doi.org/10.1007/s42524-021-0151-7

1 Introduction

In recent years, large language models (LLMs) have advanced significantly in natural language processing. Notable examples include the Generative Pre-trained Transformer (GPT) (Brown et al., 2020), BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019), and T5 (Text-to-Text Transfer Transformer) (Raffel et al., 2020). These models demonstrate impressive performance on a range of tasks, such as text generation, machine translation, and question answering. However, their growing prevalence raises security and ethical concerns, drawing attention from both academia and industry.
The rise of LLMs and generative AI has profoundly affected multiple industries. In sectors like education (Májovský et al., 2023), healthcare (De Angelis et al., 2023; Liu et al., 2024a), and politics (Fan, 2023), LLMs are transforming traditional work processes. They enhance efficiency and cut costs while driving digital transformation and intelligent development. However, this rapid progress comes with challenges.
Regarding information security, LLMs can generate false information (Vykopal et al., 2024). A notable instance involved CNET publishing LLM-generated articles without clear disclosure, which led to potential misinformation and a lack of transparency. Furthermore, LLMs can be misused for cyber-attacks (Chen et al., 2023a; Ding et al., 2024; Gupta et al., 2023; Mozes et al., 2023; Qammar et al., 2023; Zhuo et al., 2023). These sophisticated adversarial attacks can evade AI safety mechanisms, producing harmful content. This poses significant risks, including identity theft and malicious social media posts that threaten social order and personal privacy.
Moreover, the potential biases and unfairness in LLM decision-making have sparked extensive socio-ethical discussions. Biases in training data can result in unfair outcomes and reinforce stereotypes. Separately, the “grandma loophole” demonstrated how LLMs could be manipulated into revealing sensitive information, such as Windows 11 serial numbers. These issues hinder the stable development of technology and challenge societal harmony and stability.
This review provides a comprehensive overview and comparison of academic advances concerning the information security and social ethics of LLMs from 2020 to January 2024, aiming to clarify the latest trends and challenges in this area. The literature search combined the following keywords: “Large Language Models OR GPT OR Generative AI OR LLMs,” “Ethics,” “Security,” “Threat,” “Defense OR Defend OR Model processing OR Red-blue confrontation OR Adversarial training,” and “Social impact.” These keywords guided searches of the Web of Science, Scopus, Ei Village, and China National Knowledge Infrastructure (CNKI) databases. A preliminary review of titles and abstracts led to the exclusion of irrelevant papers, and the remaining articles were evaluated for structural completeness, innovativeness of experimental models, and writing quality. This process identified 73 research articles focusing on LLMs, information security, and social ethics; the screening process is illustrated in Fig.1. By synthesizing and analyzing this literature, the review reveals the hidden information security and socio-ethical risks in this field in recent years and categorizes the associated detection and defense techniques, giving both the public and professional organizations a clearer understanding of the information and ethical security of large models and their defense mechanisms.
Fig.1 Systematic selection process of relevant literature.


This review offers an in-depth look at the security ethics of LLMs, emphasizing threats to information security as well as defense and detection techniques within the context of social ethics. As a vital part of ethical security, information security is crucial for the safe and responsible use of technology; its primary objective is to protect information and systems from unauthorized access, use, damage, interference, or destruction. The paper discusses potential issues that LLMs may encounter, including risks associated with the misuse of their functions and malicious attacks. In response to these threats, defense and detection techniques are outlined and categorized into strategies implemented before model deployment and contingency measures applied post-deployment. Additionally, social and ethical issues surrounding LLMs are addressed, with a framework of synthesized ideas presented in Fig.2.
Fig.2 Framework diagram of the synthesis idea.


2 New types of information security threats to LLMs

In the current digital era, LLMs are a significant technology in artificial intelligence. They have become widespread in many aspects of our lives. However, as their use expands, information security issues are increasingly important. This section examines the main security threats posed by LLMs, focusing on two key areas: first, the security problems arising from the misuse of LLM functions; and second, the concerns stemming from malicious attacks on LLMs.

2.1 Misbehaviors using LLMs

LLMs leverage deep learning and natural language processing capabilities to generate highly persuasive text. This ability presents new tools for phishing attacks (Begou et al., 2023; Elsadig, 2023; Gupta et al., 2023; Iqbal et al., 2023; Qammar et al., 2023), social engineering attacks (Gupta et al., 2023), malware threats (Deng et al., 2024a; Gupta et al., 2023; Qammar et al., 2023), hacking (Gupta et al., 2023), and disinformation generation (Chen et al., 2023a; Mozes et al., 2023; O’Neill and Connor, 2023; Sison et al., 2024; Staab et al., 2024; Vykopal et al., 2024), as detailed in Tab.1.
Tab.1 Misbehaviors arising from the use of LLMs
Information security classification | Misbehavior | Related literature | Effect
Misconduct arising from the use of LLMs | Phishing attacks | Gupta et al. (2023); Qammar et al. (2023); Elsadig (2023); Iqbal et al. (2023); Begou et al. (2023) | More authentic and credible content raises attack success rates and enables large-scale automated attacks
 | Social engineering attacks | Gupta et al. (2023) | Mimics human text generation for psychological manipulation
 | Malware threats | Gupta et al. (2023); Qammar et al. (2023); Deng et al. (2024a) | Automated code generation lowers the technical barrier to entry; iterative learning expands concealed complexity
 | Hacking attacks | Gupta et al. (2023); Fang et al. (2024) | Automates hacking procedures and deploys models to identify vulnerabilities
 | False information generation | Vykopal et al. (2024); Sison et al. (2024); O’Neill and Connor (2023); Mozes et al. (2023); Chen et al. (2023a); Staab et al. (2024) | Statistical prediction with limited inference randomly generates meaningless text or false information

2.1.1 Phishing attacks

Phishing attacks are a common cybercrime tactic aimed at obtaining sensitive user information, such as usernames, passwords, and credit card details, through deception (Elsadig, 2023). These attacks can be categorized into two types: large-scale phishing and spear phishing (Iqbal et al., 2023). Recent advancements in artificial intelligence, particularly in LLMs, have enabled attackers to create highly personalized and convincing phishing emails that overcome the limitations of traditional, non-personalized emails (Iqbal et al., 2023). Additionally, the cross-cultural capabilities of LLMs allow attackers to tailor their efforts to specific regions or language groups, thus enhancing their deceptive tactics (Iqbal et al., 2023). LLM-generated texts mimic the communication styles of trusted entities, increasing user trust (Gupta et al., 2023). This ability helps sidestep spam filters and security systems, significantly improving the success rates of phishing attacks (Qammar et al., 2023). Consequently, this broadens the scale and automation of these attacks (Begou et al., 2023). Furthermore, LLMs can facilitate the creation and launch of phishing websites, allowing attackers to execute effective scams without needing extensive technical skills (Begou et al., 2023). The iterative learning capabilities of LLMs enable continuous improvement of attack strategies (Iqbal et al., 2023), making phishing attacks more prevalent and posing substantial threats to network security.

2.1.2 Social engineering attacks

The sophistication of phishing attacks highlights the troubling integration of AI technologies, particularly LLMs, into social engineering tactics. The enhanced personalization and cultural adaptability of LLMs make phishing emails more believable while significantly improving their ability to evade traditional security measures.
Moreover, the ethical implications surrounding the use of LLMs in social engineering attacks are increasingly significant (Gupta et al., 2023). The capacity of these models to generate contextually relevant and linguistically convincing messages raises important questions about privacy, consent, and potential misuse. For example, an attacker could use an LLM to craft a message that mimics the professional tone of a supervisor, manipulating the victim into disclosing sensitive information or taking actions that compromise security.
The convergence of phishing with broader social engineering strategies, facilitated by LLMs, blurs the lines between various forms of deception. All these tactics exploit human trust and compliance. This evolution requires a thorough understanding of social engineering techniques and a proactive approach to the ethical deployment of AI.

2.1.3 Malware threats

A malware threat is software installed on a computer without the user’s consent, performing harmful operations (Deng et al., 2024a). Malware takes many forms, including viruses, worms, botnets, Trojans, and ransomware (Gupta et al., 2023; Qammar et al., 2023). Malware can steal sensitive information, exploit system vulnerabilities, gain unauthorized access, lock or unlock systems, render devices unusable, demand ransom, or display unsolicited advertisements, among other malicious activities (Qammar et al., 2023). The risk of malware is heightened by the advanced text generation capabilities of LLMs. These models can generate code, including malware, from simple prompts, lowering the technical barrier for creating such software. Additionally, LLMs’ iterative learning capabilities allow for the continuous enhancement of malware, making it stealthier and more effective. Attackers can also use LLMs to produce code that bypasses terms of use and forges identities, effectively circumventing platform restrictions (Gupta et al., 2023). Moreover, the text generation abilities of ChatGPT can be exploited to create complex and context-specific malicious code called attack payloads. These payloads can perform unauthorized actions, such as deleting files, collecting data, or launching further attacks (Gupta et al., 2023). Polymorphic malware, which modifies its code with each execution to evade detection by antivirus software, poses an additional threat. ChatGPT’s generative capabilities can facilitate the creation of such malware, increasing the sophistication and stealth of attacks. Furthermore, ChatGPT can produce instances of polymorphic malware that exploit zero-day vulnerabilities and generate different malware variants tailored to specific attacks. It can also craft scripts, such as Java snippets or PowerShell scripts, used to remotely control infected computers.
Additionally, it can aid in the creation of darknet marketplaces for illicit transactions (Gupta et al., 2023; Qammar et al., 2023). These developments significantly heighten the threat posed by malware, as they lower the technological barriers for launching cyber-attacks while increasing their sophistication and stealth.

2.1.4 Hacking

Hacking refers to exploiting system vulnerabilities to gain unauthorized access or control (Gupta et al., 2023). The emergence of LLMs, like GPT-4, provides a powerful tool that malicious actors could use to automate hacking processes. Research shows that GPT-4 can autonomously exploit real-world one-day vulnerabilities with an 87% success rate when given CVE descriptions. This strikingly contrasts with other models, which have a 0% success rate without CVE guidance (Fang et al., 2024). This difference highlights the transformative effect of LLMs on cyber-exploitation strategies and emphasizes the urgent need to address the new challenges AI poses to cybersecurity.

2.1.5 Generation of false information

LLMs generate text by predicting the next word from statistical correlations in training data, a process that involves no genuine awareness and limits their understanding, reasoning, and creativity (Sison et al., 2024; Vykopal et al., 2024). The output generation process is inherently stochastic, contributing further to the creation of nonsensical content (O’Neill and Connor, 2023). As a result, LLMs can generate large-scale misinformation that appears credible to human readers, often without human involvement (Mozes et al., 2023). This trend has led to an increase in online misinformation, which could further disconnect political discourse from facts (Mozes et al., 2023). Moreover, the similarity between LLM-generated and human-made content may make it difficult for people to distinguish between them, enabling issues such as framing, malicious fraud, and political manipulation (Chen et al., 2023a). Additionally, LLMs can infer personal data from vast amounts of unstructured text, posing potential privacy violations. As LLMs become more accessible and affordable, adversaries may find it easier to make these inferences, increasing the risk of personal privacy breaches (Staab et al., 2024).
The dangers posed by LLMs extend beyond immediate security issues; they also have significant societal impacts. The subtle spread of misinformation and manipulation enabled by LLMs can erode public trust, distort political discourse, and infringe on personal privacy. While these security threats directly threaten our tangible interests, the ethical implications shape societal norms and perceptions in a more pervasive way. We will explore the societal implications of LLMs and their ethical security specifically in Section 4.

2.2 Malicious attacks on LLMs

This section examines the information security threats resulting from malicious behaviors targeting LLMs, both at the data and model levels (Chen et al., 2023a; Huang et al., 2024; Mozes et al., 2023; Zhuo et al., 2023) and at the usage and interaction levels (Derner et al., 2024; Ding et al., 2024; Gupta et al., 2023; Liu et al., 2024b; Mozes et al., 2023; Qammar et al., 2023; Wen et al., 2023; Yang et al., 2024; Zhuo et al., 2023). The specific impacts of these attacks are detailed in Tab.2.
Tab.2 Information security issues resulting from attacks on LLMs
Information security classification | Level | New type of threat | Related literature | Effect
Attacks on LLMs leading to information security issues | Data and model | Data memorization | Mozes et al. (2023); Zhuo et al. (2023) | Memorized training data is highly sensitive and can be extracted
 | | Model inversion attack | Chen et al. (2023a) | Analyzes model outputs to back-infer sensitive training data
 | | Model extraction attack | Chen et al. (2023a) | Extracts sensitive information from the model or reconstructs its structure
 | | Poisoning attacks | Ding et al. (2024) | Injects harmful training examples to influence learning outcomes
 | | Backdoor attacks | Mozes et al. (2023); Huang et al. (2024) | Poisons training data to implant a hidden trigger
 | Usage and interaction | Prompt injection | Mozes et al. (2023); Yang et al. (2024) | Circumvents security instructions and violates security policies
 | | Membership inference attacks | Derner et al. (2024) | Infers whether specific data was used in training, leaking training data
 | | Reinforcement learning-based (RL) attacks | Wen et al. (2023) | Induces implicitly toxic output
 | | Jailbreak attacks | Mozes et al. (2023); Gupta et al. (2023); Qammar et al. (2023); Liu et al. (2024b); Ding et al. (2024) | Manipulates input prompts to circumvent security mechanisms
At the data and model levels, attackers can exploit LLMs’ data memory capabilities. These capabilities enable the model to process and generate text that may unintentionally retain and reproduce sensitive information from its training phase (Zhuo et al., 2023). Generally, larger models and repetitive training data enhance the memory retention of the data. Attackers can leverage this memory ability to extract sensitive information using carefully crafted queries, threatening individual privacy and organizational security (Mozes et al., 2023; Zhuo et al., 2023). Moreover, attackers may employ model inversion and model extraction attacks to gain access to training data, which risks breaches of privacy and intellectual property rights (Chen et al., 2023a). Additionally, poisoning attacks (Mozes et al., 2023) and backdoor attacks further exacerbate the model’s misbehaviors. Under normal conditions, the model performs adequately with clean input data. However, it may generate harmful outputs when triggered by specific conditions, thereby increasing the security risks faced by users (Huang et al., 2024). Compound backdoor attacks activate backdoors through multiple trigger keys, enhancing the likelihood of success while minimizing the impact on the model’s utility (Huang et al., 2024).
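The memorization risk described above can be illustrated with a toy likelihood test. The sketch below stands in for a real LLM with a Laplace-smoothed unigram language model; the corpus, function names, and threshold are all invented for illustration. A real memorization or membership probe would instead score candidate texts with the target model's own log-likelihoods.

```python
import math
from collections import Counter

# Toy stand-in for an LLM: a Laplace-smoothed unigram language model.
def train_unigram(corpus_tokens, vocab, alpha=0.1):
    counts = Counter(corpus_tokens)
    total = len(corpus_tokens) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def avg_nll(model, tokens):
    """Average negative log-likelihood: lower means more 'familiar' to the model."""
    return sum(-math.log(model[t]) for t in tokens) / len(tokens)

def looks_memorized(model, tokens, threshold):
    """Flag candidate text whose likelihood is suspiciously high (low NLL)."""
    return avg_nll(model, tokens) < threshold

# Hypothetical training data containing a repeated sensitive record.
corpus = ("the patient record lists a rare condition "
          "the patient record lists a rare condition").split()
vocab = set(corpus) | {"weather", "is", "sunny", "today"}
model = train_unigram(corpus, vocab)

member = "the patient record lists a rare condition".split()
outsider = "the weather is sunny today".split()
```

The member sentence, drawn from the training corpus, scores a markedly lower NLL than the unseen outsider sentence; this likelihood gap is exactly the signal that memorization extraction and membership inference attacks threshold on.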
At the usage and interaction level, attackers employ techniques such as prompt injection and indirect prompt injection (Mozes et al., 2023). By manipulating an LLM’s system prompts, an attacker can bypass the model’s security restrictions and steer it into generating harmful content that violates security policies. For example, the Substitution-based Contextual Optimization approach (SICO) helps LLM-generated text evade AI-text detection: with SICO, ChatGPT outputs reduced the average AUC of six existing detectors by 0.54 (Yang et al., 2024). Membership inference attacks aim to compromise the confidentiality of model training data, revealing sensitive information and raising privacy and legal concerns (Derner et al., 2024). Additionally, reinforcement learning-based attack methods can use reward mechanisms to induce models to generate implicitly harmful outputs (Wen et al., 2023). Jailbreak attacks are common because they allow attackers to circumvent the model’s built-in security mechanisms by manipulating input prompts, producing malicious content that the model was intended to block (Zhuo et al., 2023). Attackers often use role-playing, reverse psychology, and generic adversarial triggers to bypass security protections, inducing the model to produce content that violates security policies, such as malware creation instructions or harmful information (Gupta et al., 2023; Mozes et al., 2023; Qammar et al., 2023). Traditional jailbreak attacks fall into two categories: manually written and learning-based (Liu et al., 2024b). Manually written attacks, such as the “Do-Anything-Now (DAN)” series, can uncover hidden jailbreak prompts but struggle with scalability and adaptability. Learning-based attacks, like the GCG attack, reformulate jailbreaking as an adversarial example generation process.
However, they often produce meaningless sequences that can be disrupted by defense mechanisms such as confusion-based (perplexity) detection. Recent research has proposed improved methods such as AutoDAN (Liu et al., 2024b) and ReNeLLM (Ding et al., 2024). These methods generate more covert and effective jailbreak prompts, significantly increasing the attack success rate; they are harder for existing defense mechanisms to detect and generalize and transfer across typical language models.
These attack methods can operate independently or be combined to create more complex attack chains. For example, an attacker might acquire sensitive data about key personnel through model manipulation attacks. They could then use malicious text generation attacks to create phishing emails, ultimately gaining unauthorized access to the target company. The combined use of these attacks heightens security threats (Derner et al., 2024).
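The confusion-based detection that disrupts GCG-style gibberish suffixes is essentially a perplexity filter. The sketch below substitutes a character-bigram model for a real language model; the training sentence, prompts, and threshold handling are illustrative assumptions only, not a deployed defense.

```python
import math
from collections import defaultdict

ALPHABET = "abcdefghijklmnopqrstuvwxyz !?,."

def train_char_bigram(text, alpha=0.5):
    """Laplace-smoothed character-bigram model over a fixed alphabet."""
    counts = defaultdict(lambda: defaultdict(float))
    cleaned = "".join(c for c in text.lower() if c in ALPHABET)
    for a, b in zip(cleaned, cleaned[1:]):
        counts[a][b] += 1
    def prob(a, b):
        total = sum(counts[a].values()) + alpha * len(ALPHABET)
        return (counts[a][b] + alpha) / total
    return prob

def perplexity(prob, prompt):
    """Per-character perplexity of a prompt under the bigram model."""
    s = "".join(c for c in prompt.lower() if c in ALPHABET)
    nll = sum(-math.log(prob(a, b)) for a, b in zip(s, s[1:]))
    return math.exp(nll / max(1, len(s) - 1))

def flag_adversarial(prob, prompt, threshold):
    """Reject prompts whose perplexity suggests an unnatural adversarial suffix."""
    return perplexity(prob, prompt) > threshold

prob = train_char_bigram(
    "please write a short and clear summary of the following article. "
    "the article explains how language models are trained and evaluated."
)
benign = "please write a short summary of the article"
suspect = benign + " zx qv jj wq kx !?., vq xj"
```

Appending a GCG-like gibberish suffix raises the prompt's perplexity above that of the natural prompt, so a threshold calibrated on benign traffic rejects it; this is also why such filters are brittle against the more fluent AutoDAN- and ReNeLLM-style prompts discussed above.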

3 Defense and response to LLMs

As security threats grow in complexity, the safety of LLMs faces serious challenges, especially given their widespread use across industries. Ensuring the reliability and security of LLMs in various applications necessitates urgent research into defensive strategies and counteractions. This research can be divided into two key areas: defense strategies prior to LLM deployment and contingency measures after deployment.

3.1 Defense strategies for LLMs

The goal of LLM defense is to enhance their ability to withstand malicious inputs or attacks. Defense methods can be categorized into parameter processing (Hasan et al., 2024; Jiang et al., 2022), input preprocessing (Cao et al., 2024; Chen et al., 2023b; Liu et al., 2023a; Mo et al., 2023; Robey et al., 2023; Suo, 2024; Zhang et al., 2024), and adversarial training (Bhardwaj and Poria, 2023; Deng et al., 2023; Ge et al., 2024; Jain et al., 2023; Li et al., 2024; Ma et al., 2023; Salem et al., 2023; Yao et al., 2024). This classification facilitates a discussion of specific defense methods, along with an analysis of their advantages and disadvantages.

3.1.1 Parameter processing

Parameter processing, also referred to as model processing, aims to boost the model’s resistance to jailbreak or prompt injection attacks. It accomplishes this by adjusting the parameters or structure of the LLMs, thereby avoiding additional training (Hasan et al., 2024; Jiang et al., 2022). For instance, Hasan et al. enhanced model defense by pruning parameters and demonstrated the universality of their approach (Hasan et al., 2024). However, this method can be demanding and challenging to adapt to most commercially available models. Similarly, the ROSE method proposed by Jiang et al. improves the model’s resilience by filtering out non-robust and redundant parameters (Jiang et al., 2022). Despite solving some generality issues, parameter processing methods still face challenges related to persistence. More details are provided in Tab.3.
Tab.3 Parameter processing in LLMs defense approach
Document | Type of malicious attack | Description of defense method | Disadvantages | Advantages
Hasan et al. (2024) | Jailbreaking prompts | Pruning 20% of model parameters | High demands on the model | General and universal
Jiang et al. (2022) | – | ROSE: filtering out worthless and non-robust parameters | Lack of sustainability | –
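As a concrete illustration of the pruning idea in Tab.3, the sketch below zeroes the smallest-magnitude 20% of a flat weight list without retraining. It is a minimal sketch only: real defensive pruning operates on full model tensors and then evaluates jailbreak robustness, and the weight values here are invented.

```python
def magnitude_prune(weights, ratio=0.2):
    """Zero out the smallest-magnitude fraction of weights without retraining."""
    k = int(len(weights) * ratio)
    if k == 0:
        return list(weights)
    cutoff = sorted(abs(w) for w in weights)[k - 1]
    pruned, zeroed = [], 0
    for w in weights:
        # Break ties deterministically so exactly k weights are removed.
        if abs(w) <= cutoff and zeroed < k:
            pruned.append(0.0)
            zeroed += 1
        else:
            pruned.append(w)
    return pruned

weights = [0.05, -0.5, 1.2, -0.01, 0.3, 2.0, -0.9, 0.07, 0.4, -1.1]
pruned = magnitude_prune(weights, ratio=0.2)
```

With ratio 0.2 the two smallest-magnitude entries (0.05 and -0.01) are set to zero while the remaining weights are untouched; the defensive claim is that such low-magnitude parameters can be removed while preserving utility.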

3.1.2 Input preprocessing

Input preprocessing, also known as paraphrasing and retokenization, helps identify early warnings or safety concerns by modifying prompt statements (Cao et al., 2024; Chen et al., 2023b; Liu et al., 2023a; Mo et al., 2023; Robey et al., 2023; Suo, 2024; Zhang et al., 2024). This training-free approach rapidly adapts to new attacks on LLMs and outperforms parameter processing methods in terms of usability and generalizability. However, it relies on manual operations, leading to higher material and time costs. Techniques like the training-free prefix prompt mechanism (Liu et al., 2023a), Intention Analysis Prompting (IAPrompt) (Zhang et al., 2024), and “Signed-Prompt” methods (Suo, 2024) essentially involve paraphrasing and retokenization of inputs. While these methods enhance the LLM’s defenses, they incur increased costs as the sophistication of attacks improves. For alternative input processing, methods such as SmoothLLM (Robey et al., 2023) and moving target defense (MTD) (Chen et al., 2023b) promote sustainability by detecting adversarial inputs through multiple copies or candidate outputs from various models, ensuring harmless output. Similar to input preprocessing, external defenses for LLMs address backdoor and alignment-breaking attacks; notable examples include RA-LLM (Cao et al., 2024) and backdoor defense (Mo et al., 2023), as detailed in Tab.4.
Tab.4 Input preprocessing in LLMs defense approach
Document | Type of malicious attack | Description of defense method | Disadvantages | Advantages
Cao et al. (2024) | Alignment-breaking attacks | RA-LLM | Difficult to perform gradient-based search | No need to fine-tune the original LLM for defense
Liu et al. (2023a) | Induced text attacks | Training-free prefix prompting mechanism and RoBERTa mechanism | Higher-quality attacks raise response costs | Rapid adaptation to emergencies; stronger detection capability
Suo (2024) | Prompt injection | “Signed-Prompt” method | – | More stable
Zhang et al. (2024) | Jailbreaking prompts with stealthy and complex intentions | Intention Analysis Prompting (IAPrompt) | – | Improves security; ensures a degree of multilingual adaptability
Mo et al. (2023) | Alignment-breaking attacks, backdoor attacks | Backdoor defense against black-box LLMs | Limited in some contexts; prone to increased costs and reduced efficiency | Effectively counters backdoor attacks; reduces backdoor vulnerabilities in LLMs
Chen et al. (2023b) | Adversarial attacks | Moving target defense (MTD) | Must reconcile differing results from different models | Sustainability through adding, removing, and optimizing models and copies
Robey et al. (2023) | Jailbreak attacks | SmoothLLM | Limited in the types of jailbreak attacks it can defend against (semantic jailbreak attacks) | –
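The randomized-smoothing idea behind SmoothLLM in Tab.4 can be sketched in a few lines: perturb several copies of the incoming prompt, run each through the model-plus-safety-judge pipeline, and aggregate by majority vote. In this sketch `attack_succeeds` is a stub for that whole pipeline, and the copy count and perturbation rate are illustrative assumptions.

```python
import random

def perturb(prompt, rate, rng):
    """Randomly replace a fraction of characters in the prompt."""
    chars = list(prompt)
    n = max(1, int(len(chars) * rate))
    for i in rng.sample(range(len(chars)), n):
        chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz ")
    return "".join(chars)

def smoothed_is_jailbroken(prompt, attack_succeeds, copies=7, rate=0.15, seed=0):
    """Majority vote over perturbed copies: a brittle adversarial suffix
    rarely survives perturbation, so most copies come back 'safe'."""
    rng = random.Random(seed)
    votes = [attack_succeeds(perturb(prompt, rate, rng)) for _ in range(copies)]
    return sum(votes) > len(votes) / 2

# Stub judge: the "jailbreak" only works if an exact trigger string survives.
trigger_attack = lambda p: "zxqv!!" in p
```

The design choice mirrors the table's trade-off: exact-match triggers are usually destroyed in most perturbed copies, but semantically fluent jailbreaks that survive character noise remain outside this defense's reach.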

3.1.3 Adversarial training

Adversarial training, outlined in Tab.5, is the primary method for enhancing the defense of LLMs against class recognition issues and adversarial attacks. This is achieved through red and blue teaming adversarial training or security fine-tuning (Jain et al., 2023). A notable technique is Red-Teaming, which improves models’ defensive capabilities by simulating realistic attack scenarios (Bhardwaj and Poria, 2023; Deng et al., 2023; Ge et al., 2024; Li et al., 2024; Ma et al., 2023; Salem et al., 2023; Yao et al., 2024). The reactive nature of traditional static defense methods necessitates the reliance on Red-Teaming for targeted protection. Researchers have proposed several approaches to enhance red team training. These include ensuring model security through safety alignment (Bhardwaj and Poria, 2023), implementing iterative fine-tuning strategies for attack-defense frameworks (Deng et al., 2023), and utilizing the red-teaming game (RTG) (Ma et al., 2023). More advanced methods, like the MART approach, optimize Red-Teaming by automating attacks and defenses, thereby reducing training costs (Ge et al., 2024). FuzzLLM enhances training efficiency by identifying and defending against jailbreak vulnerabilities through automated proactive testing (Yao et al., 2024). For jailbreaking prompt attacks, the success rate can be minimized by integrating target prioritization during training and inference. Additionally, to address the persistence of red team attacks and defenses, automated variants of known prompt injection attacks are analyzed. Data sets are generated to enable models to bolster their defenses against emerging prompt injection attacks (Salem et al., 2023). Beyond Red-Teaming, strategies for stealth and continuous fine-tuning against backdoor injections are employed, along with generating data sets fine-tuned for specific attack classes. This ensures that models develop their own defense mechanisms tailored to particular tasks. 
In multilingual settings, frameworks like SELF-DEFENSE (Deng et al., 2024b) and semantic-preserving algorithms (Li et al., 2024) facilitate the creation of multilingual data sets for adversarial training in LLMs, providing strategies to mitigate attacks.
Tab.5 Adversarial training in LLMs defense approach
Document | Type of malicious attack | Description of defense method | Disadvantages | Advantages
Bhardwaj and Poria (2023) | Jailbreaking prompts | RED-INSTRUCT for safe alignment of LLMs | – | Ensures high security in RED-EVAL
Deng et al. (2024a) | Prompt injection | Iterative fine-tuning strategies for attack-defense frameworks | Small attack data set; few types of LLMs evaluated | Better preserves the efficiency and robustness of the defense framework
Ma et al. (2023) | – | Red-teaming game (RTG) | Geometric features of RTGs not elucidated | Comprehensive detection and optimization of security vulnerabilities in LLMs
Ge et al. (2024) | Adversarial prompt injection | Multi-round Automatic Red-Teaming (MART) | Iterative updating of attacks during adversarial training is not automatic | Reduced training costs
Yao et al. (2024) | Jailbreaking prompts | FuzzLLM | – | Expanded detection of jailbreak vulnerabilities
Salem et al. (2023) | Prompt injection | Automated variant analysis of known prompt injection attacks | No specific instructions on how to apply Maatphor | Ensures the durability of red-team adversarial training
Deng et al. (2024b) | Multilingual environments | SELF-DEFENSE framework | – | Enhanced multilingual security for LLMs
Li et al. (2024) | – | Semantic-preserving algorithm | Little research on resource-limited languages | Better addresses the problem of missing data sets
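The red-team loop described above can be sketched as one round of automated attack-and-collect: prompts that elicit harmful output become fine-tuning pairs with a refusal target. The `toy_model`, `toy_judge`, and refusal string below are stand-ins invented for illustration; real pipelines such as the MART-style frameworks in Tab.5 use an attacker LLM and a learned safety classifier instead.

```python
def red_team_round(model, attack_prompts, is_harmful):
    """Collect prompts that elicit harmful output, paired with refusal targets
    to be used as safety fine-tuning data in the next training iteration."""
    fine_tune_set = []
    for prompt in attack_prompts:
        response = model(prompt)
        if is_harmful(response):
            fine_tune_set.append(
                {"prompt": prompt, "target": "I can't help with that."}
            )
    return fine_tune_set

# Stubs standing in for a deployed model and a safety classifier.
def toy_model(prompt):
    return "Sure, here is how to " + prompt  # an unaligned model that always complies

def toy_judge(response):
    banned = ("pick the lock", "disable the alarm")
    return any(phrase in response for phrase in banned)

failures = red_team_round(toy_model, ["pick the lock", "bake fresh bread"], toy_judge)
```

Iterating this round (attack, collect failures, fine-tune, repeat) is the attack-defense loop the iterative fine-tuning strategies in Tab.5 automate.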

3.2 Detection of LLMs generated content

Content generated by LLM systems is often hard to identify because of their powerful text generation capabilities. Users frequently employ techniques to bypass detection measures, such as paraphrasing attacks (Lucas et al., 2023; Ren et al., 2024) and cross-model obfuscation (Liu et al., 2024c). Paraphrasing attacks evade text-based detection systems by rewriting or rearranging the text, making the original content harder to identify (Lucas et al., 2023; Ren et al., 2024). Cross-model obfuscation uses generation formats different from mainstream detection models, or content bearing less resemblance to the original text, to avoid detection (Liu et al., 2024c). Identifying model-generated content is crucial for preventing abuse and manipulation; interestingly, LLMs can also be utilized to counteract such misuse (Lucas et al., 2023). Researchers have proposed a semantically grounded watermarking technique based on LLMs (Ren et al., 2024), which covertly embeds a flag within the generated text. The watermark remains closely related to the original text at a semantic level, allowing for detection even after paraphrasing. To withstand rewrites and attacks while improving detection accuracy, watermarks must possess three key properties: effectiveness, covertness, and robustness. Watermarking techniques include specific encoding strategies during text generation, embedding unobtrusive keywords or phrases, and utilizing the internal representations of LLMs. A watermark significantly eases the detection process and aids in accurately identifying text content (Ren et al., 2024).
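These properties can be seen in miniature in a green-list watermark of the kind popularized in recent watermarking work (a simpler scheme than the semantically grounded one of Ren et al.): the previous token seeds a pseudo-random split of the vocabulary, generation favors the "green" half, and detection computes a z-score on the green fraction. The ten-word vocabulary and greedy green choice below are toy assumptions.

```python
import hashlib
import math
import random

VOCAB = sorted("the a model text water mark secure output token detect".split())

def green_list(prev_token, fraction=0.5):
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * fraction)])

def generate_watermarked(start, length):
    """Toy generator that always emits a green token; a real LLM would only
    bias its logits toward the green list to preserve text quality."""
    tokens = [start]
    for _ in range(length):
        tokens.append(min(green_list(tokens[-1])))
    return tokens

def detection_z_score(tokens, fraction=0.5):
    """z-score of the observed green-token fraction against chance."""
    n = len(tokens) - 1
    hits = sum(tokens[i] in green_list(tokens[i - 1]) for i in range(1, len(tokens)))
    return (hits - fraction * n) / math.sqrt(n * fraction * (1 - fraction))

marked = generate_watermarked("the", 20)
```

Every transition in `marked` is green by construction, so the detector's z-score far exceeds the threshold of roughly 2 one would use in practice, while unwatermarked text hovers near zero on average; semantic grounding, as in Ren et al., additionally keeps the partition stable under paraphrasing.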
Liu and colleagues proposed CheckGPT, a detection tool designed to identify and verify academic texts that may be generated by ChatGPT (Liu et al., 2024c). CheckGPT offers task-specific, discipline-specific, and unified detectors and achieves an average classification accuracy of 98%–99%. It is also highly portable, maintaining 90%–98% accuracy across different domains without tuning. Even so, detecting model-generated content remains challenging. Liyanage and Buscaldi (2023) used a data set that simulates human behavior when employing large models and found that existing detection models struggle to identify content that has been fine-tuned or generated as a continuation of previous text.
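Detectors like CheckGPT are trained neural classifiers, but the kinds of surface signals such systems can pick up on may be hinted at with crude, hand-rolled stylometric features. The sketch below is illustrative only and does not reproduce CheckGPT's actual architecture:

```python
import re
import statistics

def stylometric_features(text):
    """Simple surface features often correlated with machine text:
    machine-generated prose tends toward uniform sentence lengths
    (low 'burstiness') and a lower type-token ratio than human prose."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = text.lower().split()
    return {
        "mean_sentence_len": statistics.mean(lengths) if lengths else 0.0,
        "sentence_len_stdev": statistics.pstdev(lengths) if lengths else 0.0,
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }
```

A real detector feeds such signals, or full contextual embeddings, into a trained classifier; the paraphrasing and fine-tuning attacks described above work precisely by shifting these distributions back toward human-looking values.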

4 Socio-ethical implications of LLMs

In today’s technological landscape, LLMs face both information security threats and socio-ethical security challenges. Their application is influenced by various objective factors, including the quality of training data sets, model architecture, algorithmic constraints, and product design (Ferrara, 2023). Moreover, the misuse and manipulation of LLMs by users intensify ethical concerns and societal security risks. These issues include hallucination of outputs (Cascella et al., 2023; Ji et al., 2023), potential misinformation and bias (Meyer et al., 2023; Zhou et al., 2023), risks of data privacy breaches (Su, 2024), and impacts on human autonomy (Ellis et al., 2024).

4.1 Hallucination problem in the output of LLMs

Despite their impressive capabilities, LLMs are prone to hallucination, generating content that is entirely fictitious or inconsistent with reality (Ji et al., 2023). They can quickly produce seemingly credible material that is only partially true or completely false (Cascella et al., 2023). In public health, this rapid text generation can exacerbate misinformation, fueling an infodemic (De Angelis et al., 2023). Inaccuracies in LLM-generated medical information, rooted in biased training data and outdated information, could even prove fatal (Liu et al., 2023c). In scientific research, LLMs can generate authentic-looking yet entirely fabricated papers (Májovský et al., 2023) and fictitious references (Agathokleous et al., 2023), making it difficult for non-specialists to distinguish fact from fiction. Research has shown that delegating tasks to LLM systems raises ethical concerns and decreases trust in researchers’ future work compared with human oversight (Niszczota and Conway, 2023). Consequently, relying on LLMs for content generation can significantly affect the integrity of science. The issue extends to digital government as well, where inaccurate information creates uncertainty about investments and affects organizational structures and business processes (Fan, 2023). To address these challenges, some researchers suggest building features into LLMs that help identify generated content and establishing expert groups to oversee their usage (De Angelis et al., 2023).

4.2 Bias problem in LLMs

The term “bias in LLMs” refers to systematic misrepresentation, attribution errors, or factual distortions that prioritize certain groups or perspectives. This can perpetuate stereotypes and lead to erroneous assumptions based on learned patterns (Ferrara, 2023). In scientific research, biases in LLMs amplify coding bias and biases from training data, which may affect research and education (Meyer et al., 2023). In the medical field, the potential for racial and gender discrimination in training data can result in bias in diagnostic reasoning and clinical planning. This bias may also extend into medical education (Corsello and Santangelo, 2023; Zack et al., 2024).
The inherent bias in LLMs risks lasting impacts at both the individual and the societal level. The subtle ideologization of LLMs can influence users’ perceptions and attitudes through language choices and emotional tones, often without overtly promoting a specific ideology (Zhou et al., 2023). If political bias is present in training data or product design, it could affect voter sentiment, leading to heightened competition among government systems and ideologies. This is particularly concerning in authoritarian countries, where AI development may be exploited for manipulation (Jungherr, 2023; Motoki et al., 2024; Rozado, 2023; Saetra, 2023; Sang and Yu, 2023; Zhang, 2023). Additionally, group bias can cause LLMs to provide contradictory responses to different groups, potentially resulting in social justice issues (Ferrara, 2023; Yu and Fan, 2023).
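One common way to surface such group-dependent behavior is a counterfactual probe: fill the same template with different group terms and compare the model's scores. The sketch below is hypothetical; `score_fn` stands in for a call that would query an LLM and rate its response (e.g., by sentiment or helpfulness).

```python
def bias_gap(score_fn, template, groups):
    """Counterfactual bias probe: score the same template filled with
    each group term and report the spread of scores. A large gap means
    the model treats otherwise-identical inputs differently by group."""
    scores = {g: score_fn(template.format(group=g)) for g in groups}
    return max(scores.values()) - min(scores.values()), scores

if __name__ == "__main__":
    # Stand-in scorer; a real probe would query an LLM here.
    stub = lambda prompt: 0.8  # group-blind: every prompt scores 0.8
    gap, _ = bias_gap(stub, "{group} applicants are qualified.", ["A", "B"])
    print(gap)  # a group-blind scorer yields a gap of 0
```

In practice such probes are run over many templates and group pairs, and the aggregate gap is reported, which is one way the contradictory group-specific responses noted above can be quantified.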

4.3 Data privacy problem in LLMs

Training LLMs requires large volumes of data, often sourced from the Internet, which can include unauthorized, copyright-protected content and thus lead to data infringement and privacy violations (Sang and Yu, 2023). Informed consent is a crucial ethical principle in medicine, especially in pediatrics (Corsello and Santangelo, 2023), yet LLM systems may expose sensitive information during training and inference. The way LLMs are trained also complicates judicial remedies for data infringement. The current intellectual property framework neither protects generative content under copyright law nor recognizes the originality of general-purpose AIs. This lack of protection raises significant risks of infringement involving generated works and could hinder the development of these technologies (Su, 2024).

4.4 Impact of LLMs on human autonomy

The high speed and quality of text generation capabilities of LLMs can lead to misuse. In education, teachers often employ LLMs as teaching aids, while students increasingly depend on them for tasks like writing and programming. LLMs have impressive generative capabilities, enabling them to produce code comparable to that of upper-intermediate students (Ellis et al., 2024). This may foster excessive reliance on LLMs, potentially impairing students’ critical thinking and problem-solving skills and resulting in rigid thinking patterns and a lack of creativity (Agathokleous et al., 2023; Rahman and Watanobe, 2023). Moreover, instances of unintentional plagiarism can arise, posing significant challenges to the fairness and reliability of educational systems (Meyer et al., 2023; Rasul et al., 2023; Su, 2024).
Additionally, the use of LLMs can lead to the “information cocoon effect.” Overreliance on these tools may result in individuals receiving increasingly one-sided information, negatively impacting the diversity of viewpoints and stifling innovative thinking (Sang and Yu, 2023; Xiao and Lai, 2024). The rapid advancement of LLM technology also impacts the labor market, creating a crisis of identity and self-perception (Xiao and Lai, 2024). Although the technology generates new job opportunities (Zhang, 2023), those in low-skilled positions may face unemployment, exacerbating psychological distress and widening the economic gap between the wealthy and the impoverished (Song et al., 2023; Yu and Fan, 2023; Zhang, 2023). Furthermore, younger individuals are increasingly drawn to digital environments, while older populations may struggle with digitalization. This shift has contributed to declining fertility rates and an aging society (Zhang, 2023).
The challenge posed by LLMs to human autonomy is especially clear at the legal level. To legally protect intellectual property rights that are prone to infringement in the age of LLMs (Song et al., 2023) and to address complex legal liabilities (Xiao and Lai, 2024), some researchers argue that it is crucial to investigate the actual parties liable for actions taken by generative AI. This involves tracing the allocation of liabilities (Yuan, 2023) and assessing whether developers anticipate potential criminal uses of LLMs and implement necessary preventive measures (Chu and Wei, 2023). Other scholars suggest that new entities, which may lack characteristics of natural persons but could possess some degree of independent consciousness and will, might also be held criminally liable (Liu, 2023d). It is essential to consider situations where generative AI is involved as an active participant in criminal activities.
Fig.3 illustrates that Chinese scholars emphasize the exploration of social and ethical implications from a macro perspective, addressing areas such as politics and law. In contrast, Western scholars focus on specific fields like education and medicine from a localized viewpoint. In the political and social domains of mutual interest, Chinese scholars express greater concern about issues related to digital governance and socioeconomic disparities, while Western scholars highlight potential biases in LLMs regarding governmental elections and ideological influences.
Fig.3 A comparative analysis of Chinese and Western research on the social impact of LLMs.


5 Discussion

While LLMs have enabled the digitization of many aspects of life, they have also raised significant information security and socio-ethical concerns, including the potential misuse of LLMs for phishing, malware generation, and hacking, as well as bias, privacy leakage, and threats to human autonomy. A review of defenses against these issues shows that current methods, such as parameter processing, input preprocessing, and adversarial training, have improved LLM security to some extent. However, these approaches share common shortcomings: they often lack robust defenses against certain types of attacks (Robey et al., 2023), are difficult to sustain (Jiang et al., 2022), and adapt poorly to multilingual contexts (Deng et al., 2024b; Li et al., 2024). Additionally, some references in this paper are arXiv preprints that have not undergone peer review; they are nonetheless included because they discuss cutting-edge technologies central to this review's focus.
1) Intelligence for automated adversarial training:
The rapid evolution of AI technology has led to new types of attacks, requiring an evolution in defense strategies. Most existing adversarial training methods depend on manually designing attack samples or using extensive computational resources, making them costly and slow. This approach often cannot keep pace with the speed of evolving attacks (Ge et al., 2024). Therefore, developing intelligent and automated adversarial training methods is crucial.
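The core loop of adversarial training (attack the current model, then train on the attack) can be shown on a toy problem. The sketch below applies an FGSM-style worst-case input perturbation to one-dimensional logistic regression; it illustrates the training pattern only, not adversarial training of an actual LLM.

```python
import math

def train_adversarial(data, epochs=200, lr=0.1, eps=0.3):
    """Toy adversarial training: 1-D logistic regression where each
    step first perturbs the input in the direction that most increases
    the loss (an FGSM-style attack), then updates the weights on the
    perturbed sample, hardening the model against that attack."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            # Gradient of the loss w.r.t. the input gives the attack direction.
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            grad_x = (p - y) * w
            x_adv = x + eps * (1.0 if grad_x > 0 else -1.0)  # worst-case shift
            # Ordinary gradient step, but on the adversarial example.
            p_adv = 1.0 / (1.0 + math.exp(-(w * x_adv + b)))
            w -= lr * (p_adv - y) * x_adv
            b -= lr * (p_adv - y)
    return w, b
```

Automating this loop for LLMs means replacing the hand-coded perturbation with a learned attacker that generates adversarial prompts, which is exactly where the cost and speed problems noted above arise.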
2) Multimodal and cross-language defense mechanisms:
Given the variety of techniques and languages the attackers may use, researchers must explore multimodal and cross-language defense mechanisms (Deng et al., 2024b; Li et al., 2024). This effort goes beyond technical challenges, addressing issues like multimodal feature fusion and cross-language learning frameworks. Understanding how cultural differences affect attack patterns is also essential. Furthermore, international data sharing and cooperation are vital. Multilingual data resources can support the development of cross-lingual defense techniques, enhancing LLM security globally.
3) Establishment of ethical and legal frameworks:
As the role of LLMs in society expands, establishing appropriate ethical and legal frameworks is crucial for their healthy development. This includes regulating data privacy (Sang and Yu, 2023), protecting intellectual property (Song et al., 2023), and defining the legal responsibilities of AI actors (Chu and Wei, 2023; Liu, 2023d; Yuan, 2023). Through international cooperation, legal adaptations, and the development of ethical norms, we can create a solid foundation for the responsible application of LLMs.
4) Alignment with human values:
The progress in affective computing and the emergence of self-awareness in LLMs raise important questions about aligning machine values with human values. The integration of LLMs into robots, along with the creation of human-computer interaction systems that quantify emotions through computational personality (Liu et al., 2023b; Liu, 2024a), has resulted in robots that are increasingly woven into human society. This integration prompts concerns about whether machine consciousness truly reflects human values.
In summary, as concerns regarding information security and social ethics emerge from LLMs, the traditional defensive strategies are being replaced by more sophisticated, automated defense mechanisms. The introduction of a cross-language learning framework can enhance the adaptability of this technology on a global level. Establishing ethical and legal frameworks is becoming an important research area to support the responsible development of LLM technology. As LLMs become more widespread, there will be an emphasis on research and discussions focused on information security and ethical considerations.

6 Conclusions

As LLMs are increasingly used across various fields, their associated information security and ethical challenges are becoming more prominent. This paper analyzes the security threats, defense techniques, and socio-ethical issues related to LLMs, drawing on the latest academic advancements in information security. A systematic literature review uncovers new security threats posed by LLMs. These include phishing attacks, malware threats, hacking incidents, social engineering attacks, disinformation, and other misuses that exploit LLM capabilities. Additionally, it highlights risks such as model inversion attacks, poisoning attacks, backdoor attacks, prompt injections, and jailbreak attacks. The paper also presents several strategies for enhancing LLM security. It addresses the technology's impact on social ethics and discusses the future of automated adversarial training techniques, the study of attack adaptations in multilingual environments, and the need for ethical and legal frameworks. These insights provide a strong theoretical foundation and a comprehensive perspective for the future security applications and development of LLM technology.

RIGHTS & PERMISSIONS

2021 Higher Education Press

Part of a collection:

Information Management and Information Systems
