Securing educational LLMs: A generalised taxonomy of attacks on LLMs and DREAD risk assessment
Farzana Zahid, Anjalika Sewwandi, Lee Brandon, Vimal Kumar, Roopak Sinha
High-Confidence Computing, 2026, Vol. 6, Issue 1: 100371
Driven by perceived efficiency and significant productivity gains, organisations across sectors, including education, are adopting Large Language Models (LLMs) into their workflows. Educator-facing, learner-facing, and institution-facing LLMs, collectively Educational Large Language Models (eLLMs), complement and enhance the effectiveness of teaching, learning, and academic operations. However, their integration into educational settings raises significant cybersecurity concerns, and a comprehensive landscape of contemporary attacks on LLMs and their impact on the educational environment is missing. This study presents a generalised taxonomy of fifty attacks on LLMs, categorised as targeting either the models themselves or their supporting infrastructure. The severity of these attacks in the educational sector is evaluated using the DREAD risk assessment framework. Our risk assessment indicates that token smuggling, adversarial prompts, direct injection, and multi-step jailbreak are critical attacks on eLLMs. The proposed taxonomy, its application to the educational environment, and our risk assessment will help academic and industrial practitioners build resilient solutions that protect learners and institutions.
Cyber attacks / Large language models (LLMs) / Risk assessment / DREAD / Education
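The DREAD framework mentioned in the abstract rates each threat on five criteria (Damage, Reproducibility, Exploitability, Affected users, Discoverability) and averages them into a single risk score. A minimal sketch of that scoring, assuming an illustrative 1–10 scale per criterion and hypothetical scores and severity thresholds (the paper's actual scale, scores, and cut-offs may differ):

```python
from dataclasses import dataclass


@dataclass
class DreadScore:
    """One attack rated 1-10 on each DREAD criterion (scale is an assumption)."""
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def risk(self) -> float:
        # DREAD risk is the mean of the five criterion scores.
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5


def rate(score: float) -> str:
    """Map a mean score onto a coarse severity band (thresholds are illustrative)."""
    if score >= 8:
        return "Critical"
    if score >= 6:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"


# Hypothetical criterion scores for two attacks from the taxonomy;
# they are not the values assigned in the study.
attacks = {
    "direct injection": DreadScore(9, 9, 8, 8, 9),
    "model inversion": DreadScore(6, 4, 4, 5, 3),
}
for name, s in attacks.items():
    print(f"{name}: {s.risk():.1f} ({rate(s.risk())})")
```

Averaging keeps every criterion equally weighted, which is the standard DREAD convention; a deployment-specific assessment could weight Damage or Affected users more heavily for learner-facing systems.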