Development and Evaluation of Fine-Tuned Large Language Models for Real-Time Cybersecurity Incident Triage and Ethical Decision-Making
Keywords: AI Ethics, Cybersecurity, Decision Support Systems, Ethical AI, Fine-Tuning, Incident Triage, Large Language Models, Real-Time Response, Responsible AI Deployment, Threat Intelligence
Abstract
The increasing sophistication and frequency of cyberattacks demand incident-response decisions that are both rapid and ethically sound. This work presents the development and evaluation of fine-tuned large language models (LLMs) for real-time cybersecurity incident triage and ethical reasoning. We fine-tuned pre-trained transformer architectures on carefully curated domain-specific datasets comprising threat intelligence, incident reports, and ethical guidelines. Evaluation focused on response accuracy, contextual relevance, ethical alignment, and latency under high-pressure conditions. The findings show that fine-tuned LLMs substantially improve both the speed and quality of incident assessment while upholding ethical principles such as non-maleficence, proportionate response, and data privacy. To balance automation with accountability, we also propose a hybrid decision-support architecture that pairs these LLMs with human analysts. The results demonstrate the transformative potential of AI in cybersecurity operations while underscoring the importance of transparency, bias mitigation, and continuous monitoring in ethically sensitive applications.

