International Journal on Science and Technology

E-ISSN: 2229-7677 | Impact Factor: 9.88


Explainable AI (XAI) for Enhanced Cyber Threat Intelligence: Building Interpretable Intrusion Detection Systems

Author(s): Naresh Kalimuthu
Country: United States
Abstract: The growth of complex cyber threats calls for integrating advanced Artificial Intelligence (AI) and Machine Learning (ML) technologies into Cyber Threat Intelligence (CTI) frameworks, especially for Intrusion Detection Systems (IDS). While these models, particularly deep learning architectures, achieve high accuracy in identifying complex and previously unseen attacks, their "black-box" nature creates trust, adoption, and operational challenges. This lack of transparency often results in opaque decision-making, diminishes trust among security personnel, and increases alert fatigue, weakening the very security these systems aim to provide. An emerging body of work in Explainable AI (XAI) addresses this issue by offering explanations for AI-driven decisions and actions. This paper examines the integration of XAI into IDS to enhance trust, collaboration, and resilience in cyber defense systems. It outlines three primary research challenges that have so far impeded the development of Explainable IDS (X-IDS): balancing model accuracy with explanation fidelity, meeting the technical requirements for real-time XAI processing, and establishing standards for evaluating explanation quality.
The paper critically reviews a range of mitigation strategies, including post-hoc explainability frameworks such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), attention-based explainable AI models, and emerging federated and lightweight XAI paradigms. It synthesizes findings from the CICIDS2017, UNSW-NB15, and other benchmark datasets that illustrate XAI's potential to improve forensic analysis and reduce false-positive rates without compromising detection accuracy. The paper argues that XAI is essential for the future of CTI, enabling IDS to evolve from opaque alert generators into trustworthy partners for human analysts. It calls for future research to develop federated, lightweight, real-time XAI; unified benchmarking frameworks for XAI evaluation; and defenses against adversarial sabotage of explanations.
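As a concrete illustration of the post-hoc approach the abstract reviews, the sketch below applies SHAP to a tree-based IDS classifier and ranks the features behind a single alert. This is a minimal sketch, not the paper's implementation: the synthetic flow features, the toy labeling rule, and the RandomForest model are all assumptions standing in for a real pipeline trained on CICIDS2017 or UNSW-NB15.

```python
# Minimal sketch: post-hoc explanation of a tree-based IDS classifier with SHAP.
# The synthetic data below is a stand-in for real benchmark flow features
# (e.g., duration, byte counts, packet counts from CICIDS2017/UNSW-NB15).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical flow features and labels (0 = benign, 1 = attack);
# the decision rule here is a toy assumption, not a real attack signature.
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))

# TreeExplainer computes exact SHAP values for tree ensembles; each value is
# one feature's contribution to pushing a single flow toward "attack".
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)

# Older shap versions return a list of per-class arrays; newer versions
# return one 3-D array of shape (samples, features, classes).
if isinstance(shap_values, list):
    attack_contrib = shap_values[1][0]   # class 1, first test flow
else:
    attack_contrib = shap_values[0, :, 1]  # first test flow, class 1

# Rank the features driving this alert, so an analyst sees *why* the model
# flagged the flow rather than just an opaque anomaly score.
top = np.argsort(-np.abs(attack_contrib))[:3]
print("Top contributing features for alert 0:", top, attack_contrib[top])
```

In an X-IDS setting, attaching such per-alert attributions to each detection is one way to address the alert-fatigue problem the abstract describes, since analysts can triage on the contributing features instead of re-investigating every opaque alert from scratch.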
Keywords: Explainable AI (XAI), Intrusion Detection System (IDS), Cyber Threat Intelligence (CTI), Machine Learning, Interpretability, SHAP, LIME, Network Security
Field: Engineering
Published In: Volume 16, Issue 4, October-December 2025
Published On: 2025-10-22
DOI: https://doi.org/10.71097/IJSAT.v16.i4.9535
Short DOI: https://doi.org/hbb8gf