Full text
Peer-reviewed, Open access
  • An Explainable Ensemble Dee...
    Shtayat, Mousa'B Mohammad; Hasan, Mohammad Kamrul; Sulaiman, Rossilawati; Islam, Shayla; Khan, Atta Ur Rehman

    IEEE Access, 2023, Volume: 11
    Journal Article

    Ensuring the security of critical Industrial Internet of Things (IIoT) systems is of utmost importance, with a primary focus on identifying cyber-attacks using Intrusion Detection Systems (IDS). Deep learning (DL) techniques are frequently utilized in the anomaly detection components of IDSs. However, these models often generate high false-positive rates, and their decision-making rationale remains opaque, even to experts. Gaining insight into why an IDS decided to block a specific packet can help cybersecurity professionals assess the system's effectiveness and build more cyber-resilient solutions. In this paper, we propose an explainable ensemble DL-based IDS to improve the transparency and robustness of DL-based IDSs in IIoT networks. The framework incorporates Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) to elucidate the decisions made by DL-based IDSs, providing valuable insights to the experts responsible for maintaining IIoT network security and developing more cyber-resilient systems. The ToN_IoT dataset was used to evaluate the efficacy of the proposed framework. An extreme learning machine (ELM) model was implemented as a baseline intrusion detection system and compared with the other models. Experiments demonstrate the effectiveness of ensemble learning in improving detection results.
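
    The abstract describes layering SHAP and LIME on top of a DL-based intrusion detector to explain individual detection decisions. The sketch below is only a rough illustration of that idea under stated assumptions, not the paper's code: the classifier, synthetic data, and feature names are hypothetical placeholders standing in for the ensemble model and preprocessed ToN_IoT features, and the standard shap and lime Python packages are assumed to be available.

```python
# Minimal sketch: explaining a tabular intrusion classifier with SHAP and LIME.
# All data, feature names, and the stand-in model are hypothetical placeholders.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Placeholder data standing in for preprocessed ToN_IoT flow features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))
y_train = rng.integers(0, 2, size=500)           # 0 = normal, 1 = attack
feature_names = [f"feat_{i}" for i in range(6)]  # hypothetical feature names

# Stand-in detector: the paper uses an ensemble of DL models, but any classifier
# exposing predict_proba works for the explanation step shown here.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# SHAP: per-sample feature attributions for the detector's predicted probabilities.
shap_explainer = shap.Explainer(model.predict_proba, X_train)
shap_values = shap_explainer(X_train[:50])       # attributions for 50 samples

# LIME: local surrogate explanation for a single flagged packet/flow.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, class_names=["normal", "attack"]
)
explanation = lime_explainer.explain_instance(X_train[0], model.predict_proba)
print(explanation.as_list())                     # top features driving this decision
```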