  • An engineer's guide to eXplainable Artificial Intelligence and Interpretable Machine Learning: Navigating causality, forced goodness, and the false perception of inference
    Naser, M.Z.

    Automation in Construction, September 2021, Volume 129
    Journal Article

    While artificial intelligence (AI), and by extension machine learning (ML), continues to be adopted in parallel engineering disciplines, the integration of AI/ML into the structural engineering domain remains minimal. This resistance to AI and ML stems primarily from two factors: 1) coding/programming is not a frequent element of structural engineering curricula, and 2) these methods are presented as black boxes, the opposite of the approaches traditionally favored by structural engineering education and industry (i.e., testing, empirical analysis, numerical simulation, etc.). Naturally, structural engineers are reluctant to leverage AI/ML in practice, since such technology is viewed as opaque. In the rare instances where engineers do adopt AI/ML, the emphasis clearly falls on chasing goodness-of-fit metrics to imply "viable" inference. However, just as correlation does not imply causation, forced goodness is prone to produce a false sense of inference. To overcome this challenge, this paper advocates a modern form of AI, one that is humanly explainable: eXplainable Artificial Intelligence (XAI) and interpretable machine learning (IML). This work dives into the inner workings of a typical analysis to demystify how AI/ML model predictions can be evaluated and interpreted through a collection of model-agnostic methods (e.g., feature importance, partial dependence plots, feature interactions, SHAP (SHapley Additive exPlanations), and surrogates), via a thorough examination of a case study carried out on a comprehensive database compiled on reinforced concrete (RC) beams strengthened with fiber-reinforced polymer (FRP) composite laminates. In this case study, three algorithms, namely Extreme Gradient Boosted Trees (ExGBT), Light Gradient Boosted Trees (LGBT), and Keras Deep Neural Networks (KDNN), are applied to predict the maximum moment capacity of FRP-strengthened beams and the propensity of the FRP system to fail under various mechanisms (an illustrative sketch of such an analysis pipeline follows the highlights below). Finally, a philosophical engineering perspective on future research directions in this domain is presented and articulated.

    Highlights:
    • Explainable AI and interpretable ML are discussed from a structural engineering perspective.
    • Three algorithms are examined.
    • Five explainability methods are explored in depth.
    • Potential implications of XAI and IML in engineering contexts are discussed.
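    The record above only summarizes the paper's methodology. As a rough illustration, the sketch below assumes a synthetic stand-in dataset (hypothetical feature ranges and an invented response, not the paper's compiled FRP-beam database) and shows how a gradient-boosted tree model can be probed with three of the model-agnostic methods named in the abstract: SHAP values, partial dependence, and a global surrogate. Model choice and hyperparameters here are placeholders, not the study's actual configuration.

```python
# Minimal sketch (hypothetical data): train a gradient-boosted tree regressor
# and apply three model-agnostic explainability methods from the abstract.
import numpy as np
import shap
import xgboost as xgb
from sklearn.inspection import partial_dependence
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for beam design features; ranges are hypothetical.
n = 500
X = np.column_stack([
    rng.uniform(150, 400, n),   # beam width (mm), hypothetical
    rng.uniform(300, 700, n),   # effective depth (mm), hypothetical
    rng.uniform(0.1, 1.5, n),   # FRP reinforcement ratio (%), hypothetical
])
# Invented noisy nonlinear proxy for moment capacity (kN*m), for demo only.
y = 1e-4 * X[:, 0] * X[:, 1] ** 1.5 * (1 + 0.3 * X[:, 2]) + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

# SHAP: additive per-prediction attributions; their mean magnitude gives a
# global feature-importance ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))

# Partial dependence: average model response as one feature varies.
pd_result = partial_dependence(model, X_test, features=[2])
print("partial dependence of the FRP ratio:", pd_result["average"][0][:5])

# Global surrogate: fit an interpretable tree to the black box's predictions
# and report how faithfully it mimics them (fidelity R^2).
surrogate = DecisionTreeRegressor(max_depth=3).fit(X_test, model.predict(X_test))
print("surrogate fidelity R^2:", surrogate.score(X_test, model.predict(X_test)))
```

    A shallow surrogate tree is used here because its splits can be read directly as if-then rules, which is the usual rationale for surrogate models in an interpretability workflow.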