Bridging AI and explainability in civil engineering: the Yin-Yang of predictive power and interpretability

Monjurul Hasan , Ming Lu

AI in Civil Engineering ›› 2025, Vol. 4 ›› Issue (1). DOI: 10.1007/s43503-025-00066-6

Review article

Abstract

Civil engineering relies on data from experiments or simulations to calibrate models that approximate system behaviors. This paper examines machine learning (ML) algorithms for AI-driven decision support in civil engineering, specifically construction engineering and management, where complex input–output relationships demand both predictive accuracy and interpretability. Explainable AI (XAI) is critical for safety- and compliance-sensitive applications, ensuring transparency in AI decisions. The literature review identifies key XAI evaluation attributes (model type, explainability, perspective, and interpretability) and assesses the Enhanced Model Tree (EMT), a novel method that shows strong potential for civil engineering applications compared with commonly applied ML algorithms. The study highlights the need to balance AI's predictive power with XAI's transparency, akin to the Yin–Yang philosophy: AI advances efficiency and optimization, while XAI provides the logical reasoning behind conclusions. Drawing on insights from the literature, the study proposes a tailored XAI assessment framework that addresses civil engineering's unique needs: problem context, data constraints, and model explainability. By formalizing this synergy, the research fosters trust in AI systems, enabling safer and more socially responsible outcomes. The findings underscore XAI's role in bridging the gap between complex AI models and end-user accountability, ensuring that AI's full potential is realized in the field.

Keywords

Explainable AI (XAI) / AI transparency / Causal reasoning / Sensitivity analysis / Feature relevance / AI in construction engineering / Data-driven engineering

Cite this article

Monjurul Hasan, Ming Lu. Bridging AI and explainability in civil engineering: the Yin-Yang of predictive power and interpretability. AI in Civil Engineering, 2025, 4(1). DOI: 10.1007/s43503-025-00066-6



RIGHTS & PERMISSIONS

The Author(s)
