Bridging AI and explainability in civil engineering: the Yin-Yang of predictive power and interpretability
Monjurul Hasan, Ming Lu
Civil engineering relies on data from experiments or simulations to calibrate models that approximate system behaviors. This paper examines machine learning (ML) algorithms for AI-driven decision support in civil engineering, specifically construction engineering and management, where complex input–output relationships demand both predictive accuracy and interpretability. Explainable AI (XAI) is critical for safety- and compliance-sensitive applications, ensuring transparency in AI decisions. The literature review identifies key XAI evaluation attributes (model type, explainability, perspective, and interpretability) and assesses the Enhanced Model Tree (EMT), a novel method demonstrating strong potential for civil engineering applications compared with commonly applied ML algorithms. The study highlights the need to balance AI's predictive power with XAI's transparency, akin to the Yin–Yang philosophy: AI advances efficiency and optimization, while XAI provides the logical reasoning behind conclusions. Drawing on insights from the literature, the study proposes a tailored XAI assessment framework addressing civil engineering's unique needs: problem context, data constraints, and model explainability. By formalizing this synergy, the research fosters trust in AI systems, enabling safer and more socially responsible outcomes. The findings underscore XAI's role in bridging the gap between complex AI models and end-user accountability, ensuring AI's full potential is realized in the field.
Explainable AI (XAI) / AI transparency / Causal reasoning / Sensitivity analysis / Feature relevance / AI in construction engineering / Data-driven engineering
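To make the predictive-power versus interpretability trade-off concrete, the sketch below trains a black-box gradient-boosted ensemble and a shallow, human-readable regression tree on the same synthetic data, then ranks inputs by permutation-based feature relevance. It is a minimal illustration only, not the paper's Enhanced Model Tree (EMT): the dataset, feature names, and model choices are assumptions made for demonstration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "construction productivity" data (feature names are assumed, illustrative only).
n = 500
X = np.column_stack([
    rng.integers(2, 12, n),   # crew_size
    rng.normal(15, 10, n),    # temperature (deg C)
    rng.uniform(6, 12, n),    # work_hours
])
y = 0.8 * X[:, 0] - 0.02 * (X[:, 1] - 20) ** 2 + 0.5 * X[:, 2] + rng.normal(0, 1, n)
feature_names = ["crew_size", "temperature", "work_hours"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Black-box learner: typically higher accuracy, opaque internal logic.
black_box = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Transparent learner: a shallow tree whose decision rules can be read directly.
glass_box = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_tr, y_tr)

print("Black-box R^2:   ", round(black_box.score(X_te, y_te), 3))
print("Shallow-tree R^2:", round(glass_box.score(X_te, y_te), 3))

# Post-hoc explanation of the black box: permutation-based feature relevance.
imp = permutation_importance(black_box, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: relevance {score:.3f}")

# Intrinsic explanation of the transparent model: its rule set, printed as text.
print(export_text(glass_box, feature_names=feature_names))
```

Read side by side, the two outputs show the Yin–Yang framing in miniature: the ensemble usually scores higher, but its behavior must be explained after the fact, whereas the shallow tree trades some accuracy for rules an engineer can audit directly.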