Concrete cracking poses a significant threat to the safety and stability of crucial infrastructure such as bridges, roads, and building structures. Recognizing and accurately measuring the morphology of cracks is essential for assessing the structural integrity of these elements. This paper introduces a novel crack segmentation method known as CG-CNNs, which combines a clustering-guided (CG) block with a convolutional neural network (CNN). The innovative CG block operates by categorizing extracted image features into K groups, merging these features, and then simultaneously feeding the augmented features and original image into the CNN for precise crack image segmentation. It automatically determines the optimal K value by evaluating the silhouette coefficient for various K values, utilizing the grayscale feature value of each cluster centroid as a defining characteristic for each category. To bolster our approach, we curated a dataset of 2500 crack images from concrete structures, employing rigorous pre-processing and data augmentation techniques. We benchmarked our method against three prevalent CNN architectures: DeepLabV3+, U-Net, and SegNet, each augmented with the CG block. An algorithm specialized for assessing crack edge recognition accuracy was employed to analyze the proposed method's performance. The comparative analysis demonstrated that CNNs enhanced with the CG block exhibited exceptional crack image recognition capabilities and enabled precise segmentation of crack edges. Further investigation revealed that the CG-DeepLabV3+ model excelled, achieving an F1 score of 0.90 and an impressive intersection over union (IoU) value of 0.82. Notably, the CG-DeepLabV3+ model significantly reduced the recognition error for locating crack edges to a mere 2.31 pixels. These enhancements mark a significant advancement in developing accurate algorithms based on deep neural networks for identifying concrete crack edges reliably. In conclusion, our CG-CNNs approach offers a highly accurate method for crack segmentation, which is invaluable for machine-based measurements of cracks on concrete surfaces.
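A minimal sketch of the clustering-guided pre-processing idea as read from the description above: K is chosen by the silhouette coefficient, pixels are clustered by grayscale value, and each pixel is replaced by its cluster-centroid gray value to form an extra CNN input channel. The implementation details (pixel sampling, K range) are assumptions, not the paper's exact CG block.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cg_feature_map(gray_img, k_range=range(2, 6), sample=2000, seed=0):
    """Map each pixel to its cluster-centroid gray value, choosing K by silhouette."""
    pixels = gray_img.reshape(-1, 1).astype(np.float64)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pixels), size=min(sample, len(pixels)), replace=False)
    best_k, best_score = 2, -1.0
    for k in k_range:  # evaluate the silhouette coefficient for each candidate K
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(pixels[idx])
        score = silhouette_score(pixels[idx], labels)
        if score > best_score:
            best_k, best_score = k, score
    km = KMeans(n_clusters=best_k, n_init=10, random_state=seed).fit(pixels)
    return km.cluster_centers_[km.labels_].reshape(gray_img.shape)

# Usage: stack the CG feature map with the image as an additional CNN input channel
# augmented = np.stack([gray_img, cg_feature_map(gray_img)], axis=0)
```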
The accurate prediction of the deterioration of Continuously Reinforced Concrete Pavement (CRCP) is essential for the effective management of pavements and the maintenance of infrastructure. In this study, a comprehensive approach that integrates descriptive statistics, correlation analysis, and machine learning algorithms is employed to develop models and predict punchouts in CRCP. The dataset used in this study is extracted from the Long-Term Pavement Performance (LTPP) database and contains a wide range of pavement attributes, such as age, climate zone, thickness, and traffic data. Initial exploratory analysis reveals varying distributions among the input features, which serves as the foundation for subsequent analysis. A correlation heatmap matrix is utilized to elucidate the relationships between these attributes and punchouts, guiding the selection of features for modeling. By employing the random forest algorithm, key predictors like age, climate zone, and total thickness are identified. Various machine learning techniques, encompassing linear regression, decision trees, support vector machines, ensemble methods, Gaussian process regression, artificial neural networks, and kernel-based approaches, are compared. It is noteworthy that ensemble methods such as boosted trees and Gaussian process regression models exhibit promising predictive performance, with low root mean square error (RMSE) and high R-squared values. The outcomes of this study provide valuable insights for the development of pavement management strategies, facilitating informed decision-making regarding resource allocation and infrastructure maintenance. Future research could focus on refining models, exploring additional features, and validating results through real-world implementation trials. This study contributes to advancing predictive modeling techniques for optimizing CRCP infrastructure management and durability.
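As a hedged illustration of the feature-importance step described above, the sketch below fits a random forest on LTPP-style attributes and ranks them; the file name and column names are placeholders, not the actual LTPP schema.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("crcp_ltpp.csv")  # hypothetical extract of the LTPP database
X = df[["age", "climate_zone", "total_thickness", "traffic"]]  # climate_zone assumed numerically coded
y = df["punchouts"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

rf = RandomForestRegressor(n_estimators=500, random_state=42).fit(X_train, y_train)
# Rank predictors by impurity-based importance, as in the study's feature selection
print(pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False))
print("test R^2:", rf.score(X_test, y_test))
```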
Strip foundations, as a widely applied form of shallow foundation, involve foundation displacements and soil deformations under loading, which are critical issues in geotechnical engineering. Traditional limit analysis methods can only provide solutions for ultimate bearing capacity, while numerical methods require remeshing and remodeling for different scenarios. To address these challenges, this study proposes a deep learning approach based on the DeepONet neural operator for rapid and accurate predictions of load–displacement curves and vertical displacement fields of strip foundations under various conditions. A dataset with randomly distributed parameters was generated using the finite element method, with the training set employed to train the neural network. Validation on the test set shows that the proposed method not only accurately predicts ultimate bearing capacity but also captures the nonlinear characteristics of high-dimensional data. As an offline alternative to finite element methods, the proposed approach holds promise for efficient, real-time prediction of the mechanical behavior of shallow foundations under loading.
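The DeepONet structure referenced above can be sketched as a branch net (encoding loading and soil parameters) and a trunk net (encoding query coordinates) combined by an inner product; the layer sizes and input dimensions below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, n_params, coord_dim=2, width=64, p=32):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_params, width), nn.Tanh(),
                                    nn.Linear(width, p))   # encodes case parameters
        self.trunk = nn.Sequential(nn.Linear(coord_dim, width), nn.Tanh(),
                                   nn.Linear(width, p))    # encodes query coordinates

    def forward(self, params, coords):
        b = self.branch(params)  # (n_cases, p)
        t = self.trunk(coords)   # (n_points, p)
        return b @ t.T           # (n_cases, n_points): displacement at each query point

model = DeepONet(n_params=5)
u = model(torch.randn(8, 5), torch.rand(100, 2))  # 8 random cases, 100 query points
```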
Relational databases containing construction-related data are widely used in the Architecture, Engineering, and Construction (AEC) industry to manage diverse datasets, including project management and building-specific information. This study explores the use of large language models (LLMs) to convert construction data from relational databases into formal semantic representations, such as the Resource Description Framework (RDF). Transforming this data into RDF-encoded knowledge graphs enhances interoperability and enables advanced querying capabilities. However, existing methods like R2RML and Direct Mapping face significant challenges, including the need for domain expertise and scalability issues. LLMs, with their advanced natural language processing capabilities, offer a promising solution by automating the conversion process, reducing the reliance on expert knowledge, and semantically enriching data through appropriate ontologies. This paper evaluates the potential of four LLMs (two versions of GPT and Llama) to enhance data enrichment workflows in the construction industry and examines the limitations of applying these models to large-scale datasets.
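A small sketch of the target representation: mapping one relational row to RDF triples with rdflib. The LLM's role (selecting ontology terms) is stood in for by a fixed dictionary, and the namespace and row contents are hypothetical.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/aec#")  # illustrative namespace, not a real ontology
g = Graph()
g.bind("ex", EX)

row = {"id": "W-101", "type": "Wall", "height_m": 3.2}  # hypothetical database row
# Stand-in for LLM output: which predicate each column should map to
llm_mapping = {"type": RDF.type, "height_m": EX.heightInMetres}

subject = EX[row["id"]]
g.add((subject, llm_mapping["type"], EX[row["type"]]))
g.add((subject, llm_mapping["height_m"], Literal(row["height_m"])))
print(g.serialize(format="turtle"))
```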
Soil permeability is a critical parameter that dictates the movement of water through soil, and it impacts processes such as seepage, erosion, slope stability, foundation design, groundwater contamination, and various engineering applications. This study investigates the permeability of soil amended with waste foundry sand (WFS) at a replacement level of 10%. Permeability measurements are conducted for three distinct relative densities, spanning from 65% to 85%. The dataset compiled from these measurements is employed to develop ensemble artificial intelligence (AI) models. Specifically, four base regressor AI models are considered: Nearest Neighbor (NNR), Decision Tree (DTR), Random Forest (RFR) and Support Vector Machine (SVR). These base models are combined with four distinct ensemble techniques: Gradient Boosting (GB), Stacking Regressor (SR), AdaBoost Regressor (ADR), and XGBoost (XGB). The input parameters include fraction of base sand (BS), fraction of waste foundry sand (WFS), relative density (RD), duration of flow (T) and quantity of flow (Q), with permeability (k) as the target, for a total of 165 data points. Through comparative analysis, the Gradient Boost with Decision Tree (GB-DTR) model is found to be the best-performing model, with R2 = 0.9919. Sensitivity analysis reveals that Q is the most influential input parameter in predicting soil permeability.
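For illustration, a minimal version of the best-performing setup, reading GB-DTR as gradient boosting over decision-tree base learners (scikit-learn's default); the file and column names are placeholders for the paper's dataset.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("wfs_permeability.csv")   # hypothetical 165-point dataset
X = df[["BS", "WFS", "RD", "T", "Q"]]      # input parameters as listed above
y = df["k"]                                # permeability as the prediction target
gb = GradientBoostingRegressor(random_state=0)  # decision trees are the default base learner
print("CV R^2:", cross_val_score(gb, X, y, cv=5, scoring="r2").mean())
```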
Machine learning (ML) has garnered significant attention within the engineering domain. However, engineers without formal ML education or programming expertise may encounter difficulties when attempting to integrate ML into their work processes. This study aims to address this challenge by offering a tutorial that guides readers through the construction of ML models using Python. We introduce three simple datasets and illustrate how to preprocess the data for regression, classification, and clustering tasks. Subsequently, we navigate readers through the model development process utilizing well-established libraries such as NumPy, pandas, scikit-learn, and matplotlib. Each step, including data preparation, model training, validation, and result visualization, is covered with detailed explanations. Furthermore, we explore explainability techniques to help engineers understand the underlying behavior of their models. By the end of this tutorial, readers will have hands-on experience with three fundamental ML tasks and understand how to evaluate and explain the developed models to make engineering projects efficient and transparent.
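A condensed version of the regression workflow the tutorial walks through, using the named libraries; the dataset here is synthetic rather than one of the paper's three.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Data preparation: a small synthetic dataset with a noisy linear relation
rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.uniform(0, 10, 200)})
df["y"] = 2.5 * df["x"] + rng.normal(0, 1, 200)

# Model training and validation
X_train, X_test, y_train, y_test = train_test_split(df[["x"]], df["y"], test_size=0.2)
model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)
print("R^2:", r2_score(y_test, pred))

# Result visualization
plt.scatter(X_test, y_test, label="data")
plt.scatter(X_test, pred, color="red", label="prediction")
plt.legend()
plt.show()
```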
Effective water management and flood prevention are critical challenges in both urban and rural areas, necessitating precise and prompt monitoring of waterbodies. As a fundamental step in the monitoring process, waterbody segmentation involves precisely delineating waterbody boundaries from imagery. Previous research using satellite images often lacks the resolution and contextual detail needed for local-scale analysis. This study addresses these limitations by leveraging common natural images, which are more easily accessible and provide higher resolution and richer context than satellite images. However, segmenting waterbodies from ordinary images faces several obstacles, including variations in lighting, occlusions from objects such as trees and buildings, and reflections on the water surface, all of which can mislead algorithms. Additionally, the diverse shapes and textures of waterbodies, alongside complex backgrounds, further complicate this task. While large-scale vision models pre-trained on large datasets have typically been leveraged for their generalizability across various downstream tasks, their application to waterbody segmentation from ground-level images remains underexplored. Hence, this research proposes the Visual Aquatic Generalist (VAGen), a lightweight waterbody segmentation model inspired by visual In-Context Learning (ICL) and Visual Prompting (VP). VAGen refines large visual models by innovatively adding learnable perturbations that enhance the quality of prompts in ICL. Experimental results show that VAGen improved the mean Intersection over Union (mIoU) by 22.38% over the baseline model without learnable prompts and surpassed the current state-of-the-art (SOTA) task-specific waterbody segmentation models by 6.20%. The performance evaluation and analysis of VAGen indicate that it substantially reduces the number of trainable parameters and the computational overhead, making it feasible to deploy on cost-limited devices, including unmanned aerial vehicles (UAVs) and mobile computing platforms. This study thereby makes a valuable contribution to the field of computer vision, offering practical solutions for engineering applications related to urban flood monitoring, agricultural water resource management, and environmental conservation efforts.
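Conceptually, the learnable-perturbation idea can be sketched as below: a small tensor added to the visual prompt is the only trainable component while the large model stays frozen. The backbone here is a placeholder layer and the prompt/query pairing is simplified; this is not VAGen's actual architecture.

```python
import torch
import torch.nn as nn

backbone = nn.Conv2d(6, 1, 3, padding=1)        # placeholder for the frozen large model
for p in backbone.parameters():
    p.requires_grad = False

prompt = torch.rand(1, 3, 64, 64)                # in-context example image
delta = nn.Parameter(torch.zeros_like(prompt))   # learnable perturbation: the only trainable part
opt = torch.optim.Adam([delta], lr=1e-2)

query = torch.rand(1, 3, 64, 64)
target = (torch.rand(1, 1, 64, 64) > 0.5).float()  # toy waterbody mask
for _ in range(20):
    x = torch.cat([prompt + delta, query], dim=1)  # perturbed prompt conditions the query
    loss = nn.functional.binary_cross_entropy_with_logits(backbone(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```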
Traditional methods for proportioning high-performance concrete (HPC) have certain shortcomings, such as high costs, usage constraints, and nonlinear relationships. Implementing a strategy to optimize HPC mixtures can minimize design expenses, time spent, and material wastage in the construction sector. Owing to its exceptional qualities, such as high strength (HS), fluidity and resilience, HPC has been broadly used in construction projects. In this study, we employed Generalized Regression Neural Network (GRNN), Nonlinear AutoRegressive with exogenous inputs (NARX neural network), and Random Forest (RF) models to estimate the compressive strength (CS) of HPC in the first scenario. The second scenario involved the development of an ensemble model using the Radial Basis Function Neural Network (RBFNN) to compensate for the inferior performance of the standalone model combinations. The output variable was the 28-day CS in MPa, while the input variables included slump (S), water-binder ratio (W/B) %, water content (W) kg/m3, fine aggregate ratio (S/a) %, silica fume (SF) % and superplasticizer (SP) kg/m3. The RF model was developed in R Studio; the GRNN and NARX-NN models were developed with the MATLAB 2019a toolkit; and the pre- and post-processing of data was carried out in E-Views 12.0. The results indicate that in the first scenario, Combination M1 of the RF model outperformed the other models, with greater prediction accuracy, yielding a PCC of 0.854 and a MAPE of 4.349 during the calibration phase. In the second scenario, the ensemble of RF models surpassed all other models, achieving a PCC of 0.961 and a MAPE of 0.952 during the calibration phase. Overall, the proposed models demonstrate significant value in predicting the CS of HPC.
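A hedged sketch of the second-scenario idea: standalone model predictions become inputs to an RBF-based meta-learner. Scikit-learn stand-ins replace the MATLAB GRNN/NARX models, KernelRidge with an RBF kernel stands in for the RBFNN, and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 6))   # stand-ins for S, W/B, W, S/a, SF, SP
y = 40 + 30 * X[:, 1] + 10 * X[:, 4] + rng.normal(0, 2, 200)  # synthetic CS values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
base = [RandomForestRegressor(random_state=1).fit(X_tr, y_tr),
        KNeighborsRegressor().fit(X_tr, y_tr)]          # standalone models
meta = KernelRidge(kernel="rbf")                        # RBF meta-learner over base predictions
meta.fit(np.column_stack([m.predict(X_tr) for m in base]), y_tr)
print("ensemble R^2:",
      meta.score(np.column_stack([m.predict(X_te) for m in base]), y_te))
```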
This study is primarily aimed at creating three machine learning models: artificial neural network (ANN), random forest (RF), and k-nearest neighbour (KNN), to predict the crippling load (CL) of I-shaped steel columns. Five input parameters, namely length of column (L), width of flange (bf), flange thickness (tf), web thickness (tw) and height of column (H), are used to compute the crippling load (CL). A range of performance indicators, including the coefficient of determination (R2), variance account factor (VAF), a-10 index, root mean square error (RMSE), mean absolute error (MAE) and mean absolute deviation (MAD), are used to assess the effectiveness of the established machine learning models. The results show that all three ML (machine learning) models can accurately predict the crippling load, but the performance of the ANN is superior: it delivers the highest value of R2 = 0.998 and the lowest value of RMSE = 0.008 in the training phase, as well as the highest value of R2 = 0.996 and the lowest value of RMSE = 0.012 in the testing phase. Additional methods, including rank analysis, reliability analysis, regression plots, Taylor diagrams and error matrix plots, are employed to assess the models' performance. The reliability index (β) of the models is calculated using the first-order second-moment (FOSM) technique, and the result is compared with the actual value. Additionally, sensitivity analysis is performed to check the impact of the input variables on the output (CL), finding that bf has the greatest impact on the crippling load, followed by tf, tw, H and L, in that order. This study demonstrates that ML techniques are useful for developing a reliable numerical tool for measuring the crippling load of I-shaped steel columns. The proposed techniques can also be used to predict other kinds of failures as well as different kinds of perforated columns.
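One common reading of the FOSM reliability check mentioned above is shown below: β is taken as the mean of the prediction margin divided by its standard deviation. Both this interpretation and the numbers are assumptions for illustration, not the paper's data.

```python
import numpy as np

def fosm_beta(actual, predicted):
    """First-order second-moment reliability index of the margin M = actual - predicted."""
    margin = np.asarray(actual) - np.asarray(predicted)
    return margin.mean() / margin.std(ddof=1)

# Illustrative load values only, not the study's measurements
print(fosm_beta([10.2, 11.5, 9.8, 10.7], [10.0, 11.2, 9.9, 10.4]))
```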
Optimization is the process of obtaining the best possible outcome under the given conditions. The ultimate goal of optimization is to maximize or minimize the desired effects to meet technological and management requirements. When faced with a problem that has several possible solutions, an optimization technique is used to identify the best one; this involves exploring different regions of the search space as appropriate to the specific problem. Nature-inspired algorithms, a class of stochastic methods, are used to solve such optimization problems. In civil engineering, numerous design optimization problems are nonlinear and can be difficult to solve via traditional techniques. In such cases, metaheuristic algorithms can be a more useful and practical option for civil engineering applications. These algorithms combine randomness with deterministic rules to compare multiple candidate solutions and select the most satisfactory one. This article briefly presents and discusses the application and efficiency of various metaheuristic algorithms in civil engineering topics.
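To make the "randomness plus guided search" idea concrete, here is a toy simulated-annealing loop minimizing a simple one-dimensional objective; the objective is illustrative, not a specific civil engineering design problem.

```python
import math
import random

def cost(x):  # stand-in design objective with local minima
    return (x - 3.0) ** 2 + math.sin(5 * x)

x, t = 0.0, 1.0
best = x
for step in range(1000):
    cand = x + random.gauss(0, 0.5)  # random move in the search space
    # Accept better designs always, worse ones occasionally (more often early on)
    if cost(cand) < cost(x) or random.random() < math.exp(-(cost(cand) - cost(x)) / t):
        x = cand
    if cost(x) < cost(best):
        best = x
    t *= 0.995  # cooling schedule: the search becomes more deterministic over time
print("best x:", round(best, 3), "cost:", round(cost(best), 3))
```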
Earthquakes pose a significant threat to life and property worldwide. Rapid and accurate assessment of earthquake damage is crucial for effective disaster response efforts. This study investigates the feasibility of employing deep learning models for damage detection using drone imagery. We explore the adaptation of models like VGG16 for object detection through transfer learning and compare their performance to established object detection architectures like YOLOv8 (You Only Look Once) and Detectron2. Our evaluation, based on various metrics including mAP, mAP50, and recall, demonstrates the superior performance of YOLOv8 in detecting damaged buildings within drone imagery, particularly for cases with moderate bounding box overlap. This finding suggests its potential suitability for real-world applications due to the balance between accuracy and efficiency. Furthermore, to enhance real-world feasibility, we explore two strategies for enabling the simultaneous operation of multiple deep learning models for video processing: frame splitting and threading. In addition, we optimize model size and computational complexity to facilitate real-time processing on resource-constrained platforms, such as drones. This work contributes to the field of earthquake damage detection by (1) demonstrating the effectiveness of deep learning models, including adapted architectures, for damage detection from drone imagery, (2) highlighting the importance of evaluation metrics like mAP50 for tasks with moderate bounding box overlap requirements, and (3) proposing methods for ensemble model processing and model optimization to enhance real-world feasibility. The potential for real-time damage assessment using drone-based deep learning models offers significant advantages for disaster response by enabling rapid information gathering to support resource allocation, rescue efforts, and recovery operations in the aftermath of earthquakes.
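The threading strategy can be sketched as below using the ultralytics API; the video path is hypothetical, and a second YOLO variant stands in for the Detectron2 model so the example stays self-contained.

```python
import threading
import cv2
from ultralytics import YOLO

def run(model, frames, results, key):
    # Each thread runs one detector over the shared frame buffer
    results[key] = [model(f, verbose=False) for f in frames]

cap = cv2.VideoCapture("quake_footage.mp4")  # hypothetical drone video
frames = []
while len(frames) < 32:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)

models = {"a": YOLO("yolov8n.pt"), "b": YOLO("yolov8s.pt")}
results = {}
threads = [threading.Thread(target=run, args=(m, frames, results, k))
           for k, m in models.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()
```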
Quarry dust, conventionally considered waste, has emerged as a potential solution for sustainable construction materials. This paper comprehensively reviews the mechanical properties of blocks manufactured from quarry dust, with a particular focus on the transformative role of machine learning (ML) in predicting and optimizing these properties. By systematically reviewing existing literature and case studies, this paper evaluates the efficacy of ML methodologies, addressing challenges related to data quality, feature selection, and model optimization. It underscores how ML can enhance accuracy in predicting mechanical properties, providing a valuable tool for engineers and researchers to optimize the design and composition of blocks made from quarry dust. This synthesis of mechanical properties and ML applications contributes to advancing sustainable construction practices, offering insights into the future integration of technology for predictive modeling in material science.
Small ground anchors are widely used to secure tents in disaster relief efforts. Given the urgent nature of rescue operations, it is crucial to obtain prompt and accurate estimations of their pullout capacity. In this study, a stacking machine learning (ML) model is developed for the rapid estimation of the pullout capacity offered by small ground anchors used for temporary tents, leveraging cone penetration data. The proposed stacking model incorporates three ML algorithms as the base regression models: K-nearest neighbors (KNN), support vector regression (SVR), and extreme gradient boosting (XGBoost). A dataset comprising 119 in-situ anchor pullout tests, in which the cone penetration data were measured, is utilized to train and assess the stacking model's performance. Three metrics, i.e., coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE), are employed to evaluate the predictive accuracy of the proposed model and compare its performance against four popular ML models and an empirical formula, highlighting the advantages of the proposed stacking approach. The results affirm that the proposed stacking model outperforms the other ML models and the empirical approach, achieving a higher R2, lower MAE and RMSE, and more predicted data points falling within the 20% error lines. Thus, the proposed stacking model holds promising potential as a solution for efficiently predicting the pullout capacity of small ground anchors.
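The stacking layout described above maps directly onto scikit-learn's StackingRegressor, as in the sketch below; the final estimator and hyperparameters are assumptions, and xgboost must be installed separately.

```python
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from xgboost import XGBRegressor

stack = StackingRegressor(
    estimators=[("knn", KNeighborsRegressor()),   # the three named base regressors
                ("svr", SVR()),
                ("xgb", XGBRegressor())],
    final_estimator=LinearRegression(),           # meta-learner choice is an assumption
    cv=5,
)
# stack.fit(X_train, y_train)  # X: cone penetration features, y: measured pullout capacity
```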
Topology optimization techniques are increasingly utilized in structural design to create efficient and aesthetically pleasing structures while minimizing material usage. Many existing topology optimization methods may generate slender structural members under compression, leading to significant buckling issues. Consequently, incorporating buckling considerations is essential to ensure structural stability. This study investigates the capabilities of the bi-directional evolutionary structural optimization method, particularly its extension to handle multiple load cases in buckling optimization problems. The numerical examples presented focus on three classical cases relevant to civil engineering: maximizing the buckling load factor of a compressed column, performing buckling-constrained optimization of a frame structure, and enhancing the buckling resistance of a high-rise building. The findings demonstrate that the algorithm can significantly improve structural stability with only a marginal increase in compliance. The detailed mathematical modeling, sensitivity analyses, and optimization procedures discussed provide valuable insights and tools for engineers to design structures with enhanced stability and efficiency.
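For reference, the linearized buckling analysis that underlies buckling load factor optimization is commonly posed as the generalized eigenproblem below; the notation is a standard assumption rather than the paper's own.

```latex
% K: stiffness matrix; K_g(u): geometric (stress) stiffness under the load case
% producing displacements u; \lambda_j, \phi_j: j-th buckling load factor and mode.
\left( \mathbf{K} + \lambda_j \, \mathbf{K}_g(\mathbf{u}) \right) \boldsymbol{\phi}_j = \mathbf{0},
\qquad \mathrm{BLF} = \min_j \lambda_j .
```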
Concrete is one of the most common construction materials used all over the world. Estimating the strength properties of concrete traditionally demands extensive laboratory experimentation. However, researchers have increasingly turned to predictive models to streamline this process. This review focuses on predicting the compressive strength of self-compacting concrete using artificial intelligence (AI) techniques. Self-compacting concrete represents an advanced construction material particularly suited for scenarios where traditional vibrational methods face limitations due to intricate formwork or reinforcement complexities. This review evaluates various AI techniques through a comparative performance analysis. The findings highlight that employing deep neural network models with multiple hidden layers significantly enhances predictive accuracy. Specifically, artificial neural network (ANN) models exhibit robustness, consistently achieving R2 values exceeding 0.7 across the reviewed studies, thereby demonstrating their efficacy in predicting concrete compressive strength. The integration of ANN models is recommended for predicting various civil engineering properties. Notably, the adoption of AI models reduces both time and resource expenditures by obviating the need for extensive experimental testing, which can otherwise delay construction activities.
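A minimal example of the multi-hidden-layer ANN the review favours, using scikit-learn's MLPRegressor; the layer sizes and placeholder variable names are assumptions.

```python
from sklearn.neural_network import MLPRegressor

# Three hidden layers, reflecting the review's finding that deeper ANNs predict better
ann = MLPRegressor(hidden_layer_sizes=(64, 64, 32), max_iter=2000, random_state=0)
# ann.fit(X_mix, y_strength)  # X_mix: SCC mix proportions; y_strength: compressive strength
```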