The past decade has witnessed a notable transformation in the Architecture, Engineering and Construction (AEC) industry, with efforts in both academia and industry to improve the efficiency, safety and sustainability of civil projects. Such advances have greatly contributed to a higher level of automation in the lifecycle management of civil assets within a digitalised environment. To integrate the achievements delivered so far and further accelerate their progress, this study proposes a novel theory, Engineering Brain, by adopting the Metaverse concept in the field of civil engineering. Specifically, the evolution of the Metaverse and its key supporting technologies are first reviewed; then, the Engineering Brain theory is presented, including its theoretical background, key components and their interconnections. An outlook on the implementation of this theory within the AEC sector is offered as a description of the Metaverse of future engineering. Through a comparison between the proposed Engineering Brain theory and the Metaverse, their relationship is illustrated, and how Engineering Brain may function as the Metaverse for future engineering is further explored. Providing an innovative insight into the future engineering sector, this study can potentially guide the entire industry towards a new era based on the Metaverse environment.
Research has continually grown toward the development of image-based structural health monitoring tools that leverage deep learning models to automate damage detection in civil infrastructure. However, these tools are typically based on RGB images, which work well under ideal lighting conditions but often degrade in performance in poor and low-light scenes. Thermal images, on the other hand, while lacking crisp detail, do not show the same degradation under changing lighting conditions. The potential to enhance automated damage detection by fusing RGB and thermal images within a deep learning network has yet to be explored. In this paper, RGB and thermal images are fused in a ResNet-based semantic segmentation model for vision-based inspections, and a convolutional neural network is then employed to automatically identify damage defects in concrete. The model uses a thermal encoder and an RGB encoder to combine the features detected in both spectra and improve performance, and a single decoder to predict the classes. The results suggest that this RGB-thermal fusion network outperforms the RGB-only network in crack detection under the Intersection over Union (IoU) performance metric. The RGB-thermal fusion model not only detected damage at a higher rate but also performed markedly better in differentiating between types of damage.
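As a rough illustration of the fusion architecture described in this abstract, the following sketch (PyTorch) assumes two ResNet-18 encoders, feature concatenation and a single convolutional decoder; these choices, the layer sizes and the IoU helper are illustrative assumptions rather than the paper's exact model.

```python
# Minimal sketch of an RGB-thermal fusion segmentation network (illustrative only;
# encoder depth, fusion strategy and decoder are assumptions, not the paper's architecture).
import torch
import torch.nn as nn
from torchvision import models


class RGBThermalFusionNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Two ResNet-18 encoders: one for RGB, one for thermal (thermal replicated to 3 channels).
        rgb_backbone = models.resnet18(weights=None)
        thermal_backbone = models.resnet18(weights=None)
        self.rgb_encoder = nn.Sequential(*list(rgb_backbone.children())[:-2])      # (B, 512, H/32, W/32)
        self.thermal_encoder = nn.Sequential(*list(thermal_backbone.children())[:-2])
        # Single decoder: fuse the two feature maps, then upsample back to input resolution.
        self.decoder = nn.Sequential(
            nn.Conv2d(1024, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, kernel_size=1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_encoder(rgb), self.thermal_encoder(thermal)], dim=1)
        return self.decoder(fused)


def iou(pred: torch.Tensor, target: torch.Tensor, cls: int) -> float:
    """Intersection over Union for one class, the metric used to compare the models."""
    p, t = pred == cls, target == cls
    inter, union = (p & t).sum().item(), (p | t).sum().item()
    return inter / union if union else float("nan")


if __name__ == "__main__":
    model = RGBThermalFusionNet(num_classes=3)
    rgb = torch.randn(1, 3, 224, 224)
    thermal = torch.randn(1, 3, 224, 224)   # thermal channel replicated to 3 channels
    print(model(rgb, thermal).shape)         # (1, 3, 224, 224)
```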
Placing and vibrating concrete are vital activities that affect its quality. The current monitoring method relies on visual, time-consuming feedback from project managers, which can be subjective. With this method, poor workmanship cannot be reliably detected on the spot; rather, the concrete is inspected and repaired after it has hardened. To address the problem of retroactive quality control and achieve real-time quality assurance of concrete operations, this paper presents a monitoring and warning solution for the workmanship quality of concrete placement and vibration. Specifically, the solution collects and compiles real-time sensor data related to workmanship quality and sends alerts to project managers when related parameters fall outside the required ranges. This study consists of four steps: (1) identifying key operational factors (KOFs) that determine acceptable workmanship of concrete work; (2) reviewing and selecting an appropriate positioning technology for collecting KOF data; (3) designing and programming modules for a solution that interprets the positioning data and sends alerts to project managers when poor workmanship is suspected; and (4) testing the solution at a construction site for validation by comparing the positioning and warning data with a video record. The test results show that the monitoring of concrete placement is accurate and reliable. Follow-up studies will focus on developing a communication channel between the proposed solution and concrete workers, so that feedback can be delivered to them directly.
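The warning logic described above can be pictured as a range check over streaming KOF readings. The sketch below is a minimal illustration; the factor names, units and required ranges are hypothetical placeholders rather than the paper's actual parameters or alerting channel.

```python
# Sketch of the alerting idea: compare readings for key operational factors (KOFs) against
# required ranges and raise an alert when they fall outside. All names and ranges are
# hypothetical placeholders for illustration.
from dataclasses import dataclass
from typing import Optional


@dataclass
class KOFRange:
    low: float
    high: float


# Hypothetical KOFs for concrete placement and vibration workmanship.
REQUIRED_RANGES = {
    "vibration_duration_s": KOFRange(5.0, 15.0),   # time the vibrator stays at one spot
    "insertion_spacing_m": KOFRange(0.0, 0.45),    # distance between insertion points
    "drop_height_m": KOFRange(0.0, 1.5),           # free-fall height during placement
}


def check_reading(factor: str, value: float) -> Optional[str]:
    """Return an alert message if the reading is outside its required range, else None."""
    rng = REQUIRED_RANGES.get(factor)
    if rng is None or rng.low <= value <= rng.high:
        return None
    return f"ALERT: {factor}={value:.2f} outside required range [{rng.low}, {rng.high}]"


if __name__ == "__main__":
    stream = [("vibration_duration_s", 3.2), ("insertion_spacing_m", 0.30), ("drop_height_m", 2.1)]
    for factor, value in stream:
        msg = check_reading(factor, value)
        if msg:
            print(msg)   # in the real solution this would be pushed to the project manager
```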
For soil liquefaction prediction from multiple data sources, this study designs a hierarchical machine learning model based on deep feature extraction and Gaussian Process modelling with integrated domain adaptation techniques. The proposed model first combines deep Fisher discriminant analysis (DDA) and a Gaussian Process (GP) in a unified framework, so as to extract deep discriminant features and enhance classification performance. To deliver a fair evaluation, the classifier is validated through repeated stratified K-fold cross-validation. Five different data sources are then presented to further verify the model's robustness and generality. To reuse the knowledge gained from existing data sources and enhance the generality of the predictive model, a domain adaptation approach is formulated by combining a deep autoencoder with TrAdaBoost, so as to achieve good performance over data records from both in-situ and laboratory observations. After comparing the proposed model with classical machine learning models, such as the support vector machine, as well as with state-of-the-art ensemble learning models, it is found that, for seismically induced liquefaction prediction, the model achieves high accuracy on all datasets in both the repeated cross-validation and the Wilcoxon signed-rank test. Finally, a sensitivity analysis is performed on the DDA-GP model to reveal the features that most significantly affect liquefaction.
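The evaluation protocol named above, repeated stratified K-fold cross-validation, can be illustrated with a plain Gaussian Process classifier on synthetic data. The sketch below does not reproduce the DDA feature extractor or the TrAdaBoost-based domain adaptation; the dataset and kernel are placeholders.

```python
# Sketch of the validation protocol only: repeated stratified K-fold cross-validation of a
# Gaussian Process classifier. Synthetic data stand in for the liquefaction records.
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for in-situ / laboratory liquefaction records (features + liquefied or not).
X, y = make_classification(n_samples=300, n_features=8, n_informative=5, random_state=0)

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)

scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f} over {len(scores)} folds")
```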
Causality is the science of cause and effect. It is through causality that explanations can be derived, theories can be formed, and new knowledge can be discovered. This paper presents a modern look into establishing causality within structural engineering systems. In this pursuit, the paper starts with a gentle introduction to causality, then pivots to contrast commonly adopted methods for inferring causes and effects, i.e., induction (empiricism) and deduction (rationalism), and outlines how these methods continue to shape our structural engineering philosophy and, by extension, our domain. The bulk of the paper is dedicated to establishing an approach and criteria that tie the principles of induction and deduction together to derive causal laws (i.e., mapping functions) through explainable artificial intelligence (XAI) capable of describing new knowledge pertaining to structural engineering phenomena. The proposed approach and criteria are then examined via a case study.
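As a loose illustration of how an XAI tool can expose the mapping function a model has induced from data, the sketch below fits a regressor to a synthetic beam-deflection dataset and reads off permutation importances. The data, model and importance measure are assumptions made for illustration only; they are not the paper's proposed approach or criteria.

```python
# Illustrative only: one common XAI tool (permutation importance) applied to a model trained on
# synthetic structural data, to expose which inputs the induced mapping depends on.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Hypothetical observations: section depth d, span L and load w, with a synthetic response
# proportional to w*L^4/d^3 (uniformly loaded simply supported beam, fixed section width).
d = rng.uniform(0.3, 0.8, 500)
L = rng.uniform(4.0, 10.0, 500)
w = rng.uniform(5.0, 30.0, 500)
X = np.column_stack([d, L, w])
y = w * L**4 / (5000.0 * d**3) + rng.normal(0.0, 0.01, 500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["depth d", "span L", "load w"], result.importances_mean):
    print(f"{name}: permutation importance = {imp:.3f}")
# Large importances flag the inputs the induced mapping depends on; whether that dependence is
# causal is precisely the question the paper's criteria are intended to settle.
```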
Current research on the Digital Twin (DT) is largely focused on the performance of built assets in their operational phases, as well as on the urban environment. The construction phase, however, has received far less Digital Twin attention. This paper therefore proposes a Digital Twin framework for the construction phase, develops a DT prototype and tests it for the use case of measuring productivity and monitoring earthwork operations. The DT framework and its prototype are underpinned by the principles of versatility, scalability, usability and automation, to enable the DT to fulfil the requirements of large earthwork projects and the dynamic nature of their operation. Cloud computing and dashboard visualisation were deployed to enable automated and repeatable data pipelines and data analytics at scale, and to provide insights in near-real time. The testing of the DT prototype on a motorway project in the Northeast of England successfully demonstrated its ability to produce key insights: (i) predicting equipment utilisation ratios and productivities; (ii) detecting the percentage of time spent on different tasks (i.e., loading, hauling, dumping, returning or idling), the distance travelled by equipment over time, and the speed distribution; and (iii) visualising selected earthwork operations.
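Two of the insights listed above, equipment utilisation and the share of time per task, reduce to simple aggregations over telemetry records. The sketch below illustrates such computations with pandas; the column names and sample values are assumptions, not the project's actual data schema or pipeline.

```python
# Sketch of two DT insights computed from equipment telemetry: (i) utilisation ratio and
# (ii) percentage of time per task. The records below are hypothetical placeholders.
import pandas as pd

# Hypothetical telemetry: one row per time slice with the task the machine was performing.
records = pd.DataFrame({
    "equipment_id": ["EX01"] * 6,
    "task": ["loading", "hauling", "dumping", "returning", "idling", "loading"],
    "duration_min": [12, 25, 5, 22, 18, 10],
})

total = records["duration_min"].sum()
by_task = records.groupby("task")["duration_min"].sum()

utilisation = 1.0 - by_task.get("idling", 0) / total   # share of time on productive tasks
task_share = (by_task / total * 100).round(1)           # % of time per task

print(f"utilisation ratio: {utilisation:.2f}")
print(task_share.to_string())
```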
The development of automatic methods to recognize cracks in concrete surfaces has been a research focus in recent years, first through classical computer vision methods and more recently through convolutional neural networks, which are delivering promising results. Challenges persist in crack recognition, notably due to the confusion introduced by the myriad of elements commonly found on concrete surfaces. These methods would be robust to such elements if correspondingly heterogeneous datasets were available. Even so, this would be a cumbersome methodology, since training would be needed for each particular case and models would be case dependent. Thus, the scientific community is focusing on generalizing neural network models to achieve high performance on images from domains slightly different from those on which they were trained. Such generalization can be achieved through domain adaptation techniques at the training stage. Domain adaptation seeks a feature space in which features from both domains are invariant, so that the classes become separable. The work presented here proposes the DA-Crack method, a domain adversarial training method, to generalize a neural network for recognizing cracks in images of concrete surfaces. The domain adversarial method uses a convolutional feature extractor followed by a classifier and a discriminator, and relies on two datasets: a labeled source dataset and a small unlabeled target dataset. The classifier is responsible for classifying randomly chosen images, while the discriminator is dedicated to identifying the dataset to which each image belongs. Backpropagation from the discriminator reverses the gradient used to update the extractor. This counteracts the convergence driven by the updates backpropagated from the classifier, thereby generalizing the extractor so that it recognizes cracks in images from both the source and target datasets. Results show that the DA-Crack training method improved crack classification accuracy on the target dataset by 54 percentage points, while accuracy on the source dataset remained unaffected.
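The adversarial mechanism described above follows the familiar gradient reversal pattern: the discriminator's gradient is negated before it reaches the shared extractor. The sketch below illustrates one training step in PyTorch; the layer sizes, batch contents and loss weighting are illustrative assumptions rather than the DA-Crack configuration.

```python
# Minimal sketch of domain-adversarial training with a gradient reversal layer (illustrative
# sizes and data only; not the DA-Crack implementation).
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient pushes the extractor to confuse the discriminator.
        return -ctx.lam * grad_output, None


extractor = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
classifier = nn.Linear(16, 2)       # crack / no crack (labelled source images only)
discriminator = nn.Linear(16, 2)    # source / target domain (all images)

params = list(extractor.parameters()) + list(classifier.parameters()) + list(discriminator.parameters())
opt = torch.optim.SGD(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

# One illustrative training step on random tensors standing in for source/target batches.
src_x, src_y = torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8,))
tgt_x = torch.randn(8, 3, 64, 64)

feat_src, feat_tgt = extractor(src_x), extractor(tgt_x)
cls_loss = ce(classifier(feat_src), src_y)                       # supervised loss on source only

feats = torch.cat([feat_src, feat_tgt])
domains = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])
dom_loss = ce(discriminator(GradReverse.apply(feats, 1.0)), domains)

opt.zero_grad()
(cls_loss + dom_loss).backward()
opt.step()
```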
Engineering inspection and maintenance technologies play an important role in the safety, operation, maintenance and management of buildings. In construction project control, supervision of engineering quality is a difficult task. To address such inspection and maintenance issues, this study presents a computer-vision-guided semi-autonomous robotic system for the identification and repair of concrete cracks, in which humans can formulate repair plans for the system. Concrete cracks are characterized through computer vision, and a crack feature database is established. Furthermore, a trajectory generation and coordinate transformation method is designed to determine the robotic execution coordinates. In addition, a knowledge-base repair method is examined to make appropriate decisions on repair technology for concrete cracks, and a robotic arm is designed for crack repair. Finally, simulations and experiments are conducted, proving the feasibility of the proposed repair method. The results of this study can potentially improve the performance of on-site automatic concrete crack repair, while addressing issues such as high accident rates, low efficiency and the loss of skilled workers.
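The coordinate transformation step can be pictured as mapping a crack point located in the camera frame into the robot-base frame with a homogeneous transform. The sketch below uses made-up calibration values; the actual hand-eye calibration and trajectory generation method are not reproduced here.

```python
# Sketch of camera-to-robot-base coordinate transformation with a homogeneous transform.
# Rotation, translation and the sample point are illustrative values only.
import numpy as np


def camera_to_base(point_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Transform a 3-D point from camera coordinates to robot-base coordinates."""
    T = np.eye(4)
    T[:3, :3] = R          # camera-to-base rotation (e.g., from hand-eye calibration)
    T[:3, 3] = t           # camera origin expressed in the base frame
    p = np.append(point_cam, 1.0)   # homogeneous coordinates
    return (T @ p)[:3]


if __name__ == "__main__":
    # Hypothetical calibration: camera rotated 90 degrees about x, mounted 0.5 m above the base.
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, -1.0],
                  [0.0, 1.0, 0.0]])
    t = np.array([0.0, 0.0, 0.5])
    crack_point_cam = np.array([0.10, 0.05, 0.80])    # metres, from the vision system
    print(camera_to_base(crack_point_cam, R, t))       # target point for the repair trajectory
```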