Annual production of plastic waste is rising as a result of rapid population growth. Recycling and reusing plastic waste is essential for sustainable development. Because waste polythene is not biodegradable, putting it to productive use is the need of the hour. These materials are made of polymers such as polyethylene, polypropylene, and polystyrene. Adding plastic waste to flexible pavement has emerged as a desirable choice because it enhances pavement performance while mitigating an environmental problem. Bituminous concrete (BC) is a composite material often utilized in construction projects such as road paving, airport terminals, and stopover areas. It consists of mineral aggregate and asphalt (bitumen), which are combined, laid down in layers, and then compacted. In this research article, plastic was blended into the bituminous mixture as a chemical stabilizer. Plastic replaces the optimum bitumen content at 0%, 15%, 27%, and 36% by weight of bitumen, and the stability, Marshall value, and flow of the modified mixes are compared against the control bituminous mixture on a linear scale. The plastic-containing bituminous materials are characterized by SEM–EDX, XRD, FTIR, and BET analyses. There have been several studies on the addition of waste to bituminous mixes, but this one focuses on the use of plastic waste as a modifier in the bitumen binder for flexible pavement. The research indicates that bituminous mixes containing up to 4 percent plastic waste are excellent for sustainable development.
Recent diffusion-based AI art platforms can create impressive images from simple text descriptions, which makes them powerful tools for concept design in any discipline that requires creativity in visual design tasks. This includes the early stages of architectural design, with their multiple rounds of ideation, sketching, and modelling. In this paper, we investigate how applicable diffusion-based models already are to these tasks by researching the applicability of platforms such as Midjourney and DALL·E.
The use of deep generative models (DGMs) such as variational autoencoders, autoregressive models, flow-based models, energy-based models, generative adversarial networks, and diffusion models has been advantageous in various disciplines owing to their strong data-generation capabilities. DGMs have become one of the most active research topics in Artificial Intelligence in recent years. At the same time, research and development in the civil structural health monitoring (SHM) area has progressed rapidly owing to the increasing use of Machine Learning techniques, and some DGMs have lately been applied in the civil SHM field. This short review communication aims to help researchers in the civil SHM field understand the fundamentals of DGMs and, consequently, to help initiate their use for current and possible future engineering applications. On this basis, the study briefly introduces the concept and mechanism of different DGMs in a comparative fashion. While preparing this short review, it was observed that some DGMs had not been fully utilized or exploited in the SHM area. Accordingly, some representative civil SHM studies that use DGMs are briefly overviewed. The study also presents a short comparative discussion of DGMs, their link to SHM, and research directions.
Implementing Structural Health Monitoring (SHM) systems with extensive sensing layouts on all civil structures is obviously expensive and unfeasible. Thus, estimating the state (condition) of a civil structure from information collected on other, dissimilar structures is regarded as a useful and essential alternative. For this purpose, Structural State Translation (SST) has recently been proposed to predict the response data of a civil structure based on information acquired from a dissimilar structure. This study uses the SST methodology to translate the state of one bridge (Bridge #1) to a new state based on the knowledge acquired from a structurally dissimilar bridge (Bridge #2). Specifically, a Domain-Generalized Cycle-Generative (DGCG) model is trained with the Domain Generalization learning approach on two distinct data domains obtained from Bridge #1, whose two different conditions are denoted State-H and State-D. The model is then used to generalize and transfer the knowledge of Bridge #1 to Bridge #2; in doing so, DGCG translates the state of Bridge #2 to the state the model learned during training. In one scenario, Bridge #2's State-H is translated to State-D; in another, Bridge #2's State-D is translated to State-H. The translated bridge states are then compared with the real ones via modal identifiers and mean magnitude-squared coherence (MMSC), showing that the translated states are remarkably similar to the real ones: the modes of the translated and real bridge states agree with a maximum frequency difference of 1.12%, a minimum correlation of 0.923 in Modal Assurance Criterion (MAC) values, and a minimum of 0.947 in average MMSC values. In conclusion, this study demonstrates that SST is a promising methodology for research with data scarcity and for population-based structural health monitoring (PBSHM). A critical discussion of the methodology adopted in this study is also offered to address some related concerns.
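For readers unfamiliar with the two comparison metrics used here, the following is a minimal Python sketch of MAC and mean magnitude-squared coherence, assuming illustrative mode-shape vectors and a hypothetical sampling rate; it is not the study's own implementation.

```python
# Sketch of the two comparison metrics named in the abstract: the Modal
# Assurance Criterion (MAC) between mode shapes and the mean
# magnitude-squared coherence (MMSC) between response signals.
# The sampling rate and segment length below are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

def mac(phi_1: np.ndarray, phi_2: np.ndarray) -> float:
    """Standard MAC between two mode-shape vectors (1.0 = identical shapes)."""
    num = np.abs(phi_1.conj() @ phi_2) ** 2
    den = (phi_1.conj() @ phi_1).real * (phi_2.conj() @ phi_2).real
    return float(num / den)

def mmsc(x: np.ndarray, y: np.ndarray, fs: float = 256.0) -> float:
    """Mean magnitude-squared coherence between two acceleration records."""
    _, c_xy = coherence(x, y, fs=fs, nperseg=1024)
    return float(c_xy.mean())

# Example comparison of a translated channel against the measured one, e.g.
# mac(modes_real[:, 0], modes_translated[:, 0])  -> >= 0.923 per the study
```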
Estimating the water surface level in two-stage waterways is a crucial aspect of the design of flood control and diversion structures. Human activities carried out along rivers, such as agricultural and construction operations, can modify the geometry of floodplains, leading to compound channels with non-prismatic floodplains that may exhibit convergent, divergent, or skewed characteristics. In the current investigation, the Support Vector Machine (SVM) technique is employed to approximate the water surface profile of compound channels with narrowing floodplains. Models are constructed using a substantial body of experimental data obtained from both the present and previous investigations. Water surface profiles in these channels are estimated from non-dimensional geometric and flow parameters: the converging angle, width ratio, relative depth, aspect ratio, relative distance, and bed slope. The results indicate that the SVM-generated water surface profile exhibits a high degree of concordance with both the empirical data and the findings of previous research, as evidenced by its R² value of 0.99, RMSE of 0.0199, and MAPE of 1.263. Statistical analysis demonstrates that the developed SVM model is dependable and suitable for applications in this domain, exhibiting superior performance in forecasting water surface profiles.
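As a rough illustration of the modeling setup described above, the sketch below fits an SVM regressor to the six non-dimensional inputs; the RBF kernel, hyperparameters, and placeholder data are assumptions, since the abstract does not specify them.

```python
# Minimal SVM-regression sketch for the water surface profile. Columns of X:
# converging angle, width ratio, relative depth, aspect ratio, relative
# distance, bed slope. Data, kernel and hyperparameters are illustrative.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 6)), rng.random(200)  # placeholder data
X_test, y_test = rng.random((50, 6)), rng.random(50)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print("R2  :", r2_score(y_test, y_pred))
print("RMSE:", mean_squared_error(y_test, y_pred) ** 0.5)
```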
Rainfall forecasting can play a significant role in the planning and management of water resource systems. This study employs a Markov chain model to examine the patterns, distributions, and forecasts of annual maximum rainfall (AMR) data collected at three selected stations in the Kurdistan Region of Iraq, using 32 years (1990 to 2021) of rainfall data. A stochastic process is used to formulate three states (decrease, "d"; stability, "s"; and increase, "i") in a given year and to quantitatively estimate the probability of transitioning to any of the three states in the following year(s) and in the long run. In addition, the Markov model is used to forecast the AMR data for the upcoming five years (2022–2026). The results indicate that over the upcoming 5 years, the probability of the annual maximum rainfall decreasing is 44%, remaining stable is 16%, and increasing is 40%. Furthermore, for the AMR data series, the probabilities drop slowly from 0.433 to 0.409 over about 11 years, as indicated by the averaged data of the three stations. This study reveals that the Markov model can serve as an appropriate tool for forecasting future rainfall in semi-arid areas such as the Kurdistan Region of Iraq.
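A minimal sketch of the three-state Markov chain described above might look as follows; the state sequence is a placeholder, and in the study the transition matrix would be estimated from the 1990–2021 AMR series at each station.

```python
# Three-state Markov chain for AMR: "d" = decrease, "s" = stable, "i" = increase.
# The example sequence is a placeholder for the observed yearly state series.
import numpy as np

states = ["d", "s", "i"]
idx = {s: k for k, s in enumerate(states)}
sequence = ["d", "i", "d", "s", "i", "i", "d", "d", "s", "d"]  # placeholder

# Count observed transitions and row-normalize into the transition matrix P.
P = np.zeros((3, 3))
for a, b in zip(sequence[:-1], sequence[1:]):
    P[idx[a], idx[b]] += 1
P /= P.sum(axis=1, keepdims=True)

# Five-step forecast: state distribution 5 years ahead given the latest state.
p0 = np.zeros(3)
p0[idx[sequence[-1]]] = 1.0
p5 = p0 @ np.linalg.matrix_power(P, 5)
print(dict(zip(states, p5.round(3))))
```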
This paper presents experimental work on the mechanical and durability parameters of self-compacting concrete (SCC) with copper slag (CS) and fly ash (FA). In the first phase of the experiment, SCC mixes are prepared with six percentages of FA replacing cement, ranging from 5% to 30%. In the second phase, copper slag replaces fine aggregate in increments of 20% up to 100%, using the optimum FA percentage. The performance of SCC mixes containing FA and copper slag is measured in terms of fresh properties and compressive, split tensile, and flexural strengths. SCC durability metrics, such as resistance against chloride and voids in the concrete matrix, are measured with the rapid chloride ion penetration test (RCPT) and sorptivity techniques. The microstructure of the SCC is analyzed using SEM, and the phases present in the concrete matrix are identified with XRD analysis. It is found that replacing cement with 20% FA and fine aggregate with 40% copper slag delivers higher mechanical strengths in SCC. Resistance to chloride and voids in the concrete matrix reaches its optimum at a 40% dosage, up to which increasing the dosage improves the quality of the SCC. Therefore, it is recommended that copper slag be used as a sustainable replacement material for fine aggregate.
Water stored in reservoirs serves several crucial functions, including generating hydropower, supporting water supply, and relieving prolonged droughts. During floods, releases from reservoirs must be managed carefully so that the gross volume of stored water stays at a safe level and no release triggers flooding downstream. This study aims to develop a well-informed assessment method for managing reservoirs and pre-releasing water by using machine learning, a valuable, time-saving, and cost-effective supervised-learning approach. Two data-driven forecasting models, the Regression Tree (RT) and the Support Vector Machine (SVM), were trained on approximately 30 years of hydrological records to simulate reservoir outflows. The SVM and RT models were applied to the data and accurately predicted the fluctuations in the water outflows of the Bhakra reservoir. Different input combinations were used to determine the most effective release, and the number of cross-validation folds was varied. It is found that a quadratic SVM with 10 folds and seven input parameters gives the minimum RMSE, maximum R², and minimum MAE; it can therefore be considered the best model for the dataset used in this study.
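The best-performing configuration named above, a quadratic SVM evaluated with 10-fold cross-validation on seven inputs, could be reproduced in outline as follows; the data and remaining hyperparameters are placeholders, not the study's.

```python
# Quadratic (degree-2 polynomial kernel) SVM with 10-fold cross-validation.
# X has seven input parameters as in the abstract; the data are placeholders.
import numpy as np
from sklearn.model_selection import KFold, cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((360, 7))  # ~30 years of records, 7 inputs (assumed shape)
y = rng.random(360)       # reservoir outflow (assumed scaling)

quadratic_svm = make_pipeline(StandardScaler(),
                              SVR(kernel="poly", degree=2, coef0=1.0))
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(quadratic_svm, X, y, cv=cv,
                        scoring=("r2", "neg_root_mean_squared_error",
                                 "neg_mean_absolute_error"))
print("R2  :", scores["test_r2"].mean())
print("RMSE:", -scores["test_neg_root_mean_squared_error"].mean())
print("MAE :", -scores["test_neg_mean_absolute_error"].mean())
```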
Crushed over-burnt clay bricks (COBCBs) are a promising alternative to natural gravel aggregate in lightweight concrete (LWC) production because of their high strength-to-weight ratio. Moreover, COBCBs are considered a green aggregate, as their use helps solve the problem of solid waste disposal. In this paper, a total of fifteen reinforced concrete (RC) beams were constructed and tested up to failure. The experimental beams were classified into five groups: the control beams were cast with normal weight concrete (NWC), while the remaining four groups were prepared from LWC. The test parameters were the concrete type, reinforcement ratio, and silica fume (SF) content. The behavior of the beams was evaluated in terms of crack pattern, failure mode, ultimate deflection, and ductility. The experimental results showed that the weight and strength of the prepared concrete satisfied the requirements for LWC. In addition, increasing the reinforcement ratio and SF content improved the behavior of the beams; notably, the SF addition produced measurable enhancements in the majority of the performance characteristics of the LWC beams. Thus, COBCBs were successfully used as coarse aggregate in the production of high-quality LWC. Both ACI 318-19 and CSA-A23.3-19 gave acceptable predictions of the cracking moment, ultimate capacity, and mid-span deflection.
Slippery road conditions, such as snowy, icy or slushy pavements, are among the major threats to road safety in winter. The U.S. Department of Transportation (USDOT) spends over 20% of its maintenance budget on winter pavement maintenance. Yet despite extensive research, monitoring pavement conditions and detecting slippery roadways in real time remains challenging, and most existing studies rely on indirect estimates based on pavement images and weather forecasts. The emerging connected vehicle (CV) technology offers the opportunity to map slippery road conditions in real time. This study proposes a CV-based slippery-detection system that uses vehicles to acquire data and applies deep learning algorithms to predict the slipperiness of pavements. The system classifies pavement conditions into three major categories, dry, snowy, and icy, reflecting increasing levels of slipperiness: a dry surface is the least slippery and an icy surface the most. In practice, detected icy and snowy pavements deserve the most attention when driving or when carrying out winter pavement maintenance and road operations. The classification algorithm adopted is the Long Short-Term Memory (LSTM) network, an artificial Recurrent Neural Network (RNN); the LSTM model is trained with CV data simulated in VISSIM and optimized with a Bayesian algorithm. The system achieves 100%, 99.06% and 98.02% prediction accuracy for dry, snowy and icy pavement, respectively. In addition, it is observed that potential accidents can be reduced by more than 90% if CVs adjust their driving speed and maintain a greater distance from the leading vehicle after receiving a warning signal. Simulation results indicate that the proposed slippery-detection system and its information-sharing function, based on CV technology and the LSTM network implemented in this study, can deliver real-time detection of slippery pavement conditions and thus significantly reduce the potential risk of accidents.
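As an illustration of the classifier described above, here is a minimal LSTM sketch for the three pavement classes; the input shape, layer sizes and training settings are assumptions, whereas the study tunes its network with Bayesian optimization on VISSIM-simulated CV data.

```python
# Minimal LSTM classifier for dry (0), snowy (1) and icy (2) pavement.
# Sequence length, feature count and architecture are illustrative guesses.
import numpy as np
import tensorflow as tf

T, F = 30, 4  # assumed: 30 time steps of CV kinematics, 4 features per step
X = np.random.rand(1000, T, F).astype("float32")  # placeholder CV trajectories
y = np.random.randint(0, 3, size=1000)            # placeholder class labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, F)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)
```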
Current research on the Digital Twin (DT) is largely focused on the performance of built assets in their operational phases, as well as on the urban environment, whereas the construction phase has received far less DT attention. This paper therefore proposes a Digital Twin framework for the construction phase, develops a DT prototype, and tests it in a use case of measuring productivity and monitoring earthwork operations. The DT framework and its prototype are underpinned by the principles of versatility, scalability, usability and automation, enabling the DT to meet the requirements of large earthwork projects and the dynamic nature of their operation. Cloud computing and dashboard visualisation were deployed to enable automated, repeatable data pipelines and data analytics at scale and to provide insights in near-real time. Testing the DT prototype on a motorway project in the Northeast of England successfully demonstrated its ability to produce key insights: (i) predicting equipment utilisation ratios and productivities; (ii) detecting the percentage of time spent on different tasks (i.e., loading, hauling, dumping, returning or idling), the distance travelled by equipment over time and the speed distribution; and (iii) visualising certain earthwork operations.
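To make insights (i) and (ii) concrete, the following is a toy sketch of how utilisation ratios and task-time shares might be computed from equipment telemetry; the log schema and column names are invented for illustration and are not the project's actual pipeline.

```python
# Toy task-time analytics for one excavator from an assumed telemetry log.
import pandas as pd

log = pd.DataFrame({
    "equipment_id": ["EX01"] * 5,
    "task": ["loading", "hauling", "dumping", "returning", "idling"],
    "duration_min": [42, 95, 18, 80, 25],
})

per_task = log.groupby("task")["duration_min"].sum()
share = (per_task / per_task.sum() * 100).round(1)  # % of time per task
utilisation = 100 - share.get("idling", 0.0)        # non-idle share of time
print(share.to_string())
print(f"Utilisation ratio: {utilisation:.1f}%")
```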
The development of automatic methods to recognize cracks in concrete surfaces has been a focus of recent years, first through classical computer vision methods and more recently through convolutional neural networks, which are delivering promising results. Challenges still persist in crack recognition, notably the confusion added by the myriad of elements commonly found on concrete surfaces. These methods could be made robust to such elements if correspondingly heterogeneous datasets were available; even then, the methodology would be cumbersome, since training would be needed for each particular case and models would be case-dependent. Thus, efforts from the scientific community are focusing on generalizing neural network models to achieve high performance on images from domains slightly different from those on which they were trained. Such generalization can be achieved with domain adaptation techniques at the training stage: domain adaptation finds a feature space in which features from both domains are invariant, so that the classes become separable. The work presented here proposes DA-Crack, a domain adversarial training method for generalizing a neural network that recognizes cracks in images of concrete surfaces. The method uses a convolutional extractor followed by a classifier and a discriminator, and relies on two datasets: a labeled source dataset and a small unlabeled target dataset. The classifier is responsible for classifying randomly chosen images, while the discriminator tries to uncover which dataset each image belongs to. Backpropagation from the discriminator reverses the gradient used to update the extractor, counteracting the convergence promoted by the backpropagation from the classifier and thereby generalizing the extractor so that it can recognize cracks in images from both the source and target datasets. Results show that the DA-Crack training method improved crack-classification accuracy on the target dataset by 54 percentage points, while accuracy on the source dataset remained unaffected.
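The gradient-reversal mechanism at the heart of this training scheme can be sketched in a few lines of PyTorch; the extractor, classifier and discriminator below are deliberately small placeholders, not the DA-Crack architecture itself.

```python
# DANN-style gradient reversal: the discriminator's gradient is negated before
# reaching the extractor, pushing it towards domain-invariant features.
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # reversed, scaled gradient

extractor = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
classifier = nn.Linear(16, 2)     # crack / no-crack
discriminator = nn.Linear(16, 2)  # source / target domain

x = torch.randn(8, 3, 64, 64)     # placeholder image batch
features = extractor(x)
class_logits = classifier(features)
domain_logits = discriminator(GradReverse.apply(features, 1.0))
# The classifier loss trains extractor+classifier as usual; the discriminator
# loss, backpropagated through the reversed gradient, generalizes the extractor.
```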
Engineering inspection and maintenance technologies play an important role in the safety, operation, maintenance and management of buildings, and in project construction control the supervision of engineering quality is a difficult task. To address such inspection and maintenance issues, this study presents a computer-vision-guided semi-autonomous robotic system for the identification and repair of concrete cracks, with humans making the repair plans for the system. Concrete cracks are characterized through computer vision, and a crack feature database is established. Furthermore, a trajectory-generation and coordinate-transformation method is designed to determine the robot's execution coordinates. In addition, a knowledge-base repair method is examined for making appropriate decisions on repair technology for concrete cracks, and a robotic arm is designed for crack repair. Finally, simulations and experiments are conducted, proving the feasibility of the proposed repair method. The results of this study can potentially improve on-site automatic concrete crack repair while addressing issues such as the high accident rate, low efficiency, and severe shortage of skilled workers.
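The coordinate-transformation step can be illustrated with a standard homogeneous transform from the camera frame to the robot base frame; the numerical values below are invented for illustration and would in practice come from hand-eye calibration.

```python
# Mapping a crack point detected in camera coordinates into the robot base
# frame with a 4x4 homogeneous transform (rotation + translation, assumed).
import numpy as np

T_base_cam = np.array([
    [1.0, 0.0,  0.0, 0.30],
    [0.0, 0.0, -1.0, 0.05],
    [0.0, 1.0,  0.0, 0.40],
    [0.0, 0.0,  0.0, 1.00],
])

p_cam = np.array([0.12, -0.03, 0.55, 1.0])  # crack point in camera frame (m)
p_base = T_base_cam @ p_cam                 # same point in robot base frame
print("Execution coordinate:", p_base[:3])
```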
Causality is the science of cause and effect. It is through causality that explanations can be derived, theories can be formed, and new knowledge can be discovered. This paper presents a modern look into establishing causality within structural engineering systems. In this pursuit, this paper starts with a gentle introduction to causality. Then, this paper pivots to contrast commonly adopted methods for inferring causes and effects, i.e., induction (empiricism) and deduction (rationalism), and outlines how these methods continue to shape our structural engineering philosophy and, by extension, our domain. The bulk of this paper is dedicated to establishing an approach and criteria to tie principles of induction and deduction to derive causal laws (i.e., mapping functions) through explainable artificial intelligence (XAI) capable of describing new knowledge pertaining to structural engineering phenomena. The proposed approach and criteria are then examined via a case study.
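As a hedged illustration of the XAI ingredient of this approach, the sketch below fits a model to hypothetical structural data and interrogates it with permutation importance and partial dependence; the dataset, features and choice of XAI tools are assumptions, not the paper's case study.

```python
# Fit a model to hypothetical beam data, then ask it which inputs drive the
# response and how, one way of proposing candidate causal mapping functions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence, permutation_importance

rng = np.random.default_rng(0)
# Hypothetical features: span (m), load (kN), section depth (m);
# response: mid-span deflection with a deliberately known ~L^3 * P / d^3 form.
X = rng.uniform([4, 10, 0.3], [12, 50, 0.8], size=(500, 3))
y = X[:, 0] ** 3 * X[:, 1] / X[:, 2] ** 3 / 1e4 + rng.normal(0, 0.01, 500)

model = GradientBoostingRegressor().fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("Permutation importances:", imp.importances_mean.round(3))

# Partial dependence traces how the predicted deflection varies with span,
# a candidate (inductive) law that deduction can then attempt to verify.
pd_span = partial_dependence(model, X, features=[0])
```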