A hybrid machine learning model to estimate self-compacting concrete compressive strength

Hai-Bang LY , Thuy-Anh NGUYEN , Binh Thai PHAM , May Huu NGUYEN

Front. Struct. Civ. Eng., 2022, Vol. 16, Issue (8): 990–1002. DOI: 10.1007/s11709-022-0864-7
RESEARCH ARTICLE


Abstract

This study examined the feasibility of using the grey wolf optimizer (GWO) and artificial neural network (ANN) to predict the compressive strength (CS) of self-compacting concrete (SCC). The ANN-GWO model was created using 115 samples from different sources, taking into account nine key SCC factors. The validation of the proposed model was evaluated via six indices, including correlation coefficient (R), mean squared error, mean absolute error (MAE), IA, Slope, and mean absolute percentage error. In addition, the importance of the parameters affecting the CS of SCC was investigated utilizing partial dependence plots. The results proved that the proposed ANN-GWO algorithm is a reliable predictor for SCC’s CS. Following that, an examination of the parameters impacting the CS of SCC was provided.

Graphical abstract

Keywords

artificial neural network / grey wolf optimize algorithm / compressive strength / self-compacting concrete

Cite this article

Hai-Bang LY, Thuy-Anh NGUYEN, Binh Thai PHAM, May Huu NGUYEN. A hybrid machine learning model to estimate self-compacting concrete compressive strength. Front. Struct. Civ. Eng., 2022, 16(8): 990-1002 DOI:10.1007/s11709-022-0864-7


1 Introduction

Concrete is the most used material in the building and construction industry because it is easy to produce, affordable, and has valuable structural characteristics [1,2]. It may be used in various structures such as buildings, bridges, roads, and dams. As construction technology advances, the need for high-performance concrete (HPC) continues to grow. As a result, several particular concrete types have been proposed with notable physicochemical and fresh-state properties [3–5].

Japan’s construction industry adopted self-compacting concrete (SCC) quickly. This type of concrete flows into place and fills the corners of formwork without the need for a compaction phase [6,7]. Various studies have since focused on further developing this kind of concrete [8,9]. SCC is a type of HPC with good flowability, good segregation resistance, and less blockage around the reinforcement. Eliminating the compaction step brings several advantages, including economic efficiency (e.g., accelerated casting speed and savings in labour, energy, and cost), an improved working environment, and the possibility of novel approaches to automating concrete construction [6,10–13].

On the other hand, to achieve its desired flowable behaviour and good mechanical properties, SCC requires a complex balance of several mixture variables [10,11]. For instance, the SCC’s water-to-binder ratio (W/B) is lower than that of conventional concrete, and the mixture usually incorporates special additives and superplasticizers to obtain the desired workability [14–17]. Also, the grading of the aggregates, including aggregate shape, texture, mineralogy, and strength, is always carefully considered to ensure workability and concrete strength [18,19]. These features make it significantly challenging to establish a universal correlation between the SCC properties and its constituent parameters [8,9,20]. Traditional analytical models can represent the influence of each parameter upon the properties of SCC, and such models can then be optimized via regression analysis. However, no explicit equations have been established, because these methods of analysis are less productive for nonlinearly separable data and are also complicated [21,22].

In this regard, over the past few decades, various approaches have been adopted, such as stochastic multiscale methods [23–26], deep learning [27–30] and artificial neural networks (ANN) [31–33], for modeling a variety of current problems in the field of civil engineering [34–36]. Among these, ANN is a more general and efficient approach since it can classify and capture interrelationships among input–output data pairs. Numerous researchers have each proposed their own ANN model to predict different properties of concrete [37–39]. Several machine learning (ML) models have also been presented to predict the SCC’s compressive strength (CS) [40–42]. Yeh demonstrated the possibility of adapting ANN so that the CS of HPC can be predicted [40]. The viability of utilizing ANNs to forecast the characteristics of SCC that uses fly ash as a cementitious substitute was examined by Belalia Douma et al. [41]. In that work, numerous experimental results were collected from previous studies and employed to train and evaluate the proposed model. In another attempt, Siddique et al. [42] demonstrated the usability of neural networks to predict SCC’s CS. Their proposed model could be readily adapted to analyze the effects of alternative input parameters, such as using bottom ash to replace sand. Despite this, no comprehensive research into a better ANN algorithm to predict the CS of SCC has been conducted. The demand for a novel, adequate ANN model to estimate the SCC’s CS is growing all the time, in accordance with scientific progress.

Therefore, in the present research, the ANN approach optimized by the grey wolf optimizer (GWO) for forecasting the CS of SCC is examined. Databases from different independent sources were gathered and employed to train and assess the proposed model. The ANN model was established based on two groups of input parameters: concrete mixture components (i.e., the contents of binder, fly ash, fine and coarse aggregates, superplasticizer, and the W/B ratio) and the fresh properties of SCC, such as slump flow, V-funnel, and L-box test results. The SCC’s CS was considered the output parameter. The influence of the used parameters was then evaluated and discussed.

2 Materials and methods

2.1 Machine learning methods

2.1.1 Artificial neural network

ANN provides a commonly used basis for solving prediction problems by drawing on biology’s understanding of how the nervous system functions [40,43–45]. ANN contains many simple processing elements, the so-called neurons, and is made up of linked components and nodes. The three layers that make up the ANN are the input, hidden, and output layers. The ANN is able to predict a given target based on a set of input parameters, thanks to the training process [46].

In general, an ANN uses the minimum number of neurons needed to reproduce the training process. Each link between neurons carries a weight that represents an earlier learning stage, and the input–output correlation can be determined from changes in these weightings. The system is trained to reproduce the correlation between inputs and output through optimal weightings [47,48]. In neural networks, such a correlation is determined by the collected data points, and as the process is self-assessing, it is feasible to execute many processes simultaneously.

To take advantage of these benefits, most models use rules-of-thumb to determine the appropriate numbers of hidden layers and nodes; random designs can also be used. Furthermore, appropriate parameters such as learning rate and momentum are needed for the chosen hidden layers and nodes [40–42,49]. Overall, ANN has been found to be a reliable method for determining concrete CS.
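As a concrete illustration, a one-hidden-layer network of the kind described above reduces to two matrix operations. The sketch below is minimal and illustrative: the 9 inputs, 21 hidden neurons, and 1 output anticipate the [9-21-1] structure used later in this study, and the random weights are placeholders that an optimizer (here, the GWO) would tune against the training data.

```python
import numpy as np

def ann_forward(x, w1, b1, w2, b2):
    """Forward pass: input layer -> sigmoid hidden layer -> linear output neuron."""
    hidden = 1.0 / (1.0 + np.exp(-(x @ w1 + b1)))   # hidden-layer activations
    return hidden @ w2 + b2                          # predicted compressive strength

# 9 inputs and 1 output as in this study; 21 hidden neurons matches [9-21-1].
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(9, 21)), np.zeros(21)
w2, b2 = rng.normal(size=(21, 1)), np.zeros(1)
predictions = ann_forward(rng.random((4, 9)), w1, b1, w2, b2)  # 4 samples in, 4 out
```

Training then amounts to searching the weight and bias space for the values that minimize the prediction error over the collected data.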

2.1.2 Grey wolf optimizer

Optimization algorithms have been widely employed in engineering fields for over two decades. For example, the GWO, a metaheuristic optimization algorithm developed by Mirjalili et al. [50], is inspired by the leadership hierarchy and hunting behaviour of grey wolf packs. To simulate this hierarchy, each pack comprises four primary types of grey wolves, namely alpha, beta, delta, and omega, denoted as α, β, δ, and ω, respectively. In this structure, grey wolves follow strict rules that divide their responsibilities: α wolves carry the most responsibility, whereas ω wolves have the least (Fig.1).

The location of each grey wolf represents a candidate solution to the optimization problem. From a mathematical standpoint, the optimal solution is held by the α wolf, with the β and δ wolves holding the second- and third-best solutions, i.e., those in the closest proximity to the prey; these roles are reassigned at every iteration. The positions of all other wolves (i.e., the ω ones) are updated based on the positions of the α, β, and δ wolves. Based on this technique, several works [51–53] examined the reliability of the GWO in estimating CS.
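A minimal sketch of this position update can be written in a few lines. One iteration pulls every wolf toward the three current leaders, following the standard GWO update of Mirjalili et al. [50]; the sphere function below is a toy stand-in for the real training loss, not the objective used in this study.

```python
import numpy as np

def gwo_step(positions, fitness, a, rng):
    """One GWO iteration: every wolf moves to the average of three pulls toward
    the alpha, beta, and delta wolves: X_new = mean(X_lead - A * |C * X_lead - X|)."""
    order = np.argsort(fitness)                  # minimization: best fitness first
    leaders = positions[order[:3]]               # alpha, beta, delta positions
    new_pos = np.empty_like(positions)
    for i, x in enumerate(positions):
        pulls = []
        for leader in leaders:
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A, C = 2 * a * r1 - a, 2 * r2        # stochastic coefficient vectors
            D = np.abs(C * leader - x)           # distance to this leader
            pulls.append(leader - A * D)
        new_pos[i] = np.mean(pulls, axis=0)
    return new_pos

# Toy run: 30 wolves minimizing the sphere function in 4 dimensions.
sphere = lambda P: np.sum(P ** 2, axis=1)
rng = np.random.default_rng(2)
pos = rng.uniform(-5, 5, size=(30, 4))
init_best = sphere(pos).min()
for t in range(50):
    a = 2 * (1 - t / 50)                         # 'a' decays linearly from 2 to 0
    pos = gwo_step(pos, sphere(pos), a, rng)
final_best = sphere(pos).min()
```

The decaying coefficient `a` shifts the pack from exploration (large random steps) toward exploitation (converging on the leaders), which is what makes the algorithm suitable for tuning ANN weights and biases.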

2.2 Database construction

To realize the objective of the current study, a dataset containing 115 SCC CS records was collected from 12 published experimental works [42,54–64]. The ANN model is designed with nine inputs: the W/B ratio, coarse aggregate content (C), fly ash percentage (P), fine aggregate content (F), slump flow (D), binder content (B), V-funnel test, superplasticizer dosage (SP), and L-box test. In detail, the values of W/B, C, P, F, D, and B range between 0.26–0.45, 590–935 kg, 0–60%, 656–1038 kg, 480–880 mm, and 370–733 kg, respectively. The V-funnel test value ranges from 1.95 to 19.2, the superplasticizer dose is between 0.74 and 21.84, and the L-box test value is between 1.95 and 19.2. In addition, the CS values are in the range of 10.2 to 86.8 MPa. Tab.1 details the analysis of input and output parameters.

Herein, the proposed ANN model is trained using 70 percent of the 115 sets of experimental data, while 30 percent of the data are utilized to evaluate the model. Thus, there are 81 samples for the training data set and 34 samples used to determine the projected performance of the ANN network. All the real values are scaled in the range [0,1] to minimize simulation errors, as recommended in Witten and Frank [65], using Eq. (1):

χ_scaled = 2(χ − λ)/(μ − λ) − 1,

where λ and μ are the minimum and maximum values of the considered variable, respectively, and χ is the corresponding value of the variable to be scaled.
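Eq. (1) can be applied column by column before training; a minimal numpy sketch follows. Note that, as written, the transform maps the minimum and maximum of each variable to −1 and +1.

```python
import numpy as np

def scale_feature(x, lam, mu):
    """Eq. (1): x_scaled = 2 * (x - lam) / (mu - lam) - 1,
    with lam and mu the min and max of the variable."""
    return 2.0 * (x - lam) / (mu - lam) - 1.0

# Example on one input column, e.g., the W/B ratio (range 0.26-0.45 in this study).
x = np.array([0.26, 0.35, 0.45])
s = scale_feature(x, x.min(), x.max())  # endpoints map to -1 and +1
```

The inverse transform, `0.5 * (s + 1) * (mu - lam) + lam`, recovers the original units when reporting predicted strengths.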

2.3 Quality assessment criteria

In this work, six indicators were employed to assess the ML model’s prediction accuracy, namely the correlation coefficient (R), root mean squared error (RMSE), index of agreement (IA), mean absolute error (MAE), Slope, and mean absolute percentage error (MAPE). The R criterion, which lies in the range [−1, 1], is extensively employed in the literature [66]. The average degree of inaccuracy between actual and predicted outputs is measured by RMSE, MAE, and MAPE [67]. The closer the absolute value of R is to one, the more accurate the model. These criteria are defined as:

RMSE = √[(1/N) Σ_{j=1}^{N} (Q_{j,AV} − Q_{j,PV})²],

MAE = (1/N) Σ_{j=1}^{N} |Q_{j,AV} − Q_{j,PV}|,

MAPE = (1/N) Σ_{j=1}^{N} |Q_{j,AV} − Q_{j,PV}| / Q_{j,AV},

R = Σ_{j=1}^{N} (Q_{j,AV} − Q̄_AV)(Q_{j,PV} − Q̄_PV) / √[Σ_{j=1}^{N} (Q_{j,AV} − Q̄_AV)² · Σ_{j=1}^{N} (Q_{j,PV} − Q̄_PV)²],

IA = 1 − Σ_{j=1}^{N} (Q_{j,AV} − Q_{j,PV})² / Σ_{j=1}^{N} (|Q_{j,AV} − Q̄_AV| + |Q_{j,PV} − Q̄_PV|)²,

where N denotes the number of data points; Q_{j,AV} and Q̄_AV represent the actual values and their average; and Q_{j,PV} and Q̄_PV denote the predicted values and their average, computed from the predictive model.
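The criteria above can be computed directly with numpy. The sketch below follows the formulas as given; the Slope criterion is not defined explicitly above, so it is assumed here to be the slope of the least-squares regression of predicted on actual values.

```python
import numpy as np

def criteria(actual, predicted):
    """Compute the six assessment criteria used in this study."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((a - p) ** 2))
    mae = np.mean(np.abs(a - p))
    mape = np.mean(np.abs(a - p) / a)            # often reported as a percentage
    r = np.corrcoef(a, p)[0, 1]
    ia = 1.0 - np.sum((a - p) ** 2) / np.sum(
        (np.abs(a - a.mean()) + np.abs(p - p.mean())) ** 2)
    slope = np.polyfit(a, p, 1)[0]               # assumed definition of Slope
    return {"R": r, "RMSE": rmse, "MAE": mae, "MAPE": mape, "IA": ia, "Slope": slope}
```

For a perfect predictor, R, IA, and Slope all equal 1 while RMSE, MAE, and MAPE are 0, which is why the closer the observed values are to these limits, the better the model.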

2.4 Partial dependence plot

The partial dependency plot (PDP) was introduced by Friedman [68] to interpret complex ML algorithms. Some algorithms are highly predictive but do not show whether a variable affects the model positively or negatively. Hence, the PDP helps depict the functional relationship between the inputs and the target. The PDP can also indicate whether the relationship between the inputs and the target is linear, monotonic, or more complex.

Let X = {X₁, X₂, ..., Xₙ} be the model’s inputs, with predictive function Y(X). X is subdivided into a subset X_M and its complement X_N = X \ X_M.

For the output Y(X) of a ML algorithm, the partial dependence of y on a subset of variables XM is defined as:

Y_M(X_M) = E_{X_N}[Y(X_M, X_N)] = ∫ Y(X_M, X_N) P_N(X_N) dX_N,

where P_N(X_N) is the probability density of the marginal distribution of the variables in X_N, determined as follows:

P_N(X_N) = ∫ P(X) dX_M.

In fact, the PDP is simply calculated by averaging over a training data set:

Ȳ_M(X_M) = (1/n) Σ_{i=1}^{n} Y(X_M, X_{i,N}),

where Xi,N (i = 1,2,...,n) are the values of XN appearing in the training sample.

To simplify PDP construction, X_M = X₁ is set as the predictor variable of interest, with k unique values X₁₁, ..., X₁ₖ. The PDP is then built in the following steps.

Step 1: For i ∈ {1, 2, ..., k}, copy the training database and replace the original values of X₁ with the constant X₁ᵢ. From this modified copy of the training data, compute the vector of predicted values, then calculate its mean to obtain Ȳ₁(X₁ᵢ).

Step 2: From the pairs (X₁ᵢ, Ȳ₁(X₁ᵢ)) with i = 1, 2, ..., k, draw the graph representing the PDP.
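The two steps above can be sketched in a few lines; `model` stands in for any trained predictor (such as the ANN-GWO), and the toy additive model used for the check is purely illustrative.

```python
import numpy as np

def partial_dependence(model, X, col, grid):
    """Steps 1-2 above: for each candidate value, overwrite column `col` of every
    training row with that constant, predict, and average the predictions."""
    averaged = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, col] = value                    # Step 1: replace X1 with the constant X1i
        averaged.append(model(X_mod).mean())     # mean prediction over the modified copy
    return np.array(averaged)                    # Step 2: plot grid vs. these averages

# Toy check on a known additive model y = 3*x0 + x1: the PDP of x0 has slope 3.
toy_model = lambda X: 3 * X[:, 0] + X[:, 1]
X_train = np.random.default_rng(1).random((50, 2))
grid = np.array([0.0, 0.5, 1.0])
pdp = partial_dependence(toy_model, X_train, 0, grid)
```

Because the contribution of the other inputs is averaged out, the resulting curve isolates the marginal effect of the chosen variable, which is exactly what Fig.9 reports for each SCC input.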

3 Results and discussion

3.1 Analysis of optimal artificial neural networks and gray wolf optimization parameters

This part discusses how the structure of the proposed ANN-GWO model is optimized, including how to determine the population size in the GWO algorithm and how to choose the number of neurons in the hidden layer. In ML algorithms, the architecture of the ANN model plays a vital role. As previously mentioned, the ANN structure includes three layers, in which the hidden part could contain one or more layers. Research has demonstrated that ANN structures with a single hidden layer can successfully solve complex nonlinear problems [69–71]. Therefore, such an ANN architecture is adopted herein. The next problem is to identify the appropriate number of neurons in the ANN and to optimize the population size of the GWO algorithm. For this purpose, the GWO algorithm is run with the population size varying from 30 to 300 in steps of 30, and with the hidden layer’s neuron count varying from 3 to 30 in steps of 3. A grid search strategy is utilized to optimize these parameters. The effects of different values of the two parameters on the performance of the proposed model are evaluated according to the six statistical criteria mentioned above. Here, a maximum of 500 iterations is used for optimization purposes.
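The grid search described above can be sketched as follows. Here `evaluate` stands in for a full ANN-GWO training-and-scoring run, and the smooth error surface used for the check is hypothetical, chosen only so the sketch has a known minimum.

```python
import itertools

def grid_search(evaluate, neuron_grid, population_grid):
    """Evaluate every (neurons, population size) pair and keep the lowest error."""
    best, best_err = None, float("inf")
    for n, p in itertools.product(neuron_grid, population_grid):
        err = evaluate(n, p)                     # e.g., test-set RMSE of one ANN-GWO run
        if err < best_err:
            best, best_err = (n, p), err
    return best, best_err

neuron_grid = range(3, 31, 3)                    # 3 to 30 in steps of 3
population_grid = range(30, 301, 30)             # 30 to 300 in steps of 30
# Hypothetical error surface with its minimum at (21, 240), for illustration only.
toy_error = lambda n, p: (n - 21) ** 2 + ((p - 240) / 30) ** 2
best, err = grid_search(toy_error, neuron_grid, population_grid)
```

In practice each `evaluate` call is expensive (a full GWO run of up to 500 iterations), which is why the grids are kept coarse.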

Fig.2 presents 3D surfaces from the grid search. As observed, when the number of neurons is low, model efficiency remains low even as the population size increases. This is evident in the low R and IA values, whereas RMSE, MAE, and MAPE are high. Likewise, increasing the number of neurons does not improve prediction performance when the population size is low. However, increasing the number of neurons together with the population size allows the model’s performance to improve. The grid search results show that the best hyper-parameters of the ANN-GWO model are obtained with 21 neurons and a population size of 240. With this setting, all the performance evaluation criteria of the model are at their best.

3.2 Analysis of convergence

The preceding part used 1000 Monte Carlo simulations to find the optimal number of neurons in the hidden layer and the population size in GWO. This section performs a convergence estimation of all the quality assessment criteria. The normalized convergence of the six statistical criteria is shown in Fig.3. For the R, IA, and Slope indices, only roughly 200 simulations with the test set and 100 with the training set are required to reach convergence (less than 0.1%). The RMSE index converges most slowly, as only 1% normalized convergence is reached after 200 iterations. MAE and MAPE thus behave distinctly differently from RMSE, with convergence similar to that of R, IA, and Slope. The results indicate that all six criteria achieve stable convergence after 1000 Monte Carlo simulations, meaning that this number of runs was sufficient to assess the effectiveness and quantify the robustness of the proposed model.

3.3 Analysis of distribution of performance criteria

The statistical assessment of the ANN-GWO model’s performance is reported in this section. Fig.4 shows the probability distribution over 1000 simulations of the criteria, namely, R (Fig.4(a)), IA (Fig.4(b)), Slope (Fig.4(c)), RMSE (Fig.4(d)), MAE (Fig.4(e)), and MAPE (Fig.4(f)). Solid and dashed lines present the training and test set’s probability distribution function. In addition, the summary statistical information of all the error metrics for both parts is highlighted (Tab.2).

Regarding the R criterion, the average and StD values for the training part were 0.959 and 0.006, and 0.918 and 0.011 for the testing part (Tab.2). With the IA criterion, the average and StD values for the training part were 0.979 and 0.003, while those values were 0.957 and 0.006 for the testing part. The Slope criterion values were 0.917 and 0.012 with the training database; 0.951 and 0.012 with the testing database. The average and StD of RMSE for the training database were 5.099 and 0.390, while for the testing database these values were 6.594 and 0.445. For MAE, these values were 4.150 and 0.343 for the training database and 5.238 and 0.354 for the testing database. Finally, for the MAPE criterion, the mean and StD were 9.519 and 0.742 and these values were 12.214 and 0.834 for the training and testing datasets, respectively. These findings showed that the proposed ANN-GWO model might be a high-accuracy and reliable predictor of SCC’s CS.

3.4 Analysis of artificial neural networks optimization by gray wolf optimization

The weight and bias values of the ANN’s neurons are optimized using the GWO algorithm, with metrics such as R, RMSE, and MAE monitored throughout the process. Fig.5 presents the cost function evaluating the convergence of these criteria during training. As shown, more training iterations improve prediction accuracy, with lower RMSE and MAE values and higher R values. The results after five hundred iterations likewise prove trustworthy: the R, RMSE, and MAE measures are essentially identical from iteration 200 onwards, as shown in Fig.5. Consequently, the number of iterations for optimizing the ANN-GWO is chosen as 500, which ensures the relative error between two iterations is less than 0.1%.

3.5 Analysis of typical results

This section presents typical SCC CS prediction results, namely the best predictive run of the ANN-GWO model with structure [9-21-1] over 1000 Monte Carlo simulations. The regression analysis between the actual CS values and those output by the ANN-GWO algorithm is shown in Fig.6, for the training data (Fig.6(a)) and testing data (Fig.6(b)). Regression lines are highlighted in red to show the accuracy of ANN-GWO. The R values for the training and testing parts are 0.951 and 0.940, indicating the model’s excellent predictive ability. As can be deduced, there is a strong linear relationship between the ANN-GWO outputs and the actual CS. The performance of the proposed ANN-GWO model is detailed in Tab.3.

In addition, the model’s excellent predictive ability is confirmed by the error calculated for each sample in the database, as shown in Fig.7(a) for the 81 training data and Fig.7(b) for the 34 testing points. In the training data, most samples have errors in the range [−7, 8] MPa; only five samples have errors outside this range, with the largest error being −13 MPa. In the testing data, the errors mainly lie in the range [−9, 10] MPa. The errors are mostly concentrated around 0 for both the training and testing parts, indicating that the predictability of ANN-GWO is good.

Uncertainty analysis is also utilized to quantify the change in SCC’s CS as a result of changing input parameters. Estimating the uncertainty of prediction is needed to evaluate the reliability of the results [72]. Quantification is usually done by estimating statistical quantities of interest such as the mean, median, and percentiles of the population. Nine percentile levels of the target CS are specified, from Q10 up to Q90. The confidence intervals for estimating the CS of SCC are shown in Fig.8, showing the amount of data at each Q-level, along with the mean and the confidence intervals corresponding to 70%, 95%, and 99%. The confidence interval of the proposed ANN-GWO is observed to be the smallest, proving the high accuracy of the predictive model. This is in excellent agreement with the performance analysis presented above.

Finally, the proposed ANN-GWO model is compared with several state-of-the-art ML models, namely non-tuned ANN, support vector regression (SVR), k-nearest neighbors regression (KNN), decision tree (DT), and extreme learning machine (ELM). For comparison purposes, the same training and testing datasets applied to ANN-GWO are utilized to develop and evaluate these models. The final hyper-parameters for these models are briefly summarized in Tab.4 along with the prediction performance evaluated on the training and testing datasets. Based on the results in Tab.4, it can be concluded that the ANN-GWO achieves the highest prediction performance compared with other ML models.

3.6 Sensitivity analysis and discussion

Sensitivity analysis is an effective step to assess how well a model performs and how adequate the dataset is for a given prediction problem [73–75]. Thus, the effect of the inputs on the CS of SCC is discussed in this section. In detail, a PDP is utilized to estimate the influence of all input variables (i.e., W/B, C, P, F, D, B, V-funnel, SP, and L-box). The strategy is to vary one single input parameter while keeping the remaining input parameters constant at their median values. Thus, the approach can quantify the sensitivity of the model to each input variable. Herein, each input was sampled at the following percentile levels: 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100. One input parameter was chosen before running the model eleven times (i.e., from the 0 to the 100% level), with the other parameters maintained at their median values (i.e., the 50% level). Principally, the approach delivers the deviation of the output variable while varying the input parameters. In the current study, the deviation of the output, or the sensitivity rate δ_mn, for the nth input parameter is estimated by:

δ_mn = (Q_mn − Q_ref)/Q_ref,

where Q_ref is the reference output and Q_mn is the output obtained using the nth input at the corresponding mth percentile. In addition, the following formula is used to obtain the global sensitivity of each input:

Δ_n = Σ_{m=1}^{11} |δ_mn|.
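The procedure above, sweeping one input over its eleven percentile levels while holding the others at their medians, can be sketched as follows; `model` stands in for the trained ANN-GWO, and the toy linear model used for the check is illustrative only.

```python
import numpy as np

def global_sensitivity(model, X, n, percentiles=np.linspace(0, 100, 11)):
    """Delta_n = sum over the 11 percentile levels of |delta_mn|, where
    delta_mn = (Q_mn - Q_ref) / Q_ref and all other inputs sit at their medians."""
    base = np.median(X, axis=0)
    q_ref = model(base[None, :])[0]              # reference output (all inputs at median)
    total = 0.0
    for pct in percentiles:
        x = base.copy()
        x[n] = np.percentile(X[:, n], pct)       # sweep input n across its percentiles
        total += abs((model(x[None, :])[0] - q_ref) / q_ref)
    return total

# Toy check: in y = 2*x0 + x1 + 5, the first input should score higher than the second.
toy_model = lambda M: 2 * M[:, 0] + M[:, 1] + 5
X_train = np.random.default_rng(3).random((200, 2))
s0 = global_sensitivity(toy_model, X_train, 0)
s1 = global_sensitivity(toy_model, X_train, 1)
```

Normalizing each Δ_n by their sum yields the percentage importances reported in Fig.10.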

Tab.5 lists the outputs of the proposed ANN-GWO model for each percentile. According to the PDP analysis (Fig.9), the input parameters W/B and C impacted the predicted results significantly, followed by the input parameter P. The remaining parameters had an insignificant effect on the ANN-GWO outputs compared with W/B, C, and P.

Clearly, W/B, C, and P are vital input variables. In other words, changing these parameters can significantly change the output of the ANN-GWO model. Fig.10 depicts the overall sensitivity index, determined by summing the degrees of sensitivity over all percentiles for each input variable [76]. It is noticed that W/B and C are the two most important factors (accounting for 20% and 29%) affecting the CS of SCC, followed by input P (accounting for 16.5%). A further group of important inputs includes five parameters, namely F, D, B, V-funnel, and SP (whose sensitivity indices range from 4% to 5.5%). Finally, the parameter with the least impact on the CS of SCC is L-box (only about 1%).

4 Conclusions

This research uses an ANN algorithm to predict the CS of SCC, utilizing the GWO algorithm to optimize the neuron weights and biases. Nine input factors were considered, including six related to the SCC mix composition and three fresh-state properties. The model was constructed using an experimental database of 115 observations. In addition, six statistical criteria were used to assess the accuracy of ANN-GWO in predicting SCC’s CS.

The findings indicate that the proposed ANN-GWO is a promising model to predict the CS of SCC, with R, IA, Slope, RMSE, MAE, and MAPE values of 0.951, 0.974, 0.904, 5.132, 4.112, and 9.293 for the training data set, respectively. Similarly, these six criteria reached 0.94, 0.969, 0.96, 5.515, 4.427, and 10.2 for the testing data set, respectively. Additionally, a sensitivity analysis based on PDP was used to determine the impact on the CS of SCC of the nine inputs, namely the W/B ratio, C, P, F, D, B, V-funnel test, SP, and L-box test. The findings suggest that the most critical factors are W/B, C, and P. Overall, the ANN-GWO predictor can be employed as a robust and trustworthy solution for highly nonlinear problems, including the prediction of SCC’s CS, with great accuracy. However, the GWO used is only one type of heuristic algorithm; several other commonly used heuristic methods, such as the genetic algorithm (GA), particle swarm optimization (PSO), differential evolution (DE), simulated annealing (SA), and tabu search, should be investigated for a thorough comparison. Furthermore, sensitivity analysis should be expanded to include methods other than PDP. Future studies should address these factors to give multidimensional approaches.

References

[1]

Neville A M. Properties of Concrete. Pearson, 2011

[2]

Kovler K, Roussel N. Properties of fresh and hardened concrete. Cement and Concrete Research, 2011, 41(7): 775–792

[3]

Wiktor V, Jonkers H M. Quantification of crack-healing in novel bacteria-based self-healing concrete. Cement and Concrete Composites, 2011, 33(7): 763–770

[4]

Kan A, Demirboğa R. A novel material for lightweight concrete production. Cement and Concrete Composites, 2009, 31(7): 489–495

[5]

Houst Y F, Bowen P, Perche F, Kauppi A, Borget P, Galmiche L, Le Meins J F, Lafuma F, Flatt R J, Schober I, Banfill P F G, Swift D S, Myrvold B O, Petersen B G, Reknes K. Design and function of novel superplasticizers for more durable high performance concrete (superplast project). Cement and Concrete Research, 2008, 38(10): 1197–1209

[6]

Okamura H. Self-compacting high-performance concrete. Concrete International, 1997, 19: 50–54

[7]

Ozawa K. High-performance concrete based on the durability design of concrete structures. In: Proceedings of the Second East Asia-Pacific Conference on Structural Engineering and Construction. 1989

[8]

Domone P L. A review of the hardened mechanical properties of self-compacting concrete. Cement and Concrete Composites, 2007, 29(1): 1–12

[9]

Shi C, Wu Z, Lv K, Wu L. A review on mixture design methods for self-compacting concrete. Construction & Building Materials, 2015, 84: 387–398

[10]

Okamura H, Ouchi M. Self-compacting concrete. Journal of Advanced Concrete Technology, 2003, 1(1): 5–15

[11]

Domone P L. Self-compacting concrete: An analysis of 11 years of case studies. Cement and Concrete Composites, 2006, 28(2): 197–208

[12]

Brouwers H J H, Radix H J. Self-compacting concrete: Theoretical and experimental study. Cement and Concrete Research, 2005, 35(11): 2116–2136

[13]

Shi C, Yanzhong W. Mixture proportioning and properties of self-consolidating lightweight concrete containing glass powder. ACI Materials Journal, 2005, 102: 355–363

[14]

Haach V G, Vasconcelos G, Lourenço P B. Influence of aggregates grading and water/cement ratio in workability and hardened properties of mortars. Construction & Building Materials, 2011, 25(6): 2980–2987

[15]

Molero M, Segura I, Izquierdo M A G, Fuente J V, Anaya J J. Sand/cement ratio evaluation on mortar using neural networks and ultrasonic transmission inspection. Ultrasonics, 2009, 49(2): 231–237

[16]

Okamura H, Ozawa K, Ouchi M. Self-compacting concrete. Structural Concrete, 2000, 1(1): 3–17

[17]

Esmaeilkhanian B, Khayat K H, Yahia A, Feys D. Effects of mix design parameters and rheological properties on dynamic stability of self-consolidating concrete. Cement and Concrete Composites, 2014, 54: 21–28

[18]

Yaşar E, Erdoğan Y, Kılıç A. Effect of limestone aggregate type and water–cement ratio on concrete strength. Materials Letters, 2004, 58(5): 772–777

[19]

Uysal M, Tanyildizi H. Estimation of compressive strength of self compacting concrete containing polypropylene fiber and mineral additives exposed to high temperature using artificial neural network. Construction & Building Materials, 2012, 27(1): 404–414

[20]

Sathyan D, Anand K B, Prakash A J, Premjith B. Modeling the fresh and hardened stage properties of self-compacting concrete using random kitchen sink algorithm. International Journal of Concrete Structures and Materials, 2018, 12(1): 1–10

[21]

Sonebi M. Applications of statistical models in proportioning medium-strength self-consolidating concrete. ACI Materials Journal, 2004, 101(5): 339–346

[22]

Sonebi M. Medium strength self-compacting concrete containing fly ash: Modelling using factorial experimental plans. Cement and Concrete Research, 2004, 34(7): 1199–1208

[23]

Vu-Bac N, Lahmer T, Keitel H, Zhao J, Zhuang X, Rabczuk T. Stochastic predictions of bulk properties of amorphous polyethylene based on molecular dynamics simulations. Mechanics of Materials, 2014, 68: 70–84

[24]

Vu-Bac N, Lahmer T, Zhang Y, Zhuang X, Rabczuk T. Stochastic predictions of interfacial characteristic of polymeric nanocomposites (PNCs). Composites. Part B, Engineering, 2014, 59: 80–95

[25]

Vu-Bac N, Silani M, Lahmer T, Zhuang X, Rabczuk T. A unified framework for stochastic predictions of mechanical properties of polymeric nanocomposites. Computational Materials Science, 2015, 96: 520–535

[26]

Vu-Bac N, Rafiee R, Zhuang X, Lahmer T, Rabczuk T. Uncertainty quantification for multiscale modeling of polymer nanocomposites with correlated parameters. Composites. Part B, Engineering, 2015, 68: 446–464

[27]

Guo H, Zhuang X, Rabczuk T. A deep collocation method for the bending analysis of Kirchhoff plate. Computers, Materials & Continua, 2019, 59(2): 433–456

[28]

Samaniego E, Anitescu C, Goswami S, Nguyen-Thanh V M, Guo H, Hamdia K, Zhuang X, Rabczuk T. An energy approach to the solution of partial differential equations in computational mechanics via machine learning: Concepts, implementation and applications. Computer Methods in Applied Mechanics and Engineering, 2020, 362: 112790

[29]

Guo H, Zhuang X, Chen P, Alajlan N, Rabczuk T. Stochastic deep collocation method based on neural architecture search and transfer learning for heterogeneous porous media. Engineering with Computers, 2022, 1–26

[30]

Guo H, Zhuang X, Chen P, Alajlan N, Rabczuk T. Analysis of three-dimensional potential problems in non-homogeneous media with physics-informed deep collocation method using material transfer learning and sensitivity analysis. Engineering with Computers, 2022, 1–22

[31]

Liu B, Vu-Bac N, Zhuang X, Fu X, Rabczuk T. Stochastic full-range multiscale modeling of thermal conductivity of polymeric carbon nanotubes composites: A machine learning approach. Composite Structures, 2022, 289: 115393

[32]

Liu B, Vu-Bac N, Rabczuk T. A stochastic multiscale method for the prediction of the thermal conductivity of polymer nanocomposites through hybrid machine learning algorithms. Composite Structures, 2021, 273: 114269

[33]

Anitescu C, Atroshchenko E, Alajlan N, Rabczuk T. Artificial neural network methods for the solution of second order boundary value problems. Computers, Materials & Continua, 2019, 59(1): 345–359

[34]

Dao D V, Ly H B, Trinh S H, Le T T, Pham B T. Artificial intelligence approaches for prediction of compressive strength of geopolymer concrete. Materials (Basel), 2019, 12(6): 983

[35]

Ly H B, Pham B T, Dao D V, Le V M, Le L M, Le T T. Artificial intelligence approaches for prediction of compressive strength of geopolymer concrete. Materials (Basel), 2019, 12(6): 3841

[36]

Le T H, Nguyen H L, Pham B T, Nguyen M H, Pham C T, Nguyen N L, Le T T, Ly H B. Artificial intelligence-based model for the prediction of dynamic modulus of stone mastic asphalt. Applied Sciences, 2020, 10(15): 5242

[37]

Sebastiá M, Fernández Olmo I, Irabien A. Neural network prediction of unconfined compressive strength of coal fly ash–cement mixtures. Cement and Concrete Research, 2003, 33(8): 1137–1146

[38]

Lee S C. Prediction of concrete strength using artificial neural networks. Engineering Structures, 2003, 25(7): 849–857

[39]

Ly H B, Nguyen M H, Pham B T. Metaheuristic optimization of Levenberg–Marquardt-based artificial neural network using particle swarm optimization for prediction of foamed concrete compressive strength. Neural Computing and Applications, 2021, 33(24): 17331–17351

[40]

Yeh I C. Modeling of strength of high-performance concrete using artificial neural networks. Cement and Concrete Research, 1998, 28(12): 1797–1808

[41]

Belalia Douma O, Boukhatem B, Ghrici M, Tagnit-Hamou A. Prediction of properties of self-compacting concrete containing fly ash using artificial neural network. Neural Computing & Applications, 2017, 28(S1): 707–718

[42]

Siddique R, Aggarwal P, Aggarwal Y. Prediction of compressive strength of self-compacting concrete containing bottom ash using artificial neural networks. Advances in Engineering Software, 2011, 42(10): 780–786

[43]

Topçu İ B, Sarıdemir M. Prediction of rubberized concrete properties using artificial neural network and fuzzy logic. Construction & Building Materials, 2008, 22(4): 532–540

[44]

Trtnik G, Kavčič F, Turk G. Prediction of concrete strength using ultrasonic pulse velocity and artificial neural networks. Ultrasonics, 2009, 49(1): 53–60

[45]

Dias W P S, Pooliyadda S P. Neural networks for predicting properties of concretes with admixtures. Construction & Building Materials, 2001, 15(7): 371–379

[46]

Pradhan B, Lee S. Landslide susceptibility assessment and factor effect analysis: backpropagation artificial neural networks and their comparison with frequency ratio and bivariate logistic regression modelling. Environmental Modelling & Software, 2010, 25(6): 747–759

[47]

Nehdi M, Djebbar Y, Khan A. Neural network model for preformed-foam cellular concrete. Materials Journal, 2001, 98(5): 402–409

[48]

Delnavaz M, Ayati B, Ganjidoust H. Prediction of moving bed biofilm reactor (MBBR) performance for the treatment of aniline using artificial neural networks (ANN). Journal of Hazardous Materials, 2010, 179(1−3): 769–775

[49]

Aiyer B G, Kim D, Karingattikkal N, Samui P, Rao P R. Prediction of compressive strength of self-compacting concrete using least square support vector machine and relevance vector machine. KSCE Journal of Civil Engineering, 2014, 18(6): 1753–1758

[50]

Mirjalili S, Mirjalili S M, Lewis A. Grey wolf optimizer. Advances in Engineering Software, 2014, 69: 46–61

[51]

Li B, Yan H, Zhang J, Zhou N. Compaction property prediction of mixed gangue backfill materials using hybrid intelligence models: A new approach. Construction & Building Materials, 2020, 247: 118633

[52]

Behnood A, Golafshani E M. Predicting the compressive strength of silica fume concrete using hybrid artificial neural network with multi-objective grey wolves. Journal of Cleaner Production, 2018, 202: 54–64

[53]

Golafshani E M, Behnood A, Arashpour M. Predicting the compressive strength of normal and high-performance concretes using ANN and ANFIS hybridized with Grey Wolf Optimizer. Construction & Building Materials, 2020, 232: 117266

[54]

Gettu R, Izquierdo J, Gomes P C C, Josa A. Development of high-strength self compacting concrete with fly ash: A four-step experimental methodology. In: Proceedings of the 27th Conference on Our World in Concrete & Structures. Singapore: CI-Premier Pte Ltd., 2002, 217–224

[55]

Patel R. Development of statistical models to simulate and optimize self-consolidating concrete mixes incorporating high volumes of fly ash. Thesis for the Master’s Degree. Toronto: Toronto Metropolitan University, 2004

[56]

Uysal M, Yilmaz K. Effect of mineral admixtures on properties of self-compacting concrete. Cement and Concrete Composites, 2011, 33(7): 771–776

[57]

Mahalingam B, Nagamani K. Effect of processed fly ash on fresh and hardened properties of self compacting concrete. International Journal of Earth Sciences and Engineering, 2011, 4: 930–940

[58]

Bingöl A F, Tohumcu İ. Effects of different curing regimes on the compressive strength properties of self compacting concrete incorporating fly ash and silica fume. Materials & Design, 2013, 51: 12–18

[59]

Siddique R, Aggarwal P, Aggarwal Y. Influence of water/powder ratio on strength properties of self-compacting concrete containing coal fly ash and bottom ash. Construction & Building Materials, 2012, 29: 73–81

[60]

Nepomuceno M C, Pereira-de-Oliveira L A, Lopes S M R. Methodology for the mix design of self-compacting concrete using different mineral additions in binary blends of powders. Construction & Building Materials, 2014, 64: 82–94

[61]

Güneyisi E, Gesoğlu M, Özbay E. Strength and drying shrinkage properties of self-compacting concretes incorporating multi-system blended mineral admixtures. Construction & Building Materials, 2010, 24(10): 1878–1887

[62]

Krishnapal P, Yadav R K, Rajeev C. Strength characteristics of self compacting concrete containing fly ash. Research Journal of Engineering Sciences, 2013, 9472: 1–5

[63]

Dhiyaneshwaran S, Ramanathan P, Baskar I, Venkatasubramani R. Study on durability characteristics of self-compacting concrete with fly ash. Jordan Journal of Civil Engineering, 2013, 7: 342–352

[64]

Şahmaran M, Yaman İ Ö, Tokyay M. Transport and mechanical properties of self consolidating concrete with high volume fly ash. Cement and Concrete Composites, 2009, 31(2): 99–106

[65]

Witten I H, Frank E. Data mining: Practical machine learning tools and techniques with Java implementations. SIGMOD Record, 2002, 31(1): 76–77

[66]

Menard S. Coefficients of determination for multiple logistic regression analysis. The American Statistician, 2000, 54(1): 17–24

[67]

Willmott C, Matsuura K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Climate Research, 2005, 30: 79–82

[68]

Friedman J H. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 2001, 29(5): 1189–1232

[69]

Cybenko G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 1989, 2(4): 303–314

[70]

Bounds L, Mathew W. A multilayer perceptron network for the diagnosis of low back pain. In: Proceedings of IEEE 1988 International Conference on Neural Networks. New York: IEEE, 1988, 481–489

[71]

Mohamad E T, Faradonbeh R S, Armaghani D J, Monjezi M, Majid M Z A. An optimized ANN model based on genetic algorithm for predicting ripping production. Neural Computing & Applications, 2017, 28(S1): 393–406

[72]

Stern F, Muste M, Beninati M, Eichinger W E. Summary of Experimental Uncertainty Assessment Methodology with Example. IIHR Report No. 406. 1999

[73]

Vu-Bac N, Lahmer T, Zhuang X, Nguyen-Thoi T, Rabczuk T. A software framework for probabilistic sensitivity analysis for computationally expensive models. Advances in Engineering Software, 2016, 100: 19–31

[74]

Vu-Bac N, Zhuang X, Rabczuk T. Uncertainty quantification for mechanical properties of polyethylene based on fully atomistic model. Materials (Basel), 2019, 12(21): 3613

[75]

Liu B, Vu-Bac N, Zhuang X, Rabczuk T. Stochastic multiscale modeling of heat conductivity of Polymeric clay nanocomposites. Mechanics of Materials, 2020, 142: 103280

[76]

Ly H B, Le L M, Duong H T, Nguyen T C, Pham T A, Le T T, Le V M, Nguyen-Ngoc L, Pham B T. Hybrid artificial intelligence approaches for predicting critical buckling load of structural members under compression considering the influence of initial geometric imperfections. Applied Sciences (Basel, Switzerland), 2019, 9(11): 2258

RIGHTS & PERMISSIONS: Higher Education Press
