Optimization of machine learning models for predicting the compressive strength of fiber-reinforced self-compacting concrete

Hai-Van Thi MAI , May Huu NGUYEN , Son Hoang TRINH , Hai-Bang LY

Front. Struct. Civ. Eng. ›› 2023, Vol. 17 ›› Issue (2): 284–305. DOI: 10.1007/s11709-022-0901-6

RESEARCH ARTICLE
Abstract

Fiber-reinforced self-compacting concrete (FRSCC) is a typical construction material, and its compressive strength (CS) is a critical mechanical property that must be adequately determined. In the machine learning (ML) approach to estimating the CS of FRSCC, the current research gaps include the limitations of samples in databases, the applicability constraints of models owing to limited mixture components, and the possibility of applying recently proposed models. This study developed different ML models for predicting the CS of FRSCC to address these limitations. Artificial neural network, random forest, and categorical gradient boosting (CatBoost) models were optimized to derive the best predictive model with the aid of a 10-fold cross-validation technique. A database of 381 samples was created, representing the most significant FRSCC dataset compared with previous studies, and it was used for model development. The findings indicated that CatBoost outperformed the other two models with excellent predictive abilities (root mean square error of 2.639 MPa, mean absolute error of 1.669 MPa, and coefficient of determination of 0.986 for the test dataset). Finally, a sensitivity analysis using a partial dependence plot was conducted to obtain a thorough understanding of the effect of each input variable on the predicted CS of FRSCC. The results showed that the cement content, testing age, and superplasticizer content are the most critical factors affecting the CS.

Keywords

compressive strength / self-compacting concrete / artificial neural network / decision tree / CatBoost


1 Introduction

Self-compacting concrete (SCC) is a well-known form of concrete that can self-flow and fill holes or areas with small gaps. SCC is frequently used in the building industry to increase economic efficiency and concrete quality. Mineral and chemical additives are often blended into standard concrete mixtures to produce SCC with higher workability and other performance characteristics [1,2]. Owing to advancements in building materials, fiber-reinforced self-compacting concrete (FRSCC) has been developed [2–4]. The use of fibers in SCC has numerous advantages for building structures, including reducing unexpected failures, improving the modulus of rupture, preventing crack propagation, limiting shrinkage, increasing flexural strength, and enhancing tensile strength and ductility [5–8]. Studies have shown that these SCC properties are directly influenced by fiber characteristics such as type, shape, and content [4,9–11]. Steel, carbon, polypropylene (PP), polyester (PE), and glass are the most prevalent fibers used to reinforce SCC [12–14].

Compressive strength (CS) is considered an essential mechanical characteristic of concrete in general, and of FRSCC in particular. Currently, two main methods are used to determine the CS of FRSCC: non-destructive and destructive techniques. Ultrasonic pulse velocity [15] and Schmidt hammer [16] tests are commonly used in the non-destructive method. However, because it only measures the CS indirectly through the texture, density, or stiffness of a material, this method often provides only an approximation of the CS. In the destructive method, the CS is determined directly by loading the test specimen to failure in a compression testing machine [4,17–20]. Because the maximum bearing capacity of the material is directly determined at the failure point through loading, the result is more accurate than that of the non-destructive technique [21]. However, the destructive method is typically only used on a limited number of laboratory samples. In addition to being time-consuming, these procedures require extensive testing facilities and skilled personnel [22]. In addition, several studies have presented simple regression models to estimate the CS based on a few of the most influential factors [8,11,23]. However, the CS of FRSCC depends on various factors; hence, the use of relatively few variables may not effectively represent its CS features. In contrast, increasing the number of variables complicates the regression procedure [24–27]. Therefore, an accurate and effective method for estimating the CS of FRSCC is required.

Over the past few decades, numerous modeling approaches have been implemented in civil engineering. These approaches include ensemble models [28–30], finite element analysis [31–37], machine learning (ML) [38–43], and other novel methods [44–48]. Such approaches have been used to model various contemporary problems. Among these, ML techniques have been developed and widely employed in recent years to solve various practical difficulties [49–55]. Several ML models have been effectively deployed to predict the CS of SCC. For instance, Asteris et al. [56] applied multivariate adaptive regression splines (MARS) and an M5P-type tree to construct a new formula to predict the 28-d CS of SCC containing metakaolin based on seven inputs: the cement content, coarse aggregate to fine aggregate ratio, water, metakaolin, water reducer, and maximum particle sizes of the aggregate and binder. The results showed that the MARS and M5P methods are effective models for predicting the CS of SCC. The effect of the inputs on the CS was assessed using sensitivity analysis, which showed that the cement content was the most influential variable. In addition, Farooq et al. [57] applied an artificial neural network (ANN), a support vector machine (SVM), and gene expression programming (GEP) to estimate the CS of SCC incorporating waste materials. Input factors such as cement, the water-to-cement (W/C) ratio, coarse and fine aggregates, fly ash, and superplasticizers were used in the models. The findings indicated that GEP is the most effective model. They also demonstrated that variables such as the percentages of cement and fly ash had the greatest impact on the CS (53% of the total parameter influence). The authors also stated that knowing this ratio in the mixture is critical for evaluating the CS of SCC, and that neglecting this influence may result in less-than-ideal outcomes. Uysal and Tanyildizi [58] used four different backpropagation algorithms with an ANN to predict the CS loss of SCC containing PP fiber and mineral additives. The samples were exposed to high temperatures (200, 400, 600, and 800 °C) at an age of 56 d and compared with control samples kept at 20 °C. Tests were conducted to determine the loss of CS, resulting in a dataset comprising 85 samples and 11 inputs. The results showed that, at 600 °C, all concrete samples exhibited a significant decrease in CS, although the FRSCC showed less fire spalling. The findings indicated that the BFGS quasi-Newton-based ANN model provided the highest prediction accuracy. In addition, Meesaraganda et al. [22] used an ANN on a dataset containing 99 samples with five inputs (i.e., aggregate content, glass fiber, PP fiber, additive, and W/C ratio) to predict the CS of SCC. The developed ANN model was compared with other approaches and demonstrated superior prediction accuracy and high reliability. The authors also suggested that the ANN algorithm can be applied to concrete materials if sufficient source data are available. Nguyen et al. [59] developed an ANN model using the Levenberg–Marquardt (LM) algorithm along with an adaptive neuro-fuzzy inference system (ANFIS) on 131 data points to estimate the CS of FRSCC at an age of 28 d. The models used eight main input factors related to the mixture components. An ANN-LM with a single hidden layer of six nodes performed better than the ANFIS model. The study also showed that the 28-d CS of FRSCC was more susceptible to changes in the W/C ratio and less sensitive to changes in fiber content.
Thus, ML-based algorithms outperform conventional methods because they can address complex problems involving many influencing factors; in the studies above, the number of inputs varied from 2 to 11 and the database sizes ranged from 40 to 131 samples. A summary of previous studies using ML models to predict the CS of FRSCC is given in Tab.1.

Overall, ML-based models for predicting the CS of FRSCC have been proven to be a promising approach but are still limited owing to the difficulties of gathering data to improve their generalizability (i.e., a limited number of data points, the need to consider other mixture components such as mineral powders, and different fiber types and shapes). Moreover, no research has been conducted to estimate the CS of FRSCC using newly developed ML models such as CatBoost; only classical models, such as ANN, ANFIS, fuzzy logic (FL), SVM, or regression models, have been introduced thus far, and CatBoost has not been compared with other strategies such as random forest (RF) or ANN. The current limitations of ML applications in predicting the CS of FRSCC are therefore 1) limited experimental data, 2) model generalizability with respect to different mixture components, and 3) the application of newly developed ML models.

Therefore, the main objective of this paper is to address the current research gaps by proposing an effective and reliable ML model to predict the CS of FRSCC. Hence, we constructed a database with more experimental data, including 381 samples from 11 relevant studies. Moreover, to improve the generalizability of the model, we considered 15 inputs related to the CS of FRSCC in this database. In this study, k-fold cross-validation (CV) was used to assess the prediction capability of the models and to control overfitting. In addition, after fine-tuning the model hyperparameters, we subjected the best ML algorithm to a sensitivity analysis to investigate the effect of variables on the CS of FRSCC. Different statistical measures were used to assess the predictive performances of the models.

2 Database description and analysis

The prediction performance of an ML model is determined by various variables, one of which is the database. For this research, a database of 381 samples was collected from 11 published papers, the details of which are reported in Tab.2. To the best of our knowledge, this is the largest dataset collected for ML studies on the prediction of the CS of FRSCC.

For the simulation using ML algorithms, the collected database was randomly separated into two sets. For the training dataset, 70% of the total data were randomly selected and used to train the model and were cross-validated for hyperparameter selection. For the testing dataset, the remaining 30% of the data were used to evaluate the accuracy of the proposed models. Additionally, all data values were standardized to a range between 0 and 1 during model construction to minimize errors generated by the simulation. Tab.3 summarizes the statistical analyses of the input and output variables.
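As a minimal sketch of this split-and-scale step (assuming a pandas DataFrame loaded from a hypothetical file frscc_database.csv with the 15 input columns and a CS target column; the paper only specifies the 70/30 random split and the scaling to [0, 1]):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Hypothetical column layout: 15 mixture/age variables plus the target "CS" (MPa).
data = pd.read_csv("frscc_database.csv")
X, y = data.drop(columns=["CS"]), data["CS"]

# 70% training / 30% testing random split, as described in the paper.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)

# Scale every input to [0, 1]; the scaler is fitted on the training set only.
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
```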

The distribution of the input data utilized in this investigation is shown in Fig.1. Most of the input variables had a large range of values. More precisely, the cement content ranged from 220 to 754 kg/m3. The coarse aggregate content ranged between 0 and 1311.9 kg/m3 but was most concentrated between 700 and 850 kg/m3. Similarly, a reasonably large range of fine aggregate content was observed, ranging from 0.83 to 1220 kg/m3, with most of the variation occurring between 800 and 1200 kg/m3. The water content mostly ranged between 137 and 239 kg/m3. The limestone content varied widely, ranging from 0 to 288.9 kg/m3. The fly ash content varied between 0 and 306 kg/m3. The steel fiber content was mostly concentrated between 0 and 45 kg/m3, with the highest value being 156 kg/m3. The testing age of the samples ranged from 1 to 90 d. Additionally, a few input factors had a limited data distribution, such as glass fiber (0–8 kg/m3), PP fiber (0–12 kg/m3), nano-CuO (0–13.8 kg/m3), superplasticizer (0–33 kg/m3), and viscosity-modifying admixture (VMA, 0–0.9 L/m3).

Additionally, Fig.2 shows the association between input parameters and CS. The degree of correlation can be defined based on the Pearson correlation (rs) value. For the equation used to calculate this index, interested readers can refer to Ref. [68]. The correlation was classified into the following levels: rs = 0–0.19 (very weak correlation), rs = 0.2–0.39 (weak correlation), rs = 0.4–0.59 (moderate correlation), rs = 0.6–0.79 (high correlation), and rs = 0.8–1 (very high correlation). Thus, based on the values of rs in Fig.2, the correlation between the input variables and output was rather low (i.e., max rs = 0.54). Only a few exceptional cases were determined, with a few high correlations between pairs of input variables, such as cement and coarse aggregate with steel fiber (rs = 0.83 and –0.88, respectively), coarse aggregate with superplasticizer (rs = –0.84), and steel fiber with superplasticizer (rs = 0.84). To ensure the generalizability of the ML model, we retained the 15 input parameters of the dataset in the input space.
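One way to reproduce this correlation screening (a sketch, reusing the hypothetical DataFrame data from the previous snippet) is to compute the pairwise Pearson coefficients with pandas and map them to the qualitative bands listed above:

```python
def correlation_level(rs: float) -> str:
    """Map |rs| to the qualitative bands used in the paper."""
    rs = abs(rs)
    if rs < 0.2:
        return "very weak"
    if rs < 0.4:
        return "weak"
    if rs < 0.6:
        return "moderate"
    if rs < 0.8:
        return "high"
    return "very high"

corr = data.corr(method="pearson")                     # full pairwise correlation matrix
cs_corr = corr["CS"].drop("CS").sort_values(key=abs, ascending=False)
for name, rs in cs_corr.items():
    print(f"{name:20s} rs = {rs:+.2f} ({correlation_level(rs)})")
```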

3 Methods

3.1 Machine learning methods

3.1.1 Artificial neural network

An ANN is a type of ML algorithm that is widely employed in various applications. An ANN is primarily based on clusters of neurons that serve as the fundamental units of the system; an artificial neuron is essentially a signal converter designed for a particular purpose. The model's name derives from the fact that its activity closely resembles that of neurons in the human brain. At its most fundamental level, an ANN comprises three distinct layers: an input layer, one or more hidden layers, and an output layer [69,70]. Fig.3 depicts the proposed network design.

3.1.2 Random forest

Ho [71] proposed the random decision forest method in 1995. Breiman later developed the RF model by combining the bagging concept with random feature selection [72]. Bagging is an ensemble training approach that incorporates bootstrap and aggregation techniques. Bootstrapping generates independent, identically distributed datasets by randomly sampling the original dataset, and the aggregation stage uses these datasets to individually train the base predictors. Finally, the predictions of the individual trees are averaged, and the resulting output is taken as the target output [73]. Regression and classification applications can benefit from the speed, simplicity, and high accuracy of RF.

3.1.3 Categorical gradient boosting

CatBoost is a gradient boosting technique that processes categorical features in the input parameters directly during training rather than during pre-processing. The method uses ordered boosting in place of the classic gradient boosting scheme. This approach can successfully manage noisy points during training, reduce gradient estimation errors, address the unavoidable gradient bias that arises during iteration, and improve model generalization. Decision trees are employed as the base predictors, and the splits at each level of a tree must be identical. Thus, the tree structure is balanced, an unbiased estimate of the gradient is obtained, and overfitting is mitigated [74].

3.2 Partial dependence plots

A partial dependence plot (PDP) analysis indicates the marginal influence of one or two features on the prediction outcomes of an ML model. A PDP may indicate whether the relationship between a target and a feature is linear, monotonic, or complex. The partial dependence function is the average prediction obtained when all data points are assigned the same value of the feature of interest. To accurately depict how a feature affects the prediction, the feature should not be strongly correlated with the other features.
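scikit-learn ships a PDP implementation; the sketch below shows how one-way PDPs of the kind used in this study could be produced, assuming a fitted scikit-learn-compatible regressor model, a DataFrame X_df of training inputs, and hypothetical column names (the paper does not state which PDP tool was used):

```python
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

# Hypothetical feature names; "model" is any fitted scikit-learn-compatible regressor
# and "X_df" is a DataFrame of the training inputs.
features = ["cement", "age", "superplasticizer"]
PartialDependenceDisplay.from_estimator(model, X_df, features, kind="average")
plt.tight_layout()
plt.show()
```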

3.3 Performance indices of models

The performance of the model varies depending on the configuration of the parameters. For the best predictive model, multiple parameter choices must be considered and their impact on prediction performance evaluated. The coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE) are the three indicators considered in this study. The proposed ML algorithms are considered accurate when they have low RMSE and MAE values and R2 values close to 1. Readers can refer to Refs. [75–77] for the calculation of these criteria.
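These three criteria correspond directly to scikit-learn metrics; a short sketch, assuming arrays of observed and predicted CS values in MPa (variable names are hypothetical):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def report(y_true, y_pred, label=""):
    """Print the three criteria used in this study."""
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))   # MPa
    mae = mean_absolute_error(y_true, y_pred)             # MPa
    r2 = r2_score(y_true, y_pred)                          # dimensionless
    print(f"{label}: RMSE = {rmse:.3f} MPa, MAE = {mae:.3f} MPa, R2 = {r2:.3f}")

# Example usage with a fitted model (hypothetical names):
# report(y_test, model.predict(X_test_scaled), label="testing")
```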

4 Methodology flowchart

The methodology of developing the ANN, RF, and CatBoost models for the prediction of the CS of FRSCC is explained in this section. The framework of the methodology was divided into different stages: data preparation, model development, best model selection, and performance evaluation.

First, data were collected from experimental research and randomly separated into two data sets: training (70%) and testing (30%).

Second, the training stage entails creating an initial model with the default hyperparameter values. The optimization process was then conducted successively for each hyperparameter, and the best value found in the previous step was used directly when optimizing the remaining hyperparameters. All hyperparameter-setting modifications were built and adjusted using the averaged results of a 10-fold CV procedure. The best model was selected based on the obtained evaluation metrics and retained for further analysis.

Third, the testing portion of the database was used to assess the performance of the models using the proposed metrics.

Finally, to better comprehend the link between predictor and response variables, we conducted partial dependence analysis to capture the main impacts of the individual predictors and the interaction effects between predictor variables. Fig.4 shows a schematic of the method.

5 Results and discussion

In this study, three numerical simulation tools, namely ANN, RF, and CatBoost, were built to predict the CS of FRSCC. The primary objective of this procedure was to determine the values of the hyperparameters that provide the ML models with their best performance. A set of relevant parameters was selected for optimization, and a turn-by-turn approach was conducted using a simplified grid search method to reduce the computation time. To optimize the first parameter of each model, we varied its value within a given range, whereas the remaining parameters were set to their default values. After the best value of the first hyperparameter was determined, it was fixed in the model, and the other hyperparameters were successively optimized using the same process. Additionally, to determine the optimal structures of the ANN, RF, and CatBoost, we focused on predictability and stability, which were compared based on the R2 and corresponding standard deviation (StD) evaluation criteria. The R2 value was determined by averaging the 10-fold CV results. It is important to note that the 10-fold CV approach was applied only to the training dataset (which accounted for 70% of the total data) and not to the testing dataset (30%); the testing dataset remained unseen by the models throughout the training and validation phases. Moreover, the training and testing datasets were randomly split (by controlling a so-called “random state” in Python’s Scikit-learn library) following a uniform distribution. After 30 simulations, we observed that the prediction performance was stable with a minor variation in R2 (StD of 0.02). Therefore, only the results with the highest prediction accuracies are presented.
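A sketch of this turn-by-turn search for one hyperparameter is given below, illustrated for the RF number of estimators (the grids and variable names are assumptions); the same loop is repeated for each subsequent hyperparameter while keeping the values already selected:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

cv = KFold(n_splits=10, shuffle=True, random_state=42)     # 10-fold CV on the training set only
best_params = {}

scores = {}
for n in range(5, 105, 5):                                  # candidate numbers of estimators
    model = RandomForestRegressor(n_estimators=n, random_state=42, **best_params)
    r2_folds = cross_val_score(model, X_train_scaled, y_train, cv=cv, scoring="r2")
    scores[n] = (r2_folds.mean(), r2_folds.std())            # mean R2 and StD over the 10 folds

best_params["n_estimators"] = max(scores, key=lambda n: scores[n][0])
# The selected value is kept fixed while the next hyperparameter (e.g., max_depth) is varied.
```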

5.1 Optimizing the ANN parameters

The procedure for optimizing the ANN parameters is presented in this section. The number of neurons in the hidden layer, activation function for the hidden layer, solver for weight optimization, and number of training epochs were the four parameters selected, as the efficiency of the ANN is highly dependent on these factors. Studies have shown that a single-hidden-layer ANN model is sufficient to correctly determine the complex relationship between the inputs and outputs [78,79]. As a result, the ANN structure comprised three layers: an input layer with 15 neurons (reflecting 15 inputs), an output layer with one neuron corresponding to the CS of the FRSCC, and a hidden layer in the middle with the optimal number of neurons to be determined. Different researchers have developed various formulae to estimate the number of neurons in a hidden layer, but the findings have been considerably variable [80–82].

First, this study explored the effects of varying the number of neurons in the hidden layer from 1 to 30, whereas default values were selected for all remaining parameters. Fig.5(a) shows the prediction performance of the ANN structures with various numbers of neurons using the 10-fold CV approach; the R2 values represent the average of the 10 folds. The general trend was that, as the number of neurons in the hidden layer increased, the predictive ability of the model tended to increase with an increase in R2, except for a few special cases, such as the ANN structure with eight neurons in the hidden layer outperforming the structure with nine neurons (R2 = 0.852). In addition, the model was less stable in predicting the output when the StD values of the metrics were large. The simulation results revealed that the stability of the model decreased when the number of neurons was 8, 16, 21, 27, 29, or 30; in particular, the cases with 18 and 29 neurons corresponded to the largest StD values in the validation part. The final ANN structure was selected with 19 neurons in the hidden layer because the 10-fold CV scores were fairly accurate, as indicated by the relatively high mean R2 values of 0.915 for the training dataset and 0.859 for the validation part. The prediction performance of this model was also stable, as indicated by the small StD values of 0.016 and 0.092 for the training and validation datasets, respectively. Although some other structures had higher R2 values, they were not selected because of their unstable predictive capacity, reflected by high StD values. For example, the ANN structure with 18 neurons had an R2 of 0.938 but an StD of 0.481, whereas the ANN structure with 16 neurons had an R2 of 0.910 but an StD of 0.246.

As mentioned earlier, the determined optimal number of neurons (i.e., 19 neurons) was used to determine the appropriate activation function for the ANN model. Fig.5(b) shows the prediction results of various ANN architectures using the logistic, tanh, ReLU, and identity activation functions for the training and validation phases. Compared with the other three options, the ReLU activation function was the most suitable because it provided the best accuracy and prediction stability (R2 = 0.915 and StD = 0.016 for training, and R2 = 0.860 and StD = 0.092 for validation).

In the next step, Fig.5(c) shows the process of determining the suitable solver for weight optimization for the proposed ANN architecture. Two possible solvers were considered: the limited-memory Broyden–Fletcher–Goldfarb–Shanno (lbfgs) algorithm or the adaptive moment estimation (Adam) solver. The lbfgs solver had a better and more stable prediction performance than the Adam solver. Specifically, the lbfgs solver resulted in an R2 of 0.86 for validation and 0.93 for training, while using the Adam solver resulted in low values of R2 (i.e., 0.70 and 0.60 for training and validation scores, respectively).

The number of epochs, which is also an important parameter of the ANN model, is another parameter that must be tuned. In this case, the number of epochs was varied from 100 to 1000, and other ANN hyperparameters were used, including 19 neurons, the ReLU function, and the lbfgs solver. Generally, the prediction performance of the ANN improved as the number of epochs increased (Fig.5(d)). However, when the epochs exceeded 700, the predictive performance of the models stopped improving and remained at R2 = 0.92 for training and R2 = 0.86 for validation scores. As a result, an ANN structure with 800 epochs was selected.

Thus, after the ANN optimization process, the optimal ANN structure used 19 neurons in the hidden layer, the ReLU activation function, the lbfgs solver for weight optimization, and 800 epochs. The performance of the best ANN model was R2 = 0.92 for the training score and 0.86 for the validation score.
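Assuming the ANN corresponds to scikit-learn's MLPRegressor (the activation and solver names quoted above match that API), the selected configuration could be written as:

```python
from sklearn.neural_network import MLPRegressor

# Optimal ANN reported in this study: one hidden layer of 19 neurons,
# ReLU activation, lbfgs weight optimization, and 800 training epochs (max_iter).
ann = MLPRegressor(hidden_layer_sizes=(19,),
                   activation="relu",
                   solver="lbfgs",
                   max_iter=800,
                   random_state=42)
ann.fit(X_train_scaled, y_train)
print("Training R2:", ann.score(X_train_scaled, y_train))
```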

5.2 Optimizing the parameters of the RF model

This section presents the optimization of the RF model parameters to determine the most accurate RF model. For tree-based ML models such as RF, the common hyperparameters are 1) the number of estimators, 2) the maximum depth, 3) the minimum number of samples required to split a node, 4) the minimum number of samples per leaf, and 5) the maximum number of leaf nodes. The optimization approach was similar to that of the above-mentioned ANN model. The number of estimators was optimized first, and the other parameters were then successively fine-tuned, with the remaining parameters set to their default values. The final RF model, with all five optimal parameters and the highest prediction ability, was selected for further analysis.

Fig.6 shows the 10-fold CV results of the parameter optimization of the RF model. Fig.6(a) shows the results with different numbers of estimators ranging from 5 to 100 in steps of 5. The prediction capability of the RF model did not vary significantly as the number of estimators increased during the training stage, and all the models had R2 values of approximately 0.982. However, there was a distinction in the validation step. As the number of estimators increased from 5 to 15, the R2 value increased from 0.912 to 0.924; R2 then decreased as the number of estimators continued to increase from 20 onwards. As a result, 15 estimators were selected because the model had the highest prediction capability in this scenario. Moreover, the model with 15 estimators was the most stable, as verified by the lowest StD value (StD = 0.048) compared with all other RF models. Similarly, the optimization results for the depth of the trees, reflected by the maximum depth hyperparameter, are depicted in Fig.6(b). A value of 12 was selected because the performance of the model could not be improved with higher values.

The investigation results for the minimum number of samples to split a node (Fig.6(c)) and the minimum number of samples per leaf (Fig.6(d)) revealed that increasing these values reduced the prediction performance of the RF models. Thus, the optimal RF model had the minimum values for both hyperparameters: a minimum sample split of 2 and a minimum sample leaf of 1.

To determine the appropriate number of leaf nodes, we gradually increased its value from 5 to 100 in steps of 5 (Fig.6(e)). The results show that the RF model with 60 leaf nodes had the highest performance and was the most stable, with R2 = 0.98 (training score) and 0.92 (validation score). Increasing the number of leaf nodes beyond 60 left the R2 values unchanged and did not improve the accuracy of the model. The maximum number of leaf nodes was therefore set to 60.

Overall, after optimizing the five parameters, the optimal RF model included 15 estimators, a maximum tree depth of 12, a minimum sample split of 2, a minimum sample leaf of 1, and 60 leaf nodes. This RF model yielded R2 = 0.98 (training score) and 0.92 (validation score) in predicting the CS of FRSCC.
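A sketch of this final configuration, assuming the scikit-learn RandomForestRegressor naming of the five tuned hyperparameters:

```python
from sklearn.ensemble import RandomForestRegressor

# Optimal RF reported in this study.
rf = RandomForestRegressor(n_estimators=15,
                           max_depth=12,
                           min_samples_split=2,
                           min_samples_leaf=1,
                           max_leaf_nodes=60,
                           random_state=42)
rf.fit(X_train_scaled, y_train)
print("Training R2:", rf.score(X_train_scaled, y_train))
```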

5.3 Optimizing the parameters of CatBoost

Similar to the previous sections, this section presents the optimization results of the hyperparameters of the CatBoost model. The number of iterations, maximum tree depth, loss guide maximum leaves, and growing policy were the parameters to be optimized.

As shown in Fig.7(a), the CatBoost model structure with 400 iterations (equivalent to 400 estimators) was the most optimal because of its higher accuracy and stability compared with other cases, as evidenced by the maximum R2 and minimum StD for the validation scores (i.e., R2 = 0.971 and StD = 0.015, respectively).

When changing the tree depth from 1 to 10 (Fig.7(b)), a tree depth of 6 yielded the best prediction results in both the training and validation phases, with the highest R2 value (R2 = 0.971) and lowest StD (StD = 0.015). The prediction performance of the model was high and stable when using a maximum tree depth of 6. In particular, the outcome of this scenario was comparable to the one before because the package containing the algorithm had a default value of maximum depth set to 6.

Next, different CatBoost architectures with 400 iterations, a maximum tree depth of 6, and maximum leaves varying from 2 to 30 were investigated (Fig.7(c)). The CatBoost model with 20 maximum leaves had the highest prediction accuracy (R2 = 0.999 for the training score and 0.974 for the validation score) compared with the other CatBoost models. In addition, this structure provided the smallest StD values (StD = 0 for training and 0.010 for validation).

The growing policy was the last parameter to be investigated (Fig.7(d)). The symmetric growing policy was clearly superior to the depthwise growing policy, as shown by its higher R2 value for validation.

As a result of the investigation, the best CatBoost model structure was identified: a CatBoost model with 400 iterations, maximum tree depth = 6, maximum leaves = 20, and a symmetric growing policy.
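A possible instantiation with the catboost package is sketched below. Note that in CatBoost the maximum-leaves option belongs to the Lossguide growing policy; under the selected SymmetricTree policy the tree shape is governed by the depth alone, so only the parameters compatible with that policy are set here.

```python
from catboost import CatBoostRegressor

# Optimal CatBoost structure reported in this study: 400 iterations,
# depth 6, symmetric (oblivious) tree growing policy.
cat = CatBoostRegressor(iterations=400,
                        depth=6,
                        grow_policy="SymmetricTree",
                        loss_function="RMSE",
                        random_seed=42,
                        verbose=0)
cat.fit(X_train_scaled, y_train)
print("Training R2:", cat.score(X_train_scaled, y_train))
```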

5.4 Representative prediction results

The optimization of the hyperparameters of the three models, namely ANN, RF, and CatBoost, is described in the previous sections. As mentioned, the best ANN architecture yielded a prediction performance of R2 = 0.92 for training and 0.86 for validation. With the RF model, prediction results of R2 = 0.98 (training) and 0.92 (validation) were obtained. Finally, the CatBoost model was the optimal model, yielding robust predictive outcomes with R2 = 0.998 for training and 0.971 for validation. Therefore, the optimized CatBoost model had the best predictive ability.

The representative simulation results of the best CatBoost model are shown in Fig.8, Fig.9, and Fig.10. Fig.8 shows the resulting correlation between the predicted and experimental CSs. The model’s training ability was almost ideal, whereas for the testing phase, most of the samples had prediction results that were very close to the experimental results.

Furthermore, the relative errors between the actual CS and CatBoost simulation results are presented (Fig.9). In this case, the relative error was defined using Eq. (1) as follows:

$$\text{Relative error} = \frac{\left|\text{Actual} - \text{Output}\right|}{\text{Actual}}. \tag{1}$$

The higher predictability of the CatBoost model was reflected by the low errors and the closeness of the output value to the actual value. Most of these errors were very small (approximately 1% for the training and 2% for the testing parts). Only one sample (No. 140) of a total of 267 samples in the training dataset had an error of 4%, and one sample (No. 82) of a total of 114 samples in the test dataset had an error of 25%. However, the number of samples with a high error was insignificant compared with the total number. Therefore, the predictions made by the CatBoost model were accurate.

The actual and CatBoost-predicted CS values are compared in Fig.10. The error histograms between the predicted and actual CS values for both the training (Fig.10(a)) and testing (Fig.10(b)) datasets provide further evidence of the excellent accuracy of the CatBoost model. The errors were small overall for both datasets, with most error values concentrated in the ranges [–1, 1] and [–3, 3] MPa for the training and testing parts, respectively. Given these small errors, it is clear that the proposed CatBoost model for calculating the CS of FRSCC has an excellent predictive capacity.

In addition, the values of the statistical criteria further demonstrated the high predictive performance of the model. For the best CatBoost model, these values were RMSE = 0.589 MPa, MAE = 0.441 MPa, and R2 = 0.999 for the training dataset, and RMSE = 2.639 MPa, MAE = 1.669 MPa, and R2 = 0.986 for the testing dataset. The very high R2 values combined with the low errors proved that the proposed CatBoost algorithm can accurately predict the CS of FRSCC.

Finally, the best CatBoost model was also compared with some state-of-the-art ML models, namely extreme gradient boosting (XGB) [83], light gradient boosting machine (LGB) [84], and a classical one, the decision tree (DT) model. Readers can refer to the literature for the theoretical basis of the ML models. Herein, only the prediction results are shown to highlight the effectiveness of the proposed approach in achieving the most accurate CatBoost model. For hyperparameter selection of these three models, a preliminary analysis was conducted and the default values were reasonably accepted for comparison purposes. The results are shown in Fig.11 for each dataset. The CatBoost model had better prediction results than the XGB model (R2 = 0.985 and 0.979 for training and testing data, respectively), LGB model (R2 = 0.915 and 0.887 for training and testing data, respectively), and DT model (R2 = 1 and 0.933 for training and testing data, respectively). Moreover, the DT model exhibited an overfitting problem, as the training phase achieved absolute accuracy, whereas the accuracy for the testing phase was much lower.
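For reference, a sketch of how such a default-hyperparameter comparison could be run, assuming the xgboost and lightgbm packages and the train/test split defined earlier:

```python
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

baselines = {"XGB": XGBRegressor(random_state=42),
             "LGB": LGBMRegressor(random_state=42),
             "DT": DecisionTreeRegressor(random_state=42)}

for name, model in baselines.items():
    model.fit(X_train_scaled, y_train)
    train_r2 = r2_score(y_train, model.predict(X_train_scaled))
    test_r2 = r2_score(y_test, model.predict(X_test_scaled))
    print(f"{name}: train R2 = {train_r2:.3f}, test R2 = {test_r2:.3f}")
```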

5.5 Discussion on PDP

PDP analysis was conducted to investigate the prediction outcomes based on 15 input parameters with the finely tuned CatBoost model that has been determined previously. Sequentially, the influence of each input variable on the predicted CS was computed by adjusting the value of the parameter under examination while maintaining the values of the other 14 parameters constant. This enabled the effect of each input variable to be evaluated independently.

First, Fig.12(a) depicts the PDP of the cement content. Generally, for a greater quantity of cement present, the CS of FRSCC increased. Thus, the cement content had a beneficial effect on the CS. This observation was in good agreement with the experimental findings of Ref. [85]. The CS of FRSCC improved considerably when the cement content was between 350 and 500 kg/m3 (from 56 to 72.5 MPa), and the CS increased by approximately 29.5%. This demonstrated that utilizing an appropriate proportion of cement increases the CS of FRSCC.

Fig.12(b)–Fig.12(d) depict the PDP curves for coarse aggregate, fine aggregate, and water, respectively. The results demonstrated that these three input factors had a negative effect on the CS of FRSCC. When the coarse aggregate content increased from 750 to 1000 kg/m3, the CS decreased from 68 to 60 MPa; at coarse aggregate contents higher than 1000 kg/m3, the CS remained constant. Similarly, an increase in fine aggregate content from 800 to 1100 kg/m3 caused the CS to decrease by 8.2%. Moreover, when the water content increased from 145 to 180 kg/m3, the CS decreased from 70.5 to 62 MPa (approximately 12.1%). This was also confirmed in several previously published works [86–88].

The fly ash content also positively influenced the CS, as shown by the PDP results in Fig.12(e). This means that when the fly ash content increased within the examined range, the CS increased. Specifically, when the fly ash increased in the range of 50–300 kg/m3, the CS changed from 62 to 73 MPa (by approximately 17.7%). This result was consistent with that reported in Ref. [89].

The effects of fiber type on the CS of FRSCC are shown in Fig.12(f)–Fig.12(h). First, we observed that the presence of steel fibers had a beneficial impact on the CS of FRSCC: the CS increased significantly from 64.5 to 75 MPa (approximately 16.3%) when the amount of steel fiber increased from 0 to 158 kg/m3. The other two fiber types, i.e., glass and PP fibers, had both negative and positive effects (denoted as Mix) on the CS of FRSCC; the CS increased only within a specific range and otherwise decreased. Specifically, when the glass fiber content varied within 0–4 kg/m3, the CS increased from 65.7 to 68.8 MPa (approximately 4.7%), whereas in the range between 4 and 8 kg/m3, the CS decreased from 68.8 to 64.6 MPa (approximately 6.1%). Thus, we may conclude that the optimal glass fiber content is 4 kg/m3. Similarly, when the PP fiber content increased from 0 to 2 kg/m3, the CS of FRSCC decreased; however, when the PP fiber content increased from 2 to 9 kg/m3, the CS increased. This indicated that the optimal PP fiber content is 9 kg/m3. Experimental studies by Beigi et al. [66] and Ghorbani et al. [90] reached the same conclusion when adding fibers to SCC. Additionally, input factors such as limestone, nano-CuO, and VMA had a mixed influence on the CS (Fig.12(i), Fig.12(k), and Fig.12(n)). When the limestone content increased from 50 to 153 kg/m3, the CS decreased; however, when it increased from 153 to 300 kg/m3, the CS increased. Thus, the most unfavorable limestone content was 153 kg/m3, which yielded the lowest CS. Similarly, the most unfavorable nano-CuO content was 4.5 kg/m3, and that of the VMA content was 0.5 L/m3.

The PDP analyses of nano-silica and metakaolin are depicted in Fig.12(j) and Fig.12(l). When the nano-silica content increased from 0 to 36 kg/m3, the CS increased from 64.5 to 74.5 MPa (an increase of approximately 15.5%). However, when the nano-silica content continued to increase, the CS remained almost unchanged. According to this study, to improve the CS of FRSCC, the nano-silica content should be kept in the range of 0–36 kg/m3. This is consistent with the experimental results of Beigi et al. [66]. When the metakaolin content increased from 0 to 90 kg/m3, the CS also increased. Overall, nano-silica and metakaolin had a positive effect on the CS of FRSCC.

The superplasticizer also had a favorable impact on the CS of FRSCC (Fig.12(m)). Increasing the superplasticizer content from 5 to 17 kg/m3 increased the CS, but the effect was insignificant (an increase of only approximately 3%). In contrast, with a higher superplasticizer content (from 17 to 33 kg/m3), the CS increased more significantly (approximately 17.4%).

Finally, Fig.12(o) shows the PDP curve with respect to the testing age of the sample. The testing age had a positive effect on the CS: as the concrete age increased, the CS increased. Specifically, the CS increased from 57.5 to 73.5 MPa as the sample age increased from 3 to 56 d, a change corresponding to approximately 27.8% of the CS of FRSCC. This is also consistent with many empirical studies on FRSCC [19,23].

Additionally, the CS variation (Δ) for each parameter can be determined depending on the maximum and minimum values of the PDP for each input parameter. A higher Δ value of each input parameter indicates that the parameter has a greater influence on the predicted output. Detailed results are presented in Tab.4.
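One way to compute such a Δ ranking is a manual one-way PDP, as sketched below for the fitted CatBoost model (column names and grid size are assumptions); the loop fixes all other inputs and varies only the feature under examination, as described above:

```python
import numpy as np
import pandas as pd

# Rebuild a named DataFrame of the scaled training inputs used to fit the model.
X_train_df = pd.DataFrame(X_train_scaled, columns=X.columns)

def pdp_delta(model, X, column, grid_points=20):
    """Range (max - min) of the averaged prediction over a grid of one feature."""
    grid = np.linspace(X[column].min(), X[column].max(), grid_points)
    averaged = []
    for value in grid:
        X_mod = X.copy()
        X_mod[column] = value                 # hold the other columns fixed
        averaged.append(model.predict(X_mod).mean())
    averaged = np.asarray(averaged)
    return averaged.max() - averaged.min()    # CS variation (MPa)

deltas = {col: pdp_delta(cat, X_train_df, col) for col in X_train_df.columns}
for col, delta in sorted(deltas.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{col:20s} delta = {delta:.2f} MPa")
```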

The PDP graph analysis revealed that input parameters such as cement, fly ash, steel fiber, nano-silica, metakaolin, superplasticizer, and testing age positively influence the CS of FRSCC, whereas coarse aggregate, fine aggregate, and water have a negative influence. The factors with both positive and negative effects were glass fiber, PP fiber, limestone, nano-CuO, and VMA. The impacts of the inputs on the CS of FRSCC, in descending order, were cement, age, superplasticizer, water, steel fiber, nano-silica, coarse aggregate, fly ash, fine aggregate, metakaolin, glass fiber, PP fiber, nano-CuO, limestone, and VMA. Based on the results of this analysis, material engineers can orient and pre-quantify the compositional components when designing FRSCC mixtures, enabling the design of FRSCC with an appropriate CS to satisfy the intended requirements.

6 Conclusions and perspectives

Owing to the current research gaps in the literature, the primary objective of this study was to propose an appropriately designed ML model to rapidly and efficiently predict the CS of FRSCC. A database containing 381 samples was considered in this study to extend the prediction range of the ML algorithms. Three ML models were investigated to attain this objective: ANN, RF, and CatBoost, the last being a recently proposed ML model. Four parameters of the ANN, five parameters of the RF, and four parameters of CatBoost were finely tuned to determine the optimal structure with the highest predictive performance. CatBoost was observed to be the optimal model for predicting the CS of FRSCC based on three measures: R2, MAE, and RMSE. In addition, a sensitivity analysis was performed using PDP to assess the impact of the input parameters on the CS of FRSCC. The findings of the sensitivity analysis indicate that cement, fly ash, steel fiber, nano-silica, metakaolin, superplasticizer, and age positively affect the CS of FRSCC. The contents of coarse aggregate, fine aggregate, and water have a negative influence on the CS. In addition, the contents of glass fiber, PP fiber, limestone, nano-CuO, and VMA are the input factors that might have both positive and negative impacts on the CS. Moreover, the PDP analysis revealed that the cement content has the most significant influence on the CS of FRSCC, whereas VMA is the most negligible factor. The results of this study may be used to construct a reliable soft computing tool for accurately forecasting the CS while also aiding material engineers in orienting the design phase of FRSCC.

References

[1]

Okamura H, Ozawa K. Self-compacting high performance concrete. Structural Engineering International, 1996, 6(4): 269–270

[2]

Zeyad A M, Saba A M. Influence of fly ash on the properties of self-compacting fiber reinforced concrete. Scientific Journal of King Faisal University (Basic and Applied Sciences), 2018, 19(2): 55–67

[3]

Sahmaran M, Yurtseven A, Yaman I O. Workability of hybrid fiber reinforced self-compacting concrete. Building and Environment, 2005, 40(12): 1672–1677

[4]

Zeyad A M. Effect of fibers types on fresh properties and flexural toughness of self-compacting concrete. Journal of Materials Research and Technology, 2020, 9(3): 4147–4158

[5]

Madandoust R, Ranjbar M M, Ghavidel R, Shahabi S F. Assessment of factors influencing mechanical properties of steel fiber reinforced self-compacting concrete. Materials & Design, 2015, 83: 284–294

[6]

Lin C, Kayali O, Morozov E V, Sharp D J. Influence of fibre type on flexural behaviour of self-compacting fibre reinforced cementitious composites. Cement and Concrete Composites, 2014, 51: 27–37

[7]

Khayat K H, Kassimi F, Ghoddousi P. Mixture design and testing of fiber-reinforced self-consolidating concrete. ACI Materials Journal, 2014, 111(2): 143–152

[8]

Salari Z, Vakhshouri B, Nejadi S. Analytical review of the mix design of fiber reinforced high strength self-compacting concrete. Journal of Building Engineering, 2018, 20: 264–276

[9]

Majain N, Rahman A B A, Mohamed R N, Adnan A. Effect of steel fibers on self-compacting concrete slump flow and compressive strength. IOP Conference Series: Materials Science and Engineering, 2019, 513(1): 012007

[10]

Fathi H, Lameie T, Maleki M, Yazdani R. Simultaneous effects of fiber and glass on the mechanical properties of self-compacting concrete. Construction and Building Materials, 2017, 133: 443–449

[11]

Prakash R, Raman S N, Divyah N, Subramanian C, Vijayaprabha C, Praveenkumar S. Fresh and mechanical characteristics of roselle fibre reinforced self-compacting concrete incorporating fly ash and metakaolin. Construction & Building Materials, 2021, 290: 123209

[12]

Boz A, Sezer A, Özdemir T, Hızal G E, Azdeniz Dolmacı Ö. Mechanical properties of lime-treated clay reinforced with different types of randomly distributed fibers. Arabian Journal of Geosciences, 2018, 11(6): 1–14

[13]

Fallah S, Nematzadeh M. Mechanical properties and durability of high-strength concrete containing macro-polymeric and polypropylene fibers with nano-silica and silica fume. Construction & Building Materials, 2017, 132: 170–187

[14]

Bhogayata A C, Arora N K. Fresh and strength properties of concrete reinforced with metalized plastic waste fibers. Construction & Building Materials, 2017, 146: 455–463

[15]

Nik A S, Omran O L. Estimation of compressive strength of self-compacted concrete with fibers consisting nano-SiO2 using ultrasonic pulse velocity. Construction & Building Materials, 2013, 44: 654–662

[16]

Revilla-Cuesta V, Skaf M, Serrano-López R, Ortega-López V. Models for compressive strength estimation through non-destructive testing of highly self-compacting concrete containing recycled concrete aggregate and slag-based binder. Construction & Building Materials, 2021, 280: 122454

[17]

Saba A M, Khan A H, Akhtar M N, Khan N A, Rahimian Koloor S S, Petrů M, Radwan N. Strength and flexural behavior of steel fiber and silica fume incorporated self-compacting concrete. Journal of Materials Research and Technology, 2021, 12: 1380–1390

[18]

Zatar W, Nguyen T. Mixture design study of fiber-reinforced self-compacting concrete for prefabricated street light post structures. Advances in Civil Engineering, 2020: e8852320

[19]

Harihanandh M, Rajeshkumar V, Elango K S. Study on mechanical properties of fiber reinforced self compacting concrete. Materials Today: Proceedings, 2021, 45: 3124–3131

[20]

Karimipour A, Ghalehnovi M, de Brito J, Attari M. The effect of polypropylene fibres on the compressive strength, impact and heat resistance of self-compacting concrete. Structures, 2020, 25: 72–87

[21]

Ramkumar K B, Kannan Rajkumar P R, Noor Ahmmad S, Jegan M. A review on performance of self-compacting concrete—Use of mineral admixtures and steel fibres with artificial neural network application. Construction & Building Materials, 2020, 261: 120215

[22]

Meesaraganda L V P, Saha P, Tarafder N. Artificial neural network for strength prediction of fibers’ self-compacting concrete. In: Soft Computing for Problem Solving. Singapore: Springer, 2019, 15–24

[23]

Yehia S, Douba A, Abdullahi O, Farrag S. Mechanical and durability evaluation of fiber-reinforced self-compacting concrete. Construction & Building Materials, 2016, 121: 120–133

[24]

Ding Y, Azevedo C, Aguiar J B, Jalali S. Study on residual behaviour and flexural toughness of fibre cocktail reinforced self compacting high performance concrete after exposure to high temperature. Construction & Building Materials, 2012, 26: 21–31

[25]

Huang J S, Liew J X, Liew K M. Data-driven machine learning approach for exploring and assessing mechanical properties of carbon nanotube-reinforced cement composites. Composite Structures, 2021, 267: 113917

[26]

Pons G, Mouret M, Alcantara M, Granju J L. Mechanical behaviour of self-compacting concrete with hybrid fibre reinforcement. Materials and Structures, 2007, 40(2): 201–210

[27]

Song Q, Yu R, Wang X, Rao S, Shui Z. A novel self-compacting ultra-high performance fibre reinforced concrete (SCUHPFRC) derived from compounded high-active powders. Construction & Building Materials, 2018, 158: 883–893

[28]

Karami B, Shishegaran A, Taghavizade H, Rabczuk T. Presenting innovative ensemble model for prediction of the load carrying capacity of composite castellated steel beam under fire. Structures, 2021, 33: 4031–4052

[29]

Shishegaran A, Saeedi M, Kumar A, Ghiasinejad H. Prediction of air quality in Tehran by developing the nonlinear ensemble model. Journal of Cleaner Production, 2020, 259: 120825

[30]

Shishegaran A, Shokrollahi M, Mirnorollahi A, Shishegaran A, Mohammad Khani M. A novel ensemble model for predicting the performance of a novel vertical slot fishway. Frontiers of Structural and Civil Engineering, 2020, 14(6): 1418–1444

[31]

Shishegaran A, Khalili M R, Karami B, Rabczuk T, Shishegaran A. Computational predictions for estimating the maximum deflection of reinforced concrete panels subjected to the blast load. International Journal of Impact Engineering, 2020, 139: 103527

[32]

Naghsh M A, Shishegaran A, Karami B, Rabczuk T, Shishegaran A, Taghavizadeh H, Moradi M. An innovative model for predicting the displacement and rotation of column-tree moment connection under fire. Frontiers of Structural and Civil Engineering, 2021, 15(1): 194–212

[33]

Shishegaran A, Karami B, Rabczuk T, Shishegaran A, Naghsh M A, Mohammad Khani M. Performance of fixed beam without interacting bars. Frontiers of Structural and Civil Engineering, 2020, 14(5): 1180–1195

[34]

Shishegaran A, Karami B, Safari Danalou E, Varaee H, Rabczuk T. Computational predictions for predicting the performance of steel 1 panel shear wall under explosive loads. Engineering Computations, 2021, 38(9): 3564–3589

[35]

Bigdeli A, Shishegaran A, Naghsh M A, Karami B, Shishegaran A, Alizadeh G. Surrogate models for the prediction of damage in reinforced concrete tunnels under internal water pressure. Journal of Zhejiang University-SCIENCE A, 2021, 22(8): 632–656

[36]

Shishegaran A, Moradi M, Naghsh M A, Karami B, Shishegaran A. Prediction of the load-carrying capacity of reinforced concrete connections under post-earthquake fire. Journal of Zhejiang University-SCIENCE A, 2021, 22(6): 441–466

[37]

Shishegaran A, Ghasemi M R, Varaee H. Performance of a novel bent-up bars system not interacting with concrete. Frontiers of Structural and Civil Engineering, 2019, 13(6): 1301–1315

[38]

Guo H, Zhuang X, Rabczuk T. A deep collocation method for the bending analysis of Kirchhoff plate. Computers, Materials & Continua, 2019(5): 433–456

[39]

Anitescu C, Atroshchenko E, Alajlan N, Rabczuk T. Artificial neural network methods for the solution of second order boundary value problems. Computers, Materials & Continua, 2019, 59(1): 345–359

[40]

Samaniego E, Anitescu C, Goswami S, Nguyen-Thanh V M, Guo H, Hamdia K, Zhuang X, Rabczuk T. An energy approach to the solution of partial differential equations in computational mechanics via machine learning: Concepts, implementation and applications. Computer Methods in Applied Mechanics and Engineering, 2020, 362: 112790

[41]

Zhuang X, Guo H, Alajlan N, Zhu H, Rabczuk T. Deep autoencoder based energy method for the bending, vibration, and buckling analysis of Kirchhoff plates with transfer learning. European Journal of Mechanics. A, Solids, 2021, 87: 104225

[42]

Guo H, Zhuang X, Chen P, Alajlan N, Rabczuk T. Stochastic deep collocation method based on neural architecture search and transfer learning for heterogeneous porous media. Engineering with Computers, 2022

[43]

Guo H, Zhuang X, Chen P, Alajlan N, Rabczuk T. Analysis of three-dimensional potential problems in non-homogeneous media with physics-informed deep collocation method using material transfer learning and sensitivity analysis. Engineering with Computers, 2022

[44]

Shishegaran A, Varaee H, Rabczuk T, Shishegaran G. High correlated variables creator machine: Prediction of the compressive strength of concrete. Computers & Structures, 2021, 247: 106479

[45]

Shishegaran A, Daneshpajoh F, Taghavizade H, Mirvalad S. Developing conductive concrete containing wire rope and steel powder wastes for route deicing. Construction & Building Materials, 2020, 232: 117184

[46]

Varaee H, Shishegaran A, Ghasemi M R. The life-cycle cost analysis based on probabilistic optimization using a novel algorithm. Journal of Building Engineering, 2021, 43: 103032

[47]

Es-Haghi M S, Shishegaran A, Rabczuk T. Evaluation of a novel Asymmetric Genetic Algorithm to optimize the structural design of 3D regular and irregular steel frames. Frontiers of Structural and Civil Engineering, 2020, 14(5): 1110–1130

[48]

Shishegaran A, Boushehri A N, Ismail A F. Gene expression programming for process parameter optimization during ultrafiltration of surfactant wastewater using hydrophilic polyethersulfone membrane. Journal of Environmental Management, 2020, 264: 110444

[49]

Tran-Ngoc H, Khatir S, Le-Xuan T, De Roeck G, Bui-Tien T, Abdel Wahab M. A novel machine-learning based on the global search techniques using vectorized data for damage detection in structures. International Journal of Engineering Science, 2020, 157: 103376

[50]

Khatir S, Tiachacht S, Le Thanh C, Ghandourah E, Mirjalili S, Abdel Wahab M. An improved Artificial Neural Network using arithmetic optimization algorithm for damage assessment in FGM composite plates. Composite Structures, 2021, 273: 114287

[51]

Wang S, Wang H, Zhou Y, Liu J, Dai P, Du X, Abdel Wahab M. Automatic laser profile recognition and fast tracking for structured light measurement using deep learning and template matching. Measurement, 2021, 169: 108362

[52]

Ho L V, Trinh T T, De Roeck G, Bui-Tien T, Nguyen-Ngoc L, Abdel Wahab M. An efficient stochastic-based coupled model for damage identification in plate structures. Engineering Failure Analysis, 2022, 131: 105866

[53]

Tran-Ngoc H, Khatir S, De Roeck G, Bui-Tien T, Abdel Wahab M. An efficient artificial neural network for damage detection in bridges and beam-like structures by improving training parameters using cuckoo search algorithm. Engineering Structures, 2019, 199: 109637

[54]

Khatir S, Boutchicha D, Le Thanh C, Tran-Ngoc H, Nguyen T N, Abdel-Wahab M. Improved ANN technique combined with Jaya algorithm for crack identification in plates using XIGA and experimental analysis. Theoretical and Applied Fracture Mechanics, 2020, 107: 102554

[55]

Nguyen-Le D H, Tao Q B, Nguyen V H, Abdel-Wahab M, Nguyen-Xuan H. A data-driven approach based on long short-term memory and hidden Markov model for crack propagation prediction. Engineering Fracture Mechanics, 2020, 235: 107085

[56]

Asteris P G, Ashrafian A, Rezaie-Balf M. Prediction of the compressive strength of self-compacting concrete using surrogate models. Computers and Concrete, 2019, 24: 137–150

[57]

Farooq F, Czarnecki S, Niewiadomski P, Aslam F, Alabduljabbar H, Ostrowski K A, Śliwa-Wieczorek K, Nowobilski T, Malazdrewicz S. A comparative study for the prediction of the compressive strength of self-compacting concrete modified with fly ash. Materials (Basel), 2021, 14(17): 1–27

[58]

Uysal M, Tanyildizi H. Estimation of compressive strength of self compacting concrete containing polypropylene fiber and mineral additives exposed to high temperature using artificial neural network. Construction & Building Materials, 2012, 27(1): 404–414

[59]

Nguyen T, Pham Duy H, Pham Thanh T, Vu H H. Compressive strength evaluation of fiber-reinforced high-strength self-compacting concrete with artificial intelligence. Advances in Civil Engineering, 2020: e3012139

[60]

Saha P, Prasad M L V, RathishKumar P. Predicting strength of SCC using artificial neural network and multivariable regression analysis. Computers and Concrete, 2017, 20(1): 31–38

[61]

Mashhadban H, Kutanaei S S, Sayarinejad M. Prediction and modeling of mechanical properties in fiber reinforced self-compacting concrete using particle swarm optimization algorithm and artificial neural network. Construction & Building Materials, 2016, 119: 277–287

[62]

Naseri F, Jafari F, Mohseni E, Tang W, Feizbakhsh A, Khatibinia M. Experimental observations and SVM-based prediction of properties of polypropylene fibres reinforced self-compacting composites incorporating nano-CuO. Construction & Building Materials, 2017, 143: 589–598

[63]

Gencel O, Özel C, Koksal F, Martínez-Barrera G, Brostow W, Polat H. Fuzzy logic model for prediction of properties of fiber reinforced self-compacting concrete. Medziagotyra, 2013, 19(2)

[64]

Tavakoli H R, Omran O L, Shiade M F, Kutanaei S S. Prediction of combined effects of fibers and nano-silica on the mechanical properties of self-compacting concrete using artificial neural network. Latin American Journal of Solids and Structures, 2014, 11: 1906–1923

[65]

Begum S J, Anjaneyulu P J D, Ratnam M. A study on effect of steel fiber in fly ash based self compacting concrete. International Journal for Innovative Research in Science & Technology, 2018, 5(1): 95–99

[66]

Beigi M H, Berenjian J, Lotfi Omran O, Sadeghi Nik A, Nikbin I M. An experimental survey on combined effects of fibers and nanosilica on the mechanical, rheological, and durability properties of self-compacting concrete. Materials & Design, 2013, 50: 1019–1029

[67]

Gencel O, Brostow W, Datashvili T, Thedford M. Workability and mechanical performance of steel fiber-reinforced self-compacting concrete with fly ash. Composite Interfaces, 2011, 18(2): 169–184

[68]

Ly H B, Nguyen M H, Pham B T. Metaheuristic optimization of Levenberg–Marquardt-based artificial neural network using particle swarm optimization for prediction of foamed concrete compressive strength. Neural Computing & Applications, 2021, 33(24): 17331–17351

[69]

Rumelhart D E, Widrow B, Lehr M A. The basic ideas in neural networks. Communications of the ACM, 1994, 37(3): 87–92

[70]

Adhikary B B, Mutsuyoshi H. Prediction of shear strength of steel fiber RC beams using neural networks. Construction & Building Materials, 2006, 20(9): 801–811

[71]

Ho T K. Random decision forests. In: Proceedings of 3rd International Conference on Document Analysis and Recognition. Montreal: IEEE, 1995

[72]

Breiman L. Random forests. Machine Learning, 2001, 45(1): 5–32

[73]

Ben Chaabene W, Flah M, Nehdi M L. Machine learning prediction of mechanical properties of concrete: Critical review. Construction & Building Materials, 2020, 260: 119889

[74]

Dorogush A V, Ershov V, Gulin A. CatBoost: gradient boosting with categorical features support. 2018, arXiv:1810.11363

[75]

Le T T, Pham B T, Ly H B, Shirzadi A, Le L M. Development of 48-hour Precipitation Forecasting Model using Nonlinear Autoregressive Neural Network. In: Proceedings of the 5th International Conference on Geotechnics, Civil Engineering Works and Structures. Hanoi: Singapore, 2020, 1191–1196

[76]

Pham B T, Nguyen M D, Ly H B, Pham T A, Hoang V, Van Le H, Le T T, Nguyen H Q, Bui G L. Development of artificial neural networks for prediction of compression coefficient of soft soil. In: Proceedings of the 5th International Conference on Geotechnics, Civil Engineering Works and Structures. Hanoi: Singapore, 2020, 1167–1172

[77]

Thanh T T M, Ly H B, Pham B T. A possibility of AI application on mode-choice prediction of transport users. In: Proceedings of the 5th International Conference on Geotechnics, Civil Engineering Works and Structures. Hanoi: Singapore, 2020, 1179–1184

[78]

Cybenko G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 1989, 2(4): 303–314

[79]

Bounds D G, Lloyd P J, Mathew B G, Waddell G. A multilayer perceptron network for the diagnosis of low back pain. In: IEEE 1988 International Conference on Neural Networks. San Diego, CA: IEEE, 1988, 481–489

[80]

Ripley B D. Statistical aspects of neural networks. Networks and Chaos—Statistical and Probabilistic Aspects, 1993, 50: 40–123

[81]

Sheela K G, Deepa S N. Review on methods to fix number of hidden neurons in neural networks. Mathematical Problems in Engineering, 2013, 2013: e425740

[82]

Zhang Z, Ma X, Yang Y. Bounds on the number of hidden neurons in three-layer binary neural networks. Neural Networks, 2003, 16(7): 995–1002

[83]

Chen T, Guestrin C. Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. San Francisco, CA: Association for Computing Machinery, 2016, 785–794

[84]

Ke G, Meng Q, Finley T, Wang T, Chen W, Ma W, Ye Q, Liu T Y. Lightgbm: A highly efficient gradient boosting decision tree. Advances in Neural Information Processing Systems, 2017, 30: 1–9

[85]

Mohammed B S, Azmi N J. Strength reduction factors for structural rubbercrete. Frontiers of Structural and Civil Engineering, 2014, 8(3): 270–281

[86]

Oner A, Akyuz S. An experimental study on optimum usage of GGBS for the compressive strength of concrete. Cement and Concrete Composites, 2007, 29(6): 505–514

[87]

Shen J, Xu Q. Effect of moisture content and porosity on compressive strength of concrete during drying at 105 °C. Construction & Building Materials, 2019, 195: 19–27

[88]

Zhou J, Chen X, Wu L, Kan X. Influence of free water content on the compressive mechanical behaviour of cement mortar under high strain rate. Sadhana, 2011, 36(3): 357–369

[89]

Oner A, Akyuz S, Yildiz R. An experimental study on strength development of concrete containing fly ash and optimum usage of fly ash in concrete. Cement and Concrete Research, 2005, 35(6): 1165–1171

[90]

Ghorbani S, Sharifi S, Rokhsarpour H, Shoja S, Gholizadeh M, Rahmatabad M A D, de Brito J. Effect of magnetized mixing water on the fresh and hardened state properties of steel fibre reinforced self-compacting concrete. Construction & Building Materials, 2020, 248: 118660

RIGHTS & PERMISSIONS

Higher Education Press
