1 Introduction
Incorporating structural modeling uncertainties becomes a significant issue whenever a structure's performance is to be assessed. In Eccentrically Braced Frames (EBFs), the shear link is the essential element governing the seismic structural response, so a thorough understanding of shear-link behavior is needed to reliably estimate their responses under seismic loads. For this purpose, Moammer and Dolatshahi [1] calibrated the hysteresis behavior of shear links by collecting over 70 cyclic tests and defined seven implicit mathematical relationships using stepwise multivariable regression analysis. Each of these mathematical relationships provides an estimate of the median value and associated Coefficient of Variation for the parameter with regard to the geometrical and material properties of the shear link section. Before their work, Lignos and Krawinkler [2] collected a set of experimental test results of beams in Moment Resisting Frames (MRFs) and proposed similar relationships for estimating the behavioral parameters. Hartloper and Lignos [
3] also expanded the work to cover the column elements in MRFs. On the basis of these works, various attempts have been made to assess the effects of uncertainties in modeling parameters on the structural responses. For instance, Masoomzadeh et al. [
4] incorporated modeling uncertainties of a steel EBF structure using the behavioral parameters introduced by Moammer and Dolatshahi [
1]. They showed that the web thickness and yield strength of the shear link are important in the variation of the structural responses under seismic excitations. In all of the studies on the effect of uncertainties, the large computational effort required and the need to improve the accuracy of the adopted uncertainty-propagation procedure remain the main challenges, keeping this problem an open field of research.
Accounting for different sources of uncertainties is necessary to have an accurate and reliable estimation of seismic risks [
5]. Various studies have sought to efficiently assess the effects of epistemic and aleatory uncertainties on structural responses. For instance, Liel et al. [6], using a Response Surface (RS), created a mathematical equation to estimate the structural responses of a reinforced concrete (RC) structure in place of Finite Element (FE) modeling and Nonlinear Time History Analyses (NTHA). They then used the Monte Carlo (MC) method in conjunction with the RS to propagate the uncertainties into the collapse capacity of the structure. They utilized Central Composite Design (CCD) and Full Factorial Design [
7] procedures in order to design experiments used in constructing RS. They demonstrated that various sources of uncertainties might increase the dispersion and shift the median value of the collapse capacity in RC buildings. Pourreza et al. [
8] also used RS in estimating structural responses of a 5-story steel MRF and used Fractional Factorial Design (FrFD) [
7] in order to consider effective interactions and define a relationship between modeling parameters and the collapse capacity of the structure. They proposed a framework to propagate the uncertainties in modeling variables introduced in [
2,
3] and achieved similar results as Liel et al. [
6]. It is worth mentioning that both studies considered structural modeling uncertainties as well as Record-to-Record (RTR) variability to quantify their effects on response-history analyses realistically and expediently. Methods for understanding and quantifying the impact of variations in model parameters on model predictions have been utilized in several structural engineering studies. Vu-Bac et al. [
9] considered a local and global sensitivity analysis to quantify the effect of uncertainties in input parameters on the predicted elastic modulus and yield stress of glassy polyethylene. Vu-Bac et al. [
10] studied the effect of strain rate and temperature on the mechanical properties of amorphous polyethylene. In terms of sensitivity analysis, Vu-Bac et al. [
11] provided a robust toolbox consisting of MATLAB functions for sensitivity analysis (SA). This toolbox makes it possible to study the influence of parameter uncertainties on the uncertain model outputs. These state-of-the-art studies all face the trade-off between large computational effort and the desired accuracy; however, many other methods may reduce the computational effort considerably.
Sampling-based methods are commonly used to achieve acceptable accuracy in uncertainty propagation across various engineering fields. In this context, various studies have attempted to quantify the effect of different sources of uncertainties. For instance, Ghasemi et al. [12] investigated the effect of geometry and material uncertainties on polymeric nanocomposite continuum structures. Sampling-based reliability methods such as the MC method have been used in different studies [
13–
15] for quantifying the effect of different sources of uncertainties on the collapse fragility curves of structures. Latin Hypercube Sampling (LHS) is another useful method, obtained by stratifying crude MC sampling; it has also been combined with importance sampling [16] and is an attractive sampling-based method for reliability analysis. The main idea of LHS is a square grid (a Latin square) containing the samples, with exactly one sample in each row and each column. The generalization of this concept to an arbitrary number of dimensions is called a Latin hypercube, in which each axis-aligned hyperplane contains only one sample. Correlation Latin Hypercube Sampling (CLHS) has also been used in some studies [16–18]. CLHS is similar to LHS, with a minor modification of the LHS formulation using Cholesky decomposition to reduce spurious correlations between the sampled variables.
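For illustration, the sampling scheme just described can be sketched in a few lines of Python. This is a minimal sketch of LHS with Cholesky-based correlation control, not code from the cited references:

```python
import numpy as np
from scipy.stats import norm

def lhs_uniform(n, d, rng):
    """Basic LHS on [0, 1]^d: exactly one point in each of the n strata
    per variable, with the strata paired randomly across variables."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return u

def clhs_normal_scores(n, d, rng, target_corr=None):
    """LHS mapped to standard-normal scores, with the spurious sample
    correlation removed (and an optional target correlation imposed)
    via Cholesky decomposition, in the spirit of the CLHS scheme above."""
    z = norm.ppf(lhs_uniform(n, d, rng))
    c_spur = np.corrcoef(z, rowvar=False)                # unintended correlation
    z = z @ np.linalg.inv(np.linalg.cholesky(c_spur)).T  # decorrelate
    if target_corr is not None:
        z = z @ np.linalg.cholesky(target_corr).T        # re-correlate
    return z

rng = np.random.default_rng(1)
z = clhs_normal_scores(200, 6, rng)                      # 200 samples, 6 variables
print(np.round(np.corrcoef(z, rowvar=False), 2))         # near-identity matrix
```

The correlated normal scores can then be mapped through the marginal distributions of the physical parameters, as done later for the behavioral parameters.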
Because of the time-consuming nature of nonlinear response-history analyses in structural performance assessment, Function Approximation (FA) techniques are usually used to address this problem, especially when a large number of nonlinear response-history analyses are to be used. In this regard, it is possible to use Artificial Neural Networks (ANNs) instead of RS, which have been used in some previous studies [
6,
8]. ANNs are computational structures inspired by the general working principles of the human brain and are capable of solving complex engineering problems [19–21]; this makes them well suited to approximating a problem's responses through FA or pattern recognition. They can be implemented in computational procedures based on philosophies quite different from conventional ones [22–25]. Pamuncak et al. [
26] used Convolutional Neural Network (CNN) to estimate structural responses using environmental variations. They performed pre-processing in order to transform the data into data frames containing sequences of data. They considered various metrics like Mean Absolute Error (
MAE), Mean Absolute Percentage Error (
MAPE) and Root Mean Square Error (
RMSE) to investigate the effectiveness of their method (CNN) compared with other machine learning techniques. Also, various types of Artificial Intelligence (AI) methods can be used according to the demands of a reliability analysis. For instance, Counterpropagation Networks [
27], Radial Basis Networks [
19], Gradient-Based Networks [
19] and Support Vector Machines [
28] are known as various types of ANNs. Masoomzadeh et al. [
4] used ANNs in conjunction with CLHS for the estimation of seismic structural responses corresponding to various performance levels while considering modeling uncertainties. Their results demonstrated that ANNs can be an efficient tool for the estimation of seismic structural responses. Radial Basis Function networks (RBFs) are among the strongest ANN architectures in machine learning. They have a distinctive architecture with only one hidden layer, and the straightforward interpretation of the role of each node in the hidden layer is a merit of the method. RBFs are also fast to train, so the present study utilizes them for FA purposes.
In terms of response-history analyses of structures via FE models, Incremental Dynamic Analysis (IDA), first proposed by Vamvatsikos and Cornell [29] in 2002, is widely used for earthquake risk assessment. IDA can capture the nonlinearity of the structure up to collapse, but it has a high computational demand. Vamvatsikos [30] developed an algorithm to parallelize IDA and increase its capability, but this did not resolve IDA's main drawback, namely its large computational demand. The Endurance Time (ET) method, an alternative NTHA procedure proposed by Estekanchi et al. [
31], can be a decent procedure for decreasing the required computational effort. This method is an acceptable tool for capturing the nonlinearity of the structure and its components with a lower computational demand than IDA. In ET, the structure is subjected to intensifying dynamic excitations, and its responses at multiple intensity levels are monitored. Estekanchi and Basim [
32] utilized the ET method in order to acquire optimal placement of viscous dampers in a steel structure. They attained desirable estimations of structural performance at different hazard levels, simultaneously. Basim and Estekanchi [
33] proposed an optimum design procedure by using the ET method for the performance-based design of structures considering the uncertainties in the framework of FEMA-P-58 [
34]. In another work by Shirkhani et al. [
35], the ET method was used to estimate the expected seismic losses of structures with Rotational Friction Dampers and to acquire the optimal friction moment of the devices; a practical method was proposed to attain the optimal frictional moment of the dampers at various hazard levels. These studies demonstrate the merit of utilizing ET rather than IDA to accelerate the seismic performance assessment of structures via nonlinear models. In particular, ET can be beneficial in the reliability assessment of structures, and the probabilistic risk assessment of structures at various performance levels using the ET method is one of the significant goals of the present study.
In this research, the probabilistic seismic performance assessment of a 4-story EBF structure, considering modeling and RTR uncertainties and employing efficient computational tools, is targeted. In this regard, sampling-based reliability analysis based on the results of the ET method is utilized. First, a sensitivity analysis is conducted to determine the most critical behavioral parameters; to this end, the One-Variable-at-a-Time (OVAT) procedure is used to perturb the behavioral parameters, with the ET method serving as the NTHA to acquire the structural responses at various intensity levels. Second, CLHS is used to quantify the effect of uncertainties in the important modeling parameters on the structural fragility curves; this step involves a large number of response-history analyses and provides a reference for the proposed method. In the next step, ANNs are trained as an FA problem to estimate the structural responses (i.e., the median and dispersion of the fragility curves); CCD and FrFD are used to produce an effective data set for ANN training. These networks are then used to propagate the effect of the uncertainties into the structural responses with much lower computational demand. The results from the ANNs are verified against those by the CLHS using the ET method and against results from previous studies by IDA [
11,
36].
2 Structural model
To achieve a reliable pre-collapse assessment of structures, a detailed analytical model of the prototype structure (the same structure studied in Ref. [4]), accounting for degradation properties as well as plastic deformations, has been developed. This is a 2D idealized model of a 3-dimensional 4-story EBF with a symmetric plan. The plan area per story is 2500 ft². The structure has three bays, each 200 in long, with 140 in story heights and fixed supports, as shown in Fig.1. The general loading is applied for a commercial building according to Section 6 of INBC [
37], which is similar to ASCE07-10 [
38]. Moreover, as can be seen in Fig.1, a leaning column has been considered to account for the effect of the assumed middle frames of the 3-dimensional EBF. The leaning column is modeled at one side of the frame with a pinned support at the base and is connected to the frame at every story by truss elements. The shear links and braces are selected from standard W and HSS profiles and are designed according to Section 10 of INBC [
39], which closely matches AISC360-10 [
40]. This model is also used in another work by Masoomzadeh et al. [
4], in which more details of the model may be found.
Fiber elements, which make it possible to capture nonlinearity distributed along the element, have been used for modeling the columns of the presumed EBF. For the beams in the right and left side bays of the model (Fig.1) and for the out-of-link beams in the middle bays, elastic beam-column elements have been used. Truss elements have been used for modeling the braces, without considering the buckling mode. Concentrated plastic hinges have been modeled using zero-length elements in OpenSees to represent the nonlinear behavior of the shear links in the EBF. The empirical equations recommended by Moammer and Dolatshahi [1] have been used here to define the analytical springs (i.e., concentrated plastic hinges) at the middle of the shear links. As an example, Fig.2 illustrates the monotonic backbone curve for a W16X77 shear link based on the empirical equations in [
1]. Moreover, Tab.1 lists the empirical equations for estimating the median parameters and corresponding dispersions suggested by Ref. [1]. The median values of the behavioral parameters for the Median Structure (MS) are presented in Tab.2. It is worth mentioning that MS in this manuscript stands for the structure in which all behavioral parameters are set to their median values.
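Where helpful, the shape of such a concentrated-hinge backbone can be reproduced in a few lines of code. The following Python sketch builds an illustrative piecewise-linear monotonic backbone; all numerical values and parameter names are placeholders, not the calibrated medians of Ref. [1], which come from the equations of Tab.1:

```python
import numpy as np

# Illustrative monotonic backbone of a shear-link plastic hinge, in the
# spirit of Fig.2. Values below are placeholders, NOT calibrated medians.
V_y      = 1.0    # yield shear (normalized)
V_max    = 1.4    # peak (capping) shear
gamma_y  = 0.005  # yield rotation (rad)
gamma_p  = 0.08   # pre-capping plastic rotation capacity (rad)
gamma_pc = 0.06   # post-capping rotation capacity (rad)
V_res    = 0.4    # residual strength ratio times V_y

# Piecewise-linear backbone: elastic -> hardening -> softening -> residual.
gamma = np.array([0.0, gamma_y, gamma_y + gamma_p,
                  gamma_y + gamma_p + gamma_pc, 0.25])
V     = np.array([0.0, V_y, V_max, V_res * V_y, V_res * V_y])

shear = np.interp(0.05, gamma, V)   # shear at an arbitrary rotation demand
print(f"V(0.05 rad) = {shear:.2f} V_y")
```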
3 Seismic response assessment
To assess the seismic performance of the structure, specific sets of earthquake records may be used to simulate its nonlinear behavior with NTHA. Life Safety (LS) and Collapse Prevention (CP) are the most well-known performance levels used in design codes and the related literature. The seismic response of the structure at these performance levels can be captured using NTHA methods (e.g., IDA or ET). It is worth mentioning that, besides IDA and ET, other NTHA methodologies such as the cloud method [41] and Multiple-Stripe Analysis [41] have also been established. In the present study, the ET method is proposed as the NTHA method for assessing the effect of uncertainties on the seismic structural responses of EBFs. This method captures the structural responses at multiple intensity levels with a considerably reduced amount of computational effort. The results of the ET method are then verified against IDA results from a previous study in Ref. [4]. Here, a brief description of both methods and their advantages and drawbacks is given.
3.1 Incremental dynamic analysis
IDA is an established procedure for calculating the average or median spectral acceleration with high accuracy and for determining the Record-to-Record variability corresponding to predefined performance levels. For this goal, IDA can be used to acquire the relationships between inter-story drift ratios and the intensity measure (IM). Among the many IMs proposed previously, the spectral acceleration at the first-mode period of the structure with 5% damping, $S_a(T_1, 5\%)$, has been used here as the IM. In IDA, the assumed structure is subjected to a set of earthquake records whose scales are increased repeatedly until the collapse of the structure. The output of IDA is a set of fragility curves, which give the probability of collapse or of exceeding any performance level. The high computational effort required by IDA may be considered its main demerit. The results of the present study are compared with those obtained by IDA in [
4]. In that study, the first horizontal components of 22 far-field ground motions according to FEMA-P695 [
42] were used. According to ASCE41-06 [
43], 1.5% and 2% Interstory Drift Ratio (IDR) have been adopted as the thresholds of the LS and CP performance levels, respectively, for the sake of simplicity.
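For illustration, the IDA bookkeeping described above can be sketched as follows. Here `peak_idr` is a hypothetical stub standing in for a full NTHA of the FE model; the record count and the 2% IDR limit follow the text, while the numerical details are placeholders:

```python
import numpy as np
from scipy.stats import norm

def peak_idr(record_id, sa, rng):
    """Placeholder for one NTHA run: peak IDR for a record scaled to the
    target Sa(T1). A real implementation would call the FE model."""
    return 0.01 * sa * np.exp(0.4 * rng.standard_normal())

def ida_fragility(n_records=22, idr_limit=0.02, rng=np.random.default_rng(0)):
    """Scale each record upward until the IDR limit (e.g., CP = 2%) is
    exceeded; fit a lognormal fragility to the exceedance intensities."""
    sa_capacity = []
    for rec in range(n_records):
        sa = 0.1
        while peak_idr(rec, sa, rng) < idr_limit:
            sa += 0.1                          # intensity increment of the IDA
        sa_capacity.append(sa)
    ln_sa = np.log(sa_capacity)
    return np.exp(ln_sa.mean()), ln_sa.std(ddof=1)   # median, beta_RTR

median, beta = ida_fragility()
sa_grid = np.linspace(0.05, 3.0, 60)
p_exceed = norm.cdf(np.log(sa_grid / median) / beta)  # fragility curve
print(f"median Sa = {median:.2f} g, beta_RTR = {beta:.2f}")
```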
3.2 Endurance time method
In the ET method, as a response history-based approach, intensifying accelerograms are used as the base excitation and the structural responses are captured while the intensity of the excitations increases. In all ET acceleration functions, excitation starts at a low intensity, and as time goes by, the intensity of excitation grows larger until the collapse of the structure. Hence, this method makes it feasible to observe seismic structural responses in a range of intensities [
33]. As IDA has high computational demands, the present study aims to use the ET method to reduce the computational demand required in uncertainty propagation.
Indeed, the excitation intensity in ET accelerograms is correlated with the analysis time. In this regard, the concept of the acceleration response spectrum plays a key role in defining the excitation intensity. Numerical optimization techniques have been used to generate ET accelerograms such that the response spectrum of every time window, from zero to any specific time, matches a template spectrum with a scale factor. This scale factor increases with the analysis time, so the response spectra of the ET accelerogram match the scaled template spectrum at any time. In the ET accelerograms produced so far, a linear intensification scheme has been used, and the acceleration response spectrum of the ET accelerogram amplifies proportionally with time. As an example, the response spectrum at a predefined time $t_{\mathrm{Target}}$ matches the template spectrum, while at $t = t_{\mathrm{Target}}/2$ and $t = 2\,t_{\mathrm{Target}}$ the resultant response spectra equal the template spectrum scaled by 0.5 and 2, respectively [33].
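The linear intensification just described implies a simple proportional mapping between analysis time and spectral intensity. The following sketch illustrates that mapping and its inverse; `t_target` and the template ordinate are assumed values, not those of an actual ET accelerogram:

```python
# Linear intensification of ET accelerograms (a sketch of the relation
# described above, not code from the ET project): the spectral intensity
# at analysis time t is the template spectrum scaled by t / t_target.
t_target = 10.0            # time (s) at which the template spectrum is matched
sa_template_T1 = 0.45      # template spectral ordinate at T1 (g), assumed value

def sa_at_time(t):
    """Sa(T1) produced by the ET accelerogram up to analysis time t."""
    return (t / t_target) * sa_template_T1

def time_for_sa(sa):
    """Inverse mapping: analysis time at which a target Sa(T1) is reached."""
    return t_target * sa / sa_template_T1

print(sa_at_time(5.0), sa_at_time(20.0))   # 0.5x and 2x the template ordinate
print(time_for_sa(0.9))                    # time at which Sa(T1) = 0.9 g
```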
The template spectrum could be a code-based design spectrum or a spectrum resulting from a seismic hazard analysis or an average response spectrum. Also, some other characteristics of earthquake ground motions, such as their strong motion duration and the number of vibration cycles have been considered in producing the ET accelerograms in order to enhance the accuracy of this method in estimating the nonlinear behavior of structures [
44,
45]. Research on this method and its capabilities is ongoing, aiming at closer consistency with real ground motions. Nevertheless, numerous studies have shown that the available ET excitation functions are capable of providing acceptable estimates of structural responses at different excitation levels with much less computational demand [
33]. A detailed explanation of the concept and application of ET and its different sets produced with various template spectra are available through the ET method website [
46]. The ETA20kd series, multiplied by a factor of 3, has been used in the present study. In this series, the average response spectrum of a set of five records (i.e., their longitudinal accelerograms) selected from FEMA-P440A [47] is used as the template spectrum. The series consists of five acceleration functions; the first is shown in Fig.3. It can be seen that the acceleration response spectrum grows larger as the ET time increases; therefore, each excitation time corresponds to a specific $S_a(T_1)$. This correlation for each of the five acceleration functions at the fundamental period of the structure is presented in Fig.4.
ET curves are commonly used to present the results of ET analyses. In these curves, the maximum absolute value of a structural response parameter, in a smoothed format, is plotted against the analysis time (e.g., Fig.5). The efficiency of the ET method in assessing the seismic performance of structures has been investigated in various research studies [
48–
51]. Acceptable estimates of expected seismic responses at different hazard levels can be obtained via ET analysis by correlating the excitation time with the considered IM. The ET method may also be used to acquire fragility curves; the kd series, consisting of five accelerograms, is more appropriate for this purpose than series with three accelerograms. Fig.6 shows the fragility curves for the MS at the two performance levels using the ET records, in which a lognormal distribution is fitted to estimate the median $\hat{S}_a(T_1)$ and the associated dispersion ($\beta_{RTR}$). The acquired dispersion provides an estimate of the RTR variability. The fragility curves for the same structure were also acquired by the IDA method in Ref. [4] and are presented here in Fig.7 for comparison with those by the ET method. The two sets of fragility curves are compared in Fig.8, which shows that the ET method provides acceptable estimates of the median IM at both performance levels. According to this comparison, there are 8.5% and 7.5% differences in the estimates of $\hat{S}_a(T_1)$ corresponding to the LS and CP performance levels, respectively, while discrepancies of 47.7% and 54.1% are observed in the estimates of $\beta_{RTR}$ for the LS and CP levels, respectively. It can be inferred that the ET method successfully estimates the median IM corresponding to exceeding the LS and CP levels, whereas large discrepancies exist in the $\beta_{RTR}$ values.
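The fragility-fitting step just described can be sketched as follows: the exceedance time of each ET accelerogram is converted to an $S_a(T_1)$ value via the linear time-intensity mapping, and a lognormal distribution is fitted. The exceedance times and mapping constants below are illustrative placeholders, not results of this study:

```python
import numpy as np
from scipy.stats import norm

# One exceedance time per kd accelerogram (made-up values for illustration).
t_exceed = np.array([7.2, 8.1, 6.5, 7.8, 8.6])   # s
sa_exceed = (t_exceed / 10.0) * 0.45 * 3.0       # linear map, x3 scaling of kd

ln_sa = np.log(sa_exceed)
median, beta_rtr = np.exp(ln_sa.mean()), ln_sa.std(ddof=1)

sa = np.linspace(0.05, 2.5, 50)
fragility = norm.cdf(np.log(sa / median) / beta_rtr)  # P(exceed | Sa)
print(f"median = {median:.2f} g, beta_RTR = {beta_rtr:.2f}")
```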
4 Sensitivity analysis and results
SA via the OVAT procedure has been used here to investigate the effect of variations in the modeling parameters on the structural responses. While there are different SA methodologies in local and global contexts for assessing the impact of variations in model parameters on the model responses [52], a local SA is used here to explore the sensitivity of the model to the variation of the parameters in the vicinity of their nominal (median) values. Sensitivity coefficients derived from a local SA are intuitive to interpret, computationally efficient, and their rankings are normally independent of the method employed; this allows easy comparison of results across different models and analysis methods. Readers are referred to Refs. [
10,
53] for more on global SA techniques. Thus, for the local sensitivity analysis, the median value and the median plus and minus one standard deviation have been used for each behavioral parameter. The perturbation results are shown in Fig.9. On the y-axis of this figure, the shear-link parameters are arranged in order of their importance to the structural responses at the LS and CP levels based on the ET results. The x-axis shows the IM value, taken as the $S_a(T_1)$ corresponding to exceeding the LS and CP performance levels. According to this ranking of the shear-link random variables, uncertainty in the basic strength deterioration parameter has the least effect on the seismic structural responses at both the LS and CP performance levels. Here, the seven behavioral parameters of the shear links in the four stories are assumed to be fully correlated; therefore, seven independent behavioral parameters are perturbed in the OVAT procedure. In this regard, the median structure and the frames with parameters perturbed by minus and plus one standard deviation (15 models in total) have been used in the sensitivity analysis, with each model subjected to the 5 ET records.
The four parameters ranked highest in Fig.9 are the most important behavioral parameters of the frame for the LS and CP performance levels. Consequently, according to these results and Ref. [1], the web thickness ($t_w$) and yield strength ($F_y$) of the shear link are the key variables in the variation of the seismic structural response, since the most influential behavioral parameters are implicit functions of $t_w$, the dominant geometric variable, and of $F_y$. The results extracted from the sensitivity analysis in this paper are comparable to those of the sensitivity analysis via IDA in [4].
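The OVAT loop described above amounts to re-running the ET analyses with one parameter at a time moved from its median to median times $e^{\pm\beta}$. The sketch below shows this bookkeeping; the parameter names, values, and the `run_et_analysis` stub are hypothetical stand-ins for the seven calibrated shear-link parameters and the OpenSees/ET analyses:

```python
import numpy as np

# Seven behavioral parameters (hypothetical names/medians) and assumed
# lognormal dispersions; 7 params x 2 perturbations + 1 median = 15 models.
params = {"theta_p": 0.08, "theta_pc": 0.06, "Lambda": 1.0,
          "V_max_ratio": 1.4, "c": 1.0, "V_eq_ratio": 1.1, "kappa": 0.9}
beta = {k: 0.3 for k in params}

def run_et_analysis(p):
    """Placeholder: median Sa(T1) at LS exceedance for parameter set p."""
    return 0.9 * (p["theta_p"] / 0.08) ** 0.15 * (p["V_max_ratio"] / 1.4) ** 0.1

base = run_et_analysis(params)
for name in params:
    for sign in (-1.0, +1.0):
        p = dict(params)
        p[name] = params[name] * np.exp(sign * beta[name])  # median x e^(+/-beta)
        delta = 100 * (run_et_analysis(p) - base) / base
        print(f"{name:12s} {sign:+.0f} sigma: {delta:+.1f}% change in Sa_LS")
```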
Fig.10 presents the structural responses (IDR) of the median structure and of the frames with parameters perturbed by minus and plus one standard deviation under the ETA20kd01 acceleration function. In this figure, the frames are schematically assumed to be on a shake table, and their behavior is monitored until they exceed the CP and LS performance levels; each frame exceeds these levels at a different time. The acceleration response spectra at these two ET times are also shown in the figure. From this figure and Fig.9, it can be seen that the IM ($S_a(T_1)$) corresponding to exceedance of the LS level changes by 4.2% and 2.1% for the frames with minus and plus one standard deviation, respectively, compared to the median structure; these changes are 6.8% and 9.8% for the CP level. As expected, the variations in structural responses due to the nonlinear modeling properties are larger at the CP level. Fig.11 shows the hysteresis behavior of a specific shear link in the median structure and in the structures with minus and plus one standard deviation. It can be seen that random variations in these parameters may alter the hysteretic behavior of the elements considerably, which may change the failure mechanism of the structure and the corresponding performance level.
5 Uncertainty propagation via the FE model
5.1 Correlation Latin hypercube sampling
There are various reliability-based procedures for incorporating structural modeling uncertainties into the seismic performance assessment of structures. For instance, sampling-based reliability procedures provide a straightforward way to estimate the effect of uncertainties on seismic structural responses with high accuracy [54]. In this paper, CLHS has been utilized to estimate the median spectral acceleration ($\hat{S}_a(T_1)$) corresponding to the LS and CP levels and the associated dispersion ($\beta$). CLHS can be defined by manipulating MC sampling to generate a near-random sample of parameters and reduce the number of samples required; readers are referred to Ref. [16] for more about this method. Fig.12 illustrates a 3-dimensional representation of the method for three variables, in which the number of samples increases over four steps until the whole space is filled; this is an example of the application of LHS. Lower accuracy will be achieved if larger intervals are considered. Therefore, an adequate number of samples should be used in CLHS to achieve acceptable accuracy; nevertheless, the number of required samples is much smaller than in brute-force random sampling, and the computational demand of CLHS can be considered affordable [
55,
56]. Therefore, according to previous studies by the authors [
4] and work by other researchers [
17,
18], 200 samples have been used in this study to propagate the uncertainties in modeling variables. The 200 samples for each variable are acquired based on their distribution presented in Tab.2.
Based on the results of the sensitivity analysis, the basic strength deterioration parameter has been eliminated from the problem as an unimportant behavioral parameter. Therefore, six behavioral parameters, according to Tab.1, are considered in the uncertainty assessment. The probability distribution of each parameter is represented graphically in Fig.13. As mentioned before, 200 samples generated by CLHS have been used to quantify the effects of modeling uncertainties on the structural responses; accordingly, 200 sampled structures have been analyzed under the 5 ETA20kd records. The results of the CLHS method will be compared with those of the proposed ANN-based method in the following section.
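Generating the 200 samples amounts to mapping the CLHS normal scores through the lognormal marginals of the six parameters. A sketch, reusing `clhs_normal_scores` from the earlier snippet; the medians and dispersions shown are placeholders standing in for the values of Tab.2:

```python
import numpy as np

# Placeholder medians and log-standard deviations for the six parameters.
medians = np.array([0.08, 0.06, 1.0, 1.4, 1.1, 0.9])
betas   = np.array([0.30, 0.35, 0.40, 0.15, 0.10, 0.20])

rng = np.random.default_rng(7)
z = clhs_normal_scores(200, 6, rng)        # defined in the CLHS sketch above
samples = medians * np.exp(z * betas)      # 200 x 6 lognormal realizations
print(samples.shape, np.round(np.median(samples, axis=0), 3))
```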
5.2 Combination of epistemic and aleatory uncertainties
As explained in the introduction, the sources of uncertainty in the problem under study are categorized into epistemic and aleatory uncertainties. RTR variability is the only source of aleatory uncertainty considered in this research, while epistemic uncertainty is mainly related to a lack of knowledge, e.g., uncertainty in the deterioration parameters, in the material properties, etc. The modeling uncertainties explained in the previous sections fall into this category. Various methods have been adopted to combine these uncertainties. The mean estimates approach is one such method used in deconvolving the uncertainties from different sources. In this simple method, denoted as the 'mean method', the median of the distribution of responses due to RTR variability is unchanged when incorporating modeling uncertainties, and only the variance increases by combining the logarithmic dispersions due to RTR and modeling uncertainties via the square root of the sum of the squares, $\beta_{total} = \sqrt{\beta_{RTR}^2 + \beta_{mod}^2}$ [57]. In this method, it is assumed that the median IM values associated with exceeding the LS and CP performance levels are random variables, and that their variability due to RTR effects ($\beta_{RTR}$) is independent of the variability due to modeling uncertainty ($\beta_{mod}$) [58]. It is worth mentioning that $\beta_{RTR}$ may be calculated by performing IDA or ET analyses on the median structure, as in Fig.6 and Fig.7. The confidence-interval approach has been used in some studies [
59,
60]. However, the resulting predictions by this method are highly dependent on the confidence level chosen [
6]. As a rational method, the confidence-interval approach is adopted here to generate a fragility curve considering both RTR and modeling uncertainties. Using this method, a combined fragility curve is calculated from the 200 fragilities obtained by the CLHS, as shown in Fig.14. In this figure, each gray fragility curve represents the RTR variability calculated by the ET method for one sample of the modeling parameters. In the combined fragility with 50% confidence, the IM corresponding to any probability is the expected value of the IMs from the 200 fragilities at that probability, as displayed in Fig.14. For 84% confidence, at any probability of exceedance there is a 16% probability that the actual IM value is less than the calculated fragility. Since the ET method underestimates the RTR variability of the IM, the RTR dispersions at the LS and CP levels may be adopted directly from the IDA analysis of the median structure (Fig.7), while the epistemic (modeling) uncertainty provides the distribution of the median IM values calculated by the ET method. The fragilities calculated by these approaches are compared in Fig.14.
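The confidence-interval combination described above can be sketched as follows: at each exceedance probability, the IM is read off each of the 200 sample fragilities, and the mean (50% confidence) or 16th percentile (84% confidence) of those IMs is taken. The sample medians and dispersion below are synthetic placeholders, not the study's results:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
medians_et = 0.9 * np.exp(0.15 * rng.standard_normal(200))  # 200 sample medians
beta_rtr = 0.35                                             # RTR dispersion

p = np.linspace(0.01, 0.99, 50)                 # exceedance probabilities
# IM achieving probability p on each sample's lognormal fragility (200 x 50):
im = medians_et[:, None] * np.exp(beta_rtr * norm.ppf(p)[None, :])

im_50 = im.mean(axis=0)                  # 50% confidence: expected IM at each p
im_84 = np.percentile(im, 16, axis=0)    # 84% confidence: 16% chance IM is lower
print(im_50[::10], im_84[::10])
```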
6 Artificial neural networks
Although the ET method was used in the previous sections as an efficient tool for the response-history analyses in CLHS, the repetitive nature of sampling techniques in uncertainty propagation calls for FA tools to be used in place of detailed nonlinear FE models of the structure. Thus, the application of a machine learning technique is another significant purpose of this research. Various studies in the literature have predicted structural responses using ANNs [
19,
21,
61]. For instance, RBFs are an efficient machine learning procedure that have been used for different purposes like FA [
62]. An easy and efficient training and design process, strong tolerance to input noise, and a good generalization capability are some of the advantages of RBFs that distinguish them from other machine learning methods [63]. RBFs are also capable of universal approximation and are a beneficial alternative to the commonly used multilayer perceptron (MLP), owing to their faster training process and more straightforward structure for FA, among other attributes [64]. These advantages make RBFs attractive for estimating the median values and dispersions corresponding to exceeding the LS and CP performance levels while accelerating the training process with higher accuracy. RBFs have been used in various engineering problems (e.g., clustering, time-series prediction, classification, and system control). As illustrated in Fig.15, the input layer, containing the behavioral parameters, connects via synapses to kernel functions; linear weights are then assigned, and their weighted summation approximates the target function. The main difference between RBF and MLP lies in the structure of the algorithms: an RBF passes the behavioral parameters to kernel (radial basis) activation functions first, whereas an MLP applies a weighted summation before the activation functions [65]. FA using RBFs thus provides a mathematical relationship for estimating the structural responses. Hence, to achieve the maximum efficiency of the trained networks, four RBFs have been trained on the response data from a number of structural samples, i.e., $\hat{S}_a(T_1)$ and $\beta$ corresponding to exceeding the LS and CP performance levels.
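A minimal sketch of such a network, with Gaussian kernels in a single hidden layer and linear output weights fitted by least squares, is given below. It illustrates the architecture of Fig.15 but is not the authors' trained model, and the data are synthetic:

```python
import numpy as np

def rbf_fit(X, y, centers, sigma):
    """Fit the linear output weights by least squares on the kernel features."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2 * sigma ** 2))          # n x m kernel activations
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)   # one hidden layer -> linear fit
    return w

def rbf_predict(X, centers, sigma, w):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2)) @ w

rng = np.random.default_rng(0)
X = rng.random((77, 6))                # 77 DOE training samples, 6 parameters
y = X @ rng.random(6) + 0.1 * rng.standard_normal(77)   # synthetic response
centers = X[rng.choice(77, 15, replace=False)]          # 15 hidden neurons
w = rbf_fit(X, y, centers, sigma=0.5)
print(rbf_predict(X[:3], centers, 0.5, w), y[:3])
```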
Design of Experiments (DOE) techniques are commonly considered for finding the optimal samples and reducing the amount of training data. The application of DOE procedures in improving neural network models has been investigated in some research studies [
66,
67]. DOE defines a variation scheme for the behavioral parameters in order to reveal their effect on the output parameters, using an efficient experimental plan with the minimum number of runs that yields the most information about the responses. In the present research, CCD and FrFD have been used to produce 77 samples of the six input parameters for ANN training; readers are encouraged to refer to the work by Pourreza et al. [8] for more about the method of designing the required samples. Considering the six input parameters and one output for each RBF network, the 77 samples have been used to train four ANNs with the Levenberg-Marquardt (LM) algorithm and kernel functions. The inputs are the random structural variables considered in Fig.15, and the outputs are $\hat{S}_{a,\mathrm{LS}}$, $\beta_{\mathrm{LS}}$, $\hat{S}_{a,\mathrm{CP}}$, and $\beta_{\mathrm{CP}}$.
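One way a 77-point design arises for six factors is a CCD with a full two-level factorial core (64 corner points), 12 axial points, and one center point; whether this is exactly the CCD/FrFD plan used in the paper is an assumption. A sketch in coded units:

```python
import itertools
import numpy as np

# Face-centered CCD for 6 factors (alpha = 1 assumed): 2^6 corners + 2x6
# axial points + 1 center = 77 runs in coded levels within [-1, 1].
corners = np.array(list(itertools.product([-1.0, 1.0], repeat=6)))   # 64 x 6
axial = np.vstack([v for i in range(6)
                   for v in (np.eye(6)[i], -np.eye(6)[i])])          # 12 x 6
center = np.zeros((1, 6))
design = np.vstack([corners, axial, center])
print(design.shape)   # (77, 6)

# Coded levels map to physical values, e.g., median * exp(level * beta)
# for a lognormal parameter with median and dispersion beta.
```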
Fig.15 shows the basic architecture of a neuron that receives the random input variables (behavioral parameters), with an RBF used as the activation (kernel) function. The ANNs considered in this paper, with their inputs, outputs, and numbers of neurons, are illustrated in Fig.16. The metrics used in this research to evaluate the performance of the trained ANNs in FA are the correlation coefficient (
R), Mean Square Error (
MSE) [
19], and Performance Index (
PI) [
68].
PI has been proposed by Gandomi and Roke [
68] to examine the performance of a trained model as a function of
R and the relative root mean square error (
RRMSE). According to the definition of PI, higher R values and lower RRMSE values, which indicate a more precise model, result in a lower PI.
PI values range from zero to positive infinity, while smaller values show better performance for the model. Their suggested value for the
PI of an acceptable model is
PI ≤ 0.2.
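For reference, the performance index can be written as follows; this is a reconstruction consistent with the description above and with the form commonly attributed to Gandomi and Roke [68], not a formula quoted verbatim from that reference:

$$PI = \frac{\mathrm{RRMSE}}{1 + R}, \qquad \mathrm{RRMSE} = \frac{1}{|\bar{y}|}\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}},$$

where $y_i$ and $\hat{y}_i$ are the observed and predicted responses and $\bar{y}$ is the mean of the observed values; higher $R$ and lower $\mathrm{RRMSE}$ both drive $PI$ toward zero.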
Several strategies have been proposed to avoid overfitting in mathematical modeling. In the present study, the data have been split into training, validation, and test sets, which makes it possible to monitor the network's performance on unseen data and to stop training once performance on the validation set starts to degrade while the training performance continues to improve, a sign of overfitting; this prevents the network from learning the noise in the training data. Moreover, regularization techniques such as weight decay are applied to penalize large weights in the network, which keeps the model simpler and reduces overfitting. Other methods, such as cross-validation, have also been implemented in some studies, such as Refs. [
10,
53], to ensure that the model does not overfit the data.
As can be seen in Fig.17, the trained ANNs for the different performance levels have acceptable accuracy. Tab.3 lists the estimated dispersions and median values together with their R, MSE, and PI metrics. According to Tab.3 and the PI values, the trained RBFs are reasonable, and overfitting is unlikely. Using the data in this table, any sign of overfitting can be sought by comparing these metrics (e.g., a training error significantly lower than the validation or test error). Even though acceptable results have been obtained for these samples, it is essential to test the trained ANNs; therefore, the test results on the performance of the trained mathematical relationships in predicting the structural responses are provided next.
Fifteen of the 200 CLHS samples were selected randomly and used to test each ANN and verify its performance. The test results for the four trained networks are shown in Tab.3. According to this table, this is an efficient procedure for estimating $\hat{S}_a(T_1)$ and $\beta$ at the LS and CP levels with minimal computational effort. Using the ANNs, it is possible to estimate $\hat{S}_a(T_1)$ and $\beta$ while considering epistemic and aleatory uncertainties.
To verify how the predicted values vary with the input parameters, a parametric analysis has been conducted on the four trained networks [
9,
11,
61,
69]. Using this parametric study, presented in Fig.18, the variation in the four response parameters ($\hat{S}_{a,\mathrm{LS}}$, $\beta_{\mathrm{LS}}$, $\hat{S}_{a,\mathrm{CP}}$, $\beta_{\mathrm{CP}}$) due to variations of the six random parameters may be investigated. It can be seen that the parameters identified as most important in the sensitivity analysis have the greatest effect on the variation of the response parameters. The results confirm that the trained ANNs are capable of capturing the effects of the random input parameters on the structural responses [
10,
53].
6.1 Fragility curves
After training the four ANNs to estimate $\hat{S}_a(T_1)$ and $\beta$ at the LS and CP levels, the method introduced in Ref. [6] has been followed here to combine the aleatory and epistemic uncertainties and generate the fragility curves. According to this method, the ANNs for $\hat{S}_a(T_1)$ and $\beta$ are used to regenerate 10000 fragilities for each of the LS and CP performance levels. The average of these fragilities is taken as the fragility incorporating both modeling and RTR uncertainties. At a given IM value on each fragility curve, the probability of exceeding the considered performance level reflects the RTR uncertainty, while the variation of these probabilities among the generated fragilities indicates the effect of modeling uncertainty. Fig.19 shows the combined fragility obtained by this method, in which the expected value of the probabilities at each spectral acceleration has been calculated. It can be seen that considering both RTR and modeling uncertainty decreases the median IM corresponding to exceedance of the LS and CP performance levels and increases the dispersion.
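The averaging described above can be sketched as follows; `predict_median` and `predict_beta` are hypothetical stand-ins for the trained RBFs, and the parameter sampling is illustrative:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

def predict_median(x):   # placeholder for the Sa-median RBF
    return 0.9 * np.exp(0.1 * (x.sum(axis=1) - 3.0))

def predict_beta(x):     # placeholder for the dispersion RBF
    return 0.35 + 0.02 * (x.sum(axis=1) - 3.0)

x = rng.random((10_000, 6))               # 10000 sampled parameter sets
med, beta = predict_median(x), predict_beta(x)

sa = np.linspace(0.05, 2.5, 60)
frag = norm.cdf(np.log(sa[None, :] / med[:, None]) / beta[:, None])
combined = frag.mean(axis=0)              # expected probability at each Sa
print(combined[::15])
```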
6.2 Comparison of the fragility curves
In Fig.20, the combined fragility curves obtained by the different methods are displayed and compared. It can be seen that the ANN-based approach provides an acceptable estimate of the fragility curve obtained by the CLHS method with much less computational effort. The estimated values are also compared in Tab.4. According to the CLHS method, adding the epistemic (modeling) uncertainty to the RTR uncertainty shifts the median IM by −4% at the LS level and by −5% at the CP level. In the case of the dispersions, the combination of the two sources of uncertainty causes a 17% increase in $\beta_{\mathrm{LS}}$ and a 20% increase in $\beta_{\mathrm{CP}}$. According to Tab.4, the ANNs have estimated these changes successfully, with a maximum 4% deviation from the CLHS results.
7 Conclusions
This study has addressed the issue of combining epistemic and aleatory uncertainties by using the ET method. For this goal, a 4-story EBF has been modeled in OpenSees, and a sensitivity analysis has been used to evaluate the importance of the modeling parameters. Next, the unimportant behavioral parameter has been eliminated to reduce the problem's dimension. Then, using CLHS, the effect of modeling uncertainties on the fragility curves has been quantified by estimating $\hat{S}_a(T_1)$ and the corresponding $\beta$. The use of the ET method provides the opportunity to assess the effects of uncertainties at two performance levels (LS and CP) without the need to redo the NTHAs. In the next step, DOE methods have been utilized to provide the most efficient training data for the ANNs. Then, RBFs were used to decrease the required computational effort by replacing the FE models with FA tools. Finally, using the RBFs, the values of $\hat{S}_a(T_1)$ and $\beta$ have been estimated and compared with those by CLHS. The results obtained in this work may be summarized as follows.
1) The sensitivity analysis reveals the minor importance of the basic strength deterioration parameter on the structural responses.
2) The pre-capping rotation capacity, post-capping rotation capacity, post-capping strength deterioration parameter, equivalent-to-nominal shear strength ratio, pre-capping strength deterioration, and peak-to-equivalent-nominal shear strength ratio have been identified as the most important behavioral parameters. As expected, the results show that the effect of these parameters is greater at the CP level.
3) ET analysis provides acceptable estimates of the median IM corresponding to exceeding both the LS and CP levels, with a maximum error of 8%, while it underestimates the RTR dispersion by nearly 50%. Therefore, this study focused on the dispersion variations while considering various sources of uncertainties. The authors are studying methods to improve the capability of the ET method in capturing the RTR variability.
4) Accounting for the modeling uncertainty in EBFs not only increases $\beta_{\mathrm{LS}}$ by 17% and $\beta_{\mathrm{CP}}$ by 20%, but also shifts the median IM values by −4% at the LS level and by −5% at the CP level.
5) ANN models are efficient tools for estimating the dispersion and median structural responses corresponding to the LS and CP performance levels, and they can reduce the computational demand substantially. In particular, using RBFs, with ET as the NTHA in the sensitivity analysis and for producing the RBF training samples, can decrease the computational effort to a considerable extent. The trained ANNs estimated the median IM and dispersion values within a maximum 4% deviation from the CLHS results.