Prediction of the Charpy V-notch impact energy of low carbon steel using a shallow neural network and deep learning
Si-wei Wu , Jian Yang , Guang-ming Cao
International Journal of Minerals, Metallurgy, and Materials ›› 2021, Vol. 28 ›› Issue (8) : 1309 -1320.
The impact energy prediction model of low carbon steel was investigated based on industrial data. A three-layer neural network, an extreme learning machine, and a deep neural network were compared with different activation functions, structure parameters, and training functions. Bayesian optimization was used to determine the optimal hyper-parameters of the deep neural network. The model with the best performance was applied to investigate the importance of process parameter variables on the impact energy of low carbon steel. The results show that the deep neural network obtains better prediction results than a shallow neural network because its multiple hidden layers improve the learning ability of the model. Among the models, the Bayesian-optimized deep neural network achieves the highest correlation coefficient of 0.9536, the lowest mean absolute relative error of 0.0843, and the lowest root mean square error of 17.34 J for predicting the impact energy of low carbon steel. Among the variables, the main factors affecting the impact energy of low carbon steel with a final thickness of 7.5 mm are the thickness of the original slab, the thickness of the intermediate slab, and the rough rolling exit temperature from the specific hot rolling production line.
prediction / shallow neural network / deep neural network / impact energy / low carbon steel
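The three evaluation metrics reported in the abstract (correlation coefficient, mean absolute relative error, and root mean square error) can be computed directly from measured and predicted impact energies. The sketch below uses only the Python standard library; the sample measured/predicted values are hypothetical and for illustration only, not data from the paper.

```python
import math

def impact_energy_metrics(y_true, y_pred):
    """Return (R, MARE, RMSE) for measured vs. predicted impact energy (J).

    R    : Pearson correlation coefficient between measured and predicted
    MARE : mean absolute relative error, mean(|t - p| / |t|)
    RMSE : root mean square error, sqrt(mean((t - p)^2))
    """
    n = len(y_true)
    mean_t = sum(y_true) / n
    mean_p = sum(y_pred) / n
    cov = sum((t - mean_t) * (p - mean_p) for t, p in zip(y_true, y_pred))
    var_t = sum((t - mean_t) ** 2 for t in y_true)
    var_p = sum((p - mean_p) ** 2 for p in y_pred)
    r = cov / math.sqrt(var_t * var_p)
    mare = sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    return r, mare, rmse

# Hypothetical measured vs. predicted impact energies (J), for illustration
measured = [180.0, 210.0, 150.0, 240.0, 195.0]
predicted = [172.0, 220.0, 158.0, 231.0, 200.0]
r, mare, rmse = impact_energy_metrics(measured, predicted)
```

A higher R with lower MARE and RMSE indicates better agreement between prediction and measurement, which is the criterion used to rank the shallow and deep models in the paper.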