Defect detection method of magnetic disk image based on improved convolutional neural network
Ming-hai Yao, Meng-li Ma, Jia-min Liu
Optoelectronics Letters, 2020, Vol. 16, Issue 5: 396-400.
This paper proposes an efficient method for detecting defects in magnetic disk images based on an improved convolutional neural network. We build a model named DiskNet on the basis of VGGNet-19, in which the optimal activation function is selected predictively through a weighted probability learning-curve model (WP-Model). First, Markov Chain Monte Carlo (MCMC) sampling is used to infer the predicted value and determine the prediction probability. Then, the evaluation point (EP) is determined from the effective information of the training curve. During DiskNet training, when the prediction probability exceeds the threshold, the network selects the current activation function; if the training epochs exceed the EP and the threshold has not been reached, the original activation function is used instead. Experimental results show that the proposed method achieves an accuracy of 96.9% in detecting defects on the magnetic disk image data set.
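The activation-selection procedure described in the abstract can be sketched as follows. This is only a minimal illustration under assumptions: the names wp_model_probability, CANDIDATE_ACTIVATIONS, PROB_THRESHOLD and EVALUATION_POINT are hypothetical placeholders, and the WP-Model's MCMC inference is replaced by a toy probability derived from the loss curve; it is not the authors' implementation.

    # Minimal sketch of the activation-selection control flow described in the
    # abstract. All names and constants below are illustrative assumptions.
    import math
    import random

    CANDIDATE_ACTIVATIONS = ["relu", "elu", "selu"]  # hypothetical candidate set
    ORIGINAL_ACTIVATION = "relu"   # fallback if the threshold is never reached
    PROB_THRESHOLD = 0.9           # assumed prediction-probability threshold
    EVALUATION_POINT = 30          # assumed epoch (EP) at which the decision is frozen
    TOTAL_EPOCHS = 100

    def wp_model_probability(loss_curve):
        """Placeholder for the WP-Model. The paper infers a predicted value and
        its probability with MCMC; here we just map the observed improvement of
        the loss curve to a toy probability."""
        if len(loss_curve) < 2:
            return 0.0
        improvement = loss_curve[0] - loss_curve[-1]
        return 1.0 / (1.0 + math.exp(-5.0 * improvement))

    def select_activation(train_one_epoch):
        """Accept the current candidate activation once its prediction
        probability exceeds the threshold; if the epochs pass the EP without
        reaching it, fall back to the original activation."""
        for candidate in CANDIDATE_ACTIVATIONS:
            losses = []
            for epoch in range(1, TOTAL_EPOCHS + 1):
                losses.append(train_one_epoch(candidate, epoch))
                if wp_model_probability(losses) > PROB_THRESHOLD:
                    return candidate      # confident enough: keep this activation
                if epoch >= EVALUATION_POINT:
                    break                 # past the EP without confidence: give up on it
        return ORIGINAL_ACTIVATION

    if __name__ == "__main__":
        # Toy training step: the loss decays with the epoch, plus a little noise.
        def fake_epoch(activation, epoch):
            return 1.0 / epoch + random.uniform(0.0, 0.05)

        print("Selected activation:", select_activation(fake_epoch))

In this sketch each candidate gets at most EVALUATION_POINT epochs before the search moves on, which mirrors the abstract's rule of reverting to the original activation when the EP is exceeded without the probability threshold being met.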