A froth velocity measurement method based on improved U-Net++ semantic segmentation in flotation process
Yiwei Chen, Degang Xu, Kun Wan
During flotation, the features of the froth image are highly correlated with the concentrate grade and the corresponding working conditions. Static features such as bubble color and size, and dynamic features such as froth velocity, differ markedly across working conditions. Extracting these features typically relies on the outcome of segmenting the froth image along bubble edges, which makes froth image segmentation the basis for studying its visual information. However, the lack of reliable labeled training data and the need to construct datasets and annotations manually make such studies difficult in mineral flotation. To address this problem, this paper constructs a tungsten concentrate froth image dataset and proposes a data augmentation network based on conditional generative adversarial nets (cGAN) together with a U-Net++-based edge segmentation network. The performance of the proposed algorithm is evaluated and compared with that of other algorithms. Finally, a phase-correlation-based velocity extraction method is proposed on top of the semantic segmentation results.
Keywords: froth flotation / froth segmentation / froth image / data augmentation / velocity extraction / image features
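As an illustration of the final step described in the abstract, the following is a minimal sketch of phase-correlation-based displacement estimation between two consecutive froth frames. This is not the paper's implementation: the function name, the default frame rate, and the plain NumPy formulation are assumptions made for illustration.

```python
import numpy as np

def froth_shift(prev_frame: np.ndarray, next_frame: np.ndarray, fps: float = 30.0):
    """Estimate froth displacement between two grayscale frames by phase correlation.

    Hypothetical sketch: the function name and fps default are illustrative,
    not values taken from the paper.
    """
    f1 = np.fft.fft2(prev_frame.astype(np.float64))
    f2 = np.fft.fft2(next_frame.astype(np.float64))
    # Normalized cross-power spectrum; its inverse FFT peaks at the inter-frame shift.
    cross = np.conj(f1) * f2
    cross /= np.abs(cross) + 1e-12           # keep only the phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # FFT indices wrap, so map shifts beyond half the frame size to signed displacements.
    h, w = corr.shape
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    speed = float(np.hypot(dx, dy)) * fps    # froth speed in pixels per second
    return (dx, dy), speed
```

In practice, one would typically apply a window (e.g., Hann) before the FFT to suppress wrap-around edge effects and average the estimated shift over many frame pairs; OpenCV's cv2.phaseCorrelate provides an equivalent routine with sub-pixel refinement.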