High-dimensional features of adaptive superpixels for visually degraded images
Feng-feng Liao, Ke-ye Cao, Yu-xiang Zhang, Sheng Liu
Optoelectronics Letters, 2019, Vol. 15, Issue 3: 231-235.
This study presents a novel and highly efficient superpixel algorithm, the depth-fused adaptive superpixel (DFASP), which generates accurate superpixels in visually degraded images. In many applications, particularly in real-world scenes, visual degradation such as motion blur, overexposure, and underexposure often occurs. Well-known color-based superpixel algorithms cannot produce accurate superpixels in degraded images because the degradation makes the color information ambiguous. To eliminate this ambiguity, we use both depth and color information to generate superpixels: we map the depth and color information into a high-dimensional feature space and then develop a fast multilevel clustering algorithm to produce superpixels. Furthermore, we design an adaptive mechanism that automatically adjusts the balance between color and depth information during pixel clustering. Experimental results demonstrate that DFASP outperforms state-of-the-art superpixel methods in terms of boundary recall, undersegmentation error, run time, and achievable segmentation accuracy.
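The abstract does not give DFASP's exact formulation, so the following is only a minimal illustrative sketch of the general idea it describes: fusing per-pixel color and depth into a joint feature vector with adaptively chosen weights, then clustering those features into superpixels. The adaptive weighting rule, the plain k-means stand-in for the paper's fast multilevel clustering, and all function names below are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only -- not the DFASP algorithm from the paper.
# Shows the general pipeline described in the abstract: weighted color+depth
# features per pixel, followed by a simple clustering step into superpixels.
import numpy as np

def fused_features(lab, depth, w_color, w_depth, w_xy):
    """Stack weighted color, depth, and spatial coordinates for every pixel."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.concatenate(
        [w_color * lab,                               # (h, w, 3) CIELAB color
         w_depth * depth[..., None],                  # (h, w, 1) depth
         w_xy * np.stack([xs, ys], axis=-1)],         # (h, w, 2) position
        axis=-1)
    return feats.reshape(-1, feats.shape[-1])         # (h*w, 6)

def adaptive_weights(lab, base=1.0):
    """Hypothetical adaptive rule: rely more on depth when color contrast is
    low (e.g. blur or over/underexposure makes color ambiguous)."""
    contrast = lab.std(axis=(0, 1)).mean()            # crude global color contrast
    w_color = base * contrast / (contrast + 1.0)
    w_depth = base - w_color + 0.1
    return w_color, w_depth

def kmeans_superpixels(feats, shape, n_segments=100, n_iter=10, seed=0):
    """Plain k-means over the fused features, standing in for the paper's
    fast multilevel clustering."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_segments, replace=False)]
    for _ in range(n_iter):
        # assign each pixel to its nearest cluster center
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for k in range(n_segments):
            mask = labels == k
            if mask.any():
                centers[k] = feats[mask].mean(axis=0)
    return labels.reshape(shape)

if __name__ == "__main__":
    h, w = 120, 160
    lab = np.random.rand(h, w, 3).astype(np.float32)    # placeholder CIELAB image
    depth = np.random.rand(h, w).astype(np.float32)     # placeholder depth map
    w_color, w_depth = adaptive_weights(lab)
    feats = fused_features(lab, depth, w_color, w_depth, w_xy=0.05)
    segments = kmeans_superpixels(feats, (h, w))
    print(segments.shape, segments.max() + 1)           # label map and segment count
```

In this sketch the depth channel keeps pixels separable even where degraded color is nearly uniform, which is the motivation the abstract gives for fusing the two cues.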