Contextual modeling on auxiliary points for robust image reranking

Ying LI, Xiangwei KONG, Haiyan FU, Qi TIAN

Front. Comput. Sci., 2019, 13(5): 1010-1022. DOI: 10.1007/s11704-018-7403-7
RESEARCH ARTICLE

Abstract

Image reranking is an effective post-processing step for adjusting the similarity order in image retrieval. As key components of the initial ranking list, the top-ranked neighbors of a given query usually play an important role in constructing the dissimilarity measure. However, the number of pertinent candidates varies from query to query, so queries with few ground-truth matches suffer from insufficient contextual information, and noise is consequently introduced when the k-nearest-neighbor rule is used to define the context. To alleviate this problem, this paper proposes auxiliary points, which are added as assistant neighbors in an unsupervised manner. These extra points help reveal implicit similarity in the metric space and cluster matched image pairs. By isometrically embedding each constructed metric space into Euclidean space, the image relationships on the underlying topological manifolds are locally represented by distance descriptions. Furthermore, by combining the Jaccard index with the auxiliary points, we present the contextual modeling on auxiliary points (CMAP) method for image reranking. With richer contextual activations, the Jaccard similarity coefficient defined over the local distribution yields more reliable outputs as well as more stable parameters. Extensive experiments demonstrate the robustness and effectiveness of the proposed method.
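The abstract outlines the main ingredients of CMAP: k-nearest-neighbor contexts, auxiliary points that enrich queries with short relevant lists, and a Jaccard coefficient computed over the enriched contexts. The Python sketch below only illustrates this general flavour of Jaccard-based contextual reranking; the function names, the pairwise distance-matrix input, and the neighbor-of-neighbor stand-in for auxiliary points are illustrative assumptions, not the authors' construction (CMAP builds auxiliary points via isometric embedding into Euclidean space).

import numpy as np

def knn_context(dist, idx, k):
    # Indices of the k nearest neighbors of image `idx` under the given
    # pairwise distance matrix (the item itself is usually ranked first).
    return np.argsort(dist[idx])[:k]

def jaccard_similarity(a, b):
    # Jaccard index between two contextual neighbor sets.
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def rerank_with_auxiliary_context(dist, query_idx, k=10, n_aux=3):
    # Re-rank all images by the Jaccard similarity between their k-NN
    # contexts and an enriched context of the query.
    # `dist` is an (N, N) pairwise distance matrix over query + gallery.
    n = dist.shape[0]
    q_neighbors = knn_context(dist, query_idx, k)
    q_context = set(q_neighbors.tolist())
    # Stand-in "auxiliary points": neighbors of the query's closest
    # neighbors (an assumption made here for illustration only).
    for nb in q_neighbors[:n_aux]:
        q_context |= set(knn_context(dist, nb, k // 2).tolist())
    scores = np.full(n, -np.inf)
    for i in range(n):
        if i == query_idx:
            continue
        i_context = set(knn_context(dist, i, k).tolist())
        scores[i] = jaccard_similarity(q_context, i_context)
    # Higher Jaccard similarity -> ranked earlier in the reranked list.
    return np.argsort(-scores)

# Toy usage: random features -> pairwise distances -> reranked order for query 0.
feats = np.random.rand(50, 128)
dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
order = rerank_with_auxiliary_context(dist, query_idx=0, k=10, n_aux=3)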

Keywords

image retrieval / unsupervised reranking / context construction / Jaccard distance / query expansion

Cite this article

Ying LI, Xiangwei KONG, Haiyan FU, Qi TIAN. Contextual modeling on auxiliary points for robust image reranking. Front. Comput. Sci., 2019, 13(5): 1010-1022. DOI: 10.1007/s11704-018-7403-7

RIGHTS & PERMISSIONS

Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature
