Applying rotation-invariant star descriptor to deep-sky image registration

Haiyang ZHOU, Yunzhi YU

Front. Comput. Sci., 2018, Vol. 12, Issue 5: 1013–1025. DOI: 10.1007/s11704-017-6495-9
RESEARCH ARTICLE


Abstract

Image registration is a critical step in many deep-sky image processing applications. These applications include image stacking to reduce noise or to achieve long-exposure effects within a short exposure time, image stitching to extend the field of view, and atmospheric turbulence removal. The most widely used approach to deep-sky image registration is the triangle- or polygon-based method, which is both memory- and computation-intensive. Deep-sky image registration mainly deals with translation and rotation caused by vibration of the imaging device and the Earth's rotation, with rotation being the more difficult problem. The best approach to this problem is to find corresponding rotation-invariant features across images. In this paper, we analyze the defects introduced by applying rotation-invariant feature descriptors to deep-sky image registration and propose a novel descriptor. First, a dominant orientation is estimated from the geometrical relationships between a described star and two neighboring stable stars. An adaptive speeded-up robust features (SURF) descriptor is then constructed, in which the local patch size adapts to the size of the described star. Finally, the proposed descriptor is formed by fusing star properties, geometrical relationships, and the adaptive SURF descriptor. Extensive experiments demonstrate that the proposed descriptor closes the gap left by applying traditional feature-based methods to deep-sky image registration and performs well compared with state-of-the-art descriptors.
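The abstract outlines three steps: estimating a dominant orientation from the directions to two neighboring stable stars, building a SURF-style descriptor over a patch whose size adapts to the described star, and fusing the result with star properties and geometric relationships. The Python sketch below illustrates only the first two ideas and is not the authors' implementation; the names Star, dominant_orientation, and adaptive_patch, the bisector-of-neighbor-directions rule, and the patch-scale factor are hypothetical choices made for illustration.

```python
# Minimal sketch (not the paper's code) of a neighbor-based dominant orientation
# and an adaptive local patch, assuming stars have already been detected and
# their centroids/radii estimated (e.g., from a PSF or ellipse fit).
import numpy as np
from dataclasses import dataclass

@dataclass
class Star:
    x: float        # centroid column (pixels)
    y: float        # centroid row (pixels)
    radius: float   # estimated star radius (pixels)
    flux: float     # integrated brightness

def dominant_orientation(star: Star, stable_stars: list) -> float:
    """Orientation (radians) fixed by the two nearest stable neighbors.

    Using neighboring stars instead of local gradients keeps the orientation
    insensitive to the star's own nearly isotropic intensity profile.
    """
    neighbours = sorted(
        (s for s in stable_stars if s is not star),
        key=lambda s: (s.x - star.x) ** 2 + (s.y - star.y) ** 2,
    )[:2]
    if len(neighbours) < 2:
        raise ValueError("need at least two stable neighbor stars")
    a1 = np.arctan2(neighbours[0].y - star.y, neighbours[0].x - star.x)
    a2 = np.arctan2(neighbours[1].y - star.y, neighbours[1].x - star.x)
    # Circular mean (bisector) of the two neighbor directions.
    return float(np.arctan2(np.sin(a1) + np.sin(a2), np.cos(a1) + np.cos(a2)))

def adaptive_patch(image: np.ndarray, star: Star, scale: float = 10.0) -> np.ndarray:
    """Local patch whose side length grows with the described star's size."""
    half = int(np.ceil(scale * star.radius))
    r, c = int(round(star.y)), int(round(star.x))
    return image[max(r - half, 0): r + half + 1, max(c - half, 0): c + half + 1]

# Example usage with synthetic detections (illustrative only).
image = np.random.rand(256, 256)
stars = [Star(50, 60, 2.0, 1.0), Star(120, 80, 1.5, 0.8), Star(200, 200, 3.0, 2.5)]
theta = dominant_orientation(stars[0], stars)
patch = adaptive_patch(image, stars[0])
```

In this sketch the descriptor itself (e.g., a SURF-like histogram computed on the rotated, adaptively sized patch) would be built on top of `patch` and `theta`; how the paper fuses star properties and geometric relationships into the final descriptor is described in the full text.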

Keywords

image registration / feature descriptor / deep-sky image / rotation-invariant descriptor

Cite this article

Haiyang ZHOU, Yunzhi YU. Applying rotation-invariant star descriptor to deep-sky image registration. Front. Comput. Sci., 2018, 12(5): 1013‒1025 https://doi.org/10.1007/s11704-017-6495-9


RIGHTS & PERMISSIONS

© 2018 Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature