RESEARCH ARTICLE

An ellipse detection method for 3D head image fusion based on color-coded mark points

  • Zhen GUO ,
  • Xu LIU ,
  • Han WANG ,
  • Zhenrong ZHENG
  • State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou 310027, China

Received date: 01 Jul 2012

Accepted date: 08 Aug 2012

Published date: 05 Dec 2012

Copyright

2014 Higher Education Press and Springer-Verlag Berlin Heidelberg

Abstract

In this paper, we propose a new kind of mark point coded by color and a new pixel-level quasi-ellipse detector. The method is especially applicable to three-dimensional (3D) panoramic head reconstruction. Images of adjacent perspectives can be stitched by matching the pasted color-coded mark points in the overlap area to calculate the transformation matrix. This paper focuses on how the color-coded mark points work and how to detect and match corresponding points across different perspectives. Tests based on original data obtained by structured light projection demonstrate the efficiency and accuracy of the method.

Cite this article

Zhen GUO , Xu LIU , Han WANG , Zhenrong ZHENG . An ellipse detection method for 3D head image fusion based on color-coded mark points[J]. Frontiers of Optoelectronics, 2012 , 5(4) : 395 -399 . DOI: 10.1007/s12200-012-0278-6

Introduction

Measurement of the 3D profile is important in both 3D modeling and 3D display, and has attracted considerable attention over the last decades [1,2]. Generally, to measure the whole 3D profile of an object, one needs to take multiple partially overlapping measurements and then fuse the data from the different perspectives to obtain the whole point cloud of the object in a common coordinate system. Fusing the measured data is a difficult task in 3D profile reconstruction. Several fusion methods are common. One uses accurately controlled devices: it relies on the rotation and translation matrices provided by the equipment, which must be rather accurate and is complicated to operate [3].
The iterative closest point (ICP) algorithm is another method, which solves the coordinate conversion matrix iteratively [4]. However, ICP applies only to irregular objects with strong curvature variations, and it offers neither high efficiency nor high accuracy [5].
Calculating the transformation matrix from mark points pasted in the overlap area is also a widely used method for image fusion [6,7]. Although the points must be pasted manually, the method is easy to operate and reasonably accurate [8]. Therefore, on the basis of mark points, this paper presents a new method for 3D multi-perspective image fusion.
The data source for image fusion in this paper is the 3D spatial information of a head model obtained by structured light projection. Structured light projection obtains the depth information of an object by projecting grating images from a digital projector and demodulating the stripes on the object surface [9]. Phase information (depth information) is acquired by a phase unwrapping algorithm, and in-plane position information is acquired by system calibration, so the original 3D spatial information of any point on the object surface can be obtained [10]. In the experiment, an original texture image without grating stripes must also be taken for every perspective. Image fusion is mostly based on these texture images and the mark points pasted in their overlap area.
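The phase unwrapping step can be illustrated with a minimal one-dimensional NumPy sketch (an illustration of the principle only, not the system's actual algorithm): the 2π jumps in the wrapped phase are removed by accumulating corrections between neighboring samples.

```python
import numpy as np

# Hypothetical illustration: a linearly increasing "true" phase,
# as might accumulate across a projected fringe pattern.
true_phase = np.linspace(0, 6 * np.pi, 200)

# The measured phase is only known modulo 2*pi (wrapped to (-pi, pi]).
wrapped = np.angle(np.exp(1j * true_phase))

# Unwrap by removing the 2*pi discontinuities between neighboring samples.
unwrapped = np.unwrap(wrapped)

# The unwrapped phase recovers the true phase up to a constant offset.
offset = true_phase[0] - unwrapped[0]
assert np.allclose(unwrapped + offset, true_phase, atol=1e-6)
```

The recovered (unwrapped) phase is then mapped to depth through the system calibration mentioned above.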

Mark points design

To find the correspondence between images of different perspectives, we need a series of points with unique characteristics that can easily be distinguished from the background image.
Mark points fall into two categories: coded and non-coded. Coded mark points are widely used in image fusion because, together with the position information, the coding provides the matching condition for multi-view connection. For this reason, we use coded mark points for image fusion in our experiment.
Circular points are commonly selected as mark points in the fusion process of a 3D measuring system, because the detection method for circular points greatly reduces the error probability of point detection [11]. Therefore, most coded mark points are composed of a circular point in the center and several coded points around it [12]; see Fig. 1. The central point is generally bigger than the coded points: it provides the position information, while the surrounding points provide the coding information through the presence or absence of a point at a given position. The arrangement of the coded points determines how many different kinds of mark points can be represented.
Fig.1 A common kind of coded mark points

When choosing the type of mark points for 3D head image fusion, we have the following considerations:
1) Limited by the effective area, coded points that are too big reduce the number of mark points, while points that are too small are hard to detect after projection and distortion;
2) Because of projective distortion, the absolute positions of the coded points in the captured image change, which reduces the matching accuracy;
3) The more circular points there are on the model, the longer the computation takes, while immediate processing is required;
4) The color of the head mostly consists of red, black and skin color, and the relationships among the RGB channels of these colors generally stay fixed no matter how the luminance changes.
Based on the considerations above, color-coded mark points were designed for this research. Parameters of the mark points are given in Table 1.
Tab.1 Parameters of color-coded mark points
point | values of RGB
type 1 | (0, 255, 0) green
type 2 | (0, 0, 255) blue
type 3 | (0, 255, 255) cyan
type 4 | (255, 0, 255) magenta
Actual size: 5.0 mm
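Because only the relative channel pattern matters (consideration 4 above), a simple nearest-pattern classifier suffices to identify the color type of a detected point. The following sketch is our illustration, not the paper's code; the sample pixel values are hypothetical.

```python
import numpy as np

# Nominal RGB values of the four mark point types from Table 1.
MARK_COLORS = {
    "type 1": (0, 255, 0),    # green
    "type 2": (0, 0, 255),    # blue
    "type 3": (0, 255, 255),  # cyan
    "type 4": (255, 0, 255),  # magenta
}

def classify_mark_color(rgb):
    """Assign an observed mean RGB triple to the nearest mark point type.

    The vectors are normalized first, so classification depends on the
    relative channel pattern rather than absolute brightness, making it
    tolerant to overall luminance changes.
    """
    v = np.asarray(rgb, dtype=float)
    v = v / (np.linalg.norm(v) + 1e-9)  # normalize out luminance
    best, best_d = None, np.inf
    for name, ref in MARK_COLORS.items():
        r = np.asarray(ref, dtype=float)
        r = r / np.linalg.norm(r)
        d = np.linalg.norm(v - r)
        if d < best_d:
            best, best_d = name, d
    return best

# A darker green patch (roughly half luminance) still maps to type 1.
assert classify_mark_color((10, 128, 12)) == "type 1"
```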
In addition, some principles apply when pasting the points on the model. First, any two mark points should be separated by a certain distance and should be non-collinear. Second, mark points should be pasted on relatively smooth areas such as the forehead or cheek, and points at similar positions should preferably be of different colors. Moreover, the number of points in the overlap region of adjacent perspectives should be no fewer than three.

Mark point detection and image fusion

Image preprocessing

Before image fusion, the images must be preprocessed, that is, the mark points must be detected from edge images; this task belongs to the object recognition domain.
In a luminance image, a drastic change of the luminosity function occurs on the borders of an object. The border of an object in the image is a property of a single pixel, computed from the image function in the neighborhood of that pixel. It is a vector with both magnitude and direction: the border magnitude is the gradient magnitude, and the border direction is the gradient direction rotated by -90°. The gradient direction is the direction of maximal growth of the function.
After experiments and comparison, the Canny edge detection algorithm proved to be the optimal method for image fusion based on color-coded mark points in our research [13]. The algorithm proceeds as follows:
1) Convolve the image with a Gaussian function with scale factor S;
2) Estimate the normal direction of the border at every pixel of the image;
3) Locate edges by non-maximum suppression;
4) Calculate the edge intensity;
5) Apply hysteresis thresholding to the edge image to eliminate false responses.
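The last step can be sketched in plain NumPy (our illustration only, with hypothetical thresholds; the paper does not give implementation details): weak edge pixels are kept only when they are 8-connected to a strong edge pixel, which removes isolated false responses.

```python
import numpy as np

def hysteresis_threshold(strength, low, high):
    """Keep weak edge pixels only when 8-connected to a strong edge pixel.

    `strength` is the edge-intensity map; `low`/`high` are the two
    thresholds (hypothetical values here). Isolated weak responses are
    eliminated, which is the false-response removal of the last step.
    """
    strong = strength >= high
    weak = strength >= low
    keep = strong.copy()
    while True:
        # Dilate the kept set by one pixel in all 8 directions...
        grown = keep.copy()
        grown[1:, :] |= keep[:-1, :]
        grown[:-1, :] |= keep[1:, :]
        grown[:, 1:] |= keep[:, :-1]
        grown[:, :-1] |= keep[:, 1:]
        grown[1:, 1:] |= keep[:-1, :-1]
        grown[:-1, :-1] |= keep[1:, 1:]
        grown[1:, :-1] |= keep[:-1, 1:]
        grown[:-1, 1:] |= keep[1:, :-1]
        grown &= weak  # ...but never grow outside the weak-edge set
        if np.array_equal(grown, keep):
            return keep
        keep = grown

# A weak chain attached to a strong pixel survives; an isolated weak
# pixel is discarded as a false response.
s = np.array([[0.9, 0.4, 0.4, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.4, 0.0]])
edges = hysteresis_threshold(s, low=0.3, high=0.7)
assert edges[0].tolist() == [True, True, True, False]
assert not edges[2, 2]
```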

Quasi-ellipse mark points detection

After obtaining the edge image, the next step is to distinguish the mark points from the background. A mark point is a single-color circular point on a white ground. Owing to perspective projection, it is distorted into a quasi-ellipse, which interferes with detection. Many algorithms exist for ellipse detection, such as the Hough transform, inertia-based methods, methods with geometrical conditions, and improved algorithms based on these [14-18]. However, these methods share some disadvantages: they iterate over invalid data points, are time-consuming, require large storage and computation, and, moreover, are not robust against quasi-arcs.
Therefore, we designed a new detection method for quasi-ellipse mark points on the 3D model, which is both more targeted and more efficient.
According to the position of a suspected point and the included angle between the tangent plane and the image plane, the projected size and deformation of a mark point can be predicted. Thus, a threshold can be set to verify whether a suspected point is a mark point. Figure 2 shows the flow chart for distinguishing mark points.
Fig.2 Flow chart for distinguishing mark points

When checking whether a certain area is a mark point, the criteria are as follows:
1) Use both the edge image and the texture image to estimate whether a pixel belongs to an edge according to its gray value, since in the edge image the value 0 (black) marks edges and 255 (white) marks the background;
2) Check whether the distance between boundaries is within the threshold of the mark point size;
3) Check whether the edge is closed;
4) Use an arc criterion to distinguish an ellipse from a rectangle.
In addition, by using the colors of the mark points, the same points can be matched between two perspective images directly and rapidly. This requires that, when pasting mark points, no two mark points of the same color appear within an area of a certain size. Thus, with the geometric and color constraints together, mark points on the 3D model can be detected and matched rapidly and automatically.
With the multiple criteria of edge, internal color, size and closed figure, the method detects mark points and finds their centers accurately, and it avoids interference from non-mark-point image regions.
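The size and shape checks (criteria 2 and 4) can be sketched as follows. This is our illustration with hypothetical thresholds; a polygon circularity measure stands in for the paper's unspecified arc criterion, exploiting the fact that circularity is 1.0 for a circle, close to it for a mild quasi-ellipse, and at most about 0.785 for a rectangle.

```python
import numpy as np

def circularity(points):
    """4*pi*A / P^2 for a closed polygon of ordered boundary points:
    1.0 for a circle, ~0.785 for a square, lower for elongated shapes."""
    p = np.asarray(points, dtype=float)
    q = np.roll(p, -1, axis=0)
    area = 0.5 * abs(np.sum(p[:, 0] * q[:, 1] - q[:, 0] * p[:, 1]))  # shoelace
    perim = np.sum(np.linalg.norm(q - p, axis=1))
    return 4 * np.pi * area / perim**2

def passes_mark_criteria(points, min_size=5.0, max_size=40.0, round_thresh=0.82):
    """Sketch of criteria 2) and 4) for one candidate closed boundary.

    The extent check bounds the candidate's projected size; the roundness
    check separates quasi-ellipse outlines from rectangle-like ones.
    All thresholds here are hypothetical, not from the paper.
    """
    p = np.asarray(points, dtype=float)
    extent = p.max(axis=0) - p.min(axis=0)
    if not (min_size <= extent.max() <= max_size):
        return False
    return circularity(p) > round_thresh

# A mildly distorted circle (quasi-ellipse) passes; a square of a
# comparable size fails the roundness check.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ellipse = np.c_[12 * np.cos(t), 10 * np.sin(t)]
square = np.array([[0, 0], [20, 0], [20, 20], [0, 20]], dtype=float)
assert passes_mark_criteria(ellipse)
assert not passes_mark_criteria(square)
```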

Point cloud data fusion

After obtaining matched mark point pairs, we can use mature algorithms such as SVD (singular value decomposition) [19] or the quaternion method [20]. Through this process, we obtain the transformation matrix, comprising a rotation matrix and a translation matrix, between two perspective images.
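The SVD route admits a compact sketch (a minimal NumPy implementation of the method of Arun et al. [19] under its usual assumptions: at least three non-collinear matched pairs; the determinant check that rejects reflections is one reading of SVD's self-checking behavior).

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Least-squares rotation R and translation t with Q ~ P @ R.T + t,
    following the SVD method of Arun et al. [19]. P and Q are (N, 3)
    arrays of matched mark point coordinates, N >= 3 and non-collinear.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # reject a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known rotation/translation from synthetic matched points.
rng = np.random.default_rng(0)
P = rng.normal(size=(6, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_transform_svd(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

With noisy real mark points the same code returns the least-squares optimal rigid transform rather than an exact recovery.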
Because of transformation error, the transformed data cannot in general be made fully consistent with the data in the original image. To eliminate the gap in the overlap area of the transformed image, an equalization method is applied in the subsequent processing.

Experimental result and evaluation

Using a turntable to take photos from different perspectives, we obtained the original images for panoramic reconstruction. Figure 3 shows two original texture images from adjacent perspectives, and Fig. 4 shows the edge images of the two texture images obtained with the Canny edge detection algorithm.
Fig.3 Original texture images from two adjacent perspectives. (a) Perspective 1; (b) perspective 2

Fig.4 Edge images of two perspectives. (a) Perspective 1; (b) perspective 2

Each image is 500×550 pixels; the detected corresponding mark points are listed in Table 2.
Tab.2 Detected corresponding mark points in each perspective
perspective 1 | perspective 2
(565,263,233) | (454,262,151)
(566,286,238) | (454,285,163)
(617,363,204) | (502,362,192)
(643,356,169) | (539,355,208)
(575,477,191) | (473,473,140)
(550,481,207) | (419,383,42)
After obtaining matched point pairs, we chose the SVD method to calculate the transformation matrix, comprising the rotation matrix R and the translation matrix t, because of the self-monitoring property of SVD. The final reconstructed head model based on the two images above is shown in Fig. 5.
Fig.5 Reconstruction head model

To evaluate the transformation quality of the point cloud data of perspective 2, the transformation errors are listed in Table 3.
Tab.3 Comparison of mark point coordinates before and after transformation
perspective 1 | perspective 2 before transformation | perspective 2 after transformation | transforming error/%
(565,263,233) | (454,262,151) | (558.7992,267.7609,234.4135) | 1.1167
(566,286,238) | (454,285,163) | (571.3145,289.9258,236.5492) | 1.1067
(617,363,204) | (502,362,192) | (612.2411,371.3796,209.5037) | 2.1167
(643,356,169) | (539,355,208) | (646.4414,348.9348,176.1394) | 2.1167
(575,477,191) | (473,473,140) | (576.1719,490.5505,196.7524) | 1.8500
(550,481,207) | (419,383,42) | (542.5946,484.5465,208.3486) | 0.7867

Conclusions

In this paper, we have presented a way to splice images from different perspectives based on point cloud data obtained by structured light projection. According to the characteristics of the 3D head model, we proposed a new method based on pasted color-coded mark points. The experiments show that this method has several advantages: it occupies less space than other position-coded mark points and is thus more flexible; it avoids the loss of accuracy caused by deformation of coded points; and it improves computational efficiency. Moreover, the colors of the mark points allow corresponding points to be matched rapidly and automatically. The areas covered by mark points can be filled in by interpolation after image fusion.

Acknowledgements

This work was supported by research grants from the National Basic Research Program of China (973 Program) (No. 2009CB320803), the National Natural Science Foundation of China (Grant No. 61177015), and the Fundamental Research Funds for the Central Universities of China. We thank the State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, for providing the necessary support.
1
Fang Z Y. The research of three-dimensional face reconstruction system. Dissertation for the Master Degree. Shanghai: East China Normal University, 2008 (in Chinese)

2
D'Apuzzo N. Overview of 3D surface digitization technologies in Europe. In: Proceedings of SPIE-The International Society for Optical Engineering, 2006, 6056: 605605-605606

3
Wan R, Chen C Y, Yang A K. Application of registration technique in 3D reconstruction. Journal for Instrument and Apparatus User, 2008, 15(2): 55-56 (in Chinese)

4
Xie Z X, Xu S. A survey on the ICP algorithm and its variants in registration of 3D point clouds. Periodical of Ocean University of China, 2010, 40(1): 99-103 (in Chinese)

5
Gu J B. Research on fusion and reperforating of 3D scanning system. Dissertation for the Master Degree. Nanjing: Southeast University, 2008, 8-16 (in Chinese)

6
Ouyang X B, Zong Z J, Xiong H Y. An automatic registration method for point-clouds based on marked points. Journal of Image and Graphics, 2008, 2(13): 298-301 (in Chinese)

7
Yang W G. Mark point recognition in 3D scanning system and stereo matching of face image. Dissertation for the Master Degree. Nanjing: Southeast University, 2008, 8-16 (in Chinese)

8
Liang Y B, Deng W Y, Luo X P, Lv N G. Automatic registration method of multi-view 3D data based on marked points. Journal of Beijing Information Science and Technology University, 2010, 25(1): 30-33 (in Chinese)

9
Zhang S. Recent progress on real time 3D measurement using digital fringe projection techniques. Optics and Lasers in Engineering, 2010, 48(2): 149-158

10
Ma S P. The calibration of fringe projection 3D measurement systems and an improved calibration algorithm. Dissertation for the Master Degree. Xi’an: Xi’an Technological University, 2011 (in Chinese)

11
Yin Y K, Liu X L, Li A M, Peng X. Sub-pixel location of circle target and its application. Infrared and Laser Engineering, 2008, 37(4): 47-50 (in Chinese)

12
Ma Y B, Zhong Y X, Zheng L. Design and recognition of coded targets for 3D registration. Journal of Tsinghua University (Sci & Tech), 2006, 46(2): 169-171 (in Chinese)

13
Su Q. Research on circle detection algorithm and its application in the measurement of concentricity. Dissertation for the Master Degree. Guilin: Guangxi Normal University, 2008. (in Chinese)

14
McLaughlin R A. Randomized Hough transform: improved ellipse detection with comparison. Pattern Recognition Letters, 1998, 19(3-4): 299-305

15
Zhang S C, Liu Z Q. A robust, real-time ellipse detector. Pattern Recognition, 2005, 38(2): 273-287

16
Xue T, Sun M, Zhang T, Wu B. Complete approach to automatic identification and sub-pixel center location for ellipse feature. Journal of Optoelectronics·Laser, 2008, 19 (8): 1076-1078 (in Chinese)

17
Li Z Q, He Y P. Fast randomized multi-circle detection based on local search. Application Research of Computers, 2008, 25(2): 469-472 (in Chinese)

18
Zhang H, Da F P, Xing D K. Algorithm of centre location of ellipse in optical measurement. Journal of Applied Optics, 2008, 29(6): 905-911 (in Chinese)

19
Arun K S, Huang T S, Blostein S D. Least-squares fitting of two 3-D point sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1987, 9(5): 698-700

20
Horn B K P. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A, 1987, 4(4): 629-642
