RESEARCH ARTICLE

Lens distortion correction based on one chessboard pattern image

  • Yubin WU,
  • Shixiong JIANG,
  • Zhenkun XU,
  • Song ZHU,
  • Danhua CAO
  • School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan 430074, China

Received date: 13 Jun 2014

Accepted date: 05 Sep 2014

Published date: 18 Sep 2015

Copyright

2014 Higher Education Press and Springer-Verlag Berlin Heidelberg

Abstract

This paper proposes a chessboard corner detection method for correcting camera lens distortions, including radial distortion, decentering distortion and prism distortion. The proposed method achieves a high corner detection rate. An iterative procedure is then used to optimize the distortion parameters so as to minimize the distortion residual. In this method, first, non-distorted points are estimated from four points near the image center; second, the Levenberg-Marquardt (LM) nonlinear optimization algorithm is adopted to calculate the distortion parameters, with which the image is corrected; third, the corner points are recalculated on the corrected image, and the previous two steps are repeated until the distortion parameters converge. Results show that the iterative procedure makes the impact of the slight distortion around the image center negligible, and the average distortion residual per line is about 0.3 pixels.

Cite this article

Yubin WU, Shixiong JIANG, Zhenkun XU, Song ZHU, Danhua CAO. Lens distortion correction based on one chessboard pattern image[J]. Frontiers of Optoelectronics, 2015, 8(3): 319-328. DOI: 10.1007/s12200-015-0453-7

Introduction

Precise camera calibration is fundamental in computer vision and vision metrology. A camera is often modeled as an ideal pin-hole camera, which forms an image on the focal plane without distortion. In practice, however, lens distortion makes the actual image deviate from the ideal one.
There are several distortion models, depending on the type of lens. In this paper, the radial, decentering and prism distortion models proposed by Brown [1,2] are considered.
Several methods have been proposed to identify these distortion models. The first uses the known 3D world coordinates of the object [3]; its result can be inaccurate when the camera intrinsic and extrinsic parameters are estimated at the same time. The second uses point correspondences between different views [4]; it is based on the fundamental matrix and does not require the intrinsic and extrinsic parameters explicitly, but finding correct point correspondences across multiple views is difficult. The third method, which estimates the distortion without any camera parameters, is based on projective invariants [5-7], and an increasing number of studies rely on it. The most widely used projective invariant is the straight line, which remains straight under any view of a distortion-free pin-hole camera. This approach requires prior knowledge of the scene, since the straight lines or other invariant objects must be located in the distorted image. The fourth method is also based on correspondences between image points and world points [8], but uses a planar invariant instead: planar calibration pattern points and non-distorted image points are related by a homography matrix, and the distortion model parameters are calculated to make the correspondence fit best.
There are two different ways to compute the distortion models: backward mapping and forward mapping. Zhang [4] used forward mapping in both model definition and computation, while Hartley and Kang [9] used backward mapping.
In this paper, a new camera calibration method is presented that uses only one chessboard pattern image to solve the distortion models, covering the three main types of lens distortion: radial, decentering and prism distortion. It has four steps. First, image feature points are detected by a chessboard corner detection algorithm. Second, the non-distorted feature point coordinates are calculated from the feature points detected on the distorted image. Third, nonlinear optimization is performed to obtain the distortion model. Last, the former three steps are iterated until convergence. The method is based on the observation that the distortion near the optical axis is tiny, so the iterative procedure can reduce the influence of this residual distortion. Therefore, the non-distorted feature points can be calculated from the near-center feature points of the distorted image.
The paper is organized as follows. Section 2 describes the camera lens distortion model. Section 3 proposes the method of distortion model calculating. In Section 4, several experimental results are reported on both real and synthetic data. The paper ends with some concluding remarks.

Lens distortion model

Lens distortion makes the actual image differ from the ideal pin-hole image. Three forms of lens distortion, namely radial distortion, decentering distortion and thin prism distortion, are taken into account in this paper [8]. The distortion can be formulated as follows:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x^* \\ y^* \end{bmatrix} + \begin{bmatrix} \Delta x^* \\ \Delta y^* \end{bmatrix}. \tag{1}$$
where $[x, y]^T$ is the ideal (non-distorted) image point, $[x^*, y^*]^T$ is the actual (distorted) image point, and $[\Delta x^*, \Delta y^*]^T$ is the amount of distortion. Different forms of lens distortion contribute different terms to the distortion.

Radial distortion

Radial distortion is caused by inconsistent magnification across the field of view and is the most significant part of lens distortion; many studies deal only with radial distortion [5,7,10].
The amount of radial distortion is circularly symmetric about the optical center, as shown in Fig. 1, which means there is no distortion at the optical center. In general, the farther an image point is from the optical axis, the greater the distortion. Radial distortion can be expressed as follows:
$$\begin{cases} \Delta_{rx}(x^*, y^*) = k_1 x^* \big((x^*)^2 + (y^*)^2\big) + k_2 x^* \big((x^*)^2 + (y^*)^2\big)^2, \\ \Delta_{ry}(x^*, y^*) = k_1 y^* \big((x^*)^2 + (y^*)^2\big) + k_2 y^* \big((x^*)^2 + (y^*)^2\big)^2, \end{cases} \tag{2}$$
where $k_1$, $k_2$ are the coefficients of the radial distortion model.
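As a concrete sketch, the radial model above can be evaluated directly; the coefficients used below are illustrative values only, not calibration results:

```python
def radial_distortion(x, y, k1, k2):
    """Radial displacement for a point (x, y) given in pixel
    coordinates relative to the optical center."""
    r2 = x * x + y * y  # squared distance from the optical center
    return (k1 * x * r2 + k2 * x * r2 ** 2,
            k1 * y * r2 + k2 * y * r2 ** 2)

# No distortion at the optical center, growing displacement with radius:
print(radial_distortion(0.0, 0.0, 3.0e-6, -5.8e-13))    # (0.0, 0.0)
print(radial_distortion(200.0, 100.0, 3.0e-6, -5.8e-13))
```

Note that the displacement depends only on the radius, which is what makes the distortion circularly symmetric.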
Fig.1 (a) Barrel distortion image; (b) pincushion distortion image


Decentering distortion

Decentering distortion is due to non-strict collinearity of the optical centers of the lens elements, as shown in Fig. 2. The decentering distortion model is given in Eq. (3).
$$\begin{cases} \Delta_{dx}(x^*, y^*) = p_1 \big(3 (x^*)^2 + (y^*)^2\big) + 2 p_2 x^* y^*, \\ \Delta_{dy}(x^*, y^*) = p_2 \big((x^*)^2 + 3 (y^*)^2\big) + 2 p_1 x^* y^*, \end{cases} \tag{3}$$
where $p_1$, $p_2$ are the coefficients of the decentering distortion model.
Fig.2 Image with decentering distortion


Thin prism distortion

Thin prism distortion arises from imperfections in lens design and manufacturing as well as camera assembly, as shown in Fig. 3. The thin prism distortion model is described in Eq. (4).
$$\begin{cases} \Delta_{px}(x^*, y^*) = s_1 \big((x^*)^2 + (y^*)^2\big), \\ \Delta_{py}(x^*, y^*) = s_2 \big((x^*)^2 + (y^*)^2\big), \end{cases} \tag{4}$$
where $s_1$, $s_2$ are the coefficients of the thin prism distortion model.
Fig.3 Image with thin prism distortion


In this paper, all three forms of distortion are considered, so the total amount of distortion is the sum of the three:
$$\begin{cases} \Delta x^*(x^*, y^*) = \Delta_{rx}(x^*, y^*) + \Delta_{dx}(x^*, y^*) + \Delta_{px}(x^*, y^*), \\ \Delta y^*(x^*, y^*) = \Delta_{ry}(x^*, y^*) + \Delta_{dy}(x^*, y^*) + \Delta_{py}(x^*, y^*). \end{cases} \tag{5}$$

Distortion correction with one chessboard pattern image

The proposed method consists of the following steps: 1) preprocess the chessboard image and extract the coordinates of the corner points; 2) reconstruct the non-distorted corner coordinates from the distorted image; 3) initialize the distortion model parameters using the Levenberg-Marquardt (LM) nonlinear optimization algorithm; 4) iterate to minimize the distortion residual; 5) correct the image by the forward-mapping method.

Corner points detection on distortion image

The performance of distortion correction depends on the accuracy of the corner point coordinates. In this paper, a new corner detection method for chessboard pattern images is proposed, which achieves a high corner detection rate.
The proposed method detects corner points according to the dissimilar gray-scale distribution between corner regions and non-corner regions. It defines three operators to measure the dissimilarity along different directions, as follows: S is the centrosymmetric operator, V is the vertical symmetry operator, and H is the horizontal symmetry operator.
$$\begin{cases} s(x_0, y_0) = \dfrac{1}{\mathrm{card}(N)} \sum\limits_{(x,y) \in N} \big| p(x,y) - p(2x_0 - x,\ 2y_0 - y) \big|, \\[4pt] v(x_0, y_0) = \dfrac{1}{\mathrm{card}(N)} \sum\limits_{(x,y) \in N} \big| p(x,y) - p(x,\ 2y_0 - y) \big|, \\[4pt] h(x_0, y_0) = \dfrac{1}{\mathrm{card}(N)} \sum\limits_{(x,y) \in N} \big| p(x,y) - p(2x_0 - x,\ y) \big|, \end{cases} \tag{6}$$
where $N = \mathrm{neighborhood}(x_0, y_0)$ is the $8 \times 8$ rectangular region with point $(x_0, y_0)$ at its center, $p(x, y)$ is the gray value at $(x, y)$, and $\mathrm{card}(\cdot)$ returns the number of pixels in the region.
Define M as the pixel response, whose value is determined by the three operators. The response value at each pixel is calculated by the following steps:
1) compute s, v and h by the three operators;
2) if all three results lie within the pre-defined ranges, the final response is the result of operator S; otherwise, M is set to zero:
$$M = \begin{cases} s, & s \in [s_{\min}, s_{\max}] \ \text{and}\ v \in [v_{\min}, v_{\max}] \ \text{and}\ h \in [h_{\min}, h_{\max}], \\ 0, & \text{otherwise}. \end{cases} \tag{7}$$
Median filtering is used to remove isolated response points, which are false detections caused by noise. The corner coordinate is then obtained as the centroid of each connected region of the response M, computed as follows:
$$\mathrm{centroid} = \frac{\sum_{m \in \text{connected region}} m \cdot \mathrm{position}_m}{\sum_{m \in \text{connected region}} m}, \tag{8}$$
where $m \in M$, connected regions are detected by 8-adjacency, and $\mathrm{position}_m$ is the coordinate of the point with response value $m$.
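The three symmetry operators defined above can be sketched as follows; this is a minimal illustration on a plain 2D list of gray values, with the neighborhood size fixed at 8 × 8 and the thresholding into M omitted:

```python
def symmetry_operators(img, x0, y0, half=4):
    """s, v, h at pixel (x0, y0): mean absolute gray difference between
    each neighborhood pixel and its centro-, vertically and horizontally
    reflected counterpart. img is indexed img[y][x]."""
    s = v = h = 0.0
    n = 0
    for y in range(y0 - half, y0 + half):
        for x in range(x0 - half, x0 + half):
            p = img[y][x]
            s += abs(p - img[2 * y0 - y][2 * x0 - x])  # centrosymmetric S
            v += abs(p - img[2 * y0 - y][x])           # vertical symmetry V
            h += abs(p - img[y][2 * x0 - x])           # horizontal symmetry H
            n += 1
    return s / n, v / n, h / n
```

At a chessboard corner the diagonally opposite quadrants share the same gray level, so s stays small while v and h are large; the pre-defined ranges of the response rule exploit exactly this contrast.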
For a distorted image, the distortion of the four corner points around the image center is tiny. Non-distorted points are therefore reconstructed from these four corner points using the geometric regularity of the chessboard (all blocks have the same size), as shown in Fig. 4 and defined in Eq. (9).
$$\begin{cases} x(i, j) = x(0, 0) + j \big(x(0, 1) - x(0, 0)\big) + i \big(x(1, 0) - x(0, 0)\big), \\ y(i, j) = y(0, 0) + j \big(y(0, 1) - y(0, 0)\big) + i \big(y(1, 0) - y(0, 0)\big), \end{cases} \tag{9}$$
where (i, j) is the index of the corner in the grid, i and j may be negative integers, and the four center points are (0,0), (0,1), (1,0) and (1,1).
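Equation (9) extrapolates the whole ideal grid from three of the four center corners; a minimal sketch, with hypothetical coordinates chosen for a 40 px block size:

```python
def ideal_grid_point(c00, c01, c10, i, j):
    """Non-distorted corner (i, j) reconstructed from the center corners
    (0,0), (0,1) and (1,0) as in Eq. (9); each corner is an (x, y) tuple,
    and i, j may be negative integers."""
    x = c00[0] + j * (c01[0] - c00[0]) + i * (c10[0] - c00[0])
    y = c00[1] + j * (c01[1] - c00[1]) + i * (c10[1] - c00[1])
    return x, y

# With 40 px square blocks around the image center:
print(ideal_grid_point((376, 240), (416, 240), (376, 280), 2, -1))  # (336, 320)
```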
To minimize the influence of the residual distortion of the four center points, this paper introduces an iterative process. The distortion of the four points decreases after image correction; the corrected image is then treated as the input distorted image of the next iteration, from which the non-distorted points are reconstructed again. This procedure is iterated until the four points remain unchanged after correction.
Fig.4 (a) Distortion image; (b) the dots are the detected corners in image (a) and the circles are the ideal point locations.


Distortion model calculation

All three forms of distortion vanish at the optical center, so the region near the optical center is essentially distortion-free.
Once the distorted and non-distorted points are obtained, the total amount of distortion can be computed by Eq. (1).
Expanding Eq. (5) yields the whole model formed by the three distortions:
$$\begin{cases} \Delta x(x^*, y^*) = k_1 x^* \big((x^*)^2 + (y^*)^2\big) + k_2 x^* \big((x^*)^2 + (y^*)^2\big)^2 + \big[p_1 \big(3 (x^*)^2 + (y^*)^2\big) + 2 p_2 x^* y^*\big] + s_1 \big((x^*)^2 + (y^*)^2\big), \\ \Delta y(x^*, y^*) = k_1 y^* \big((x^*)^2 + (y^*)^2\big) + k_2 y^* \big((x^*)^2 + (y^*)^2\big)^2 + \big[p_2 \big((x^*)^2 + 3 (y^*)^2\big) + 2 p_1 x^* y^*\big] + s_2 \big((x^*)^2 + (y^*)^2\big). \end{cases} \tag{10}$$
In this paper, the optical center is set to the image center. A different center only changes the distortion model values and does not influence the result of distortion correction. The model can then be written as the linear equation
$$X - X^* = A P, \tag{11}$$
where
$$A = \begin{bmatrix} x^*\big((x^*)^2 + (y^*)^2\big) & x^*\big((x^*)^2 + (y^*)^2\big)^2 & 3(x^*)^2 + (y^*)^2 & 2 x^* y^* & (x^*)^2 + (y^*)^2 & 0 \\ y^*\big((x^*)^2 + (y^*)^2\big) & y^*\big((x^*)^2 + (y^*)^2\big)^2 & 2 x^* y^* & (x^*)^2 + 3(y^*)^2 & 0 & (x^*)^2 + (y^*)^2 \end{bmatrix},$$
$$P = [k_1\ k_2\ p_1\ p_2\ s_1\ s_2]^T, \quad X = [x\ y]^T, \quad X^* = [x^*\ y^*]^T.$$
Here P is the vector of model parameters, X is the non-distorted position, and X* is the distorted position.
The model parameters P can be calculated by LM nonlinear optimization [11]. The objective function is defined as follows:
$$\frac{1}{2} \left[ \left\| X_0 - X(P, X_0^*) \right\|^2, \ \ldots, \ \left\| X_j - X(P, X_j^*) \right\|^2, \ \ldots, \ \left\| X_m - X(P, X_m^*) \right\|^2 \right] = 0, \tag{12}$$
where $X_j$ is a non-distorted pixel coordinate and $X(P, X_j^*)$ is the pixel coordinate obtained from the distorted image through Eq. (11).
The LM algorithm requires the Jacobian matrix, which is easy to obtain from Eq. (12).
Then, the procedure for calculating the distortion parameters is:
1) use corner detection to obtain the corner point coordinates;
2) calculate the non-distorted point locations from the near-center points;
3) use the LM method to optimize the distortion model parameters;
4) calculate the distortion residual, and repeat steps 1), 2) and 3) until the distortion residual converges.
The parameters at convergence are the final solution of the proposed method.
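Because Eq. (11) is linear in P, the parameters can be initialized (or, for noise-free data, solved outright) by ordinary least squares before any LM refinement. The sketch below builds the normal equations of A P = X - X* and solves them by Gaussian elimination; it is an illustration under that assumption, not the paper's LM implementation:

```python
def distortion_rows(xs, ys):
    """The two rows of A in Eq. (11) contributed by one distorted point
    (x*, y*), for parameters P = [k1, k2, p1, p2, s1, s2]."""
    r2 = xs * xs + ys * ys
    return (
        [xs * r2, xs * r2 ** 2, 3 * xs ** 2 + ys ** 2, 2 * xs * ys, r2, 0.0],
        [ys * r2, ys * r2 ** 2, 2 * xs * ys, xs ** 2 + 3 * ys ** 2, 0.0, r2],
    )

def solve_parameters(distorted, ideal):
    """Least-squares estimate of P from (distorted, ideal) point pairs
    via the normal equations (A^T A) P = A^T (X - X*)."""
    m = [[0.0] * 7 for _ in range(6)]  # augmented normal equations
    for (xs, ys), (x, y) in zip(distorted, ideal):
        for row, b in zip(distortion_rows(xs, ys), (x - xs, y - ys)):
            for i in range(6):
                for j in range(6):
                    m[i][j] += row[i] * row[j]
                m[i][6] += row[i] * b
    # Gauss-Jordan elimination with partial pivoting
    for c in range(6):
        piv = max(range(c, 6), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(6):
            if r != c and m[c][c] != 0.0:
                f = m[r][c] / m[c][c]
                for k in range(c, 7):
                    m[r][k] -= f * m[c][k]
    return [m[i][6] / m[i][i] for i in range(6)]
```

In practice the coordinates should be expressed relative to the image center (and ideally normalized), since the high-order columns of A grow quickly with the radius and degrade the conditioning of the normal equations.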

Distortion image correction

The image is corrected with the distortion parameters obtained in the previous steps. This procedure contains two steps.
1) Coordinate transfer: transfer the pixel coordinates from the distorted image to the non-distorted image (forward mapping, as shown in Fig. 5) or from the non-distorted image to the distorted image (backward mapping). This paper uses the forward-mapping method to rectify the image.
Fig.5 Forward-mapping from distortion image to correction image


2) Gray-scale interpolation: in this paper, bilinear interpolation is used to reconstruct the corrected image.
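A minimal sketch of the interpolation step (the bookkeeping that forward mapping also needs, such as accumulating scattered destination pixels, is omitted here):

```python
def bilinear(img, x, y):
    """Bilinear gray-scale interpolation at the non-integer position
    (x, y); img is a 2D list indexed img[row][col], and the caller must
    ensure 0 <= x < width - 1 and 0 <= y < height - 1."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
    bot = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy

# Halfway between four pixels, the result is their average:
print(bilinear([[0, 100], [200, 100]], 0.5, 0.5))  # 100.0
```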

Experiments

Corner points detection algorithm

The performance of the proposed corner detection algorithm is tested by comparison with the chessboard corner detection algorithm in OpenCV. As shown in Fig. 6, the proposed method achieves a higher detection rate and is more accurate than the OpenCV method.
Fig.6 White rectangles are the detected corners: (a) the proposed method; (b) the OpenCV method


Synthetic data experiment

The performance of the proposed method is tested in several experiments. First, the proposed method is used to estimate the parameters of artificially distorted images. We distorted the original image (Fig. 7) by adding all three forms of distortion, and then corrected the results with the proposed method.
Fig.7 Origin image without distortion


Fig.8 (a) and (b) are images with different distortion parameters; (c) and (d) are the images corrected by the proposed method, with the parameters shown in Table 1


Tab.1 Comparison of the man-made distortion parameters with the results of the proposed method

| | Figure 8(a) | Figure 8(b) |
| --- | --- | --- |
| man-made distortion model | k1 = 3.0e-6, k2 = -5.8e-13, p1 = -2.4e-5, p2 = 2.86e-5, s1 = 6.5e-6, s2 = -7.2e-6 | k1 = -2.3e-6, k2 = 4.3e-13, p1 = 5.0e-5, p2 = -3.4e-5, s1 = 7.8e-6, s2 = 5.6e-6 |
| corrected image | Figure 8(c) | Figure 8(d) |
| model estimated by the proposed method | k1 = 3.67e-6, k2 = -3.47e-12, p1 = -1.99e-5, p2 = 2.36e-5, s1 = -1.26e-6, s2 = 2.87e-7 | k1 = -2.67e-6, k2 = 1.37e-13, p1 = 4.90e-5, p2 = -2.79e-5, s1 = 1.48e-5, s2 = -6.87e-6 |
The method gives a good estimate of the parameters $k_1$, $p_1$ and $p_2$, which are the main factors of the distortion. The corrected images are very similar to the original image without distortion, as shown in Fig. 8.
Synthetic data are used to further analyze the performance of the proposed method. The synthetic data are set up as Gao and Yin did in Ref. [8], as shown in Fig. 9(a): the optical axis is the z-axis, the horizontal direction on the image plane is the x-axis, and the vertical direction on the image plane is the y-axis.
Fig.9 Performance of the proposed method with the synthetic data, dots are distorted points and circles are non-distorted points. (a) The distorted image; (b) the corrected image by the proposed method


Fig. 10 shows the three methods' performance versus noise. In this experiment, Gaussian noise with zero mean and standard deviation σ is added to the distorted points, with σ ranging from 0.1 to 1.5 pixels. ERROR is defined as the root mean square error in Eq. (13) and describes the correction accuracy. For each noise level, 100 images are used to evaluate the performance. Brown's method does not correct prism distortion, which indicates that a method that does not consider the entire distortion model cannot achieve a good image correction.
$$\mathrm{ERROR} = \sqrt{\frac{\sum_i \big[(x_d - x_r)^2 + (y_d - y_r)^2\big]}{n}}, \tag{13}$$
where $n$ is the number of samples, $(x_d, y_d)$ is the test data and $(x_r, y_r)$ is the real data.
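Eq. (13) in code form, assuming the test and ground-truth corners come as lists of (x, y) pairs:

```python
import math

def correction_error(test_pts, real_pts):
    """Root mean square point error of Eq. (13)."""
    sq = sum((xd - xr) ** 2 + (yd - yr) ** 2
             for (xd, yd), (xr, yr) in zip(test_pts, real_pts))
    return math.sqrt(sq / len(test_pts))

print(correction_error([(0, 0), (3, 4)], [(0, 0), (0, 0)]))  # sqrt(25/2) ≈ 3.536
```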
Fig.10 Comparison of the ERROR of the proposed method with Gao's and Brown's methods versus different noise levels


To test the robustness of the proposed method, the image data are rotated around the z-axis or x-axis. The results are shown in Figs. 11 and 12. Figure 11 shows the ERROR when the chessboard is rotated around the z-axis: the correction accuracy is similar for different rotation angles. In Fig. 12, even though the rotation around the x-axis is up to 7.5°, the ERROR is similar to that of Gao's method.
Fig.11 ERROR of different rotations around the z-axis of the image versus noise levels


Fig.12 ERROR versus different rotations around the x-axis of the image


The proposed method converges very fast. In these experiments, the number of iterations is about 2 to 4, and the total time to solve the distortion model is about 10 ms (image size 752 × 480, on a PC with a 2.5 GHz Core i3 CPU).

Real data experiment

Real distorted images are taken with Pentax CCTV lenses (F1.2) with focal lengths of 4 and 8 mm, captured by a smart camera based on a TMS320DM6437 DSP. Figure 7 is the photographed object. Figure 13 shows the pictures taken with the different lenses.
Fig.13 (a) Picture taken with the 4 mm lens, image size 752 × 480; (b) picture taken with the 8 mm lens, image size 640 × 480


The proposed method is used to correct the distorted images in Fig. 13, and the results are shown in Fig. 14. The method straightens the collinear points of the pattern in the image. Table 2 presents the solutions in detail. The distortion residual is evaluated as the distance between points and lines: since the points in each column and row are actually collinear, the distortion residual is the average distance between the points and the fitted line. First, the least squares method is used to estimate the line function. Second, the absolute distances from the points to the line are computed. The average absolute distance of all the points indicates the distortion error, referred to as the root mean square error (RMSE). The maximum distortion rate is the maximum point-to-line distance divided by the image diagonal length.
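The residual evaluation above can be sketched as follows. A total-least-squares line fit is assumed here (the paper only states "least squares"), since corner columns can be near-vertical and an ordinary y-on-x regression would then be ill-posed:

```python
import math

def line_residuals(points):
    """Fit a line through supposedly collinear corners and return the
    absolute point-to-line distances used as the distortion residual."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # direction of the total-least-squares line from the 2x2 scatter matrix
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # line direction angle
    nx, ny = -math.sin(theta), math.cos(theta)    # unit normal of the line
    return [abs((p[0] - mx) * nx + (p[1] - my) * ny) for p in points]
```

Averaging the returned distances over all rows and columns gives the reported residual, and their maximum divided by the image diagonal gives the maximum distortion rate.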
Fig.14 (a) Corrected image taken with 4 mm lens; (b) corrected image taken with 8 mm lens


Tab.2 Experiment results with the proposed method

| | lens 1 (f = 4 mm) | lens 2 (f = 8 mm) |
| --- | --- | --- |
| corrected model | k1 = 8.86e-7, k2 = 3.97e-12, p1 = 1.23e-5, p2 = 5.64e-5, s1 = 1.43e-5, s2 = -2.23e-5 | k1 = 7.72e-7, k2 = 1.98e-14, p1 = 1.05e-8, p2 = 2.89e-8, s1 = 1.37e-8, s2 = 2.15e-8 |
| RMSE | 0.34 pixel | 0.25 pixel |
| maximum distortion rate | 0.13% | 0.10% |
Table 2 indicates that the proposed method is effective: the corrected image shows little distortion, and even the maximum distortion is rather small.

Conclusions

This paper proposed a new method for lens distortion correction. Only one chessboard pattern image is used to solve the three distortion models, which makes the implementation convenient. To minimize the influence of the tiny distortion of the four center points, an iterative procedure was introduced to optimize the distortion model and re-evaluate the non-distorted points. Both synthetic and real data were used to test the proposed method. The synthetic data experiments showed that the proposed method performs well under noise, and that considering the whole distortion model yields more accurate lens distortion correction. In addition, a chessboard rotation experiment demonstrated the robustness of the method. The real data experiments also indicated that the proposed method is effective.
References

1. Abellard A, Bouchouicha M, Ben Khelifa M. A genetic algorithm application to stereo calibration. In: Proceedings of 2005 IEEE International Symposium on Computational Intelligence in Robotics and Automation. Espoo, Finland, 2005, 285–290
2. Brown D C. Close-range camera calibration. Photogrammetric Engineering, 1971, 37(8): 855–866
3. Weng J, Cohen P, Herniou M. Camera calibration with distortion models and accuracy evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(10): 965–980
4. Zhang Z. On the epipolar geometry between two images with lens distortion. In: Proceedings of the 13th International Conference on Pattern Recognition (ICPR). Vienna, 1996, 407–411
5. Du X, Li H, Zhu Y. Camera lens radial distortion correction using two-view projective invariants. Optics Letters, 2011, 36(24): 4734–4736
6. Goljan M, Fridrich J. Estimation of lens distortion correction from single images. In: Proceedings of SPIE 9028, Media Watermarking, Security, and Forensics. 2014, 90280N
7. Prescott B, McLean G. Line-based correction of radial lens distortion. Graphical Models and Image Processing, 1997, 59(1): 39–47
8. Gao D, Yin F. Computing a complete camera lens distortion model by planar homography. Optics & Laser Technology, 2013, 49: 95–107
9. Hartley R, Kang S B. Parameter-free radial distortion correction with center of distortion estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(8): 1309–1321
10. Park J, Byun S C, Lee B U. Lens distortion correction using ideal image coordinates. IEEE Transactions on Consumer Electronics, 2009, 55(3): 987–991
11. Lourakis M I A. levmar: Levenberg-Marquardt nonlinear least squares algorithms in C/C++
