Modelling of beam propagation and its applications for underwater imaging
Yuzhang CHEN, Kecheng YANG, Xiaohui ZHANG, Min XIA, Wei LI
Wuhan National Laboratory for Optoelectronics, College of Optoelectronic Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
Received date: 26 Apr 2011
Accepted date: 14 Jun 2011
Published date: 05 Dec 2011
Copyright 2014 Higher Education Press and Springer-Verlag Berlin Heidelberg
In order to bring underwater imaging to the best possible level, an imaging model based on beam propagation was established. The model includes not only laser beam propagation affected by absorption and scattering, but also the effects of underwater turbulence and the diffraction limit of the sensors. With this model, approximately quantified optical transfer functions (OTFs) were studied. Under this framework, image enhancement, restoration and super-resolution reconstruction (SRR) approaches can be extended by incorporating underwater optical properties through the OTF or point spread function (PSF) of the imaging system. Experimental results show that both the imaging range and the image quality can be effectively enhanced, which is critical in underwater imaging and detection.
Yuzhang CHEN, Kecheng YANG, Xiaohui ZHANG, Min XIA, Wei LI. Modelling of beam propagation and its applications for underwater imaging[J]. Frontiers of Optoelectronics, 2011, 4(4): 398-406. DOI: 10.1007/s12200-011-0219-9
Introduction
Underwater imaging has not been a new concept since 1963, when Duntley [1] found an underwater transmission window at 470-580 nm, in which the attenuation of blue-green laser light is far less than that of light at other wavelengths. The range and quality of underwater imaging are of vital importance to military and civilian applications. However, factors such as absorption, scattering and turbulence significantly reduce the light arriving at the sensor and cause blurring and distortion in the resulting images. Over the past years, efforts have been made to overcome these limitations, such as laser scanning imaging [2], polarized laser imaging [3] and range-gated imaging [4], which can effectively eliminate backscatter. Appropriately increasing the laser intensity, enhancing the detection rate of the sensor and reducing the error rate are also effective, but they greatly increase the system cost.
Therefore, to further improve the range and quality of underwater imaging beyond the hardware limitations, digital image processing is needed. Image denoising, enhancement, restoration and super-resolution reconstruction (SRR) techniques are widely used. The main challenge in digital image processing is the choice of accurate methods. Prior knowledge of the system [5] can effectively enhance the performance of image processing tasks such as restoration. The system response, a typical piece of prior knowledge, can be derived either from underwater optical theory or from the image itself by measuring a resolution chart at different distances. Modelling of underwater beam propagation [6-8] based on underwater optical theory can yield the system response, such as the point spread function (PSF). This has been studied since the 1970s, when several research groups [6] suggested that linear system theory can be applied to underwater beam transmission. However, most works use traditional PSF models, which cannot fully suit particular situations such as range-gated imaging.
Based on linear transmission theory, an underwater imaging model suitable for range-gated imaging systems was established in this study, including laser beam propagation affected by absorption and scattering, and the effects of underwater turbulence and the diffraction limit of sensors. The model-derived modulation transfer function (MTF) and PSF serve as the theoretical basis for underwater image processing.
Underwater imaging model
The purpose of the underwater imaging model is to predict the image intensity at each pixel as a function of the illumination, the reflectance properties of objects, the medium, and the sensor characteristics. The main difference between image formation in air and in water is that, underwater, the interaction between light and the medium must be considered.
Underwater laser beam transmission
Generally speaking, a typical underwater imaging system consists of three parts: the light source (laser), the image acquisition system (CCD or intensified CCD (ICCD)), and the object. Light on the image plane of the sensor is formed by three components: the nonscattered or direct light reflected by the object, the forward-scattered light from the object, and the back-scattered light from the medium (water). The total irradiance on the image plane is therefore the linear sum of the direct, forward-scattered, and back-scattered components:

$E_T(x,y) = E_d(x,y) + E_f(x,y) + E_b(x,y).$
Figure 1 shows the light paths of the three components. Due to the attenuation of water, the direct light component received by the image sensor decays exponentially with distance from the light reflected by the object:

$E_d(x,y) = E_0(x,y)\,e^{-kl},$

where l is the imaging distance between the transmitting plane and the target plane, k is the volume attenuation coefficient, and $E_0$ is the scalar irradiance upon the reflectance plane of the object. Typical values of k in clear ocean, coastal and turbid waters are 0.05, 0.20 and 0.33 m^-1, respectively [9]. According to the theory of light intensity and luminous flux, and regarding the underwater target as a Lambert emitter, the direct light (lux) arriving at the receiving plane can be calculated from the following quantities: the coordinates on the image plane; the imaging distance l; the peak power and pulse width of the laser, whose product gives the energy of a single pulse; the divergence angle (half angle) of the laser beam after expansion; the optical transmittances of the transmitting and receiving optical systems; the relative aperture D/f of the receiving optics, with D the receiving diameter and f the focal length; the distance between the transmitting and receiving systems; the average reflectivity of the target; and the attenuation coefficient and refractive index n of water.
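The exponential decay of the direct component can be evaluated numerically. The following sketch (function name is our own) applies the Beer-Lambert attenuation above at the three typical attenuation coefficients cited in the text:

```python
import numpy as np

def direct_attenuation(E0, k, l):
    """Direct-light irradiance surviving a path of length l (m) in water
    with volume attenuation coefficient k (m^-1): E_d = E_0 * exp(-k*l)."""
    return E0 * np.exp(-k * l)

# Fraction of reflected light surviving a 35 m path (the distance used
# in the experiment) for clear ocean, coastal and turbid water.
for name, k in [("clear ocean", 0.05), ("coastal", 0.20), ("turbid", 0.33)]:
    print(f"{name}: {direct_attenuation(1.0, k, 35.0):.3e}")
```

Even in clear ocean water the single-pass loss at 35 m is substantial, which is why the contrast transmittance analysis below treats the imaging distance as a dominant parameter.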
Since the forward-scattered light is scattered by the water after being reflected by the object, according to Fourier optics the forward-scattered component can be calculated as the convolution of the direct light with the PSF of water. An empirical expression for the PSF of water [10,11] is obtained as the inverse Fourier transform of an exponential decay in angular frequency, parameterized by the attenuation coefficient k of water, an empirical constant G related to k, an empirical damping factor of scattering B, and the propagation distance l in the water. This empirical PSF is the computational basis of our model. As a result, the forward-scattered light arriving at the sensor can be calculated as

$E_f(x,y) = E_d(x,y) * h(x,y),$

where h denotes the empirical PSF of water and * denotes the convolution operation.
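The convolution with the water PSF is most conveniently carried out in the frequency domain. The sketch below demonstrates the operation with a placeholder low-pass water MTF; the roll-off shape and the constants are illustrative assumptions, not the empirical fit of Refs. [10,11]:

```python
import numpy as np

def forward_scattered(E_direct, mtf_water):
    """Forward-scattered component as the direct image filtered by the
    water MTF: E_f = IFFT( FFT(E_d) * MTF ).  mtf_water must be sampled
    on the same 2-D frequency grid that np.fft.fft2 produces."""
    return np.real(np.fft.ifft2(np.fft.fft2(E_direct) * mtf_water))

def toy_water_mtf(shape, k=0.2, l=10.0, damping=5.0):
    """Placeholder low-pass water MTF: exponential roll-off with radial
    frequency, normalised to 1 at DC.  The constants are illustrative
    stand-ins for the paper's empirical parameters."""
    fy = np.fft.fftfreq(shape[0])
    fx = np.fft.fftfreq(shape[1])
    psi = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return np.exp(-damping * k * l * psi)
```

Because the placeholder MTF equals 1 at zero frequency, the filter redistributes energy (blurring a point source) without changing the total irradiance, which is the qualitative behaviour expected of forward scattering.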
The back-scattered light is ignored in many studies because it is reduced by separating the sensor from the light source [12] and by range-gating. However, it exists even within the gating time and cannot be fully eliminated, so to make the model more accurate we still consider its effect. For the calculation of the back-scattered light, the water can be divided into several lamellas; the distance of each lamella from the image plane of the sensor then follows from the number of lamellas n and the imaging distance along the receiving axis, which differs from l, the distance along the transmitting axis. As with the forward-scattered light, according to Fourier optics the back-scattered component of the ith lamella arriving at the image plane can be derived as the convolution of the direct backlight of that lamella with the PSF of water. It is composed of the ith direct component and a scattered component, where the light intensity of the ith lamella depends on the volume scattering function, and the parameters G, k and the system transmittances are defined as in the calculation of the direct light. The volume scattering function has several forms developed by different researchers, such as Duntley [13], Dolin et al. [14] and Wells [15]. In the model developed in this paper, the divergence angle (half angle) of the laser beam and the relative aperture of the receiving optics are below 10°, which fits the range limitation of Wells' theory, in which the volume scattering function is expressed in terms of the total attenuation coefficient, the scattering albedo, and a parameter related to the mean scattering angle [8]. As a result, the back-scattered light arriving at the image plane is the summation of the back-scattered intensities from all the water lamellas.
For range-gated imaging, if the gating delay is set properly the back-scattered component comes only from the water around the target; taking into account the angle between the receiving axis and the transmitting axis, the back-scattered component can then be calculated from the time integration over the gating time.
Calculation of contrast transmittance and modulation transfer function (MTF)
Of the three components of light received by the sensor, the direct and forward-scattered components carry the information of the target, while the back-scattered component is background light carrying various noise and no useful information. The contrast transmittance of the image and the MTF of the medium (water) can therefore be defined as mathematical operations on the light components in the image at a location (x, y). To simplify the calculation of the contrast transmittance, some constant parameters determined by the environment, such as the average reflectivity of the target and the attenuation coefficient and refractive index of water, are abbreviated as K1, K2, etc. The expressions for the light components then reduce to functions of position, from which the contrast transmittance can be derived; the remaining parameters are defined as in the calculation of the light components.
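As an illustration of how the components combine into a contrast measure, the sketch below computes a signal-to-total ratio from the three components; this particular ratio is our assumption standing in for the paper's simplified expression with the constants K1, K2:

```python
def contrast_transmittance(E_d, E_f, E_b):
    """Illustrative contrast transmittance: fraction of the received
    light that carries target information (direct + forward-scattered)
    relative to the total including back-scatter.  This specific ratio
    is an assumed stand-in for the paper's exact expression."""
    signal = E_d + E_f
    return signal / (signal + E_b)
```

The ratio captures the qualitative behaviour discussed next: anything that boosts the information-bearing components (laser power, aperture) raises it, while added back-scatter (longer range) lowers it.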
We can see that increasing the peak power of the laser pulse and the relative aperture of the receiving optics (D/f) enhances the contrast transmittance, while increasing the distance between the sensor and the target (l) has the opposite effect. As a result, based on the calculation of the contrast transmittance, hardware parameters can be adjusted for the purpose of image enhancement; for instance, increasing the laser power can improve the image quality. The calculated contrast transmittance can also be used for underwater image evaluation: a higher contrast transmittance means that more information is contained in the image, which is of better quality.
Absorption and scattering are not the only factors hindering underwater imaging; underwater turbulence can also severely limit underwater visibility. Hou et al. [7] gave the optical transfer function (OTF) of underwater optical turbulence using the Kolmogorov turbulence model [16], expressed in terms of the angular spatial frequency, the transmission range, the mean wavelength for underwater transmission, and the seeing parameter; the optical turbulence strength depends on a constant, the dissipation rate of temperature or salinity variances, and the kinetic energy dissipation rate. In non-turbulent environments, the MTF of underwater optical turbulence is approximately equal to 1.
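The Kolmogorov-type turbulence OTF can be sketched in its standard long-exposure form, exp[-3.44 (λψ/r0)^(5/3)], with ψ the angular spatial frequency and r0 the seeing parameter; treating this exact form, and the sample values in the test, as assumptions rather than the paper's fitted expression:

```python
import numpy as np

def turbulence_otf(psi, wavelength, r0):
    """Long-exposure Kolmogorov OTF: exp[-3.44 * (wavelength*psi/r0)^(5/3)].
    psi: angular spatial frequency (cycles/rad); r0: seeing parameter (m).
    This is the standard atmospheric form; its use for water follows the
    approach of Hou et al. [7]."""
    psi = np.asarray(psi, dtype=float)
    return np.exp(-3.44 * (wavelength * psi / r0) ** (5.0 / 3.0))
```

The 5/3 exponent comes from the Kolmogorov structure function; all medium-specific physics is folded into the seeing parameter r0, so a non-turbulent medium (large r0) gives an OTF near 1 at all frequencies of interest.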
The diffraction limit of the sensor's optical system is also an important factor in the MTF. When the optical system contains only one main lens, the diffraction-limit factor can be defined as [17]

$\mathrm{MTF}_{\mathrm{diff}}(f) = \frac{2}{\pi}\left[\arccos\!\left(\frac{f}{f_c}\right) - \frac{f}{f_c}\sqrt{1-\left(\frac{f}{f_c}\right)^{2}}\,\right], \quad f \le f_c,$

where f denotes the spatial frequency and $f_c$ is the optical cutoff frequency at the image plane, which has the form

$f_c = \frac{D}{\lambda f_0},$

where $f_0$ denotes the focal length, D represents the diameter of the lens, and $\lambda$ is the wavelength of operation. Since the CCD sensor is by far the most common detector in range-gated imaging, the MTF of the CCD sensor, which depends on the size of its pixels, must also be included:

$\mathrm{MTF}_{\mathrm{CCD}}(f) = \left|\frac{\sin(\pi a f)}{\pi a f}\right|,$

where a denotes the pixel size and f represents the spatial frequency.
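Both sensor-side factors are closed-form expressions and can be evaluated directly. The sketch below implements the circular-aperture diffraction MTF and the pixel-aperture (sinc) MTF of the CCD; function names are ours:

```python
import numpy as np

def mtf_diffraction(f, f_c):
    """Diffraction-limited MTF of a circular aperture:
    (2/pi) * [arccos(f/fc) - (f/fc) * sqrt(1 - (f/fc)^2)] for f <= fc,
    and 0 beyond the cutoff (enforced here by clipping f/fc to 1)."""
    x = np.clip(np.asarray(f, dtype=float) / f_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))

def mtf_ccd(f, pixel_size):
    """Pixel-aperture MTF of a CCD: |sinc(a*f)| with a the pixel pitch.
    np.sinc(x) is the normalised sinc, sin(pi*x)/(pi*x)."""
    return np.abs(np.sinc(pixel_size * np.asarray(f, dtype=float)))
```

Both curves equal 1 at zero frequency and fall monotonically toward their respective cutoffs, which is the behaviour plotted in Fig. 2.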
Therefore, the MTF of the whole system can be obtained by multiplying the MTFs of each of the factors described above:

$\mathrm{MTF}_{\mathrm{sys}} = \mathrm{MTF}_{\mathrm{medium}} \cdot \mathrm{MTF}_{\mathrm{turb}} \cdot \mathrm{MTF}_{\mathrm{diff}} \cdot \mathrm{MTF}_{\mathrm{CCD}}.$
The curves of these MTFs as functions of spatial frequency are shown in Fig. 2.
Fig.2 Comparison of relative MTFs of different factors: (a) MTF contribution from medium; (b) MTF contribution from turbulence; (c) MTF contribution from diffraction; (d) MTF contribution from CCD sensor; (e) MTF of the whole system
The optical devices of the imaging system, such as lenses and apertures, usually have circular symmetry; therefore, the PSF and MTF of the system can be obtained from each other using the Hankel transform [18], which has the form

$\mathrm{PSF}(r,l) = 2\pi \int_0^{\infty} \mathrm{MTF}(f,l)\, J_0(2\pi f r)\, f\, \mathrm{d}f,$

where MTF(f, l) denotes the MTF of the optical system, f is the spatial frequency, l is the imaging distance, and $J_0$ is the zeroth-order Bessel function. The MTF and PSF, which give the system response including the imaging system as well as the effects of the medium, are intuitive descriptions of image formation. As a result, they can help to enhance the performance of image processing, which will be discussed in Section 3.
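The inverse Hankel transform above can be computed by simple quadrature. The sketch below (our own helper names) evaluates $J_0$ from its integral representation and then the PSF from a caller-supplied MTF, truncating the frequency integral at f_max:

```python
import numpy as np

def j0(x):
    """Bessel J0 via its integral representation,
    J0(x) = (1/pi) * integral_0^pi cos(x*sin(t)) dt (trapezoidal rule)."""
    t = np.linspace(0.0, np.pi, 2001)
    vals = np.cos(np.outer(np.atleast_1d(x), np.sin(t)))
    return ((vals[:, 1:] + vals[:, :-1]) / 2.0 @ np.diff(t)) / np.pi

def psf_from_mtf(mtf, f_max, r, n=4000):
    """Inverse zeroth-order Hankel transform:
    PSF(r) = 2*pi * integral_0^inf MTF(f) * J0(2*pi*f*r) * f df,
    truncated at f_max and evaluated by the trapezoidal rule."""
    f = np.linspace(0.0, f_max, n)
    df = np.diff(f)
    out = []
    for ri in np.atleast_1d(r):
        y = mtf(f) * f * j0(2.0 * np.pi * f * ri)
        out.append(2.0 * np.pi * float(np.sum((y[1:] + y[:-1]) * df) / 2.0))
    return np.array(out)
```

A useful sanity check is the self-reciprocal Gaussian pair: the MTF exp(-pi f^2) transforms back to the PSF exp(-pi r^2), so the numerical result should be close to 1 at r = 0 and to exp(-pi) at r = 1.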
Image processing
To improve the image quality of underwater imaging systems, the hardware can be upgraded, for example by increasing the output power of the laser, improving the detecting ability of the sensor and reducing the error rate. However, upgrading hardware brings a substantial increase in cost, and the balance is not easy to control: for example, an excessive increase in laser output power causes a serious backscatter impact, as shown in Ref. [19]. Therefore, improving image quality from the perspective of the image itself becomes necessary.
Image restoration
The relation between the observed blurred image and the original, uncorrupted signal can be described as

$g(x,y) = h(x,y) * f(x,y) + n(x,y),$

where g is the observed image, f is the original signal, h is the PSF of the system, * denotes the convolution operation, and n denotes the noise of the system. Therefore, the original signal can be recovered by inversion or deconvolution given an accurate model of the imaging system and the medium [20]. The PSF of the imaging system calculated in Section 2 can be applied to various kinds of image restoration, such as Wiener or blind restoration.
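A minimal frequency-domain Wiener deconvolution for this degradation model can be sketched as follows. The scalar noise-to-signal ratio is an assumed constant, and the PSF is expected to be centred and the same shape as the image; this is an illustrative sketch, not the filter configuration used in the experiment:

```python
import numpy as np

def wiener_deconvolve(g, psf, nsr=0.01):
    """Classical Wiener filter in the frequency domain:
    F_hat = conj(H) / (|H|^2 + NSR) * G, with H the OTF of the supplied
    (centred) PSF and NSR a scalar noise-to-signal ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(g)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F_hat))
```

With nsr = 0 and a delta PSF the filter reduces to the identity, which makes the function easy to sanity-check; in practice nsr regularises the inversion wherever the OTF is small.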
Image super-resolution reconstruction (SRR)
Like contrast, resolution is another important factor for evaluating images. Image SRR [21] offers the possibility of improving image resolution beyond the hardware limitations and has been widely studied and used in recent years. It refers to reconstructing a high-resolution (HR) image from one or multiple low-resolution (LR) images using the complementary information between image sequences. SRR methods can be divided into frequency-domain and spatial-domain categories, including interpolation, the Papoulis-Gerchberg (PG) method, the iterative back projection (IBP) method, the projections onto convex sets (POCS) method, etc.
To apply the previously calculated PSF to image super-resolution reconstruction, we choose the POCS method for its flexibility in incorporating prior knowledge of the imaging system. The main idea of the POCS method can be described by the iterative equation

$f^{(n+1)} = P_k P_{k-1} \cdots P_1 f^{(n)},$

where k denotes the number of constraint sets, $P_i$ is the projection operator onto the ith set, and $f^{(n+1)}$ and $f^{(n)}$ denote the SR images resulting from the (n+1)th and nth iterations. Each data-consistency projection involves the ith low-resolution image $g_i$, a relaxation operator, and a blurring operator H, which can be taken as the PSF of the imaging system. Therefore, the combination of the POCS method and the previously calculated PSF is promising; research on this combination is introduced in the next section.
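One relaxed data-consistency step of POCS can be sketched as follows. This single-frame, no-decimation version with a simultaneous back-projection is an illustrative simplification of the full multi-frame method, and all names and the normalisation are our own assumptions:

```python
import numpy as np

def pocs_step(f, g, psf, delta=0.0, lam=1.0):
    """One relaxed POCS data-consistency step (single LR frame, no
    decimation): compute the residual between the observed image g and
    the blurred estimate h*f, soft-threshold it at the consistency bound
    delta, and back-project it through the flipped PSF (correlation)
    with relaxation factor lam."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(f) * H))
    r = g - blurred
    r = np.where(np.abs(r) > delta, r - np.sign(r) * delta, 0.0)
    back = np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(H)))
    return f + lam * back / max(np.sum(psf ** 2), 1e-12)
```

Iterating this step (and interleaving projections onto amplitude or support constraint sets) drives the estimate toward the intersection of all constraint sets; the blurring operator H is exactly where the model-derived PSF of Section 2 enters.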
Experimental setup
Our model was applied to an underwater range-gated imaging system for image restoration and reconstruction, and the calculated contrast transmittance was used for image quality evaluation. Figure 3 shows the schematic diagram of the experimental system, which consists of a Q-switched, frequency-doubled Nd:YAG laser operating at 532 nm and an ICCD with a programmable timing generator as the external trigger controller. Both the laser and the ICCD were put into a water tank; the image data collected by the ICCD were transferred to a computer and displayed by software.
The experiment was conducted in a boat pond with measured attenuation coefficient and scattering albedo. The angle of the field of view (FOV) is about 4°, which fits the range limitation of Wells' theory. An image (original size 720 × 576 pixels, region of interest 256 × 256 pixels) of a 2 m × 2 m object at a distance of 35 m is shown in Fig. 5(a). Image restoration was performed with deconvolution filters and the PSF calculated in Section 2, and the contrast values were used for evaluation. The results restored by different deconvolution filters are shown in Figs. 5(b)-5(d), with their contrast values in Table 2. It can be seen that these filters contribute to improving the image quality, but not significantly.
Fig.5 Restored images: (a) original image (size 256 × 256 pixels); (b) restored by Wiener filter; (c) restored by Lagrange filter; (d) restored by Lucy-Richardson filter
The blind deconvolution filter, currently the most popular and widely used filter in image restoration [22,23], can also be applied with our model. An important parameter for blind deconvolution is the number of iterations: an appropriate number achieves better restoration, whereas too few iterations under-restore and too many waste time. The results restored with different iteration numbers by PSF-based blind deconvolution are shown in Figs. 6(a)-6(c).
From the visual point of view in Fig. 6, restoration with 50 iterations performs better than with 20, so more iterations can enhance the performance of blind deconvolution; however, 100 iterations make the restoration result more ambiguous. For underwater imaging or detection, image segmentation and information extraction are even more important, and their effectiveness depends heavily on the image quality as assessed by computer vision. As a result, using objective evaluation criteria such as the contrast value to gauge image quality is more effective than subjective evaluation. The contrast values of the restored images are shown in Table 3.
Tab.3 Contrast values of restored images

image    | 20 iterations | 50 iterations | 100 iterations
contrast | 25.0099       | 29.5063       | 27.6188
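The paper does not spell out how the contrast value is computed, so as a stand-in the sketch below uses the common RMS-contrast definition (standard deviation of the intensities divided by their mean), which can serve the same role as an objective ranking criterion:

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast: std of pixel intensities divided by their mean.
    An assumed stand-in for the paper's unspecified contrast value."""
    img = np.asarray(img, dtype=float)
    return img.std() / img.mean()
```

Like the tabulated contrast values, this metric rewards images in which intensity variation (detail) is large relative to the overall brightness, so it can be used to pick the best filter or iteration count automatically.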
It can be seen that blind deconvolution performs better than the former filters, but ringing artifacts exist in the restored result; these could be reduced using regularization such as edge detection. From the contrast values, we can see that more iterations do not always produce a better effect, which can be explained by the worse noise tolerance of the blind deconvolution algorithm at large iteration numbers.
Thus, image quality can be enhanced by image restoration, from which we can deduce the best filter and the best iteration number.
Ten frames, including the sample image used for image restoration, were extracted from the test video sequences collected by the ICCD for super-resolution reconstruction. Figure 7 shows the reconstruction results of various SRR methods.
From the visual point of view in Fig. 7, the differences between the reconstructed results are not obvious, so we can only use the objective evaluation. The contrast values of the reconstructed images are shown in Table 4.
Tab.4 Contrast values of reconstructed images

image    | bilinear | cubic   | PG      | IBP     | POCS    | PSF-POCS
contrast | 14.7464  | 19.1722 | 29.2447 | 29.2302 | 39.2438 | 51.8804
From the contrast values of the reconstructed images, we can see that the results of the interpolation algorithms are not desirable, while the other methods offer relatively better results. This is because blindly creating non-existent pixels blurs the boundaries between black and white, which degrades images of the black-and-white stripe resolution board. The reconstructed images have ringing artifacts owing to the steep cutoff frequency, which can also be reduced by regularization such as edge detection. Figure 7(f) is the result of the POCS method based on our calculated PSF. As can be clearly seen from Table 4, this method performs better than the other, traditional SRR methods. We can therefore conclude that the PSF-based POCS method substantially enhances the performance of super-resolution reconstruction and achieves the best result among the methods compared. Future work includes introducing and comparing traditional PSF models for the POCS method and other SRR methods.
Conclusions
In this paper, an underwater imaging model based on the formation of underwater images is presented, along with the retrieval of optical properties. The model includes the beam propagation, the responses of the medium and the sensor, and the effects of underwater turbulence. Issues in underwater imaging such as denoising, image enhancement and restoration are addressed and discussed. The model was applied to a range-gated underwater imaging system for image restoration and super-resolution reconstruction, in which various filters and methods were chosen for comparison; the results show that the calculated MTF and PSF can be used to enhance the performance of both restoration and reconstruction. Further work can include more complex methods for image restoration or super-resolution reconstruction. The model can also be applied to other underwater imaging systems with a similar imaging principle.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant No. 61008050).
References
[2] Acharekar M A. Underwater laser imaging system (ULIS). In: Proceedings of SPIE. 1997, 3079: 750
[3] Chang P C Y, Walker J G, Hopcraft K I, Ablitt B, Jakeman E. Polarization discrimination for active imaging in scattering media. Optics Communications, 1999, 159(1-3): 1-6
[5] Masters B R, Barrett H H, Myers K J. Foundations of Image Science. In: Saleh B E A, ed. Wiley Series in Pure and Applied Optics. New York: Wiley-Interscience, 2007, 31(2): 114-115
[6] Mertens L E, Replogle F S Jr. Use of point spread and beam spread functions for analysis of imaging systems in water. Journal of the Optical Society of America, 1977, 67(8): 1105-1117
[7] Hou W L, Gray D J, Weidemann A D, Arnone R A. Comparison and validation of point spread models for imaging in natural waters. Optics Express, 2008, 16(13): 9958-9965
[13] Duntley S Q. Underwater Lighting by Submerged Lasers and Incandescent Sources. San Diego: Scripps Institution of Oceanography, University of California, 1971
[14] Dolin L S, Gilbert G D, Levin I, Luchinin A. Theory of Imaging Through Wavy Sea Surface. Nizhniy Novgorod: Russian Academy of Sciences, Institute of Applied Physics, 2006
[15] Wells W H. Theory of small angle scattering. AGARD Lecture Series, 1973, 61: 3.3.1-3.3.19
[16] Tatarskii V I. Wave Propagation in a Turbulent Medium. Silverman R S, translated. New York: McGraw-Hill, 1961: 285
[17] Bonnier D, Stephane C, Yves L, Bruno L, Marc L, Pierre G. Modelling of active TV system for surveillance operations. In: Proceedings of SPIE - The International Society for Optical Engineering. 1999, 3698: 217-228
[18] Hou W L, Gray D J, Weidemann A D, Fournier G R, Forand J L. Automated underwater image restoration and retrieval of related optical properties. IEEE International Geoscience and Remote Sensing Symposium, 2007: 1889-1892
[19] Han H W, Zhang X H, Ge W L. Performance evaluation of underwater range-gated viewing based on image quality metric. In: Proceedings of the International Conference on ICEME'09. 2009, 4: 441
[20] Gonzalez R C, Woods R E. Digital Image Processing. 2nd ed. Upper Saddle River, NJ: Prentice Hall, 2002
[21] Park S C, Park M K, Kang M G. Super-resolution image reconstruction: a technical overview. IEEE Signal Processing Magazine, 2003, 20(3): 21-36