Key technologies in chaotic optical communications

Junxiang KE, Lilin YI, Tongtong HOU, Weisheng HU

Front. Optoelectron. ›› 2016, Vol. 9 ›› Issue (3) : 508-517.

DOI: 10.1007/s12200-016-0570-y
REVIEW ARTICLE



Abstract

In this paper, the key technologies and research progress of chaotic optical communication are reviewed. We first discuss chaos generation methods based on different nonlinear components. We then focus on the frontiers of chaotic optical communications, including how to improve security and the progress in transmission capacity and distance achieved in the laboratory and in the field. Finally, we discuss the limitations and potential of chaotic optical communications and draw conclusions.

Keywords

chaos / chaotic optical communications / security / capacity / time delay concealment

Cite this article

Junxiang KE, Lilin YI, Tongtong HOU, Weisheng HU. Key technologies in chaotic optical communications. Front. Optoelectron., 2016, 9(3): 508‒517 https://doi.org/10.1007/s12200-016-0570-y

1 Introduction

Underwater imaging has not been a new concept since 1963, when Duntley [1] identified an underwater transmission window of 470-580 nm in which the attenuation of blue-green laser light is far lower than that of other wavelengths. The range and quality of underwater imaging are of vital importance to military and civilian applications. However, factors such as absorption, scattering and turbulence significantly reduce the light arriving at sensors and cause blurring and distortion in the resulting images. Over the past years, efforts have been made to overcome these limitations, such as laser scanning imaging [2], polarized laser imaging [3] and range-gated imaging [4], which can effectively eliminate backscatter. Appropriately increasing the laser intensity, enhancing the detection rates of sensors and reducing the error rate are also effective approaches; however, they greatly increase the system cost.
Therefore, to further improve the range and quality of underwater imaging beyond the hardware limitations, digital image processing is needed. Image denoising, enhancement, restoration and super-resolution reconstruction (SRR) techniques are widely used. The main challenge in digital image processing is the choice of accurate methods. Prior knowledge of the system [5] can effectively enhance the performance of image processing tasks such as restoration. The system response, a typical form of prior knowledge, can be derived either from underwater optical theory or from the image itself by measuring a resolution chart at different distances. Modeling of underwater beam propagation [6-8] based on underwater optical theory can yield the system response, such as the point spread function (PSF). This has been studied since the 1970s, when several research groups [6] suggested that linear system theory can be applied to underwater beam transmission. However, most studies use traditional PSF models that cannot fully suit particular situations such as range-gated imaging.
Based on linear transmission theory, an underwater imaging model suitable for range-gated imaging systems is established in this study, covering laser beam propagation affected by absorption and scattering, the effects of underwater turbulence, and the diffraction limit of sensors. The model-derived modulation transfer function (MTF) and PSF serve as the theoretical basis for underwater image processing.

2 Underwater imaging model

The purpose of the underwater imaging model is to predict the image intensity at each pixel as a function of illumination, the reflectance properties of objects, the medium, and sensor characteristics. The main difference between image formation in air and in water is that, underwater, the interaction between light and the medium must be considered.

2.1 Underwater laser beam transmission

Generally speaking, a typical underwater imaging system consists of three parts: the light source (laser), the image acquisition system (CCD or intensified CCD (ICCD)), and the object. The light on the image plane of the sensor is formed by three components: nonscattered or direct light reflected by the object, forward-scattered light from the object, and back-scattered light from the medium (water). The total irradiance on the image plane is therefore the linear sum of the direct, forward-scattered, and back-scattered components:
E_receive(total) = E_d(direct) + E_fs(forward-scattered) + E_bs(back-scattered).
Fig.1 Light paths of three components


Figure 1 shows the light paths of the three components. Due to the attenuation of water, the direct light received by the image sensor is the exponential decay, with distance, of the light reflected by the object:
E_d = E_object · exp(−kl),
where l is the imaging distance between the transmitting plane and the target plane, and k is the volume attenuation coefficient. Typical values of k in clear ocean, coastal and turbid waters are 0.05, 0.20 and 0.33 m⁻¹, respectively [9], and E_object is the scalar irradiance upon the reflectance plane of the object. According to the theory of light intensity and luminous flux, and regarding the underwater target as a Lambertian emitter, the direct light (lux) arriving at the receiving plane can be calculated as
E_d(x,y,l) = \frac{n^2 P_0 T_1 T_2 \rho D^2 \cos^3\varphi_1 \cos^4\varphi_2}{16\pi l^2 f^2 \sin^2(\alpha/2)} \exp[-kl(\sec\varphi_1 + \sec\varphi_2)],
where
\varphi_1 = \arccos\frac{l}{\sqrt{(x+d_0)^2 + y^2 + l^2}},
\varphi_2 = \arccos\frac{l}{\sqrt{x^2 + y^2 + l^2}}.
Here (x,y) denotes the coordinates on the image plane, l denotes the imaging distance, P_0 is the peak power of the laser, τ_0 is the pulse width, and α is the divergence angle (half angle) of the laser beam after expansion, so the energy of a single pulse is E_0 = P_0·τ_0; T_1 and T_2 are the optical transmittances of the transmitting and receiving optical systems, respectively; D/f is the relative aperture of the receiving optical system, where D denotes the receiving diameter and f the focal length; d_0 is the distance between the transmitting and receiving optical systems; ρ is the average reflectivity of the target; k and n denote the attenuation coefficient and refractive index of water, respectively.
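For illustration, the direct-light expression can be evaluated on an image-plane grid. This is a sketch only: parameter defaults are taken from Table 1 where available, D and f are chosen merely so that D/f = 0.25, and the function name is ours.

```python
import numpy as np

# Sketch of the direct-light component E_d(x, y, l), in SI units.
# Defaults follow Table 1 (P0 = 1e7 W, alpha = 36 mrad, T1 = T2 = 0.3,
# rho = 0.15, d0 = 0.2 m, k = 0.25 1/m, n = 1.35); D = 0.025 m and
# f = 0.1 m are illustrative values giving D/f = 0.25.
def direct_light(x, y, l, P0=1e7, T1=0.3, T2=0.3, rho=0.15,
                 D=0.025, f=0.1, alpha=0.036, d0=0.2, k=0.25, n=1.35):
    # The two obliquity angles defined after the equation in the text
    phi1 = np.arccos(l / np.sqrt((x + d0)**2 + y**2 + l**2))
    phi2 = np.arccos(l / np.sqrt(x**2 + y**2 + l**2))
    geom = (n**2 * P0 * T1 * T2 * rho * D**2
            * np.cos(phi1)**3 * np.cos(phi2)**4) / (
           16 * np.pi * l**2 * f**2 * np.sin(alpha / 2)**2)
    # Two-way attenuation along the slant paths
    return geom * np.exp(-k * l * (1/np.cos(phi1) + 1/np.cos(phi2)))

# Evaluate on a small image-plane grid at a 35 m range
xx, yy = np.meshgrid(np.linspace(-0.01, 0.01, 64), np.linspace(-0.01, 0.01, 64))
Ed = direct_light(xx, yy, 35.0)
```

As expected from the exp(−kl·…)/l² dependence, the on-axis value decreases with range.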
Since the forward-scattered light is scattered by the water after being reflected by the object, according to Fourier optics the forward-scattered component can be calculated by convolving the direct light with the PSF of water. An empirical expression for the PSF of water has the form [10,11]:
S(x,y) = (e^{-Gl} - e^{-kl})\,\mathcal{F}^{-1}(e^{-Bl f_\theta}) + e^{-kl}\,\delta(x,y),
where k denotes the attenuation coefficient of water, G is an empirical constant related to k by |G| ≪ k, B is an empirical damping factor of scattering, l is the distance of light propagation in the water, f_θ denotes the angular frequency, and \mathcal{F}^{-1} stands for the inverse Fourier transform operator. This empirical PSF is the computational basis of our model. As a result, the forward-scattered light arriving at the sensor can be calculated by
E_{fs}(x,y,l) = \left[\frac{e^{-Gl} - e^{-kl}}{e^{-kl}}\, E_d(x,y,l)\right] * g(x,y,l),
where
g(x,y,l) = \frac{\mathcal{F}^{-1}(e^{-Bl f_\theta})}{\iint \mathcal{F}^{-1}(e^{-Bl f_\theta})\,dx\,dy}
and
f_\theta = l\sqrt{f_x^2 + f_y^2},
with f_x and f_y the spatial frequencies in the x and y directions; * denotes the convolution operation.
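The forward-scatter filtering above can be sketched in the Fourier domain. The values of G, B and the pixel pitch below are illustrative assumptions, not measured values; `forward_scatter` is our own name.

```python
import numpy as np

# Sketch: forward-scattered component obtained by filtering the direct
# component with the empirical water kernel exp(-B*l*f_theta) and scaling
# by (e^{-Gl} - e^{-kl}) / e^{-kl}. G, B, k and pitch are illustrative.
def forward_scatter(Ed, l, k=0.25, G=0.02, B=0.1, pitch=1e-4):
    ny, nx = Ed.shape
    fx = np.fft.fftfreq(nx, d=pitch)        # spatial frequencies, cycles/m
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    f_theta = l * np.sqrt(FX**2 + FY**2)    # angular frequency f_theta
    # exp(-B*l*f_theta) already has unit DC gain, matching the normalized g
    H = np.exp(-B * l * f_theta)
    scale = (np.exp(-G * l) - np.exp(-k * l)) / np.exp(-k * l)
    return scale * np.real(np.fft.ifft2(np.fft.fft2(Ed) * H))
```

Applied to a uniform field, only the DC term survives, so the output is simply the input times the scale factor.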
The back-scattered light is ignored in many studies because it is reduced by separating the sensor from the light source [12] and by range-gating. However, it still exists within the gated time and cannot be fully eliminated. To make the model more accurate, we still consider the effect of back-scattered light. For its calculation, we divide the water into several lamellas, so that the distance of the ith lamella from the image plane of the sensor is
\Delta R_i = \frac{i\, l_R}{n}, \quad i = 1,2,\ldots,n,
where n is the number of lamellas, and l_R is the imaging distance along the receiving axis, unlike l, which is measured along the transmitting axis. As in the calculation of the forward-scattered light, according to Fourier optics the back-scattered component can also be derived by convolving the direct backscattered light with the PSF of water. Thus, the back-scattered light of the ith layer of water arriving at the image plane has the form:
E_{bsi}(x,y,l,i) = E_{bsid}(x,y,l,i) + \left[\frac{e^{-Gl} - e^{-kl}}{e^{-kl}}\, E_{bsid}(x,y,l,i)\right] * g(x,y,l),
which is composed of the ith direct component E_{bsid}(x,y,l,i) and the scattered component, where
E_{bsid}(x,y,l,i) = \frac{\pi e^{-kR_i}\, T_1\, E_{si}(x,y,l,i)\, \beta(\theta)\, \Delta R_i\, \cos^3\alpha}{4 f_\theta^2} \left[\frac{l - \Delta R_i (i-0.5)}{l}\right]^2
and
E_{si}(x,y,l,i) = \frac{e^{-GL} - e^{-kL}}{e^{-kL}} \left\{\frac{\cos\alpha\, P_0}{4\pi \sin(\alpha/2)\, R_i^2}\right\} * g(x,y,l)
denotes the light intensity of the ith water lamella; the parameters G, k, T_1, α and L are defined as in the calculation of the direct light. β(θ) denotes the volume scattering function, for which several forms have been developed by different researchers, such as Duntley [13], Dolin et al. [14] and Wells [15]. In the model developed in this paper, the divergence angle (half angle) of the laser beam and the relative aperture of the receiving optics are below 10°, so it fits the range limitation of Wells' theory (0° < θ < 10°), in which β(θ) can be expressed as [8]
\beta(\theta) = \frac{k\omega\theta_0}{2\pi(\theta_0^2 + \theta^2)^{3/2}},
where k and ω are the total attenuation coefficient and the scattering albedo, respectively, and θ_0 is related to the mean scattering angle. As a result, the back-scattered light arriving at the image plane can be calculated as
E_{bs}(x,y,l) = \sum_{i=1}^{n}\left\{E_{bsid}(x,y,l,i) + \left[\frac{e^{-Gl} - e^{-kl}}{e^{-kl}}\, E_{bsid}(x,y,l,i)\right] * g(x,y,l)\right\},
which denotes the summation of the back-scattered intensity from all the water lamellas.
For range-gated imaging, the back-scattered component comes only from the water around the target, provided the gating delay is set properly. Setting ΔR_i = l_R, we have
l + l_R = \frac{ct}{n},
l_R = l\cos\theta_s,
dl = \frac{c\,dt}{n(1 + \cos\theta_s)},
where θ_s is the angle between the receiving and transmitting axes, c is the speed of light in vacuum, and t is the gate time. As a result, the back-scattered component can be calculated by integrating over the gating time.
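The gating relations above map gate delay to target range. A small sketch, with the vacuum speed of light and the refractive index from Table 1 as assumed constants, and with function names that are ours:

```python
import math

# Sketch: converting a gate delay t into target range l for the bistatic
# geometry in the text (l along the transmit axis, l_R = l*cos(theta_s)
# along the receive axis, l + l_R = c*t/n).
C_VACUUM = 3e8      # speed of light in vacuum, m/s (assumed)
N_WATER = 1.35      # refractive index of water (Table 1)

def gate_delay_to_range(t, theta_s=0.0, c=C_VACUUM, n=N_WATER):
    # From l + l_R = c*t/n and l_R = l*cos(theta_s)
    return c * t / (n * (1.0 + math.cos(theta_s)))

def gate_window(t_open, t_close, theta_s=0.0):
    # Range slice imaged by a gate open over [t_open, t_close]
    return gate_delay_to_range(t_open, theta_s), gate_delay_to_range(t_close, theta_s)
```

For a monostatic-like geometry (θ_s = 0) this reduces to the familiar round-trip relation l = ct/(2n).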

2.2 Calculation of contrast transmittance and modulation transfer function (MTF)

Of the three light components received by the sensor, the direct and forward-scattered components carry information about the target, while the back-scattered component is background light accompanied by various noise sources and carries no useful information. Therefore, the contrast transmittance of the image and the MTF of the water can be defined from the light components at an image location (x, y):
C_{image} = \frac{E_d(x,y) + E_{fs}(x,y)}{E_d(x,y) + E_{bs}(x,y) + E_{fs}(x,y)},
MTF_{medium} = \frac{\pi}{4}\,\mathcal{F}(C_{image}),
where C_image denotes the contrast transmittance of the image, and MTF_medium denotes the MTF of the medium (water). To simplify the calculation of C_image, constant parameters determined by the environment, such as the average reflectivity of the target and the attenuation coefficient and refractive index of water, are absorbed into constants K_1, K_2, etc. Then the expression for E_d(x,y,l) can be simplified as
E_d(x,y,l) = \frac{K_1 P_0 (D/f)^2}{l^2 \sin^2(\alpha/2)} \exp(K_2 l)\, f_1(x,y),
and Ebs as
E_{bs}(x,y,l) = \frac{K_3 \pi \exp(K_4 l) \cos^4\alpha\, P_0\, (l - K_6)}{l^2 \sin(\alpha/2)} \exp(K_5 l)\, f_2(x,y),
where f_1(x,y) and f_2(x,y) are functions of position. Then C_image can be derived as
C_{image} = \frac{K_7 P_0 (D/f)^2}{K_7 P_0 (D/f)^2 + K_8 \exp(K_9 l) \cos^4\alpha \sin(\alpha/2) (l - K_{10})^2 \dfrac{f_1(x,y)}{f_2(x,y)}},
where the parameters are defined as in the calculation of the light components.
We can see that increasing the peak power of the laser pulse (P_0) and the relative aperture of the receiving optics (D/f) enhances C_image, while increasing the distance between the sensor and the target (l) has the opposite effect. As a result, based on the calculation of contrast transmittance, hardware parameters can be adjusted for the purpose of image enhancement; for instance, increasing the laser power can improve the image quality. The calculated contrast transmittance can also be used for underwater image evaluation: a higher contrast transmittance means that the image contains more information and is of better quality.
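A toy numerical check of this monotonic behavior. All K constants, the angle α, and the spatial factor below are arbitrary illustrative values (not the paper's), chosen only to show that C_image rises with P_0 and D/f and falls with l.

```python
import math

# Sketch of the simplified contrast-transmittance expression:
# C = signal / (signal + background), with signal ~ K7*P0*(D/f)^2 and
# background growing with range l. K7..K10, alpha, spatial are assumptions.
def contrast(P0, D_over_f, l, K7=1e-9, K8=1e-4, K9=0.25, K10=0.2,
             alpha=0.036, spatial=1.0):
    signal = K7 * P0 * D_over_f**2
    background = (K8 * math.exp(K9 * l) * math.cos(alpha)**4
                  * math.sin(alpha / 2) * (l - K10)**2 * spatial)
    return signal / (signal + background)
```

With these placeholders, doubling P_0 raises the contrast and adding 5 m of range lowers it, as the text argues.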
Absorption and scattering are not the only factors hindering underwater imaging; underwater turbulence can also severely limit underwater visibility. Hou et al. [7] gave the optical transfer function (OTF) of underwater optical turbulence using the Kolmogorov turbulence model [16] in the form:
MTF_{turbulence}(f,r) = \exp\left[-\frac{17}{36} K_3 \lambda^{1/3} f^{5/3} r\right] = \exp\left[-3.44\left(\frac{\lambda f}{R_0}\right)^{5/3} r\right],
where f denotes the angular spatial frequency, r is the transmission range, λ = 530 nm is the mean wavelength for underwater transmission, R_0 denotes the seeing parameter, K_3 = B_1 χ ε^{−1/3} is the optical turbulence strength, B_1 is a constant, χ is the dissipation rate of temperature or salinity variance, and ε is the kinetic energy dissipation rate. In strongly turbulent environments, typical values of these parameters are ε = 10⁻⁴, χ = 10⁻¹⁰·⁵ and R_0 = 0.003. For non-turbulent environments, the MTF of underwater optical turbulence can be approximated as MTF_turbulence = 1.
The diffraction limit of the optical system of the sensor is also an important factor in the MTF. When only one main lens exists in the optical system, the diffraction-limited factor can be defined as [17]
MTF_{diffraction} = \frac{2}{\pi}\left[\arccos\frac{f}{f_{co}} - \frac{f}{f_{co}}\sqrt{1 - \left(\frac{f}{f_{co}}\right)^2}\,\right], \quad 0 < f < f_{co},
where f denotes the spatial frequency, and f_co is the optical cutoff frequency at the image plane, which has the form:
f_{co} = \frac{D}{\lambda f_l},
where f_l denotes the focal length, D represents the diameter of the lens, and λ is the operating wavelength. The CCD sensor is by far the most common detector in range-gated imaging; the MTF of the CCD sensor depends on the size of its pixels and can be defined as
MTF_{ccd} = \frac{\sin(\pi d_{pixel} f)}{\pi d_{pixel} f},
where dpixel denotes the size of pixels, and f represents the spatial frequency.
Therefore, the MTF of the whole system can be obtained by multiplying the MTFs of the factors described above:
MTFimage=MTFmedium×MTFturbulence×MTFdiffraction×MTFccd.
The curve of MTFimage as a function of spatial frequency is shown in Fig. 2.
Fig.2 Comparison of relative MTFs of different factors: (a) MTF contribution from medium; (b) MTF contribution from turbulence; (c) MTF contribution from diffraction; (d) MTF contribution from CCD sensor; (e) MTF of the whole system

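The per-factor MTFs can be combined numerically as in the product above. In this sketch the common frequency axis and parameter values are illustrative, the medium term is passed in as a precomputed number, and the turbulence term uses the strong-turbulence constants quoted in the text.

```python
import numpy as np

# Sketch of multiplying the individual MTF factors into the system MTF.
# A single frequency axis f is used for all factors, a simplification.
def mtf_diffraction(f, f_co):
    u = np.clip(np.asarray(f, dtype=float) / f_co, 0.0, 1.0)
    return np.where(u < 1.0, (2/np.pi)*(np.arccos(u) - u*np.sqrt(1 - u**2)), 0.0)

def mtf_ccd(f, d_pixel):
    # np.sinc(x) = sin(pi*x)/(pi*x), matching sin(pi*d*f)/(pi*d*f)
    return np.abs(np.sinc(d_pixel * np.asarray(f, dtype=float)))

def mtf_turbulence(f, r, lam=530e-9, R0=0.003):
    return np.exp(-3.44 * (lam * np.asarray(f, dtype=float) / R0)**(5/3) * r)

def mtf_system(f, f_co, d_pixel, r, mtf_medium=1.0):
    return (mtf_medium * mtf_turbulence(f, r)
            * mtf_diffraction(f, f_co) * mtf_ccd(f, d_pixel))
```

At zero frequency every factor equals 1, and the product decreases monotonically with frequency, reproducing the qualitative shape of Fig. 2(e).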

The optical devices of the imaging system, such as lenses and apertures, usually have circular symmetry; therefore, the PSF and MTF of the system can be obtained from each other using the Hankel transform [18], which has the following form:
h(\theta,l) = 2\pi\int_0^\infty J_0(2\pi\theta\varphi)\, H(\varphi,l)\, \varphi\, d\varphi,
H(\varphi,l) = 2\pi\int_0^\infty J_0(2\pi\theta\varphi)\, h(\theta,l)\, \theta\, d\theta,
where H(φ,l) denotes the MTF of the optical system, h(θ,l) the PSF, φ is the spatial frequency, and l is the imaging distance. The MTF and PSF, which give the system response including the imaging system as well as the effects of the medium, are intuitive descriptions of the image formation. As a result, they can help to enhance the performance of image processing, which will be discussed in Section 3.
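A numerical sketch of the MTF-to-PSF direction of this Hankel pair. The Gaussian used below is an assumed test case (exp(−πφ²) is its own zero-order Hankel transform), and `bessel_j0` is our own quadrature helper, not a library call.

```python
import numpy as np

# J0 via its integral representation, evaluated with the trapezoid rule
def bessel_j0(x):
    tau = np.linspace(0.0, np.pi, 2001)
    return np.trapz(np.cos(np.outer(np.atleast_1d(x), np.sin(tau))),
                    tau, axis=-1) / np.pi

# h(theta) = 2*pi * \int_0^inf J0(2*pi*theta*phi) H(phi) phi dphi,
# truncated to the sampled phi range (H must decay within it).
def mtf_to_psf(H, phi, theta):
    return np.array([2*np.pi*np.trapz(bessel_j0(2*np.pi*t*phi) * H * phi, phi)
                     for t in theta])
```

Because the forward/inverse kernels are identical, the same routine (with θ and φ roles swapped) also maps PSF to MTF.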

3 Image processing

To improve the image quality of underwater imaging systems, the hardware can be upgraded, for example by increasing the output power of the laser, improving the detecting ability of the sensor, or reducing the error rate. However, hardware upgrades bring substantial cost increases, and the balance is not easy to control; for example, an excessive increase in laser output power causes a serious backscatter impact, as shown in Ref. [19]. Therefore, improving image quality from the perspective of the image itself becomes necessary.

3.1 Image restoration

The relation between the observed blurred image g(x,y) and the original, uncorrupted signal f(x,y) can be described as
g(x,y) = f(x,y) * h(x,y) + n(x,y),
where h(x,y) is the PSF of the system, * denotes the convolution operation, and n(x,y) denotes the system noise. Therefore, the original signal can be recovered by inversion or deconvolution given an accurate model of the imaging system and medium [20]. The PSF of the imaging system calculated in Section 2 can be applied to various kinds of image restoration, such as Wiener or blind restoration.
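As a sketch of frequency-domain Wiener restoration with a known PSF: the constant noise-to-signal ratio `nsr` and the origin-centered, full-size PSF layout are our assumptions, not the paper's exact procedure.

```python
import numpy as np

# Wiener restoration of g = f * h + n with a known PSF h.
# The PSF array is assumed to be the same shape as g, with its peak at
# index [0, 0] (origin-centered for the FFT); nsr is a tuning constant.
def wiener_deconvolve(g, psf, nsr=1e-2):
    H = np.fft.fft2(psf)
    W = np.conj(H) / (np.abs(H)**2 + nsr)   # Wiener filter in frequency domain
    return np.real(np.fft.ifft2(np.fft.fft2(g) * W))
```

With a delta-function PSF the filter reduces to a flat gain of 1/(1 + nsr), which makes the behavior easy to verify.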

3.2 Image super-resolution reconstruction (SRR)

Like contrast, resolution is another important factor for evaluating images. Image SRR [21] offers the possibility of improving image resolution beyond the hardware limitations and has been widely studied and used in recent years. It refers to reconstructing a high-resolution (HR) image from one or multiple low-resolution (LR) images using the complementary information between image sequences. SRR methods can be divided into frequency-domain and spatial-domain categories, including interpolation, the Papoulis-Gerchberg (PG) method, the iterative back projection (IBP) method, and the projections onto convex sets (POCS) method.
To apply the calculated PSF to image super-resolution reconstruction, we choose the POCS method for its flexibility in incorporating prior knowledge of the imaging system. The main idea of the POCS method can be described by the iterative equations:
f_{n+1} = [P_1 P_2 \cdots P_k] f_n,
f_{n+1} = P\left[f_n + \sum_{i=1}^{k} \lambda_i P_i (g_i - H f_n)\right],
where k denotes the number of constraint sets, P is the projection operator, f_{n+1} and f_n denote the SR images resulting from the (n+1)th and nth iterations, g_i represents the ith low-resolution image, λ represents the relaxation operator, and H denotes the blurring operator, which is equivalent to the PSF of the imaging system. Therefore, the combination of the POCS method with the calculated PSF is promising; the corresponding research is introduced in the next section.
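A minimal POCS-style sketch of this iteration, assuming the simplest possible blur-plus-decimation operator H (block averaging) and a nonnegativity constraint set. It illustrates the projection idea only; it is not the paper's exact operator or PSF.

```python
import numpy as np

# Minimal POCS-style super-resolution sketch. Each observed LR pixel
# defines a constraint set; the projection corrects the HR estimate f so
# that the assumed model H (scale x scale block averaging) reproduces g_i.
def pocs_sr(lr_images, scale, n_iter=20, lam=1.0):
    h, w = lr_images[0].shape
    f = np.zeros((h * scale, w * scale))        # initial HR estimate
    for _ in range(n_iter):
        for g in lr_images:
            # simulate LR observation: mean over each scale x scale block
            sim = f.reshape(h, scale, w, scale).mean(axis=(1, 3))
            err = g - sim
            # projection: spread each residual uniformly over its block
            f += lam * np.kron(err, np.ones((scale, scale)))
        np.clip(f, 0.0, None, out=f)            # amplitude constraint set
    return f
```

With this H, one pass through an image makes the simulated LR observation match it exactly, so the data-consistency sets are reached quickly; a realistic PSF-based H converges more gradually.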

4 Experimental setup

Our model is applied to an underwater range-gated imaging system for image restoration and reconstruction, and the calculated contrast transmittance is used for image quality evaluation. Figure 3 shows the schematic diagram of the experimental system, which consists of a Q-switched, frequency-doubled Nd:YAG laser operating at 532 nm and an ICCD with a programmable timing generator as the external trigger controller. Both the laser and the ICCD are placed in a water tank; the image data collected by the ICCD are transferred to a computer and displayed by software.
Fig.3 Framework of range-gated imaging system


The parameters in our model should be set according to the experimental facilities; a typical set of parameters is shown in Table 1.
Tab.1 Main physical properties of underwater range-gated imaging system
laser power (P0): 10⁷ W
divergence angle (α): 36 mrad
optical transmittance (T1, T2): 30%
relative aperture (D/f): 0.25
image distance (L): 40 m
laser & CCD distance (d0): 20 cm
average reflectivity (ρ): 15%
attenuation coefficient (k): 0.25 m⁻¹
refractive index (n): 1.35
The intensity distributions of the three light components are shown in Fig. 4.
Fig.4 Intensity of the light components: (a) non-scattered light; (b) forward-scattered light; (c) back-scattered light; (d) total light


The experiment was conducted in a boat pond measuring 150 m × 6 m × 4 m, with an attenuation coefficient of k = 0.25 m⁻¹ and a scattering albedo of ω = 0.85. The field of view (FOV) is about 4°, which fits the range limitation of Wells' theory (0° < θ < 10°). An obtained image (original size 720 × 576 pixels, region of interest 256 × 256 pixels) of a 2 m × 2 m object at a distance of 35 m is shown in Fig. 5(a). Image restoration was performed with deconvolution filters and the PSF calculated in Section 2, and the contrast values are used for evaluation. The results restored by different deconvolution filters are shown in Figs. 5(b)-5(d), with their contrast values in Table 2. It can be seen that these filters contribute to improving the image quality, but not significantly.
Fig.5 Restored images: (a) original image (size 256 × 256 pixels); (b) restored by Wiener filter; (c) restored by Lagrange filter; (d) restored by Lucy-Richardson filter


Tab.2 Contrast values of restored images
image      original   Wiener     Lagrange   Lucy-Richardson
contrast   24.9421    24.7385    24.5032    24.9456
The blind deconvolution filter, currently the most popular and widely used filter in image restoration [22,23], can also be applied with our model. An important parameter for blind deconvolution is the number of iterations: an appropriate number achieves better restoration, whereas too few iterations under-restore the image and too many waste time. The results restored with different iteration numbers by PSF-based blind deconvolution are shown in Figs. 6(a)-6(c).
Fig.6 Restored images (size 256 × 256 pixels) with blind deconvolution by (a) 20 iterations; (b) 50 iterations; (c) 100 iterations


From the visual point of view (Fig. 6), restoration with 50 iterations performs better than with 20, so more iterations can enhance the performance of blind deconvolution; however, 100 iterations make the restoration result more ambiguous. For underwater imaging or detection, image segmentation and information extraction are even more important, and their effectiveness depends heavily on the image quality as seen by computer vision. As a result, using objective evaluation criteria such as the contrast value to gauge image quality is more effective than subjective evaluation. The contrast values of the restored images are shown in Table 3.
Tab.3 Contrast values of restored images
image      20 iterations   50 iterations   100 iterations
contrast   25.0099         29.5063         27.6188
It can be seen that blind deconvolution performs better than the previous filters, but ringing artifacts exist in the restored result; these could be reduced using regularization such as edge detection. From the contrast values, we can see that more iterations do not always yield a better result, which can be explained by the worse noise tolerance of the blind deconvolution algorithm at large iteration numbers.
Thus, image quality can be enhanced by image restoration, from which the best filter and the best iteration number can be deduced.
Ten frames, including the sample image used for image restoration, were extracted from the test video sequences collected by the ICCD for super-resolution reconstruction. Figure 7 shows the reconstruction results of various SRR methods.
Fig.7 Reconstructed images (size 512 × 512 pixels) by (a) bilinear interpolation; (b) cubic convolution interpolation; (c) PG method; (d) Iterative back projection method; (e) POCS method; (f) PSF-based POCS method


From the visual point of view (Fig. 7), the differences between the reconstructed results are not obvious, so we rely on objective evaluation. The contrast values of the reconstructed images are shown in Table 4.
Tab.4 Contrast values of reconstructed images
image      bilinear   cubic     PG        IBP       POCS      PSF-POCS
contrast   14.7464    19.1722   29.2447   29.2302   39.2438   51.8804
From the contrast values of the reconstructed images, we can see that the results of the interpolation algorithms are not desirable, while the other methods offer relatively better results. This is because blindly creating non-existent pixels blurs the black-white boundaries, which degrades images of the black-and-white stripe resolution board. The reconstructed images have ringing artifacts owing to the steep cut-off frequency, which can also be reduced by regularization such as edge detection. Figure 7(f) is the result of the POCS method based on our calculated PSF. As can clearly be seen from Table 4, this method performs better than the other traditional SRR methods. We therefore conclude that the PSF-based POCS method can substantially enhance the performance of super-resolution reconstruction, achieving the best result among the methods tested. Future work could include introducing and comparing traditional PSF models for the POCS method or other SRR methods.

5 Conclusions

In this paper, an underwater imaging model based on the formation of underwater images is presented, along with the retrieval of optical properties. The model includes the beam propagation, the responses of the medium and the sensor, and the effects of underwater turbulence. Issues in underwater imaging such as denoising, image enhancement and restoration are addressed and discussed. The model was applied to a range-gated underwater imaging system for image restoration and super-resolution reconstruction, in which various filters and methods were compared; the results show that the calculated MTF and PSF can be used to enhance the performance of both restoration and reconstruction. Further work can address more complex methods for image restoration or super-resolution reconstruction. The model can also be applied to other underwater imaging systems with a similar imaging principle.

References

[1] Maiman T H. Optical and microwave-optical experiments in ruby. Physical Review Letters, 1960, 4(11): 564–566
[2] Lorenz E N. Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 1963, 20(2): 130–141
[3] Haken H. Analogy between higher instabilities in fluids and lasers. Physics Letters A, 1975, 53(1): 77–78
[4] Pecora L M, Carroll T L. Synchronization in chaotic systems. Physical Review Letters, 1990, 64(8): 821–824
[5] Argyris A, Syvridis D, Larger L, Annovazzi-Lodi V, Colet P, Fischer I, García-Ojalvo J, Mirasso C R, Pesquera L, Shore K A. Chaos-based communications at high bit rates using commercial fibre-optic links. Nature, 2005, 438(7066): 343–346
[6] Lavrov R, Jacquot M, Larger L. Nonlocal nonlinear electro-optic phase dynamics demonstrating 10 Gb/s chaos communications. IEEE Journal of Quantum Electronics, 2010, 46(10): 1430–1435
[7] Masoller C. Anticipation in the synchronization of chaotic semiconductor lasers with optical feedback. Physical Review Letters, 2001, 86(13): 2782–2785
[8] Wu Y, Wang Y, Li P, Wang A, Zhang M. Can fixed time delay signature be concealed in chaotic semiconductor laser with optical feedback? IEEE Journal of Quantum Electronics, 2012, 48(11): 1371–1379
[9] Rontani D, Locquet A, Sciamanna M, Citrin D S, Ortin S. Time-delay identification in a chaotic semiconductor laser with optical feedback: a dynamical point of view. IEEE Journal of Quantum Electronics, 2009, 45(7): 879–891
[10] Rontani D, Locquet A, Sciamanna M, Citrin D S. Loss of time-delay signature in the chaotic output of a semiconductor laser with optical feedback. Optics Letters, 2007, 32(20): 2960–2962
[11] Uchida A. Optical Communication with Chaotic Lasers. Hoboken: Wiley, 2012
[12] Goedgebuer J P, Levy P, Larger L, Chen C C, Rhodes W T. Optical communication with synchronized hyperchaos generated electrooptically. IEEE Journal of Quantum Electronics, 2002, 38(9): 1178–1183
[13] Nguimdo R M. Chaos and Synchronization in opto-electronic devices with delayed feedback. Dissertation for the Doctoral Degree. Illes Balears: Universitat de les Illes Balears, 2011
[14] Nourine M, Chembo Y K, Larger L. Wideband chaos generation using a delayed oscillator and a two-dimensional nonlinearity induced by a quadrature phase-shift-keying electro-optic modulator. Optics Letters, 2011, 36(15): 2833–2835
[15] Lavrov R, Peil M, Jacquot M, Larger L, Udaltsov V, Dudley J. Electro-optic delay oscillator with nonlocal nonlinearity: optical phase dynamics, chaos, and synchronization. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 2009, 80(2): 026207
[16] Ortín S, Gutiérrez J M, Pesquera L, Vasquez H. Nonlinear dynamics extraction for time-delay systems using modular neural networks synchronization and prediction. Physica A: Statistical Mechanics and Its Applications, 2005, 351(1): 133–141
[17] Nguimdo R M, Soriano M C, Colet P. Role of the phase in the identification of delay time in semiconductor lasers with optical feedback. Optics Letters, 2011, 36(22): 4332–4334
[18] Nguimdo R M, Verschaffelt G, Danckaert J, Van der Sande G. Loss of time-delay signature in chaotic semiconductor ring lasers. Optics Letters, 2012, 37(13): 2541–2543
[19] Hou T, Yi L, Ke J. Time delay signature concealment in chaotic systems for enhanced security. Submitted to Photonics Research, 2016
[20] Hizanidis J, Deligiannidis S, Bogris A, Syvridis D. Enhancement of chaos encryption potential by combining all-optical and electrooptical chaos generators. IEEE Journal of Quantum Electronics, 2010, 46(11): 1642–1649
[21] Nguimdo R M, Colet P, Larger L, Pesquera L. Digital key for chaos communication performing time delay concealment. Physical Review Letters, 2011, 107(3): 034103
[22] Nguimdo R M, Colet P. Electro-optic phase chaos systems with an internal variable and a digital key. Optics Express, 2012, 20(23): 25333–25344
[23] Aromataris G, Annovazzi-Lodi V. Enhancing privacy of chaotic communications by double masking. IEEE Journal of Quantum Electronics, 2013, 49(11): 955–959
[24] Ursini L, Santagiustina M, Annovazzi-Lodi V. Enhancing chaotic communication performances by Manchester coding. IEEE Photonics Technology Letters, 2008, 20(6): 401–403
[25] Van Wiggeren G D, Roy R. Communication with chaotic lasers. Science, 1998, 279(5354): 1198–1200
[26] Anishchenko V S, Vadivasova T E, Postnov D E, Safonova M A. Synchronization of chaos. International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, 1992, 2(3): 633–644
[27] Colet P, Roy R. Digital communication with synchronized chaotic lasers. Optics Letters, 1994, 19(24): 2056–2058
[28] Larger L, Goedgebuer J, Udaltsov V. Ikeda-based nonlinear delayed dynamics for application to secure optical transmission systems using chaos. Comptes Rendus Physique, 2004, 5(6): 669–681
[29] Annovazzi-Lodi V, Donati S, Scire A. Synchronization of chaotic lasers by optical feedback for cryptographic applications. IEEE Journal of Quantum Electronics, 1997, 33(9): 1449–1454
[30] Goedgebuer J P, Larger L, Porte H. Optical cryptosystem based on synchronization of hyperchaos generated by a delayed feedback tunable laser diode. Physical Review Letters, 1998, 80(10): 2249–2252
[31] Mirasso C R, Colet P, Garcia-Fernandez P. Synchronization of chaotic semiconductor lasers: application to encoded communications. IEEE Photonics Technology Letters, 1996, 8(2): 299–301
[32] Uchida A, Sato T, Kannari F. Suppression of chaotic oscillations in a microchip laser by injection of a new orbit into the chaotic attractor. Optics Letters, 1998, 23(6): 460–462
[33] Fischer I, Yun L, Davis P. Synchronization of chaotic semiconductor laser dynamics on subnanosecond time scales and its potential for chaos communication. Physical Review A (Atomic, Molecular, and Optical Physics), 2000, 62(1): 011801/1–4
[34] Sivaprakasam S, Shore K A. Message encoding and decoding using chaotic external-cavity diode lasers. IEEE Journal of Quantum Electronics, 2000, 36(1): 35–39
[35] Tang S, Liu J M. Message encoding-decoding at 2.5 Gbits/s through synchronization of chaotic pulsing semiconductor lasers. Optics Letters, 2001, 26(23): 1843–1845
[36] Abarbanel H, Kennel M B, Illing L, Tang S, Chen H F, Liu J M. Synchronization and communication using semiconductor lasers with optoelectronic feedback. IEEE Journal of Quantum Electronics, 2001, 37(10): 1301–1311
[37] Kusumoto K, Ohtsubo J. 1.5-GHz message transmission based on synchronization of chaos in semiconductor lasers. Optics Letters, 2002, 27(12): 989–991
[38] Argyris A, Hamacher M, Chlouverakis K E, Bogris A, Syvridis D. Photonic integrated device for chaos applications in communications. Physical Review Letters, 2008, 100(19): 194101
[39] Annovazzi-Lodi V, Benedetti M, Merlo S, Norgia M, Provinzano B. Optical chaos masking of video signals. IEEE Photonics Technology Letters, 2005, 17(9): 1995–1997
[40] Argyris A, Grivas E, Hamacher M, Bogris A, Syvridis D. Chaos-on-a-chip secures data transmission in optical fiber links. Optics Express, 2010, 18(5): 5188–5198
[41] Gastaud N, Poinsot S, Larger L, Merolla J M, Hanna M, Goedgebuer J P, Malassenet F. Electro-optical chaos for multi-10 Gbit/s optical transmissions. Electronics Letters, 2004, 40(14): 898–899

Acknowledgements

This work was supported by the National Basic Research Program of China (973 Program) (No. 2012CB315602), the National Natural Science Foundation of China (Grant Nos. 61575122, 61322507 and 61132004).

RIGHTS & PERMISSIONS

2016 Higher Education Press and Springer-Verlag Berlin Heidelberg