1. State key laboratory of precision spectroscopy, School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China
2. Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan 030006, China
3. NYU-ECNU Institute of Physics at NYU Shanghai, Shanghai 200062, China
ylyin@phy.ecnu.edu.cn
yxia@phy.ecnu.edu.cn
Article history: Received 2023-11-02; Accepted 2024-02-01; Revised 2024-04-22; Published 2024-10-15
Abstract
Orbital angular momentum (OAM) modes greatly enhance the channel capacity of free-space optical communication. However, demodulating superposed OAM beams to recognize each constituent mode separately remains difficult, especially as more OAM modes are multiplexed. In this work, we report the direct recognition of multiplexed fractional OAM modes, without separating them, at a resolution of 0.1 and with high accuracy, using a multi-task deep learning (MTDL) model; such recognition has not been reported before. Specifically, two-mode, four-mode, and eight-mode superposed OAM beams, experimentally generated with a hologram carrying both phase and amplitude information, are well recognized by the corresponding MTDL models. Two applications in information transmission are presented: the first is 256-ary OAM shift keying via multiplexed fractional OAMs; the second is OAM-division-multiplexed information transmission at an eightfold speed. These encouraging results will expand the capacity of future free-space optical communication.
With the rapid advancement of massive data transmission, cloud computing, and artificial intelligence, traditional communication methods face severe limitations in channel capacity [1,2]. To address this challenge and enhance the channel capacity of communication systems, a multiplexing technique based on vortex beams carrying orbital angular momentum (OAM) has been proposed [3-8], offering a fundamental route to relieving the capacity bottleneck of multiplexed communication. Various methods have been developed for generating multiplexed beams, not only with spatial light modulators but also with metasurfaces and nonlinear photonic crystals [6-11]. Multiplexed information transmission utilizing OAM offers several advantages. First, owing to the infinite number of OAM eigenstates of a vortex beam, multiple information channels can be transmitted along the same spatial path, thereby expanding the communication dimensionality. Second, the inherent orthogonality of vortex beams with distinct OAM modes permits information to be modulated onto separate vortex beams; as a result, signals transmitted through different OAM channels do not interfere, which also enhances the reliability of information transmission. Furthermore, with the emergence of vortex-beam multiplexing technology, which introduces OAM as a new multiplexing dimension, OAM research has attracted considerable attention in the field of information transmission [12-20]. However, the aforementioned methods only consider eigenmodes with integer topological charges because of resolution limitations. As the integer topological charge increases, the intensity distributions of vortex beams are significantly affected by the growing phase singularity and by diffraction effects, which makes it challenging to focus vortex beams in free-space propagation and to couple them into fibers. Therefore, further enhancing the communication capacity requires incorporating a greater number of OAM states while minimizing the phase singularity, for example by using fractional OAM [21-23]. If the interval between adjacent topological charges is 0.1 (i.e., an OAM resolution of 0.1), there are readily ten times more fractional OAM modes than integer ones. Precise measurement of each fractional OAM mode is therefore highly beneficial for OAM-based free-space optical communication (OAM-FSOC).
In previous studies, the detection of multiplexed OAM modes was largely based on mode-splitting methods, where each OAM mode was recognized after being separated in space according to certain rules [4-7]. One common approach is to detect the vortex-beam phase with interferometry. In 2002, Leach et al. [24] proposed sorting multiplexed OAM modes by cascading Mach−Zehnder interferometers with pairs of Dove prisms; by adjusting the prism rotation angles, an N-cascade device with N − 1 interferometers can, in principle, split 2N OAM modes. It is important to note that the larger the number of demodulated OAMs, the greater the number of required interferometric systems. To simplify the mode-splitting process, in 2010 Berkhout et al. [25] used a coordinate transformation implemented with two diffractive optical elements to sort multiplexed OAM modes. However, this method has difficulty separating OAM modes with small intervals because adjacent modes overlap. To separate superimposed OAM beams more effectively and efficiently, Mirhosseini et al. [26] in 2013 and Wen et al. [27] in 2018 improved the scheme to achieve a resolution of 1 for OAM modes. In summary, different OAM modes can be measured by exploiting the interference and diffraction properties of a superimposed vortex beam.
Current OAM-based optical communication approaches comprise OAM division multiplexing (OAM-DM), in which separate OAM beams are treated as independent information channels, and OAM shift keying (OAM-SK), in which each OAM state is encoded as a data bit [1,9]. OAM-FSOC was pioneered by Padgett and colleagues in 2004 [28] and has since been greatly enhanced with the help of deep learning models [18,20,29-31], such as artificial neural networks (ANNs), deep neural networks (DNNs), and convolutional neural networks (CNNs), as well as other machine-learning methods [32-37], including the K-nearest neighbor (KNN), support vector machine (SVM), and naive Bayes classifier (NBC). These models not only improve the accuracy of OAM recognition over long-distance transmission but also increase the number of recognizable OAM states. For example, in 2014, Krenn et al. [18] were the first to realize machine-learning-based OAM transmission through 3 km of strong turbulence over the city of Vienna, achieving an error rate of about 1.7% with an ANN model. In 2016, they extended the experiment to a distance of 143 km between two Canary Islands, again relying on an ANN model; the recognition accuracy of superposed OAM modes exceeded 80% [20]. In 2018, a 16-ary OAM-FSOC system was implemented under strong atmospheric turbulence, with a bit error ratio varying from 0 to 4.89 × 10−4 using a CNN model [35]. Furthermore, the number of recognizable multiplexed OAM states has been increased to 32-ary [29] and 100-ary [30]. In 2021, a CNN-based 768-ary Laguerre−Gaussian mode shift keying free-space optical communication system was realized, which successfully transmitted color images with a received peak signal-to-noise ratio of 35 dB [38].
Among the variety of deep learning methods, multi-task deep learning (MTDL) is a learning paradigm that jointly learns multiple tasks through shared structures, enhancing generalization performance and reducing the need for manual labeling [39]. Compared with common single-task (one mode at a time) detection methods, MTDL provides a parallel and effective way to measure each OAM mode accurately when more than two OAMs are multiplexed. Herein, using an MTDL model, we demonstrate an approach that directly recognizes the information carried by superposed fractional OAM modes. Our model deciphers all modes at once from the characteristic intensity distribution of the multiplexed fractional OAMs, without demultiplexing them separately; similar work has not been reported before. Furthermore, we present two potential applications: (i) fractional OAM multiplexing of two, four, and eight modes for 256-ary OAM-SK, for which we also investigate the performance of transmitting images of different sizes with varying numbers of multiplexed OAMs; and (ii) eightfold-speed OAM-DM enabled by the MTDL model.
2 Designs and methods
2.1 Experimental setup
Fig.1 shows the schematic. In brief, a stabilized HeNe laser at 632.991 nm generates the Gaussian beam. A spatial light modulator (SLM), a phase-only reflective liquid-crystal device (1920 pixel × 1080 pixel, 8.0 μm pixel pitch, 8-bit phase level), is pre-loaded with a hologram containing both phase and amplitude structures. Since the phase level of our SLM is 8-bit (256 grayscale values), it is capable of generating fractional OAM at 0.1 and 0.01 resolution. After collimation and expansion, the Gaussian beam illuminates the SLM screen and is modulated to carry the multiplexed OAM modes. The resulting intensity-distribution images are acquired by a CCD camera and processed by the MTDL model (GPU: NVIDIA RTX-3070; CPU: Intel i7-10700). The entire system is first used to train the CNN-based MTDL model. It is then used to transmit a picture, e.g., "The Self-Portrait of Van Gogh": the grayscale values are encoded with different OAM multiplexing schemes, specific superposed OAM beams of the coded sets are generated for each image pixel, and the corresponding intensity-distribution images are fed into the MTDL model. Finally, the received picture is decoded from the recognized OAM modes.
2.2 Characteristics of superposition of OAM beams
For multiplexed OAM, it is necessary to generate multiple OAM beams. We modulate a Gaussian beam with a helical wavefront to generate the optical vortex. The light field distribution of an OAM beam can be expressed as
$$E_{l}(r,\varphi) = A\,\exp\!\left(-\frac{r^{2}}{w_{0}^{2}}\right)\exp(\mathrm{i} l\varphi),$$
where $(r,\varphi)$ denotes the cylindrical coordinates, $A$ and $w_{0}$ are the complex amplitude and the waist of the incident Gaussian beam, respectively, and $l$ is the topological charge.
The superposition of multiple OAM modes can then be simply described as
$$E(r,\varphi) = \sum_{n} E_{l_{n}}(r,\varphi) = A\,\exp\!\left(-\frac{r^{2}}{w_{0}^{2}}\right)\sum_{n}\exp(\mathrm{i} l_{n}\varphi).$$
In the experiment, we generate multiple OAM beams with a spatial light modulator, which provides an easy way to manipulate optical fields using computer-generated phase-only holograms [40]. However, to accurately generate a multiple-OAM beam with stable, reliable, and quantitative features, both the phase and the amplitude structures must be engineered simultaneously. Theoretically, we need to encode a complex-valued function onto the SLM. The transmittance function of the phase profile imprinted on the SLM hologram can be expressed as [41,42]
$$t(m,n) = \exp\{\mathrm{i}\, M(m,n)\,\mathrm{Mod}[F(m,n) + 2\pi m/\Lambda,\; 2\pi]\},$$
where $m$ and $n$ are the pixel coordinates, and $\Lambda$ is the period of the blazed grating pattern. $M(m,n)$ is a normalized positive function of the amplitude, $0 \le M \le 1$, and $F(m,n)$ is an analytical function of the amplitude and phase profiles. After a series of calculations based on a Taylor−Fourier expansion and spatial filtering of all diffraction orders except the first one, these two functions are given by
$$M(m,n) = 1 + \frac{1}{\pi}\,\mathrm{sinc}^{-1}[A(m,n)], \qquad F(m,n) = \Phi(m,n) - \pi M(m,n),$$
where $\mathrm{sinc}^{-1}$ represents the inverse of the sinc function, $\mathrm{sinc}(x) = \sin(x)/x$ is the unnormalized sinc function restricted to the domain $[-\pi, 0]$, and $A(m,n)$ and $\Phi(m,n)$ are the target amplitude and phase profiles of the desired field.
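To make the encoding concrete, the following numpy sketch (our illustration, not the authors' code) builds such an amplitude-and-phase hologram for a two-mode superposed fractional-OAM beam following the equations above; the beam waist, grating period, and target modes are assumed values, and the inverse sinc is evaluated numerically.

```python
import numpy as np

# SLM geometry: 1920 x 1080 pixels, 8.0 um pixel pitch (as in Section 2.1).
H, W, pitch = 1080, 1920, 8.0e-6
y, x = np.mgrid[-H // 2:H // 2, -W // 2:W // 2] * pitch
r, phi = np.hypot(x, y), np.arctan2(y, x)

w0 = 1.0e-3                    # assumed Gaussian beam waist (m)
Lambda = 8 * pitch             # assumed blazed-grating period (m)
ls = [2.6, -2.4]               # target superposed fractional OAM modes (the Fig. 8 example)

field = np.exp(-r**2 / w0**2) * sum(np.exp(1j * l * phi) for l in ls)
A = np.abs(field) / np.abs(field).max()          # normalized target amplitude, 0 <= A <= 1
Phi = np.angle(field)                            # target phase profile

# Numerically invert the unnormalized sinc(x) = sin(x)/x on [-pi, 0], where it is monotonic.
xs = np.linspace(-np.pi, -1e-6, 4096)
sinc_inv = np.interp(A, np.sin(xs) / xs, xs)

M = 1 + sinc_inv / np.pi                         # amplitude-dependent modulation depth
F = Phi - np.pi * M                              # phase term
hologram = M * np.mod(F + 2 * np.pi * x / Lambda, 2 * np.pi)      # phase loaded on the SLM
gray = np.round(hologram / (2 * np.pi) * 255).astype(np.uint8)    # 8-bit grayscale pattern
```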
For double-mode OAM multiplexing, we use random combinations of $l_1$ and $l_2$, both with a resolution of 0.1; $l_1$ ranges from 2.0 to 3.5 and $l_2$ from ‒2.0 to ‒3.5. Owing to the limited layout, we only show 16 combinations of $(l_1, l_2)$ in Fig.2: the optical intensity distributions exhibit stable and clear stripes in the two-mode OAM superposition. For the superposition of two integer modes $l_1$ and $l_2$, there are four petal-like stripes at $(2.0, -2.0)$, five petal-like stripes at $(2.0, -3.0)$ or $(3.0, -2.0)$, and six petal-like stripes at $(3.0, -3.0)$; apparently, the number of stripes equals $|l_1 - l_2|$. For the superposition of one integer and one fractional OAM mode, the petal stripes start to bifurcate, as clearly seen in the combinations shown at the bottom-right and upper-right corners of Fig.2. For the superposition of two non-integer OAM modes, the petals begin to diverge simultaneously, as seen in the combinations shown on the right side of Fig.2.
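The petal count can be checked in one line (our addition, assuming equal amplitudes of the two modes): the azimuthal intensity of the two-mode superposition is
$$\bigl|e^{\mathrm{i} l_{1}\varphi}+e^{\mathrm{i} l_{2}\varphi}\bigr|^{2}=2\bigl[1+\cos\!\bigl((l_{1}-l_{2})\varphi\bigr)\bigr],$$
which has $|l_{1}-l_{2}|$ maxima over $\varphi\in[0,2\pi)$ when $l_{1}-l_{2}$ is an integer, i.e., $|l_{1}-l_{2}|$ petal-like stripes; a fractional $l_{1}-l_{2}$ leaves one period incomplete, consistent with the bifurcation described above.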
Similarly, we conduct simulations and experiments to generate superpositions of OAM beams in four-mode and eight-mode configurations, which exhibit patterns similar to the two-mode ones above.
2.3 Framework of MTDL
In this work, the MTDL model is built by hard parameter sharing of hidden layers [43,44]: the hidden layers are shared among all tasks while several task-specific output layers are kept. The backbone of the MTDL model plays a key role in extracting image features. To give it better feature-acquisition capability, we build it on a CNN with a residual network (ResNeXt-50). As a variant of the original residual network (ResNet), ResNeXt effectively deepens and widens the network by establishing, in parallel within each convolutional block and each unit block, connections and a group-convolution mechanism between the input and residual signals, thereby improving the classification accuracy [45]. In Fig.3, each raw image is resized to 224 pixel × 224 pixel as the network input. The backbone of ResNeXt-50, consisting of a basic convolutional layer (I), a max-pooling layer, and Layers II, III, IV, and V, serves as the foundation of feature extraction, from which the learned transferable features are shared across all tasks [46]. Layers II, III, IV, and V are formed by stacks of residual blocks with the same topology, displayed separately in Fig.3(a)‒(d). Taking Layer II as an example, there are three blocks in the stack, and the block topology is shown in Fig.3(a). The first black box in the topology is the first layer, with a filter size of 1 × 1 and 128 output channels. The second black box represents the second layer, with a filter size of 3 × 3 and also 128 output channels; the notation "Grouped = 32" indicates grouped convolutions with 32 groups. The third black box shows a layer with a filter size of 1 × 1 and 256 output channels. These blocks with grouped convolutional layers form a wider but more sparsely connected module than the original bottleneck residual block of ResNet-50. Depending on the number of multiplexed OAMs, different tasks are then designed to recognize these OAMs concurrently, i.e., two classification tasks for the two-mode superposition and four classification tasks for the four-mode superposition. An average pooling layer and a fully connected layer are set for each task to fit the task-specific structure, and finally a softmax function performs the classification of each task. In the training process, we use the Adam optimizer for 50 epochs to reach the best optimization.
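To make the hard-parameter-sharing architecture concrete, the following PyTorch sketch (our illustration, not the authors' implementation) couples a shared ResNeXt-50 backbone with one classification head per OAM mode. The task count and class count (here matching the four-mode case of Section 3.2), batch size, and learning rate are assumed values.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskOAMNet(nn.Module):
    """Shared ResNeXt-50 backbone (hard parameter sharing) with task-specific heads."""
    def __init__(self, num_tasks=4, classes_per_task=4):
        super().__init__()
        backbone = models.resnext50_32x4d()      # ResNeXt-50 (32x4d), grouped convolutions
        backbone.fc = nn.Identity()              # keep the 2048-d pooled feature vector
        self.backbone = backbone                 # hidden layers shared by all tasks
        # One fully connected head per multiplexed OAM mode to be recognized.
        self.heads = nn.ModuleList(
            [nn.Linear(2048, classes_per_task) for _ in range(num_tasks)]
        )

    def forward(self, x):
        feat = self.backbone(x)                       # shared feature extraction
        return [head(feat) for head in self.heads]    # one logit vector per task

model = MultiTaskOAMNet(num_tasks=4, classes_per_task=4)
criterion = nn.CrossEntropyLoss()                     # softmax classification per task
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Adam, as in the training setup

# One illustrative optimization step on random data shaped like the 224 x 224 inputs.
images = torch.randn(8, 3, 224, 224)
labels = [torch.randint(0, 4, (8,)) for _ in range(4)]       # one label per task
optimizer.zero_grad()
loss = sum(criterion(out, lab) for out, lab in zip(model(images), labels))
loss.backward()
optimizer.step()
```

The joint loss is simply the sum of the per-task cross-entropies, so gradients from every OAM-mode classifier update the shared backbone together.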
3 Results and discussion
3.1 Recognition of multiple fractional OAM beams
The MTDL model integrates multiple tasks by sharing factors or representations during learning. This approach is expected to yield better generalization than single-task learning and therefore finds applications in diverse domains such as natural language processing, speech recognition, and computer vision [47-49]. The model can also serve as a parallel and efficient method for accurately identifying multiple fractional OAM beams simultaneously, as demonstrated below.
To train the MTDL models for the two-mode, four-mode, and eight-mode superpositions of fractional OAMs, we collect 10 240 images for each configuration. These images are randomly divided into a training set, a validation set, and a test set at a ratio of 6:2:2. The performance evaluation of the four-mode multiplexing is shown in Fig.4(a) for the accuracy curves and in Fig.4(b) for the loss-function curves: green curves are for l1, blue for l2, orange for l3, and purple for l4. From Fig.4(a), the accuracies of all OAM modes in the training and validation sets increase within the first 15 epochs, stabilize at around 30 epochs, and finally reach their best values (nearly 100%) at the 50th epoch. From Fig.4(b), the loss functions converge after 15 epochs. In short, our MTDL model is well trained and suitable for prediction. The model training and validation for the two-mode and eight-mode cases are completed in the same way: the loss functions converge to their minima within 10 epochs for the two-mode case and within 25 epochs for the eight-mode case, and the accuracy of each OAM mode exceeds 99% in both cases.
Fig.5 shows the detailed confusion matrices for the two-mode fractional OAM multiplexing on the test dataset. The horizontal axis represents the known OAM modes, and the vertical axis represents the predicted OAM modes. The plot shows that the model recognizes the multiple fractional OAM modes well. Fig.6 and Fig.7 show the confusion matrices for the four-mode and eight-mode cases, respectively. As shown in Fig.6(a)−(c) for the three modes l1, l2, and l3, all predictions lie on the diagonal, indicating an accuracy of 100%. In Fig.6(d) for the mode l4, the accuracy is 99.46% owing to the error of l = ‒3.1 being incorrectly recognized as l = ‒3.2. In Fig.7, despite the small differences between the modes, the model still performs well: small errors occur for the modes l7 [Fig.7(g)] and l8 [Fig.7(h)], whose accuracies are both 99.32%. In brief, the results show that our model is reliable for recognizing multiple fractional OAM beams.
3.2 Information transmission I: Fractional OAM multiplexing 256-ary OAM-SK
We construct a 1-m OAM-SK free-space information transmission link to transmit an 8-bit grayscale image of Van Gogh. Depending on the OAM multiplexing scheme, grayscale values in the range 0‒255 are mapped to two-digit hexadecimal numbers, four-digit quaternary (base-4) numbers, or eight-digit binary numbers.
To encode the 256-ary grayscale value with a resolution of 0.1, OAM beams are formed using the following configurations:
For two-mode multiplexing, $l_1 \in$ [2.0, 3.5] and $l_2 \in$ [‒3.5, ‒2.0].
For four-mode multiplexing, the four OAM modes take values in [2.0, 2.3], [3.0, 3.3], [‒2.3, ‒2.0], and [‒3.3, ‒3.0].
For eight-mode multiplexing, the eight OAM modes take values in [2.0, 2.1], [3.0, 3.1], [4.0, 4.1], [5.0, 5.1], [‒2.1, ‒2.0], [‒3.1, ‒3.0], [‒4.1, ‒4.0], and [‒5.1, ‒5.0].
The coding mechanism is illustrated in Fig.8. For instance, the pixel value 100 can be represented by a two-digit hexadecimal number (64), giving the two-mode superposed OAM set (2.6, ‒2.4); by a four-digit quaternary number (1210), giving the four-mode superposed OAM set (2.1, ‒2.2, 3.1, ‒3.0); or by an eight-digit binary number (01100100), giving the eight-mode superposed OAM set (2.0, ‒2.1, 3.1, ‒3.0, 4.0, ‒4.1, 5.0, ‒5.0). From Fig.8(a) and (b), the light intensity distribution changes obviously among different OAM sets in two-mode and four-mode multiplexing, consistent with the high OAM resolution (0.1) and the wide range of grayscale values, and is therefore easily recognized by the MTDL model. Owing to the small intervals among the four and eight modes, the light field intensity distributions in Fig.8(b) and (c) are quite localized: the light intensity is concentrated mainly on the right side of each image, so the stripes on the right are brighter and more distinct, while those on the left are darker. Interestingly, in Fig.8(c) the intensity distributions of the eight-mode sets are mostly indistinguishable to the human eye, yet the MTDL model is still capable of recognizing the subtle differences in the patterns, as demonstrated by the high accuracy (99.32%).
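To make the mapping explicit, the following sketch (our illustration of the scheme just described; the hypothetical function name and the digit-to-mode ordering follow the Fig.8 example) converts a grayscale value into the corresponding OAM mode set for the three multiplexing schemes.

```python
def encode_pixel(value, n_modes):
    """Map an 8-bit grayscale value (0-255) to a set of fractional OAM modes."""
    base = {2: 16, 4: 4, 8: 2}[n_modes]   # two-mode: hex; four-mode: base-4; eight-mode: binary
    digits = []
    for _ in range(n_modes):
        digits.append(value % base)
        value //= base
    digits.reverse()                      # most significant digit first
    modes = []
    for i, d in enumerate(digits):
        base_l = (2.0 + i // 2) * (1 if i % 2 == 0 else -1)   # +2, -2, +3, -3, +4, -4, +5, -5
        modes.append(round(base_l + 0.1 * d * (1 if base_l > 0 else -1), 1))
    return modes

print(encode_pixel(100, 2))   # [2.6, -2.4]
print(encode_pixel(100, 4))   # [2.1, -2.2, 3.1, -3.0]
print(encode_pixel(100, 8))   # [2.0, -2.1, 3.1, -3.0, 4.0, -4.1, 5.0, -5.0]
```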
Following the coding mechanism above, Fig.9(a)‒(c) show the received 160 × 160 images for two-mode, four-mode, and eight-mode multiplexing, respectively; the corresponding pixel error rates (PERs) are 0, 0.07%, and 0.03%. The error points are not obvious, and the received images are nearly indistinguishable from the original ones. These "invisible" error points can be traced to the imperfect confusion matrices in Fig.6 and Fig.7: the recognition failures occur in l4 for four-mode multiplexing (‒3.1 misidentified as ‒3.2) and in l7 and l8 for eight-mode multiplexing (5.1 misidentified as 5.0, or ‒5.1 as ‒5.0). According to the coding mechanism, such recognition errors only affect the last digit of the quaternary code, resulting in a grayscale deviation of 1, or the last two digits of the binary code, resulting in a deviation of at most 3. Since grayscale values of 0‒255 represent colors ranging from black to white, the difference caused by such a small deviation is too minute for the human eye to notice.
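A minimal decoding sketch (the inverse of the hypothetical encode_pixel above, using the same assumed digit-to-mode ordering) illustrates why these recognition errors shift the grayscale value by only a few levels.

```python
def decode_modes(modes):
    """Recover the grayscale value from a set of fractional OAM modes (inverse of encode_pixel)."""
    base = {2: 16, 4: 4, 8: 2}[len(modes)]
    value = 0
    for i, m in enumerate(modes):
        base_l = 2.0 + i // 2                        # magnitude of the i-th base mode
        digit = int(round((abs(m) - base_l) / 0.1))  # fractional offset encodes the digit
        value = value * base + digit
    return value

correct = [2.0, -2.1, 3.1, -3.0, 4.0, -4.1, 5.0, -5.1]   # encodes 101 (binary 01100101)
faulty = [2.0, -2.1, 3.1, -3.0, 4.0, -4.1, 5.0, -5.0]    # l8 misread as -5.0, as in Fig. 7(h)
print(decode_modes(correct), decode_modes(faulty))        # 101 100 -> grayscale deviation of 1
```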
We also investigate the relationship between the performance of the information transmission link and the picture size. We select eight different sizes of the self-portrait of Van Gogh: 90 pixel × 90 pixel, 100 pixel × 100 pixel, 110 pixel × 110 pixel, 120 pixel × 120 pixel, 130 pixel × 130 pixel, 140 pixel × 140 pixel, 160 pixel × 160 pixel, and 180 pixel × 180 pixel. The transmission results are shown in Fig.9(d). In two-mode multiplexing, images of all sizes are transmitted perfectly with no pixel error. This excellent performance is not surprising, as the confusion matrix already demonstrated 100% accuracy for both encoded digits, leaving essentially no grayscale error. For four-mode and eight-mode multiplexing, both results show similar oscillatory distributions. In terms of probability theory, the PER of the four-mode transmission should be $P_E = 1 - \prod_i P_{C_i}$, where $P_{C_i}$ is the probability that the $i$-th digit of the code is correctly identified, i.e., the recognition accuracy of the corresponding OAM mode; the probabilistic PER of the eight-mode transmission is obtained in the same way. A simple calculation shows that the error rate for the eight-mode multiplexed transmission is lower than that of the four-mode one, consistent with our experimental results. Owing to the random occurrence of recognition errors and the influence of the actual transmission process, the PER of four-mode OAM multiplexing is only slightly higher than that of eight-mode multiplexing, and both are below 0.07%. The best result is obtained for the 100 pixel × 100 pixel image, with a PER of 0.01% for four-mode multiplexing and zero for eight-mode multiplexing. Clearly, fractional OAM multiplexing with 256-ary OAM-SK is effectively accomplished by the MTDL model for optical information transmission.
3.3 Information transmission II: An eightfold speed of OAM-DM
In OAM-DM, OAM beams are treated as independent information channels that carry separate data streams by exploiting the orthogonality between different OAM states. With the aid of the eight-task MTDL model, OAM-DM information transmission is achieved essentially eight times faster, as shown in Fig.10; the transmission accuracy reaches 99.63% for an image size of 160 × 160. In OAM-DM optical communication, each pixel value is carried by a specific OAM mode. With single-task deep learning, only one OAM mode at a time can be recognized, which cannot handle the transmission of superimposed OAM modes. Taking a 160 × 160 image as an example, a total of 25 600 points would need to be transmitted one by one through the 1-m information transmission link; considering the limitations of the SLM and CCD refresh times and the computer response time, transmitting each image would consume considerable resources. By leveraging eight-task deep learning, as shown in Fig.10(b), each group of eight pixels is carried by one eight-mode-multiplexed OAM beam, so for a 160 × 160 image the number of OAM intensity distributions to be transmitted is reduced to 3200 (160 × 160/8). This approach transmits the same information in only one-eighth of the original time, i.e., eight times faster than before. In other words, MTDL-based OAM multiplexing significantly enhances the information capacity of OAM-DM transmission.
4 Conclusion
In conclusion, we present an MTDL-based recognition technique for multiplexed fractional OAM modes at a resolution of 0.1, without demultiplexing them separately, together with applications in OAM-SK and OAM-DM optical information transmission. The recognition accuracy for each OAM mode is 100% for the two-mode superposition, at least 99.46% for the four-mode superposition, and at least 99.32% for the eight-mode superposition. For the 256-ary OAM-SK, images of various sizes are transmitted without any error in two-mode multiplexing; for four-mode and eight-mode multiplexing, the PER is low, with the best result at or below 0.01%. For OAM-DM, the transmission accuracy is 99.63% for an image size of 160 × 160 pixels. As OAM multiplexing enhances the communication bandwidth and improves the spectral efficiency, accurate detection of the multiplexed vortex light at the receiver end is essential for the correct operation of the entire multiplexing system. Our results demonstrate good potential to further expand the range of modes and enhance the information-processing capacity of optical communication using sophisticated neural networks [50,51].
References
[1]
A. E. Willner, H. Huang, Y. Yan, Y. Ren, N. Ahmed, G. Xie, C. Bao, L. Li, Y. Cao, Z. Zhao, J. Wang, M. P. J. Lavery, M. Tur, S. Ramachandran, A. F. Molisch, N. Ashrafi, S. Ashrafi. Optical communications using orbital angular momentum beams. Adv. Opt. Photonics, 2015, 7(1): 66
[2]
H. Rubinsztein-Dunlop, A. Forbes, M. V. Berry, M. R. Dennis, D. L. Andrews, M. Mansuripur, C. Denz, C. Alpmann, P. Banzer, T. Bauer, E. Karimi, L. Marrucci, M. Padgett, M. Ritsch-Marte, N. M. Litchinitser, N. P. Bigelow, C. Rosales-Guzmán, A. Belmonte, J. P. Torres, T. W. Neely, M. Baker, R. Gordon, A. B. Stilgoe, J. Romero, A. G. White, R. Fickler, A. E. Willner, G. Xie, B. McMorran, A. M. Weiner. Roadmap on structured light. J. Opt., 2017, 19(1): 013001
[3]
L. Allen, M. Beijersbergen, R. Spreeuw, J. Woerdman. Orbital angular momentum of light and transformation of Laguerre‒Gaussian laser modes. Phys. Rev. A, 1992, 45(11): 8185
[4]
J. P. Yin, W. J. Gao, Y. F. Zhu. Generation of dark hollow beams and their applications. Prog. Opt., 2003, 45(11): 119
[5]
M. J. Padgett. Orbital angular momentum 25 years on. Opt. Express, 2017, 25(10): 11265
[6]
Y. J. Shen, X. J. Wang, Z. W. Xie, C. J. Min, X. Fu, Q. Liu, M. L. Gong, X. C. Yuan. Optical vortices 30 years on: OAM manipulation from topological charge to multiple singularities. Light Sci. Appl., 2019, 8(1): 90
[7]
A. Forbes, S. Ramachandran, Q. W. Zhan. Photonic angular momentum: Progress and perspectives. Nanophotonics, 2022, 11(4): 625
[8]
S. J. Li, Z. Y. Li, G. S. Huang, X. B. Liu, R. Q. Li, X. Y. Cao. Digital coding transmissive metasurface for multi-OAM-beam. Front. Phys., 2022, 17(6): 62501
[9]
L. Jin, Y. W. Huang, Z. W. Jin, R. C. Devlin, Z. G. Dong, S. T. Mei, M. H. Jiang, W. T. Chen, Z. Wei, H. Liu, J. H. Teng, A. Danner, X. P. Li, S. M. Xiao, S. Zhang, C. Y. Yu, J. K. W. Yang, F. Capasso, C. W. Qiu. Dielectric multi-momentum meta-transformer in the visible. Nat. Commun., 2019, 10(1): 4789
[10]
X. Y. Fang, H. J. Wang, H. C. Yang, Z. L. Ye, Y. M. Wang, Y. Zhang, X. P. Hu, S. N. Zhu, M. Xiao. Multichannel nonlinear holography in a two-dimensional nonlinear photonic crystal. Phys. Rev. A, 2020, 102(4): 043506
[11]
J. J. Guo, Y. P. Zhang, H. Ye, L. Y. Wang, P. C. Chen, D. J. Mao, C. Z. Xie, Z. H. Chen, X. W. Wu, M. Xiao, Y. Zhang. Spatially structured-mode multiplexing holography for high-capacity security encryption. ACS Photonics, 2023, 10(3): 757
[12]
A. E. Willner, Z. Zhao, C. Liu, R. Zhang, H. Song, K. Pang, K. Manukyan, H. Song, X. Su, G. Xie, Y. Ren, Y. Yan, M. Tur, A. F. Molisch, R. W. Boyd, H. Zhou, N. Hu, A. Minoofar, H. Huang. Perspectives on advances in high-capacity, free-space communications using multiplexing of orbital-angular-momentum beams. APL Photonics, 2021, 6(3): 030901
[13]
J. Lin, X. C. Yuan, S. H. Tao, R. E. Burge. Multiplexing free-space optical signals using superimposed collinear orbital angular momentum states. Appl. Opt., 2007, 46(21): 4680
[14]
J. Wang, J. Y. Yang, I. M. Fazal, N. Ahmed, Y. Yan, H. Huang, Y. X. Ren, Y. Yue, S. Dolinar, M. Tur, A. E. Willner. Terabit free-space data transmission employing orbital angular momentum multiplexing. Nat. Photonics, 2012, 6(7): 488
[15]
N. Bozinovic, Y. Yue, Y. Ren, M. Tur, P. Kristensen, H. Huang, A. E. Willner, S. Ramachandran. Terabit-scale orbital angular momentum mode division multiplexing in fibers. Science, 2013, 340(6140): 1545
[16]
G. Vallone, V. D’Ambrosio, A. Sponselli, S. Slussarenko, L. Marrucci, F. Sciarrino, P. Villoresi. Free-space quantum key distribution by rotation-invariant twisted photons. Phys. Rev. Lett., 2014, 113(6): 060503
[17]
H. Huang, G. Xie, Y. Yan, N. Ahmed, Y. Ren, Y. Yue, D. Rogawski, M. J. Willner, B. I. Erkmen, K. M. Birnbaum, S. J. Dolinar, M. P. J. Lavery, M. J. Padgett, M. Tur, A. E. Willner. 100 Tbit/s free-space data link enabled by three-dimensional multiplexing of orbital angular momentum, polarization, and wavelength. Opt. Lett., 2014, 39(2): 197
[18]
M. Krenn, R. Fickler, M. Fink, J. Handsteiner, M. Malik, T. Scheidl, R. Ursin, A. Zeilinger. Communication with spatially modulated light through turbulent air across Vienna. New J. Phys., 2014, 16(11): 113028
[19]
A. J. Willner, Y. Ren, G. Xie, Z. Zhao, Y. Cao, L. Li, N. Ahmed, Z. Wang, Y. Yan, P. Liao, C. Liu, M. Mirhosseini, R. W. Boyd, M. Tur, A. E. Willner. Experimental demonstration of 20 Gbit/s data encoding and 2 ns channel hopping using orbital angular momentum modes. Opt. Lett., 2015, 40(24): 5810
[20]
M. Krenn, J. Handsteiner, M. Fink, R. Fickler, R. Ursin, M. Malik, A. Zeilinger. Twisted light transmission over 143 km. Proc. Natl. Acad. Sci. USA, 2016, 113(48): 13648
[21]
H. Zhang, J. Zeng, X. Y. Lu, Z. Y. Wang, C. L. Zhao, Y. J. Cai. Review on fractional vortex beam. Nanophotonics, 2022, 11(2): 241
[22]
Z. W. Liu, S. Yan, H. G. Liu, X. F. Chen. Superhigh-resolution recognition of optical vortex modes assisted by a deep-learning method. Phys. Rev. Lett., 2019, 123(18): 183902
[23]
Y. Na, D. K. Ko. Adaptive demodulation by deep-learning-based identification of fractional orbital angular momentum modes with structural distortion due to atmospheric turbulence. Sci. Rep., 2021, 11(1): 23505
[24]
J. Leach, M. Padgett, S. Barnett, S. Franke-Arnold, J. Courtial. Measuring the orbital angular momentum of a single photon. Phys. Rev. Lett., 2002, 88(25): 257901
[25]
G. Berkhout, M. Lavery, J. Courtial, M. Beijersbergen, M. Padgett. Efficient sorting of orbital angular momentum states of light. Phys. Rev. Lett., 2010, 105(15): 153601
[26]
M. Mirhosseini, M. Malik, Z. Shi, R. Boyd. Efficient separation of the orbital angular momentum eigenstates of light. Nat. Commun., 2013, 4(1): 2781
[27]
Y. H. Wen, I. Chremmos, Y. J. Chen, J. B. Zhu, Y. F. Zhang, S. Y. Yu. Spiral transformation for high-resolution and efficient sorting of optical vortex modes. Phys. Rev. Lett., 2018, 120(19): 193904
[28]
G. Gibson, J. Courtial, M. J. Padgett, M. Vasnetsov, V. Pas’ko, S. M. Barnett, S. Franke-Arnold. Free-space information transfer using light beams carrying orbital angular momentum. Opt. Express, 2004, 12(22): 5448
[29]
T. Doster, A. T. Watnik. Machine learning approach to OAM beam demultiplexing via convolutional neural networks. Appl. Opt., 2017, 56(12): 3386
[30]
S. Lohani, E. M. Knutson, M. O’Donnell, S. D. Huver, R. T. Glasser. On the use of deep neural networks in optical communications. Appl. Opt., 2018, 57(15): 4180
[31]
S. Park, L. Cattell, J. Nichols, A. Watnik, T. Doster, G. Rohde. De-multiplexing vortex modes in optical communications using transport-based pattern recognition. Opt. Express, 2018, 26(4): 4004
[32]
J. Li, M. Zhang, D. S. Wang. Adaptive demodulator using machine learning for orbital angular momentum shift keying. IEEE Photonics Technol. Lett., 2017, 29(17): 1455
[33]
Q. S. Zhao, S. Q. Hao, Y. Wang, L. Wang, X. F. Wan, C. L. Xu. Mode detection of misaligned orbital angular momentum beams based on convolutional neural network. Appl. Opt., 2018, 57(35): 10152
[34]
S. Lohani, R. T. Glasser. Turbulence correction with artificial neural networks. Opt. Lett., 2018, 43(11): 2611
[35]
Q. H. Tian, Z. Li, K. Hu, L. Zhu, X. L. Pan, Q. Zhang, Y. J. Wang, F. Tian, X. L. Yin, X. J. Xin. Turbo-coded 16-ary OAM shift keying FSO communication system combining the CNN-based adaptive demodulator. Opt. Express, 2018, 26(21): 27849
[36]
J. Li, M. Zhang, D. S. Wang, S. J. Wu, Y. Y. Zhan. Joint atmospheric turbulence detection and adaptive demodulation technique using the CNN for the OAM‒FSO communication. Opt. Express, 2018, 26(8): 10494
[37]
Y. Z. Shi, Z. H. Ma, H. Y. Chen, Y. G. Ke, Y. Chen, X. X. Zhou. High-resolution recognition of FOAM modes via an improved EfficientNet V2 based convolutional neural network. Front. Phys., 2024, 19(3): 32205
[38]
H. Luan, D. Lin, K. Li, W. Meng, M. Gu, X. Fang. 768-ary Laguerre‒Gaussian-mode shift keying free-space optical communication based on convolutional neural networks. Opt. Express, 2021, 29(13): 19807
[39]
A. Maurer, M. Pontil, B. Paredes. The benefit of multitask representation learning. J. Mach. Learn. Res., 2016, 17(1): 2853
[40]
Z. X. Mao, H. Y. Yu, M. Xia, S. Z. Pan, D. Wu, Y. L. Yin, Y. Xia, J. P. Yin. Broad bandwidth and highly efficient recognition of optical vortex modes achieved by the neural-network approach. Phys. Rev. Appl., 2020, 13(3): 034063
[41]
J. Davis, D. Cottrell, J. Campos, M. Yzuel, I. Moreno. Encoding amplitude information onto phase-only filters. Appl. Opt., 1999, 38(23): 5004
[42]
E. Bolduc, N. Bent, E. Santamato, E. Karimi, R. W. Boyd. Exact solution to simultaneous intensity and phase encryption with a single phase-only hologram. Opt. Lett., 2013, 38(18): 3546
[43]
K. Thung, C. Wee. A brief review on multi-task learning. Multimedia Tools Appl., 2018, 77(22): 29705
[44]
Y. Zhang, Q. Yang. An overview of multi-task learning. Natl. Sci. Rev., 2018, 5(1): 30
[45]
C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich. Going deeper with convolutions, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 2015, p. 1
[46]
S. Xie, R. B. Girshick, P. Dollár, Z. Tu, K. He. Aggregated residual transformations for deep neural networks, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017, p. 5987
[47]
R. Collobert, J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning, in: Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 2008, pp. 160–167
[48]
X. Cui, W. Zhang, U. Finkler, G. Saon, M. Picheny, D. Kung. Distributed training of deep neural network acoustic models for automatic speech recognition: A comparison of current training strategies. IEEE Signal Process. Mag., 2020, 37(3): 39
[49]
R. B. Girshick. Fast R-CNN, in: IEEE International Conference on Computer Vision (ICCV), 2015, p. 1440
[50]
B. L. Li, H. T. Luan, K. Y. Li, Q. Y. Chen, W. J. Meng, K. Cheng, M. Gu, X. Y. Fang. Orbital angular momentum optical communications enhanced by artificial intelligence. J. Opt., 2022, 24(9): 094003
[51]
Z. S. Wan, H. Wang, Q. Liu, X. Fu, Y. J. Shen. Ultra-degree-of-freedom structured light for ultracapacity information carriers. ACS Photonics, 2023, 10(7): 2149