Mid-wavelength infrared (MWIR) detection and long-wavelength infrared (LWIR) detection constitute the key technologies for space-based Earth observation and astronomical detection. The advanced ability of infrared (IR) detection technology to penetrate the atmosphere and identify camouflaged targets makes it excellent for space-based remote sensing. Thus, such detectors play an essential role in detecting and tracking low-temperature and far-distance moving targets. However, because space-based IR detection systems are deployed in diverse scenarios, the key parameters of IR technologies are subject to unique demands. We review the developments and features of MWIR and LWIR detectors with a particular focus on their applications in space-based detection. We conduct a comprehensive analysis of key performance indicators for IR detection systems, including the ground sampling distance (GSD), operation range, and noise equivalent temperature difference (NETD), among others, and their interconnections with IR detector parameters. Additionally, the influences of pixel pitch, focal plane array size, and operating temperature on space-based IR remote sensing are evaluated. The development requirements and technical challenges of MWIR and LWIR detection systems are also identified to achieve high-quality space-based observation platforms.
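To make one of these interconnections concrete: for a nadir-looking imager, the GSD follows the standard relation GSD = H·p/f, linking orbit altitude H, pixel pitch p, and focal length f. The sketch below illustrates only this textbook relation; the parameter values are hypothetical and not taken from the review.

```python
# Minimal sketch of the standard nadir ground-sampling-distance relation
# GSD = H * p / f; all parameter values below are hypothetical examples,
# not figures from the review.

def ground_sampling_distance(altitude_m: float, pixel_pitch_m: float,
                             focal_length_m: float) -> float:
    """Ground footprint of one detector pixel for a nadir-looking imager."""
    return altitude_m * pixel_pitch_m / focal_length_m

# e.g., a 500 km orbit, 15 um pixel pitch, 1.2 m focal length
gsd = ground_sampling_distance(500e3, 15e-6, 1.2)
print(f"GSD = {gsd:.2f} m")  # -> 6.25 m
```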
The new generation of artificial intelligence (AI) research initiated by Chinese scholars responds to the needs of a changing information environment and strives to advance traditional artificial intelligence (AI 1.0) to a new stage of AI 2.0. As one of the important components of AI, collective intelligence (CI 1.0), i.e., swarm intelligence, is developing toward the stage of CI 2.0 (crowd intelligence). Through in-depth analysis and informative argumentation, it is found that an incompatibility exists between CI 1.0 and CI 2.0. Therefore, CI 1.5, which is based on bio-collaborative behavioral mimicry, is introduced to build a bridge between the above two stages. CI 1.5 is the transition from CI 1.0 to CI 2.0 and contributes to the compatibility of the two stages. Then, a new interpretation of the meta-synthesis of wisdom proposed by Qian Xuesen is given. The meta-synthesis of wisdom, as an improvement on crowd intelligence, is an advanced stage of bionic intelligence, i.e., CI 3.0. It is pointed out that the dual-wheel drive of large language models and big data with deep uncertainty is an evolutionary path from CI 2.0 to CI 3.0, and some elaboration is made. As a result, we propose four development stages (CI 1.0, CI 1.5, CI 2.0, and CI 3.0), which form a complete framework for the development of CI. These stages are progressively improved and mutually compatible. Because cooperation plays a dominant role in the development stages of CI, three types of cooperation in CI are discussed: indirect regulatory cooperation in lower organisms, direct communicative cooperation in higher organisms, and shared-intention-based collaboration in humans. Division of labor is the main form of achieving cooperation; for this reason, this paper investigates the relationship between the complexity of behavior and the types of labor division. Finally, based on an overall understanding of the four development stages of CI, the future development directions and research issues of CI are explored.
Federated learning effectively addresses issues such as data privacy by enabling participating devices to collaboratively train global models. However, factors such as network topology and the computing power of devices can affect its training or communication process in complex network environments. The computing and network convergence (CNC) of sixth-generation (6G) networks, a new network architecture and paradigm with computing-measurable, perceptible, distributable, dispatchable, and manageable capabilities, can effectively support federated learning training and improve its communication efficiency. CNC achieves this by guiding the training of participating devices in federated learning based on business requirements, resource load, network conditions, and the computing power of the devices. In this paper, to improve the communication efficiency of federated learning in complex networks, we study communication efficiency optimization methods of federated learning for the CNC of 6G networks, which make decisions on the training process for different network conditions and computing power of participating devices. The simulations address the two architectures that exist for devices in federated learning, arranging devices to participate in training based on their computing power while optimizing communication efficiency in the process of transferring model parameters. The results show that the proposed methods cope well with complex network situations, effectively balance the delay distribution of participating devices for local training, improve the communication efficiency during the transfer of model parameters, and improve the resource utilization in the network.
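As a minimal sketch of the kind of scheduling decision described above, assuming a simple cycles-over-frequency delay model; the device values are hypothetical and this is not the paper's algorithm:

```python
# Hedged sketch of the scheduling idea: given each candidate device's
# available computing power (cycles/s) and a fixed per-round local workload,
# select devices whose local-training delays stay within a budget, balancing
# the delay distribution. Names and numbers are hypothetical.

def select_devices(compute_power, workload_cycles, delay_budget_s):
    """Return indices of devices able to finish local training in time."""
    selected = []
    for i, f in enumerate(compute_power):
        local_delay = workload_cycles / f        # simple compute-delay model
        if local_delay <= delay_budget_s:
            selected.append(i)
    return selected

powers = [2e9, 0.5e9, 1.5e9, 3e9]               # device CPU frequencies (Hz)
print(select_devices(powers, 4e9, 3.0))          # -> [0, 2, 3]
```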
Fueled by the explosive growth of ultra-low-latency and real-time applications with specific computing and network performance requirements, the computing force network (CFN) has become a hot research subject. The primary CFN challenge is to jointly leverage network resources and computing resources. Although recent advances in deep reinforcement learning (DRL) have brought significant improvements in network optimization, these methods still suffer from topology changes and fail to generalize to topologies not seen in training. This paper proposes a graph neural network (GNN) based DRL framework to accommodate network traffic and computing resources jointly and efficiently. By taking advantage of the generalization capability of GNNs, the proposed method can operate over variable topologies and achieve higher performance than other DRL methods.
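A minimal sketch of why message passing generalizes across topologies: the layer's weights are shared over all nodes, so the same trained parameters apply to any adjacency matrix. This is a generic GCN-style layer for illustration, not the paper's architecture.

```python
# Minimal message-passing sketch illustrating why GNNs generalize across
# topologies: the same weight matrix W is shared by every node, so the layer
# is defined for any adjacency matrix A. This is a generic GCN-style layer,
# not the paper's exact architecture.
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))       # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path
H = rng.normal(size=(3, 4))                      # node features
W = rng.normal(size=(4, 8))                      # shared weights
print(gcn_layer(A, H, W).shape)                  # -> (3, 8); any topology works
```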
With the booming development of fifth-generation network technology and the Internet of Things, the number of end-user devices (EDs) and diverse applications is surging, resulting in massive amounts of data generated at the edge of networks. To process these data efficiently, the innovative mobile edge computing (MEC) framework has emerged to guarantee low latency and enable efficient computing close to the source of user traffic. Recently, federated learning (FL) has demonstrated its empirical success in edge computing due to its privacy-preserving advantages. Thus, it becomes a promising solution for analyzing and processing distributed data on EDs in various machine learning tasks, which are the major workloads in MEC. Unfortunately, EDs are typically powered by batteries with limited capacity, which brings challenges when performing energy-intensive FL tasks. To address these challenges, many strategies have been proposed to save energy in FL. Considering the absence of a survey that thoroughly summarizes and classifies these strategies, in this paper, we provide a comprehensive survey of recent advances in energy-efficient strategies for FL in MEC. Specifically, we first introduce the system model and the energy consumption models in FL, in terms of computation and communication. Then we analyze the challenges of improving energy efficiency and summarize the energy-efficient strategies from three perspectives: learning-based, resource allocation, and client selection. We conduct a detailed analysis of these strategies, comparing their advantages and disadvantages. Additionally, we visually illustrate the impact of these strategies on the performance of FL by showcasing experimental results. Finally, several potential future research directions for energy-efficient FL are discussed.
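For context, the computation and communication energy models used in much of this literature take the following standard forms; the sketch below reproduces them with hypothetical parameter values.

```python
# Representative energy models commonly used in the FL-over-MEC literature
# (a sketch of the standard forms, with hypothetical parameter values):
#   computation: E_cmp = kappa * C * D * f^2   (kappa: effective capacitance
#     coefficient, C: cycles per sample, D: samples, f: CPU frequency)
#   communication: E_com = p_tx * S / (B * log2(1 + SNR))  (Shannon-rate upload)
import math

def computation_energy(kappa, cycles_per_sample, samples, cpu_freq):
    return kappa * cycles_per_sample * samples * cpu_freq**2

def communication_energy(tx_power_w, model_bits, bandwidth_hz, snr_linear):
    rate = bandwidth_hz * math.log2(1.0 + snr_linear)   # achievable bit rate
    return tx_power_w * model_bits / rate               # power * airtime

E_cmp = computation_energy(1e-28, 2e4, 5e3, 1e9)        # ~0.01 J per round
E_com = communication_energy(0.2, 1e6, 1e6, 100.0)      # ~0.03 J per upload
print(E_cmp, E_com)
```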
Federated learning (FL), a cutting-edge distributed machine learning training paradigm, aims to generate a global model by collaborating on the training of client models without revealing local private data. The co-occurrence of non-independent and identically distributed (non-IID) data and long-tailed distributions in FL is a challenge that substantially degrades the performance of the aggregated model. In this paper, we present a corresponding solution called federated dual-decoupling via model and logit calibration (FedDDC) for non-IID and long-tailed distributions. The method is characterized by three aspects. First, we decouple the global model into the feature extractor and the classifier to fine-tune the components affected by the joint problem. For the biased feature extractor, we propose a client confidence re-weighting scheme to assist calibration, which assigns optimal weights to each client. For the biased classifier, we apply a classifier re-balancing method for fine-tuning. Then, we calibrate and integrate the client confidence re-weighted logits with the re-balanced logits to obtain unbiased logits. Finally, we are the first to use decoupled knowledge distillation on the joint problem, enhancing the accuracy of the global model by distilling the knowledge of the unbiased model. Numerous experiments demonstrate that our approach outperforms state-of-the-art methods on non-IID and long-tailed data in FL.
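One illustrative reading of the logit-calibration step is a confidence-weighted combination of client logits fused with the re-balanced classifier logits, as sketched below; this is a schematic interpretation, not FedDDC's exact equations, and all weights are hypothetical.

```python
# Hedged sketch of the dual logit-calibration idea: client logits are combined
# with per-client confidence weights, then fused with re-balanced classifier
# logits. An illustrative reading of the pipeline, not FedDDC's published
# equations; all weights here are hypothetical.
import numpy as np

def fuse_logits(client_logits, client_weights, rebalanced_logits, alpha=0.5):
    w = np.asarray(client_weights, dtype=float)
    w = w / w.sum()                                   # normalize confidences
    weighted = np.tensordot(w, np.asarray(client_logits), axes=1)
    return alpha * weighted + (1.0 - alpha) * rebalanced_logits

client_logits = [np.array([2.0, 0.1, -1.0]),          # 3-class logits from
                 np.array([1.5, 0.3, -0.5])]          # two clients
fused = fuse_logits(client_logits, [0.8, 0.2], np.array([0.5, 0.5, 0.5]))
print(fused)
```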
The industrial Internet, motivated by the deep integration of new-generation information and communication technology (ICT) and advanced manufacturing technology, will open up the production chain, value chain, and industry chain by establishing complete interconnections between humans, machines, and things. It will also help establish novel manufacturing and service modes, where personalized and customized production for differentiated services is a typical paradigm of future intelligent manufacturing. Thus, there is an urgent need to break through the existing chimney-like service mode imposed by the hierarchical heterogeneous network architecture and to establish a transparent channel for manufacturing and services based on a flat network architecture. Starting from the basic concepts of process manufacturing and discrete manufacturing, we first analyze the basic requirements of typical manufacturing tasks. Then, with an overview of the development of the industrial Internet, we systematically compare current networking technologies and further analyze the problems of the present industrial Internet. On this basis, we propose to establish a novel “thin waist” that integrates sensing, communication, computing, and control for the future industrial Internet. Furthermore, we provide an in-depth analysis and discussion of the key challenges and future research issues regarding the multi-dimensional collaborative sensing of task–resource, end-to-end deterministic communication over heterogeneous networks, and virtual computing and operation control of the industrial Internet.
How to collaboratively offload tasks between user devices, edge networks (ENs), and cloud data centers is an interesting and challenging research topic. In this paper, we investigate the offloading decision, analytical modeling, and system parameter optimization problem in a collaborative cloud–edge–device environment, aiming to trade off different performance measures. According to their differentiated delay requirements, we classify tasks into delay-sensitive and delay-tolerant tasks. To meet the delay requirements of delay-sensitive tasks while processing as many delay-tolerant tasks as possible, we propose a cloud–edge–device collaborative task offloading scheme, in which delay-sensitive and delay-tolerant tasks follow an access threshold policy and a loss policy, respectively. We establish a four-dimensional continuous-time Markov chain as the system model. Using the Gauss–Seidel method, we derive the stationary probability distribution of the system model. Accordingly, we present the blocking rate of delay-sensitive tasks and the average delay of the two types of tasks. Numerical experiments are conducted to evaluate the system performance, and simulations validate the effectiveness of the proposed task offloading scheme. Finally, we optimize the access threshold in the EN buffer to obtain the minimum system cost under different proportions of delay-sensitive tasks.
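To make the solution step concrete: the stationary distribution π of a CTMC with generator matrix Q satisfies πQ = 0 with Σᵢπᵢ = 1, and the Gauss–Seidel method sweeps through the components iteratively. The sketch below uses a toy three-state generator, not the paper's four-dimensional chain.

```python
# Sketch of the Gauss-Seidel iteration for the stationary distribution of a
# CTMC: pi satisfies pi @ Q = 0 with sum(pi) = 1, so each sweep updates
# pi[i] = -sum_{j != i} pi[j] * Q[j, i] / Q[i, i], then renormalizes.
# The toy 3-state generator below is illustrative, not the paper's 4D chain.
import numpy as np

def gauss_seidel_stationary(Q, iters=500):
    n = Q.shape[0]
    pi = np.full(n, 1.0 / n)                         # uniform initial guess
    for _ in range(iters):
        for i in range(n):
            acc = sum(pi[j] * Q[j, i] for j in range(n) if j != i)
            pi[i] = -acc / Q[i, i]
        pi /= pi.sum()                               # enforce sum(pi) = 1
    return pi

Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  2.0, -4.0]])                   # rows sum to zero
pi = gauss_seidel_stationary(Q)
print(pi, pi @ Q)                                    # pi @ Q ~ 0
```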
The integration of the industrial Internet, cloud computing, and big data technology is changing the business and management modes of the industry chain. However, the industry chain is characterized by a wide range of fields, complex environments, and many influencing factors, which makes the efficient integration and leveraging of industrial big data challenging. To integrate the physical and virtual spaces of the current industry chain, we propose an industry chain digital twin (DT) system framework for the industrial Internet. In addition, an industry chain information model based on a knowledge graph (KG) is proposed to integrate complex and heterogeneous industry chain data and extract industrial knowledge. First, the ontology of the industry chain is established, and an entity alignment method based on scientific and technological achievements is proposed. Second, a bidirectional encoder representations from Transformers (BERT) based multi-head selection model is proposed for joint entity–relation extraction of industry chain information. Third, a relation completion model based on a relational graph convolutional network (R-GCN) and a graph sample and aggregate network (GraphSAGE) is proposed, which considers both the semantic information and the graph structure information of the KG. Experimental results show that the performances of the proposed joint entity–relation extraction model and relation completion model are significantly better than those of the baselines. Finally, an industry chain information model is established based on the data of 18 industry chains in the field of basic machinery, which proves the feasibility of the proposed method.
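For illustration of the completion step: graph encoders such as R-GCN are commonly paired with a DistMult-style decoder that scores candidate triples. The sketch below shows only that generic decoder and is an assumption, not the paper's exact completion model.

```python
# Generic sketch of KG relation-completion scoring: graph encoders such as
# R-GCN/GraphSAGE produce entity embeddings, and a DistMult-style decoder
# scores a candidate triple (h, r, t) as sum(h * r * t). This shows the
# decoder step only, not the paper's exact model.
import numpy as np

def distmult_score(h_emb, rel_emb, t_emb):
    """Higher score = more plausible (head, relation, tail) triple."""
    return float(np.sum(h_emb * rel_emb * t_emb))

rng = np.random.default_rng(0)
entity_emb = rng.normal(size=(5, 8))     # e.g., produced by a graph encoder
relation_emb = rng.normal(size=(2, 8))
print(distmult_score(entity_emb[0], relation_emb[1], entity_emb[3]))
```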
This paper proposes a programmable logic element (PLE) based on Sense-Switch pFLASH technology. By programming the Sense-Switch pFLASH, all three-input look-up table (LUT3) functions, partial four-input look-up table (LUT4) functions, latch functions, and D flip-flop (DFF) functions with enable and reset can be realized. Because the PLE uses a choice of operational logic (COOL) approach for the operation of logic functions, it allows any logic circuit to be implemented at any ratio of combinatorial logic to registers. This intrinsic property makes it close to the basic application-specific integrated circuit (ASIC) cell in terms of fine granularity, thus allowing ASIC-like cell-based mappers to apply their full optimization potential. Measurements of the Sense-Switch pFLASH and PLE circuits show that the “on” state driving current of the Sense-Switch pFLASH is about 245.52 μA and the “off” state leakage current is about 0.1 pA. The programmable functions of the PLE work correctly. The delay of the typical combinatorial logic operation AND3 is 0.69 ns, and the delay of the sequential logic operation DFF is 0.65 ns, both of which meet the design specifications.
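To illustrate the LUT3 concept underlying the PLE (this sketches the generic mechanism only, not the Sense-Switch pFLASH implementation): an 8-bit configuration word stores the truth table, and the three inputs select one bit of it.

```python
# Generic illustration of how a LUT3 realizes any 3-input Boolean function:
# the 8-bit configuration word stores the truth table, and the inputs select
# one bit. This sketches the concept only, not the Sense-Switch pFLASH device.

def lut3(config: int, a: int, b: int, c: int) -> int:
    """Evaluate a 3-input LUT whose truth table is the 8-bit 'config'."""
    index = (c << 2) | (b << 1) | a       # inputs form the table address
    return (config >> index) & 1

AND3 = 0b10000000                          # output 1 only when a = b = c = 1
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert lut3(AND3, a, b, c) == (a & b & c)
print("LUT3 configured as AND3 verified")
```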
Camouflaged targets are nonsalient targets whose foreground blends strongly into the background and which carry minimal feature information, making target recognition extremely difficult. Most detection algorithms for camouflaged targets use only the target’s single-band information, resulting in low detection accuracy and a high missed detection rate. We present a multimodal image fusion camouflaged target detection technique (MIF-YOLOv5) in this paper. First, we provide a multimodal image input to achieve pixel-level fusion of the camouflaged target’s optical and infrared images to enrich the effective feature information of the camouflaged target. Second, a loss function is created, and the K-Means++ clustering technique is used to optimize the target anchor boxes for the dataset to increase the accuracy and robustness of camouflaged-personnel detection. Finally, a comprehensive detection index of camouflaged targets is proposed to compare the overall effectiveness of various approaches. More importantly, we create a multispectral camouflaged target dataset to test the proposed technique. Experimental results show that the proposed method has the best comprehensive detection performance, with a detection accuracy of 96.5%, a recognition probability of 92.5%, a parameter number increase of 1×10^4, a theoretical calculation amount increase of 0.03 GFLOPs, and a comprehensive detection index of 0.85. The advantage of this method in terms of detection accuracy is also apparent in performance comparisons with other target detection algorithms.
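The anchor-optimization step follows the standard YOLO-style procedure of clustering ground-truth box shapes; below is a minimal sketch with hypothetical boxes, where scikit-learn's k-means++ initialization stands in for the paper's exact clustering setup.

```python
# Hedged sketch of the anchor-optimization step: cluster ground-truth box
# (width, height) pairs with k-means++ initialization so the anchor set fits
# the dataset. Illustrative of the standard YOLO-style procedure; the boxes
# below are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

boxes_wh = np.array([[12, 30], [14, 28], [40, 90], [38, 95],
                     [90, 45], [100, 50]], dtype=float)   # (w, h) in pixels

kmeans = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0)
kmeans.fit(boxes_wh)
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(1))]
print(np.round(anchors))   # anchor (w, h) pairs sorted by area
```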
Phone number recycling (PNR) refers to the event wherein a mobile operator collects a disconnected number and reassigns it to a new owner. It poses a threat to the reliability of existing authentication solutions for e-commerce platforms. Specifically, the new owner of a reassigned number can access the application account with which the number is associated and may perform fraudulent activities. Existing solutions that employ a reassigned number database from mobile operators are costly for e-commerce platforms with large-scale user bases. Thus, alternative solutions that depend only on the information of the applications are imperative. In this work, we study the problem of detecting accounts that have been compromised owing to the reassignment of phone numbers. Our analysis of Meituan’s real-world dataset shows that compromised accounts have unique statistical features and temporal patterns. Based on these observations, we propose a novel model called the temporal pattern and statistical feature fusion model (TSF) to tackle the problem, which integrates a temporal pattern encoder and a statistical feature encoder to capture behavioral evolutionary interactions and significant operation features. Extensive experiments on the Meituan and IEEE-CIS datasets show that TSF significantly outperforms the baselines, demonstrating its effectiveness in detecting accounts compromised by reassigned numbers.
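Below is a schematic sketch of the two-encoder fusion idea: a recurrent encoder for the operation sequence plus an MLP for statistical features. The layers and dimensions are illustrative assumptions, not TSF's published architecture.

```python
# Illustrative PyTorch sketch of the two-encoder fusion idea: a recurrent
# encoder for the operation sequence plus an MLP for statistical features,
# concatenated for binary classification. A schematic reading of the
# architecture, not TSF's exact layers; all dimensions are hypothetical.
import torch
import torch.nn as nn

class TwoEncoderFusion(nn.Module):
    def __init__(self, event_dim=16, stat_dim=10, hidden=32):
        super().__init__()
        self.temporal = nn.GRU(event_dim, hidden, batch_first=True)
        self.stats = nn.Sequential(nn.Linear(stat_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 2)       # compromised vs. normal

    def forward(self, events, stats):
        _, h_n = self.temporal(events)             # final hidden state
        fused = torch.cat([h_n[-1], self.stats(stats)], dim=-1)
        return self.head(fused)

model = TwoEncoderFusion()
logits = model(torch.randn(4, 20, 16), torch.randn(4, 10))
print(logits.shape)                                # -> torch.Size([4, 2])
```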
A practical fixed-time adaptive fuzzy control strategy is investigated for uncertain nonlinear systems with time-varying asymmetric constraints and input quantization. To overcome the difficulties of designing controllers under state constraints, a unified barrier function approach is employed to construct a coordinate transformation that maps the original constrained system to an equivalent unconstrained one, thus relaxing the time-varying asymmetric constraints on the system states and avoiding the feasibility check condition typically required in traditional barrier Lyapunov function based control approaches. Meanwhile, the “explosion of complexity” problem in the traditional backstepping approach, arising from the repeated differentiation of virtual controllers, is solved by using the command filter method. It is verified via the fixed-time Lyapunov stability criterion that the system output can track a desired signal within a small error range in a predetermined time and that all system states remain within the constraint range. Finally, two simulation examples are offered to demonstrate the effectiveness of the proposed strategy.
At the Annual International Cryptology Conference in 2019, Gohr introduced a deep learning based cryptanalysis technique applicable to reduced-round lightweight block ciphers with short blocks, such as SPECK32/64. One significant challenge left unstudied by Gohr’s work is the implementation of key recovery attacks on large-state block ciphers based on deep learning. This paper presents an improved deep learning based framework for recovering keys of large-state block ciphers. First, we propose a key bit sensitivity test (KBST) based on deep learning to divide the key space objectively. Second, we propose a new method for constructing neural distinguisher combinations to improve the deep learning based key recovery framework for large-state block ciphers, and we demonstrate its rationality and effectiveness from the perspective of cryptanalysis. Under the improved key recovery framework, we train an efficient neural distinguisher combination for each large-state member of SIMON and SPECK and carry out practical key recovery attacks on these members. Furthermore, our 13-round SIMON64 attack is the most effective approach for practical key recovery to date. Notably, this is the first work to propose deep learning based practical key recovery attacks on 18-round SIMON128, 19-round SIMON128, 14-round SIMON96, and 14-round SIMON64. Additionally, we improve the outcomes of the practical key recovery attacks on the large-state members of SPECK, amplifying the success rate of key recovery in comparison with existing results.
Orthogonal time–frequency space (OTFS) is a modulation technique proposed in recent years for high-Doppler wireless scenarios. To solve the parameter estimation problem of the OTFS-integrated radar and communications system, we propose a parameter estimation method based on sparse reconstruction preprocessing to reduce the computational effort of the traditional weighted subspace fitting (WSF) algorithm. First, an OTFS-integrated echo signal model is constructed. Then, the echo signal is transformed to the time domain to separate the target angle from the range, and the range and angle of the detected target are coarsely estimated by using a sparse reconstruction algorithm. Finally, the WSF algorithm is used to perform a refined search centered on the coarse estimate to obtain an accurate estimate. Simulations demonstrate the effectiveness and superiority of the proposed parameter estimation algorithm.
A low-profile dual-broadband dual-circularly-polarized (dual-CP) reflectarray (RA) is proposed and demonstrated, supporting independent beamforming for right-/left-handed CP waves at both K-band and Ka-band. Such functionality is achieved by incorporating multi-layered phase shifting elements individually operating in the K- and Ka-bands, which are interleaved in a shared aperture, resulting in a cell thickness of only about 0.1λL. By rotating the designed K- and Ka-band elements around their own geometrical centers, the dual-CP waves in each band can be modulated separately. To reduce the overall profile, planar broadband K-/Ka-band dual-CP feeds are designed based on magnetoelectric dipoles and multi-branch hybrid couplers. The planar feeds achieve bandwidths of about 32% and 26% at the K- and Ka-bands, respectively, with reflection magnitudes below −13 dB, an axial ratio smaller than 2 dB, and a gain variation of less than 1 dB. A proof-of-concept dual-band dual-CP RA integrated with the planar feeds is fabricated and characterized; it is capable of generating asymmetrically distributed dual-band dual-CP beams. The measured peak gain values of the beams are around 24.3 and 27.3 dBic, with joint gain variation <1 dB and axial ratio <2 dB bandwidths wider than 20.6% and 14.6% at the lower and higher bands, respectively. The demonstrated dual-broadband dual-CP RA, with four degrees of freedom in beamforming, is a promising candidate for space and satellite communications.
Adversarial training with online-generated adversarial examples has achieved promising performance in defending against adversarial attacks and improving the robustness of convolutional neural network models. However, most existing adversarial training methods are dedicated to finding strong adversarial examples that force the model to learn the adversarial data distribution, which inevitably imposes a large computational overhead and degrades the generalization performance on clean data. In this paper, we show that progressively enhancing the adversarial strength of adversarial examples across training epochs can effectively improve model robustness, and that appropriate model shifting can preserve the generalization performance of models at negligible computational cost. To this end, we propose a successive perturbation generation scheme for adversarial training (SPGAT), which progressively strengthens adversarial examples by adding perturbations to the adversarial examples transferred from the previous epoch, and shifts models across epochs to improve the efficiency of adversarial training. The proposed SPGAT is both efficient and effective; e.g., our method requires 900 min of computation versus the 4100 min of standard adversarial training, while boosting performance by more than 7% and 3% in terms of adversarial accuracy and clean accuracy, respectively. We extensively evaluate SPGAT on various datasets, including small-scale MNIST, medium-scale CIFAR-10, and large-scale CIFAR-100. The experimental results show that our method is more efficient while performing favorably against state-of-the-art methods.
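Below is a minimal sketch of the successive-perturbation idea, assuming an FGSM-style sign step and an ℓ∞ ball; SPGAT's exact update and model-shifting schedule are not reproduced here.

```python
# Hedged sketch of the successive-perturbation idea: instead of regenerating
# adversarial examples from scratch each epoch, keep each sample's
# perturbation and strengthen it with one FGSM-style step per epoch. A
# schematic reading of the scheme, not SPGAT's exact algorithm.
import torch

def strengthen_perturbation(model, loss_fn, x, y, delta, step, eps):
    """One epoch's update of the stored perturbations `delta` (shape of x).
    Model parameter gradients should be zeroed separately by the caller."""
    delta = delta.clone().requires_grad_(True)
    loss = loss_fn(model(x + delta), y)
    loss.backward()
    with torch.no_grad():
        delta = delta + step * delta.grad.sign()     # ascend the loss
        delta = delta.clamp(-eps, eps)               # stay in the eps-ball
    return delta.detach()                            # carried to next epoch
```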
The synthetic minority oversampling technique (SMOTE) is a popular algorithm for reducing the impact of class imbalance when building classifiers, and it has received numerous enhancements over the past 20 years. SMOTE and its variants synthesize minority-class sample points in the original sample space to alleviate the adverse effects of class imbalance. This approach works well in many cases, but problems arise when synthetic sample points are generated in overlapping areas between different classes, which further complicates classifier training. To address this issue, this paper proposes a novel generalization-oriented, rather than imputation-oriented, minority-class sample point generation algorithm, named overlapping minimization SMOTE (OM-SMOTE). The algorithm is designed specifically for binary imbalanced classification problems. OM-SMOTE first maps the original sample points into a new sample space by balancing sample encoding and classifier generalization. Then, it employs a set of sophisticated minority-class sample point imputation rules to generate synthetic sample points that lie as far as possible from the overlapping areas between classes. Extensive experiments have been conducted on 32 imbalanced datasets to validate the effectiveness of OM-SMOTE. The results show that using OM-SMOTE to generate synthetic minority-class sample points leads to better classifier training performance for the naive Bayes, support vector machine, decision tree, and logistic regression classifiers than 11 state-of-the-art SMOTE-based imputation algorithms. This demonstrates that OM-SMOTE is a viable approach for supporting the training of high-quality classifiers for imbalanced classification. The implementation of OM-SMOTE is shared publicly on GitHub.
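For reference, the baseline SMOTE interpolation that OM-SMOTE builds on is sketched below; OM-SMOTE's own contributions (the learned sample space and overlap-avoiding imputation rules) are not reproduced.

```python
# Baseline SMOTE interpolation that OM-SMOTE builds on: a synthetic point is
# drawn on the segment between a minority sample and one of its k nearest
# minority neighbors. OM-SMOTE's contribution (mapping to a new space and
# steering points away from class overlap) is not reproduced here.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_sample(X_min, k=5, rng=np.random.default_rng(0)):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    i = rng.integers(len(X_min))
    _, idx = nn.kneighbors(X_min[i:i + 1])            # idx[0][0] is i itself
    j = rng.choice(idx[0][1:])                        # random true neighbor
    lam = rng.random()
    return X_min[i] + lam * (X_min[j] - X_min[i])     # interpolated point

X_minority = np.random.default_rng(1).normal(size=(30, 2))
print(smote_sample(X_minority))
```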
Due to factors such as motion blur, video defocus, and occlusion, multi-frame human pose estimation is a challenging task. Exploiting the temporal consistency between consecutive frames is an efficient approach to addressing this issue. Currently, most methods explore temporal consistency through refinements of the final heatmaps. The heatmaps contain the semantic information of keypoints and can improve detection quality to a certain extent. However, heatmaps are generated from features, and feature-level refinement is rarely considered. In this paper, we propose a human pose estimation framework with refinements at both the feature and semantics levels. We align auxiliary features with the features of the current frame to reduce the loss caused by different feature distributions. An attention mechanism is then used to fuse the auxiliary features with the current features. At the semantics level, we use the difference information between adjacent heatmaps as auxiliary features to refine the current heatmaps. The method is validated on the large-scale benchmark datasets PoseTrack2017 and PoseTrack2018, and the results demonstrate the effectiveness of our method.
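A schematic sketch of the semantics-level refinement described above, in which heatmap differences between adjacent frames act as auxiliary features for a residual correction; the module below is an illustrative assumption, not the paper's exact design.

```python
# Hedged sketch of the semantics-level refinement: differences between the
# heatmaps of adjacent frames serve as auxiliary features that a small
# convolution turns into a residual correction for the current heatmaps.
# A schematic reading only, not the paper's exact refinement module.
import torch
import torch.nn as nn

class HeatmapRefiner(nn.Module):
    def __init__(self, joints=17):
        super().__init__()
        # input: current heatmaps + forward/backward heatmap differences
        self.fuse = nn.Conv2d(3 * joints, joints, kernel_size=3, padding=1)

    def forward(self, h_prev, h_curr, h_next):
        diffs = torch.cat([h_curr, h_curr - h_prev, h_next - h_curr], dim=1)
        return h_curr + self.fuse(diffs)             # residual refinement

refiner = HeatmapRefiner()
h = [torch.randn(2, 17, 64, 48) for _ in range(3)]
print(refiner(*h).shape)                             # -> (2, 17, 64, 48)
```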
With the development of satellite miniaturization and remote sensing, the establishment of microsatellite constellations is an inevitable trend. Because of the limited size, weight, and power of microsatellites, spaceborne storage systems that combine excellent scalability, performance, and reliability remain one of their technical bottlenecks. Based on commercial off-the-shelf field-programmable gate array and memory devices, a spaceborne advanced storage system (SASS) is proposed in this paper. The system provides a multiple-input multiple-output cache technique with dynamic programming and queue scheduling, as well as a high-speed, high-reliability NAND flash controller for multiple microsatellite payload data streams. Experimental results show that SASS has outstanding scalability, with a maximum write rate of 2429 Mb/s, and preserves at least 78.53% of its performance when a single NAND flash fails. The scheduling technique effectively shortens the data scheduling time, and the data remapping method of the NAND flash controller reduces retention errors by at least 50.73% and program disturbance errors by at least 37.80%.
Decarbonization has become an emerging trend in the power system arena. However, the increasing number of photovoltaic units distributed into a distribution network may cause voltage issues, posing challenges for voltage regulation across a large-scale power grid. Reinforcement learning based intelligent control of smart inverters and other smart building energy management (EM) systems can be leveraged to alleviate these issues. To achieve the best EM strategy for building microgrids in a power system, this paper presents two large-scale multi-agent strategy evaluation methods that preserve building occupants’ comfort while pursuing system-level objectives. The EM problem is formulated as a general-sum game to optimize the benefits at both the system and building levels. The α-rank algorithm can solve the general-sum game with theoretical ranking guarantees, but it is limited by its interaction complexity and is hardly applicable to practical power systems. A new evaluation algorithm (TcEval) is proposed that practically scales the α-rank algorithm through a tensor complement to reduce the interaction complexity. Then, considering the noise prevalent in practice, a noise processing model with domain knowledge is built to calculate the strategy payoffs, and the TcEval-AS algorithm is proposed for the noisy case. Both evaluation algorithms developed in this paper greatly reduce the interaction complexity compared with existing approaches, including ResponseGraphUCB (RG-UCB) and αInformationGain (α-IG). Finally, the effectiveness of the proposed algorithms is verified in an EM case with realistic data.
Ultrafast fiber lasers are indispensable in the field of ultrafast optics, and their continuous performance advancements are driving the progress of this exciting discipline. Micro/nanofibers (MNFs) possess unique properties, such as a large fractional evanescent field, flexible and controllable dispersion, and high nonlinearity, making them highly valuable for generating ultrashort pulses. In particular, for mode-locking and for dispersion and nonlinearity management, MNFs provide an excellent platform for investigating intriguing nonlinear dynamics and related phenomena, thereby promoting the advancement of ultrafast fiber lasers. In this paper, we introduce the mode evolution and characteristics of MNFs, followed by a comprehensive review of recent advances in the use of MNFs for ultrafast optics applications, including evanescent field modulation and control, dispersion and nonlinearity management techniques, and the exploration of nonlinear dynamical phenomena. Finally, we discuss the application prospects of MNFs in the realm of ultrafast optics.
Ground elevation estimation is vital for numerous applications in autonomous vehicles and intelligent robotics, including three-dimensional object detection, navigable space detection, point cloud matching for localization, and registration for mapping. However, most works regard the ground as a plane without height information, which introduces inaccuracies in these applications. In this work, we propose GeeNet, a novel end-to-end, lightweight method that completes the ground in nearly real time and simultaneously estimates the ground elevation in a grid-based representation. GeeNet leverages a mixture of two- and three-dimensional convolutions to preserve a lightweight architecture while regressing ground elevation information for each cell of the grid. For the first time, GeeNet fulfills ground elevation estimation through semantic scene completion. We use the SemanticKITTI and SemanticPOSS datasets to validate the proposed GeeNet, demonstrating its qualitative and quantitative performance on ground elevation estimation and semantic scene completion of point clouds. Moreover, the cross-dataset generalization capability of GeeNet is experimentally proven. GeeNet achieves state-of-the-art performance in terms of point cloud completion and ground elevation estimation, with a runtime of 0.88 ms.
Dy3+-doped fluoride fiber lasers have important applications in environment monitoring, real-time sensing, and polymer processing. At present, achieving a high-efficiency, high-power Dy3+-doped fluoride fiber laser in the mid-infrared (mid-IR) region beyond 3 μm is a scientific and technological frontier. Typically, Dy3+-doped fluoride fiber lasers use a unidirectional pumping method, which suffers from a high thermal loading density on the fiber tips, thus limiting power scalability. In this study, a bi-directional in-band pumping scheme is investigated numerically, based on rate equations and propagation equations, to address the limitations on output power scaling and to enhance the efficiency of the Dy3+-doped fluoride fiber laser at 3.2 μm. Detailed simulation results reveal that the optical-to-optical efficiency of the bi-directionally in-band pumped Dy3+-doped fluoride fiber laser can reach 75.1%, approaching the Stokes limit of 87.3%. The potential for further improving the efficiency of the Dy3+-doped fluoride fiber laser is also discussed. In addition to its high efficiency, the bi-directional pumping scheme offers the intrinsic advantage over unidirectional pumping of mitigating the thermal load on the fiber tips. As a result, it is expected to significantly scale the power output of Dy3+-doped fluoride fiber lasers in the mid-IR regime.
In this study, we present the design and realization of a tunable dual-band wireless power transfer (TDB-WPT) coupled-resonator system. The frequency response of the tunable band can be controlled using a surface-mounted varactor. The transmitter (Tx) and receiver (Rx) circuits are symmetric: the top layer contains a 50 Ω feed line, and two identical half-ring defected ground structures (HR-DGSs) loaded with a varactor diode are etched on the bottom layer. The proposed system overcomes the restriction of conventional WPT systems to single-band operation at a fixed operating frequency. The effects of geometry, orientation, relative distance, and misalignment on the coupling coefficients were studied. To validate the simulation results, the proposed TDB-WPT system was fabricated and tested. The system occupies a footprint of 40 mm×40 mm. It can deliver power to the receiver with an average coupling efficiency of 98% in the tuned band from 817 to 1018 MHz and an efficiency of 95% in a fixed band at 1.6 GHz, over a transmission distance of 22 mm. The measured results agree well with those of an equivalent model and the simulations.
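For context, the maximum link efficiency of a two-coil magnetically coupled resonant WPT system under optimal loading follows the textbook relation η = U²/(1+√(1+U²))² with figure of merit U = k√(Q₁Q₂); the sketch below evaluates this relation with hypothetical values, not the measured prototype's parameters.

```python
# Standard maximum link-efficiency relation for a two-coil magnetically
# coupled resonant WPT link under optimal loading:
#   eta = U^2 / (1 + sqrt(1 + U^2))^2, with figure of merit U = k*sqrt(Q1*Q2).
# A textbook relation included for context; the values below are hypothetical.
import math

def wpt_max_efficiency(k: float, q1: float, q2: float) -> float:
    u = k * math.sqrt(q1 * q2)                 # link figure of merit
    return u**2 / (1.0 + math.sqrt(1.0 + u**2))**2

print(f"{wpt_max_efficiency(0.15, 200, 200):.3f}")   # e.g., k=0.15, Q=200
```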
This article investigates the event-triggered adaptive neural network (NN) tracking control problem under deferred asymmetric time-varying (DATV) output constraints. To deal with the DATV output constraints, an asymmetric time-varying barrier Lyapunov function (ATBLF) is first built to simplify the stability analysis and controller construction. Second, an event-triggered adaptive NN tracking controller is constructed by incorporating an error-shifting function, which ensures that the tracking error converges to an arbitrarily small neighborhood of the origin within a predetermined settling time, consequently optimizing the utilization of network resources. It is theoretically proven that all signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB), even when the initial value lies outside the constraint boundary. Finally, a single-link robotic arm (SLRA) application example is employed to verify the viability of the proposed control algorithm.