
Most accessed

  • Review
    A review on the developments and space applications of mid- and long-wavelength infrared detection technologies
    Yuying WANG, Jindong LI, Hezhi SUN, Xiang LI
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(8): 1031-1056. https://doi.org/10.1631/FITEE.2300218

    Mid-wavelength infrared (MWIR) detection and long-wavelength infrared (LWIR) detection constitute the key technologies for space-based Earth observation and astronomical detection. The advanced ability of infrared (IR) detection technology to penetrate the atmosphere and identify camouflaged targets makes it excellent for space-based remote sensing. Thus, such detectors play an essential role in detecting and tracking low-temperature, distant moving targets. However, due to the diverse scenarios in which space-based IR detection systems are built, the key parameters of IR technologies are subject to unique demands. We review the developments and features of MWIR and LWIR detectors with a particular focus on their applications in space-based detection. We conduct a comprehensive analysis of key performance indicators for IR detection systems, including the ground sampling distance (GSD), operation range, and noise equivalent temperature difference (NETD), among others, and their interconnections with IR detector parameters. Additionally, the influences of pixel distance, focal plane array size, and operation temperature on space-based IR remote sensing are evaluated. The development requirements and technical challenges of MWIR and LWIR detection systems are also identified to achieve high-quality space-based observation platforms.
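
    For orientation, the ground sampling distance mentioned above is often approximated by the first-order pinhole relation below (a textbook relation quoted for context, not necessarily the exact formulation used in the review), where p is the pixel pitch, H the orbital altitude, and f the focal length:

      \mathrm{GSD} \approx \frac{p\,H}{f}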

  • Perspective
    Computing-aware network (CAN): a systematic design of computing and network convergence
    Xiaoyun WANG, Xiaodong DUAN, Kehan YAO, Tao SUN, Peng LIU, Hongwei YANG, Zhiqiang LI
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(5): 633-644. https://doi.org/10.1631/FITEE.2400098
  • Perspective
    Four development stages of collective intelligence
    Renbin XIAO
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(7): 903-916. https://doi.org/10.1631/FITEE.2300459

    The new generation of artificial intelligence (AI) research initiated by Chinese scholars responds to the changing information environment and strives to advance traditional artificial intelligence (AI 1.0) to the new stage of AI 2.0. As one of the important components of AI, collective intelligence (CI 1.0), i.e., swarm intelligence, is developing to the stage of CI 2.0 (crowd intelligence). Through in-depth analysis and informative argumentation, it is found that an incompatibility exists between CI 1.0 and CI 2.0. Therefore, CI 1.5, based on bio-collaborative behavioral mimicry, is introduced to build a bridge between these two stages. CI 1.5 is the transition from CI 1.0 to CI 2.0 and contributes to the compatibility of the two stages. Then, a new interpretation of the meta-synthesis of wisdom proposed by Qian Xuesen is given. The meta-synthesis of wisdom, as an improvement of crowd intelligence, is an advanced stage of bionic intelligence, i.e., CI 3.0. It is pointed out that the dual-wheel drive of large language models and big data with deep uncertainty is an evolutionary path from CI 2.0 to CI 3.0, and some elaboration is made. As a result, we propose four development stages (CI 1.0, CI 1.5, CI 2.0, and CI 3.0), which form a complete framework for the development of CI. These stages are progressively refined and mutually compatible. Because cooperation plays a dominant role throughout the development of CI, three types of cooperation are discussed: indirect regulatory cooperation in lower organisms, direct communicative cooperation in higher organisms, and shared-intention-based collaboration in humans. Labor division is the main means of achieving cooperation; for this reason, this paper investigates the relationship between the complexity of behavior and the types of labor division. Finally, based on an overall understanding of the four development stages of CI, the future development directions and research issues of CI are explored.

  • Combining graph neural network with deep reinforcement learning for resource allocation in computing force networks
    Xueying HAN, Mingxi XIE, Ke YU, Xiaohong HUANG, Zongpeng DU, Huijuan YAO
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(5): 701-712. https://doi.org/10.1631/FITEE.2300009

    Fueled by the explosive growth of ultra-low-latency and real-time applications with specific computing and network performance requirements, the computing force network (CFN) has become a hot research subject. The primary CFN challenge is to jointly leverage network and computing resources. Although recent advances in deep reinforcement learning (DRL) have brought significant improvement in network optimization, these methods still suffer from topology changes and fail to generalize to topologies not seen in training. This paper proposes a graph neural network (GNN) based DRL framework to accommodate network traffic and computing resources jointly and efficiently. By taking advantage of the generalization capability of GNNs, the proposed method can operate over variable topologies and obtain higher performance than other DRL methods.
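
    As a minimal, assumption-laden sketch of why message passing generalizes across topologies (illustrative only, not the paper's architecture), one mean-aggregation layer operates on whatever adjacency matrix it is given, so the learned weights are independent of the number of nodes:

      import numpy as np

      def gnn_layer(H, A, W, activation=np.tanh):
          # H: (N, d) node features; A: (N, N) 0/1 adjacency; W: (2d, d') weights
          deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
          neigh = (A @ H) / deg                    # average of neighbor features
          return activation(np.concatenate([H, neigh], axis=1) @ W)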

  • Correspondence
    SEVAR: a stereo event camera dataset for virtual and augmented reality
    Yuda DONG, Zetao CHEN, Xin HE, Lijun LI, Zichao SHU, Yinong CAO, Junchi FENG, Shijie LIU, Chunlai LI, Jianyu WANG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(5): 755-762. https://doi.org/10.1631/FITEE.2400011
  • Communication efficiency optimization of federated learning for computing and network convergence of 6G networks
    Yizhuo CAI, Bo LEI, Qianying ZHAO, Jing PENG, Min WEI, Yushun ZHANG, Xing ZHANG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(5): 713-727. https://doi.org/10.1631/FITEE.2300122

    Federated learning effectively addresses issues such as data privacy by collaborating across participating devices to train global models. However, factors such as network topology and device computing power can affect its training or communication process in complex network environments. Computing and network convergence (CNC) of sixth-generation (6G) networks, a new network architecture and paradigm with computing-measurable, perceptible, distributable, dispatchable, and manageable capabilities, can effectively support federated learning training and improve its communication efficiency. CNC can achieve this by guiding the participating devices’ training according to business requirements, resource load, network conditions, and the computing power of the devices. In this paper, to improve the communication efficiency of federated learning in complex networks, we study communication efficiency optimization methods for federated learning in CNC of 6G networks, which make decisions on the training process according to the network conditions and computing power of the participating devices. The simulations cover the two architectures that exist for devices in federated learning and arrange devices to participate in training based on their computing power, while optimizing communication efficiency during the transfer of model parameters. The results show that the proposed methods cope well with complex network situations, effectively balance the delay distribution of participating devices for local training, improve the communication efficiency during the transfer of model parameters, and improve resource utilization in the network.

  • Review
    A survey of energy-efficient strategies for federated learning in mobile edge computing
    Kang YAN, Nina SHU, Tao WU, Chunsheng LIU, Panlong YANG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(5): 645-663. https://doi.org/10.1631/FITEE.2300181

    With the booming development of fifth-generation network technology and Internet of Things, the number of end-user devices (EDs) and diverse applications is surging, resulting in massive data generated at the edge of networks. To process these data efficiently, the innovative mobile edge computing (MEC) framework has emerged to guarantee low latency and enable efficient computing close to the user traffic. Recently, federated learning (FL) has demonstrated its empirical success in edge computing due to its privacy-preserving advantages. Thus, it becomes a promising solution for analyzing and processing distributed data on EDs in various machine learning tasks, which are the major workloads in MEC. Unfortunately, EDs are typically powered by batteries with limited capacity, which brings challenges when performing energy-intensive FL tasks. To address these challenges, many strategies have been proposed to save energy in FL. Considering the absence of a survey that thoroughly summarizes and classifies these strategies, in this paper, we provide a comprehensive survey of recent advances in energy-efficient strategies for FL in MEC. Specifically, we first introduce the system model and energy consumption models in FL, in terms of computation and communication. Then we analyze the challenges regarding improving energy efficiency and summarize the energy-efficient strategies from three perspectives: learning-based, resource allocation, and client selection. We conduct a detailed analysis of these strategies, comparing their advantages and disadvantages. Additionally, we visually illustrate the impact of these strategies on the performance of FL by showcasing experimental results. Finally, several potential future research directions for energy-efficient FL are discussed.
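
    The computation and communication energy models referred to above usually take the following form in the FL literature (an illustrative sketch with generic symbols, not necessarily the exact models used in this survey):

      import math

      def local_training_energy(cycles_per_sample, n_samples, freq_hz, kappa=1e-28):
          # Dynamic-power CPU model: E_cmp = kappa * C * D * f^2
          return kappa * cycles_per_sample * n_samples * freq_hz ** 2

      def upload_energy(model_bits, bandwidth_hz, snr, tx_power_w):
          # Shannon-rate transmission time multiplied by transmit power
          rate = bandwidth_hz * math.log2(1 + snr)
          return tx_power_w * model_bits / rate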

  • Federated learning on non-IID and long-tailed data via dual-decoupling
    Zhaohui WANG, Hongjiao LI, Jinguo LI, Renhao HU, Baojin WANG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(5): 728-741. https://doi.org/10.1631/FITEE.2300284

    Federated learning (FL), a cutting-edge distributed machine learning training paradigm, aims to generate a global model by collaborating on the training of client models without revealing local private data. The co-occurrence of non-independent and identically distributed (non-IID) data and long-tailed distributions in FL is one challenge that substantially degrades aggregate performance. In this paper, we present a corresponding solution called federated dual-decoupling via model and logit calibration (FedDDC) for non-IID and long-tailed distributions. The model is characterized by three aspects. First, we decouple the global model into the feature extractor and the classifier to fine-tune the components affected by the joint problem. For the biased feature extractor, we propose a client confidence re-weighting scheme to assist calibration, which assigns optimal weights to each client. For the biased classifier, we apply the classifier re-balancing method for fine-tuning. Then, we calibrate and integrate the client confidence re-weighted logits with the re-balanced logits to obtain unbiased logits. Finally, we use decoupled knowledge distillation for the first time in the joint problem to enhance the accuracy of the global model by extracting the knowledge of the unbiased model. Numerous experiments demonstrate that on non-IID and long-tailed data in FL, our approach outperforms state-of-the-art methods.

  • Review
    Industrial Internet for intelligent manufacturing: past, present, and future
    Chi XU, Haibin YU, Xi JIN, Changqing XIA, Dong LI, Peng ZENG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(9): 1173-1192. https://doi.org/10.1631/FITEE.2300806

    Industrial Internet, motivated by the deep integration of new-generation information and communication technology (ICT) and advanced manufacturing technology, will open up the production chain, value chain, and industry chain by establishing complete interconnections between humans, machines, and things. This will also help establish novel manufacturing and service modes, where personalized and customized production for differentiated services is a typical paradigm of future intelligent manufacturing. Thus, there is an urgent requirement to break through the existing chimney-like service mode provided by the hierarchical heterogeneous network architecture and establish a transparent channel for manufacturing and services using a flat network architecture. Starting from the basic concepts of process manufacturing and discrete manufacturing, we first analyze the basic requirements of typical manufacturing tasks. Then, with an overview of the development of the industrial Internet, we systematically compare current networking technologies and further analyze the problems of the present industrial Internet. On this basis, we propose to establish a novel “thin waist” that integrates sensing, communication, computing, and control for the future industrial Internet. Furthermore, we analyze and discuss the key challenges and future research issues regarding multi-dimensional collaborative task–resource sensing, end-to-end deterministic communication over heterogeneous networks, and virtual computing and operation control of the industrial Internet.

  • A cloud–edge–device collaborative offloading scheme with heterogeneous tasks and its performance evaluation
    Xiaojun BAI, Yang ZHANG, Haixing WU, Yuting WANG, Shunfu JIN
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(5): 664-684. https://doi.org/10.1631/FITEE.2300128

    How to collaboratively offload tasks between user devices, edge networks (ENs), and cloud data centers is an interesting and challenging research topic. In this paper, we investigate the offloading decision, analytical modeling, and system parameter optimization problem in a collaborative cloud–edge–device environment, aiming to trade off different performance measures. According to the differentiated delay requirements of tasks, we classify the tasks into delay-sensitive and delay-tolerant tasks. To meet the delay requirements of delay-sensitive tasks and process as many delay-tolerant tasks as possible, we propose a cloud–edge–device collaborative task offloading scheme, in which delay-sensitive and delay-tolerant tasks follow the access threshold policy and the loss policy, respectively. We establish a four-dimensional continuous-time Markov chain as the system model. By using the Gauss–Seidel method, we derive the stationary probability distribution of the system model. Accordingly, we present the blocking rate of delay-sensitive tasks and the average delay of these two types of tasks. Numerical experiments are conducted and analyzed to evaluate the system performance, and numerical simulations are presented to evaluate and validate the effectiveness of the proposed task offloading scheme. Finally, we optimize the access threshold in the EN buffer to obtain the minimum system cost with different proportions of delay-sensitive tasks.
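
    A Gauss–Seidel sweep for the stationary distribution of a continuous-time Markov chain can be sketched as follows (a generic two-state toy for illustration, not the paper's four-dimensional model):

      import numpy as np

      def ctmc_stationary(Q, tol=1e-10, max_iter=10000):
          # Solve pi Q = 0 with sum(pi) = 1; Q is a generator matrix (rows sum to 0)
          n = Q.shape[0]
          pi = np.full(n, 1.0 / n)
          for _ in range(max_iter):
              prev = pi.copy()
              for i in range(n):
                  s = sum(pi[j] * Q[j, i] for j in range(n) if j != i)
                  pi[i] = -s / Q[i, i]            # balance equation of state i
              pi /= pi.sum()                      # renormalize to a probability vector
              if np.max(np.abs(pi - prev)) < tol:
                  break
          return pi

      Q = np.array([[-1.0, 1.0], [2.0, -2.0]])    # toy generator
      print(ctmc_stationary(Q))                    # approx. [0.667, 0.333]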

  • Improved deep learning aided key recovery framework: applications to large-state block ciphers
    Xiaowei LI, Jiongjiong REN, Shaozhen CHEN
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(10): 1406-1420. https://doi.org/10.1631/FITEE.2300848

    At the Annual International Cryptology Conference in 2019, Gohr introduced a deep learning based cryptanalysis technique applicable to reduced-round lightweight block ciphers with short blocks, such as SPECK32/64. One significant challenge left unstudied by Gohr’s work is the implementation of deep learning based key recovery attacks on large-state block ciphers. The purpose of this paper is to present an improved deep learning based framework for recovering keys for large-state block ciphers. First, we propose a key bit sensitivity test (KBST) based on deep learning to divide the key space objectively. Second, we propose a new method for constructing neural distinguisher combinations to improve the deep learning based key recovery framework for large-state block ciphers and demonstrate its rationality and effectiveness from the perspective of cryptanalysis. Under the improved key recovery framework, we train an efficient neural distinguisher combination for each large-state member of SIMON and SPECK and finally carry out practical key recovery attacks on the large-state members of SIMON and SPECK. Furthermore, the 13-round SIMON64 attack is, to date, the most effective practical key recovery attack. Notably, this is the first attempt to propose deep learning based practical key recovery attacks on 18-round SIMON128, 19-round SIMON128, 14-round SIMON96, and 14-round SIMON64. Additionally, we enhance the outcomes of the practical key recovery attack on SPECK large-state members, which amplifies the success rate of the key recovery attack in comparison to existing results.

  • Digital twin system framework and information model for industry chain based on industrial Internet
    Wenxuan WANG, Yongqin LIU, Xudong CHAI, Lin ZHANG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(7): 951-967. https://doi.org/10.1631/FITEE.2300123

    The integration of industrial Internet, cloud computing, and big data technology is changing the business and management mode of the industry chain. However, the industry chain is characterized by a wide range of fields, complex environment, and many factors, which creates a challenge for efficient integration and leveraging of industrial big data. Aiming at the integration of physical space and virtual space of the current industry chain, we propose an industry chain digital twin (DT) system framework for the industrial Internet. In addition, an industry chain information model based on a knowledge graph (KG) is proposed to integrate complex and heterogeneous industry chain data and extract industrial knowledge. First, the ontology of the industry chain is established, and an entity alignment method based on scientific and technological achievements is proposed. Second, the bidirectional encoder representations from Transformers (BERT) based multi-head selection model is proposed for joint entity–relation extraction of industry chain information. Third, a relation completion model based on a relational graph convolutional network (R-GCN) and a graph sample and aggregate network (GraphSAGE) is proposed which considers both semantic information and graph structure information of KG. Experimental results show that the performances of the proposed joint entity–relation extraction model and relation completion model are significantly better than those of the baselines. Finally, an industry chain information model is established based on the data of 18 industry chains in the field of basic machinery, which proves the feasibility of the proposed method.

  • Detecting compromised accounts caused by phone number recycling on e-commerce platforms: taking Meituan as an example
    Min GAO, Shutong CHEN, Yangbo GAO, Zhenhua ZHANG, Yu CHEN, Yupeng LI, Qiongzan YE, Xin WANG, Yang CHEN
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(8): 1077-1095. https://doi.org/10.1631/FITEE.2300291

    Phone number recycling (PNR) refers to the event wherein a mobile operator collects a disconnected number and reassigns it to a new owner. It has posed a threat to the reliability of the existing authentication solution for e-commerce platforms. Specifically, a new owner of a reassigned number can access the application account with which the number is associated, and may perform fraudulent activities. Existing solutions that employ a reassigned number database from mobile operators are costly for e-commerce platforms with large-scale users. Thus, alternative solutions that depend on only the information of the applications are imperative. In this work, we study the problem of detecting accounts that have been compromised owing to the reassignment of phone numbers. Our analysis on Meituan’s real-world dataset shows that compromised accounts have unique statistical features and temporal patterns. Based on the observations, we propose a novel model called temporal pattern and statistical feature fusion model (TSF) to tackle the problem, which integrates a temporal pattern encoder and a statistical feature encoder to capture behavioral evolutionary interaction and significant operation features. Extensive experiments on the Meituan and IEEE-CIS datasets show that TSF significantly outperforms the baselines, demonstrating its effectiveness in detecting compromised accounts due to reassigned numbers.

  • Camouflaged target detection based on multimodal image input pixel-level fusion
    Ruihui PENG, Jie LAI, Xueting YANG, Dianxing SUN, Shuncheng TAN, Yingjuan SONG, Wei GUO
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(9): 1226-1239. https://doi.org/10.1631/FITEE.2300503

    Camouflaged targets are a type of nonsalient target with high foreground and background fusion and minimal target feature information, making target recognition extremely difficult. Most detection algorithms for camouflaged targets use only the target’s single-band information, resulting in low detection accuracy and a high missed detection rate. We present a multimodal image fusion camouflaged target detection technique (MIF-YOLOv5) in this paper. First, we provide a multimodal image input to achieve pixel-level fusion of the camouflaged target’s optical and infrared images to improve the effective feature information of the camouflaged target. Second, a loss function is created, and the K-Means++ clustering technique is used to optimize the target anchor boxes in the dataset to increase camouflage personnel detection accuracy and robustness. Finally, a comprehensive detection index of camouflaged targets is proposed to compare the overall effectiveness of various approaches. More crucially, we create a multispectral camouflage target dataset to test the suggested technique. Experimental results show that the proposed method has the best comprehensive detection performance, with a detection accuracy of 96.5%, a recognition probability of 92.5%, a parameter number increase of 1×10⁴, a theoretical calculation amount increase of 0.03 GFLOPs, and a comprehensive detection index of 0.85. The advantage of this method in terms of detection accuracy is also apparent in performance comparisons with other target detection algorithms.
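
    Anchor optimization with K-Means++ of the kind described above can be sketched as follows (a hypothetical helper; YOLO-style pipelines often cluster with a 1−IoU distance rather than the plain Euclidean metric used here):

      import numpy as np
      from sklearn.cluster import KMeans

      def cluster_anchors(box_wh, k=9):
          # box_wh: (N, 2) array of ground-truth box widths and heights
          km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(box_wh)
          anchors = km.cluster_centers_
          return anchors[np.argsort(anchors.prod(axis=1))]   # sort anchors by area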

  • Editorial
    Coordination of networking and computing: toward new information infrastructure and new services mode
    Xiaoyun WANG, Tao SUN, Yong CUI, Rajkumar BUYYA, Deke GUO, Qun HUANG, Hassnaa MOUSTAFA, Chen TIAN, Shangguang WANG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(5): 629-632. https://doi.org/10.1631/FITEE.2430000
  • A low-profile dual-broadband dual-circularly-polarized reflectarray for K-/Ka-band space applications
    Xuanfeng TONG, Zhi Hao JIANG, Yuan LI, Fan WU, Lin PENG, Taiwei YUE, Wei HONG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(8): 1145-1161. https://doi.org/10.1631/FITEE.2300214

    A low-profile dual-broadband dual-circularly-polarized (dual-CP) reflectarray (RA) is proposed and demonstrated, supporting independent beamforming for right-/left-handed CP waves at both K-band and Ka-band. Such functionality is achieved by incorporating multi-layered phase shifting elements individually operating in the K- and Ka-band, which are then interleaved in a shared aperture, resulting in a cell thickness of only about 0.1λ_L. By rotating the designed K- and Ka-band elements around their own geometrical centers, the dual-CP waves in each band can be modulated separately. To reduce the overall profile, planar broadband K-/Ka-band dual-CP feeds are designed based on magnetoelectric dipoles and multi-branch hybrid couplers. The planar feeds achieve bandwidths of about 32% and 26% at K- and Ka-band, respectively, with reflection magnitudes below −13 dB, an axial ratio smaller than 2 dB, and a gain variation of less than 1 dB. A proof-of-concept dual-band dual-CP RA integrated with the planar feeds is fabricated and characterized, and it is capable of generating asymmetrically distributed dual-band dual-CP beams. The measured peak gain values of the beams are around 24.3 and 27.3 dBic, with joint gain variation <1 dB and axial ratio <2 dB bandwidths wider than 20.6% and 14.6% at the lower and higher bands, respectively. The demonstrated dual-broadband dual-CP RA with four degrees of freedom of beamforming could be a promising candidate for space and satellite communications.
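
    The element-rotation phasing mentioned above relies on the standard sequential-rotation property of circularly polarized reflectarrays (a well-known relation quoted here for context, not taken from the paper): rotating an element by an angle ψ about its center shifts the co-polarized CP reflection phase by

      \Delta\varphi = \pm 2\psi,

    with the sign set by the handedness of the incident wave.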

  • Comment
    Suno: potential, prospects, and trends
    Jiaxing YU, Songruoyao WU, Guanting LU, Zijin LI, Li ZHOU, Kejun ZHANG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(7): 1025-1030. https://doi.org/10.1631/FITEE.2400299
  • Practical fixed-time adaptive fuzzy control of uncertain nonlinear systems with time-varying asymmetric constraints: a unified barrier function based approach
    Zixuan HUANG, Huanqing WANG, Ben NIU, Xudong ZHAO, Adil M. AHMAD
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(9): 1282-1294. https://doi.org/10.1631/FITEE.2300408

    A practical fixed-time adaptive fuzzy control strategy is investigated for uncertain nonlinear systems with time-varying asymmetric constraints and input quantization. To overcome the difficulties of designing controllers under state constraints, a unified barrier function approach is employed to construct a coordinate transformation that maps the original constrained system to an equivalent unconstrained one, thus relaxing the time-varying asymmetric constraints upon system states and avoiding the feasibility check condition typically required in the traditional barrier Lyapunov function based control approach. Meanwhile, the “explosion of complexity” problem in the traditional backstepping approach, which arises from the repeated differentiation of virtual controllers, is solved by using the command filter method. It is verified via the fixed-time Lyapunov stability criterion that the system output can track a desired signal within a small error range in a predetermined time, and that all system states remain in the constraint range. Finally, two simulation examples are offered to demonstrate the effectiveness of the proposed strategy.

  • A novel overlapping minimization SMOTE algorithm for imbalanced classification
    Yulin HE, Xuan LU, Philippe FOURNIER-VIGER, Joshua Zhexue HUANG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(9): 1266-1281. https://doi.org/10.1631/FITEE.2300278

    The synthetic minority oversampling technique (SMOTE) is a popular algorithm to reduce the impact of class imbalance in building classifiers, and has received several enhancements over the past 20 years. SMOTE and its variants synthesize a number of minority-class sample points in the original sample space to alleviate the adverse effects of class imbalance. This approach works well in many cases, but problems arise when synthetic sample points are generated in overlapping areas between different classes, which further complicates classifier training. To address this issue, this paper proposes a novel generalization-oriented rather than imputation-oriented minority-class sample point generation algorithm, named overlapping minimization SMOTE (OM-SMOTE). This algorithm is designed specifically for binary imbalanced classification problems. OM-SMOTE first maps the original sample points into a new sample space by balancing sample encoding and classifier generalization. Then, OM-SMOTE employs a set of sophisticated minority-class sample point imputation rules to generate synthetic sample points that are as far as possible from overlapping areas between classes. Extensive experiments have been conducted on 32 imbalanced datasets to validate the effectiveness of OM-SMOTE. Results show that using OM-SMOTE to generate synthetic minority-class sample points leads to better classifier training performances for the naive Bayes, support vector machine, decision tree, and logistic regression classifiers than the 11 state-of-the-art SMOTE-based imputation algorithms. This demonstrates that OM-SMOTE is a viable approach for supporting the training of high-quality classifiers for imbalanced classification. The implementation of OM-SMOTE is shared publicly on the GitHub platform at https://github.com/luxuan123123/OM-SMOTE/.
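
    For context, the baseline SMOTE interpolation that OM-SMOTE refines places each synthetic point on the segment between a minority-class sample and one of its minority-class neighbors (a plain-vanilla sketch; OM-SMOTE's encoding step and overlap-avoiding rules are not reproduced here):

      import numpy as np

      def smote_points(X_min, n_new, k=5, rng=np.random.default_rng(0)):
          # X_min: (N, d) minority-class samples; returns (n_new, d) synthetic points
          out = []
          for _ in range(n_new):
              i = rng.integers(len(X_min))
              d = np.linalg.norm(X_min - X_min[i], axis=1)
              j = rng.choice(np.argsort(d)[1:k + 1])       # a random minority neighbor
              lam = rng.random()                            # interpolation factor in [0, 1)
              out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
          return np.array(out)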

  • Target parameter estimation for OTFS-integrated radar and communications based on sparse reconstruction preprocessing
    Zhenkai ZHANG, Xiaoke SHANG, Yue XIAO
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(5): 742-754. https://doi.org/10.1631/FITEE.2300462

    Orthogonal time–frequency space (OTFS) is a new modulation technique proposed in recent years for high Doppler wireless scenes. To solve the parameter estimation problem of the OTFS-integrated radar and communications system, we propose a parameter estimation method based on sparse reconstruction preprocessing to reduce the computational effort of the traditional weighted subspace fitting (WSF) algorithm. First, an OTFS-integrated echo signal model is constructed. Then, the echo signal is transformed to the time domain to separate the target angle from the range, and the range and angle of the detected target are coarsely estimated by using the sparse reconstruction algorithm. Finally, the WSF algorithm is used to refine the search with the coarse estimate at the center to obtain an accurate estimate. The simulations demonstrate the effectiveness and superiority of the proposed parameter estimation algorithm.

  • Multi-agent evaluation for energy management by practically scaling α-rank
    Yiyun SUN, Senlin ZHANG, Meiqin LIU, Ronghao ZHENG, Shanling DONG, Xuguang LAN
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(7): 1003-1016. https://doi.org/10.1631/FITEE.2300438

    Currently, decarbonization has become an emerging trend in the power system arena. However, the increasing number of photovoltaic units distributed into a distribution network may result in voltage issues, providing challenges for voltage regulation across a large-scale power grid network. Reinforcement learning based intelligent control of smart inverters and other smart building energy management (EM) systems can be leveraged to alleviate these issues. To achieve the best EM strategy for building microgrids in a power system, this paper presents two large-scale multi-agent strategy evaluation methods to preserve building occupants’ comfort while pursuing system-level objectives. The EM problem is formulated as a general-sum game to optimize the benefits at both the system and building levels. The α-rank algorithm can solve the general-sum game and guarantee the ranking theoretically, but it is limited by the interaction complexity and hardly applies to the practical power system. A new evaluation algorithm (TcEval) is proposed by practically scaling the α-rank algorithm through a tensor complement to reduce the interaction complexity. Then, considering the noise prevalent in practice, a noise processing model with domain knowledge is built to calculate the strategy payoffs, and thus the TcEval-AS algorithm is proposed when noise exists. Both evaluation algorithms developed in this paper greatly reduce the interaction complexity compared with existing approaches, including ResponseGraphUCB (RG-UCB) and αInformationGain (α-IG). Finally, the effectiveness of the proposed algorithms is verified in the EM case with realistic data.

  • GeeNet: robust and fast point cloud completion for ground elevation estimation towards autonomous vehicles
    Liwen LIU, Weidong YANG, Ben FEI
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(7): 938-950. https://doi.org/10.1631/FITEE.2300388

    Ground elevation estimation is vital for numerous applications in autonomous vehicles and intelligent robotics, including three-dimensional object detection, navigable space detection, point cloud matching for localization, and registration for mapping. However, most works regard the ground as a plane without height information, which causes inaccurate manipulation in these applications. In this work, we propose GeeNet, a novel end-to-end, lightweight method that completes the ground in nearly real time and simultaneously estimates the ground elevation in a grid-based representation. GeeNet leverages the mixing of two- and three-dimensional convolutions to preserve a lightweight architecture to regress ground elevation information for each cell of the grid. For the first time, GeeNet has fulfilled ground elevation estimation from semantic scene completion. We use the SemanticKITTI and SemanticPOSS datasets to validate the proposed GeeNet, demonstrating the qualitative and quantitative performances of GeeNet on ground elevation estimation and semantic scene completion of the point cloud. Moreover, the cross-dataset generalization capability of GeeNet is experimentally proven. GeeNet achieves state-of-the-art performance in terms of point cloud completion and ground elevation estimation, with a runtime of 0.88 ms.

  • Review
    Recent progress on the applications of micro/nanofibers in ultrafast optics
    Xinying HE, Yuhang LI, Zhuning WANG, Sijie PIAN, Xu LIU, Yaoguang MA
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(9): 1193-1208. https://doi.org/10.1631/FITEE.2300509

    Ultrafast fiber lasers are indispensable components in the field of ultrafast optics, and their continuous performance advancements are driving the progress of this exciting discipline. Micro/Nanofibers (MNFs) possess unique properties, such as a large fractional evanescent field, flexible and controllable dispersion, and high nonlinearity, making them highly valuable for generating ultrashort pulses. Particularly, in tasks involving mode-locking and dispersion and nonlinearity management, MNFs provide an excellent platform for investigating intriguing nonlinear dynamics and related phenomena, thereby promoting the advancement of ultrafast fiber lasers. In this paper, we present an introduction to the mode evolution and characteristics of MNFs followed by a comprehensive review of recent advances in using MNFs for ultrafast optics applications including evanescent field modulation and control, dispersion and nonlinear management techniques, and nonlinear dynamical phenomenon exploration. Finally, we discuss the potential application prospects of MNFs in the realm of ultrafast optics.

  • Numerical study of a bi-directional in-band pumped dysprosium-doped fluoride fiber laser at 3.2 μm
    Lingjing LI, Chunyang MA, Nian ZHAO, Jie PENG, Bin LIU, Haining JI, Yuchen WANG, Pinghua TANG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(7): 1017-1024. https://doi.org/10.1631/FITEE.2300701

    Dy³⁺-doped fluoride fiber lasers have important applications in environment monitoring, real-time sensing, and polymer processing. At present, achieving a high-efficiency and high-power Dy³⁺-doped fluoride fiber laser in the mid-infrared (mid-IR) region over 3 μm is a scientific and technological frontier. Typically, Dy³⁺-doped fluoride fiber lasers use a unidirectional pumping method, which suffers from the drawback of high thermal loading density on the fiber tips, thus limiting power scalability. In this study, a bi-directional in-band pumping scheme, to address the limitations of output power scaling and to enhance the efficiency of the Dy³⁺-doped fluoride fiber laser at 3.2 μm, is investigated numerically based on rate equations and propagation equations. Detailed simulation results reveal that the optical–optical efficiency of the bi-directional in-band pumped Dy³⁺-doped fluoride fiber laser can reach 75.1%, approaching the Stokes limit of 87.3%. The potential for further improvement of the efficiency of the Dy³⁺-doped fluoride fiber laser is also discussed. The bi-directional pumping scheme offers the intrinsic advantage of mitigating the thermal load on the fiber tips, unlike unidirectional pumping, in addition to its high efficiency. As a result, it is expected to significantly scale the power output of Dy³⁺-doped fluoride fiber lasers in the mid-IR regime.

  • An efficient online histogram publication method for data streams with local differential privacy
    Tao TAO, Funan ZHANG, Xiujun WANG, Xiao ZHENG, Xin ZHAO
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(8): 1096-1109. https://doi.org/10.1631/FITEE.2300368

    Many areas are now experiencing data streams that contain privacy-sensitive information. Although the sharing and release of these data are of great commercial value, if these data are released directly, the private user information in the data will be disclosed. Therefore, how to continuously generate publishable histograms (meeting privacy protection requirements) based on sliding data stream windows has become a critical issue, especially when sending data to an untrusted third party. Existing histogram publication methods are unsatisfactory in terms of time and storage costs, because they must cache all elements in the current sliding window (SW). Our work addresses this drawback by designing an efficient online histogram publication (EOHP) method for local differential privacy data streams. Specifically, in the EOHP method, the data collector first crafts a histogram of the current SW using an approximate counting method. Second, the data collector reduces the privacy budget by using the optimized budget absorption mechanism and adds appropriate noise to the approximate histogram, making it possible to publish the histogram while retaining satisfactory data utility. Extensive experimental results on two different real datasets show that the EOHP algorithm significantly reduces the time and storage costs and improves data utility compared to other existing algorithms.
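
    A bare-bones differentially private histogram release looks like the sketch below (generic Laplace perturbation with unit sensitivity; EOHP's approximate counting and budget-absorption mechanism are not reproduced here):

      import numpy as np

      def publish_histogram(counts, epsilon, rng=np.random.default_rng(0)):
          # counts: bucket counts of the current sliding window
          noisy = counts + rng.laplace(scale=1.0 / epsilon, size=len(counts))
          return np.clip(noisy, 0, None)          # clamp negatives for usability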

  • TibetanGoTinyNet: a lightweight U-Net style network for zero learning of Tibetan Go
    Xiali LI, Yanyin ZHANG, Licheng WU, Yandong CHEN, Junzhi YU
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(7): 924-937. https://doi.org/10.1631/FITEE.2300493

    The game of Tibetan Go faces a scarcity of expert knowledge and research literature. Therefore, we study the zero learning model of Tibetan Go under limited computing resources and propose TibetanGoTinyNet, a novel scale-invariant U-Net style two-headed output lightweight network. Lightweight convolutional neural networks and a capsule structure are applied to the encoder and decoder of TibetanGoTinyNet to reduce the computational burden and achieve better feature extraction results. Several autonomous self-attention mechanisms are integrated into TibetanGoTinyNet to capture the spatial and global information of the Tibetan Go board and select important channels. The training data are generated entirely from self-play games. TibetanGoTinyNet achieves a 62%–78% winning rate against four other U-Net style models, including Res-UNet, Res-UNet Attention, Ghost-UNet, and Ghost Capsule-UNet. It also achieves a 75% winning rate in the ablation experiments on the attention mechanism with embedded positional information. The model saves about 33% of the training time, with a 45%–50% winning rate for different Monte Carlo tree search (MCTS) simulation counts, when migrated from 9 × 9 to 11 × 11 boards. Code for our model is available at https://github.com/paulzyy/TibetanGoTinyNet.

  • Perspective
    Sora for foundation robots with parallel intelligence: three world models, three robotic systems
    Lili FAN, Chao GUO, Yonglin TIAN, Hui ZHANG, Jun ZHANG, Fei-Yue WANG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(7): 917-923. https://doi.org/10.1631/FITEE.2400144
  • Reputation-based joint optimization of user satisfaction and resource utilization in a computing force network
    Yuexia FU, Jing WANG, Lu LU, Qinqin TANG, Sheng ZHANG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(5): 685-700. https://doi.org/10.1631/FITEE.2300156

    Under the development of computing and network convergence, considering the computing and network resources of multiple providers as a whole in a computing force network (CFN) has gradually become a new trend. However, since each computing and network resource provider (CNRP) considers only its own interest and competes with other CNRPs, introducing multiple CNRPs will result in a lack of trust and difficulty in unified scheduling. In addition, concurrent users have different requirements, so there is an urgent need to study how to optimally match users and CNRPs on a many-to-many basis, to improve user satisfaction and ensure the utilization of limited resources. In this paper, we adopt a reputation model based on the beta distribution function to measure the credibility of CNRPs and propose a performance-based reputation update model. Then, we formalize the problem into a constrained multi-objective optimization problem and find feasible solutions using a modified fast and elitist non-dominated sorting genetic algorithm (NSGA-II). We conduct extensive simulations to evaluate the proposed algorithm. Simulation results demonstrate that the proposed model and the problem formulation are valid, and that NSGA-II is effective and can find the Pareto set of the CFN, which increases user satisfaction and resource utilization. Moreover, the set of solutions provided by the Pareto set gives us more choices for the many-to-many matching of users and CNRPs according to the actual situation.
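
    In its simplest form, the beta-distribution reputation model referred to above reduces to the expected value of Beta(r+1, s+1) over r positive and s negative observations (the paper's performance-based update rule is not reproduced here):

      def beta_reputation(r, s):
          # r: positive outcomes, s: negative outcomes observed for a CNRP
          return (r + 1) / (r + s + 2)

      print(beta_reputation(8, 2))   # 0.75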

  • Review
    Convergence of blockchain and Internet of Things: integration, security, and use cases
    Robertas DAMAŠEVIČIUS, Sanjay MISRA, Rytis MASKELIŪNAS, Anand NAYYAR
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(10): 1295-1321. https://doi.org/10.1631/FITEE.2300215

    Internet of Things (IoT) devices are becoming increasingly ubiquitous, and their adoption is growing at an exponential rate. However, they are vulnerable to security breaches, and traditional security mechanisms are not enough to protect them. The massive amounts of data generated by IoT devices can be easily manipulated or stolen, posing significant privacy concerns. This paper provides a comprehensive overview of the integration of blockchain and IoT technologies and their potential to enhance the security and privacy of IoT systems. The paper examines various security issues and vulnerabilities in IoT and explores how blockchain-based solutions can be used to address them. The paper also discusses the potential applications of blockchain-based IoT (B-IoT) systems in various sectors, such as healthcare, transportation, and supply chain management. The paper reveals that the integration of blockchain and IoT has the potential to enhance the security, privacy, and trustworthiness of IoT systems. The multi-layered architecture of B-IoT, consisting of perception, network, data processing, and application layers, provides a comprehensive framework for the integration of blockchain and IoT technologies. The study identifies various security solutions for B-IoT, including smart contracts, decentralized control, immutable data storage, identity and access management (IAM), and consensus mechanisms. The study also discusses the challenges and future research directions in the field of B-IoT.

  • Event-triggered adaptive tracking control of a class of nonlinear systems with asymmetric time-varying output constraints
    Yitao YANG, Lidong ZHANG
    Frontiers of Information Technology & Electronic Engineering, 2024, 25(8): 1134-1144. https://doi.org/10.1631/FITEE.2300679

    This article investigates the event-triggered adaptive neural network (NN) tracking control problem with deferred asymmetric time-varying (DATV) output constraints. To deal with the DATV output constraints, an asymmetric time-varying barrier Lyapunov function (ATBLF) is first built to simplify the stability analysis and controller construction. Second, an event-triggered adaptive NN tracking controller is constructed by incorporating an error-shifting function, which ensures that the tracking error converges to an arbitrarily small neighborhood of the origin within a predetermined settling time, consequently optimizing the utilization of network resources. It is theoretically proven that all signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB), even when the initial value is outside the constraint boundary. Finally, a single-link robotic arm (SLRA) application example is employed to verify the viability of the acquired control algorithm.
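
    For readers unfamiliar with barrier Lyapunov functions, the symmetric constant-bound prototype is

      V(e) = \frac{1}{2}\ln\frac{k_b^{2}}{k_b^{2}-e^{2}}, \qquad |e| < k_b,

    which grows unbounded as the tracking error e approaches the bound k_b, so keeping V bounded keeps the output within the constraint; the ATBLF used here generalizes the constant bound to asymmetric, time-varying bounds and is combined with the error-shifting function to handle the deferred constraints.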