Intelligent radio access networks (RANs) have been viewed as a promising paradigm for better satisfying diverse application demands and supporting various service scenarios. In this paper, a comprehensive survey of recent advances in intelligent RANs is conducted. First, the efforts made by standards organizations and vendors are summarized, and several intelligent RAN architectures proposed by the academic community are presented, such as the intent-driven RAN and networks with enhanced data analytics. Then, several enabling techniques are introduced, including AI-driven network slicing, intent perception, intelligent operation and maintenance, AI-based cloud-edge collaborative networking, and intelligent multi-dimensional resource allocation. Furthermore, the recent progress achieved in developing experimental platforms is described. Finally, given the extensiveness of the research area, several promising future directions are outlined, in terms of standard open data sets, enabling AI with a computing power network, realization of edge intelligence, and a software-defined intelligent satellite-terrestrial integrated network.
The recent decade has witnessed an upsurge in demand for intelligent and simplified Internet of Things (IoT) networks that provide ultra-low-power communication for numerous miniaturized devices. Although the research community has paid great attention to wireless protocol designs for these networks, researchers are handicapped by the lack of an energy-efficient software-defined radio (SDR) platform for fast implementation and experimental evaluation. Current SDRs perform well in battery-equipped systems, but fail to support miniaturized IoT devices with stringent hardware and power constraints. This paper takes the first step toward designing an ultra-low-power SDR that satisfies the ultra-low-power or even battery-free requirements of intelligent and simplified IoT networks. To achieve this goal, the core technique is the effective integration of µW-level backscatter in our SDR to sidestep power-hungry active radio frequency chains. We carefully develop a novel circuit design for efficient energy harvesting and power control, and devise a competent solution for eliminating the harmonic and mirror frequencies caused by the backscatter hardware. We evaluate the proposed SDR using different modulation schemes, and it achieves a high data rate of 100 kb/s with power consumption of less than 200 µW in the active mode and as low as 10 µW in the sleep mode. We also conduct a case study of railway inspection using our platform, achieving 1 kb/s battery-free data delivery to the monitoring unmanned aerial vehicle at a distance of 50 m in a real-world environment, and provide two case studies on smart factories and logistic distribution to explore the application of our platform.
Owing to its inherent central information processing and resource management ability, the cloud radio access network (C-RAN) is a promising network structure for an intelligent and simplified sixth-generation (6G) wireless network. Nevertheless, to further enhance the capacity and coverage, more remote radio heads (RRHs) as well as high-fidelity and low-latency fronthaul links are required, which may lead to high implementation cost. To address this issue, we propose to exploit the intelligent reflecting surface (IRS) as an alternative way to enhance the C-RAN, which is a low-cost and energy-efficient option. Specifically, we consider the uplink transmission where multi-antenna users communicate with the baseband unit (BBU) pool through multi-antenna RRHs, and multiple IRSs are deployed between the users and RRHs. RRHs can conduct either point-to-point (P2P) compression or Wyner-Ziv coding to compress the received signals, which are then forwarded to the BBU pool through fronthaul links. We investigate the joint design and optimization of user transmit beamformers, IRS passive beamformers, and fronthaul compression noise covariance matrices to maximize the uplink sum rate subject to fronthaul capacity constraints under P2P compression and Wyner-Ziv coding. By exploiting the Arimoto-Blahut algorithm and semi-definite relaxation (SDR), we propose a successive convex approximation approach to solve the non-convex problems, and two iterative algorithms corresponding to P2P compression and Wyner-Ziv coding are provided. Numerical results verify the performance gain brought about by deploying IRS in C-RAN and the superiority of the proposed joint design.
Edge artificial intelligence will empower increasingly simple industrial wireless networks (IWNs) to support complex and dynamic tasks by collaboratively exploiting the computation and communication resources of both machine-type devices (MTDs) and edge servers. In this paper, we propose a multi-agent deep reinforcement learning based resource allocation (MADRL-RA) algorithm for end–edge orchestrated IWNs to support computation-intensive and delay-sensitive applications. First, we present the system model of IWNs, wherein each MTD is regarded as a self-learning agent. Then, we apply the Markov decision process to formulate a minimum system overhead problem with joint optimization of delay and energy consumption. Next, we employ MADRL to cope with the explosive state space and learn an effective resource allocation policy with respect to computing decision, computation capacity, and transmission power. To break the time correlation of training data while accelerating the learning process of MADRL-RA, we design a weighted experience replay to store and sample experiences categorically. Furthermore, we propose a step-by-step ε-greedy method to balance exploitation and exploration. Finally, we verify the effectiveness of MADRL-RA by comparing it with several benchmark algorithms in extensive experiments, showing that MADRL-RA converges quickly and learns an effective resource allocation policy that achieves the minimum system overhead.
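The step-by-step ε-greedy balancing idea can be sketched as follows; this is a minimal illustration, and the class name, decay step, and bounds are assumptions rather than the paper's exact schedule:

```python
import random

class StepEpsilonGreedy:
    """Step-by-step epsilon-greedy policy: epsilon shrinks by a fixed
    step after every action, gradually shifting the agent from
    exploration toward exploitation."""

    def __init__(self, n_actions, eps_start=1.0, eps_min=0.05, eps_step=0.01):
        self.n_actions = n_actions
        self.epsilon = eps_start
        self.eps_min = eps_min
        self.eps_step = eps_step

    def select(self, q_values, rng=random):
        if rng.random() < self.epsilon:
            action = rng.randrange(self.n_actions)                      # explore
        else:
            action = max(range(self.n_actions), key=lambda a: q_values[a])  # exploit
        # step-by-step decay toward the exploration floor
        self.epsilon = max(self.eps_min, self.epsilon - self.eps_step)
        return action
```

With ε driven to zero the policy becomes purely greedy, which is how the schedule ends up exploiting the learned Q-values.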
To support the ubiquitous connectivity requirement of sixth-generation (6G) communications, unmanned aerial vehicles (UAVs) are expected to play a key role in future communication networks. One major issue in UAV communications is the interference resulting from spectrum sharing and line-of-sight links. Recently, the application of the coordinated multipoint (CoMP) technology has been proposed to reduce the interference in the UAV-terrestrial heterogeneous network (HetNet). In this paper, we consider a three-dimensional (3D) multilayer UAV-terrestrial HetNet, where the aerial base stations (ABSs) are deployed at multiple different altitudes. Using stochastic geometry, we develop a tractable mathematical framework to characterize the aggregate interference and evaluate the coverage probability of this HetNet. Our numerical results show that the implementation of the CoMP scheme can effectively reduce the interference in the network, especially when the density of base stations is relatively large. Furthermore, the system parameters of the ABSs deployed at higher altitudes dominantly influence the coverage performance of the considered 3D HetNet.
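As an illustration of the stochastic-geometry methodology, the coverage probability of a much simpler flat (2D) Poisson network can be estimated by Monte Carlo as sketched below; this does not reproduce the paper's 3D multilayer framework or CoMP scheme, and every function name and parameter value is an assumption:

```python
import math
import random

def _poisson(rng, lam):
    # Knuth's method for sampling a Poisson-distributed count
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def coverage_probability(density, threshold_db, alpha=4.0,
                         radius=500.0, trials=500, seed=1):
    """Monte Carlo estimate of P(SIR > threshold) for a user at the
    origin in a 2D Poisson field of base stations with Rayleigh fading,
    nearest-BS association, and path-loss exponent alpha."""
    rng = random.Random(seed)
    theta = 10.0 ** (threshold_db / 10.0)
    covered = 0
    for _ in range(trials):
        n = _poisson(rng, density * math.pi * radius ** 2)
        if n == 0:
            continue  # no base station in the window: counted as uncovered
        # distances of uniformly placed points in a disk, nearest first
        d = sorted(radius * math.sqrt(rng.random()) for _ in range(n))
        fading = [rng.expovariate(1.0) for _ in range(n)]  # Rayleigh power gains
        signal = fading[0] * d[0] ** (-alpha)
        interference = sum(h * r ** (-alpha) for h, r in zip(fading[1:], d[1:]))
        if interference == 0 or signal / interference > theta:
            covered += 1
    return covered / trials
```

Raising the SIR threshold can only shrink the coverage event, so the estimate decreases monotonically in `threshold_db` for a fixed seed.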
Fog radio access networks (F-RANs), in which the fog access points are equipped with communication, caching, and computing functionalities, have been anticipated as a promising architecture for enabling virtual reality (VR) applications in wireless networks. Although extensive research efforts have been devoted to designing efficient resource allocation strategies for realizing successful mobile VR delivery in the downlink, the equally important resource allocation problem of mobile VR delivery in the uplink has so far drawn little attention. In this work, we investigate a mobile VR F-RAN delivery framework, where both the uplink and downlink transmissions are considered. We first characterize the round-trip latency of the system, which reveals its dependence on the communication, caching, and computation resource allocations. Based on this characterization, we propose a simple yet efficient algorithm to minimize the round-trip latency, while satisfying the practical constraints on caching, computation capability, and transmission capacity in the uplink and downlink. Numerical results show that our proposed algorithm can effectively reduce the round-trip latency compared with various baselines, and the impacts of communication, caching, and computing resources on latency performance are illustrated.
The ever-changing environment and complex combat missions create new demands for the formation of mission groups of unmanned combat agents. This study aims to address the problem of dynamic construction of mission groups under new requirements. Agents are heterogeneous, and a group formation method must dynamically form new groups in circumstances where missions are constantly being explored. In our method, a group formation strategy that combines heuristic rules and response threshold models is proposed to dynamically adjust the members of the mission group and adapt to the needs of new missions. The degree of matching between the mission requirements and the group’s capabilities, and the communication cost of group formation are used as indicators to evaluate the quality of the group. The response threshold method and the ant colony algorithm are selected as the comparison algorithms in the simulations. The results show that the grouping scheme obtained by the proposed method is superior to those of the comparison methods.
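The response threshold component of the grouping strategy can be illustrated with its classic stimulus–threshold form from swarm task allocation; the function name and default steepness are assumptions:

```python
def response_probability(stimulus, threshold, steepness=2):
    """Classic response threshold model: the probability that an agent
    engages a mission rises with the mission's stimulus s and falls as
    the agent's own threshold theta grows; the steepness exponent n
    controls how sharp the transition around s = theta is."""
    s_n = float(stimulus) ** steepness
    return s_n / (s_n + float(threshold) ** steepness)
```

At `stimulus == threshold` the probability is exactly 0.5, which is what makes the threshold interpretable as the tipping point for joining a mission group.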
Predicting visual attention facilitates an adaptive virtual museum environment and provides a context-aware and interactive user experience. Explorations toward development of a visual attention mechanism using eye-tracking data have so far been limited to 2D cases, and researchers are yet to approach this topic in a 3D virtual environment and from a spatiotemporal perspective. We present the first 3D Eye-tracking Dataset for Visual Attention modeling in a virtual Museum, known as the EDVAM. In addition, a deep learning model is devised and tested with the EDVAM to predict a user’s subsequent visual attention from previous eye movements. This work provides a reference for visual attention modeling and context-aware interaction in the context of virtual museums.
We describe a method of optical flow extraction for high-speed high-brightness targets based on a pulse array image sensor (PAIS). PAIS is a retina-like image sensor with pixels triggered by light; it can convert light into a series of pulse intervals. This method can obtain optical flow from pulse data directly by accumulating continuous pulses. The triggered points can be used to filter redundant data when the target is brighter than the background. The method takes full advantage of the rapid response of PAIS to high-brightness targets. We applied this method to extract the optical flow of high-speed turntables against different background brightness levels, using both a sensor model and real data. At a sampling rate of 2×10⁴ frames/s, the optical flow could be extracted from a high-speed turntable rotating at 1000 r/min. More than 90% of redundant points could be filtered by our method. Experimental results showed that the optical flow extraction algorithm based on pulse data can extract the optical flow information of high-brightness objects efficiently without the need to reconstruct images.
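The redundant-point filtering step can be sketched as a simple pulse-rate threshold: on a pulse array sensor a pixel's pulse rate grows with its brightness, so background pixels fire far less often than the bright target. The count-based representation and all names below are assumptions, not the paper's implementation:

```python
def filter_bright_points(pulse_counts, window_s, rate_thresh):
    """Keep only the pixels whose pulse rate (pulses per second, a
    proxy for brightness) over a time window exceeds a threshold,
    discarding redundant background points before flow estimation.

    pulse_counts: dict mapping (x, y) pixel coordinates to the number
    of pulses observed in a window of window_s seconds."""
    return sorted((x, y) for (x, y), count in pulse_counts.items()
                  if count / window_s > rate_thresh)
```

Only the surviving triggered points would then feed the pulse-accumulation flow computation, which is how most of the redundant data can be dropped up front.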
Identifying factors that exert more influence on system output from data is one of the most challenging tasks in science and engineering. In this work, a sensitivity analysis of the generalized Gaussian process regression (SA-GGPR) model is proposed to identify important factors of the nonlinear counting system. In SA-GGPR, the GGPR model with Poisson likelihood is adopted to describe the nonlinear counting system. The GGPR model with Poisson likelihood inherits the merits of nonparametric kernel learning and Poisson distribution, and can handle complex nonlinear counting systems. Nevertheless, understanding the relationships between model inputs and output in the GGPR model with Poisson likelihood is not readily accessible due to its nonparametric and kernel structure. SA-GGPR addresses this issue by providing a quantitative assessment of how different inputs affect the system output. The application results on a simulated nonlinear counting system and a real steel casting-rolling process have demonstrated that the proposed SA-GGPR method outperforms several state-of-the-art methods in identification accuracy.
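As a generic illustration of ranking input importance for a black-box model (far simpler than the SA-GGPR treatment of a Poisson-likelihood GP), a local derivative-based sensitivity can be sketched; the function name and step size are assumptions:

```python
def local_sensitivity(model, x0, h=1e-4):
    """Model-agnostic local sensitivity: the magnitude of each partial
    derivative of a black-box model at the point x0, estimated by
    central finite differences. Larger values indicate inputs that
    exert more influence on the output near x0."""
    sens = []
    for i in range(len(x0)):
        up, dn = list(x0), list(x0)
        up[i] += h
        dn[i] -= h
        sens.append(abs(model(up) - model(dn)) / (2.0 * h))
    return sens
```

Sorting inputs by these magnitudes gives a simple importance ranking, the same kind of output a full sensitivity analysis ultimately delivers.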
RSA and elliptic curve cryptography (ECC) algorithms are widely used in authentication, data security, and access control. In this paper, we analyze the basic operations of the ECC and RSA algorithms and optimize their modular multiplication and modular inversion algorithms. We then propose a reconfigurable modular operation architecture, with a mixed-memory unit and double multiply-accumulate structures, to realize our unified, asymmetric cryptosystem structure in an operational unit. Synthesized with a 55-nm CMOS process, our design runs at 588 MHz and requires only 437 801 µm² of hardware resources. Our proposed design consumes 21.92 and 23.36 mW for 2048-bit RSA modular multiplication and modular inversion respectively, as well as 16.16 and 15.88 mW to complete 512-bit ECC dual-field modular multiplication and modular inversion respectively. It is more energy-efficient and flexible than existing single-algorithm units. Compared with existing multiple-algorithm units, our proposed method shows better performance. The operation unit is embedded in a 64-bit RISC-V processor, realizing key generation, encryption and decryption, and digital signature functions of both RSA and ECC. Our proposed design takes 0.224 and 0.153 ms for 256-bit ECC point multiplication in GF(p) and GF(2^m) respectively, as well as 0.96 ms to complete 1024-bit RSA exponentiation, meeting the demand for high energy efficiency.
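The RSA exponentiation mentioned above reduces to repeated modular multiplication, which is why the modular multiplier dominates the hardware budget. A minimal software sketch of the standard square-and-multiply algorithm (the hardware design itself is not reproduced here):

```python
def mod_exp(base, exp, mod):
    """Right-to-left binary square-and-multiply modular exponentiation,
    the core loop of RSA; equivalent to Python's built-in pow(base, exp, mod).
    Each bit of the exponent costs one squaring, plus one extra modular
    multiplication when the bit is set."""
    result = 1
    base %= mod
    while exp:
        if exp & 1:
            result = result * base % mod  # multiply step for a set bit
        base = base * base % mod          # squaring step for every bit
        exp >>= 1
    return result
```

For a k-bit exponent this needs about 1.5k modular multiplications on average, which directly motivates optimizing the modular multiplier in hardware.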
There are two famous function decomposition methods in mathematics: the Taylor series and the Fourier series. The Fourier series developed into the Fourier spectrum, which was applied to signal decomposition and analysis. However, because the Taylor series of a function cannot be obtained without an explicit functional expression, it has rarely been used in engineering. We developed a Taylor series using our proposed dendrite net (DD), constructed a relation spectrum, and applied it to the decomposition and analysis of models and systems. Specifically, knowledge of the intuitive link between muscle activity and finger movement is vital for the design of commercial prosthetic hands that do not need user pre-training. However, this link has yet to be understood due to the complexity of the human hand. In this study, the relation spectrum was applied to analyze the muscle–finger system. One single muscle actuates multiple fingers, or multiple muscles actuate one single finger simultaneously. Thus, the research was focused on muscle synergy and muscle coupling for the hand. The main contributions are twofold: (1) the findings concerning the hand contribute to the design of prosthetic hands; (2) the relation spectrum makes the online model human-readable, which unifies online performance and offline results. Code is available at https://github.com/liugang1234567/Gang-neuron.
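Assuming the module form of the dendrite net as a linear map followed by an element-wise (Hadamard) product with the input, a forward pass can be sketched as follows; this is an illustrative sketch, and the exact architecture in the paper and repository may differ:

```python
def dd_forward(x, weight_mats):
    """Sketch of a dendrite net (DD) forward pass under the assumed
    module form A_l = (W_l A_{l-1}) ∘ X: each module applies a linear
    map and then multiplies element-wise by the original input X, so
    stacking L modules expresses a degree-(L+1) polynomial of the
    inputs -- the Taylor-series-like expansion that a relation spectrum
    can be read off from.

    x: list of input values; weight_mats: list of square matrices
    (lists of rows), one per module."""
    a = list(x)
    for W in weight_mats:
        a = [sum(w * aj for w, aj in zip(row, a)) * xi   # (W a) ∘ x
             for row, xi in zip(W, x)]
    return a
```

Because every module raises the polynomial degree by one, the learned weights decompose the model into monomial terms, which is what makes the fitted model human-readable.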
The input/output (I/O) pins of an industry-level fluorescent optical fiber temperature sensor readout circuit need on-chip integrated high-performance electrostatic discharge (ESD) protection devices. The failure level of the basic N-type buried-layer gate-controlled silicon controlled rectifier (NBL-GCSCR) manufactured in a 0.18 μm standard bipolar-CMOS-DMOS (BCD) process can hardly meet this need. Therefore, we propose an on-chip integrated novel deep N-well gate-controlled SCR (DNW-GCSCR) with a high failure level to effectively solve this problem in the same semiconductor process. Technology computer-aided design (TCAD) simulation is used to analyze the device characteristics. The SCRs are characterized by transmission line pulse (TLP) testing to obtain accurate ESD parameters. The holding voltage (24.03 V) of the NBL-GCSCR with the longitudinal bipolar junction transistor (BJT) path is significantly higher than the holding voltage (5.15 V) of the DNW-GCSCR with the lateral SCR path of the same size. However, the failure current of the NBL-GCSCR device is 1.71 A, whereas that of the DNW-GCSCR device is 20.99 A. When the gate size of the DNW-GCSCR is increased from 2 μm to 6 μm, the holding voltage increases from 3.50 V to 8.38 V. The optimized DNW-GCSCR (6 μm) can be stably applied in target readout circuits for on-chip electrostatic discharge protection.