A general empirical path loss (PL) model for air-to-ground (A2G) millimeter-wave (mmWave) channels is proposed in this paper. Different from existing PL models, the new model takes the height of unmanned aerial vehicles (UAVs) into account, and divides the propagation conditions into three cases (i.e., line-of-sight, reflection, and diffraction). A map-based deterministic PL prediction algorithm based on the ray-tracing (RT) technique is developed and used to generate a large amount of PL data for the different cases. By fitting and analyzing the PL data under different scenarios and UAV heights, altitude-dependent model parameters are provided. Simulation results show that the proposed model can effectively predict PL values for both low- and high-altitude cases. The prediction results of the proposed model match the RT-based calculation results better than those of the Third Generation Partnership Project (3GPP) model and the close-in model, and the standard deviation of the PL is also much smaller. Moreover, the new model is flexible and can be extended to other A2G scenarios (not covered in this paper) by adjusting the parameters according to simulation or measurement data.
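For context, the close-in (CI) model used as a baseline above anchors the path loss to free space at a 1 m reference distance. The sketch below evaluates that standard CI form; the frequency, path loss exponent, and shadowing values are illustrative placeholders, not the fitted parameters of the proposed model.

```python
import numpy as np

def ci_path_loss(d_m, f_ghz, n, sigma_db=0.0, rng=None):
    """Close-in (CI) path loss model with a 1 m free-space reference:
    PL(f, d) = FSPL(f, 1 m) + 10 * n * log10(d) + X_sigma,
    where FSPL(f, 1 m) = 32.4 + 20 * log10(f_GHz) dB."""
    fspl_1m = 32.4 + 20.0 * np.log10(f_ghz)                  # free-space loss at 1 m
    shadowing = rng.normal(0.0, sigma_db) if rng else 0.0    # lognormal shadow fading (dB)
    return fspl_1m + 10.0 * n * np.log10(d_m) + shadowing

# Illustrative values only: 28 GHz carrier, exponent n = 2.2, 200 m link.
print(ci_path_loss(200.0, 28.0, 2.2, sigma_db=4.0, rng=np.random.default_rng(0)))
```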
Millimeter wave (mmWave) has been claimed as a viable solution for high-bandwidth vehicular communications in 5G and beyond. Realizing such applications requires a robust mmWave vehicular network that can sustain ultra-fast, high-rate data exchange among vehicles and between vehicles and infrastructure (V2I). However, traditional real-time channel estimation strategies are inapplicable because vehicle mobility makes the mmWave channel vary rapidly. To overcome these issues, a channel estimation approach for mmWave V2I communications is proposed in this paper. Specifically, considering a fast-moving vehicle scenario, a mathematical model of the fast time-varying channel is first established. Then, the temporal variation law of the channel between the base station and each mobile user, together with the determined direction-of-arrival, is used to predict the time-varying channel prior information (PI). Finally, by exploiting the PI and the characteristics of the channel, the time-varying channel is estimated. Simulation results show that the proposed scheme outperforms traditional ones in both normalized mean square error and sum-rate performance in the mmWave time-varying vehicular system.
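To make the fast time-varying channel concrete, the following sketch builds a generic geometric mmWave channel in which each path's phase rotates at a Doppler rate set by the vehicle speed; this is a textbook-style illustration under a uniform linear array assumption, not the paper's exact model.

```python
import numpy as np

def time_varying_channel(n_ant, paths, fc_hz, v_mps, t):
    """Generic geometric mmWave channel snapshot at time t: each path has a
    complex gain, an angle, and a Doppler shift set by the vehicle speed and
    the path angle (a sketch, not the paper's exact model)."""
    c = 3e8
    h = np.zeros(n_ant, dtype=complex)
    for gain, theta in paths:
        fd = v_mps * fc_hz / c * np.cos(theta)                     # Doppler shift of this path
        a = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta))  # ULA steering vector
        h += gain * np.exp(1j * 2 * np.pi * fd * t) * a            # phase rotates over time
    return h / np.sqrt(len(paths))

# Illustrative: 2 paths, 28 GHz, 30 m/s vehicle; channel change over 1 ms.
paths = [(1.0, 0.3), (0.4 + 0.2j, -0.9)]
print(np.linalg.norm(time_varying_channel(32, paths, 28e9, 30.0, 0.0) -
                     time_varying_channel(32, paths, 28e9, 30.0, 1e-3)))
```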
High-throughput satellites (HTSs) play an important role in future millimeter-wave (mmWave) aeronautical communication to meet high-speed and broad-bandwidth requirements. This paper investigates the outage performance of the forward link of an aeronautical broadband satellite communication system, where the feeder link from the gateway to the HTS uses free-space optical (FSO) transmission and the user link from the HTS to the aircraft operates in the mmWave band. In the user link, spot beam technology is exploited at the HTS and a massive antenna array is deployed at the aircraft. We first present a location-based beamforming (BF) scheme to maximize the expected output signal-to-noise ratio (SNR) of the forward link with the amplify-and-forward (AF) protocol, which turns out to be a phased array. Then, supposing that the FSO feeder link follows Gamma-Gamma fading whereas the mmWave user link experiences shadowed Rician fading, we take the influence of phase errors into account and derive a closed-form expression for the outage probability (OP) of the considered system. To gain further insight, a simple asymptotic OP expression at high SNR is provided to show the diversity order and coding gain. Finally, numerical simulations are conducted to confirm the validity of the theoretical analysis and reveal the effects of phase errors on system outage performance.
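Because the optimal location-based BF weights turn out to be a phased array, i.e., phase-only weights matched to the steering vector of the estimated direction, a short sketch can illustrate the idea. The half-wavelength uniform linear array, array size, and pointing error below are illustrative assumptions, not the paper's system parameters.

```python
import numpy as np

def steering_vector(n, theta):
    # Half-wavelength ULA steering vector toward angle theta (radians), unit norm.
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

def phased_array_snr(h, theta_hat, noise_power=1.0):
    """Output SNR when the receive weights are a phased array pointed at the
    (location-based) angle estimate theta_hat -- a sketch of the idea that
    maximizing the expected SNR yields phase-only weights."""
    w = steering_vector(len(h), theta_hat)       # unit-modulus (phase-only) weights
    return np.abs(w.conj() @ h) ** 2 / noise_power

# Illustrative: pure line-of-sight channel along 20 degrees, 2-degree pointing error.
n = 64
h = np.sqrt(n) * steering_vector(n, np.deg2rad(20.0))
print(phased_array_snr(h, np.deg2rad(20.0)))     # matched pointing: full array gain n
print(phased_array_snr(h, np.deg2rad(22.0)))     # SNR loss from the phase error
```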
Multi-agent reinforcement learning (MARL) has long been a significant research topic in both machine learning and control systems. Recent developments in (single-agent) deep reinforcement learning have created a resurgence of interest in developing new MARL algorithms, especially those founded on theoretical analysis. In this paper, we review recent advances in a sub-area of this topic: decentralized MARL with networked agents. In this scenario, multiple agents perform sequential decision-making in a common environment without the coordination of any central controller, while being allowed to exchange information with their neighbors over a communication network. Such a setting finds broad applications in the control and operation of robots, unmanned vehicles, mobile sensor networks, and the smart grid. This review covers several of our research endeavors in this direction, as well as progress made by other researchers along this line. We hope that this review promotes additional research efforts in this exciting yet challenging area.
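A recurring building block in this setting is a consensus step, in which each agent averages parameters with its neighbors over the communication graph before applying its own local gradient. The sketch below illustrates that generic mechanism with a doubly stochastic mixing matrix; it is not a reproduction of any specific algorithm covered in the review.

```python
import numpy as np

def consensus_update(theta, W, grads, lr=0.01):
    """One decentralized update: each agent i mixes parameters with its
    neighbors (row i of the mixing matrix W) and then applies its own local
    gradient. W is doubly stochastic and respects the communication graph."""
    mixed = W @ theta              # neighbor averaging over the network
    return mixed + lr * grads     # local (e.g., policy-gradient) step

# Illustrative: 4 agents on a ring, scalar parameter, zero local gradients ->
# repeated mixing drives all parameters toward the network-wide average.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
theta = np.array([1.0, 2.0, 3.0, 4.0])
for _ in range(50):
    theta = consensus_update(theta, W, np.zeros(4))
print(theta)   # ~[2.5, 2.5, 2.5, 2.5]
```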
With the fast development of consumer-level RGB-D cameras, real-world indoor three-dimensional (3D) scene modeling and robotic applications are attracting increasing attention. However, indoor 3D scene modeling remains challenging because the structure of interior objects may be complex and the RGB-D data acquired by consumer-level sensors may be of poor quality; consequently, this area has attracted extensive research. In this survey, we provide an overview of recent advances in indoor scene modeling methods, of public indoor datasets and libraries that can facilitate experiments and evaluations, and of some typical applications using RGB-D devices, including indoor localization and emergency evacuation.
We propose a novel indoor positioning algorithm based on the received signal strength (RSS) fingerprint. The proposed algorithm consists of three steps: an offline phase in which an advanced clustering (AC) strategy is used, an online approximate-localization phase in which cluster matching is used, and an online precise-localization phase based on kernel ridge regression. Specifically, after offline fingerprint collection and similarity measurement, we employ an AC strategy based on the K-medoids clustering algorithm, using additional reference points geographically located at the outer cluster boundary to enrich the data of each cluster. During approximate localization, RSS measurements are compared with the cluster radio maps to determine to which cluster the target most likely belongs. Both the Euclidean distance of the RSSs and the Hamming distance of the coverage vectors between the observations and training records are explored for cluster matching. Then, a kernel-based ridge regression method is used to obtain the final position estimate of the target. The performance of the proposed algorithm is evaluated in two typical indoor environments and compared with those of state-of-the-art algorithms. The experimental results demonstrate the effectiveness and advantages of the proposed algorithm in terms of positioning accuracy and complexity.
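As a concrete illustration of the precise-localization step, the sketch below fits kernel ridge regression from RSS fingerprints to 2D coordinates with an RBF kernel; the kernel choice, hyperparameters, and toy data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def kernel_ridge_fit(X, Y, gamma=0.5, lam=1e-3):
    """Fit kernel ridge regression mapping RSS fingerprints X (n x d) to 2-D
    coordinates Y (n x 2) with an RBF kernel -- a sketch of the precise-
    localization step; gamma and lam are illustrative hyperparameters."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)                                # RBF Gram matrix
    return np.linalg.solve(K + lam * np.eye(len(X)), Y)    # alpha = (K + lam*I)^-1 Y

def kernel_ridge_predict(X_train, alpha, x, gamma=0.5):
    k = np.exp(-gamma * np.sum((X_train - x) ** 2, axis=-1))
    return k @ alpha                                       # predicted (x, y) position

# Illustrative: three reference fingerprints in the matched cluster.
X = np.array([[-40.0, -55.0], [-45.0, -50.0], [-50.0, -45.0]])
Y = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
alpha = kernel_ridge_fit(X, Y)
print(kernel_ridge_predict(X, alpha, np.array([-45.0, -50.0])))
```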
This paper addresses the problem of joint tracking and classification (JTC) of a single extended target with a complex shape. To describe this complex shape, the spatial extent state is first modeled as a star-convex shape via a random hypersurface model (RHM), and is then used as feature information for target classification. The target state is modeled by two vectors to alleviate the influence of the high-dimensional state space and the severely nonlinear observation model on target state estimation, while the Euclidean distance metric of the normalized Fourier descriptors is applied to obtain an analytical solution of the updated class probability. The resulting method is called the “JTC-RHM method.” Furthermore, the proposed JTC-RHM is integrated into a Bernoulli filter framework to solve the JTC of a single extended target in the presence of detection uncertainty and clutter, resulting in the JTC-RHM-Ber filter, and the recursive expressions of this filter are derived. Simulations indicate that: (1) the proposed JTC-RHM method classifies targets with complex shapes and similar sizes more accurately than the JTC method based on the random matrix model; (2) the proposed method performs better in target state estimation than the star-convex RHM based extended target tracking method; (3) the proposed JTC-RHM-Ber filter shows promising performance in state detection and estimation, and can correctly classify targets.
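The class feature underlying the probability update can be illustrated as follows: normalized Fourier descriptors of a closed contour are invariant to translation, scale, and rotation, and shapes are compared by the Euclidean distance between descriptor vectors. The normalization details and contours below are illustrative, not the paper's exact construction.

```python
import numpy as np

def normalized_fourier_descriptors(contour, n_desc=16):
    """Normalized Fourier descriptors of a closed 2-D contour: drop the DC
    term (translation), divide by the first harmonic (scale), and take
    magnitudes (rotation / starting point) -- a sketch of the shape feature."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex boundary signal
    F = np.fft.fft(z)[1:n_desc + 1]          # discard DC term
    return np.abs(F) / np.abs(F[0])

def descriptor_distance(c1, c2, n_desc=16):
    # Euclidean metric between descriptor vectors, as in the abstract.
    return np.linalg.norm(normalized_fourier_descriptors(c1, n_desc)
                          - normalized_fourier_descriptors(c2, n_desc))

# Illustrative: a circle vs. a square, each sampled at 64 boundary points.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
r = np.maximum(np.abs(np.cos(t)), np.abs(np.sin(t)))
square = np.c_[np.cos(t) / r, np.sin(t) / r]
print(descriptor_distance(circle, square))
```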
We propose a novel circuit for fractional-order memristive neural synaptic weighting (FMNSW). The introduced circuit differs from the majority of previous integer-order approaches and offers important advantages. Since the concept of the memristor has been generalized from the classic integer-order memristor to the fractional-order memristor (fracmemristor), a challenging theoretical problem is whether the fracmemristor can be employed to implement fractional-order memristive synapses. In this research, the characteristics of the FMNSW, realized by a pulse-based fracmemristor bridge circuit, are investigated. First, the circuit configuration of the FMNSW is explained using a pulse-based fracmemristor bridge circuit. Second, a mathematical proof of the fractional-order learning capability of the FMNSW is given. Finally, experimental work and analyses of the electrical characteristics of the FMNSW are presented. The FMNSW's strong ability to explain the cellular mechanisms that underlie learning and memory, in which it is superior to traditional integer-order memristive neural synaptic weighting, is considered a major advantage of the proposed circuit.
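What distinguishes the fracmemristor from its integer-order counterpart is fractional-order dynamics, in which the response at any instant depends on the entire input history. The sketch below gives a Grünwald–Letnikov numerical approximation of a fractional derivative to illustrate this notion; it models the mathematical concept only, not the pulse-based bridge circuit.

```python
import numpy as np

def gl_fractional_derivative(x, alpha, dt):
    """Grunwald-Letnikov approximation of the order-alpha derivative of a
    sampled signal x: each output sample is a weighted sum over the signal's
    entire history, which is the hallmark of fractional-order dynamics
    (a numerical sketch, not the fracmemristor bridge circuit itself)."""
    n = len(x)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k    # GL binomial weights, recursively
    y = np.array([np.dot(w[:k + 1], x[k::-1]) for k in range(n)])
    return y / dt ** alpha

# Illustrative: half-order (alpha = 0.5) derivative of a unit ramp; the exact
# value at t = 1 is 2 / sqrt(pi) ~= 1.128.
t = np.linspace(0, 1, 200)
print(gl_fractional_derivative(t, 0.5, t[1] - t[0])[-1])
```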
In the standard grey wolf optimizer (GWO), a search wolf must wait to update its current position until all the other search wolves have been compared with the three leader wolves. Because of this waiting period, the standard GWO can be regarded as a static GWO. To eliminate this waiting period, two dynamic GWO algorithms are proposed: the first dynamic grey wolf optimizer (DGWO1) and the second dynamic grey wolf optimizer (DGWO2). In the dynamic GWO algorithms, the current search wolf does not need to wait for the comparisons between all the other search wolves and the leading wolves; its position can be updated as soon as the comparison between itself or the previous search wolf and the leading wolves is completed. Because the position of each search wolf is updated promptly, the dynamic GWO algorithms converge faster. Based on the structure of the dynamic GWOs, the performance of other improved GWOs is examined, verifying that, for the same improvement, the variant based on the dynamic GWO performs better than that based on the static GWO in most instances.
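The difference between static and dynamic updating can be sketched directly: in the static GWO the three leaders are fixed for a whole generation, whereas in a dynamic GWO they are refreshed as soon as any wolf moves. The following sketch is in the spirit of DGWO1/DGWO2 rather than a line-for-line reproduction of either algorithm, and the test function and settings are illustrative.

```python
import numpy as np

def dynamic_gwo(f, dim, n_wolves=20, iters=100, rng=np.random.default_rng(0)):
    """Grey wolf optimizer in which the three leaders (alpha, beta, delta)
    are re-selected immediately after each wolf moves, instead of once per
    generation -- a sketch in the spirit of the dynamic GWOs."""
    X = rng.uniform(-10, 10, (n_wolves, dim))
    fit = np.array([f(x) for x in X])
    for it in range(iters):
        a = 2.0 * (1 - it / iters)                   # control parameter: 2 -> 0
        for i in range(n_wolves):
            leaders = X[np.argsort(fit)[:3]]         # current alpha, beta, delta
            cand = np.zeros(dim)
            for L in leaders:
                A = a * (2 * rng.random(dim) - 1)
                C = 2 * rng.random(dim)
                cand += L - A * np.abs(C * L - X[i])
            X[i] = cand / 3.0
            fit[i] = f(X[i])          # leaders may change before the next wolf
    return X[np.argmin(fit)], fit.min()

best_x, best_f = dynamic_gwo(lambda x: np.sum(x ** 2), dim=5)
print(best_f)    # near 0 for the sphere test function
```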
Blind signcryption (BSC) can guarantee the blindness and untraceability of signcrypted messages and, moreover, provides unforgeability and confidentiality simultaneously. Most traditional BSC schemes are based on number theory. However, with the rapid development of quantum computing, traditional BSC systems face severe security threats. As promising candidate cryptosystems with the ability to resist quantum attacks, lattice-based cryptosystems have attracted increasing attention in academia. In this paper, a post-quantum blind signcryption scheme from lattices (PQ-LBSCS) is devised by applying BSC to lattice-based cryptosystems. PQ-LBSCS inherits the advantages of lattice-based cryptosystems and the blind signcryption technique. PQ-LBSCS is provably secure in the standard model under the hardness assumptions of the learning with errors (LWE) problem and the small integer solution (SIS) problem. Simulations are carried out in MATLAB to analyze the computational efficiency, and the results show that PQ-LBSCS is more efficient than previous schemes. PQ-LBSCS has extensive application prospects in e-commerce, mobile communication, and smart cards.
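The LWE assumption underpinning the scheme's security can be illustrated with a toy instance: given A and b = As + e (mod q) with a small error e, recovering s is believed to be hard even for quantum computers. The sketch below only generates such an instance with deliberately tiny, insecure parameters; it is not the PQ-LBSCS scheme itself.

```python
import numpy as np

def lwe_samples(n=8, m=16, q=97, sigma=1.0, rng=np.random.default_rng(1)):
    """Toy learning-with-errors instance (A, b = A s + e mod q). This only
    illustrates the hardness assumption behind PQ-LBSCS -- it is not the
    signcryption scheme, and the parameters are far too small for security."""
    A = rng.integers(0, q, size=(m, n))                     # public random matrix
    s = rng.integers(0, q, size=n)                          # secret vector
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)   # small Gaussian error
    b = (A @ s + e) % q
    return A, b, s

A, b, s = lwe_samples()
# The residues are just the error terms mod q: small values, or values
# near q for negative errors -- without s, b looks uniformly random.
print((b - A @ s) % 97)
```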
Video summarization has established itself as a fundamental technique for generating compact and concise videos, which eases the management and browsing of large-scale video data. Existing methods fail to fully consider the local and global relations among video frames, which degrades summarization performance. To address this problem, we propose a graph convolutional attention network (GCAN) for video summarization. GCAN consists of two parts, embedding learning and context fusion, where embedding learning comprises a temporal branch and a graph branch. Specifically, GCAN uses dilated temporal convolution to model local cues and temporal self-attention to exploit global cues for video frames, and learns graph embeddings via a multi-layer graph convolutional network to reveal the intrinsic structure of frame samples. The context fusion part combines the output streams of the temporal and graph branches to create context-aware representations of frames, on which importance scores are evaluated for selecting representative frames to generate the video summary. Experiments on two benchmark datasets, SumMe and TVSum, show that the proposed GCAN approach outperforms several state-of-the-art alternatives in three evaluation settings.
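Two of the embedding-learning ingredients can be sketched in a few lines: a graph convolution with a symmetrically normalized adjacency matrix for the graph branch, and a softmax self-attention over frame features for the global temporal cues. The layer shapes, chain-graph adjacency, and random features below are illustrative placeholders, not GCAN's actual architecture.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph convolution: symmetrically normalized adjacency times
    features times weights, with ReLU -- a sketch of the graph branch that
    models structure among frame samples (layer sizes are illustrative)."""
    A_hat = A + np.eye(len(A))                    # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))      # D^-1/2 (A+I) D^-1/2
    return np.maximum(A_norm @ X @ W, 0.0)

def self_attention(X):
    """Temporal self-attention over frame features (global cues); queries,
    keys, and values are the features themselves for brevity."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over frames
    return weights @ X

# Illustrative: 6 frames, 4-D features, chain graph linking adjacent frames.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
A = np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)
context = np.concatenate([self_attention(X), gcn_layer(X, A, rng.normal(size=(4, 4)))], axis=1)
print(context.shape)    # fused context-aware representation: (6, 8)
```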