Zero trust architecture is an end-to-end approach to securing server resources and data that encompasses identity authentication, access control, dynamic evaluation, and more. This work focuses on authentication technology in zero trust networks. In this paper, a Traceable Universal Designated Verifier Signature (TUDVS) is used to construct a privacy-preserving authentication scheme for zero trust architecture. Specifically, when a client requests access to server resources, we protect the client's access privacy, meaning that the server administrator cannot disclose the client's access behavior to any third party. In addition, the security of the proposed scheme is proved and its efficiency is analyzed. Finally, TUDVS is applied to the single packet authorization scenario of zero trust architecture to demonstrate the practicability of the proposed scheme.
The future Sixth-Generation (6G) wireless systems are expected to support emerging services with diverse requirements. In this paper, 6G network resource orchestration is optimized to support customized network slicing of services and to place network functions generated by heterogeneous devices onto available resources. This is a combinatorial optimization problem, which is solved by developing a Particle Swarm Optimization (PSO)-based scheduling strategy with enhanced inertia weight, particle variation, and a nonlinear learning factor, thereby balancing local and global search and improving the speed of convergence to globally near-optimal solutions. Simulations show that the method improves the convergence speed and the utilization of network resources compared with other variants of PSO.
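For readers unfamiliar with the PSO variants mentioned above, the following minimal sketch illustrates a PSO loop with a nonlinearly decaying inertia weight, time-varying learning factors, and random particle mutation. The paper's exact update rules and slicing cost model are not given in the abstract, so every parameter choice here (decay schedule, mutation rate, the toy sphere objective) is an illustrative assumption.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, bounds=(-10, 10)):
    """Minimize `cost` with a PSO variant using a nonlinearly decaying
    inertia weight and time-varying learning factors (illustrative only;
    the paper's exact update rules may differ)."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_val.argmin()].copy()             # global best

    for t in range(iters):
        frac = t / iters
        w = 0.9 - 0.5 * frac**2                      # nonlinear inertia decay
        c1 = 2.5 - 2.0 * frac                        # cognitive factor shrinks
        c2 = 0.5 + 2.0 * frac                        # social factor grows
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        # mutate a few particles to keep diversity ("particle variation")
        mut = rng.random(n_particles) < 0.05
        x[mut] = rng.uniform(lo, hi, (mut.sum(), dim))
        val = np.apply_along_axis(cost, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# toy usage on the sphere function
best_x, best_f = pso_minimize(lambda z: np.sum(z**2), dim=5)
```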
Virtual Reality (VR) is a key industry for the development of the digital economy. Mobile VR has advantages in mobility, light weight, and cost-effectiveness, and has gradually become the mainstream implementation of VR. In this paper, a mobile VR video adaptive transmission mechanism based on intelligent caching and a hierarchical buffering strategy in Mobile Edge Computing (MEC)-equipped 5G networks is proposed, targeting the low-latency requirements of mobile VR services and flexible buffer management for VR video adaptive transmission. First, to support proactive caching of VR content and intelligent buffer management, users' behavioral similarity and head movement trajectories are jointly used for viewpoint prediction, and tile-based content is proactively cached in the MEC nodes according to the popularity of the VR content. Second, a hierarchical buffer-based adaptive update algorithm is presented, which jointly considers bandwidth, buffer, and predicted viewpoint status to update the tile chunks in the client buffer. Then, by decomposition, the buffer update problem is modeled as an optimization problem, and the corresponding solution algorithms are presented. Finally, simulation results show that the adaptive caching algorithm based on the 5G intelligent edge and the hierarchical buffering strategy improves the user experience under bandwidth fluctuations, and the proposed viewpoint prediction method improves the accuracy of viewpoint prediction by 15%.
The deployment of distributed multi-controllers in Software-Defined Networking (SDN) architecture is an emerging solution for improving network scalability and manageability. However, control plane failures affect dynamic resource allocation in distributed networks, resulting in network disruption and low resilience. We therefore consider control plane fault tolerance for cost-effective and accurate controller placement models during control plane failures. This fault-tolerance strategy is applied to a distributed SDN control architecture that allows each switch to migrate to the next controller to maintain network performance. In this paper, the Reliable and Dynamic Mapping-based Controller Placement (RDMCP) problem in distributed architecture is framed as an optimization problem to improve system reliability, quality, and availability. Under bound constraints, a state-of-the-art heuristic Controller Placement Problem (CPP) algorithm is used to address the optimal assignment and reassignment of switches to nearby controllers other than their regular controllers. The algorithm identifies the optimal controller locations, the minimum number of controllers, and the expected assignment costs after failure at the lowest effective cost. A metaheuristic Particle Swarm Optimization (PSO) algorithm is then combined with RDMCP to form a hybrid approach that improves the objective function in terms of reliability and cost-effectiveness. The effectiveness of our hybrid RDMCP-PSO is evaluated in extensive experiments and compared with baseline algorithms. The findings demonstrate that the proposed hybrid technique significantly improves network performance over the standalone heuristic CPP algorithm in terms of controller count and load balancing.
The development of the Internet of Things (IoT) technology is leading to a new era of smart applications such as smart transportation, smart buildings, and smart homes. Moreover, these applications act as the building blocks of IoT-enabled smart cities. The high volume and high velocity of data generated by various smart city applications are sent to flexible and efficient cloud computing resources for processing. However, the remote cloud server introduces high computation latency. Edge computing, which brings the computation close to the data source, is introduced to overcome this problem. In an IoT-enabled smart city environment, one of the main concerns is to consume the least amount of energy while executing tasks that satisfy the delay constraint, and efficient resource allocation at the edge helps address this issue. In this paper, an energy and delay minimization problem in a smart city environment is formulated as a bi-objective edge resource allocation problem. First, we present a three-layer network architecture for IoT-enabled smart cities. Then, we design a learning automata-based edge resource allocation approach built on this architecture to solve the bi-objective minimization problem. A Learning Automaton (LA) is a reinforcement-based adaptive decision-maker that helps find the best task-to-edge-resource mapping. An extensive set of simulations demonstrates the applicability and effectiveness of the LA-based approach in the IoT-enabled smart city environment.
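As a concrete illustration of the LA machinery, the following sketch implements the classical linear reward-inaction (L_R-I) update for choosing among edge nodes. The paper's actual reinforcement scheme and its energy/delay reward design are not given in the abstract, so the reward model below is a hypothetical stand-in.

```python
import numpy as np

class LearningAutomaton:
    """Linear reward-inaction (L_R-I) automaton, one classical LA scheme.
    Illustrative of LA-based task-to-edge mapping; the paper's exact
    reinforcement scheme and reward design are not specified here."""
    def __init__(self, n_actions, lr=0.1, seed=0):
        self.p = np.full(n_actions, 1.0 / n_actions)  # action probabilities
        self.lr = lr
        self.rng = np.random.default_rng(seed)

    def choose(self):
        return self.rng.choice(len(self.p), p=self.p)

    def update(self, action, reward):
        # reward in [0, 1]: higher means lower energy/delay cost (assumed)
        self.p += self.lr * reward * (np.eye(len(self.p))[action] - self.p)

# toy usage: map one task onto 4 edge nodes with unknown success rates
true_quality = [0.2, 0.5, 0.9, 0.4]     # hypothetical reward means
la = LearningAutomaton(n_actions=4)
for _ in range(2000):
    a = la.choose()
    r = float(la.rng.random() < true_quality[a])
    la.update(a, r)
print("learned preference:", la.p.round(3))   # concentrates on node 2
```

Note that the L_R-I update preserves the probability simplex: the increments sum to zero, so the action probabilities remain a valid distribution at every step.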
Training-based cellular communication systems use orthogonal pilot sequences to limit pilot contamination. However, the orthogonality constraint imposes a certain pilot length, and therefore, in communication systems with a large number of users, time-frequency resources are wasted significantly in the training phase. In cellular massive MIMO systems, the time-frequency resources can be used more efficiently by replacing the orthogonal pilots with shorter non-orthogonal pilot sequences in such a way that more space is available for the transmission of additional data symbols, thus achieving higher data rates. Of course, the use of non-orthogonal pilots introduces additional pilot contamination, so a performance improvement can be achieved only under certain system conditions, which are thoroughly investigated in this paper. We first provide a performance analysis framework for the uplink of cellular massive MIMO systems in which the effect of user pilot non-orthogonality is analytically modelled. Within this framework, we derive analytical expressions for the channel estimate, the user Signal-to-Interference-plus-Noise Ratio (SINR), and the average channel capacity per cell. We then use the proposed framework to evaluate the achievable spectral efficiency gain obtained by replacing orthogonal pilots with non-orthogonal counterparts. In particular, the trade-off between pilot length and the additional data symbols that can be transmitted by reducing the number of pilot symbols is numerically quantified over a wide range of system parameters.
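The pilot/data trade-off at the heart of this analysis can be made concrete with the standard massive MIMO spectral efficiency relation (a generic textbook form, not the paper's derived expressions):

$$
R_k \;=\; \left(1 - \frac{\tau}{T}\right)\,\log_2\!\bigl(1 + \mathrm{SINR}_k(\tau)\bigr),
$$

where $T$ is the coherence block length in symbols and $\tau$ is the pilot length. Shortening the pilots from an orthogonal length $\tau \ge K$ (for $K$ users per cell) to a non-orthogonal length $\tau < K$ increases the pre-log factor $1-\tau/T$ but decreases $\mathrm{SINR}_k$ through the additional pilot contamination, so the net spectral efficiency gain depends on exactly the system conditions the paper investigates.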
In this paper, in order to reduce the energy leakage caused by the discretized representation in sparse channel estimation for Orthogonal Frequency Division Multiplexing (OFDM) systems, we systematically analyze the optimal locations of atoms with discrete delays for reconstructing each path from the perspective of linear fitting theory. We then investigate the adverse effect of the non-ideal inner product function on the iterations of one of the most widely used channel estimation methods, Orthogonal Matching Pursuit (OMP). The study shows that the distance between the atoms selected for each path in OMP can be larger than the sampling interval, which prevents OMP-based methods from achieving better performance. To overcome this drawback, an image deblurring-based channel estimation method, in which the channel estimation problem is treated as one-dimensional image deblurring, is proposed to mitigate the large compensation distance of traditional OMP. The advantage of the proposed method is validated by numerical simulations and by decoding sea trial data.
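For reference, here is a minimal sketch of the baseline OMP recovery loop whose atom selection (the inner-product step) the paper analyzes. The image-deblurring refinement itself is not reproduced, and the dictionary, dimensions, and noise level below are illustrative assumptions.

```python
import numpy as np

def omp(A, y, n_iter):
    """Standard Orthogonal Matching Pursuit for y ≈ A @ h with sparse h.
    This is the baseline the paper improves upon; its image-deblurring
    refinement is not reproduced here."""
    m, n = A.shape
    residual = y.copy()
    support = []
    h = np.zeros(n, dtype=A.dtype)
    for _ in range(n_iter):
        # atom most correlated with the residual (the "inner product" step
        # whose non-ideal behavior the paper analyzes)
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares re-fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        h[:] = 0
        h[support] = coef
        residual = y - A @ h
    return h

# toy usage: recover a 3-tap sparse channel from noisy measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 128)) / 8.0
h_true = np.zeros(128); h_true[[5, 40, 90]] = [1.0, -0.7, 0.4]
y = A @ h_true + 0.01 * rng.standard_normal(64)
h_hat = omp(A, y, n_iter=3)
```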
The LTE-Unlicensed (LTE-U) network is a type of cellular communication network operating in the unlicensed spectrum. Offloading cellular traffic to WiFi or Device-to-Device (D2D) networks can cause interference among them. Applying Multiple-Input Multiple-Output (MIMO) technology at the Cellular Base Station (CBS) and WiFi Access Point (WAP) can effectively reduce the interference among D2D, WiFi, and cellular networks. To the best of our knowledge, no existing work explicitly studies the characteristics of traffic offloading in a Multi-User MIMO (MU-MIMO)-enabled network coexisting with D2D and WiFi networks. In this article, we thoroughly investigate the impact of D2D communication and MU-MIMO-enabled WAP and CBS on the performance of the LTE-U network. More specifically, we derive expressions for the downlink rates of cellular users, D2D users, and WiFi users with incomplete Channel State Information (CSI) feedback, and we validate our analysis through Monte Carlo simulation. The numerical results lead to the following conclusions. (i) Increasing the number of WiFi users, the length of CSI feedback, and the number of D2D pairs that reuse the channel with a single cellular user increases the total throughput of the heterogeneous network. (ii) The total throughput decreases when more than two users are offloaded to D2D pairs, but increases with the number of offloaded users when fewer than six users are offloaded to the WiFi network. (iii) Simultaneously offloading traffic to D2D pairs and the WiFi network yields higher total throughput than offloading traffic to only one of them.
Terahertz (THz) wireless communication has been recognized as a powerful technology for meeting the ever-increasing demand for ultra-high-rate services. To achieve efficient and reliable wireless communications over THz bands, it is essential to find an appropriate waveform for THz communications. In this paper, a performance comparison of various single-carrier and multi-carrier waveforms over THz channels is provided. Specifically, a system model for terahertz communication is first briefly described, which includes amplifier nonlinearity, propagation characteristics, phase noise, etc. Then, the transceiver architectures for both single-carrier and multi-carrier waveforms are presented, together with their corresponding signal processing techniques. To evaluate the suitability of the waveforms, key performance metrics concerning power efficiency, transmission performance, and computational complexity are provided. Simulation results compare and validate the performance of the different waveforms, demonstrating the superiority of Discrete-Fourier-Transform spread Orthogonal Frequency Division Multiplexing (DFT-s-OFDM) for THz communications compared with Cyclic Prefix-OFDM (CP-OFDM) and other single-carrier waveforms.
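The power efficiency argument for DFT-s-OFDM can be reproduced in a few lines: DFT-precoding the symbol block before subcarrier mapping restores a near-single-carrier envelope and hence a lower Peak-to-Average Power Ratio (PAPR), which matters for nonlinear THz amplifiers. The sketch below compares empirical PAPR for CP-OFDM and DFT-s-OFDM with localized mapping; the block sizes, QPSK alphabet, and 99th-percentile summary are illustrative assumptions rather than the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, n_sym = 256, 64, 20_000     # FFT size, occupied subcarriers, symbols
qpsk = (rng.choice([-1.0, 1.0], (n_sym, M)) +
        1j * rng.choice([-1.0, 1.0], (n_sym, M))) / np.sqrt(2)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max(axis=1) / p.mean(axis=1))

# CP-OFDM: map M QPSK symbols straight onto subcarriers, then IFFT
grid = np.zeros((n_sym, N), complex); grid[:, :M] = qpsk
papr_ofdm = papr_db(np.fft.ifft(grid, axis=1))

# DFT-s-OFDM: DFT-precode the block before localized subcarrier mapping
grid_dfts = np.zeros((n_sym, N), complex)
grid_dfts[:, :M] = np.fft.fft(qpsk, axis=1)
papr_dfts = papr_db(np.fft.ifft(grid_dfts, axis=1))

print(f"99th-percentile PAPR: CP-OFDM {np.percentile(papr_ofdm, 99):.1f} dB, "
      f"DFT-s-OFDM {np.percentile(papr_dfts, 99):.1f} dB")
```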
Visibility conditions between antennas, i.e., Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS), can be crucial in the context of indoor localization: detecting the NLOS condition and then correcting systematic position estimation errors or reallocating resources can reduce the negative influence of multipath propagation on wireless communication and positioning. In this paper, a Deep Learning (DL) model is proposed to classify the LOS/NLOS condition from two Channel Impulse Response (CIR) parameters: Total Power (TP) [dBm] and First Path Power (FP) [dBm]. The experiments were conducted with a DecaWave DWM1000 radio module using measurements collected in a real indoor environment. The proposed architecture provides LOS/NLOS identification with an accuracy of nearly 100% and 95% in static and dynamic scenarios, respectively, and improves the classification rate by 2-5% compared with other Machine Learning (ML) methods proposed in the literature.
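Since the abstract does not specify the DL architecture, the following sketch is only a plausible minimal form: a small fully connected network that classifies LOS/NLOS from the two CIR features (TP, FP). The layer sizes, optimizer, and labeling convention are all assumptions.

```python
import torch
import torch.nn as nn

# Minimal two-feature LOS/NLOS classifier (illustrative; the paper's DL
# architecture is not specified in this summary). Inputs are the two CIR
# parameters: total power TP [dBm] and first-path power FP [dBm].
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                      nn.Linear(32, 32), nn.ReLU(),
                      nn.Linear(32, 1))            # logit: > 0 -> NLOS
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(tp_fp, labels):
    """tp_fp: (batch, 2) tensor of [TP, FP]; labels: 1.0 = NLOS, 0.0 = LOS."""
    opt.zero_grad()
    loss = loss_fn(model(tp_fp).squeeze(1), labels)
    loss.backward()
    opt.step()
    return loss.item()
```

A classical heuristic is that the gap TP − FP grows under NLOS, because more of the received energy arrives in late multipath components; a learned model can capture such structure, and richer ones, automatically.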
Millimeter-wave (mmWave) Non-Orthogonal Multiple Access (NOMA) with random beamforming is a promising technology for guaranteeing the massive connectivity and low-latency transmissions of future generations of mobile networks. In this paper, we introduce a cost-effective and energy-efficient mmWave-NOMA system that exploits sparse antenna arrays at the transmitter. Our analysis shows that utilizing low-weight, small-sized sparse antenna arrays at the Base Station (BS) leads to better outage probability performance. We also introduce a low-complexity Equilibrium Optimization (EO)-based algorithm to further improve the outage probability. The simulation and analytical results show that systems equipped with sparse antenna arrays using optimum beamforming vectors outperform conventional systems with uniform linear arrays in terms of outage probability and sum rate.
The new generation of communication systems is moving toward the millimeter-wave spectrum. Since shadowing effects are undeniable in this type of propagation, the proposed Generalized Fisher (GF) distribution can be useful for modeling shadowed fading channels, accounting for the non-linearity and the multi-cluster nature of the diffusion medium. After introducing the model, its main statistics, including the Probability Density Function (PDF), Cumulative Distribution Function (CDF), Moment Generating Function (MGF), and the distribution of the sum of an arbitrary number of independent and non-identically distributed (i.n.i.d.) GF random variables, are derived. Subsequently, criteria for wireless communication applications, such as the ergodic and outage capacities, are computed. Finally, considering the classic Wyner wiretap model and a passive eavesdropping scenario, security metrics such as the probability of non-zero secrecy capacity and the secrecy outage probability are also determined. These expressions are given in terms of either univariate or multivariate Fox's H-functions.
In this study, we analyze the performance of an Unmanned Aerial Vehicle (UAV)-based mixed Underwater Power Line Communication-Radio Frequency (UPLC-RF) network. In this network, a buoy at sea acts as a relay that forwards signals from the underwater signal source, received over the PLC link, to the UAV. We assume that the UPLC channel obeys a log-normal distribution and that the RF link follows the Rician distribution. Using this model, we obtain closed-form expressions for the Outage Probability (OP), Average Bit Error Rate (ABER), and Average Channel Capacity (ACC). In addition, asymptotic analyses of the OP and ABER are performed, and an upper bound on the average capacity is obtained. Finally, the analytical results are verified by Monte Carlo simulation, demonstrating the effects of impulse noise and the altitude of the UAV on network performance.
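The Monte Carlo validation step can be sketched as follows for a decode-and-forward interpretation of the buoy relay, where the end-to-end SNR is limited by the weaker hop. The relaying protocol, all channel parameters, and the omission of impulse noise are simplifying assumptions not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
gamma_th = 2.0                        # SNR outage threshold (hypothetical)
snr1_db_mu, snr1_db_sd = 10.0, 4.0    # log-normal UPLC hop (assumed params)
K, snr2_bar = 5.0, 10.0               # Rician factor and mean RF-hop SNR

# hop 1: log-normal SNR (Gaussian in the dB domain)
snr1 = 10 ** (rng.normal(snr1_db_mu, snr1_db_sd, N) / 10)

# hop 2: Rician fading; |h|^2 normalized to unit mean
s = np.sqrt(K / (K + 1)); sigma = np.sqrt(1 / (2 * (K + 1)))
h = (s + sigma * rng.standard_normal(N)) + 1j * sigma * rng.standard_normal(N)
snr2 = snr2_bar * np.abs(h) ** 2

# decode-and-forward: end-to-end SNR limited by the weaker hop
op = np.mean(np.minimum(snr1, snr2) < gamma_th)
print(f"simulated outage probability: {op:.4f}")
```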
Although speech emotion recognition is challenging, it has broad application prospects in human-computer interaction. Building a system that can accurately and stably recognize emotions from human language can provide a better user experience. However, current unimodal emotion feature representations are not distinctive enough for the recognition task, and they do not effectively model the inter-modality dynamics in speech emotion recognition. This paper proposes a multimodal method that utilizes both audio and semantic content for speech emotion recognition. The proposed method consists of three parts: two high-level feature extractors for the text and audio modalities, and an autoencoder-based feature fusion module. For the audio modality, we propose a structure called the Temporal Global Feature Extractor (TGFE) to extract high-level time-frequency-domain features from the original speech signal. Because text lacks frequency information, we use only a Bidirectional Long Short-Term Memory network (BLSTM) with an attention mechanism to model the intra-modal dynamics. Once these steps have been completed, the high-level text and audio features are fed to the autoencoder in parallel to learn their shared representation for final emotion classification. We conducted extensive experiments on three public benchmark datasets to evaluate our method. The results on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset and the Multimodal EmotionLines Dataset (MELD) outperform existing methods, and the results on the CMU Multimodal Opinion-level Sentiment Intensity (CMU-MOSI) dataset are competitive. Furthermore, the experimental results show that, compared with unimodal information and autoencoder-based feature-level fusion, jointly using multimodal information (audio and text) improves overall performance and achieves greater accuracy than simple feature concatenation.
Efficient Convolution Operator (ECO) algorithms have achieved impressive performance in visual tracking. However, the feature extraction network of ECO is not well suited to capturing the correlation features of occluded and blurred targets across long-range, complex scene frames. Moreover, its fixed-weight fusion strategy does not exploit the complementary properties of deep and shallow features. In this paper, we propose a new target tracking method, ECO++, using adaptive fusion of deep features in complex scenes, with two main contributions. First, we construct a new temporal convolution mode and use it to replace the underlying convolution layer in the Conformer network, yielding an improved Conformer network. Second, we adaptively fuse the deep features output by the improved Conformer network by combining the Peak-to-Sidelobe Ratio (PSR), frame smoothness scores, and adaptive adjustment weights. Extensive experiments on the OTB-2013, OTB-2015, UAV123, and VOT2019 benchmarks demonstrate that the proposed approach outperforms state-of-the-art algorithms in tracking accuracy and robustness in complex scenes with occluded, blurred, and fast-moving targets.
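The PSR used in the adaptive fusion is a standard tracking-confidence measure and is easy to state precisely. The sketch below computes it for a correlation response map; the size of the excluded peak window is an illustrative choice.

```python
import numpy as np

def psr(response, exclude=5):
    """Peak-to-Sidelobe Ratio of a correlation response map: the peak value
    relative to the mean and spread of the sidelobe region (everything
    outside a small window around the peak). Higher PSR = sharper, more
    confident response."""
    peak_idx = np.unravel_index(response.argmax(), response.shape)
    peak = response[peak_idx]
    mask = np.ones_like(response, dtype=bool)
    r0, c0 = peak_idx
    mask[max(0, r0 - exclude):r0 + exclude + 1,
         max(0, c0 - exclude):c0 + exclude + 1] = False   # exclude peak window
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)
```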
The phenomenal popularity of smart mobile computing hardware is enabling pervasive edge intelligence and ushering us into a digital twin era. However, the natural barrier between edge equipment owned by different interested parties poses unique challenges for cross-domain trust management. In addition, the openness of radio access and the accessibility of edge services render edge intelligence systems vulnerable and put sensitive user data in jeopardy. This paper presents an intrusion protection mechanism for edge trust transfer to address the inter-edge trust management issue and the conundrum of detecting indistinguishable malevolent nodes launching weak attacks. First, an inter-edge reputation transfer framework is established that leverages the trust quality of different edges to retain users' accumulated trust histories as they roam across multi-edge environments. Second, a fine-grained intrusion protection system is proposed to reduce the negative impact of attacks on user interactions and to improve the overall trust quality and system security of edge intelligence services. The experimental results validate the effectiveness and superior performance of the proposed intrusion protection mechanism in securing, enhancing, and consolidating edge intelligence services.
Signal detection plays an essential role in massive Multiple-Input Multiple-Output (MIMO) systems. However, existing detection methods have not yet achieved a good tradeoff between Bit Error Rate (BER) and computational complexity, resulting in slow convergence or high complexity. To address this issue, a low-complexity Approximate Message Passing (AMP) detection algorithm with a Deep Neural Network (DNN), denoted AMP-DNN, is investigated in this paper. First, an efficient AMP detection algorithm is derived by scalarizing a simplification of the Belief Propagation (BP) algorithm. Second, by unfolding the obtained AMP detection algorithm, a DNN is specifically designed for optimal performance gain. In the proposed AMP-DNN, the number of trainable parameters depends only on the number of layers, regardless of modulation scheme, antenna count, or matrix computations, thus enabling fast and stable training of the network. In addition, the AMP-DNN can detect different channels drawn from the same distribution after a single training. The superior performance of the AMP-DNN is verified by both theoretical analysis and experiments. The proposed algorithm reduces the BER without prior signal information, especially over spatially correlated channels, and has lower computational complexity than existing state-of-the-art methods.
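To make the unfolding idea concrete, the following is a plain AMP baseline for a real-valued model y = Ax + n with BPSK symbols; AMP-DNN would unfold such iterations into network layers and learn per-layer parameters. The BPSK denoiser, dimensions, and noise level are illustrative assumptions.

```python
import numpy as np

def amp_bpsk(A, y, n_iter=20):
    """Approximate Message Passing for y = A @ x + n with x in {±1}^n
    (BPSK) and i.i.d.-Gaussian A. A plain AMP baseline; the paper's
    AMP-DNN learns per-layer parameters on top of unfolded iterations."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        tau2 = np.mean(z ** 2)               # per-iteration noise estimate
        r = x + A.T @ z                      # pseudo-observation: x + noise
        x_new = np.tanh(r / tau2)            # posterior mean for ±1 prior
        eta_prime_mean = np.mean((1 - x_new ** 2) / tau2)
        z = y - A @ x_new + (n / m) * z * eta_prime_mean   # Onsager term
        x = x_new
    return np.sign(x)

# toy usage
rng = np.random.default_rng(2)
m, n = 128, 64
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.choice([-1.0, 1.0], n)
y = A @ x_true + 0.1 * rng.standard_normal(m)
print("bit errors:", int(np.sum(amp_bpsk(A, y) != x_true)))
```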
An Unmanned Aerial Vehicle (UAV) can serve as an aerial base station featuring flexible deployment and mobility, and it can significantly improve the communication quality of a system thanks to its line-of-sight channels to ground devices. However, due to the openness of UAV-to-ground channels, the communication between ground devices and the UAV is easily eavesdropped. In this paper, we aim to improve the security of the communication system by using a full-duplex UAV as a mobile aerial base station: the UAV sends interference signals to eavesdroppers while receiving signals from ground devices. We jointly optimize the scheduling between the UAV and ground devices, the transmission power of the UAV and ground devices, and the trajectory of the UAV to maximize the minimum average secure communication rate. The resulting optimization problem involves integer variables and non-convex expressions, so it is not a standard convex program and cannot be solved directly by standard methods. With this in mind, we propose an effective algorithm that solves the problem iteratively by applying Successive Convex Approximation (SCA), variable relaxation, and substitution. Finally, numerical results demonstrate the effectiveness of the proposed algorithm.
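The SCA idea can be seen on a one-dimensional toy problem: replace the non-convex objective with a convex surrogate that upper-bounds it and touches it at the current iterate, then minimize the surrogate repeatedly. The example below (a difference-of-convex objective whose surrogate has a closed-form minimizer) is purely illustrative and unrelated to the paper's UAV formulation.

```python
import numpy as np

# Successive Convex Approximation on a toy nonconvex problem:
#   minimize f(x) = x^4 - 2x^2   (convex x^4 plus concave -2x^2).
# At iterate x_k, the concave part is linearized by its tangent (an upper
# bound for a concave function), giving the convex surrogate
#   x^4 - 4*x_k*x + const,  whose minimizer satisfies x^3 = x_k.
x = 0.3                              # arbitrary starting point
for _ in range(30):
    x = np.sign(x) * abs(x) ** (1.0 / 3.0)   # closed-form surrogate minimizer
print(x)                             # converges to the stationary point x = 1
```

Because the surrogate upper-bounds the objective and matches it at the current point, each iteration cannot increase the true objective, which is what guarantees monotone convergence in SCA-based schemes like the paper's.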
The Internet of Things (IoT) has permeated various fields relevant to our lives. In these applications, countless IoT devices transmit vast amounts of data, which often carry important and private information. To prevent malicious users from spoofing this information, the first critical step is effective authentication. Physical Layer Authentication (PLA) exploits unique characteristics inherent to wireless signals and physical devices, and is promising in the IoT due to its flexibility, low complexity, and transparency to higher-layer protocols. In this paper, the focus is on the interaction between multiple malicious spoofers and legitimate receivers in the PLA process. First, the interaction is formulated as a static spoof-detection game with the spoofers and receivers as players, and the receiver's best authentication threshold and the spoofers' attack rate are considered as the Nash Equilibrium (NE). Then, closed-form expressions are derived for all NEs in the static environment in three cases: multiplayer games, zero-sum games with collisions, and zero-sum games without collisions. For the dynamic environment, a Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm is proposed to analyze the interactions of the receiver and spoofers. Finally, comprehensive simulation experiments demonstrate the impact of environmental parameters on the NEs, providing guidance for the design of effective PLA schemes.
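For the zero-sum cases, a mixed-strategy NE can always be computed numerically via linear programming, which is useful for cross-checking closed-form results. The sketch below does this for a toy 2x2 receiver-versus-spoofer payoff matrix whose entries are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_ne(A):
    """Mixed-strategy NE of a two-player zero-sum game via linear
    programming (row player maximizes). Illustrative of the static
    spoof-detection game; the paper derives closed-form NEs instead."""
    m, n = A.shape
    # variables: row strategy x (m entries) and game value v; minimize -v
    c = np.concatenate([np.zeros(m), [-1.0]])
    # constraints: v - x^T A[:, j] <= 0 for every column strategy j
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]])[None, :]   # sum(x) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], res.x[m]

# toy payoff: receiver rows = {low threshold, high threshold},
# spoofer cols = {attack, stay silent}; entries = receiver utility (assumed)
A = np.array([[ 1.0, -0.5],
              [-1.0,  0.5]])
strategy, value = zero_sum_ne(A)   # here: mix 50/50, game value 0
```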
In Software-Defined Networks (SDNs), determining how to efficiently achieve Quality of Service (QoS)-aware routing is challenging but critical for significantly improving network performance, where the QoS metrics can be defined as, for example, average latency, packet loss ratio, and throughput. The SDN controller can use network statistics and a Deep Reinforcement Learning (DRL) method to address this challenge. In this paper, we formulate dynamic routing in an SDN as a Markov decision process and propose a DRL algorithm called the Asynchronous Advantage Actor-Critic QoS-aware Routing Optimization Mechanism (AQROM) to determine routing strategies that balance the traffic loads in the network. AQROM can improve the QoS of the network and reduce the training time via dynamic routing strategy updates; that is, the reward function can be dynamically and promptly altered based on the optimization objective, regardless of the network topology and traffic pattern. AQROM can be considered a one-step optimization and a black-box routing mechanism over high-dimensional input and output sets, for both discrete and continuous states and actions, with respect to the operations in the SDN. Extensive simulations were conducted using OMNeT++, and the results demonstrate that AQROM 1) achieves much faster and more stable convergence than Deep Deterministic Policy Gradient (DDPG) and Advantage Actor-Critic (A2C), 2) incurs a lower packet loss ratio and latency than Open Shortest Path First (OSPF), DDPG, and A2C, and 3) yields higher and more stable throughput than OSPF, DDPG, and A2C.
With the arrival of the 5G era, wireless communication technologies and services are rapidly exhausting the limited spectrum resources. Spectrum auctions have emerged as an effective way to allocate spectrum resources. Because of the complexity of the electronic spectrum auction network environment, however, the security of spectrum auctions cannot be taken for granted. Most scholars focus on the security of single-sided auctions, ignoring the practical scenario of a secure double spectrum auction in which the participants comprise multiple sellers and buyers. Researchers have designed secure double spectrum auction mechanisms in which two semi-honest agents are introduced to execute the spectrum auction rules. However, these two agents may collude with each other or be bribed by buyers and sellers, creating security risks; therefore, a secure double spectrum auction is proposed in this paper. Unlike traditional secure double spectrum auctions, a spectrum auction server with a Software Guard Extensions (SGX) component performs the auctions on an Ethereum blockchain platform. A secure double spectrum protocol is also designed, using SGX technology and cryptographic tools such as the Paillier cryptosystem, stealth addresses, and one-time ring signatures to protect the private information of spectrum auctions. In addition, smart contracts provided by the Ethereum blockchain platform are executed to assist offline verification and to verify important spectrum auction information, ensuring the fairness and impartiality of spectrum auctions. Finally, the security analysis and performance evaluation of our protocol are discussed.
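Of the cryptographic tools listed, the Paillier cryptosystem is the one whose role (aggregating encrypted bids without decrypting them) is easiest to demonstrate compactly. The sketch below shows textbook Paillier encryption and its additive homomorphism with toy, insecure parameters; SGX, stealth addresses, and ring signatures are not shown.

```python
# Textbook Paillier additive homomorphism with toy parameters (NOT secure
# key sizes); illustrates how encrypted bids could be aggregated without
# decryption. The paper's full protocol additionally uses SGX, stealth
# addresses, and one-time ring signatures, none of which appear here.
import math, random

p, q = 1789, 1867                 # toy primes; real keys use >=1024-bit primes
n = p * q; n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1                         # standard generator choice g = n + 1

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    x = pow(c, lam, n2)
    L = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (L * mu) % n

bid1, bid2 = 120, 250
assert dec(enc(bid1) * enc(bid2) % n2) == bid1 + bid2   # E(a)*E(b) = E(a+b)
```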
Accurate classification of encrypted traffic plays an important role in network management. However, current methods confront several problems: inability to characterize traffic that exhibits great dispersion, inability to classify traffic with multi-level features, and degradation under limited training traffic. To address these problems, this paper proposes a traffic granularity-based encrypted traffic classification method, called the Granular Classifier (GC). First, a novel Cardinality-based Constrained Fuzzy C-Means (CCFCM) clustering algorithm is proposed to address the problem caused by limited training traffic, considering the ratio of cardinality that must be linked between flows to achieve good traffic partitioning. Then, an original representation format for traffic, named Traffic Granules (TG), is presented based on granular computing; it accurately describes traffic structure by capturing the dispersion of different traffic features. Each granule is a compact set of similar data with a refined boundary obtained by excluding outliers. Based on TG, the GC is constructed to perform traffic classification with multi-level features. The performance of the GC is evaluated on real-world encrypted network traffic. Experimental results show that the GC achieves outstanding performance for encrypted traffic classification with limited training traffic and maintains accurate classification under dynamic network conditions.
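For context, the sketch below implements plain Fuzzy C-Means, the algorithm CCFCM builds on; the paper's cardinality-based must-link constraint is its novelty and is not reproduced here, and the synthetic two-dimensional "flow features" are an illustrative stand-in.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain Fuzzy C-Means. The paper's CCFCM adds a cardinality-based
    must-link constraint between flows, which is not reproduced here."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c)); U /= U.sum(axis=1, keepdims=True)  # memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]         # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)))                     # standard update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# toy usage on synthetic "flow features"
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, (50, 2)) for mu in (0.0, 3.0, 6.0)])
centers, U = fuzzy_c_means(X, c=3)
```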
The dual-frequency Heterogeneous Network (HetNet), combining sub-6 GHz cells with Millimeter Wave (mmWave) cells, achieves high user data rates in networks with hotspots. Cache-enabled HetNets with hotspots are investigated using an analytical framework in which Macro Base Stations (MBSs) and hotspot centers are modeled as two independent homogeneous Poisson Point Processes (PPPs), and the locations of Small Base Stations (SBSs) and users are modeled as two Poisson Cluster Processes (PCPs). Under this PCP-based model and the Most Popular Caching (MPC) scheme, we propose a cache-enabled association strategy for HetNets with limited storage capacity. The association probability and coverage probability are derived explicitly, and Monte Carlo simulations verify the results. The simulations show the influence of the antenna configuration and the cache capacities of MBSs and SBSs on network performance. Our analysis also enables numerical optimization of the association probability with respect to the ratio of the standard deviations of the SBS and user distributions.
With the development of information technology, radio communication technology has made rapid progress. Many radio signals appearing in space are difficult to classify without manual labeling, so unsupervised radio signal clustering methods have become an urgent need. Meanwhile, the high complexity of deep learning makes it difficult to understand the decisions of clustering models, making interpretability analysis essential. This paper proposes a combined loss function for unsupervised clustering based on an autoencoder. The combined loss function comprises a reconstruction loss and a deep clustering loss; the deep clustering loss, added on top of the reconstruction loss, makes similar deep features converge more tightly in the feature space. In addition, a feature visualization method for signal clustering is proposed to analyze the interpretability of the autoencoder using saliency maps. Extensive experiments were conducted on a modulated signal dataset, and the results indicate the superior performance of the proposed method over other clustering algorithms. In particular, for a simulated dataset containing six modulation modes at an SNR of 20 dB, the clustering accuracy of the proposed method exceeds 78%. An interpretability analysis of the clustering model visualizes the significant features of the different modulated signals and verifies the high separability of the features extracted by the clustering model.
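A common way to realize such a combined loss is an autoencoder whose latent space is shaped by a DEC-style clustering term alongside the reconstruction term. The sketch below follows that pattern; the Student's-t soft assignment, the sharpened target distribution, and the weighting gamma are standard choices assumed here rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusteringAE(nn.Module):
    """Autoencoder trained with reconstruction loss plus a DEC-style deep
    clustering loss (a common choice; the paper's exact clustering term
    may differ)."""
    def __init__(self, in_dim, latent_dim, n_clusters):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))
        self.centers = nn.Parameter(torch.randn(n_clusters, latent_dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

    def soft_assign(self, z, alpha=1.0):
        # Student's-t similarity between embeddings and cluster centers
        d2 = torch.cdist(z, self.centers) ** 2
        q = (1 + d2 / alpha) ** (-(alpha + 1) / 2)
        return q / q.sum(dim=1, keepdim=True)

def combined_loss(model, x, gamma=0.1):
    z, x_hat = model(x)
    q = model.soft_assign(z)
    p = q ** 2 / q.sum(0); p = p / p.sum(1, keepdim=True)   # sharpened target
    recon = F.mse_loss(x_hat, x)                            # reconstruction
    clust = F.kl_div(q.log(), p.detach(), reduction="batchmean")  # clustering
    return recon + gamma * clust
```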
In this paper, Index Modulation (IM)-aided Generalized Space-Time Block Coding (GSTBC) is proposed, which intrinsically exploits the benefits of the IM concept, diversity gain, and spatial multiplexing gain. Specifically, the information bits are partitioned into U groups, each modulated into an IM symbol (e.g., Spatial Modulation (SM), Quadrature SM (QSM), etc.). Next, the GSTBC structure is invoked for each group of K IM symbols, and a total of μ = U/K GSTBC codewords are transmitted over T time slots. A Block Expectation Propagation (B-EP) detector is designed for the proposed IM-GSTBC structure. Moreover, the theoretical Average Bit Error Probability (ABEP) is derived for our IM-GSTBC system, which is confirmed by the simulation results and is helpful for performance evaluation. Simulation results show that our proposed IM-GSTBC system is capable of striking an efficient trade-off among spatial multiplexing gain, spatial diversity gain, and implementation cost for both small-scale and large-scale MIMO antenna configurations.
To resist various types of jamming in wireless channels, appropriate constellation modulation is used in wireless communication to ensure a low bit error rate. Due to the complexity and variability of the channel environment, a simple preset constellation can hardly adapt to all scenarios, so online constellation optimization based on Reinforcement Learning (RL) shows great potential. However, existing RL techniques struggle to guarantee optimal convergence efficiency. Therefore, in this paper, Dynamic Adversarial Jamming (DAJ) waveforms are introduced and the DAJ-RL method is proposed, drawing on adversarial training in Deep Learning (DL). The algorithm converges quickly to the optimal state through the self-adaptive power and probability direction of the dynamic strong adversary in DAJ. A rigorous theoretical proof of the symbol error rate is given, showing that the method approaches the mathematical limit. Numerical and hardware experiments show that the constellations generated by DAJ-RL achieve the best error rate at all noise levels. Overall, the proposed DAJ-RL method effectively improves RL-based anti-jamming modulation for cognitive electronic warfare.
Connected Automated Vehicles (CAVs) rely heavily on intelligent algorithms and remote sensors. If the control center or on-board sensors come under cyber-attack due to security vulnerabilities of wireless communication, significant damage can be caused to CAVs or their passengers. The primary objective of this study is to model cyber-attacked traffic flow and evaluate the impacts of cyber-attacks on a traffic system filled with CAVs in a connected environment. Based on an analysis of the environmental perception system and possible cyber-attacks on sensors, a novel lane-changing model for CAVs is proposed and multiple cyber-attack traffic scenarios are designed. The impacts of the proportion of cyber-attacked vehicles and the severity of the cyber-attack on the lane-changing process are then quantitatively analyzed. The evaluation indexes include the spatio-temporal evolution of average speed, the spatial distribution of selected lane-changing gaps, the lane-changing rate distribution, the lane-changing preparation search time, efficiency, and safety. The numerical simulation results show that freeway traffic near an off-ramp is more sensitive to the proportion of cyber-attacked vehicles than to the severity of the cyber-attack. Also, when the traffic system is under cyber-attack, more unsafe back gaps are chosen for lane changing, especially in the center lane; consequently, more lane-changing maneuvers are concentrated near the off-ramp, causing severe congestion and potential rear-end collisions. In addition, as the number of cyber-attacked vehicles and the severity of the cyber-attacks increase, road capacity and the safety level rapidly decrease. The results of this study can provide a theoretical basis for accident avoidance and efficiency improvement in the design of CAVs and the management of automated highway systems.
Millimeter-Wave (mmWave) communication, with the advantages of abundant bandwidth and immunity to interference, has been deemed a promising technology for greatly improving network capacity. However, characteristics of mmWave such as short transmission distance, high sensitivity to blockage, and large propagation path loss make handover issues (including trigger conditions and target beam selection) much more complicated. In this paper, we design a novel handover scheme that optimizes the overall system throughput and total system delay while guaranteeing the Quality of Service (QoS) of each User Equipment (UE). Specifically, the proposed handover scheme, called O-MAPPO, integrates a Reinforcement Learning (RL) algorithm with optimization theory. The RL algorithm, Multi-Agent Proximal Policy Optimization (MAPPO), determines the handover trigger conditions, and an optimization problem formulated in conjunction with MAPPO selects the target base station. The aim is to evaluate and optimize the total throughput and delay while guaranteeing the QoS of each UE after the handover decision is made. Numerical results show that the overall system throughput and delay with our method are slightly worse than with exhaustive search, but much better than with another typical RL algorithm, Deep Deterministic Policy Gradient (DDPG).
Non-Orthogonal Multiple Access (NOMA) has already proven to be an effective multiple access scheme for 5th Generation (5G) wireless networks, providing improved performance in terms of system throughput, spectral efficiency, fairness, and Energy Efficiency (EE). However, in conventional NOMA networks, performance degradation still exists because of the stochastic behavior of wireless channels. To combat this challenge, the concept of the Intelligent Reflecting Surface (IRS) has risen to prominence as a low-cost intelligent solution for Beyond 5G (B5G) networks. In this paper, a modeling primer based on the integration of these two cutting-edge technologies, IRS and NOMA, for B5G wireless networks is presented through a three-fold investigation. First, a primer is presented on the system architecture of IRS-enabled multiple-configuration Power Domain (PD)-NOMA systems, and parallels are drawn with conventional network configurations, i.e., conventional NOMA, Orthogonal Multiple Access (OMA), and IRS-assisted OMA networks. Following this, a comparative analysis of these network configurations is showcased in terms of significant performance metrics, namely individual users' achievable rates, sum rate, ergodic rate, EE, and outage probability. Moreover, for multi-antenna IRS-enabled NOMA networks, we exploit active Beamforming (BF) by employing a greedy algorithm with a state-of-the-art Branch-Reduce-and-Bound (BRB) method. The optimality of the BRB algorithm is assessed by comparing it with benchmark BF techniques, i.e., minimum mean square error, zero-forcing BF, and maximum ratio transmission. Furthermore, we present an outlook on future IRS-aided NOMA networks, with a variety of potential applications for 6G wireless networks. This work presents a generic performance assessment toolkit for wireless networks, focusing on IRS-assisted NOMA networks, and the comparative analysis provides a solid foundation for the development of future IRS-enabled, energy-efficient wireless communication systems.
In this paper, we analyze the outage performance of Unmanned Aerial Vehicle (UAV)-enabled downlink Non-Orthogonal Multiple Access (NOMA) communication systems with a Semi-Grant-Free (SGF) transmission scheme. A UAV provides coverage for a Grant-Based (GB) user, and one Grant-Free (GF) user is allowed to utilize the same channel resource opportunistically. Analytical expressions for the exact and asymptotic Outage Probability (OP) of the GF user are derived. The results demonstrate that a non-zero diversity order can be achieved only under stringent conditions on the users' quality-of-service requirements. Subsequently, an efficient Dynamic Power Allocation (DPA) scheme is proposed to relax these data rate constraints, and analytical expressions for the exact and asymptotic OP of the GF user under the DPA scheme are derived. Finally, Monte Carlo simulation results validate the correctness of the derived analytical expressions and demonstrate the effects of the UAV's location and altitude on the OP of the GF user.
Intelligent assembly of large-scale, complex structures on an intelligent manufacturing platform represents the future of industrial manufacturing. During large-scale structural assembly, existing auxiliary assembly technology faces several bottlenecks. First, traditional LiDAR-based assembly is often limited by the openness of the manufacturing environment: blind spots exist, so continuous online assembly adjustment cannot be realized. Second, for the assembly of large structures, a single-station LiDAR system cannot achieve complete coverage, and a multi-station combination must be used to acquire complete three-dimensional data; the data errors introduced by transfer between stations far exceed the measurement error of a single station, greatly increasing the overall system's measurement and adjustment errors. Third, because a large assembly contains many structural components, accumulated errors may lead to assembly interference, yet the LiDAR-assisted assembly process has no feedback perception capability, so component damage can easily occur when interference arises. Therefore, this paper proposes combining an optical fiber sensor network with digital twin technology, so that test data reflecting the real-world state of the assembly can be applied to the "twin" model in the virtual world, thereby solving the problems of test openness and data transfer; addressing the station-transfer and perception-feedback problems is the main innovation of this work. The system uses an optical fiber sensor network as a flexible sensing medium to monitor the strain field distribution within complex areas in real time, and then adjusts the parameters of the virtual assembly in real time based on the distributed data. Complex areas include areas that are laser-unreachable, areas with complex contact surfaces, and areas with large-scale bending deformations. An assembly condition monitoring system is designed based on the optical fiber sensor network, and an assembly condition monitoring algorithm based on multiple physical quantities is proposed. The feasibility of using the optical fiber sensor network as the real-state parameter acquisition module of the digital twin intelligent assembly system is discussed. The offset at any position in the test area is calculated by a convolutional neural network with residual modules to provide the compensation parameters required by the virtual model of the assembly structure. In the model parameter optimization module, a correction data table is obtained through iterative learning to realize state prediction from the test data. The experiments simulate a large-scale structure assembly process and perform virtual-real mapping under a variety of assembly error conditions, enabling the optical fiber sensor network to correct the digital twin data stream for the assembly process. In the plane strain field calibration experiment, the maximum test error of the system is 0.032 mm and the average error is 0.014 mm, showing that visual calibration can confine the test error to a very small range; this result applies equally to gradient-curvature and freeform surfaces. Statistics show that the average measurement accuracy error is better than 11.2% for regular surfaces and better than 14.8% for irregular surfaces. In the simulated large-scale structure assembly experiments, the average position deviation accuracy is 0.043 mm, which meets the design accuracy.
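As a final illustration, the offset-prediction component described above (a convolutional network with residual modules mapping distributed strain measurements to position offsets) might take a form like the following; all layer sizes and the single-offset output are assumptions, since the architecture details are not given in this summary.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """1-D residual convolution block of the kind used to process
    distributed fiber strain measurements (layer sizes are illustrative
    assumptions, not the paper's architecture)."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv1d(ch, ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(ch, ch, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class OffsetRegressor(nn.Module):
    """Maps a strain profile sampled at n_sensors points along the fiber
    to a single position-offset estimate (e.g., in mm)."""
    def __init__(self, n_sensors):
        super().__init__()
        self.stem = nn.Conv1d(1, 16, kernel_size=3, padding=1)
        self.block = ResidualBlock(16)
        self.head = nn.Linear(16 * n_sensors, 1)

    def forward(self, strain):                  # strain: (batch, 1, n_sensors)
        h = self.block(self.stem(strain))
        return self.head(h.flatten(1))          # offset estimate: (batch, 1)
```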