In recent years, the rapid advancement of mega-constellations in Low Earth Orbit (LEO) has led to the emergence of satellite communication networks characterized by a complex interplay between high- and low-altitude orbits and by unprecedented scale. Traditional network-representation methodologies in Euclidean space are insufficient to capture the dynamics and evolution of high-dimensional complex networks. By contrast, hyperbolic space offers greater scalability and stronger representational capacity than its Euclidean counterpart, thereby providing a more suitable framework for representing large-scale satellite communication networks. This paper aims to address the burgeoning demands of large-scale space-air-ground integrated satellite communication networks by providing a comprehensive review of representation-learning methods for large-scale complex networks and their application within hyperbolic space. First, we briefly introduce several equivalent models of hyperbolic space. Then, we summarize existing representation methods and applications for large-scale complex networks. Building on these advances, we propose representation methods for complex satellite communication networks in hyperbolic space and discuss potential application prospects. Finally, we highlight several pressing directions for future research.
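As a concrete illustration of the equivalent models mentioned above, the Poincaré ball model \(\mathbb{B}^n = \{x \in \mathbb{R}^n : \|x\| < 1\}\) is one of the most commonly used in representation learning, with distance
\[
d_{\mathbb{B}}(u,v) = \operatorname{arccosh}\!\left(1 + \frac{2\,\|u-v\|^2}{\bigl(1-\|u\|^2\bigr)\bigl(1-\|v\|^2\bigr)}\right),
\]
which grows rapidly near the boundary of the ball and thus lets a low-dimensional embedding accommodate the tree-like, hierarchical structure typical of large-scale networks.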
Privacy-Preserving Computation (PPC) comprises the techniques, schemes, and protocols that ensure privacy and confidentiality in the context of secure computation and data analysis. Most current PPC techniques rely on the hardness of cryptographic problems that quantum computers are expected to solve efficiently in the near future. This review explores how PPC can be built on top of quantum computing itself to alleviate these future threats. We analyze quantum proposals for Secure Multi-party Computation, Oblivious Transfer, and Homomorphic Encryption from the last decade, focusing on their maturity and the challenges they currently face. Our findings show a strong focus on purely theoretical works, but also a rise in the experimental consideration of these techniques over the last 5 years. The applicability of these techniques to actual use cases remains underexplored; addressing it could enable their practical assessment.
This century's rapid urbanization has disrupted urban governance, sustainability, and resource management. The Internet of Things (IoT) and 5G have the potential to transform smart cities through real-time data processing, enhanced connectivity, and sustainable urban design. This study investigates the potential of 5G connectivity combined with the IoT's hierarchical framework to enhance public service provision, mitigate environmental effects, and optimize urban resource management. The study asserts that these technologies can enhance urban operations by tackling scalability, interoperability, and security issues, drawing on case studies from Singapore and Barcelona. It also analyzes AI-driven security systems, 6G networks, and the contributions of IoT and 5G to the advancement of a circular economy. The study further argues that the growth of smart cities necessitates robust policy frameworks to guarantee equitable access, data protection, and ethical considerations. By integrating prior research with practical experiences, it addresses data-informed municipal governance and urban innovation, and emphasizes the importance of policy in fostering inclusive and sustainable urban futures.
The ultra-high speed, ultra-low latency, and massive connectivity of the 6th Generation Mobile Network (6G) present unprecedented challenges to network security. In addition, the deep integration of Artificial Intelligence (AI) into 6G networks introduces AI-native features that further complicate the design and implementation of secure network architectures. To meet the security demands posed by the massive number of devices and edge nodes in 6G networks, a decentralized security architecture is essential, as it effectively mitigates the performance bottlenecks typically associated with centralized systems. Blockchain technology offers a promising trust mechanism among devices in 6G networks. However, conventional blockchain systems suffer from limited scalability under high-load conditions, making them inadequate for supporting a large volume of nodes and frequent data exchanges. To overcome these limitations, we propose Shard-DAG, a scalable architecture that structurally integrates Directed Acyclic Graphs (DAG) and sharding. Each shard adopts a Block-DAG structure for parallel block processing, effectively overcoming the performance bottlenecks of traditional chain-based blockchains. Furthermore, we introduce a DAG-based transaction ordering mechanism within each shard to defend against double-spending attacks. To ensure inter-shard security, Block-DAG adopts a black-box interaction approach to prevent cross-shard double-spending. Theoretical analysis and experimental evaluations demonstrate that Shard-DAG achieves near-linear scalability. In a network of 1200 nodes with 8 shards, Shard-DAG achieves peak throughput improvements of 14.64 times over traditional blockchains, 8.61 times over standalone Block-DAG, and 2.05 times over conventional sharded blockchains. The results validate Shard-DAG's ability to scale efficiently while maintaining robust security properties.
The integration of Geostationary Earth Orbit (GEO) satellite constellations into the Sixth Generation (6G) cellular network framework is essential to achieve global connectivity. Despite the major importance of this integration, current research often underestimates the limitations imposed by available satellite payload power, erroneously assuming a uniform maximum power density distribution across all communication beams. In this paper, we propose an Efficient Downlink Resource Allocation scheme (EDRA) that accounts for transmit power resource limitations, variable service quality demands, and a heterogeneous number of users. Our approach relies on a thorough analysis of real-world demographic data, allowing us to optimize the allocation of downlink power and time-frequency resources in a practical and effective manner. Furthermore, we introduce an optimization model to maximize the total system revenue, using an iterative algorithm specifically designed to solve complex optimization problems. Numerical simulations demonstrate that the EDRA scheme improves the average network revenue by more than 66% relative to standard methods, with performance gains growing as the diversity of service types increases, establishing the robustness and adaptability of the proposed EDRA scheme in the rapidly evolving context of satellite-based communication systems.
The Space-Terrestrial Network (STN) aims to deliver comprehensive on-demand network services, addressing the broad and varied needs of Internet of Things (IoT) applications. However, the STN faces new challenges such as service multiplicity, topology dynamicity, and conventional management complexity. This necessitates a flexible and autonomous approach to network resource management to effectively align network services with available resources. Thus, we incorporate the Intent-Driven Network (IDN) into the STN, enabling the execution of multiple missions through automated resource allocation and dynamic network policy optimization. This approach enhances programmability and flexibility, facilitating intelligent network management for real-time control and adaptable service deployment in both traditional and IoT-focused scenarios. Building on previous mechanisms, we develop the intent-driven CoX resource management model, which includes components for coordination intent decomposition, collaboration intent management, and cooperation resource management. We propose an advanced intent verification mechanism and create an intent-driven CoX resource management algorithm leveraging a two-stage deep reinforcement learning method to minimize resource usage and delay costs in cross-domain communications within the STN. Ultimately, we establish an intent-driven CoX prototype to validate the efficacy of this proposed mechanism, which demonstrates improved performance in intent refinement and resource management efficiency.
The Internet of Things (IoT) technology provides data acquisition, transmission, and analysis to control rehabilitation robots, encompassing sensor data from the robots as well as lidar signals for trajectory planning (desired trajectory). In IoT rehabilitation robot systems, managing nonvanishing uncertainties and input quantization is crucial for precise and reliable control performance. These challenges can cause instability and reduced effectiveness, particularly in adaptive networked control. This paper investigates networked control with guaranteed performance for IoT rehabilitation robots under nonvanishing uncertainties and input quantization. First, input quantization is managed via a quantization-aware control design, ensuring stability and minimizing tracking errors, even with discrete control inputs, to avoid chattering. Second, the method handles nonvanishing uncertainties by adjusting control parameters via real-time neural network adaptation, maintaining consistent performance despite persistent disturbances. Third, the control scheme guarantees the desired tracking performance within a specified time, with all signals in the closed-loop system remaining uniformly bounded, offering a robust, reliable solution for IoT rehabilitation robot control. The simulation verifies the benefits and efficacy of the proposed control strategy.
The Internet of Things (IoT) has become an integral part of daily life, making the protection of user privacy increasingly important. In gateway-based IoT systems, user data is transmitted through gateways to platforms, which then push the data to various applications; such systems are widely used in smart cities, industrial IoT, smart farms, healthcare IoT, and other fields. Threshold Public Key Encryption (TPKE) provides a method to distribute private keys for decryption, enabling joint decryption by multiple parties and thus ensuring data security during gateway transmission, platform storage, and application access. However, existing TPKE schemes face several limitations, including vulnerability to quantum attacks, failure to meet Simulation-Security (SS) requirements, lack of verifiability, and inefficiency, leaving gateway-based IoT systems still insufficiently secure and efficient. To address these challenges, we propose a Verifiable Simulation-Secure Threshold PKE scheme based on standard Module-LWE (VSSTPM). Our scheme resists quantum attacks, achieves SS, and incorporates Non-Interactive Zero-Knowledge (NIZK) proofs. Implementation and performance evaluations demonstrate that VSSTPM offers 112-bit quantum security and outperforms existing TPKE schemes in terms of efficiency. Compared to the ECC-based TPKE scheme, our scheme reduces the time cost for decryption participants by 72.66%, and its decryption verification is 11 times slower than ours. Compared with the latest lattice-based TPKE scheme, our scheme reduces the time overhead of system user encryption and decryption verification by 90% and 48.9%, respectively, and that scheme is 13 times slower than ours in terms of the time cost for decryption participants.
Due to the dynamic nature of service requests and the uneven distribution of services in the Internet of Vehicles (IoV), Multi-access Edge Computing (MEC) networks with pre-installed servers are often susceptible to insufficient computing power at certain times or in certain areas. In addition, Vehicular Users (VUs) need to share their observations for centralized neural network training, resulting in additional communication overhead. In this paper, we present a hybrid MEC server architecture, where fixed RoadSide Units (RSUs) and Mobile Edge Servers (MESs) cooperate to provide computation offloading services to VUs. We propose a distributed federated learning and Deep Reinforcement Learning (DRL) based algorithm, namely Federated Dueling Double Deep Q-Network (FD3QN), with the objective of minimizing the weighted sum of service latency and energy consumption. Horizontal federated learning is incorporated into the Dueling Double Deep Q-Network (D3QN) to allocate cross-domain resources after the offload decision process. A client-server framework with federated aggregation is used to maintain the global model. The proposed FD3QN algorithm can jointly optimize power, sub-band, and computational resources. Simulation results show that the proposed algorithm outperforms baselines in terms of system cost and exhibits better robustness in uncertain IoV environments.
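As a rough illustration of how the dueling architecture and the federated aggregation described above fit together, the following minimal sketch combines a dueling Q-network with equal-weight parameter averaging; the network sizes, the aggregation weights, and the names DuelingQNet and federated_average are illustrative assumptions, not the authors' FD3QN implementation.

```python
# Minimal sketch: dueling Q-networks trained locally on each vehicle, with a
# server-side federated-averaging step that produces a shared global model.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # state-value stream V(s)
        self.advantage = nn.Linear(hidden, num_actions) # advantage stream A(s, a)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.feature(s)
        v, a = self.value(h), self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=-1, keepdim=True)

def federated_average(local_models: list) -> dict:
    """Equal-weight FedAvg over the clients' parameters (server-side step)."""
    states = [m.state_dict() for m in local_models]
    return {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
            for k in states[0]}

# Usage: each vehicle trains its own copy, then the server aggregates.
clients = [DuelingQNet(state_dim=10, num_actions=4) for _ in range(3)]
global_state = federated_average(clients)
for m in clients:
    m.load_state_dict(global_state)
```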
The Sixth-Generation (6G) communication system promises unprecedented data density and transformative applications across different industries. However, managing heterogeneous data with different distributions in 6G-enabled multi-access edge cloud networks presents challenges for efficient Machine Learning (ML) training and aggregation, often leading to increased energy consumption and reduced model generalization. To solve this problem, this research proposes a Weighted Proximal Policy-based Federated Learning approach integrated with ResNet50 and the Scaled Exponential Linear Unit activation function (WPPFL-RS). The proposed method optimizes the allocation of resources such as CPU and memory by enhancing Cyber-twin technology to estimate the computing capacities of edge clouds. The proposed WPPFL-RS approach significantly minimizes latency and energy consumption, addressing complex challenges in 6G-enabled edge computing and ensuring efficient resource utilization and enhanced performance in heterogeneous edge networks. The proposed WPPFL-RS achieves a minimum latency of 8.20 s on 100 tasks, a significant improvement over the baseline Deep Reinforcement Learning (DRL) approach, which recorded 11.39 s. These results highlight its potential to enhance resource utilization and performance in 6G edge networks.
As the 6G era approaches, wireless communication faces challenges such as massive user numbers, high mobility, and spectrum resource sharing. Radio maps are crucial for network design, optimization, and management, providing essential channel information. In this paper, we propose an innovative learning framework for Radio Map Estimation (RME) based on cycle-consistent generative adversarial networks. Traditional RME methods are often constrained by model complexity and interpolation accuracy, while learning-based methods require strictly paired datasets, making their practical application difficult. Our method overcomes these limitations by enabling training with unpaired data, efficiently converting local features into radio maps. Our experimental results demonstrate the effectiveness of the proposed method in two scenarios: accurate map data and map data with dynamic errors. To address dynamic interference, we designed a two-stage learning process that uses sparse observations to correct local details in the radio map, further improving the model's accuracy and practicality.
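For reference, the unpaired-training ability of such a framework rests on the standard cycle-consistency objective (generic CycleGAN form; the mapping directions below are assumptions about the described setup):
\[
\mathcal{L}_{\mathrm{cyc}}(G,F) \;=\; \mathbb{E}_{x}\bigl[\,\|F(G(x)) - x\|_1\,\bigr] \;+\; \mathbb{E}_{y}\bigl[\,\|G(F(y)) - y\|_1\,\bigr],
\]
where \(G\) maps local environment features \(x\) to radio maps and \(F\) maps radio maps \(y\) back to features; together with the adversarial losses, this lets the model learn the translation without strictly paired samples.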
In this paper, we investigate a reconfigurable intelligent surface-aided Integrated Sensing And Communication (ISAC) system. Our objective is to maximize the achievable sum rate of the multi-antenna communication users through joint active and passive beamforming. Specifically, the weighted minimum mean-square error method is first used to reformulate the original problem into an equivalent one. Then, we utilize an alternating optimization algorithm to decouple the optimization variables and decompose this challenging problem into two subproblems. Given the reflecting coefficients, a penalty-based algorithm is utilized to deal with the non-convex radar Signal-to-Noise Ratio (SNR) constraints. For a given beamforming matrix at the base station, we apply majorization-minimization to transform the problem into a Quadratically Constrained Quadratic Programming (QCQP) problem, which is ultimately solved using a Semi-Definite Relaxation (SDR)-based algorithm. Simulation results illustrate the advantage of deploying a reconfigurable intelligent surface in the considered multi-user Multiple-Input Multiple-Output (MIMO) ISAC system.
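A generic sketch of the SDR step for a QCQP of this flavour is given below; it uses the cvxpy package with random placeholder matrices, and the objective matrix, SNR threshold, and power budget are illustrative assumptions rather than the paper's actual beamforming formulation.

```python
# Semi-definite relaxation (SDR) of a small QCQP: lift w w^H -> X (PSD), drop the
# rank-1 constraint, solve the convex problem, then recover a beamformer from the
# dominant eigenvector. Matrices below are random placeholders, not real channels.
import numpy as np
import cvxpy as cp

n = 4
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n)); C = C @ C.T      # objective matrix (assumed)
A = rng.standard_normal((n, n)); A = A @ A.T      # e.g., radar-SNR-related matrix

X = cp.Variable((n, n), PSD=True)
problem = cp.Problem(
    cp.Maximize(cp.trace(C @ X)),                 # surrogate for the sum-rate term
    [cp.trace(A @ X) >= 1.0,                      # relaxed non-convex SNR constraint
     cp.trace(X) <= 10.0])                        # total transmit power budget
problem.solve()

# Rank-1 recovery: scale the principal eigenvector of the optimal X.
eigvals, eigvecs = np.linalg.eigh(X.value)
w = np.sqrt(eigvals[-1]) * eigvecs[:, -1]
print("approx. beamformer:", np.round(w, 3))
```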
Metaverse, envisioned as the next evolution of the Internet, is expected to evolve into an innovative medium advancing information civilization. Its core characteristics, including ubiquity, seamlessness, immersion, interoperability and metaspatiotemporality, are catalyzing the development of multiple technologies and fostering a convergence between the physical and virtual worlds. Despite its potential, the critical concept of symbiosis, which involves the synchronous generation and management of virtuality from reality and serves as the cornerstone of this convergence, is often overlooked. Additionally, cumbersome service designs, stemming from the intricate interplay of various technologies and inefficient resource utilization, are impeding an ideal Metaverse ecosystem. To address these challenges, we propose a bi-model Parallel Symbiotic Metaverse (PSM) system, engineered with a Cybertwin-enabled 6G framework where Cybertwins mirror Sensing Devices (SDs) and serve a bridging role as autonomous agents. Based on this framework, the system is structured into two models. In the queue model, SDs capture environmental data that Cybertwins then coordinate and schedule. In the service model, Cybertwins manage service requests and collaborate with SDs to make responsive decisions. We incorporate two algorithms to address resource scheduling and virtual service responses, showcasing the synergistic role of Cybertwins. Moreover, our PSM system advocates for the participation of SDs from collaborators, enhancing performance while reducing operational costs for Virtual Service Operator (VSO). Finally, we comparatively analyze the efficiency and complexity of the proposed algorithms, and demonstrate the efficacy of the PSM system across multiple performance indicators. The results indicate our system can be deployed cost-effectively with Cybertwin-enabled 6G.
Non-Terrestrial Networks (NTN) can be used to provide emergency voice services in Sixth-Generation (6G) communication systems. However, Internet of Things (IoT) terminals have restricted bandwidth resources and weak computing power, which makes ensuring high-quality voice services over NTN challenging. Recent advancements in Artificial Intelligence (AI) techniques have been increasingly applied to enhance audio quality and reduce the bit rate. However, applying models with high computational complexity to IoT terminals is difficult. In this study, we propose a voice-services-over-NTN solution comprising a novel 6G non-terrestrial and ground network integrated framework and a lightweight Large Models (LMs)-driven codec, ReCodec, operating at 450 bits per second. We also designed a new voice packet header and deployed an agent on the ground gateway to reduce the bandwidth overhead. The non-standard Session Initiation Protocol header was converted to the standard format while re-encapsulating the Internet Protocol and User Datagram Protocol headers, replacing the conventional implementations. Additionally, an operational NTN satellite was used to evaluate the proposed ReCodec. The experimental results demonstrate that ReCodec decreases the computational complexity by 96.61% while increasing the voice quality by 17.55% when compared with state-of-the-art mechanisms. Furthermore, the packet header design reduced the voice frame header to 50 bytes.
Video distribution strategies in wireless edge networks can significantly reduce video transmission latency and system energy consumption, meeting emerging video services' high-rate, low-latency requirements. However, channel condition variability and dynamics caused by user-to-base-station distance and user mobility affect the Quality of Experience (QoE). To address this problem, this paper examines adaptive video streaming strategies under dynamic channel conditions to optimize user QoE. Specifically, to achieve centralized control of wireless edge networks and simplify the management and scheduling of communication resources, Software-Defined Networking (SDN) is adopted within the wireless edge network, and an SDN-based edge caching architecture is proposed. Based on the virtual queue of users receiving video and combining various video factors to quantify the user QoE metric, an optimization problem is established to maximize the time-averaged total user QoE. Subsequently, an adaptive video distribution algorithm is designed, and the optimal video quality selection strategy and power allocation strategy are obtained in conjunction with Lyapunov optimization theory. Simulation results indicate that our approach significantly reduces video playback interruptions and enhances user QoE.
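A minimal sketch of the Lyapunov step underlying such designs (the standard drift-plus-penalty form, not necessarily the paper's exact formulation): with virtual queues \(Q_k(t)\) and Lyapunov function \(L(t)=\tfrac{1}{2}\sum_k Q_k^2(t)\), each slot the controller chooses the video quality level and transmit power that minimize
\[
\Delta(t) - V\,\mathbb{E}\{\mathrm{QoE}(t)\mid \boldsymbol{Q}(t)\},
\]
where \(\Delta(t)=\mathbb{E}\{L(t+1)-L(t)\mid \boldsymbol{Q}(t)\}\) is the conditional Lyapunov drift and the parameter \(V\ge 0\) trades queue stability (fewer playback interruptions) against the time-averaged QoE.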
Anomaly detection is an important task for maintaining the performance of cloud data centers. Traditional anomaly detection primarily examines individual Virtual Machine (VM) behavior, neglecting the impact of interactions among multiple VMs on Key Performance Indicator (KPI) data, e.g., memory utilization. Furthermore, the non-stationarity, high complexity, and uncertain periodicity of KPI data in VMs also bring difficulties to deep learning-based anomaly detection tasks. To settle these challenges, this paper proposes MCBiWGAN-GTN, a multi-channel semi-supervised time series anomaly detection algorithm based on the Bidirectional Wasserstein Generative Adversarial Network with Graph-Time Network (BiWGAN-GTN) and the Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN). (a) The BiWGAN-GTN algorithm is proposed to extract spatiotemporal information from data. (b) The loss function of BiWGAN-GTN is redesigned to solve the abnormal data intrusion problem during the training process. (c) MCBiWGAN-GTN is designed to reduce data complexity through CEEMDAN-based time series decomposition and utilizes BiWGAN-GTN to train on the different components. (d) To adapt the proposed algorithm to the entire cloud data center, a cloud data center anomaly detection framework based on Swarm Learning (SL) is designed. The evaluation results on a real-world cloud data center dataset show that MCBiWGAN-GTN outperforms the baseline, with an F1-score of 0.96, an accuracy of 0.935, a precision of 0.954, a recall of 0.967, and an FPR of 0.203. The experiments also verify the stability of MCBiWGAN-GTN, the impact of parameter configurations, and the effectiveness of the proposed SL framework.
Motion recognition refers to the intelligent recognition of human motion using data collected from wearable sensors, and it has gained significant interest from both academia and industry. However, temporary-sudden activities caused by accidental behavior pose a major challenge to motion recognition and have been largely overlooked in existing works. To address this problem, the multi-dimensional time series of motion data is modeled as a Time-Frequency (TF) tensor, and the original challenge is transformed into a problem of outlier-corrupted tensor pattern recognition, where transient sudden activity data are treated as outliers. Since the TF tensor can capture the latent spatio-temporal correlations of the motion data, tensor Multilinear Principal Component Analysis (MPCA) is used to derive the principal spatio-temporal pattern of the motion data. However, traditional MPCA uses the squared F-norm as the projection distance measure, which makes it sensitive to the presence of outlier motion data. Therefore, in the proposed outlier-robust MPCA scheme, the F-norm, which has desirable geometric properties, is used as the distance measure to simultaneously mitigate the interference of outlier motion data while preserving rotational invariance. Moreover, to reduce the complexity of outlier-robust motion recognition, we impose the proposed outlier-robust MPCA scheme on the traditional MPCANet, a low-complexity deep learning network. The experimental results show that our proposed outlier-robust MPCANet can simultaneously improve motion recognition performance and reduce complexity, especially in practical scenarios where the real-time data is corrupted by temporary-sudden activities.
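One common way to formalize the distinction drawn above (a sketch of the standard objectives, not necessarily the authors' exact formulation): with centered TF tensors \(\tilde{\mathcal{X}}_m\) and mode-\(k\) projection matrices \(U^{(k)}\), conventional MPCA maximizes the squared-F-norm scatter
\[
\max_{\{U^{(k)}\}} \sum_{m} \bigl\| \tilde{\mathcal{X}}_m \times_1 U^{(1)\top} \cdots \times_N U^{(N)\top} \bigr\|_F^{2},
\]
whereas an F-norm-based robust variant drops the square,
\[
\max_{\{U^{(k)}\}} \sum_{m} \bigl\| \tilde{\mathcal{X}}_m \times_1 U^{(1)\top} \cdots \times_N U^{(N)\top} \bigr\|_F ,
\]
so that samples with large residuals (outliers such as temporary-sudden activities) contribute linearly rather than quadratically and no longer dominate the solution.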
In high-speed multiuser Time Reversal (TR) downlink systems, the transmission rate is degraded by severe inter-user and inter-symbol interference. Maximizing the weighted sum rate in such systems is a critical objective, since the weighting factors represent the priority of different users in different applications; however, it poses significant challenges because the problem is NP-hard and non-convex. In order to suppress these interferences and maximize the weighted sum rate, in this paper we present a novel approach for the joint design of the pre-filters. The proposed method applies successive convex approximation to transform the original problem into a Second-Order Cone Programming (SOCP) problem. Then, a low-complexity iterative algorithm is provided to effectively solve the resulting SOCP problem. According to the simulation results, the proposed method reaches a local optimum within a few iterations and demonstrates superior performance in terms of weighted sum rate compared to the existing algorithm.
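For context, a problem in the standard SOCP form targeted by such a successive convex approximation (a generic template, not the paper's specific model) reads
\[
\min_{x} \; f^{\top}x \quad \text{s.t.} \quad \|A_i x + b_i\|_2 \le c_i^{\top}x + d_i, \; i = 1,\dots,m,
\]
where each second-order cone constraint is convex and can be handled efficiently by interior-point or first-order solvers; successive convex approximation replaces the non-convex weighted-sum-rate terms with convex surrogates so that each iteration reduces to one such SOCP.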
The rapid advancement of 6G communication networks presents both considerable challenges and opportunities in network management, necessitating sophisticated solutions that extend beyond conventional methods. This study seeks to investigate and evaluate autonomous network management solutions designed for 6G communication networks, highlighting their technical advantages and potential implications. We examine the role of Artificial Intelligence (AI), Machine Learning (ML), and network automation in facilitating self-organization, optimization, and decision-making within critical network domains, including spectrum management, traffic load balancing, fault detection, and security and privacy. We also examine the integration of edge computing and Distributed Ledger Technologies (DLT), specifically blockchain, to improve trust, transparency, and security in autonomous networks. By methodically analyzing existing methodologies, identifying significant research gaps, and exploring future prospects, this study provides a comprehensive understanding of the technological developments driving fully autonomous, efficient, and resilient 6G network infrastructures. The results offer significant insights for researchers, engineers, and industry experts involved in the development and deployment of advanced autonomous network management systems.
With the rapid development of generative artificial intelligence technology, traditional cloud-based centralized model training and inference face significant limitations due to high transmission latency and costs, which restrict user-side in-situ Artificial Intelligence Generated Content (AIGC) service requests. To this end, we propose the Edge Artificial Intelligence Generated Content (EdgeAIGC) framework, which addresses the challenges of cloud computing by processing services in situ, close to the data source, through edge computing. However, AIGC models usually have a large parameter scale and complex computing requirements, which poses a huge challenge to the storage and computing resources of edge devices. This paper focuses on the edge intelligence model caching and resource allocation problems in the EdgeAIGC framework, aiming to improve the model cache hit rate and resource utilization of edge devices by optimizing the model caching strategy and resource allocation scheme, thereby realizing in-situ AIGC service processing. With the optimization objectives of minimizing service request response time and execution cost in resource-constrained environments, we employ the Twin Delayed Deep Deterministic Policy Gradient algorithm for optimization. Experimental results show that, compared with other methods, our model caching and resource allocation strategies improve the cache hit rate by at least 41.06% and reduce the response cost as well.
The Internet of Things (IoT) and allied applications have made real-time responsiveness for massive numbers of devices over the Internet essential. Cloud-edge/fog ensembles handle such applications' computations. For Beyond 5th Generation (B5G) communication paradigms, Edge Servers (ESs) must be placed within Information and Communication Technology infrastructures to meet Quality of Service requirements such as response time and resource utilisation. Due to the large number of Base Stations (BSs) and ESs, and the possibility of significant variation in placing the ESs within the IoT's geographical expanse for optimising multiple objectives, the Edge Server Placement Problem (ESPP) is NP-hard. Thus, stochastic evolutionary metaheuristics are a natural fit. This work addresses the ESPP using a Particle Swarm Optimization that initialises particles as BS positions within the geography to maintain the workload while scanning through all feasible sets of BSs as an encoded sequence. The Workload-Threshold Aware Sequence Encoding (WTASE) scheme for the ESPP provides the number of ESs to be deployed, similar to existing methodologies, as well as the exact locations for their placement, without the overhead of maintaining a prohibitively large distance matrix. Simulation tests using open-source datasets show that the suggested technique improves ES utilisation rate, workload balance, and average energy consumption by 36%, 17%, and 32%, respectively, compared to prior works.
Low Earth Orbit (LEO) satellites have gained significant attention for their low-latency communication and computing capabilities but face challenges due to high mobility and limited resources. Existing studies integrate edge computing with LEO satellite networks to optimize task offloading; however, they often overlook the impact of frequent topology changes, unstable transmission links, and intermittent satellite visibility, leading to task execution failures and increased latency. To address these issues, this paper proposes a dynamic integrated space-ground computing framework that optimizes task offloading under LEO satellite mobility constraints. We design an adaptive task migration strategy that uses inter-satellite links when target satellites become inaccessible. To enhance data transmission reliability, we introduce a communication stability constraint based on the transmission Bit Error Rate (BER). Additionally, we develop a Genetic Algorithm (GA)-based task scheduling method that dynamically allocates computing resources while minimizing latency and energy consumption. Our approach jointly considers satellite computing capacity, link stability, and task execution reliability to achieve efficient task offloading. Experimental results demonstrate that the proposed method significantly improves task execution success rates, reduces system overhead, and enhances overall computational efficiency in LEO satellite networks.
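To make the GA-based scheduling step concrete, the following minimal sketch evolves task-to-satellite assignments through selection, crossover, and mutation; the toy cost model, the weights, and the helper names are illustrative assumptions rather than the paper's latency/energy formulation.

```python
# Minimal genetic-algorithm sketch for satellite task assignment: a chromosome
# assigns each task to one satellite, and fitness is a hypothetical weighted sum
# of latency and energy proxies (lower is better).
import random

NUM_TASKS, NUM_SATS = 20, 5
POP, GENS, MUT_P = 40, 100, 0.1

def fitness(chromo):
    # Assumed cost model: the most-loaded satellite drives latency; a simple
    # per-satellite weight stands in for energy consumption.
    load = [chromo.count(s) for s in range(NUM_SATS)]
    latency = max(load)
    energy = sum(1 + 0.1 * s for s in chromo)
    return 0.7 * latency + 0.3 * energy

def crossover(a, b):
    cut = random.randrange(1, NUM_TASKS)          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chromo):
    return [random.randrange(NUM_SATS) if random.random() < MUT_P else g
            for g in chromo]

population = [[random.randrange(NUM_SATS) for _ in range(NUM_TASKS)]
              for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness)
    parents = population[:POP // 2]               # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = min(population, key=fitness)
print("best assignment:", best, "cost:", round(fitness(best), 2))
```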
In Federated Learning (FL), the distribution of data across different clients leads to degradation of global model performance during training. Personalized Federated Learning (pFL) can address this problem through global model personalization. Studies over the past few years have calibrated differences in weights across the entire model or optimized only individual layers of the model, without considering that different layers of the neural network have different utilities, resulting in lagged model convergence and inadequate personalization on non-IID data. In this paper, we propose pFedEC, a novel pFL training framework that applies layered optimization to the feature extractor and classifier, personalizing different layers of the model. Our study divides the model layers into the feature extractor and the classifier. We initialize the model's classifiers during training, while making the local model's feature extractor learn the representation of the global model's feature extractor to correct each client's local training, thereby integrating the utilities of the different layers of the entire model. Our extensive experiments show that pFedEC achieves 92.95% accuracy on CIFAR-10, outperforming existing pFL methods by approximately 1.8%. On CIFAR-100 and Tiny-ImageNet, pFedEC improves accuracy by at least 4.2%, reaching 73.02% and 28.39%, respectively.
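One plausible reading of the layered mechanism described above is sketched below (not the authors' code): each client keeps a personal classifier while regularizing its feature extractor toward the global extractor's representation; the alignment weight, network sizes, and names Net and local_step are illustrative assumptions.

```python
# Sketch of layer-wise personalization: cross-entropy on the personal classifier
# plus an alignment term pulling local features toward the global extractor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, in_dim=32, feat_dim=64, num_classes=10):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)   # stays personal

    def forward(self, x):
        return self.classifier(self.extractor(x))

def local_step(model, global_model, x, y, opt, lambda_align=0.5):
    opt.zero_grad()
    z_local = model.extractor(x)
    with torch.no_grad():
        z_global = global_model.extractor(x)      # frozen global representation
    loss = F.cross_entropy(model.classifier(z_local), y) \
           + lambda_align * F.mse_loss(z_local, z_global)
    loss.backward()
    opt.step()
    return loss.item()

# Usage on dummy data for one client round.
client, server = Net(), Net()
opt = torch.optim.SGD(client.parameters(), lr=0.01)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
local_step(client, server, x, y, opt)
```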
Due to the open communication environment, the Internet of Vehicles (IoV) is vulnerable to many attacks, including the gray hole attack, which can disrupt the process of transmitting messages and thus degrade routing performance. To address this issue, a Double Deep Q-Networks-based stable routing algorithm for resisting the gray hole attack (DOSR) is proposed in this paper. The aim of the DOSR algorithm is to maximize the message delivery ratio as well as to minimize the transmission delay. To this end, the distance ratio, message loss ratio, and connection ratio are taken into consideration when choosing a relay node. Relay node selection is then formulated as an optimization problem, and double deep Q-networks are utilized to solve it. Experimental results show that DOSR outperforms QLTR and TLRP by significant margins: in scenarios with 400 vehicles and 10% malicious nodes, the Message Delivery Ratio (MDR) of DOSR is 8.3% higher than that of QLTR and 5.1% higher than that of TLRP, while the Average Transmission Delay (ATD) is reduced by 23.3% compared to QLTR and 17.9% compared to TLRP. Additionally, sensitivity analysis of the hyperparameters confirms the convergence and stability of DOSR, demonstrating its robustness in dynamic IoV environments.
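For reference, the update that distinguishes double deep Q-networks from a vanilla DQN (the standard form, independent of this paper's specific state and reward design) selects the action with the online network \(\theta\) but evaluates it with the target network \(\theta^{-}\):
\[
y_t = r_t + \gamma\, Q_{\theta^{-}}\!\Bigl(s_{t+1}, \arg\max_{a} Q_{\theta}(s_{t+1}, a)\Bigr),
\]
which reduces the value overestimation that could otherwise make unreliable (e.g., gray-hole) relay nodes appear attractive.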
The advent of 6G networks is poised to drive a new era of intelligent, privacy-preserving distributed learning by leveraging advanced communication and AI-driven edge intelligence. Federated Learning (FL) has emerged as a promising paradigm to enable collaborative model training without exposing raw data. However, its deployment in 6G networks faces significant obstacles, including vulnerabilities to inference attacks, the complexities of heterogeneous and dynamic network environments, and the inherent trade-off between privacy protection and model performance. In response to these challenges, we introduce DP-Fed6G, a novel FL framework that integrates differential privacy (DP) to fortify data security while ensuring high-quality learning outcomes. Specifically, DP-Fed6G employs an adaptive noise injection strategy that dynamically adjusts privacy protection levels based on real-time 6G network conditions and device heterogeneity, ensuring robust data security while maximizing model performance and optimizing the trade-off between privacy and utility. Extensive experiments on three real-world healthcare datasets demonstrate that DP-Fed6G consistently outperforms existing baselines (DP-FedSGD and DP-FedAvg), achieving up to 10.3% higher test accuracy under the same privacy budget. The proposed framework thus provides a practical solution for secure and privacy-preserving AI in 6G, supporting intelligent decision-making in privacy-sensitive applications.
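As a minimal sketch of the kind of per-round noise injection such a framework relies on (generic DP-style clipping and Gaussian perturbation of client updates; the adapt_sigma adaptation rule below is a purely illustrative placeholder, not DP-Fed6G's actual strategy):

```python
# Generic differential-privacy-style treatment of a client model update:
# L2-clip the update, then add Gaussian noise whose scale is chosen by a
# placeholder hook standing in for network-condition-aware adaptation.
import numpy as np

def adapt_sigma(base_sigma: float, link_quality: float) -> float:
    # Placeholder adaptation: scale the noise multiplier by a link-quality
    # factor in [0, 1]; the real rule is specific to the proposed framework.
    return base_sigma * (0.5 + 0.5 * link_quality)

def privatize_update(update: np.ndarray, clip_norm: float, sigma: float,
                     rng: np.random.Generator) -> np.ndarray:
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # L2 clipping
    noise = rng.normal(0.0, sigma * clip_norm, size=update.shape)
    return clipped + noise

# Usage: privatize one client's update before sending it for aggregation.
rng = np.random.default_rng(42)
client_update = rng.standard_normal(1000)
private_update = privatize_update(client_update, clip_norm=1.0,
                                  sigma=adapt_sigma(1.2, link_quality=0.8),
                                  rng=rng)
```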
The ensemble of Information and Communication Technology (ICT) and Artificial Intelligence (AI) has catalysed many developments and innovations in the automotive industry. 6G networks emerge as a promising technology for realising Intelligent Transport Systems (ITS), which benefit drivers and society. As the network is highly heterogeneous and robust, the physical layer security and node reliability of the vehicles hold paramount significance. This work presents a novel methodology that integrates the prowess of computer vision techniques and a Lightweight Super Learning Ensemble (LSLE) of Machine Learning (ML) algorithms to predict the presence of intruders in the network. Furthermore, our work utilizes a Deep Convolutional Neural Network (DCNN) to detect obstacles by identifying the Region of Interest (ROI) in the images. As the network utilizes mm-waves with shorter wavelengths, Intelligent Reflecting Surfaces (IRS) are employed to redirect signals to legitimate nodes, thereby mitigating the malicious activity of intruders. The experimental simulation shows that the proposed LSLE outperforms the state-of-the-art techniques in terms of accuracy, False Positive Rate (FPR), Recall, F1-Score, and Precision. The model achieves a consistent performance improvement, with an average FPR of 85.08% and an accuracy of 92.01%. In future work, detection of moving obstacles and real-time network traffic monitoring can be included to achieve more realistic results.
Although 6G networks combined with artificial intelligence present revolutionary prospects for healthcare delivery, resource management in dense medical device networks remains a fundamental issue. Reliable communication directly affects patient outcomes in these settings; nonetheless, current resource allocation techniques struggle with the complicated interference patterns and diverse service needs of AI-native healthcare systems. This paper tackles the challenge of combining network efficiency with medical care priority in dense installations where conventional approaches fail. To this end, we offer a Dueling Deep Q-Network (DDQN)-based resource allocation approach for AI-native healthcare systems in 6G dense networks. First, we create a point-line graph coloring-based interference model to capture the unique characteristics of medical device communications; unlike traditional graph-based models, it correctly depicts the overlapping coverage areas common in hospital environments. Building on this foundation, we propose a DDQN approach that optimizes resource allocation across multiple medical services by separating healthcare-aware state evaluation from advantage estimation, allowing the system to prioritize medical needs while distributing resources. Experimental findings show that the proposed DDQN outperforms state-of-the-art techniques in dense healthcare installations, achieving 14.6% greater network throughput and 13.7% better resource utilization. The solution proves particularly strong in maintaining service quality under critical conditions, with 5.5% greater QoS satisfaction for emergency services and 8.2% faster recovery from interruptions.
Intelligent Transportation Systems (ITS) leverage Integrated Sensing and Communications (ISAC) to enhance data exchange between vehicles and infrastructure in the Internet of Vehicles (IoV). This integration inevitably increases computing demands, risking real-time system stability. Vehicle Edge Computing (VEC) addresses this by offloading tasks to Road Side Units (RSUs), ensuring timely services. Our previous work, the FLSimCo algorithm, which uses local resources for federated Self-Supervised Learning (SSL), has a limitation: vehicles often cannot complete all iteration tasks. Our improved algorithm offloads partial tasks to RSUs and optimizes energy consumption by adjusting transmission power, CPU frequency, and task assignment ratios, balancing local and RSU-based training. Meanwhile, setting an offloading threshold further prevents inefficiencies. Simulation results show that the enhanced algorithm reduces energy consumption and improves the offloading efficiency and accuracy of federated SSL.