Electric Vehicles (EVs) have developed into a complex ecosystem that includes many technical components such as task offloading on mobile devices, the Internet of Vehicles (IoV), and smart grids. Moreover, Edge Computing (EC) is a technique that relocates applications and services closer to end-users. This computing paradigm has been extensively adopted across many scenarios, effectively reducing the load on cloud computing infrastructure and centralized server facilities. EVs are closely related to EC in many aspects, since electric vehicles are typically supported by modern communication and Artificial Intelligence (AI) technologies such as sensor networks, computation offloading, autonomous systems, and blockchain. However, the diversity and heterogeneity of edge devices have raised many security and privacy concerns in electric vehicles, and some complex EC scenarios make addressing these issues even more challenging. In this paper, we provide a comprehensive review of the security and privacy concerns raised by EC in EVs. First, we elaborate on the development, characteristics, and applications of EC in EVs. Next, we describe the typical architectures used to ensure the security and privacy of EC in EVs. Then, we analyze the risks and challenges related to the security and privacy of EC in EVs, focusing on several significant scenarios (e.g., offloading, the IoV, and smart grids). We also discuss current research progress on security and privacy, covering methodologies, architectures, algorithms, insights, and performance. Finally, we discuss several future challenges and issues regarding the security and privacy of EC in EVs.
The explosive proliferation of Large Language Models (LLMs) imposes significant energy and operational burdens on Geographically Distributed Data Centers (GDDCs), thereby demanding an efficient mechanism for LLM task scheduling. While prior geo-distributed scheduling methods reduce cost and carbon emissions by exploiting regional heterogeneity, they largely overlook model and data reuse opportunities and the uncertainty of LLM execution times. In this paper, we introduce GCOS, to the best of our knowledge, the first green scheduling framework that incorporates a dual-cache system for both data and models while jointly optimizing task assignment and cache migration. We first propose a dual-cache mechanism that decouples model and data caching to enable fine-grained reuse and minimize redundant transmissions. Subsequently, we propose the Multi-Agent Cache-aware Cooperative Scheduling (MACCS) algorithm, which leverages reinforcement learning to optimize task placement with a focus on minimizing both carbon emissions and cost. Additionally, we design a lightweight execution time predictor, DiPTree, to address the high variability in task execution times. Extensive experiments on real-world datasets demonstrate that GCOS reduces overall cost by up to 92.6% and carbon emissions by 90.3%, significantly outperforming existing baselines.
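To make the cache-aware placement idea concrete, the following minimal Python sketch scores candidate data centers by a weighted sum of carbon and monetary cost and discounts centers that already cache the required model and data; the `DataCenter` class, parameters, and greedy rule are illustrative assumptions, not the reinforcement-learning MACCS algorithm itself.

```python
# Hypothetical sketch: cache-aware greedy placement over data centers, scoring each
# candidate by a weighted sum of carbon and monetary cost, with a penalty when the
# required model or data are missing from the local caches. All values illustrative.
from dataclasses import dataclass, field

@dataclass
class DataCenter:
    name: str
    carbon_intensity: float   # kg CO2 per unit compute (illustrative)
    price: float              # $ per unit compute (illustrative)
    model_cache: set = field(default_factory=set)
    data_cache: set = field(default_factory=set)

def place_task(task, centers, alpha=0.5, transfer_penalty=2.0):
    """Pick the data center minimizing alpha*carbon + (1-alpha)*cost plus cache misses."""
    def score(dc):
        s = alpha * dc.carbon_intensity * task["compute"] \
            + (1 - alpha) * dc.price * task["compute"]
        if task["model"] not in dc.model_cache:
            s += transfer_penalty   # model must be migrated
        if task["data"] not in dc.data_cache:
            s += transfer_penalty   # data must be transferred
        return s
    return min(centers, key=score)

centers = [
    DataCenter("dc-eu", 0.2, 1.0, {"llama-7b"}, {"corpus-a"}),
    DataCenter("dc-us", 0.4, 0.8),
]
print(place_task({"model": "llama-7b", "data": "corpus-a", "compute": 3.0}, centers).name)
```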
This paper investigates a downlink millimeter-Wave (mmWave) communication system equipped with multiple cooperative Intelligent Reflecting Surfaces (IRSs), aiming to extend mmWave signal coverage and maximize system throughput. To fully exploit the potential of IRSs within a user-centric framework, this study delves into the joint optimization of user association, transmit beamforming, and cooperative passive beamforming. Meanwhile, the impact of IRS locations on user association is analyzed. Given the non-convexity and complexity of the joint optimization problem, a low-complexity optimization algorithm is designed. The algorithm integrates iterative optimization, Lagrangian dual decomposition, and Fractional Programming (FP) techniques. Specifically, the user association problem is optimized using the Lagrangian dual decomposition method, while the joint beamforming is solved via the FP method. Simulation results demonstrate that, compared to traditional methods, the proposed algorithm significantly improves the system sum rate, validating its effectiveness and superiority.
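The sketch below illustrates the Lagrangian dual decomposition step on a toy user-association subproblem: each user independently picks the node maximizing its rate minus a dual price, and the dual variables attached to the capacity constraints are updated by projected subgradient ascent; the rates, capacities, and step size are illustrative assumptions rather than the paper's formulation.

```python
# Hypothetical sketch: Lagrangian dual decomposition for a toy user-association
# subproblem with per-node capacity constraints. Not the paper's exact model.
import numpy as np

rng = np.random.default_rng(0)
rates = rng.uniform(1.0, 5.0, size=(6, 3))   # achievable rate of user u on node n (toy values)
capacity = np.array([2, 2, 2])               # maximum number of users per node
lam = np.zeros(3)                             # dual variables (prices) on the capacities

for _ in range(200):
    # Decomposed per-user subproblem: each user picks the node with the best priced rate.
    assign = np.argmax(rates - lam, axis=1)
    load = np.bincount(assign, minlength=3)
    # Projected subgradient ascent on the capacity constraints.
    lam = np.maximum(0.0, lam + 0.05 * (load - capacity))

print("association:", assign, "loads:", load)
```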
This paper develops a quadruped robot virtual-real interactive control system based on digital twin technology. The system is designed to address key challenges in robotics technology, including real-time performance, low-latency control, high-precision multi-sensor data fusion, stable network transmission, data security, a user-friendly interaction interface, system scalability, and maintainability. The system comprises several functional modules, including a 3D modeling module, a positioning perception module, a virtual interaction module, a wise sensing-transmission module, and a cloud server. The 3D modeling module is responsible for constructing the virtual quadruped robot and motion space scenarios. The positioning perception module integrates LiDAR and Inertial Measurement Unit (IMU) data, utilizing Point-LIO and HDL-localization algorithms for high-precision environmental perception and positioning. The virtual interaction module provides a user-friendly control interface through computer software and the HoloLens headset. The wise sensing-transmission module employs WiFi and 5G links to ensure low-latency, high-bandwidth data transmission, and uses the libhv asynchronous I/O library together with the libssl cryptographic library to guarantee data security. The system runs on the Ubuntu 20.04 platform, offering excellent scalability and maintainability. This system has broad application prospects in industrial manufacturing, construction, disaster rescue, military applications, and educational training. It enhances the performance and reliability of quadruped robot systems and lays a solid foundation for the future development of the industrial metaverse.
Communication infrastructure is often among the first casualties in natural or human-induced disasters, severely impairing the coordination and efficiency of rescue operations. Rapid deployment of Unmanned Aerial Vehicles (UAVs) and satellite systems has thus become essential for establishing robust communication links to support rescue-critical tasks. However, existing emergency communication networks rely heavily on domain expertise for topology design and therefore suffer from issues such as inefficient resource allocation and network congestion. To address these challenges, we present TopoLLM, a framework that leverages Large Language Models (LLMs) for tool-driven optimization of emergency network topologies. The framework combines the reasoning capabilities of the LLM with TopoTool, a domain-specific optimization toolkit engineered for high-precision and load-balanced network planning in disaster scenarios. Guided by an adaptive tool-selection mechanism, TopoLLM autonomously generates resilient topologies and allocates resources intelligently, reducing the need for extensive human intervention. Experimental evaluations on simulated disaster scenarios verify that TopoLLM can rapidly generate high-accuracy, robust topologies, achieving notable performance improvements compared with existing approaches.
Driven by globalization and digitization, the Mobile Industrial Supply Chain Internet of Things (IoT) has gradually developed, utilizing mobile devices and IoT technologies to enable real-time monitoring and efficient responses across various stages. However, with the growing demand for high-frequency data exchange, the Mobile Industrial Supply Chain IoT faces significant challenges in data security, authentication, and privacy protection. This paper proposes a security authentication scheme based on blockchain and group key management, leveraging the decentralized and tamper-resistant features of blockchain, the privacy-preserving authentication method of Zero-Knowledge Proofs (ZKP), and a hierarchical key management mechanism based on binary key trees. This approach aims to enhance the security and scalability of the Mobile Industrial Supply Chain IoT. The experiments simulate scenarios such as dynamic node addition and key updates, evaluating performance in terms of encryption, decryption, and key management efficiency and demonstrating the scheme's superiority in multi-party collaborative environments.
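As a rough illustration of the binary key tree mechanism, the sketch below refreshes only the O(log n) keys on a member's root path when that member joins or leaves; the `KeyTree` class and heap-style indexing are hypothetical simplifications, and the blockchain and ZKP components are not modeled.

```python
# Hypothetical sketch: a binary key tree (logical key hierarchy) in which a joining
# or leaving member triggers refresh of only the keys on its root path, instead of
# re-keying the whole group. Indexing and key generation are illustrative.
import os

def fresh_key():
    return os.urandom(16)

class KeyTree:
    def __init__(self, depth):
        # Node keys stored heap-style: node i has children 2i+1 and 2i+2.
        self.keys = [fresh_key() for _ in range(2 ** (depth + 1) - 1)]
        self.depth = depth

    def path_to_root(self, leaf_index):
        node = 2 ** self.depth - 1 + leaf_index
        while node > 0:
            yield node
            node = (node - 1) // 2
        yield 0   # the root holds the group key

    def rekey_member(self, leaf_index):
        """Refresh every key from the member's leaf up to the group key."""
        updated = []
        for node in self.path_to_root(leaf_index):
            self.keys[node] = fresh_key()
            updated.append(node)
        return updated   # only depth + 1 keys change

tree = KeyTree(depth=3)            # supports 8 members
print("rekeyed nodes:", tree.rekey_member(5))
```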
Multi-access Edge Computing (MEC) enhances computational efficiency by enabling resource-constrained User Devices (UDs) to offload tasks to edge servers. Compared to traditional edge servers fixed at Small Cellular Base Stations (SBSs), mobile vehicles with idle resources can serve as mobile edge servers and reduce UDs’ task latency owing to their closer proximity to the UDs. However, because of vehicles’ limited computation resources and the intense competition among UDs, the computation resources that vehicles can provide to UDs are uncertain, which poses a challenge for UDs in making task offloading decisions. In this paper, we establish a risk-aware task offloading framework in vehicle-assisted MEC networks with computation resource uncertainty, where UDs make offloading decisions by considering their risk-aware behavior. We first characterize and model UDs’ risk-aware behavior based on Prospect Theory (PT) and then formulate a user satisfaction maximization problem by optimizing the offloading strategy of UDs. To solve it, we reformulate the problem among multiple users as a non-cooperative game and prove the uniqueness of the Pure Nash Equilibrium (PNE). We also propose a low-complexity distributed iterative optimization algorithm to obtain the optimal offloading strategy. The simulation results demonstrate that the proposed scheme significantly enhances the satisfaction utility of UDs and reduces the failure probability of vehicles compared to other benchmark methods.
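The following toy sketch illustrates the prospect-theoretic flavor of the offloading game: a standard PT value function evaluates delay outcomes against a reference point, and best-response dynamics are iterated until no user wants to switch; the latency model, PT parameters, and two-action strategy space are illustrative assumptions, not the paper's system model or equilibrium analysis.

```python
# Hypothetical sketch: prospect-theoretic utilities plus best-response dynamics for a
# toy local-vs-vehicle offloading game. Parameters and the latency model are illustrative.
import numpy as np

def pt_value(x, ref=0.0, alpha=0.88, beta=0.88, lam=2.25):
    """Standard prospect-theory value of outcome x relative to reference point ref."""
    d = x - ref
    return d ** alpha if d >= 0 else -lam * (-d) ** beta

def latency(strategy, user, local_delay, base_delay):
    """Offloading delay grows with the number of users sharing the vehicle."""
    if strategy[user] == 0:                      # compute locally
        return local_delay[user]
    sharers = int(np.sum(strategy))              # users currently offloading
    return base_delay[user] * sharers

def best_response_dynamics(n_users, rounds=50, seed=0):
    rng = np.random.default_rng(seed)
    local_delay = rng.uniform(2.0, 4.0, n_users)
    base_delay = rng.uniform(0.5, 1.5, n_users)
    strategy = np.zeros(n_users, dtype=int)
    for _ in range(rounds):
        changed = False
        for u in range(n_users):
            utils = []
            for s in (0, 1):                     # 0 = local, 1 = offload to vehicle
                trial = strategy.copy(); trial[u] = s
                utils.append(pt_value(-latency(trial, u, local_delay, base_delay),
                                      ref=-local_delay[u]))
            best = int(np.argmax(utils))
            if best != strategy[u]:
                strategy[u] = best; changed = True
        if not changed:
            break   # no user deviates: a pure-strategy equilibrium of this toy game
    return strategy

print(best_response_dynamics(6))
```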
Semantic Communication (SemCom) is a promising paradigm for future 6G networks, where communication performance hinges on the effectiveness of SemCom models, particularly the source-channel encoder and decoder. However, training these models faces significant challenges. First, the privacy-sensitive nature of communication data discourages users from uploading data to centralized servers. Second, heterogeneous local data distributions and diverse communication counterparts of different users necessitate personalized SemCom models. Specifically, a user’s encoder must align with its receivers’ decoders and the transmitted data distribution, while its decoder must adapt to the user’s transmitters and received data distribution. To address these challenges, we propose FineFed, a personalized federated learning method with collaborative fine-tuning. Initially, a unified global model is trained in a distributed manner via federated learning, eliminating data uploads. Subsequently, users iteratively fine-tune encoders and decoders collaboratively, achieving SemCom model personalization. For encoder fine-tuning, decoders are fixed and shared with transmitters to address distributed loss calculation issues. Each encoder is fine-tuned using the idea of multi-task learning, treating communication with each receiver as a separate task. Then, encoders are fixed, and a user shares its decoder with its own transmitters. These transmitters collaboratively fine-tune the user’s decoder using the idea of federated multi-task learning. Experimental results demonstrate that FineFed improves the average performance of federated SemCom models by 1%-7%, bringing it closer to the performance of centrally trained models.
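The sketch below captures the alternating fine-tuning idea with toy PyTorch models: the encoder is first tuned against several frozen receiver decoders (one task per receiver), and the decoder is then tuned with encoders frozen; the architectures, data, and loss are placeholders rather than FineFed's actual SemCom models.

```python
# Hypothetical sketch of the alternating fine-tuning idea: tune the encoder against
# fixed receiver decoders (one "task" per receiver), then freeze encoders and tune the
# decoder. Model sizes, data, and the MSE objective are toy stand-ins.
import torch
import torch.nn as nn

dim = 16
encoder = nn.Linear(dim, 8)                       # user's source-channel encoder
decoders = [nn.Linear(8, dim) for _ in range(3)]  # frozen decoders of three receivers
data = torch.randn(64, dim)                       # stand-in for local (private) samples

# Phase 1: multi-task fine-tuning of the encoder; the receiver decoders stay frozen.
opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)
for _ in range(100):
    loss = sum(nn.functional.mse_loss(dec(encoder(data)), data) for dec in decoders)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: encoders frozen; the user's own decoder is fine-tuned on signals produced
# by its transmitters' frozen encoders (reusing `encoder` here as a stand-in).
decoder = nn.Linear(8, dim)
opt = torch.optim.Adam(decoder.parameters(), lr=1e-2)
for _ in range(100):
    with torch.no_grad():
        z = encoder(data)
    loss = nn.functional.mse_loss(decoder(z), data)
    opt.zero_grad(); loss.backward(); opt.step()

print("final reconstruction loss:", float(loss))
```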
This paper proposes a Deep Reinforcement Learning (DRL) algorithm for user scheduling in Millimeter Wave (mmWave) networks, which utilizes a Channel Knowledge Map (CKM) for knowledge transfer to enhance the learning of scheduling strategies. The user scheduling and link configuration problems are modeled as a multi-queue system in which each queue represents the data demand of an individual user. This setup allows the base station to make dynamic scheduling decisions based on changing environmental conditions, and it facilitates efficient management of user-specific requirements while addressing the challenges posed by dynamic network environments. Our model incorporates relay selection, codebook selection, and beam tracking to support flexible and efficient resource allocation. In contrast to traditional channel model-based optimization, we design algorithms for scheduling policy pre-training using CKMs, which provide information about the channel between specific pairs of locations. Specifically, we assume that the CKM is fully available, so that knowledge transfer gives the complex scheduling network a better starting point and a more favorable gradient direction. This integration of CKM with knowledge transfer significantly accelerates DRL convergence and enhances performance stability. Simulation results confirm the effectiveness of the proposed approach: relative to the baseline methods, integrating CKM with knowledge transfer accelerated the convergence of the DRL algorithm by approximately 20%, maintained the delay within 30 milliseconds, and reduced the average queue length by nearly 30%.
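A minimal sketch of the multi-queue scheduling view is given below: per-user rates looked up from a CKM-style table act as priors in a max-weight-style serving rule; the arrival process, rate table, and rule are illustrative stand-ins for the DRL policy and its CKM-based pre-training.

```python
# Hypothetical sketch: a toy multi-queue scheduler in which per-user rates from a
# CKM-style lookup warm-start the scheduling priority. Arrivals and rates are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_users, horizon = 4, 200
ckm_rate = rng.uniform(1.0, 3.0, n_users)         # prior per-user rate from the CKM
queues = np.zeros(n_users)                        # one backlog queue per user
arrivals = rng.poisson(1.0, size=(horizon, n_users))

total_queue = 0.0
for t in range(horizon):
    queues += arrivals[t]
    # Max-weight style rule: serve the user with the largest backlog x expected rate.
    u = int(np.argmax(queues * ckm_rate))
    served = min(queues[u], ckm_rate[u])
    queues[u] -= served
    total_queue += queues.sum()

print("average total queue length:", total_queue / horizon)
```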
Endogenous security in next-generation wireless communication systems has attracted increasing attention in recent years. A typical solution to endogenous security problems is Quantum Key Distribution (QKD), where unconditional security can be achieved thanks to the inherent properties of quantum mechanics. Continuous Variable-Quantum Key Distribution (CV-QKD) enjoys a high Secret Key Rate (SKR) and good compatibility with existing optical communication infrastructure. Traditional CV-QKD usually employs coherent receivers to detect coherent states, whose detection performance is restricted by the standard quantum limit. In this paper, we employ a generalized Kennedy receiver, called the CD-Kennedy receiver, to enhance the detection performance of coherent states in turbulent channels, where the Equal-Gain Combining (EGC) method is used to combine the outputs of CD-Kennedy receivers. In addition, we derive the SKR of a post-selection-based CV-QKD protocol using both the CD-Kennedy receiver and the homodyne receiver with EGC in turbulent channels. We further propose an equivalent transmittance method to facilitate the calculation of both the Bit-Error Rate (BER) and the SKR. Numerical results show that the CD-Kennedy receiver can outperform the homodyne receiver in turbulent channels in terms of both BER and SKR performance. We find that the BER and SKR performance advantages of the CD-Kennedy receiver over the homodyne receiver exhibit opposite trends as the average transmittance increases, which indicates that separate system settings should be employed for communication and key distribution purposes. We also demonstrate that the SKR performance of the CD-Kennedy receiver is much more robust than that of the homodyne receiver in turbulent channels.
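The following sketch only illustrates equal-gain combining over a lognormal turbulence model and a crude notion of equivalent transmittance (the single-channel transmittance reproducing the same mean combined amplitude); the fading parameters and normalization are assumptions and do not reproduce the paper's CD-Kennedy receiver analysis.

```python
# Hypothetical sketch: equal-gain combining of several receiver branches over lognormal
# turbulence, and an "equivalent transmittance" taken as the value reproducing the same
# mean combined amplitude. All parameters are illustrative, not the paper's model.
import numpy as np

rng = np.random.default_rng(2)
branches, samples = 4, 100_000
sigma = 0.3                                        # lognormal fading strength (illustrative)
t = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=(samples, branches))
t = np.clip(t, 0.0, 1.0)                           # channel transmittance per branch

amplitude = np.sqrt(t)                             # field amplitude scales as sqrt(T)
egc = amplitude.sum(axis=1) / np.sqrt(branches)    # equal-gain combining of the branches

t_equiv = egc.mean() ** 2                          # transmittance of an equivalent single channel
print("mean combined amplitude:", egc.mean(), "equivalent transmittance:", t_equiv)
```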
Amplitude Phase Shift Keying (APSK) is more suitable for the nonlinear channels of Low Earth Orbit (LEO) satellite communication systems compared to Quadrature Amplitude Modulation (QAM). To tackle challenges posed by Direct Current (DC) interference and high demodulation complexity, we propose an APSK demodulation algorithm based on K-means clustering. Initially, static DC components are calculated and removed from the received APSK signals. Subsequently, the estimated APSK constellation points serve as initial centers for K-means clustering. These centers are refined through the K-means process and act as theoretical APSK constellation points for the Max-Log-MAP demodulation algorithm, effectively eliminating residual DC. We then introduce a low-complexity APSK demodulation algorithm that utilizes the symmetry of constellation points, along with the Euclidean distance between DC-eliminated signals and these constellation points, to reduce the candidate set of constellation points. Simulation results indicate that for 32-APSK, our proposed demodulation submodule reduces computational complexity to approximately one-third that of the Max-Log-MAP algorithm while improving Bit Error Rate (BER) performance by about 0.23 dB. Furthermore, end-to-end simulation experiments conducted within LEO satellite communication systems demonstrate that our approach not only maintains this complexity advantage but also enhances BER performance by approximately 1.1 dB.
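A minimal sketch of the demodulation pipeline is given below, using a toy 8-APSK constellation instead of 32-APSK and a hard nearest-center decision in place of Max-Log-MAP: the estimated static DC is removed, K-means is seeded with the ideal constellation points, and the refined centers absorb any residual DC.

```python
# Hypothetical sketch: DC removal + constellation-seeded K-means + nearest-center
# decision for a toy 8-APSK constellation. The paper targets 32-APSK with Max-Log-MAP.
import numpy as np

def apsk8():
    """Toy 8-APSK constellation: 4 inner-ring and 4 outer-ring points."""
    inner = 1.0 * np.exp(1j * (np.pi / 4 + np.arange(4) * np.pi / 2))
    outer = 2.0 * np.exp(1j * np.arange(4) * np.pi / 2)
    return np.concatenate([inner, outer])

rng = np.random.default_rng(3)
const = apsk8()
true_idx = rng.integers(0, len(const), 2000)
tx = const[true_idx]
dc = 0.3 + 0.2j                                       # static DC interference (illustrative)
noise = 0.1 * (rng.standard_normal(2000) + 1j * rng.standard_normal(2000))
rx = tx + dc + noise

rx0 = rx - rx.mean()                                  # step 1: remove the estimated static DC

centers = const.copy()                                # step 2: K-means seeded with ideal points
for _ in range(10):
    labels = np.argmin(np.abs(rx0[:, None] - centers[None, :]), axis=1)
    for k in range(len(centers)):
        if np.any(labels == k):
            centers[k] = rx0[labels == k].mean()      # refined centers absorb residual DC

# Step 3: hard decision by nearest refined center (the paper uses Max-Log-MAP soft values).
decided = np.argmin(np.abs(rx0[:, None] - centers[None, :]), axis=1)
print("symbol error rate:", np.mean(decided != true_idx))
```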
Driven by the increasing demand for efficient data transmission, massive Multiple-Input Multiple-Output (MIMO) systems have emerged as a key technology for future communication systems. However, effective utilization of MIMO relies heavily on accurate Channel State Information (CSI) that is fed back to the base station, which poses significant challenges due to the overhead associated with CSI feedback, especially with the increasing number of antennas. To overcome these drawbacks, this paper proposes a Deep Learning (DL) scheme to improve the CSI feedback, presenting a network named CsiDNet, which compresses CSI at the user end and decompresses it at the base station side. In addition, an auxiliary module is designed to restore CSI information under error-prone scenarios, enhancing the robustness of the system. Extensive performance analysis and simulations demonstrate that CsiDNet achieves an improvement of 2.7 dB and 0.1 dB in terms of Normalized Mean Square Error (NMSE) and Squared Generalized Cosine Similarity (SGCS), respectively, compared to other models, while significantly reducing computational complexity. The auxiliary module further improves the NMSE and SGCS performance by 4 dB and 0.1 dB, respectively, reflecting its effectiveness in recovering error-prone CSI components. Overall, our research improves the accuracy and efficiency of CSI feedback while enhancing the system’s robustness against real-world transmission challenges.
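The sketch below shows the basic encode-at-the-user / decode-at-the-base-station idea behind learned CSI feedback, using a tiny fully connected autoencoder and an NMSE readout; the layer sizes, random training data, and the absence of the auxiliary error-recovery module are simplifying assumptions and do not reflect CsiDNet's architecture.

```python
# Hypothetical sketch: a toy autoencoder for CSI feedback. The "encoder" compresses a
# flattened CSI vector at the UE side; the "decoder" reconstructs it at the base station.
import torch
import torch.nn as nn

csi_dim, code_dim = 128, 16
encoder = nn.Sequential(nn.Linear(csi_dim, 64), nn.ReLU(), nn.Linear(64, code_dim))
decoder = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU(), nn.Linear(64, csi_dim))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

csi = torch.randn(512, csi_dim)                 # stand-in for flattened CSI samples
for _ in range(300):
    rec = decoder(encoder(csi))                 # UE compresses, BS reconstructs
    loss = nn.functional.mse_loss(rec, csi)
    opt.zero_grad(); loss.backward(); opt.step()

nmse = (rec - csi).pow(2).sum() / csi.pow(2).sum()
print("NMSE (dB):", 10 * torch.log10(nmse).item())
```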
In the field of video scene graph generation, spatio-temporal feature extraction and the long-tail effect in relationship classification are core research issues. This paper proposes extracting spatio-temporal features with a global-local Transformer model for video scene graph generation. Methods based on the Transformer architecture and the attention mechanism enrich the semantic information of spatio-temporal features in videos, thereby improving the accuracy of relationship classification. In the feature processing module, pose features are introduced to strengthen the semantic representation of objects. In the spatial feature encoding module, a local spatial visibility matrix based on bounding boxes and the key points of human pose features is proposed to address the issue of insufficient attention to local details in traditional Transformer encoders. In the temporal feature encoding module, a global random frame extraction strategy is proposed, which considers global temporal features while also taking computational complexity into account. In the relation classification module, to address the uneven distribution of object and relation categories in the Action Genome dataset, a relation classification loss function based on bipartite graph matching and Focal Loss is proposed, which alleviates the long-tail effect in relation classification and improves accuracy.
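For reference, here is a minimal implementation of the standard multi-class Focal Loss, FL(p_t) = -(1 - p_t)^gamma log(p_t), which down-weights easy head-class examples; the gamma value and toy logits are illustrative, and the paper additionally couples the loss with bipartite graph matching, which is not shown here.

```python
# Hypothetical sketch: standard multi-class focal loss, illustrating how easy (head-class)
# examples are down-weighted. Gamma and the toy inputs are illustrative values.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)   # log-prob of the true class
    pt = log_pt.exp()
    return (-(1 - pt) ** gamma * log_pt).mean()                 # (1 - p_t)^gamma modulating factor

logits = torch.randn(8, 5)                  # 8 relation proposals, 5 relation classes
targets = torch.randint(0, 5, (8,))
print(focal_loss(logits, targets))
```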
With the development of Sixth-Generation (6G) mobile communication technologies, Low Earth Orbit (LEO) satellite communication systems have become extremely important in mobile communications owing to their large coverage, high efficiency, and low cost. However, highly dynamic LEO satellite channels cause severe time-frequency dual selective fading, significantly impairing the performance of conventional single time- or frequency-domain synchronization algorithms and limiting their applicability. To address these challenges, this paper proposes a synchronization algorithm based on Linear Frequency Modulation (LFM) signals and the Fractional Fourier Transform (FRFT). Exploiting the inherent robustness of LFM signals against frequency deviations and multipath effects, coupled with their energy concentration property in the optimal fractional Fourier domain, the proposed algorithm enables efficient synchronization with enhanced resilience to time-frequency variations. Furthermore, LFM preamble sequences are optimally designed for diverse channel conditions. This work presents a theoretical analysis of the time-frequency nonstationary characteristics of LEO satellite channels and discusses the performance limitations of traditional synchronization algorithms. The proposed integrated FRFT-LFM synchronization framework and sequence optimization scheme are rigorously evaluated via comprehensive simulations. The results demonstrate substantial improvements in synchronization accuracy and computational efficiency compared with conventional methods, particularly under time-frequency dual selective fading LEO satellite channels. The algorithm provides a robust and reliable solution for time-frequency synchronization in LEO satellite communication systems, thereby enhancing overall system performance and reliability.
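The toy sketch below illustrates why an LFM preamble is a robust synchronization signal: a simple matched-filter correlation still localizes the preamble under a residual Doppler shift. Note that the paper's algorithm instead exploits energy concentration in the optimal fractional Fourier domain; that FRFT search is not implemented here, and all parameters are illustrative.

```python
# Hypothetical sketch: timing estimation with an LFM (chirp) preamble via matched
# filtering, under an uncompensated Doppler shift. Parameters are illustrative.
import numpy as np

fs, T, B = 1e6, 1e-3, 200e3                     # sample rate, chirp duration, sweep bandwidth
t = np.arange(int(fs * T)) / fs
preamble = np.exp(1j * np.pi * (B / T) * t**2)  # linear frequency modulated preamble

rng = np.random.default_rng(4)
delay, cfo = 350, 5e3                           # unknown delay (samples) and Doppler (Hz)
rx = np.concatenate([np.zeros(delay), preamble, np.zeros(200)])
rx = rx * np.exp(2j * np.pi * cfo * np.arange(len(rx)) / fs)          # apply Doppler
rx += 0.3 * (rng.standard_normal(len(rx)) + 1j * rng.standard_normal(len(rx)))

corr = np.abs(np.correlate(rx, preamble, mode="valid"))               # matched filter
print("estimated delay:", int(np.argmax(corr)), "true delay:", delay)
```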
The emerging sixth-generation networks demand ultra-high-speed wideband transmissions. In this context, this study proposes a novel Reconfigurable Intelligent Surface (RIS)-aided Incremental Relaying (IR) scheme that combines the complementary benefits of RISs and relay systems to enhance the achievable rate. In the proposed system, a relay is exploited to retransmit the source signal when the destination fails to decode the RIS-aided signal correctly. To assess the system performance, we analytically derive closed-form expressions for the outage probability and throughput of the RIS-aided IR scheme using the central limit theorem. Simulation results validate the analytical findings and reveal that the proposed RIS-aided IR scheme significantly outperforms the conventional pure RIS and hybrid RIS-relay schemes in terms of both outage probability and throughput, highlighting its potential for improving communication-system performance.
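A small Monte Carlo sketch of the outage-analysis idea follows: with ideal phase alignment the RIS channel amplitude is a sum of per-element Rayleigh products, and its CLT (Gaussian) approximation yields a closed-form outage estimate; the element count, SNR threshold, and scaling are illustrative assumptions rather than the paper's system parameters.

```python
# Hypothetical sketch: Monte Carlo outage probability of an N-element RIS link with ideal
# phase alignment, compared with a CLT (Gaussian) approximation of the summed amplitude.
import numpy as np
from math import sqrt, pi, erfc

rng = np.random.default_rng(5)
N, trials, snr_th, rho = 16, 200_000, 3.6, 0.05     # elements, trials, SNR threshold, SNR scale

h = np.abs(rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / sqrt(2)
g = np.abs(rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))) / sqrt(2)
A = (h * g).sum(axis=1)                              # phase-aligned cascaded amplitude
outage_mc = np.mean(rho * A**2 < snr_th)

# CLT: each |h_i||g_i| term has mean pi/4 and variance 1 - pi^2/16 (unit-power Rayleigh).
mu, var = N * pi / 4, N * (1 - pi**2 / 16)
a_th = sqrt(snr_th / rho)
outage_clt = 0.5 * erfc((mu - a_th) / sqrt(2 * var))
print("Monte Carlo:", outage_mc, "CLT approximation:", outage_clt)
```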