The sixth-generation (6G) mobile network is expected to realize the vision of digital twins and ubiquitous intelligence. In contrast to the fifth-generation (5G) mobile network, which focuses only on communications, 6G mobile networks must natively support new capabilities such as sensing, computing, artificial intelligence (AI), big data, and security while facilitating Everything as a Service. Although 5G deployment has demonstrated that network automation and intelligence can simplify network operation and maintenance (O&M), adding these functionalities as external components has resulted in low service efficiency and high operational costs. In this study, a technology framework for a 6G autonomous radio access network (RAN) is proposed to achieve high-level network autonomy that embraces the design of native cloud, native AI, and the network digital twin (NDT). First, a service-based architecture is proposed to re-architect the RAN protocol stack; it flexibly orchestrates services and functions on demand and customizes them into cloud-native services. Second, a native AI framework is structured to support the diverse use cases of network O&M by orchestrating the communications, AI models, data, and computing power that these use cases demand. Third, a digital twin network is developed as a virtual environment for the training, pre-validation, and tuning of AI algorithms and neural networks, avoiding the unexpected O&M losses that AI applications might otherwise cause. The combination of native AI and the NDT facilitates network autonomy by building closed-loop management and optimization for the RAN.
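To make the closed loop concrete, the following is a minimal sketch of the pre-validation step described above: a candidate O&M policy is evaluated against the NDT before it is allowed to act on the live RAN. All class and method names (NetworkDigitalTwin, OandMPolicy, closed_loop_step) and the KPI model are illustrative assumptions, not part of any standardized 6G API.

```python
# Sketch of an NDT-gated closed loop: candidate policies are scored on the
# digital twin, and only a twin-validated policy is deployed to the live RAN.
import random


class NetworkDigitalTwin:
    """Virtual replica of the RAN used for risk-free pre-validation."""

    def __init__(self, kpi_baseline: float):
        self.kpi_baseline = kpi_baseline

    def evaluate(self, policy: "OandMPolicy") -> float:
        # Replay the candidate policy on twin state; return a predicted KPI.
        return self.kpi_baseline + policy.expected_gain + random.gauss(0, 0.01)


class OandMPolicy:
    """Candidate O&M optimization policy (e.g., a tuned parameter set)."""

    def __init__(self, expected_gain: float):
        self.expected_gain = expected_gain


def closed_loop_step(twin, candidates, threshold: float):
    """Deploy the best candidate only if the twin predicts it is safe."""
    scored = [(twin.evaluate(p), p) for p in candidates]
    safe = [sp for sp in scored if sp[0] >= threshold]
    return max(safe, key=lambda sp: sp[0])[1] if safe else None


twin = NetworkDigitalTwin(kpi_baseline=0.8)
choice = closed_loop_step(twin, [OandMPolicy(0.05), OandMPolicy(-0.2)], 0.82)
print("deploy new policy" if choice else "hold current configuration")
```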
A communication network can natively provide artificial intelligence (AI) training services to resource-limited network entities, enabling them to quickly build accurate digital twins and achieve high-level network autonomy. Because the entities that require digital twins and those that provide AI services may belong to different operators, incentive mechanisms are needed to maximize the utility of both. In this paper, we model AI training task offloading for digital twins in native AI networks as a Stackelberg game, with the base-station operator as the leader and the resource-limited network entities as the followers. We analyze the game and derive the Stackelberg equilibrium solutions. Because the wireless network environment is time-varying, we further design a deep reinforcement learning algorithm to achieve dynamic pricing and task offloading. Finally, extensive simulations verify the effectiveness of our proposal.
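As a toy illustration of the leader-follower structure, the sketch below solves a Stackelberg pricing game by backward induction: followers choose offloading fractions as best responses to the announced price, and the leader picks the price anticipating those responses. The utility forms, parameter values, and the grid search standing in for the paper's DRL-based dynamic pricing are all illustrative assumptions.

```python
# Backward-induction sketch of a Stackelberg pricing game for task offloading.
import numpy as np

a = np.array([2.0, 3.0, 1.5])   # followers' valuation of twin accuracy (assumed)
d = np.array([1.0, 0.5, 2.0])   # offloaded AI-training task sizes (assumed units)
c = 0.2                          # leader's unit cost of serving offloaded work


def follower_best_response(price: float) -> np.ndarray:
    """Follower i maximizes a_i*ln(1+x_i) - price*d_i*x_i over x_i in [0, 1]."""
    # First-order condition a_i/(1+x_i) = price*d_i  ->  x_i = a_i/(price*d_i) - 1
    return np.clip(a / (price * d) - 1.0, 0.0, 1.0)


def leader_utility(price: float) -> float:
    """Leader earns (price - c) per unit of offloaded work, given the replies."""
    x = follower_best_response(price)
    return (price - c) * float(np.sum(x * d))


# Leader anticipates the followers' best responses and prices accordingly.
prices = np.linspace(0.25, 5.0, 400)
best_p = max(prices, key=leader_utility)
print(f"equilibrium price ~ {best_p:.2f}, "
      f"offloading fractions = {follower_best_response(best_p).round(2)}")
```

In the time-varying setting of the paper, the grid search over a static price would be replaced by a learned pricing policy.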
Task diversity is one of the biggest challenges for future sixth-generation (6G) networks. Putting the task at the center and driving the dynamic 6G radio access network (RAN) with artificial intelligence (AI) is necessary to accurately meet the personalized demands of users. In a monolithic RAN, however, AI can only configure parameters; it cannot schedule the functions themselves. The development trend of 6G RANs is therefore toward greater dynamic capability and ease of scheduling. In this paper, we propose a service-based RAN architecture that can deploy decoupled RAN functions and customize networks according to tasks. Protocol analysis shows that the interaction between RAN control plane (CP) functions is complex and should be decoupled according to the principles of high cohesion and low coupling. Based on graph theory rather than expert experience, we design a RAN decoupling scheme: the functional connections and interactions of the CP are represented as an undirected weighted graph, and the CP is then decoupled through a minimum spanning tree. An integrated RAN-CN (core network) decoupling scheme is then introduced, considering the duplicated and redundant functions of the RAN and CN. The granularity of decoupling in a service-based RAN is determined by analyzing the flexibility of decoupling, the complexity of signaling, and the processing delay. We find it most appropriate to decouple the RAN CP into four services. The integrated RAN-CN decoupling resolves the technical bottleneck of low serial efficiency over the NG interface, supporting AI-based global service scheduling.
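The spanning-tree idea can be sketched as follows: CP functions become nodes, edge weights encode interaction intensity, and services emerge by cutting the weakest edges of a maximum-weight spanning tree so that strongly coupled functions stay together. The function names and weights below are hypothetical placeholders, not the paper's measured signaling data.

```python
# Graph-theoretic decoupling sketch: cluster CP functions into k services by
# cutting the k-1 weakest edges of a maximum spanning tree (high cohesion
# inside services, low coupling between them).
import networkx as nx

# Hypothetical RAN CP interaction graph (weight = interaction intensity).
edges = [
    ("RRC", "MobilityCtrl", 9), ("RRC", "ConnCtrl", 8),
    ("MobilityCtrl", "HandoverExec", 7), ("ConnCtrl", "BearerMgmt", 6),
    ("BearerMgmt", "QoSCtrl", 5), ("HandoverExec", "MeasCfg", 2),
    ("QoSCtrl", "MeasCfg", 1), ("RRC", "BroadcastCtrl", 3),
]
G = nx.Graph()
G.add_weighted_edges_from(edges)

# Keep the strongest interactions inside services...
tree = nx.maximum_spanning_tree(G)

# ...then cut the k-1 weakest tree edges to split the CP into k services.
k = 4
weakest = sorted(tree.edges(data="weight"), key=lambda e: e[2])[: k - 1]
tree.remove_edges_from([(u, v) for u, v, _ in weakest])

for i, service in enumerate(nx.connected_components(tree), 1):
    print(f"service {i}: {sorted(service)}")
```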
With the development of satellite communication technology, satellite-terrestrial integrated networks (STINs), which integrate satellite networks and ground networks, can realize globally seamless coverage of communication services. However, owing to the intricacies of network dynamics, resource heterogeneity, and unpredictable user mobility, dynamic resource allocation within such networks faces formidable challenges. The digital twin (DT), as an emerging technique, can mirror a physical network in a virtual one to monitor, analyze, and optimize the physical network. Nevertheless, when constructing a DT model, the deployment location and resource allocation of DTs may adversely affect performance. We therefore propose a STIN model that deploys DTs at nodes across multiple layers, alleviating the limited deployment flexibility of single-layer traditional edge networks. To address the DT deployment challenge, we formulate a multi-layer DT deployment problem in the STIN that aims to reduce system delay, and we adopt a multi-agent reinforcement learning (MARL) scheme to explore the optimal deployment strategy. Simulation results show that the proposed scheme notably reduces system delay.
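A stripped-down illustration of the MARL deployment idea is sketched below: each agent places one DT on a network layer, and all agents share a reward equal to the negative system delay. The layer set, the congestion-style delay model, and the use of stateless independent Q-learning are simplifying assumptions, not the paper's algorithm.

```python
# Independent Q-learning sketch for multi-layer DT placement in a STIN.
import random

LAYERS = [0, 1, 2]                     # satellite / terrestrial edge / cloud
BASE_DELAY = {0: 4.0, 1: 1.0, 2: 2.5}  # assumed per-layer access delay
N_AGENTS, EPISODES, EPS, ALPHA = 3, 3000, 0.1, 0.1


def system_delay(actions):
    # Congestion effect: a layer slows down when several DTs share it.
    return sum(BASE_DELAY[a] * actions.count(a) for a in actions)


Q = [[0.0 for _ in LAYERS] for _ in range(N_AGENTS)]  # one Q-table per agent

for _ in range(EPISODES):
    acts = [random.choice(LAYERS) if random.random() < EPS
            else max(LAYERS, key=lambda l: Q[i][l]) for i in range(N_AGENTS)]
    r = -system_delay(acts)            # common reward: minimize system delay
    for i, a in enumerate(acts):
        Q[i][a] += ALPHA * (r - Q[i][a])

print("learned placement:",
      [max(LAYERS, key=lambda l: Q[i][l]) for i in range(N_AGENTS)])
```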
As the underlying foundation of the digital twin network (DTN), the digital twin channel (DTC) can accurately depict electromagnetic wave propagation over the air interface to support DTN-based 6G wireless networks. Since electromagnetic wave propagation is shaped by the environment, constructing the relationship between the environment and radio wave propagation is key to implementing the DTC. In existing methods, the environmental information fed into the neural network is high-dimensional, and the correlation between the environment and the channel is unclear, which makes the construction of this relationship highly complex. To solve this issue, we propose a unified construction method for radio environment knowledge (REK), inspired by electromagnetic wave propagation properties, that quantifies the propagation contribution based on easily obtainable location information. An effective scatterer determination scheme based on random geometry is proposed, which reduces redundancy by 90%, 87%, and 81% in scenarios with complete openness, impending blockage, and complete blockage, respectively. We also conduct a path loss prediction task based on a lightweight convolutional neural network (CNN) with a simple two-layer convolutional structure to validate the effectiveness of REK. The results show that only 4 ms of testing time is needed, with a prediction error of 0.3, effectively reducing network complexity.
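For concreteness, a lightweight two-layer convolutional regressor of the kind described above can be sketched as follows. The input grid size, channel counts, and training details are illustrative assumptions about how an REK map might be consumed, not the paper's exact configuration.

```python
# PyTorch sketch: a two-layer CNN mapping an REK grid to a scalar path loss.
import torch
import torch.nn as nn


class REKPathLossCNN(nn.Module):
    def __init__(self, grid: int = 32):
        super().__init__()
        self.features = nn.Sequential(  # the simple two-layer conv structure
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * (grid // 4) ** 2, 1)  # scalar path loss

    def forward(self, rek: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(rek).flatten(1))


model = REKPathLossCNN()
rek_map = torch.rand(4, 1, 32, 32)   # batch of REK grids (assumed format)
loss = nn.MSELoss()(model(rek_map).squeeze(1), torch.rand(4))
loss.backward()                       # one illustrative training step
print(model(rek_map).shape)           # torch.Size([4, 1])
```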
Optimizing the deployment of large language models (LLMs) in edge computing environments is critical for enhancing privacy and computational efficiency. On the path toward efficient wireless LLM inference at the edge, this study comprehensively analyzes the impact of different splitting points in mainstream open-source LLMs. Accordingly, it introduces a framework, inspired by model-based reinforcement learning, for determining the optimal splitting point across the edge and the user equipment. By incorporating a reward surrogate model, our approach significantly reduces the computational cost of frequent performance evaluations. Extensive simulations demonstrate that the method effectively balances inference performance and computational load under varying network conditions, providing a robust solution for LLM deployment in decentralized settings.
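The role of the reward surrogate can be illustrated with the sketch below: a cheap regression model is fitted on a handful of expensive end-to-end evaluations and then ranks all candidate split layers, so the costly evaluation budget stays small. The reward shape, candidate features, and use of ridge regression are illustrative assumptions.

```python
# Surrogate-assisted split-point search: evaluate a few splits expensively,
# fit a cheap reward surrogate, and rank all candidates with the surrogate.
import numpy as np
from sklearn.linear_model import Ridge

N_LAYERS = 32  # assumed depth of the open-source LLM being split


def true_reward(split: int) -> float:
    """Expensive evaluation: inference quality minus device compute load."""
    return -((split - 20) ** 2) / 100.0 + np.random.normal(0, 0.05)


def featurize(split: int) -> list:
    return [split, split ** 2, split / N_LAYERS]


# Fit the reward surrogate on a few expensive evaluations...
train_splits = [2, 8, 16, 24, 30]
surrogate = Ridge().fit([featurize(s) for s in train_splits],
                        [true_reward(s) for s in train_splits])

# ...then score every candidate split point cheaply with the surrogate.
candidates = np.arange(1, N_LAYERS)
scores = surrogate.predict([featurize(int(s)) for s in candidates])
print(f"surrogate-selected split point: {int(candidates[np.argmax(scores)])}")
```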