2025-02-15, Volume 11, Issue 1 (2025)

  • Xie Xueshuo, Wang Haolong, Jian Zhaolong, Fang Yaozheng, Wang Zichun, Li Tao

    Smart contracts are widely used on the blockchain to implement complex transactions, such as decentralized applications on Ethereum. Effective vulnerability detection of large-scale smart contracts is critical, as attacks on smart contracts often cause huge economic losses. Since smart contracts are difficult to repair or update once deployed, vulnerabilities must be found before deployment. However, code analysis methods, which require path traversal, and learning methods, which require many features for training, are too time-consuming to detect large-scale on-chain contracts. Unlike code analysis methods such as symbolic execution, learning-based methods obtain detection models from a feature space. However, existing features lack interpretability of the detection results and the training model; worse, a large-scale feature space also degrades detection efficiency. This paper focuses on improving detection efficiency by reducing the dimensionality of the features, combined with expert knowledge. A feature extraction model, Block-gram, is proposed to form low-dimensional knowledge-based features from bytecode. First, the metadata is separated and the runtime code is converted into a sequence of opcodes, which is divided into segments at certain instructions (jumps, etc.). Then, scalable Block-gram features, comprising 4-dimensional block features and 8-dimensional attribute features, are mined for training the learning-based model. Finally, feature contributions are calculated from SHAP values to measure the relationship between the features and the results of the detection model. In addition, six types of vulnerability labels are applied to a dataset of 33,885 contracts, and the knowledge-based features are evaluated using seven state-of-the-art learning algorithms. The results show that average detection latency is reduced by a factor of 25× to 650× compared with features extracted by N-gram, while the interpretability of the detection model is also enhanced.
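The block segmentation step can be sketched as follows. This is an illustrative sketch only: the terminator set and the four features shown are assumptions, not the paper's exact Block-gram definition.

```python
# Illustrative sketch of block-style opcode segmentation: split an opcode
# sequence into segments at control-flow instructions, then derive a small
# feature vector. The terminator set and features are assumptions.

BLOCK_TERMINATORS = {"JUMP", "JUMPI", "STOP", "RETURN", "REVERT"}

def split_blocks(opcodes):
    """Split an opcode sequence into segments ending at control-flow ops."""
    blocks, current = [], []
    for op in opcodes:
        current.append(op)
        if op in BLOCK_TERMINATORS:
            blocks.append(current)
            current = []
    if current:                      # trailing segment without a terminator
        blocks.append(current)
    return blocks

def block_features(opcodes):
    """A hypothetical 4-dimensional block-level feature vector."""
    blocks = split_blocks(opcodes)
    lengths = [len(b) for b in blocks]
    n_jumps = sum(op in ("JUMP", "JUMPI") for op in opcodes)
    return [
        len(blocks),                  # number of segments
        max(lengths),                 # longest segment
        sum(lengths) / len(lengths),  # mean segment length
        n_jumps,                      # control-flow density proxy
    ]

ops = ["PUSH1", "PUSH1", "ADD", "JUMPI", "CALLVALUE", "STOP"]
print(block_features(ops))   # [2, 4, 3.0, 1]
```

The key point is that the feature dimension stays fixed and small regardless of contract size, unlike an N-gram vocabulary that grows with the corpus.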

  • Yu Beiyuan, Liu Yizhong, Ren Shanyao, Zhou Ziyu, Liu Jianwei

    The Metaverse is an emerging concept that builds a virtual environment for users with Virtual Reality (VR) and blockchain technology, but it introduces privacy risks. A series of challenges now arise in Metaverse security, including massive data traffic breaches, large-scale user tracking and analysis activities, unreliable Artificial Intelligence (AI) analysis results, and social engineering attacks on people. In this work, we concentrate on Decentraland and Sandbox, two well-known Metaverse applications in Web 3.0. Our experiments analyze, for the first time, the personal privacy data exposed by Metaverse applications and services from a combined perspective of network traffic and privacy policy. We develop a lightweight traffic processing approach suitable for the Web 3.0 environment that does not rely on complex decryption or reverse engineering techniques.
    We propose a smart contract interaction traffic analysis method capable of retrieving user interactions with Metaverse applications and blockchain smart contracts. This method provides a new approach to de-anonymizing users' identities through Metaverse applications. Our system, METAseen, analyzes and compares network traffic with the privacy policies of Metaverse applications to identify controversial data collection practices. The consistency check experiment reveals that the data types exposed by Metaverse applications include Personally Identifiable Information (PII), device information, and Metaverse-related data. By comparing the data flows observed in the network traffic with the assertions made in the privacy policies of the Metaverse service providers, we discovered that more than 49% of the Metaverse data flows were not appropriately disclosed.
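At its core, a policy consistency check compares the set of data types observed in traffic against those declared in the privacy policy. A minimal sketch, with hypothetical data-type names (not METAseen's actual taxonomy):

```python
def undisclosed_flows(observed, declared):
    """Data types seen in traffic but not covered by the privacy policy."""
    return sorted(set(observed) - set(declared))

# Hypothetical example inputs, for illustration only:
observed_types = ["wallet_address", "device_model", "ip_address", "avatar_id"]
policy_declared = ["device_model", "ip_address"]

leaks = undisclosed_flows(observed_types, policy_declared)
print(leaks)                              # ['avatar_id', 'wallet_address']
print(len(leaks) / len(observed_types))  # 0.5 -> share of undisclosed flows
```

A real pipeline would additionally normalize naming (the policy rarely uses the same labels as the traffic parser), which is where most of the engineering effort lies.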

  • Hu Shijing, Lin Junxiong, Du Xin, Huang Wenbin, Lu Zhihui, Duan Qiang, Wu Jie

    Blockchain technologies have been used to facilitate Web 3.0 and FinTech applications. However, conventional blockchain technologies suffer from long transaction delays and low transaction success rates in some Web 3.0 and FinTech applications such as Supply Chain Finance (SCF). Blockchain sharding has been proposed to improve blockchain performance. However, existing sharding methods either use a static sharding strategy, which lacks adaptability to the dynamic SCF environment, or are designed for public chains and thus are not applicable to consortium blockchain-based SCF. To address these issues, we propose an adaptive consortium blockchain sharding framework named ACSarF, based on deep reinforcement learning. The proposed framework improves consortium blockchain sharding to effectively reduce transaction delay and adaptively adjusts the sharding and blockout strategies to increase the transaction success rate in a dynamic SCF environment. Furthermore, we use a consistent hashing algorithm in the ACSarF framework to ensure transaction load balancing in the adaptive sharding system, further improving the performance of blockchain sharding in dynamic SCF scenarios. To evaluate the proposed framework, we conducted extensive experiments in a typical SCF scenario. The experimental results show that the ACSarF framework achieves a more than 60% improvement in user experience compared to other state-of-the-art blockchain systems.

  • Wang Haibo, Gao Hongwei, Ma Teng, Li Chong, Jing Tao

    Distributed Federated Learning (DFL) technology enables participants to cooperatively train a shared model while preserving the privacy of their local datasets, making it a desirable solution for decentralized and privacy-preserving Web3 scenarios. However, DFL faces incentive and security challenges in the decentralized framework. To address these issues, this paper presents a Hierarchical Blockchain-enabled DFL (HBDFL) system, which provides a generic solution framework for DFL-related applications. The proposed system consists of four major components: a model contribution-based reward mechanism, a Proof of Elapsed Time and Accuracy (PoETA) consensus algorithm, a Distributed Reputation-based Verification Mechanism (DRTM), and an Accuracy-Dependent Throughput Management (ADTM) mechanism. The model contribution-based reward mechanism incentivizes network nodes to train models with their local datasets, while the PoETA consensus algorithm optimizes the tradeoff between shared model accuracy and system throughput. The DRTM improves the consensus efficiency of the system, and the ADTM mechanism guarantees that throughput remains within a predefined range while improving the shared model accuracy. The performance of the proposed HBDFL system is evaluated by numerical simulations, with the results showing that the system improves the accuracy of the shared model while maintaining high throughput and ensuring security.

  • Li Feiyu, Zhou Xian, Gao Yuyuan, Huo Jiahao, Li Rui, Long Keping

    In this paper, a double-effect DNN-based Digital Back-Propagation (DBP) scheme is proposed and studied to achieve Integrated Communication and Sensing (ICS) capability, which not only mitigates nonlinear damage but also monitors the optical power and dispersion profile over multi-span links. The link status information can be extracted from the characteristics of the learned optical fiber parameters without any additional measuring instruments. The efficiency and feasibility of this method have been investigated under different fiber link conditions, including various launch powers, transmission distances, and locations and amounts of abnormal losses. Good monitoring performance is obtained when the launch optical power is 2 dBm, which does not affect the normal operation of the optical communication system, and the DBP step size is 20 km, which provides better distance resolution. This scheme successfully detects the location of single or multiple optical attenuators in long-distance multi-span fiber links, including individual abnormal losses of 2 dB, 4 dB, and 6 dB over 360 km, and several combinations of abnormal losses of (1 dB, 5 dB), (3 dB, 3 dB), and (5 dB, 1 dB) over 360 km and 760 km. Meanwhile, the transfer relationship of the estimated coefficient values under different step sizes is further investigated to reduce the complexity of fiber nonlinear damage compensation. These results provide an attractive approach for precisely sensing the optical fiber link status and making timely, correct decisions to ensure optical communication system operation.

  • Li Ang, Zhang Xueyi, Li Peilin, Kang Bin

    The Fine-grained Image Recognition (FGIR) task is dedicated to distinguishing similar sub-categories that belong to the same super-category, such as bird species and car types. In order to highlight visual differences, existing FGIR works often follow two steps: discriminative sub-region localization and local feature representation. However, these works pay less attention to global context information. They neglect the fact that subtle visual differences in challenging scenarios can be highlighted by exploiting the spatial relationship among different sub-regions from a global viewpoint. Therefore, in this paper, we consider both global and local information for FGIR and propose a collaborative teacher-student strategy to reinforce and unify the two types of information. Our framework is implemented mainly with convolutional neural networks and is referred to as the Teacher-Student Based Attention Convolutional Neural Network (T-S-ACNN). For fine-grained local information, we choose the classic Multi-Attention Network (MA-Net) as our baseline and propose a type of boundary constraint to further reduce background noise in the local attention maps. In this way, the discriminative sub-regions tend to appear in the area occupied by fine-grained objects, leading to more accurate sub-region localization. For fine-grained global information, we design a graph convolution based Global Attention Network (GA-Net), which combines local attention maps extracted from MA-Net with non-local techniques to explore the spatial relationship among sub-regions. Finally, we develop a collaborative teacher-student strategy to adaptively determine the attended roles and optimization modes, so as to enhance the cooperative reinforcement of MA-Net and GA-Net. Extensive experiments on the CUB-200-2011, Stanford Cars, and FGVC Aircraft datasets illustrate the promising performance of our framework.

  • Chen Zhuo, Zhu Bowen, Zhou Chuan

    Container-based virtualization technology has recently seen wider use in edge computing environments due to its advantages of lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks poses great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied in providing services as revenue and the service efficiency and energy consumption as costs, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, the introduction of GCN allows the features of the association relationships between the containers in a CC to be effectively extracted, improving placement quality. The experimental results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.

  • Wang Shumo, Song Xiaoqin, Xu Han, Song Tiecheng, Zhang Guowei, Yang Yang

    With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. In order to resolve the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by vehicle mobility and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), and propose a distributed algorithm, namely PG-MRL, which jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG) algorithm, a deep reinforcement learning method with centralized training and distributed execution, is proposed to optimize the real-time transmission power and subchannel selection. The simulation results show that the proposed PG-MRL algorithm achieves significant improvements over baseline algorithms in terms of system delay.
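The first-stage potential-game step can be illustrated with toy best-response dynamics: each task iteratively switches to the offloading target that minimizes its own delay, and for potential games such dynamics reach a pure Nash equilibrium in finitely many steps. The linear congestion delay model below is an assumption for illustration, not the paper's model:

```python
def best_response_offloading(n_tasks, servers, base_delay, congestion):
    """Toy best-response dynamics for offloading-target selection.

    Assumed delay model: choosing server s costs
    base_delay[s] + congestion[s] * (number of tasks on s, including self).
    """
    choice = [0] * n_tasks                  # everyone starts on server 0
    changed = True
    while changed:                          # loop until no task wants to move
        changed = False
        for i in range(n_tasks):
            load = [sum(1 for j, c in enumerate(choice) if c == s and j != i)
                    for s in range(servers)]
            best = min(range(servers),
                       key=lambda s: base_delay[s] + congestion[s] * (load[s] + 1))
            if best != choice[i]:
                choice[i] = best
                changed = True
    return choice

print(best_response_offloading(4, 2, base_delay=[1.0, 1.5], congestion=[1.0, 0.5]))
# -> [1, 1, 0, 0]: two tasks per server at equilibrium
```

The equilibrium balances load: no vehicle can reduce its delay by unilaterally switching servers, which is exactly the property the potential-game stage exploits before the DDPG stage refines power and subchannel choices.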

  • Zhang Yanbo, Yang Zheng, Cui Jingjing, Lei Xianfu, Wu Yi, Zhang Jun, Fang Chao, Ding Zhiguo

    In this paper, the application of Non-Orthogonal Multiple Access (NOMA) is investigated in a multiple-input single-output network consisting of multiple legitimate users and a potential eavesdropper. To support secure transmissions from legitimate users, two NOMA Secrecy Sum Rate Transmit BeamForming (NOMA-SSR-TBF) schemes are proposed to maximize the SSR of a Base Station (BS) with sufficient or insufficient transmit power. For a BS with sufficient transmit power, an artificial jamming beamforming design scheme is proposed to disrupt potential eavesdropping without impacting legitimate transmissions. For a BS with insufficient transmit power, a modified successive interference cancellation decoding sequence is used to reduce the impact of artificial jamming on legitimate transmissions. More specifically, an iterative successive convex approximation algorithm is provided to jointly optimize the transmit beamforming and artificial jamming vectors. Experimental results demonstrate that the proposed NOMA-SSR-TBF schemes outperform existing works, such as the maximized artificial jamming power scheme, the maximized artificial jamming power scheme with artificial jamming beamforming design, and the maximized secrecy sum rate scheme without artificial jamming beamforming design.

  • Niu Haiwen, Wang Luhan, Du Keliang, Lu Zhaoming, Wen Xiangming, Liu Yu

    The Cybertwin-enabled 6th Generation (6G) network is envisioned to support artificial intelligence-native management to meet the changing demands of 6G applications. Multi-Agent Deep Reinforcement Learning (MADRL) technologies driven by Cybertwins have been proposed for adaptive task offloading strategies. However, related works do not consider the random transmission delay between Cybertwin-driven agents and the underlying networks, which destroys the standard Markov property and increases the decision reaction time, degrading task offloading performance. To address this problem, we propose a pipelined task offloading method to lower the decision reaction time and model it as a delay-aware Markov Decision Process (MDP). We then design a delay-aware MADRL algorithm to minimize the weighted sum of task execution latency and energy consumption. First, the state space is augmented with the last received state and the historical actions to rebuild the Markov property. Second, Gate Transformer-XL is introduced to capture the importance of historical actions and maintain a consistent input dimension under the dynamic changes caused by random transmission delays. Third, a sampling method and a new loss function, based on the difference between the current and target state values and the difference between the real and augmented state-action values, are designed to obtain state transition trajectories close to the real ones. Numerical results demonstrate that the proposed methods are effective in reducing reaction time and improving task offloading performance in random-delay Cybertwin-enabled 6G networks.
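The state-augmentation step can be sketched generically: keep the last received state plus a buffer of the actions taken since it was generated, zero-padded so the agent's input dimension stays constant even though the actual delay varies. This is a standard delay-aware MDP construction, not the paper's exact design:

```python
from collections import deque

class DelayAwareAugmenter:
    """Rebuild an (approximate) Markov state under random observation delay
    by augmenting the last received state with recent actions. Generic
    construction for delay-aware MDPs; dimensions here are illustrative."""

    def __init__(self, max_delay, action_dim):
        self.max_delay = max_delay
        self.action_dim = action_dim
        self.actions = deque(maxlen=max_delay)   # oldest actions drop off

    def augment(self, last_state, new_action):
        self.actions.append(new_action)
        # Zero-pad so the augmented vector keeps a fixed dimension even
        # though the number of buffered actions varies with the delay.
        pad = [[0.0] * self.action_dim] * (self.max_delay - len(self.actions))
        flat = [x for a in list(self.actions) + pad for x in a]
        return list(last_state) + flat

aug = DelayAwareAugmenter(max_delay=3, action_dim=2)
s = aug.augment([0.5, 0.1], [1.0, 0.0])
print(len(s))   # 2 + 3*2 = 8, fixed regardless of how many actions are buffered
```

The fixed dimension is what lets a single network process the augmented state; the paper's Gate Transformer-XL additionally learns which of the buffered actions matter most.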

  • Chen Jiewei, Guo Shaoyong, Shen Tao, Feng Yan, Gao Jian, Qiu Xuesong

    As the information sensing and processing capabilities of IoT devices increase, a large amount of data is being generated at the edge of Industrial IoT (IIoT), which has become a strong foundation for distributed Artificial Intelligence (AI) applications. However, most users are reluctant to disclose their data due to network bandwidth limitations, device energy consumption, and privacy requirements. To address this issue, this paper introduces an Edge-assisted Federated Learning (EFL) framework, along with an incentive mechanism for lightweight industrial data sharing. In order to reduce the information asymmetry between data owners and users, an EFL model-sharing incentive mechanism based on contract theory is designed. In addition, a weight dispersion evaluation scheme based on Wasserstein distance is proposed. This study models an optimization problem of node selection and sharing incentives to maximize the EFL model consumers' profit and ensure the quality of training services. An incentive-based EFL algorithm with individual rationality and incentive compatibility constraints is proposed. Finally, the experimental results verify the effectiveness of the proposed scheme in terms of positive incentives for contract design and performance analysis of EFL systems.
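The Wasserstein-based weight dispersion evaluation can be illustrated for one-dimensional weight samples, where the Wasserstein-1 distance between equal-size samples reduces to the mean absolute difference of their sorted values. The weight values below are made up for illustration:

```python
def wasserstein_1d(u, v):
    """1-D Wasserstein-1 distance between two equal-size samples: with
    sorted samples it is the mean absolute difference of order statistics."""
    assert len(u) == len(v)
    u, v = sorted(u), sorted(v)
    return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

# Compare each client's weight sample against the global model's weights
# (hypothetical values) to score how far a local update has dispersed:
global_w = [0.1, 0.2, 0.3, 0.4]
client_a = [0.12, 0.19, 0.31, 0.41]   # close to global -> small distance
client_b = [0.9, 1.1, 1.0, 1.2]       # diverged weights -> large distance

print(wasserstein_1d(client_a, global_w))   # ≈ 0.0125
print(wasserstein_1d(client_b, global_w))   # ≈ 0.8
```

A dispersion score of this kind gives the incentive mechanism a cheap, model-agnostic signal for ranking contributions without inspecting the raw training data.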

  • Jia Xuedan, Wang Liangmin, Cheng Ke, Jing Pujie, Song Xiangmei

    Electronic auctions (e-auctions) remove the physical limitations of traditional auctions and bring this mechanism to the general public. However, most e-auction schemes involve a trusted auctioneer, which is not always credible in practice. Some studies have applied cryptographic tools to solve this problem by distributing trust, but they ignore the existence of collusion. In this paper, a blockchain-based Privacy-Preserving and Collusion-Resistant scheme (PPCR) for double auctions is proposed, employing both cryptography and blockchain technology; it is the first decentralized and collusion-resistant double auction scheme that guarantees bidder anonymity and bid privacy. A two-server-based auction framework is designed to support off-chain allocation with privacy preservation and on-chain dispute resolution for collusion resistance. A Dispute Resolution protocol (DR) is provided so that the auctioneer can prove that the auction has been conducted correctly and that the result is fair and correct. In addition, a Concise Dispute Resolution protocol (CDR) is designed to handle situations where the number of accused winners is small, significantly reducing the computation cost of dispute resolution. Extensive experimental results confirm that PPCR achieves efficient collusion resistance and verifiability of auction results with low on-chain and off-chain computational overhead.

  • Sun Jun, Guo Mengzhu, Liu Jian

    Highly reliable applications in dense access scenarios have become one of the main goals of 6G environments. To solve the access collision of dense Machine Type Communication (MTC) devices in cell-free communication systems, an intelligent cooperative secure access scheme based on multi-agent reinforcement learning and federated learning, the Preamble Slice Orderly Queue Access (PSOQA) scheme, is proposed. In this scheme, preamble arrangement is combined with access control. The preamble arrangement is realized through preamble slices drawn from a virtual preamble pool. The access devices learn to queue in order via deep reinforcement learning. The orderly queue weakens randomness and avoids collisions. A preamble slice is assigned to an orderly access queue at each access time, and the queue is determined by interaction information among multiple agents. With the federated reinforcement learning framework, the PSOQA scheme guarantees the privacy and security of agents. Finally, the access performance of PSOQA is compared with that of other random contention schemes under different load scenarios. Simulation results show that PSOQA not only improves the access success rate but also guarantees low-latency performance.

  • Zhao Juan, Dai Haibo, Xu Xiaolong, Yan Hao, Zhang Zheng, Li Chunguo

    This paper studies secure access for image remote sensing networks. Each sensor in such a network faces an information security risk when wirelessly uploading image information to the central collection point. To enhance the sensing quality of the remote uploading, the passive reflection surface technique is employed. If an eavesdropper near the sensor keeps accessing the same network, it may receive the same image from the sensor. Our goal is to improve the SNR at the legitimate collection unit while reducing the SNR at the eavesdropper as much as possible, by adaptively adjusting the sensor's uploading power, thereby enhancing the security of the remote sensing images. To achieve this goal, the secure energy efficiency performance is theoretically analyzed with respect to the number of passive reflection elements by evaluating the instantaneous performance over the channel fading coefficients. Based on this theoretical result, secure access is formulated as a mathematical optimization problem with the sensor uploading power as the unknown variable, the objective being energy efficiency maximization subject to a required maximum data rate for the eavesdropper. Finally, an analytical expression is derived for the optimum uploading power. Numerical simulations verify the design approach.

  • Cañete Francisco J., Prasad Gautham, Lampe Lutz

    In Power Line Communications (PLC), regulatory masks restrict the transmit power spectral density for electromagnetic compatibility reasons, which creates coverage issues despite relatively short link distances. Hence, PLC networks often employ repeaters/relays, especially in smart grid neighborhood area networks. Even in broadband indoor PLC systems that already offer notable data rates, relaying may pave the way to new applications, such as serving as a cost-effective backbone for wireless technologies to support the Internet-of-Things paradigm. In this paper, we study Multiple-Input Multiple-Output (MIMO) PLC systems that incorporate in-band full-duplex functionality in relaying networks. We present several MIMO configurations that allow end-to-end half-duplex or full-duplex operation and analyze the achievable performance with state-of-the-art PLC systems. To carry out this analysis, we obtain channel realizations from random network layouts for indoor and outdoor scenarios. We adopt realistic MIMO channel and noise models and consider transmission techniques according to PLC standards. The concepts discussed in this work can be useful in the design of future PLC relay-aided networks for applications that seek coverage extension and/or higher throughput: smart grids with enhanced communications in outdoor scenarios, and “last meter” systems for high-speed connections everywhere in indoor ones.

  • Zhang Wei, Wang Shafei, Pan Ye, Li Qiang, Lin Jingran, Wu Xiaoxiao

    Recently, the Fog-Radio Access Network (F-RAN) has gained considerable attention because of its flexible architecture, which allows rapid response to user requirements. In this paper, computational offloading in F-RAN is considered, where multiple User Equipments (UEs) offload their computational tasks to the F-RAN through fog nodes. Each UE can select one of the fog nodes to offload its task, and each fog node may serve multiple UEs. The tasks are computed by the fog nodes or further offloaded to the cloud via a capacity-limited fronthaul link. In order to compute all UEs' tasks quickly, joint optimization of the UE-Fog association and the radio and computation resources of the F-RAN is proposed to minimize the maximum latency over all UEs. This min-max problem is formulated as a Mixed Integer Nonlinear Program (MINP). To tackle it, the MINP is first reformulated as a continuous optimization problem, and the Majorization Minimization (MM) method is then used to find a solution. The MM approach that we develop is unconventional in that each MM subproblem is solved inexactly with the same provable convergence guarantee as exact MM, thereby reducing the complexity of each MM iteration. In addition, a cooperative offloading model is considered, where the fog nodes compress-and-forward their received signals to the cloud. Under this model, a similar min-max latency optimization problem is formulated and tackled by the inexact MM. Simulation results show that the proposed algorithms outperform existing offloading strategies, and that cooperative offloading exploits transmission diversity better than noncooperative offloading to achieve better latency performance.

  • Bi Jiang, Wang Lidong, Guo Ge

    The Audio and Video (AV) content of an 8K program is four times the size of 4K content, providing viewers with a more ideal audiovisual experience while placing higher demands on the capability and efficiency of file preparation and processing, signal transmission, and scheduling. However, it is difficult to meet the high robustness requirements of 8K broadcast services because the existing broadcast system architecture is limited by efficiency, cost, and other factors. In this study, an 8K Ultra-High-Definition (UHD) TV program broadcast scheme was designed. The verification results show that the scheme is high quality, highly efficient, and robust. In particular, the file-format normalization module was placed in the broadcast area instead of the file preparation area for the first time, and a low-compression transmission scheme based on the all-IP JPEG XS signal was designed for the signal transmission area to improve the efficiency of the scheme. Next, to reduce the impact on the robustness of broadcast services, the broadcast control logic of the core broadcast components was optimized. Finally, a series of 8K TV program broadcasting systems was implemented and its performance verified. The results show that the system meets the efficiency and robustness requirements of a high-quality 8K AV broadcast system, and thus has a high degree of practicability.

  • Zhao Junhui, Hu Fajin, Li Jiahang, Nie Yiwen

    In Heterogeneous Vehicle-to-Everything Networks (HVNs), multiple entities such as vehicles, handheld devices, and infrastructure can communicate with each other to obtain more advanced services. However, the increasing number of entities accessing HVNs presents a huge technical challenge in allocating the limited wireless resources. Traditional model-driven resource allocation approaches are no longer applicable because of the rich data and the interference caused by multiple communication modes reusing resources in HVNs. In this paper, we investigate a wireless resource allocation scheme, including power control and spectrum allocation, based on a resource block reuse strategy. To meet the high capacity demands of cellular users and the high reliability demands of Vehicle-to-Vehicle (V2V) user pairs, we propose a data-driven Multi-Agent Deep Reinforcement Learning (MADRL) resource allocation scheme for the HVN. Simulation results demonstrate that, compared to existing algorithms, the proposed MADRL-based scheme achieves a higher sum capacity and probability of successful V2V transmission, while providing performance close to the theoretical limit.

  • Liu Jihong, Zhou Yiqing, Liu Ling

    Wireless communication-enabled Cooperative Adaptive Cruise Control (CACC) is expected to improve the safety and traffic capacity of vehicle platoons. Existing CACC schemes consider a conventional communication delay with fixed Vehicular Communication Network (VCN) topologies. However, when the network is under attack, the communication delay may be much higher and system stability may not be guaranteed. This paper proposes a novel communication Delay Aware CACC with Dynamic Network Topologies (DADNT). The main idea is that, for various communication delays, the CACC should dynamically adjust the VCN topology to achieve the minimum inter-vehicle spacing, maximizing traffic capacity while guaranteeing stability and minimizing the following error. To this end, a multi-objective optimization problem is formulated, and a 3-step Divide-And-Conquer sub-optimal solution (3DAC) is proposed. Simulation results show that with 3DAC, the proposed DADNT scheme reduces the inter-vehicle spacing by 5%, 10%, and 14%, respectively, compared with traditional CACC with fixed one-vehicle, two-vehicle, and three-vehicle look-ahead network topologies, thereby improving traffic efficiency.

  • Zhang Chiya, Li Xinjie, He Chunlong, Li Xingquan, Lin Dongping

    In this paper, we investigate the application of the Unmanned Aerial Vehicle (UAV)-enabled relaying system in emergency communications, where one UAV is applied as a relay to help transmit information from ground users to a Base Station (BS). We maximize the total transmitted data from the users to the BS, by optimizing the user communication scheduling and association along with the power allocation and the trajectory of the UAV. To solve this non-convex optimization problem, we propose the traditional Convex Optimization (CO) and the Reinforcement Learning (RL)-based approaches. Specifically, we apply the block coordinate descent and successive convex approximation techniques in the CO approach, while applying the soft actor-critic algorithm in the RL approach. The simulation results show that both approaches can solve the proposed optimization problem and obtain good results. Moreover, the RL approach establishes emergency communications more rapidly than the CO approach once the training process has been completed.

  • Yuan Haonan, Fei Shufan, Yan Zheng

    Blockchain technology is increasingly popular and has been widely applied in many industrial fields, due to its unique properties of decentralization, immutability, and traceability. Blockchain systems in different fields vary, with different block structures, consensus mechanisms, and access permission models. These differences make it hard for different blockchain systems to interoperate, which isolates them. Cross-chain technologies have been developed to solve this isolation problem and improve the interoperability of blockchains. Although some surveys on cross-chain technologies exist, they cannot keep up with the latest research progress due to the field's extremely fast pace of development. Moreover, the literature lacks general criteria for evaluating the quality of cross-chain technologies. In this paper, a comprehensive literature review of cross-chain technologies is conducted by employing a comprehensive set of evaluation criteria. The preliminaries on blockchain interoperability are first presented. Then, a set of evaluation criteria is proposed in terms of security, privacy, performance, and functionality. The latest cutting-edge works are reviewed based on the proposed taxonomy of cross-chain technologies, and their performance is evaluated against our proposed criteria. Finally, some open issues and future directions of cross-chain research are pointed out.

  • Li Xiangrong , Zhang Yu , Zhu Haotian , Wang Yubo , Huang Junjia

    The core missions of IoT are to sense data, transmit data, and give feedback to the real world based on the calculation of the sensed data. The trustworthiness of sensing source data and the transmission network is extremely important to IoT security. 5G-IoT, with its low latency, wide connectivity, and high-speed transmission, extends the business scenarios of IoT, yet it also brings new challenges to IoT trust proof solutions. Currently, there is a lack of efficient and reliable trust proof solutions for massive, dynamically connected nodes, and the existing solutions have high computational complexity and cannot adapt to time-sensitive services in 5G-IoT scenarios. To solve these problems, this paper proposes an adaptive multi-dimensional trust proof solution. First, the static and dynamic attributes of sensing nodes are quantified and combined with historical interaction and recommendation information into a comprehensive metric, establishing a multi-dimensional, fine-grained trust metric model. Then, based on the comprehensive metrics, the sensing nodes are logically grouped and assigned service levels to screen out and isolate malicious nodes. At the same time, the proposed solution reduces the energy consumption of the metric process and mitigates the impact of real-time metrics on interaction latency. Simulation experiments show that the solution can accurately and efficiently identify malicious nodes and effectively guarantee the safe and trustworthy operation of 5G-IoT nodes, while having a small impact on the latency of the 5G network.
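The multi-dimensional metric described above combines several normalized trust dimensions and maps the result to a service level. The sketch below shows one plausible shape of such a computation; the weights, dimension names, and level thresholds are assumptions for illustration, not the paper's calibrated model.

```python
# Illustrative sketch of a multi-dimensional trust metric for sensing nodes.
# Weights, dimensions, and thresholds are assumed values, not the paper's.

def trust_score(static, dynamic, history, recommendation,
                weights=(0.2, 0.3, 0.3, 0.2)):
    """Combine four normalized [0, 1] trust dimensions into one metric."""
    dims = (static, dynamic, history, recommendation)
    return sum(w * d for w, d in zip(weights, dims))

def service_level(score, malicious_threshold=0.3):
    """Logically group nodes by score; low-score nodes are isolated."""
    if score < malicious_threshold:
        return "isolated"       # screened out as potentially malicious
    if score < 0.6:
        return "restricted"
    return "trusted"

s = trust_score(static=0.9, dynamic=0.8, history=0.7, recommendation=0.6)
print(s, service_level(s))
```

Because the score aggregates cheap per-dimension values, the metric can be recomputed as feedback arrives without heavy cryptographic work, matching the abstract's emphasis on low overhead for time-sensitive 5G services.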

  • Yang Xu , Xing Hongyan , Ji Xinyuan , Su Xin , Pedrycz Witold

    Thunderstorm detection based on the Atmospheric Electric Field (AEF) has evolved from time-domain models to space-domain models. It is especially important to evaluate and determine the Weather Attribute (WA), which is directly related to detection reliability and authenticity. In this paper, a strategy is proposed to integrate three currently competitive WA evaluation methods. First, a conventional evaluation method based on AEF statistical indicators is selected. The subsequent evaluation approaches are based on competing AEF predicted-value intervals and on fuzzy c-means AEF classification. Different AEF attributes contribute to a more accurate AEF classification to different degrees, so dynamic weighting is applied to these attributes to improve the classification accuracy. Each evaluation method is applied to the WA of a particular AEF to obtain a corresponding evaluation score. The integration in the proposed strategy takes the form of score accumulation, with different cumulative score levels corresponding to different final WA results. Thunderstorm imaging is then performed to visualize thunderstorm activities, using those AEFs already evaluated as exhibiting thunderstorm attributes. Empirical results confirm that the proposed strategy effectively and reliably images thunderstorms, with a 100% accuracy of WA evaluation. This is the first study to design an integrated thunderstorm detection strategy from the new perspective of WA evaluation, which provides promising solutions for more reliable and flexible thunderstorm detection.
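The integration step above is a score accumulation: each of the three evaluation methods emits a score, the scores are summed, and cumulative levels map to a final WA label. The sketch below illustrates that mechanism; the per-method score range and the level cut points are illustrative assumptions, not the paper's calibration.

```python
# Sketch of score-accumulation integration for WA evaluation. Each method
# (statistical indicators, predicted-value intervals, fuzzy c-means
# classification) is assumed to emit a score in [0, 1]; cutoffs are assumed.

def integrate_wa(scores, levels=((2.5, "thunderstorm"), (1.5, "unstable"))):
    """Accumulate per-method scores and map the total to a WA label."""
    total = sum(scores)
    for cutoff, label in levels:   # highest cumulative level wins
        if total >= cutoff:
            return total, label
    return total, "fair"

# One score per evaluation method, in the order listed in the abstract.
total, wa = integrate_wa([0.9, 0.8, 1.0])
print(total, wa)
```

Summing scores rather than voting lets a strong signal from one method compensate for a borderline result from another, which is the point of combining three competitive evaluators.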

  • Huang Mingfeng , Liu Anfeng , N. Xiong Neal , V. Vasilakos Athanasios

    With the unprecedented prevalence of the Industrial Internet of Things (IIoT) and 5G technology, various applications supported by industrial communication systems have generated exponentially increasing processing tasks, which makes task assignment inefficient when workers are insufficient. In this paper, an Intelligent and Trustworthy task assignment method based on Trust and Social relations (ITTS) is proposed for scenarios with many tasks and few workers. Specifically, ITTS first makes initial assignments based on trust and social influences, thereby transforming the platform's complex large-scale industrial task assignment into small-scale task assignment for each worker. Then, an intelligent Q-decision mechanism based on workers' social relations is proposed, which adopts a first-exploration-then-utilization principle to allocate tasks. Only when a worker cannot cope with the assigned tasks does ITTS initiate dynamic worker recruitment, effectively solving the worker shortage problem as well as the cold start issue. More importantly, we consider trust and security issues and evaluate the trust and social circles of workers by accumulating task feedback, providing the platform with a reference for worker recruitment and thereby creating a high-quality worker pool. Finally, extensive simulations demonstrate that ITTS outperforms two benchmark methods, increasing task completion rates by 56.49%-61.53% and profit by 42.34%-47.19%.
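The first-exploration-then-utilization principle above can be sketched as a simple Q-decision loop: a worker first samples each task type to estimate its value, then repeatedly picks the type with the highest Q-value. The task types, rewards, and phase lengths below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of a first-exploration-then-utilization Q-decision. Exploration
# cycles through task types to build Q-value estimates; utilization then
# greedily picks the best type. Rewards and round counts are assumed.

def q_decision(task_types, rewards, rounds=100, explore_rounds=20):
    q = {t: 0.0 for t in task_types}       # Q-value estimate per task type
    counts = {t: 0 for t in task_types}
    choices = []
    for step in range(rounds):
        if step < explore_rounds:
            t = task_types[step % len(task_types)]  # exploration: try each in turn
        else:
            t = max(q, key=q.get)                   # utilization: best known type
        counts[t] += 1
        q[t] += (rewards[t] - q[t]) / counts[t]     # incremental mean of rewards
        choices.append(t)
    return q, choices

q, choices = q_decision(["sensing", "labeling"],
                        {"sensing": 1.0, "labeling": 0.4})
print(max(q, key=q.get))
```

The incremental-mean update keeps only a count and a running estimate per task type, so the decision stays lightweight for each individual worker after the large-scale assignment has been decomposed.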

  • Liu Xin , Zhou Rui , Shimizu Shohei , Chong Rui , Zhou Qingguo , Zhou Xiaokang

    As smart contracts, represented by Solidity, become deeply integrated into the manufacturing industry, blockchain-based Digital Twins (DTs) have gained momentum in recent years. Most of the blockchain infrastructures in widespread use today are based on the Proof-of-Work (PoW) mechanism, and the process of creating blocks is known as "mining". Mining becomes increasingly difficult as the blockchain grows in size and the number of on-chain business systems increases. To lower the threshold of participation in mining, "mining pools" have been created, in which miners cooperate and share the mining rewards according to the hashrate they contribute to the pool. Stratum is the most widely used communication protocol between miners and mining pools, and its security is essential for the participants. In this paper, we propose two novel Man-In-The-Middle (MITM) attack schemes against Stratum, which allow attackers to redirect miners' hashrate to any mining pool using hijacked TCP connections. Compared with existing attacks, our work is more covert, more suitable for real-world environments, and more harmful. The Proof-of-Concept (PoC) shows that our schemes work on most mining software and pools. Furthermore, we present a lightweight AI-driven approach based on protocol-level feature analysis to detect Stratum MITM attacks on blockchain-based DTs. Its detection model consists of three layers: a feature extraction layer, a vectorization layer, and a detection layer. Experiments show that our detection approach can effectively detect Stratum MITM traffic with 98% accuracy. Our work alerts the communities and provides possible mitigation against these more hidden and profitable attack schemes.
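The feature extraction layer above operates on protocol-level characteristics of Stratum sessions. Stratum V1 is line-delimited JSON-RPC over TCP with methods such as `mining.subscribe`, `mining.notify`, and `mining.submit`; the sketch below parses such a session into a small feature vector. The specific features counted here are illustrative assumptions, not the paper's feature set.

```python
import json

# Sketch of a protocol-level feature-extraction layer for Stratum traffic.
# Stratum V1 frames are newline-delimited JSON-RPC messages; the chosen
# features (method frequencies) are assumed for illustration only.

def extract_features(raw_lines):
    """Turn a captured Stratum session into a numeric feature vector."""
    methods = []
    for line in raw_lines:
        try:
            msg = json.loads(line)
        except ValueError:
            continue                      # skip malformed frames
        if "method" in msg:
            methods.append(msg["method"])
    total = len(methods) or 1             # avoid division by zero
    return {
        "n_messages": len(methods),
        "subscribe_ratio": methods.count("mining.subscribe") / total,
        "notify_ratio": methods.count("mining.notify") / total,
        "submit_ratio": methods.count("mining.submit") / total,
    }

session = [
    '{"id":1,"method":"mining.subscribe","params":[]}',
    '{"id":null,"method":"mining.notify","params":["job1"]}',
    '{"id":2,"method":"mining.submit","params":["worker","job1","nonce"]}',
]
print(extract_features(session))
```

A vector like this would then feed the vectorization and detection layers; a hijacked session that silently re-targets shares to another pool would be expected to shift these per-method statistics away from the benign baseline.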