2024-06-01, Volume 10, Issue 3

  • research-article
    Yichi Zhang, Haitao Zhao, Kuo Cao, Li Zhou, Zhe Wang, Yueling Liu, Jibo Wei

    Increasing research has focused on semantic communication, the goal of which is to accurately convey meaning from the sender to the receiver rather than merely transmitting symbols. In this paper, we design a novel encoding and decoding semantic communication framework, which adopts the semantic information and the contextual correlations between items to optimize the performance of a communication system over various channels. On the sender side, the average semantic loss caused by erroneous detection is defined, and a semantic source encoding strategy is developed to minimize this loss. To further improve communication reliability, a decoding strategy that utilizes the semantic and context information to recover messages is proposed at the receiver. Extensive simulation results validate the superior performance of our strategies over state-of-the-art semantic coding and decoding policies on different communication channels.
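
    As a concrete illustration of the encoding idea, the following minimal Python sketch computes an average semantic loss and greedily assigns messages to codewords. It is not the paper's formulation: the confusion matrix P (probability that codeword i is detected as codeword j), the semantic distance D, and the greedy order are all assumptions.

      import numpy as np

      def average_semantic_loss(P, D, assign):
          # Expected semantic loss of an assignment (message -> codeword):
          # sum over message pairs of detection-error probability times
          # the semantic distance between intended and decoded messages.
          n = len(assign)
          return sum(P[assign[a], assign[b]] * D[a, b]
                     for a in range(n) for b in range(n) if a != b)

      def greedy_assignment(P, D):
          # Assign the most semantically "costly" messages first, giving each
          # the free codeword that currently adds the least expected loss.
          n = D.shape[0]
          order = np.argsort(-D.sum(axis=1))
          assign, used = [-1] * n, set()
          for a in order:
              best, best_cost = None, float("inf")
              for c in range(n):
                  if c in used:
                      continue
                  cost = sum(P[c, assign[b]] * D[a, b] + P[assign[b], c] * D[b, a]
                             for b in range(n) if assign[b] != -1)
                  if cost < best_cost:
                      best, best_cost = c, cost
              assign[a] = best
              used.add(best)
          return assign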

  • research-article
    Jiale Wu, Celimuge Wu, Yangfei Lin, Tsutomu Yoshinaga, Lei Zhong, Xianfu Chen, Yusheng Ji

    With the rapid development of artificial intelligence and the widespread use of the Internet of Things, semantic communication, as an emerging communication paradigm, has been attracting great interest. Taking image transmission as an example, from the perspective of semantic communication, not all pixels in an image are equally important to a given receiver. Existing semantic communication systems perform semantic encoding and decoding directly on the whole image, so the region of interest cannot be identified. In this paper, we propose a novel semantic communication system for image transmission that distinguishes between Regions Of Interest (ROI) and Regions Of Non-Interest (RONI) based on semantic segmentation, where a semantic segmentation algorithm classifies each pixel of the image into ROI or RONI. The system also enables high-quality transmission of the ROI at lower communication overhead by transmitting the two regions through separate semantic communication networks with different bandwidth requirements. An improved metric, θPSNR, is proposed to evaluate the transmission accuracy of the novel semantic transmission network. Experimental results show that our proposed system achieves a significant performance improvement over existing semantic communication approaches and the conventional approach without semantics.
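
    Since the abstract does not spell out θPSNR, the sketch below shows one plausible region-weighted PSNR in which ROI errors are weighted by theta and RONI errors by (1 - theta); the exact definition in the paper may differ.

      import numpy as np

      def theta_psnr(original, received, roi_mask, theta=0.8, peak=255.0):
          # Weighted mean squared error: ROI pixels count theta,
          # RONI pixels count (1 - theta).
          err = (original.astype(np.float64) - received.astype(np.float64)) ** 2
          w = np.where(roi_mask, theta, 1.0 - theta)
          wmse = (w * err).sum() / w.sum()
          return 10.0 * np.log10(peak ** 2 / wmse)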

  • research-article
    Yating Liu, Xiaojie Wang, Zhaolong Ning, MengChu Zhou, Lei Guo, Behrouz Jedari

    Semantic Communication (SC) has emerged as a novel communication paradigm that provides a receiver with meaningful information extracted from the source to maximize information transmission throughput in wireless networks, beyond the theoretical capacity limit. Despite the extensive research on SC, there is a lack of a comprehensive survey of its technologies, solutions, applications, and challenges. In this article, the development of SC is first reviewed, and its characteristics, architecture, and advantages are summarized. Next, key technologies such as semantic extraction, semantic encoding, and semantic segmentation are discussed, and their corresponding solutions in terms of efficiency, robustness, adaptability, and reliability are summarized. Applications of SC to UAV communication, remote image sensing and fusion, intelligent transportation, and healthcare are also presented, and their strategies are summarized. Finally, some challenges and future research directions are presented to provide guidance for further research on SC.

  • research-article
    Lu Sun, Xiaona Li, Mingyue Zhang, Liangtian Wan, Yun Lin, Xianpeng Wang, Gang Xu

    The interconnection of all things challenges traditional communication methods, and Semantic Communication and Computing (SCC) will become a new solution. Accurately detecting, extracting, and representing semantic information is a challenging task in the research of SCC-based networks. Previous research usually uses convolution to extract the feature information of a graph and performs the corresponding node classification task. However, the content of semantic information is quite complex. Although graph convolutional neural networks provide an effective solution for node classification tasks, the extracted feature information suffers varying degrees of loss due to their limited ability to represent multiple relational patterns and to recognize and analyze higher-order local structures. Therefore, this paper extends from a single-layer topology network to a multi-layer heterogeneous topology network. Word vectors trained with Bidirectional Encoder Representations from Transformers (BERT) are introduced to extract the semantic features in the network, and the existing graph neural network is improved by incorporating a higher-order local feature module into the network representation model. A multi-layer network embedding algorithm on SCC-based networks with motifs is proposed to complete the task of end-to-end node classification. We verify the effectiveness of the algorithm on a real multi-layer heterogeneous network.
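
    To ground the pipeline, here is a minimal sketch of one graph-convolution step over a single layer of the multi-layer network, assuming node features were precomputed from BERT (e.g., 768-dimensional pooled embeddings of each node's text); the motif-based higher-order module is not reproduced.

      import numpy as np

      def gcn_layer(A, X, W):
          # Symmetrically normalized propagation:
          # ReLU(D^-1/2 (A + I) D^-1/2 X W).
          A_hat = A + np.eye(A.shape[0])
          d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
          return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

      # Toy usage: 5 nodes, 768-dim BERT features, 16 hidden units.
      rng = np.random.default_rng(0)
      A = (rng.random((5, 5)) > 0.6).astype(float)
      A = np.maximum(A, A.T)                 # undirected single-layer slice
      X = rng.normal(size=(5, 768))          # stand-in for BERT embeddings
      H = gcn_layer(A, X, rng.normal(size=(768, 16)) * 0.05)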

  • research-article
    Rong Ma, Zhen Zhang, Yide Ma, Xiping Hu, Edith C.H. Ngai, Victor C.M. Leung

    In recent years, the Internet of Things (IoT) has gradually developed applications such as collecting sensory data and building intelligent services, which has led to an explosion in mobile data traffic. Meanwhile, with the rapid development of artificial intelligence, semantic communication has attracted great attention as a new communication paradigm. For IoT devices, however, efficiently processing image information in real time is essential for the rapid transmission of semantic information. With the increase of model parameters in deep learning methods, the model inference time in sensor devices continues to increase. In contrast, the Pulse Coupled Neural Network (PCNN) has fewer parameters, making it more suitable for real-time scene tasks such as image segmentation, which lays the foundation for real-time, effective, and accurate image transmission. However, the parameters of the PCNN are determined by trial and error, which limits its application. To overcome this limitation, an Improved Pulse Coupled Neural Network (IPCNN) model is proposed in this work. The IPCNN constructs a connection between the static properties of the input image and the dynamic properties of the neurons, and all its parameters are set adaptively, which avoids the inconvenience of manual setting in traditional methods and improves the adaptability of parameters to different types of images. Experimental segmentation results demonstrate the validity and efficiency of the proposed self-adaptive parameter setting method of IPCNN on gray and natural images from the Matlab and Berkeley Segmentation Datasets. The IPCNN method achieves a better segmentation result without training, providing a new solution for the real-time transmission of image semantic information.
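
    For reference, the sketch below implements a simplified classic PCNN iteration with manually chosen constants; the IPCNN's contribution is precisely to set these parameters adaptively from image statistics, a rule not reproduced here.

      import numpy as np
      from scipy.signal import convolve2d

      def pcnn_segment(S, iters=10, beta=0.2, aL=1.0, aT=0.5, VL=1.0, VT=20.0):
          S = S.astype(np.float64) / S.max()     # feeding input (intensity)
          K = np.array([[0.5, 1.0, 0.5],
                        [1.0, 0.0, 1.0],
                        [0.5, 1.0, 0.5]])        # linking kernel
          L = np.zeros_like(S); Y = np.zeros_like(S)
          theta = np.full_like(S, VT)            # dynamic firing threshold
          fired = np.zeros_like(S)
          for n in range(1, iters + 1):
              L = np.exp(-aL) * L + VL * convolve2d(Y, K, mode="same")
              U = S * (1.0 + beta * L)           # modulate feeding by linking
              Y = (U > theta).astype(np.float64)
              theta = np.exp(-aT) * theta + VT * Y
              fired[(Y > 0) & (fired == 0)] = n  # record first firing time
          return fired                           # firing map ~ segmentation

      seg = pcnn_segment(np.random.default_rng(0).random((32, 32)))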

  • research-article
    Yueling Liu, Shengteng Jiang, Yichi Zhang, Kuo Cao, Li Zhou, Boon-Chong Seet, Haitao Zhao, Jibo Wei

    Context information is significant for semantic extraction and recovery of messages in semantic communication. However, context information is not fully utilized in the existing semantic communication systems since relationships between sentences are often ignored. In this paper, we propose an Extended Context-based Semantic Communication (ECSC) system for text transmission, in which context information within and between sentences is explored for semantic representation and recovery. At the encoder, self-attention and segment-level relative attention are used to extract context information within and between sentences, respectively. In addition, a gate mechanism is adopted at the encoder to incorporate the context information from different ranges. At the decoder, Transformer-XL is introduced to obtain more semantic information from the historical communication processes for semantic recovery. Simulation results show the effectiveness of our proposed model in improving the semantic accuracy between transmitted and recovered messages under various channel conditions.
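
    As a small illustration of the gate mechanism, the sketch below fuses intra-sentence context (from self-attention) with inter-sentence context (from segment-level relative attention) through a learned sigmoid gate; the dimensions and the exact fusion form are assumptions, not the paper's layer.

      import torch
      import torch.nn as nn

      class ContextGate(nn.Module):
          def __init__(self, d_model):
              super().__init__()
              self.proj = nn.Linear(2 * d_model, d_model)

          def forward(self, intra, inter):       # (batch, seq, d_model) each
              g = torch.sigmoid(self.proj(torch.cat([intra, inter], dim=-1)))
              return g * intra + (1.0 - g) * inter  # element-wise convex mix

      gate = ContextGate(64)
      out = gate(torch.randn(2, 10, 64), torch.randn(2, 10, 64))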

  • research-article
    Yongfeng Tao, Minqiang Yang, Yushan Wu, Kevin Lee, Adrienne Kline, Bin Hu

    With the rapid growth of information transmission via the Internet, efforts have been made to reduce network load and promote efficiency. One such application is semantic computing, which can extract and process semantic information. Social media has enabled users to share their current emotions, opinions, and life events through their mobile devices. Notably, people suffering from mental health problems are more willing to share their feelings on social networks. Therefore, it is necessary to extract semantic information from social media (vlog data) to identify abnormal emotional states and facilitate early identification and intervention. Most studies do not consider spatio-temporal information when fusing multimodal information to identify abnormal emotional states such as depression. To solve this problem, this paper proposes a spatio-temporal squeeze transformer method for extracting the semantic features of depression. First, a module with spatio-temporal data is embedded into the transformer encoder, which is utilized to obtain a representation of spatio-temporal features. Second, a classifier with a voting mechanism is designed to encourage the model to classify depression and non-depression effectively. Experiments are conducted on the D-Vlog dataset. The results show that the method is effective, with an accuracy of 70.70%. This work provides a scaffold for future work on affect recognition in semantic communication based on social media vlog data.
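
    The voting readout can be as simple as the following majority vote over per-segment predictions; how the paper actually weights or thresholds the votes is an assumption here.

      import numpy as np

      def vote_depression(segment_probs, threshold=0.5):
          # segment_probs: per-segment probabilities of "depression".
          votes = (np.asarray(segment_probs) > threshold).astype(int)
          return int(votes.sum() * 2 > len(votes))  # 1 = depression

      print(vote_depression([0.8, 0.3, 0.6, 0.7]))  # -> 1 (3 of 4 vote yes)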

  • research-article
    Hui Xia, Ning Huang, Xuecai Feng, Rui Zhang, Chao Liu

    The cloud platform has limited defense resources to fully protect the edge servers used to process crowd sensing data in the Internet of Things. To guarantee the network's overall security, we present a network defense resource allocation method based on multi-armed bandits to maximize the network's overall benefit. First, we propose a method for dynamically setting node defense resource thresholds to obtain the benefit functions and distributions of the defender (edge servers) and the attacker (nodes). Second, we design a defense resource sharing mechanism for neighboring nodes to obtain the defense capability of nodes. Subsequently, we use the decomposability and Lipschitz continuity of the defender's total expected utility to reduce the difference between the utility's discrete and continuous arms and analyze this difference theoretically. Finally, experimental results show that the method maximizes the defender's total expected utility and reduces the difference between the discrete and continuous arms of the utility.
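
    For intuition, the sketch below runs a generic UCB1 bandit that repeatedly chooses which node to reinforce and updates its estimate of the defender's benefit; the paper's thresholds, resource sharing, and Lipschitz-continuous arms are not modeled.

      import numpy as np

      def ucb_defense_allocation(reward_fn, num_nodes, rounds, c=2.0):
          counts = np.zeros(num_nodes)
          means = np.zeros(num_nodes)
          for t in range(1, rounds + 1):
              if t <= num_nodes:
                  arm = t - 1                   # play each arm once first
              else:
                  ucb = means + np.sqrt(c * np.log(t) / counts)
                  arm = int(np.argmax(ucb))
              r = reward_fn(arm)                # defender's observed benefit
              counts[arm] += 1
              means[arm] += (r - means[arm]) / counts[arm]
          return means, counts

      # Toy usage: node 2 yields the highest average defense benefit.
      rng = np.random.default_rng(1)
      means, counts = ucb_defense_allocation(
          lambda a: rng.normal([0.2, 0.4, 0.9][a], 0.1), 3, 500)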

  • research-article
    Hongruo Zhang, Yifei Zou, Haofei Yin, Dongxiao Yu, Xiuzhen Cheng

    The past decades have witnessed a wide application of federated learning in crowd sensing to handle the numerous data collected by sensors and provide users with precise and customized services. Meanwhile, how to protect the private information of users in federated learning has become an important research topic. Compared with the differential privacy (DP) technique and the secure multiparty computation (SMC) strategy, the covert communication mechanism in federated learning is more efficient and energy-saving in training machine learning models. In this paper, we study the covert communication problem for federated learning in crowd sensing Internet-of-Things networks. Different from previous works on covert communication in federated learning, most of which consider a centralized framework and are experiment-based, we first propose a centralized covert communication mechanism for federated learning among n learning agents, whose time complexity is O(log n), approximating the optimal solution. Second, for federated learning without a parameter server, which is a harder case, we show that solving such a problem is NP-hard and prove the existence of a distributed covert communication mechanism with O(log log Δ log n) time, approximating the optimal solution, where Δ is the maximum distance between any pair of learning agents. Theoretical analysis and numerical simulations are presented to show the performance of our covert communication mechanisms. We hope that our covert communication work can shed some light on how to protect the privacy of federated learning in crowd sensing from the view of communications.

  • research-article
    Xin Li, Xinghua Lei, Xiuwen Liu, Hang Xiao

    The recent proliferation of Fifth-Generation (5G) and Sixth-Generation (6G) networks has given rise to Vehicular Crowd Sensing (VCS) systems, which resolve parking collisions by effectively incentivizing vehicle participation. However, instead of being an isolated module, the incentive mechanism usually interacts with other modules. We capture this synergy and propose Collision-free Parking Recommendation (CPR), a novel VCS system framework that integrates an incentive mechanism, a non-cooperative VCS game, and a multi-agent reinforcement learning algorithm to derive an optimal parking strategy in real time. Specifically, we utilize an LSTM method to coarsely predict parking areas in support of accurate recommendations. The incentive mechanism is designed to motivate vehicle participation by considering dynamically priced parking tasks and social network effects. To cope with stochastic parking collisions, the non-cooperative VCS game further analyzes the uncertain interactions between vehicles in parking decision-making. The multi-agent reinforcement learning algorithm then models the VCS campaign as a multi-agent Markov decision process that not only derives the optimal collision-free parking strategy for each vehicle independently, but also proves that the optimal parking strategy for each vehicle is Pareto-optimal. Finally, numerical results demonstrate that CPR can accomplish parking tasks with 99.7% accuracy compared with other baselines, efficiently recommending parking spaces.
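
    As an illustration of the prediction step, here is a minimal sketch of an LSTM that maps a history of per-area occupancy ratios to the next step's occupancy; the sizes and single-layer design are assumptions, not the paper's network.

      import torch
      import torch.nn as nn

      class OccupancyLSTM(nn.Module):
          def __init__(self, num_areas, hidden=32):
              super().__init__()
              self.lstm = nn.LSTM(num_areas, hidden, batch_first=True)
              self.head = nn.Linear(hidden, num_areas)

          def forward(self, x):                  # x: (batch, time, num_areas)
              out, _ = self.lstm(x)
              return torch.sigmoid(self.head(out[:, -1]))  # next-step ratios

      model = OccupancyLSTM(num_areas=8)
      pred = model(torch.rand(4, 24, 8))         # 24 past steps -> next step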

  • research-article
    Guanlin Wu, Haipeng Wang, Yu Liu, You He

    With the rapid growth of maritime Internet of Things (IoT) devices for Maritime Monitor Services (MMS), maritime traffic controllers cannot handle the massive amount of data in time. For unmanned MMS, one of the key technologies is situation understanding. However, the presence of slow-fast highly maneuvering targets and track breakages due to radar blind zones make modeling the dynamics of marine multi-agents difficult, and pose significant challenges to maritime situation understanding. To comprehend the situation accurately and thus offer unmanned MMS, it is crucial to model the complex dynamics of multi-agents using IoT big data. Nevertheless, previous methods typically rely on complex assumptions, are plagued by unstructured data, and disregard the interactions between multiple agents and the spatial-temporal correlations. A deep learning model, the Graph Spatial-Temporal Generative Adversarial Network (GraphSTGAN), is proposed in this paper, which uses a graph neural network to model unstructured data and uses STGAN to learn the spatial-temporal dependencies and interactions. Extensive experiments show the effectiveness and robustness of the proposed method.

  • research-article
    Zhaowei Liu, Dong Yang, Shenqiang Wang, Hang Su

    With the rapid advancement of 5G technology, the Internet of Things (IoT) has entered a new phase of applications and is rapidly becoming a significant force in promoting economic development. Due to the vast amounts of data created by numerous 5G IoT devices, the Ethereum platform has become a tool for the storage and sharing of IoT device data, thanks to its open and tamper-resistant characteristics. Ethereum account security is therefore necessary for the IoT to grow quickly and improve people's lives. By modeling Ethereum transaction records as a transaction network, account types can be identified well by an Ethereum account classification system based on Graph Neural Networks (GNNs). This work first investigates the Ethereum transaction network. Surprisingly, experimental metrics reveal that the Ethereum transaction network is neither optimal nor even satisfactory in accurately representing transactions per account. This flaw may significantly impede the classification capability of GNNs, which is mostly governed by node attributes. To address this difficulty, this work proposes an Adaptive Multi-channel Bayesian Graph Attention Network (AMBGAT) for Ethereum account classification. AMBGAT uses attention to enhance node features, estimates a graph topology that conforms to the ground truth, and efficiently extracts node features pertinent to downstream tasks. An extensive experiment with actual Ethereum transaction data demonstrates that AMBGAT obtains competitive performance in the classification of Ethereum accounts while accurately estimating the graph topology.

  • research-article
    Xu Li, Gwanggil Jeon, Wenshuo Wang, Jindong Zhao

    The maturity of 5G technology has enabled crowd-sensing services to collect multimedia data over wireless networks, which has promoted the application of crowd-sensing services in different fields, but it also brings more privacy and security challenges, the most common of which is privacy leakage. As a privacy protection technology combining data integrity checking and identity anonymity, the ring signature is widely used in the field of privacy protection. However, introducing signature technology leads to additional signature verification overhead. In crowd-sensing scenarios, existing signature schemes have low efficiency in multi-signature verification. Therefore, it is necessary to design an efficient multi-signature verification scheme while ensuring security. In this paper, a batch-verifiable signature scheme is proposed for the crowd-sensing setting, which supports the sensing platform in verifying multiple uploaded signatures efficiently, so as to overcome the defects of traditional signature schemes in multi-signature verification. In our proposal, a method for linking homologous data is presented, which is valuable for incentive mechanisms and data analysis. Simulation results show that the proposed scheme performs well in terms of security and efficiency in crowd-sensing applications with a large number of users and data.

  • research-article
    Yaguang Lin, Xiaoming Wang, Liang Wang, Pengfei Wan

    As an ingenious convergence between the Internet of Things and social networks, the Social Internet of Things (SIoT) can provide effective and intelligent information services and has become one of the main platforms for people to spread and share information. Nevertheless, SIoT is characterized by high openness and autonomy: multiple kinds of information can spread rapidly, freely, and cooperatively in SIoT, which makes it challenging to accurately reveal the characteristics of the information diffusion process and effectively control its diffusion. To this end, with the aim of exploring multi-information cooperative diffusion processes in SIoT, we first develop a dynamics model for multi-information cooperative diffusion based on system dynamics theory. Subsequently, the characteristics and laws of the dynamical evolution process of multi-information cooperative diffusion are theoretically investigated, and the diffusion trend is predicted. On this basis, to further control the multi-information cooperative diffusion process efficiently, we propose two control strategies for information diffusion with control objectives, develop an optimal control system for the multi-information cooperative diffusion process, and propose the corresponding optimal control method. The optimal solution distribution of the control strategy satisfying the control system constraints and the control budget constraints is solved using optimal control theory. Finally, extensive simulation experiments based on a real dataset from Twitter validate the correctness and effectiveness of the proposed model, strategies, and method.
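
    In the spirit of system-dynamics modeling, the sketch below integrates a toy two-information cooperative diffusion model where spreaders of the first information boost adoption of the second; the state variables and rate forms are assumptions, not the paper's model.

      import numpy as np
      from scipy.integrate import odeint

      def diffusion(y, t, b1, b2, coop, r1, r2):
          S, I1, I2 = y                       # susceptible, spreaders 1 and 2
          new1 = b1 * S * I1                  # adoption of information 1
          new2 = (b2 + coop * I1) * S * I2    # cooperation boosts information 2
          return [-(new1 + new2), new1 - r1 * I1, new2 - r2 * I2]

      t = np.linspace(0, 50, 500)
      traj = odeint(diffusion, [0.98, 0.01, 0.01], t,
                    args=(0.4, 0.25, 0.5, 0.05, 0.05))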

  • research-article
    Lizong Zhang, Yiming Wang, Ke Yan, Yi Su, Nawaf Alharbe, Shuxin Feng

    With the adoption of cutting-edge communication technologies such as 5G/6G systems and the extensive development of devices, crowdsensing systems in the Internet of Things (IoT) are now conducting complicated video analysis tasks such as behaviour recognition. These applications have dramatically increased the diversity of IoT systems. Specifically, behaviour recognition in videos usually requires a combinatorial analysis of the spatial information about objects and information about their dynamic actions in the temporal dimension. Behaviour recognition may even rely more on the modeling of temporal information containing short-range and long-range motions, in contrast to computer vision tasks involving images that focus on understanding spatial information. However, current solutions fail to jointly and comprehensively analyse short-range motions between adjacent frames and long-range temporal aggregations at large scales in videos. In this paper, we propose a novel behaviour recognition method based on the integration of multigranular (IMG) motion features, which can provide support for deploying video analysis in multimedia IoT crowdsensing systems. In particular, we achieve reliable motion information modeling by integrating a channel attention-based short-term motion feature enhancement module (CSEM) and a cascaded long-term motion feature integration module (CLIM). We evaluate our model on several action recognition benchmarks, such as HMDB51, Something-Something and UCF101. The experimental results demonstrate that our approach outperforms the previous state-of-the-art methods, which confirms its effectiveness and efficiency.
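
    To illustrate the short-term branch, the sketch below applies squeeze-and-excitation-style channel attention to a frame-difference feature, loosely mirroring the CSEM idea; layer sizes and the difference operation are assumptions.

      import torch
      import torch.nn as nn

      class ShortTermMotionAttention(nn.Module):
          def __init__(self, channels, reduction=4):
              super().__init__()
              self.fc = nn.Sequential(
                  nn.Linear(channels, channels // reduction), nn.ReLU(),
                  nn.Linear(channels // reduction, channels), nn.Sigmoid())

          def forward(self, feat_t, feat_t1):    # (batch, C, H, W) per frame
              motion = feat_t1 - feat_t          # short-range motion cue
              w = self.fc(motion.mean(dim=(2, 3)))  # squeeze: global pooling
              return feat_t + motion * w[:, :, None, None]  # excite and fuse

      m = ShortTermMotionAttention(16)
      out = m(torch.randn(2, 16, 8, 8), torch.randn(2, 16, 8, 8))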

  • research-article
    Ahmad Azab, Mahmoud Khasawneh, Saed Alrabaee, Kim-Kwang Raymond Choo, Maysa Sarsour

    In network traffic classification, it is important to understand the correlation between network traffic and its causal application, protocol, or service group, for example, to facilitate lawful interception, ensure quality of service, prevent application choke points, and identify malicious behavior. In this paper, we review existing network classification techniques, such as port-based identification and those based on deep packet inspection, statistical features in conjunction with machine learning, and deep learning algorithms. We also explain the implementations, advantages, and limitations associated with these techniques. Our review also extends to the publicly available datasets used in the literature. Finally, we discuss existing and emerging challenges, as well as future research directions.

  • research-article
    Mian Guo, Mithun Mukherjee, Jaime Lloret, Lei Li, Quansheng Guan, Fei Ji

    The growing development of the Internet of Things (IoT) is accelerating the emergence and growth of new IoT services and applications, which will result in massive amounts of data being generated, transmitted, and processed in wireless communication networks. Mobile Edge Computing (MEC) is a desired paradigm to timely process the data from the IoT for value maximization. In MEC, a number of computing-capable devices are deployed at the network edge near data sources to support edge computing, such that the long network transmission delay of the cloud computing paradigm can be avoided. Since an edge device might not always have sufficient resources to process the massive amount of data, computation offloading is significantly important, considering the cooperation among edge devices. However, the dynamic traffic characteristics and heterogeneous computing capabilities of edge devices challenge the offloading. In addition, different scheduling schemes might provide different computation delays to the offloaded tasks. Thus, offloading in mobile nodes and scheduling in the MEC server are coupled in determining service delay. This paper seeks to guarantee low delay for computation-intensive applications by jointly optimizing the offloading and scheduling in such an MEC system. We propose a Delay-Greedy Computation Offloading (DGCO) algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices. A Reinforcement Learning-based Parallel Scheduling (RLPS) algorithm is further designed to schedule offloaded tasks in the multi-core MEC server. With an offloading delay broadcast mechanism, DGCO and RLPS cooperate to achieve the goal of delay-guarantee-ratio maximization. Finally, simulation results show that our proposal can bound the end-to-end delay of various tasks. Even under slightly heavy task load, the delay-guarantee-ratio given by DGCO-RLPS can still approximate 95%, while that given by the benchmark algorithms drops to an intolerable value. These results demonstrate the effectiveness of DGCO-RLPS for delay guarantees in MEC.
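
    The greedy offloading rule can be pictured as follows: a new task goes to the device with the smallest estimated completion delay (transmission + queueing + computation). The cost fields below are assumptions for illustration, not DGCO's exact bookkeeping.

      def greedy_offload(task_cycles, task_bits, devices):
          def est_delay(d):
              tx = task_bits / d["rate_bps"]             # transmission delay
              wait = d["queued_cycles"] / d["cpu_hz"]    # queueing delay
              comp = task_cycles / d["cpu_hz"]           # computation delay
              return tx + wait + comp
          best = min(devices, key=est_delay)
          delay = est_delay(best)
          best["queued_cycles"] += task_cycles           # commit the decision
          return best["name"], delay

      devices = [
          {"name": "local", "cpu_hz": 1e9, "rate_bps": float("inf"),
           "queued_cycles": 5e8},
          {"name": "edge-1", "cpu_hz": 4e9, "rate_bps": 2e7,
           "queued_cycles": 1e9},
      ]
      print(greedy_offload(2e8, 1e6, devices))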

  • research-article
    Shengyu Zhang, Xiaoqian Li, Kwan Lawrence Yeung

    Low-Earth Orbit Satellite Constellations (LEO-SCs) provide global, high-speed, low-latency Internet access services, bridging the digital divide in remote areas. As inter-satellite links are not supported in initial deployments (e.g., the early Starlink), the communication between satellites relies on ground stations with radio frequency signals. Due to the rapid movement of satellites, this hybrid topology of LEO-SCs and ground stations is time-varying, which imposes a major challenge on uninterrupted service provisioning and network management. In this paper, we focus on solving two notable problems in such a ground station-assisted LEO-SC topology, i.e., traffic engineering and fast reroute, to guarantee that packets are forwarded in a balanced and uninterrupted manner. Specifically, we employ segment routing to support arbitrary-path routing in LEO-SCs. To solve the traffic engineering problem, we propose two source-routing algorithms with traffic splitting, Delay-Bounded Traffic Splitting (DBTS) and DBTS+, where DBTS splits a flow equally and DBTS+ favors shorter paths. Simulation results show that DBTS+ can achieve about 30% lower maximum satellite load at the cost of about 10% more delay. To guarantee fast recovery from failures, two fast reroute mechanisms, Loop-Free Alternate (LFA) and LFA+, are studied, where LFA pre-computes an alternate next-hop as a backup while LFA+ finds a 2-segment backup path. We show that LFA+ can increase the percentage of protection coverage by about 15%.
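
    The plain LFA condition is standard (RFC 5286): a neighbor N of source S is a loop-free backup toward destination D if dist(N, D) < dist(N, S) + dist(S, D), so N never routes the packet back through S. The sketch below checks it with networkx; the paper's 2-segment LFA+ extension is not reproduced.

      import networkx as nx

      def lfa_backups(G, S, D, weight="weight"):
          dist = dict(nx.all_pairs_dijkstra_path_length(G, weight=weight))
          primary = nx.shortest_path(G, S, D, weight=weight)[1]
          return [N for N in G.neighbors(S)
                  if N != primary and dist[N][D] < dist[N][S] + dist[S][D]]

      G = nx.Graph()
      G.add_weighted_edges_from([("S", "A", 1), ("S", "B", 1),
                                 ("A", "D", 1), ("B", "D", 3), ("A", "B", 1)])
      print(lfa_backups(G, "S", "D"))   # ['B']: B reaches D without S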

  • research-article
    Muralitharan Krishnan, Yongdo Lim, Seethalakshmi Perumal, Gayathri Palanisamy

    Existing web-based security applications have failed in many situations due to the great intelligence of attackers. Among web application attacks, Cross-Site Scripting (XSS) is one of the most dangerous, in which an organization's or user's information is modified. To address these security challenges, this article proposes a novel, all-encompassing combination of machine learning (NB, SVM, k-NN) and deep learning (RNN, CNN, LSTM) frameworks for detecting and defending against XSS attacks with high accuracy and efficiency. Based on this representation, a novel idea for merging stacking ensembles with web applications, termed “hybrid stacking”, is proposed. To implement the aforementioned methods, four distinct datasets, each of which contains both safe and unsafe content, are considered. The hybrid detection method can adaptively identify attacks from the URL, and the defense mechanism inherits the advantages of URL encoding with dictionary-based mapping to improve prediction accuracy, accelerate the training process, and effectively remove unsafe JScript/JavaScript keywords from the URL. The simulation results show that the proposed hybrid model is more efficient than existing detection methods. It produces more than 99.5% accurate XSS attack classification results (accuracy, precision, recall, F1-score, and Receiver Operating Characteristic (ROC)) and is highly resistant to XSS attacks. To ensure the security of the server's information, the proposed hybrid approach is demonstrated in a real-time environment.
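
    A minimal version of the stacking idea, using only the classical learners named above (NB, SVM, k-NN) with scikit-learn; the paper's hybrid stacking also folds in the deep models and URL-encoding features, which are omitted here.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import StackingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC

      stack = StackingClassifier(
          estimators=[("nb", GaussianNB()),
                      ("svm", SVC(probability=True)),
                      ("knn", KNeighborsClassifier())],
          final_estimator=LogisticRegression(),
          stack_method="predict_proba")

      # Toy usage: X stands in for numeric URL features, y for safe/unsafe.
      X, y = make_classification(n_samples=200, n_features=10, random_state=0)
      stack.fit(X, y)
      print(stack.score(X, y))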

  • research-article
    Yu Zhang, Bei Gong, Qian Wang

    The popularity of the Internet of Things (IoT) has enabled a large number of vulnerable devices to connect to the Internet, bringing huge security risks. As a network-level security authentication method, device fingerprinting based on machine learning has attracted considerable attention because it can detect vulnerable devices in complex and heterogeneous access phases. However, flexible and diversified IoT devices with limited resources increase the difficulty of executing device fingerprint authentication in the IoT, because the model network must be retrained to deal with incremental features or device types. To address this problem, a device fingerprinting mechanism based on a Broad Learning System (BLS) is proposed in this paper. The mechanism first characterizes IoT devices by traffic analysis, based on the identifiable differences in the traffic data of IoT devices, and extracts feature parameters of the traffic packets. A hierarchical hybrid sampling method is designed in the preprocessing phase to improve the imbalanced data distribution and reconstruct the fingerprint dataset. The complexity of the dataset is reduced using Principal Component Analysis (PCA), and the device type is identified by training weights using BLS. The experimental results show that the proposed method can achieve state-of-the-art accuracy and spend less training time than other existing methods.
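
    For orientation, here is a minimal Broad Learning System: random feature nodes, random enhancement nodes, and a closed-form ridge-regression readout. Layer widths and nonlinearities are assumptions, and X is assumed to be the PCA-reduced fingerprint features.

      import numpy as np

      def bls_train(X, Y, n_feat=40, n_enh=60, lam=1e-3, seed=0):
          rng = np.random.default_rng(seed)
          Wf = rng.normal(size=(X.shape[1], n_feat))
          Z = np.tanh(X @ Wf)                    # feature nodes
          We = rng.normal(size=(n_feat, n_enh))
          H = np.tanh(Z @ We)                    # enhancement nodes
          A = np.hstack([Z, H])
          W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
          return Wf, We, W                       # Y: one-hot device types

      def bls_predict(model, X):
          Wf, We, W = model
          Z = np.tanh(X @ Wf)
          return np.hstack([Z, np.tanh(Z @ We)]) @ W   # class scores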

  • research-article
    Zhimin Zhang, Huansheng Ning, Fadi Farha, Jianguo Ding, Kim-Kwang Raymond Choo

    Effective user authentication is key to ensuring equipment security, data privacy, and personalized services in Internet of Things (IoT) systems. However, conventional mode-based authentication methods (e.g., passwords and smart cards) may be vulnerable to a broad range of attacks (e.g., eavesdropping and side-channel attacks). Hence, there have been attempts to design biometric-based authentication solutions, which rely on physiological and behavioral characteristics. Behavioral characteristics need continuous monitoring and specific environmental settings, which can be challenging to implement in practice. However, we can also leverage Artificial Intelligence (AI) in the extraction and classification of physiological characteristics from IoT device data to facilitate authentication. Thus, we review the literature on the use of AI in physiological characteristics recognition published after 2015. We use the three-layer architecture of the IoT (i.e., sensing layer, feature layer, and algorithm layer) to guide the discussion of existing approaches and their limitations. We also identify a number of future research opportunities, which will hopefully guide the design of next-generation solutions.

  • research-article
    Qiao Tian, Sicheng Zhang, Shiwen Mao, Yun Lin

    As modern communication technology advances apace, digital communication signal identification plays an important role in cognitive radio networks and in communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, which has become a consensus in academia and industry. However, because of the data dependence and inexplicability of AI models and the openness of electromagnetic space, the physical-layer digital communication signal identification model is threatened by adversarial attacks. Adversarial examples pose a common threat to AI models, where well-designed, slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signal identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on the end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration verification system is developed to show that the adversarial attack is a real threat to the digital communication signal identification model, which should receive more attention in future research.
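
    The fast gradient sign method (FGSM) is a canonical way to craft such slight perturbations; whether the paper uses FGSM or another attack is not stated in this abstract, so the sketch below is purely illustrative, with an assumed toy model over I/Q inputs.

      import torch
      import torch.nn.functional as F

      def fgsm(model, x, y, eps=0.01):
          # Perturb the input one epsilon-step along the loss gradient sign.
          x = x.clone().detach().requires_grad_(True)
          F.cross_entropy(model(x), y).backward()
          return (x + eps * x.grad.sign()).detach()

      # Toy model: flattened 2x128 I/Q signal -> 11 modulation classes.
      model = torch.nn.Sequential(torch.nn.Flatten(),
                                  torch.nn.Linear(2 * 128, 11))
      x, y = torch.randn(4, 2, 128), torch.randint(0, 11, (4,))
      x_adv = fgsm(model, x, y)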

  • research-article
    Silvana Trindade, Luiz F. Bittencourt, Nelson L.S. da Fonseca

    Federated learning has been explored as a promising solution for training machine learning models at the network edge without sharing private user data. With limited resources at the edge, new solutions must be developed to leverage the software and hardware resources, as existing solutions have not focused on resource management at the network edge, especially for federated learning. In this paper, we describe recent work on resource management at the edge and explore the challenges and future directions for enabling the execution of federated learning at the edge. Problems such as resource discovery, deployment, load balancing, migration, and energy efficiency are discussed in the paper.

  • research-article
    Sabuzima Nayak, Ripon Patgiri, Lilapati Waikhom, Arif Ahmed

    Edge technology aims to bring cloud resources (specifically, computation, storage, and networking) into close proximity to the edge devices, i.e., the smart devices where the data are produced and consumed. Embedding computation and applications in edge devices leads to two emerging concepts in edge technology: edge computing and edge analytics. Edge analytics uses techniques or algorithms to analyze the data generated by the edge devices. With the emergence of edge analytics, the edge devices have become a complete set. Currently, however, edge analytics is unable to provide full support to the analytic techniques: edge devices cannot execute advanced and sophisticated analytic algorithms owing to various constraints such as limited power supply, small memory size, and limited resources. This article aims to provide a detailed discussion of edge analytics. The key contributions of the paper are as follows: a clear explanation distinguishing the three concepts of edge technology, namely edge devices, edge computing, and edge analytics, along with their issues. In addition, the article discusses the implementation of edge analytics to solve many problems and applications in various areas such as retail, agriculture, industry, and healthcare. Moreover, the state-of-the-art research papers on edge analytics are rigorously reviewed in this article to explore the existing issues, emerging challenges, research opportunities and directions, and applications.

  • research-article
    Jie Chen, Jianhua Tang

    The Wireless Sensor Network (WSN) is a cornerstone of the Internet of Things (IoT) and has rich application scenarios. In this work, we consider a heterogeneous WSN whose sensor nodes have a diversity in their Residual Energy (RE). To protect the sensor nodes with low RE, we investigate dynamic working modes for sensor nodes, which are determined by their RE and an introduced energy threshold. Besides, we employ an Unmanned Aerial Vehicle (UAV) to collect the stored data from the heterogeneous WSN. We aim to jointly optimize the cluster head selection, the energy threshold, and the sensor nodes' working modes to minimize the weighted sum of energy consumption from the WSN and the UAV, subject to a data collection rate constraint. To this end, we propose an efficient search method to find an optimal energy threshold, and develop a penalty-based successive convex approximation algorithm to select the cluster heads. We then present a low-complexity iterative approach to solve the joint optimization problem and discuss the implementation procedure. Numerical results justify that our proposed approach significantly reduces the energy consumption of the sensor nodes with low RE and also saves energy for the whole WSN.
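
    The outer threshold search can be pictured as a simple grid search: nodes with RE below the threshold switch to a protected low-power mode, and the threshold minimizing a weighted WSN+UAV energy cost is kept. The cost model here is a stand-in, not the paper's optimization problem.

      import numpy as np

      def wsn_uav_cost(threshold, res_energy, w_wsn=1.0, w_uav=0.5):
          active = res_energy >= threshold
          e_wsn = 0.10 * res_energy[active].sum() \
                  + 0.01 * res_energy[~active].sum()   # low-power mode
          e_uav = 1.0 + 0.2 * (~active).sum()  # UAV covers more idle nodes
          return w_wsn * e_wsn + w_uav * e_uav

      def search_threshold(res_energy, grid):
          costs = [wsn_uav_cost(t, res_energy) for t in grid]
          return grid[int(np.argmin(costs))]

      res_energy = np.random.default_rng(2).uniform(0.1, 1.0, size=50)
      best_t = search_threshold(res_energy, np.linspace(0.0, 1.0, 21))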