Infrared small-target detection has important applications in many fields owing to the high penetration capability and long detection distance of infrared imaging. This study introduces a detector called "YOLO-SDLUWD", based on the YOLOv7 network, for small-target detection in complex infrared backgrounds. "SDLUWD" refers to the combination of a Spatial-Depth layer followed by a Convolutional layer (SD-Conv), a Linear Up-sampling fusion Path Aggregation Feature Pyramid Network (LU-PAFPN), and a training strategy based on the normalized Gaussian Wasserstein Distance loss (WD-loss). YOLO-SDLUWD aims to counter the drop in detection accuracy caused when the maximum pooling downsampling layer in the backbone network loses important feature information, to support the interaction and fusion of high-dimensional and low-dimensional feature information, and to suppress the false-alarm predictions induced by noise in small-target images. The detector achieves a mAP@0.5 of 90.4% and a mAP@0.5:0.95 of 48.5% on IRIS-AG, an increase of 9%-11% over YOLOv7-tiny, outperforming other state-of-the-art target detectors in both accuracy and speed.
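The WD-loss above builds on the normalized Gaussian Wasserstein distance proposed for tiny-object detection, in which each bounding box is modeled as a 2-D Gaussian and the closed-form 2-Wasserstein distance between the Gaussians is exponentially normalized. The sketch below is a minimal illustration of that published formulation, not this paper's implementation; the normalizing constant c is dataset-dependent, and the value used here is an assumption.

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Gaussian Wasserstein Distance between two boxes.

    Boxes are (cx, cy, w, h); each is modeled as a 2-D Gaussian
    N((cx, cy), diag((w/2)^2, (h/2)^2)), for which the squared
    2-Wasserstein distance has the closed form below.
    """
    w2_sq = ((box_a[0] - box_b[0]) ** 2 + (box_a[1] - box_b[1]) ** 2
             + (box_a[2] / 2 - box_b[2] / 2) ** 2
             + (box_a[3] / 2 - box_b[3] / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)  # in (0, 1]; 1 means identical boxes

# Usable as a loss term in place of an IoU-based one: loss = 1 - NWD.
loss = 1.0 - nwd((10, 10, 4, 4), (11, 10, 4, 4))
```

Unlike IoU, this distance stays smooth and informative even when small boxes barely overlap, which is why it suits tiny infrared targets.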
The Intelligent Internet of Things (IIoT) involves real-world things that communicate or interact with each other through networking technologies by collecting data from these "things" and using intelligent approaches, such as Artificial Intelligence (AI) and machine learning, to make accurate decisions. Data science is the science of dealing with data and their relationships through intelligent approaches. Most state-of-the-art research focuses on either data science or the IIoT independently, rather than exploring their integration. To address this gap, this article provides a comprehensive survey of the advances in, and the integration of, data science and IIoT systems, classifying the existing IoT-based data science techniques and summarizing their various characteristics. The paper analyzes data science and big data security and privacy features, including network architecture, data protection, and continuous monitoring of data, which face challenges in various IoT-based systems. Extensive insights into IoT data security, privacy, and challenges are visualized in the context of data science for the IoT. In addition, this study reveals current opportunities to enhance data science and IoT market development. The gaps and challenges faced in integrating data science and the IoT are comprehensively presented, followed by a future outlook and possible solutions.
Changes in the Atmospheric Electric Field Signal (AEFS) are highly correlated with weather changes, especially with thunderstorm activities. However, little attention has been paid to the ambiguous weather information implicit in AEFS changes. In this paper, a Fuzzy C-Means (FCM) clustering method is used for the first time to develop an innovative approach to characterizing the weather attributes carried by the AEFS. First, a time-series dataset is created in the time domain using AEFS attributes. The AEFS-based weather is evaluated according to the time-series Membership Degree (MD) changes obtained by feeding this dataset into the FCM. Second, thunderstorm intensities are reflected by the change in distance from a thunderstorm cloud point charge to an AEF apparatus; thus, a matching relationship is established between the normalized distance and the dominant thunderstorm MD in the space domain. Finally, the rationality and reliability of the proposed method are verified by combining radar charts and expert experience. The results confirm that this method accurately characterizes the weather attributes and changes in the AEFS, and a negative distance-MD correlation is obtained for the first time. Detecting thunderstorm activity from the AEF from the perspective of fuzzy set technology provides meaningful guidance for interpretable thunderstorm detection.
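The abstract does not give the clustering details; below is a minimal NumPy sketch of the standard FCM update that produces the time-series membership degrees described above. The cluster count, fuzzifier m, and iteration budget are illustrative assumptions.

```python
import numpy as np

def fcm(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means: returns (centers, U), where U[i, j] is the
    Membership Degree of sample i in cluster j (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        w = U ** m                                    # fuzzified weights
        centers = (w.T @ X) / w.sum(axis=0)[:, None]  # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))              # inverse-distance update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# For AEFS, X would hold windowed signal attributes over time, and each
# row of U traces how strongly a time step belongs to a weather cluster.
```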
The cleanliness of seed cotton plays a critical role in the pre-treatment of cotton textiles, and the removal of impurities introduced during harvesting directly determines the quality and market value of cotton textiles. By fusing band combination optimization with deep learning, this study aims to achieve more efficient and accurate detection of film impurities in seed cotton on the production line. Applying hyperspectral imaging and a one-dimensional deep learning algorithm, we detect and classify impurities in seed cotton after harvest. The main categories detected include pure cotton, conveyor belt, film covering seed cotton, and film adhered to the conveyor belt. The proposed method achieves an impurity detection rate of 99.698%. To further establish the feasibility and practical application potential of this strategy, we compare our results against existing mainstream methods. In addition, the model shows excellent recognition performance on pseudo-color images of real samples. With a measured processing time of 11.764 μs per pixel, it satisfies the speed requirements of real production lines while maintaining accuracy. This strategy provides an accurate and efficient method for removing impurities during cotton processing.
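The abstract names a one-dimensional deep learning algorithm but no architecture; the following PyTorch sketch shows the general shape of a per-pixel 1-D CNN classifier over a spectrum, with the four output classes named above. The band count (224) and all layer sizes are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class SpectralCNN1D(nn.Module):
    """Per-pixel classifier over a hyperspectral reflectance vector.

    Input: (batch, 1, n_bands) spectra; output: logits for the four
    categories in the abstract (pure cotton, conveyor belt, film on
    seed cotton, film adhered to the belt)."""
    def __init__(self, n_bands=224, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.head = nn.Linear(32 * 8, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

logits = SpectralCNN1D()(torch.randn(2, 1, 224))  # shape (2, 4)
```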
Remote driving, an emergent technology enabling the remote operation of vehicles, faces a significant challenge in transmitting large volumes of image data to a central server, a requirement that outpaces the capacity of traditional communication methods. To tackle this, we propose a novel framework using semantic communications, through a region-of-interest semantic segmentation method, to reduce communication costs by transmitting meaningful semantic information rather than bit-wise data. To resolve the knowledge base inconsistencies inherent in semantic communications, we introduce a blockchain-based edge-assisted system for managing diverse and geographically varied semantic segmentation knowledge bases. This system not only ensures data security through the tamper-resistant nature of blockchain but also leverages edge computing for efficient management. Additionally, blockchain sharding handles differentiated knowledge bases for various tasks, boosting overall blockchain efficiency. Experimental results show a significant reduction in latency from sharding and an increase in model accuracy, confirming our framework's effectiveness.
With the evolution of next-generation communication networks, ensuring robust Core Network (CN) architecture and data security has become paramount. This paper addresses critical vulnerabilities in CN architecture and data security by proposing a novel framework based on blockchain technology that is specifically designed for communication networks. Traditional centralized network architectures are vulnerable to Distributed Denial of Service (DDoS) attacks, particularly in roaming scenarios, where there is also a risk of private data leakage, which imposes significant operational demands. To address these issues, we introduce the Blockchain-Enhanced Core Network Architecture (BECNA) and the Secure Decentralized Identity Authentication Scheme (SDIDAS). The BECNA utilizes blockchain technology to decentralize data storage, enhancing network security, stability, and reliability by mitigating Single Points of Failure (SPoF). The SDIDAS utilizes Decentralized Identity (DID) technology to secure user identity data and streamline authentication in roaming scenarios, significantly reducing the risk of data breaches during cross-network transmissions. Our framework employs Ethereum, free5GC, Wireshark, and UERANSIM tools to create a robust, tamper-evident system model. A comprehensive security analysis confirms substantial improvements in user privacy and network security. Simulation results indicate that our approach enhances the security and reliability of communication CNs while also ensuring data security.
In this paper, we present a Deep Neural Network (DNN) based framework that employs Radio Frequency (RF) hologram tensors to locate multiple Ultra-High Frequency (UHF) passive Radio-Frequency Identification (RFID) tags. The RF hologram tensor exhibits a strong relationship between observation and spatial location, helping to improve the robustness to dynamic environments and equipment. Since RFID data is often marred by noise, we implement two types of deep neural network architectures to clean up the RF hologram tensor. Leveraging the spatial relationship between tags, the deep networks effectively mitigate fake peaks in the hologram tensors resulting from multipath propagation and phase wrapping. In contrast to fingerprinting-based localization systems that use deep networks as classifiers, our deep networks in the proposed framework treat the localization task as a regression problem preserving the ambiguity between fingerprints. We also present an intuitive peak finding algorithm to obtain estimated locations using the sanitized hologram tensors. The proposed framework is implemented using commodity RFID devices, and its superior performance is validated through extensive experiments.
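The paper's "intuitive peak finding algorithm" is not specified in the abstract; the following is a minimal local-maximum sketch over a 2-D slice of a sanitized hologram tensor, with the detection threshold as an assumed parameter.

```python
import numpy as np

def find_peaks_2d(h, threshold=0.5):
    """Return (row, col) indices of local maxima above threshold in a
    2-D heatmap h, comparing each cell with its 3x3 neighborhood."""
    peaks = []
    for i in range(1, h.shape[0] - 1):
        for j in range(1, h.shape[1] - 1):
            patch = h[i - 1:i + 2, j - 1:j + 2]
            if h[i, j] >= threshold and h[i, j] == patch.max():
                peaks.append((i, j))
    return peaks

# Grid indices map back to physical tag coordinates through the
# hologram's spatial discretization (assumed here to be a uniform grid).
```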
With the rapid development of digital communication and the widespread use of the Internet of Things, multi-view image compression has attracted increasing attention as a fundamental technology for image data communication. Multi-view image compression aims to improve compression efficiency by leveraging correlations between images. However, the requirement of synchronization and inter-image communication at the encoder side poses significant challenges, especially for constrained devices. In this study, we introduce a novel distributed image compression model based on the attention mechanism to address the challenges associated with the availability of side information only during decoding. Our model integrates an encoder network, a quantization module, and a decoder network, to ensure both high compression performance and high-quality image reconstruction. The encoder uses a deep Convolutional Neural Network (CNN) to extract high-level features from the input image, which then pass through the quantization module for further compression before undergoing lossless entropy coding. The decoder of our model consists of three main components that allow us to fully exploit the information within and between images on the decoder side. Specifically, we first introduce a channel-spatial attention module to capture and refine information within individual image feature maps. Second, we employ a semi-coupled convolution module to extract both shared and specific information in images. Finally, a cross-attention module is employed to fuse mutual information extracted from side information. The effectiveness of our model is validated on various datasets, including KITTI Stereo and Cityscapes. The results highlight the superior compression capabilities of our method, surpassing state-of-the-art techniques.
In this paper, a Sparse Graph Neural Network-aided (SGNN-aided) decoder is proposed to improve the decoding performance of polar codes under bursty interference. First, a sparse factor graph is constructed using the encoding characteristics to achieve high-throughput polar decoding. To further improve the decoding performance, a residual gated bipartite graph neural network is designed to update the embedding vectors of heterogeneous nodes based on a bidirectional message passing neural network. This framework exploits gated recurrent units and residual blocks to address the vanishing gradient problem in deep graph recurrent neural networks. Finally, predictions are generated by feeding the embedding vectors into a readout module. Simulation results show that the proposed decoder is more robust than existing ones in the presence of bursty interference and generalizes well.
As surface garbage pollution becomes more serious, it is necessary to improve the efficiency of garbage inspection and collection beyond traditional manual methods. Owing to their light weight and aerial field of view, Unmanned Aerial Vehicles (UAVs) can survey an entire water surface in a short time, while Unmanned Surface Vessels (USVs) can provide battery replacement and pick up the garbage. In this paper, we establish a novel system framework for UAV-USV collaboration and develop an automatic water-cleaning strategy. First, on the basis of the partition principle, we propose a collaborative coverage path algorithm based on UAV off-site takeoff and landing to achieve global inspection. Second, we design a task scheduling and assignment algorithm that balances the garbage loads among USVs based on the particle swarm optimization algorithm, as sketched below. Finally, based on a swarm intelligence algorithm, we also design an autonomous obstacle-avoidance path planning algorithm for USVs to realize autonomous navigation and collaborative cleaning. The system can simultaneously perform inspection and clearance tasks under certain constraints. The simulation results show that the proposed algorithms offer greater generality and flexibility while effectively improving computational efficiency and reducing actual cleaning costs compared with other schemes.
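As a rough illustration of the load-balancing step (the paper's exact PSO encoding and fitness function are not given in the abstract), the sketch below balances per-USV garbage loads with a textbook PSO over a rounded continuous assignment; the inertia and acceleration coefficients are conventional defaults, not the paper's values.

```python
import numpy as np

def pso_balance(loads, n_usvs=3, n_particles=30, n_iter=200, seed=0):
    """Assign garbage piles (given workloads) to USVs, minimizing the
    spread of per-USV total load, via basic particle swarm optimization
    over a continuous encoding rounded to USV indices when evaluated."""
    rng = np.random.default_rng(seed)
    n = len(loads)
    x = rng.uniform(0, n_usvs, (n_particles, n))    # particle positions
    v = np.zeros_like(x)                            # particle velocities

    def cost(pos):
        assign = np.clip(pos.astype(int), 0, n_usvs - 1)
        totals = np.bincount(assign, weights=loads, minlength=n_usvs)
        return totals.max() - totals.min()          # load imbalance

    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()                # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, 0, n_usvs - 1e-9)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return np.clip(g.astype(int), 0, n_usvs - 1)

assignment = pso_balance([4, 2, 7, 1, 5, 3], n_usvs=2)  # pile -> USV index
```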
For better flexibility and greater coverage areas, Unmanned Aerial Vehicles (UAVs) have been applied in Flying Mobile Edge Computing (F-MEC) systems to offer offloading services for the User Equipment (UEs). This paper considers a disaster-affected scenario where UAVs undertake the role of MEC servers to provide computing resources for Disaster Relief Devices (DRDs). Considering the fairness of DRDs, a max-min problem is formulated to optimize the saved time by jointly designing the trajectory of the UAVs, the offloading policy and serving time under the constraint of the UAVs' energy capacity. To solve the above non-convex problem, we first model the service process as a Markov Decision Process (MDP) with the Reward Shaping (RS) technique, and then propose a Deep Reinforcement Learning (DRL) based algorithm to find the optimal solution for the MDP. Simulations show that the proposed RS-DRL algorithm is valid and effective, and has better performance than the baseline algorithms.
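The abstract names a Reward Shaping technique without details; one common, policy-preserving choice is potential-based shaping (Ng et al., 1999), sketched below with a purely illustrative potential function.

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99, done=False):
    """Potential-based reward shaping: r' = r + gamma * phi(s') - phi(s).

    This form provably preserves the optimal policy of the underlying
    MDP; the paper's exact shaping term is not given in the abstract."""
    bonus = (0.0 if done else gamma * phi(s_next)) - phi(s)
    return r + bonus

# Illustrative potential: negative remaining workload of the DRDs, so
# progress toward finishing computation yields a positive shaping bonus.
phi = lambda s: -s["remaining_workload"]
r_shaped = shaped_reward(1.0, {"remaining_workload": 5.0},
                         {"remaining_workload": 3.0}, phi)
```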
Dual-function communication-radar systems use common Radio Frequency (RF) signals for both communication and detection. For better compatibility with existing communication systems, we adopt Multiple-Input Multiple-Output (MIMO) Orthogonal Frequency Division Multiplexing (OFDM) signals as the integrated signals and investigate their estimation performance. First, we analyze the Cramer-Rao Lower Bound (CRLB) of parameter estimation. Then, the transmit powers over different subcarriers are optimized to achieve the best tradeoff between the transmission rate and the estimation performance. Finally, we propose a more accurate estimation method that uses the Canonical Polyadic Decomposition (CPD) of a third-order tensor to obtain the parameter matrices. Owing to the column structure of the parameter matrices, only DFT/IDFT operations are needed to recover the parameters of multiple targets. The simulation results show that the tensor-based estimation method can achieve performance close to the CRLB, and the estimation performance can be further improved by optimizing the transmit powers.
With the rapid development of advanced networking and computing technologies such as the Internet of Things, network function virtualization, and 5G infrastructure, new development opportunities are emerging for Maritime Meteorological Sensor Networks (MMSNs). However, the increasing number of intelligent devices joining the MMSN poses a growing threat to network security. Current Artificial Intelligence (AI) intrusion detection techniques turn intrusion detection into a classification problem, at which AI excels. These techniques assume sufficient high-quality instances for model construction, an assumption rarely satisfied in real-world operation, where attack instances are limited and their characteristics constantly evolve. This paper proposes an Adaptive Personalized Federated learning (APFed) framework that allows multiple MMSN owners to engage in collaborative training. By employing an adaptive personalized update and a shared global classifier, the adverse effects of imbalanced, Non-Independent and Identically Distributed (Non-IID) data are mitigated, giving the intrusion detection model personalized capabilities and good global generalization. In addition, a lightweight intrusion detection model is proposed to detect various attacks while adapting effectively to the MMSN environment. Finally, extensive experiments on a classical network dataset show that the attack classification accuracy is improved by about 5% compared to most baselines in global scenarios.
Traffic encryption techniques enable cyberattackers to hide their presence and activities, and traffic classification is an important method for preventing network threats. However, due to the tremendous traffic volume and limited computing resources, most existing traffic classification techniques are inapplicable to high-speed network environments. In this paper, we propose a High-speed Encrypted Traffic Classification (HETC) method comprising two stages. First, to efficiently detect whether traffic is encrypted, HETC focuses on randomly sampled short flows and extracts aggregation entropies together with chi-square test features to measure the differing byte composition and distribution patterns of encrypted and unencrypted flows. Second, HETC adds binary features on top of the previous ones and performs fine-grained traffic classification by combining these payload features with a Random Forest model. The experimental results show that HETC achieves a 94% F-measure in detecting encrypted flows and an 85%-93% F-measure in classifying fine-grained flows on a 1-KB flow-length dataset, outperforming the state-of-the-art comparison methods. Meanwhile, HETC does not need to wait for the end of a flow, and its features can be extracted at scale. The average time for HETC to process each flow is only 2 or 16 ms, which is lower than the flow duration in most cases, making it a good candidate for high-speed traffic classification.
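For concreteness, here is a Python sketch of the two byte-distribution statistics the first stage relies on: the entropy of a payload's byte histogram approaches 8 bits per byte for ciphertext-like data, while the chi-square statistic against a uniform byte distribution stays small. The aggregation across sampled short flows and the decision thresholds are the paper's contribution and are not reproduced here.

```python
import math
from collections import Counter

def payload_features(payload: bytes):
    """Byte-level randomness features for the encrypted-vs-plaintext test.

    Returns (entropy, chi_square): the Shannon entropy of the byte
    histogram (approaches 8 bits/byte for ciphertext-like data) and the
    chi-square statistic against a uniform byte distribution (small for
    encrypted payloads, large for structured plaintext)."""
    if not payload:
        return 0.0, 0.0
    counts = Counter(payload)
    n = len(payload)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    expected = n / 256
    chi_square = sum((counts.get(b, 0) - expected) ** 2 / expected
                     for b in range(256))
    return entropy, chi_square
```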
Recently, wireless security has been highlighted as one of the most important techniques for 6G mobile communication systems. Many researchers have tried to improve Physical-Layer Security (PLS) performance metrics such as the Secrecy Outage Probability (SOP) and the Secrecy Energy-Efficiency (SEE). The SOP is the probability that the data transmission between legitimate devices fails to guarantee a certain reliability level, and the SEE is defined as the ratio between the achievable secrecy rate and the consumed transmit power. In this paper, we consider a Multi-User Multi-Input Single-Output (MU-MISO) downlink cellular network where a legitimate Base Station (BS) equipped with multiple transmit antennas sends secure information to multiple legitimate Mobile Stations (MSs), and multiple potential eavesdroppers (EVEs), each equipped with a single receive antenna, try to eavesdrop on this information. Each potential EVE tries to intercept the secure information, i.e., the private message, from the legitimate BS to the legitimate MSs with a certain eavesdropping probability. To securely receive the private information, each legitimate MS feeds back its effective channel gain to the legitimate BS only when the effective channel gain exceeds a certain threshold, i.e., the legitimate MSs adopt an Opportunistic Feedback (OF) strategy. In such eavesdropping channels, both the SOP and the SEE are analyzed as performance measures of PLS, and their closed-form expressions are derived mathematically. Based on the analytical results, it is shown that the SOP of the OF strategy approaches that of a Full Feedback (FF) strategy as the number of legitimate MSs or the number of antennas at the BS increases. Furthermore, the trade-off between the SOP and the SEE as a function of the channel feedback threshold in the OF strategy is investigated. The analytical results and related observations are verified by numerical simulations.
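In the notation standard in the PLS literature (generic symbols, not necessarily the paper's), the two metrics defined above can be written as:

```latex
C_s = \bigl[\log_2(1+\gamma_{\mathrm{MS}}) - \log_2(1+\gamma_{\mathrm{EVE}})\bigr]^{+},
\qquad
\mathrm{SOP} = \Pr\{C_s < R_s\},
\qquad
\mathrm{SEE} = \frac{R_{\mathrm{sec}}}{P_{\mathrm{tx}}}
```

where C_s is the secrecy capacity, γ_MS and γ_EVE are the SINRs at the legitimate MS and the eavesdropper, R_s is the target secrecy rate, R_sec is the achievable secrecy rate, and P_tx is the consumed transmit power.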
The wide application of smart contracts allows industry companies to implement complex distributed collaborative businesses that involve the calculation of complex functions, such as matrix operations. However, complex functions such as matrix operations are difficult to implement on Ethereum Virtual Machine (EVM)-based smart contract platforms because of the limitations of their distributed security environment. Existing off-chain methods often significantly reduce contract execution efficiency, so implementing such functions through a platform Software Development Kit (SDK) interface has become a feasible way to reduce overhead; however, this approach cannot verify the correctness of the operations and may leak sensitive user data. To solve these problems, we propose a verifiable EVM-based smart contract cross-language implementation scheme for complex operations, especially matrix operations, which guarantees operation correctness and user privacy while ensuring computational efficiency. In this scheme, a verifiable interaction process is designed to verify the computation process and results, and a matrix blinding technique is introduced to protect sensitive user data during the calculation. The security analysis and performance tests show that the proposed scheme satisfies the correctness and privacy requirements of the cross-language implementation of smart contracts at a small additional efficiency cost.
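To make the blinding idea concrete, the sketch below hides the operands of an outsourced matrix product with random invertible diagonal matrices and checks the returned result with a Freivalds probabilistic test. This is a generic form of matrix blinding plus verification, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def blind_multiply(A, B, compute):
    """Outsource C = A @ B while hiding A and B from the worker.

    Blinds with random invertible diagonal matrices D1, D2, D3, so the
    worker only sees D1·A·D2 and D2^{-1}·B·D3, then unblinds and applies
    a Freivalds check: C·r == A·(B·r) for a random 0/1 vector r."""
    n, k, m = A.shape[0], A.shape[1], B.shape[1]
    d1, d2, d3 = (rng.uniform(1, 2, s) for s in (n, k, m))
    A_blind = d1[:, None] * A * d2[None, :]        # D1 A D2
    B_blind = (1 / d2)[:, None] * B * d3[None, :]  # D2^{-1} B D3
    C_blind = compute(A_blind, B_blind)            # untrusted worker
    C = (1 / d1)[:, None] * C_blind * (1 / d3)[None, :]
    r = rng.integers(0, 2, (m, 1)).astype(float)   # Freivalds check
    assert np.allclose(C @ r, A @ (B @ r)), "worker result rejected"
    return C

C = blind_multiply(rng.random((3, 4)), rng.random((4, 5)),
                   compute=lambda X, Y: X @ Y)
```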
Federated learning combined with fog computing transforms data sharing into model sharing, which resolves the issues of data isolation and privacy disclosure in fog computing. However, existing studies focus on centralized single-layer aggregation federated learning architectures, which lack consideration of cross-domain scenarios and the asynchronous robustness of federated learning, and rarely integrate verification mechanisms from the perspective of incentives. To address these challenges, we propose a Blockchain and Signcryption enabled Asynchronous Federated Learning (BSAFL) framework based on dual aggregation for cross-domain scenarios. In particular, we first design two types of signcryption schemes to secure the interaction and access control of collaborative learning between domains. Second, we construct a differential privacy approach that adaptively adjusts privacy budgets to ensure data privacy and the availability of intra-domain users' local models. Furthermore, we propose an asynchronous aggregation solution that incorporates consensus verification and elastic participation using blockchain. Finally, security analysis demonstrates the security and privacy effectiveness of BSAFL, and evaluation on real datasets further validates its high model accuracy and performance.
With the popularity of the Internet of Vehicles (IoV), a large amount of data is generated every day. How to securely share data between the IoV operator and various value-added service providers has become a critical issue. With its flexible and efficient fine-grained access control, Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is well suited for data sharing in the IoV. However, most existing CP-ABE schemes suffer from flaws such as attribute privacy leakage and key misuse. This paper proposes a Traceable and Revocable CP-ABE-based Data Sharing scheme with a Partially hidden policy for the IoV (TRE-DSP). A partially hidden access structure is adopted to hide sensitive user attribute values, and attribute categories are sent along with the ciphertext to effectively avoid privacy exposure. In addition, key tracking and malicious user revocation are introduced with broadcast encryption to prevent key misuse. Since the main computation task is outsourced to the cloud, the burden on the user side is relatively low. Analysis of security and performance demonstrates that TRE-DSP is more secure and practical for data sharing in the IoV.
The increase in user mobility and density in modern cellular networks increases the risk of overloading certain base stations in popular locations such as shopping malls or stadiums, which can result in connection loss for some users. To combat this, the traffic load of base stations should be kept as balanced as possible. In this paper, we propose an efficient load balancing-aware handover algorithm for highly dynamic beyond 5G heterogeneous networks by assigning mobile users to base stations with lighter loads when a handover is performed. The proposed algorithm is evaluated in a scenario with users having different levels of mobility, such as pedestrians and vehicles, and is shown to outperform the conventional handover mechanism, as well as another algorithm from the literature. As a secondary benefit, the overall energy consumption in the network is shown to be reduced with the proposed algorithm.
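A minimal sketch of a load-aware handover target selection rule follows; the scoring function, the alpha weight, and the RSRP normalization range are illustrative assumptions, not the paper's actual criterion.

```python
def select_target_bs(candidates, alpha=0.5):
    """Pick a handover target by trading off signal quality and load.

    candidates: dicts with 'rsrp_dbm' (signal strength) and 'load'
    (fraction of used resources in [0, 1]). Higher score wins."""
    def score(bs):
        signal = (bs["rsrp_dbm"] + 140) / 97  # normalize assumed -140..-43 dBm
        return alpha * signal + (1 - alpha) * (1 - bs["load"])
    return max(candidates, key=score)

# A lightly loaded cell can win even with a weaker signal:
best = select_target_bs([
    {"id": "BS1", "rsrp_dbm": -80, "load": 0.9},
    {"id": "BS2", "rsrp_dbm": -95, "load": 0.2},
])  # -> BS2
```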
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) enables fine-grained access control over ciphertexts, making it a promising approach for managing data stored in the cloud-enabled Internet of Things. However, existing schemes often suffer from privacy breaches because access policies are attached explicitly or hide only part of the critical attribute content. Additionally, resource-constrained IoT devices, especially those using wireless communication, often cannot afford the cost of decryption. In this paper, we propose an efficient and fine-grained access control scheme with fully hidden policies (named FHAC). FHAC conceals all attributes in the policy and utilizes bloom filters to efficiently locate them. A test phase before decryption helps authorized users find matches between their attributes and the access policy, while dictionary attacks are thwarted by providing unauthorized users with invalid values. The heavy computational overhead of the test phase and most of the decryption phase is outsourced to two cloud servers. Additionally, users can verify the correctness of multiple outsourced decryption results simultaneously. Security analysis and performance comparisons demonstrate FHAC's effectiveness in protecting policy privacy and achieving efficient decryption.
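The bloom filter step can be illustrated directly; the sketch below tests whether a hashed attribute may appear in the hidden policy without revealing the policy itself. The filter size, hash count, and attribute strings are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for testing whether a (hashed) attribute
    appears in a hidden access policy without disclosing the policy."""
    def __init__(self, n_bits=1024, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item: str):
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.n_bits

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def may_contain(self, item: str) -> bool:  # false positives possible
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add("dept:cardiology")  # attribute name is purely illustrative
assert bf.may_contain("dept:cardiology")
```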
The lack of facial features caused by wearing masks degrades the performance of facial recognition systems. Traditional occluded face recognition methods cannot integrate the computational resources of the edge layer and the device layer, and previous research fails to consider facial characteristics covering both occluded and unoccluded parts. To solve these problems, we put forward a device-edge collaborative occluded face recognition method based on cross-domain feature fusion. Specifically, the device-edge collaborative face recognition architecture makes the utmost of device and edge resources for real-time occluded face recognition. Then, a cross-domain facial feature fusion method is presented that combines facial features from both the explicit domain and the implicit domain. Furthermore, a delay-optimized edge recognition task scheduling method is developed that comprehensively considers the task load, computational power, bandwidth, and delay tolerance constraints of the edge. This method can dynamically schedule face recognition tasks and minimize recognition delay while ensuring recognition accuracy. The experimental results show that the proposed method achieves an average gain of about 21% in recognition latency, while the accuracy of the face recognition task remains essentially the same as that of the baseline method.
Demand Side Management (DSM) is a vital issue in smart grids, given the time-varying user demand for electricity and power generation cost over a day. Meanwhile, wireless communications, with ubiquitous connectivity and low latency, have emerged as a suitable option for the smart grid. The design of any DSM system using a wireless network must consider wireless link impairments, a consideration missing from the existing literature. In this paper, we propose a DSM system using a Real-Time Pricing (RTP) mechanism and a wireless Neighborhood Area Network (NAN) with data transfer uncertainty. A Zigbee-based Internet of Things (IoT) model is considered for the communication infrastructure of the NAN. A sample NAN employing XBee and Raspberry Pi modules is also implemented in real-world settings to evaluate its reliability in transferring smart grid data over a wireless link. The proposed DSM system determines the optimal price corresponding to the optimum system welfare based on two-way wireless communications among users, decision-makers, and energy providers. A novel cost function is adopted to reduce the impact of changes in user numbers on electricity prices. Simulation results indicate that the proposed system benefits both users and energy providers. Furthermore, experimental results demonstrate that the success rate of data transfer varies significantly over the implemented wireless NAN, which can substantially impact the performance of the proposed DSM system. Further simulations are then carried out to quantify and analyze the impact of wireless communications on the electricity price, user welfare, and provider welfare.
The integration of high-speed railway communication systems with 5G technology is widely recognized as a significant development. Because of the considerable mobility of trains and the complexity of the environment, the wireless channel exhibits non-stationary and fast time-varying characteristics, which presents significant hurdles for channel estimation. In addition, the use of massive MIMO technology in 5G networks further increases the complexity of estimation. To address these issues, this paper presents a novel approach to channel estimation in high-mobility scenarios using a reconstruction and recovery network. In this method, the time-frequency response of the channel is treated as a two-dimensional image. The Fast Super-Resolution Convolutional Neural Network (FSRCNN) is used first to reconstruct channel images, and the Denoising Convolutional Neural Network (DnCNN) is then applied to reduce channel noise and improve the accuracy of channel estimation. Simulation results show that the accuracy of the proposed channel estimation model surpasses that of standard channel estimation methods, while also exhibiting reduced algorithmic complexity.
We study a novel replication mechanism to ensure service continuity against multiple simultaneous server failures. In this mechanism, each item represents a computing task and is replicated into ξ+1 servers for some integer ξ≥1, with workloads specified by the amount of required resources. If one or more servers fail, the affected workloads can be redirected to other servers that host replicas associated with the same item, so that service is not interrupted by the failure of up to ξ servers. This requires any feasible assignment algorithm to reserve some capacity in each server to accommodate the workload redirected from potential failed servers without overloading, and determining the optimal method for reserving capacity becomes a key issue. Unlike existing algorithms, which assume that no two servers share replicas of more than one item, we first formulate capacity reservation for general, arbitrary replica-placement scenarios. Because of the combinatorial nature of this problem, finding the optimal solution is difficult. To this end, we propose a Generalized and Simple Calculating Reserved Capacity (GSCRC) algorithm, with a time complexity related only to the number of items packed in the server. In conjunction with GSCRC, we propose a robust replica packing algorithm with capacity optimization (RobustPack), which aims to minimize the number of servers hosting replicas while tolerating multiple server failures. Through theoretical analysis and experimental evaluations, we show that the RobustPack algorithm achieves better performance.
With the rapid development of web technology, Social Networks (SNs) have become one of the most popular platforms for users to exchange views and express their emotions. More and more people are accustomed to commenting on trending topics in SNs, producing a large volume of texts containing emotions. Textual Emotion Cause Extraction (TECE) aims to automatically extract the causes of a certain emotion in text, which is an important research issue in natural language processing. It differs from the earlier tasks of emotion recognition and emotion classification: rather than stopping at shallow emotion classification of text, it traces the source of the emotion. In this paper, we provide a survey of TECE. First, we introduce the development process and classification of TECE. Then, we discuss the existing methods and key factors for TECE. Finally, we enumerate the challenges and development trends of TECE.
In task offloading, the movement of vehicles causes switching of the connected RSUs and servers, which may lead to task offloading failure or high service delay. In this paper, we analyze the impact of vehicle movements on task offloading and reveal that the data preparation time for task execution can be minimized via forward-looking scheduling. Then, a Bi-LSTM-based model is proposed to predict the trajectories of vehicles. The service area is divided into several equal-sized grids; if the actual position of the vehicle and the position predicted by the model belong to the same grid, the prediction is considered correct, which reduces the difficulty of vehicle trajectory prediction. Moreover, we propose a scheduling strategy for delay optimization based on the vehicle trajectory prediction. Considering the inevitable prediction error, we take some edge servers around the predicted area as candidate execution servers and back up the data required for task execution to these candidate servers, thereby reducing the impact of prediction deviations on task offloading and converting a modest increase in resource overhead into a delay reduction in task offloading. Simulation results show that, compared with other classical schemes, the proposed strategy achieves lower average task offloading delays.
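The grid-based correctness criterion is straightforward to express; in the sketch below the cell size and grid width are assumed values, not the paper's.

```python
def grid_id(x, y, cell=100.0, n_cols=50):
    """Map a position (meters) to a grid cell index; a trajectory
    prediction counts as correct when the predicted and actual
    positions fall in the same cell."""
    return int(y // cell) * n_cols + int(x // cell)

def prediction_correct(pred_xy, true_xy, cell=100.0, n_cols=50):
    return grid_id(*pred_xy, cell, n_cols) == grid_id(*true_xy, cell, n_cols)

assert prediction_correct((120.0, 80.0), (190.0, 60.0))      # same 100 m cell
assert not prediction_correct((120.0, 80.0), (210.0, 60.0))  # neighboring cell
```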
With the introduction of 5G, users and devices can access the industrial network from anywhere in the world. Therefore, traditional perimeter-based security technologies for industrial networks no longer work well. To solve this problem, a new security model called Zero Trust (ZT) is desired, which follows the principle of "never trust, always verify". Every time an asset in the industrial network is accessed, the subject is authenticated and its trustworthiness is assessed. In this way, assets in the industrial network can be well protected, whether the subject is in the internal network or the external network. However, many problems remain to be solved before the zero trust model can be constructed in the 5G Industrial Internet collaboration system. In this paper, we first introduce the security issues in the 5G Industrial Internet collaboration system and illustrate the zero trust architecture. Then, we analyze the gap between existing security techniques and the zero trust architecture. Finally, we discuss several potential security techniques that can be used to implement the zero trust model. The purpose of this paper is to point out future directions for realizing the Zero Trust Architecture (ZTA) in the 5G Industrial Internet collaboration system.
It is difficult to simultaneously improve energy consumption and detection accuracy, or even to obtain a trade-off between them, when detecting and tracking moving targets, especially in Underwater Wireless Sensor Networks (UWSNs). To this end, this paper investigates the relationship between the Degree of Target Change (DoTC) and the detection period, as well as the impact of individual nodes, and proposes a Hierarchical Detection and Tracking Approach (HDTA). First, the network detection period is determined according to the DoTC, which reflects the variation of target motion; taking DoTC as the pheromone, an ant colony algorithm is proposed to adaptively adjust this network detection period. Second, during the network detection period, each detection node calculates its own node detection period based on the detection mutual information. The simulation results show that the proposed HDTA, with optimization at both the network level and the node level, simultaneously improves detection accuracy by 25% and reduces network energy consumption by 10% compared to the traditional adaptive period detection scheme.
Cyber-Physical Networks (CPNs) are comprehensive systems that integrate the information and physical domains, and they are widely used in various fields such as online social networking, smart grids, and the Internet of Vehicles (IoV). With the increasing popularity of digital photography and Internet technology, more and more users are sharing images on CPNs. However, many images are shared without any privacy processing, exposing hidden privacy risks and making sensitive content easily accessible to Artificial Intelligence (AI) algorithms. Existing image sharing methods lack fine-grained sharing policies and cannot protect user privacy. To address this issue, we propose a social relationship-driven privacy customization protection model for publishers and co-photographers. We construct a heterogeneous social information network centered on social relationships, introduce a user intimacy evaluation method with time decay, and evaluate privacy levels considering user interest similarity. To protect user privacy while maintaining image appreciation, we design a lightweight face-swapping algorithm based on a Generative Adversarial Network (GAN) to swap the faces that need protection. Our proposed method minimizes the loss of image utility while satisfying privacy requirements.
In mobile computing environments, most IoT devices connected to networks experience variable error rates and possess limited bandwidth. The conventional method of retransmitting information lost during transmission, commonly used in data transmission protocols, increases transmission delay and consumes excessive bandwidth. To overcome this issue, forward error correction techniques, e.g., Random Linear Network Coding (RLNC), can be used in data transmission. The primary challenge in RLNC-based methodologies is that sustaining a fixed coding ratio during data transmission leads to notable bandwidth usage and transmission delay under dynamic network conditions. Therefore, this study proposes a new block-based RLNC strategy known as Adjustable RLNC (ARLNC), which dynamically adjusts the coding ratio and transmission window at runtime based on the network error rate estimated from receiver feedback. The calculations in this approach are performed over a Galois field of order 256. Furthermore, we assessed ARLNC's performance under various error models, such as Gilbert-Elliott, exponential, and constant-rate models, and compared it with the standard RLNC. The results show that dynamically adjusting the coding ratio and transmission window size based on network conditions significantly enhances network throughput and reduces total transmission delay in most scenarios. In contrast to the conventional RLNC method employing a fixed coding ratio, the presented approach achieves a 73% decrease in transmission delay and a fourfold increase in throughput. However, in dynamic computational environments, ARLNC generally incurs higher computational costs than the standard RLNC, though it excels in high-performance networks.
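For reference, a minimal block-RLNC encoder over GF(2^8) is sketched below; the adaptive coding-ratio and transmission-window logic of ARLNC, driven by receiver feedback, is the paper's contribution and is not reproduced here.

```python
import random

def gf256_mul(a: int, b: int) -> int:
    """Multiplication in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return p

def rlnc_encode(packets, n_coded, seed=None):
    """Emit n_coded random GF(2^8)-linear combinations of equal-length
    source packets. Each coded packet carries its coefficient vector;
    the receiver decodes by Gaussian elimination over GF(2^8) once it
    holds enough linearly independent combinations."""
    rng = random.Random(seed)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randrange(256) for _ in packets]
        payload = bytearray(len(packets[0]))
        for c, pkt in zip(coeffs, packets):
            for i, byte in enumerate(pkt):
                payload[i] ^= gf256_mul(c, byte)  # XOR = addition in GF(2^8)
        coded.append((coeffs, bytes(payload)))
    return coded

# Four source packets encoded into six coded packets (coding ratio 1.5).
coded = rlnc_encode([bytes([i] * 8) for i in range(4)], n_coded=6, seed=1)
```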