As autonomous vehicles and other supporting infrastructure (e.g., smart cities and intelligent transportation systems) become more commonplace, the Internet of Vehicles (IoV) is becoming increasingly prevalent. There have been attempts to utilize Digital Twins (DTs) to facilitate the design, evaluation, and deployment of IoV-based systems, for example by supporting high-fidelity modeling, real-time monitoring, and advanced predictive capabilities. However, the literature review undertaken in this paper suggests that integrating DTs into IoV-based system design and deployment remains an understudied topic. This paper also explains how DTs can benefit IoV system designers and implementers, and describes several challenges and opportunities for future researchers.
The Digital Twin (DT) supports real-time analysis and provides a reliable simulation platform for the Internet of Things (IoT). The creation and application of DTs hinge on large amounts of data, which places pressure on the application of Artificial Intelligence (AI) for DT descriptions and intelligent decision-making. Federated Learning (FL) is a cutting-edge technology that enables geographically dispersed devices to collaboratively train a shared global model locally rather than relying on a data center to perform model training. Therefore, DTs can benefit from combining with FL, successfully solving the "data island" problem of traditional AI. However, FL still faces serious challenges, such as single points of failure, poisoning attacks, and the lack of effective incentive mechanisms. These issues must be tackled before DTs can be deployed successfully. Researchers from industry and academia have recognized the potential of introducing Blockchain Technology (BT) into FL to overcome these challenges: BT, acting as a distributed and immutable ledger, can store data in a secure, traceable, and trusted manner. However, to the best of our knowledge, a comprehensive literature review on this topic is still missing. In this paper, we review existing work on blockchain-enabled FL and outline its prospects for DTs. To this end, we first propose evaluation requirements with respect to security, fault tolerance, fairness, efficiency, cost saving, profitability, and support for heterogeneity. Then, we classify the existing literature according to the functionalities of BT in FL and analyze their advantages and disadvantages based on the proposed evaluation requirements. Finally, we discuss open problems in the existing literature and the future of DTs supported by blockchain-enabled FL, based on which we propose several directions for future research.
A Wireless Sensor Network (WSN) is a distributed sensor network composed of a large number of low-cost, low-performance, self-managed nodes. The special structure of WSNs brings both convenience and vulnerability; for example, a malicious participant can launch attacks by capturing a physical device. Therefore, node authentication that can resist malicious attacks is very important to network security. Recently, blockchain technology has shown the potential to enhance the security of the Internet of Things (IoT). In this paper, we propose a Blockchain-empowered Authentication Scheme (BAS) for WSNs. In our scheme, all nodes are managed using the identity information stored on the blockchain. Furthermore, a simulation experiment on worm detection is conducted on BAS, and security is evaluated in terms of detection and infection rates. The experimental results indicate that the proposed scheme can effectively inhibit the spread and infection of worms in the network.
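To make the idea of ledger-backed node management concrete, the following is a minimal, self-contained Python sketch of a hash-chained identity registry; it is a toy illustration rather than the paper's BAS, and names such as IdentityLedger, register_node, and authenticate are hypothetical.

```python
# Illustrative sketch (not the paper's BAS): a minimal hash-chained ledger
# storing WSN node identities, so tampering with any registered identity
# invalidates all subsequent block hashes.
import hashlib
import json
import time


def _block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


class IdentityLedger:
    def __init__(self):
        genesis = {"index": 0, "node_id": None, "pubkey": None,
                   "timestamp": time.time(), "prev_hash": "0" * 64}
        self.chain = [genesis]

    def register_node(self, node_id: str, pubkey: str) -> None:
        """Append a block holding one node's identity record."""
        prev = self.chain[-1]
        self.chain.append({"index": prev["index"] + 1, "node_id": node_id,
                           "pubkey": pubkey, "timestamp": time.time(),
                           "prev_hash": _block_hash(prev)})

    def verify_chain(self) -> bool:
        return all(self.chain[i]["prev_hash"] == _block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

    def authenticate(self, node_id: str, pubkey: str) -> bool:
        """Accept a node only if the chain is intact and it was registered."""
        return self.verify_chain() and any(
            b["node_id"] == node_id and b["pubkey"] == pubkey for b in self.chain)


ledger = IdentityLedger()
ledger.register_node("sensor-42", "pk-42")
print(ledger.authenticate("sensor-42", "pk-42"))   # True
print(ledger.authenticate("intruder", "pk-evil"))  # False
```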
Digital twinning enables manufacturers to create digital representations of physical entities, thus enabling virtual simulations for product development. Previous digital twinning efforts neglect decisive consumer feedback in the product development stages, failing to bridge the gap between physical and digital spaces. This work mines real-world consumer feedback from social media topics, which is significant for product development. We specifically analyze the prevalent time of a product topic, offering insight into both consumer attention and the period during which a product is widely discussed. Most current studies treat prevalent-time prediction as an accompanying task or assume the existence of a preset distribution. Consequently, these solutions are either biased in their objectives and assumed underlying patterns or weak in their ability to generalize to diverse topics. To this end, this work combines deep learning and survival analysis to predict the prevalent time of topics. We propose a specialized deep survival model consisting of two modules. The first module enriches the input covariates by incorporating latent features of the time-varying text, and the second module fully captures the temporal pattern of a topic through a recurrent network structure. Moreover, a specific loss function that differs from regular survival models is proposed to achieve more reasonable predictions. Extensive experiments on real-world datasets demonstrate that our model significantly outperforms state-of-the-art methods.
The Autonomous Underwater Glider (AUG) is a prevailing type of underwater intelligent internet vehicle and occupies a dominant position in industrial applications, for which path planning is an essential problem. Due to the complexity and variability of the ocean, accurate environment modeling and flexible path planning algorithms are pivotal challenges. Traditional models mainly rely on mathematical functions, which are neither complete nor reliable, and most existing path planning algorithms depend on the environment and lack flexibility. To overcome these challenges, we propose a path planning system for underwater intelligent internet vehicles. It applies digital twins and sensor data to map the real ocean environment to a virtual digital space, which provides a comprehensive and reliable environment for path simulation. We design a value-based reinforcement learning path planning algorithm and explore the optimal network structure parameters. The path simulation is controlled by a closed-loop model integrated into the terminal vehicle through edge computing. The integration of state input enriches the learning of the neural network and helps improve generalization and flexibility. The task-related reward function promotes rapid convergence of the training. The experimental results prove that our reinforcement learning based path planning algorithm has great flexibility and can effectively adapt to a variety of different ocean conditions.
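As a rough illustration of the value-based reinforcement learning idea, the sketch below runs tabular Q-learning on a toy grid environment; the paper's system uses a neural value network trained inside a digital-twin ocean model, so the environment, reward shaping, and hyperparameters here are assumptions.

```python
# Minimal tabular Q-learning sketch of value-based path planning on a toy grid
# "ocean" with a goal cell; only the value-update rule is illustrated.
import random

ROWS, COLS = 5, 5
GOAL = (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
alpha, gamma, eps = 0.1, 0.95, 0.2
Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in range(4)}


def step(state, a):
    dr, dc = ACTIONS[a]
    nxt = (min(max(state[0] + dr, 0), ROWS - 1), min(max(state[1] + dc, 0), COLS - 1))
    reward = 10.0 if nxt == GOAL else -1.0  # task-related reward: reach the goal quickly
    return nxt, reward, nxt == GOAL


for episode in range(500):
    s, done = (0, 0), False
    while not done:
        a = random.randrange(4) if random.random() < eps else max(range(4), key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s2, x)] for x in range(4))
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

print(max(Q[((0, 0), x)] for x in range(4)))  # learned value of the start state
```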
Multivariate Time Series (MTS) prediction explores the interrelationships among variables at historical moments, extracts their relevant characteristics, and is widely used in finance, weather, complex industries, and other fields; it is also important for constructing digital twin systems. However, existing methods do not take full advantage of the potential properties of the variables, which results in poor prediction accuracy. In this paper, we propose the Adaptive Fused Spatial-Temporal Graph Convolutional Network (AFSTGCN). First, to address the problem of the unknown spatial-temporal structure, we construct the Adaptive Fused Spatial-Temporal Graph (AFSTG) layer. Specifically, we fuse the spatial-temporal graph based on the interrelationship of spatial graphs, and simultaneously construct the adaptive adjacency matrix of the spatial-temporal graph using node embedding methods. Subsequently, to overcome the insufficient extraction of disordered correlation features, we construct the Adaptive Fused Spatial-Temporal Graph Convolutional (AFSTGC) module. The module forces the reordering of disordered temporal, spatial and spatial-temporal dependencies into rule-like data. AFSTGCN dynamically and synchronously acquires potential temporal, spatial and spatial-temporal correlations, thereby fully extracting rich hierarchical feature information to enhance prediction accuracy. Experiments on different types of MTS datasets demonstrate that the model achieves state-of-the-art single-step and multi-step performance compared with eight other deep learning models.
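The adaptive adjacency construction can be pictured as follows: one common formulation (assumed here, not necessarily the exact AFSTG design) derives the adjacency matrix from two learnable node-embedding matrices and then uses it in a graph-convolution step.

```python
# Sketch of an adaptive adjacency matrix built from node embeddings,
# A = softmax(ReLU(E1 @ E2.T)); shapes and names are assumptions.
import numpy as np

num_nodes, emb_dim = 6, 8
rng = np.random.default_rng(0)
E1 = rng.normal(size=(num_nodes, emb_dim))  # source-node embeddings (learnable in practice)
E2 = rng.normal(size=(num_nodes, emb_dim))  # target-node embeddings (learnable in practice)

scores = np.maximum(E1 @ E2.T, 0.0)                              # ReLU keeps non-negative affinities
A = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)   # row-wise softmax

# One graph-convolution step over node features X using the adaptive adjacency.
X = rng.normal(size=(num_nodes, 4))          # node feature matrix
W = rng.normal(size=(4, 4))                  # learnable weight matrix in practice
H = np.maximum(A @ X @ W, 0.0)               # aggregate neighbors, transform, ReLU
print(A.shape, H.shape)                      # (6, 6) (6, 4)
```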
In traditional digital twin communication system testing, test cases can be applied as exhaustively as possible to ensure the correctness of the system implementation, yet even then there is no guarantee that the implementation is completely correct. Formal verification is currently recognized as a method for ensuring the correctness of software systems for communication in digital twins, because it uses rigorous mathematical methods to verify such systems and can effectively help system designers determine whether a system is designed and implemented correctly. In this paper, we use the interactive theorem prover Isabelle/HOL to construct a formal model of the X86 architecture and to model the related assembly instructions. The verification result shows that the system states obtained after executing the relevant assembly instructions are consistent with the expected states, indicating that the system meets its design expectations.
In the context of Industry 4.0, a paradigm shift from traditional industrial manipulators to Collaborative Robots (CRs) is ongoing, with the latter serving humans ever more closely as auxiliary tools in many production processes. In this scenario, continuous technological advancements offer new opportunities for further innovation in robotics and other areas of next-generation industry. For example, 6G could play a prominent role due to its human-centric view of industrial domains. In particular, its expected dependability features will pave the way for new applications exploiting highly effective Digital Twin (DT)- and eXtended Reality (XR)-based telepresence. In this work, a novel application of the above technologies that allows two distant users to collaborate in programming a CR is proposed. The approach encompasses demanding data flows (e.g., point cloud-based streaming of the collaborating users and the robotic environment) subject to network latency and bandwidth constraints. Results obtained by analyzing this approach from the viewpoint of network requirements, in a setup designed to emulate 6G connectivity, indicate that the expected performance of forthcoming mobile networks will make it fully feasible in principle.
The rapid development of 5G/6G and AI enables an Internet of Everything (IoE) environment that can support millions of connected mobile devices and applications operating smoothly at high speed and low delay. However, these massive numbers of devices will lead to explosive traffic growth, which in turn places a great burden on data transmission and content delivery. This challenge can be eased by sinking some critical content from the cloud to the edge. In this case, how to determine the critical content, where to sink it, and how to access the content correctly and efficiently become new challenges. This work focuses on establishing a highly efficient content delivery framework in the IoE environment. In particular, the IoE environment is reconstructed as an end-edge-cloud collaborative system, in which the concept of the digital twin is applied to promote collaboration. Based on the digital assets obtained by digital twins from end users, a content popularity prediction scheme is first proposed to determine the critical content using a Temporal Pattern Attention (TPA)-enabled Long Short-Term Memory (LSTM) model. The prediction results are then fed into the proposed caching scheme to decide where to sink the critical content using Reinforcement Learning (RL). Finally, a collaborative routing scheme is proposed to determine how to access the content with the objective of minimizing overhead. The experimental results indicate that the proposed schemes outperform state-of-the-art benchmarks in terms of caching hit rate, average throughput, successful content delivery rate, and average routing overhead.
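To illustrate the "where to sink" step, the sketch below greedily caches the most valuable content at an edge node given predicted popularity scores; it is a simplification of the paper's RL-based caching scheme, and the function name, scores, and capacity are assumptions.

```python
# Illustrative greedy edge-cache placement given predicted popularity scores
# (e.g., produced by a TPA-LSTM predictor); not the paper's RL caching scheme.
def greedy_edge_cache(predicted_popularity, sizes, capacity):
    """predicted_popularity, sizes: dicts keyed by content id."""
    cached, used = [], 0
    # Prefer items with the highest popularity per unit size.
    for cid in sorted(predicted_popularity,
                      key=lambda c: predicted_popularity[c] / sizes[c], reverse=True):
        if used + sizes[cid] <= capacity:
            cached.append(cid)
            used += sizes[cid]
    return cached


popularity = {"v1": 0.50, "v2": 0.30, "v3": 0.15, "v4": 0.05}
sizes = {"v1": 4, "v2": 2, "v3": 1, "v4": 1}
print(greedy_edge_cache(popularity, sizes, capacity=5))  # ['v2', 'v3', 'v4']
```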
The digital twin is a concept that transcends reality, providing reverse feedback from real physical space to virtual digital space. People hold great prospects for this emerging technology. To upgrade the digital twin industrial chain, it is urgent to introduce more modalities, such as vision, haptics, hearing, and smell, into the virtual digital space, helping physical entities and virtual objects form a closer connection. Therefore, perceptual understanding and object recognition have become urgent hot topics in digital twin research. Existing surface material classification schemes often achieve recognition through machine learning or deep learning on a single modality, ignoring the complementarity between multiple modalities. To overcome this limitation, we propose in this article a multimodal fusion network that combines two modalities, visual and haptic, for surface material recognition. On the one hand, the network makes full use of the potential correlations between the modalities to deeply mine the modal semantics and complete the data mapping. On the other hand, the network is extensible and can be used as a universal architecture that accommodates more modalities. Experiments show that the constructed multimodal fusion network achieves 99.42% classification accuracy while reducing complexity.
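A generic two-branch fusion architecture of this kind can be sketched as follows in PyTorch; the encoders, layer sizes, and class count are assumptions rather than the paper's exact network, but the concatenation-based fusion head shows how further modalities could be attached by widening it.

```python
# Generic visual-haptic fusion sketch (not the paper's exact network): each
# modality is encoded separately, the latent features are concatenated, and a
# shared head classifies the surface material.
import torch
import torch.nn as nn


class VisualHapticFusion(nn.Module):
    def __init__(self, vis_dim=128, hap_dim=64, hidden=64, num_classes=10):
        super().__init__()
        self.vis_enc = nn.Sequential(nn.Linear(vis_dim, hidden), nn.ReLU())
        self.hap_enc = nn.Sequential(nn.Linear(hap_dim, hidden), nn.ReLU())
        # Additional modalities could be fused by widening this input layer.
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, num_classes))

    def forward(self, visual, haptic):
        z = torch.cat([self.vis_enc(visual), self.hap_enc(haptic)], dim=-1)
        return self.head(z)


model = VisualHapticFusion()
logits = model(torch.randn(8, 128), torch.randn(8, 64))
print(logits.shape)  # torch.Size([8, 10])
```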
Haptics is the modality that complements traditional audiovisual multimedia to drive the next wave of innovation, in which Internet data streams can be exchanged to enable remote skill and control applications. This will require ultra-low latency and ultra-high reliability to evolve the mobile experience into the era of the Digital Twin and the Tactile Internet. While the 5th generation of mobile networks is not yet widely deployed, Long-Term Evolution Advanced (LTE-A) latency remains much higher than the 1 ms requirement of the Tactile Internet and therefore of the Digital Twin. This work investigates a solution based on incorporating Software-Defined Networking (SDN) and Multi-access Edge Computing (MEC) technologies into an LTE-A network to deliver future multimedia applications over the Tactile Internet while overcoming the QoS challenges. Several network scenarios were designed and simulated using the Riverbed modeler, and the performance was evaluated using several time-related Key Performance Indicators (KPIs) such as throughput, End-to-End (E2E) delay, and jitter. The best scenario is clearly the one integrating the MEC and SDN approaches, where the overall delay, jitter, and throughput for haptics attained 2 ms, 0.01 ms, and 1000 packets per second, respectively. The results give clear evidence that integrating both SDN and MEC into LTE-A improves performance and fulfills the standard requirements, in terms of the above KPIs, for realizing a Digital Twin/Tactile Internet-based system.
As an emerging joint learning model, federated learning is a promising way to combine the model parameters of different users for training and inference without collecting the users' original data. However, a practical and efficient solution has not been established in previous work due to the absence of efficient matrix computation and cryptography schemes in privacy-preserving federated learning models, especially in partially homomorphic cryptosystems. In this paper, we propose a Practical and Efficient Privacy-preserving Federated Learning (PEPFL) framework. First, we present a lifted distributed ElGamal cryptosystem for federated learning, which solves the multi-key problem in federated learning. Second, we develop a Practical Partially Single Instruction Multiple Data (PSIMD) parallelism scheme that can encode a plaintext matrix into a single plaintext for encryption, improving encryption efficiency and reducing communication cost in the partially homomorphic cryptosystem. In addition, based on a Convolutional Neural Network (CNN) and the designed cryptosystem, a novel privacy-preserving federated learning framework is designed using Momentum Gradient Descent (MGD). Finally, we evaluate the security and performance of PEPFL. The experimental results demonstrate that the scheme is practical, effective, and secure, with low communication and computation costs.
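For intuition about the underlying primitive, the following toy Python sketch shows lifted (exponential) ElGamal, whose ciphertexts can be multiplied component-wise to add the plaintexts; the parameters are insecure demonstration values, and the distributed multi-key and SIMD-encoding aspects of PEPFL are not modeled.

```python
# Toy lifted ElGamal: Enc(m) = (g^r, g^m * h^r) mod p; multiplying ciphertexts
# adds plaintexts in the exponent. Demo parameters only, not secure.
import random

p = 2305843009213693951          # small prime for demonstration only
g = 3
x = random.randrange(2, p - 1)   # secret key
h = pow(g, x, p)                 # public key


def encrypt(m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p


def add(ca, cb):                 # homomorphic addition of plaintexts
    return (ca[0] * cb[0]) % p, (ca[1] * cb[1]) % p


def decrypt(c, max_m=10_000):
    c1, c2 = c
    gm = (c2 * pow(c1, p - 1 - x, p)) % p   # g^m = c2 / c1^x
    acc = 1
    # Lifted ElGamal needs a small-range discrete log to recover m itself.
    for m in range(max_m + 1):
        if acc == gm:
            return m
        acc = (acc * g) % p
    raise ValueError("message out of range")


c = add(encrypt(123), encrypt(456))
print(decrypt(c))                # 579
```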
This paper investigates the problem of collecting multidimensional data throughout time (i.e., longitudinal studies) for the fundamental task of frequency estimation under Local Differential Privacy (LDP) guarantees. Contrary to frequency estimation of a single attribute, the multidimensional aspect demands particular attention to the privacy budget. Besides, when collecting user statistics longitudinally, privacy progressively degrades. Indeed, the “multiple” settings in combination (i.e., many attributes and several collections throughout time) impose several challenges, for which this paper proposes the first solution for frequency estimates under LDP. To tackle these issues, we extend the analysis of three state-of-the-art LDP protocols (Generalized Randomized Response - GRR, Optimized Unary Encoding - OUE, and Symmetric Unary Encoding - SUE) for both longitudinal and multidimensional data collections. While the known literature uses OUE and SUE for two rounds of sanitization (a.k.a. memoization), i.e., L-OUE and L-SUE, respectively, we analytically and experimentally show that starting with OUE and then with SUE provides higher data utility (i.e., L-OSUE). Also, for attributes with small domain sizes, we propose Longitudinal GRR (L-GRR), which provides higher utility than the other protocols based on unary encoding. Last, we also propose a new solution named Adaptive LDP for LOngitudinal and Multidimensional FREquency Estimates (ALLOMFREE), which randomly samples a single attribute to be sent with the whole privacy budget and adaptively selects the optimal protocol, i.e., either L-GRR or L-OSUE. As shown in the results, ALLOMFREE consistently and considerably outperforms the state-of-the-art L-SUE and L-OUE protocols in the quality of the frequency estimates.
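As a reference point for the protocols being extended, the sketch below implements plain (single-collection) Generalized Randomized Response and its unbiased frequency estimator; the longitudinal variants (L-GRR, L-OSUE) and the ALLOMFREE sampling strategy are not shown, and the domain and epsilon values are arbitrary.

```python
# Generalized Randomized Response (GRR): report the true value with probability
# p = e^eps / (e^eps + k - 1), otherwise a uniformly random other value.
import math
import random
from collections import Counter


def grr_perturb(value, domain, eps):
    k = len(domain)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])


def grr_estimate(reports, domain, eps):
    """Unbiased frequency estimates from perturbed reports."""
    n, k = len(reports), len(domain)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = (1 - p) / (k - 1)
    counts = Counter(reports)
    return {v: (counts[v] / n - q) / (p - q) for v in domain}


domain = ["A", "B", "C", "D"]
true_values = ["A"] * 600 + ["B"] * 300 + ["C"] * 100
reports = [grr_perturb(v, domain, eps=1.0) for v in true_values]
print(grr_estimate(reports, domain, eps=1.0))  # roughly {'A': 0.6, 'B': 0.3, ...}
```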
With the maturity and development of 5G, Mobile Edge CrowdSensing (MECS), as an intelligent data collection paradigm, offers broad prospects for various IoT applications. However, sensing users, as data uploaders, lack a balance between data benefits and privacy threats, leading either to conservative data uploads and low revenue or to excessive uploads and privacy breaches. To solve this problem, a Dynamic Privacy Measurement and Protection (DPMP) framework is proposed based on differential privacy and reinforcement learning. First, a Dynamic Privacy Measurement (DPM) model is designed to quantify the amount of data privacy, together with a method for calculating personalized privacy thresholds for different users. Furthermore, a Dynamic Private sensing data Selection (DPS) algorithm is proposed to help sensing users maximize data benefits within their privacy thresholds. Finally, theoretical analysis and ample experimental results show that the DPMP framework is effective and efficient in balancing data benefits and sensing-user privacy protection; in particular, the proposed DPMP framework achieves 63% higher training efficiency and 23% higher data benefits, respectively, compared to the Monte Carlo algorithm.
Benefiting from the development of Federated Learning (FL) and distributed communication systems, large-scale intelligent applications have become possible. Distributed devices not only provide adequate training data but also introduce privacy leakage and energy consumption concerns. How to optimize energy consumption in distributed communication systems while ensuring user privacy and model accuracy has become an urgent challenge. In this paper, we define FL as a three-layer architecture comprising users, agents, and a server. To find a balance among model training accuracy, privacy-preserving effect, and energy consumption, we model the FL training process as a game. We use an extensive-form game tree to analyze the key elements that influence the players' decisions in a single game, and then find the incentive mechanism that meets social norms through the repeated game. The experimental results show that the Nash equilibrium we obtain is consistent with real-world behavior, and the proposed incentive mechanism can also encourage users to submit high-quality data in FL. Over multiple rounds of play, the incentive mechanism helps all players find the optimal strategies for energy, privacy, and accuracy of FL in distributed communication systems.
With the prevalence of Internet of Things (IoT) systems, smart cities comprise complex networks of sensors, actuators, appliances, and cyber services. The complexity and heterogeneity of smart cities make them vulnerable to sophisticated cyber-attacks, especially privacy-related attacks such as inference and data poisoning attacks. Federated Learning (FL) has been regarded as a promising method to enable distributed learning with privacy-preserved intelligence in IoT applications. Even though developing privacy-preserving FL has drawn great research interest, current research concentrates only on FL with independent and identically distributed (i.i.d.) data, and few studies have addressed the non-i.i.d. setting. FL is known to be vulnerable to Generative Adversarial Network (GAN) attacks, where an adversary can pose as a contributor participating in the training process to acquire the private data of other contributors. This paper proposes an innovative Privacy Protection-based Federated Deep Learning (PP-FDL) framework, which achieves data protection against privacy-related GAN attacks along with high classification rates on non-i.i.d. data. PP-FDL is designed to enable fog nodes to cooperate in training the FDL model in a way that ensures contributors have no access to each other's data, with class probabilities protected using a private identifier generated for each class. The PP-FDL framework is evaluated for image classification using simple convolutional networks trained on the MNIST and CIFAR-10 datasets. The empirical results reveal that PP-FDL achieves data protection and that the framework outperforms the other three state-of-the-art models with accuracy improvements of 3%-8%.
The security of Federated Learning (FL)/Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating the training samples; such attacks are therefore called causative availability indiscriminate attacks. Because existing data sanitization methods are hard to apply to real-time applications due to their tedious processes and heavy computation, we propose a new supervised batch detection method for poisoned data, which can quickly sanitize the training dataset before local model training. We design a training dataset generation method that helps enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model accumulates knowledge about poisoned data, which can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
In wireless communication networks, mobile users in overlapping areas may experience severe interference; therefore, designing effective Interference Management (IM) methods is crucial to improving network performance. However, when managing multiple disturbances from the same source, it may not be feasible to use existing IM methods such as Interference Alignment (IA) and Interference Steering (IS) exclusively. This is because with IA, the aligned interference becomes indistinguishable at its desired Receiver (Rx) under the cost constraint of Degrees-of-Freedom (DoF), while with IS, more transmit power is consumed when IS is applied directly and repeatedly to each interference. To remedy these deficiencies, Interference Alignment Steering (IAS) is proposed, incorporating IA and IS and exploiting their respective advantages in IM. With IAS, the interfering Transmitter (Tx) first aligns one interference, incurred by the transmission of one data stream, to a one-dimensional subspace orthogonal to the desired transmission at the interfered Rx; the remaining interferences are then treated as a whole and steered to the same subspace as the aligned interference. Moreover, two improved versions of IAS are presented: IAS with Full Adjustment at the Interfering Tx (IAS-FAIT) and Interference Steering and Alignment (ISA). The former considers the influence of IA on the interfering user-pair's performance: the orthogonality between the desired signals at the interfered Rx can be maintained by adjusting the spatial characteristics of all interferences and the aligned interference components, thus ensuring the Spectral Efficiency (SE) of the interfering communication pairs. Under ISA, the power cost of IS at the interfered Tx is minimized, hence improving the SE performance of the interfered communication pairs. Since the proposed methods are realized cooperatively at the interfering and interfered Txs, the expenses of IM are shared by both communication pairs. Our in-depth simulation results show that the joint use of IA and IS can effectively manage multiple disturbances from the same source and improve the system's SE.
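The core alignment step can be illustrated numerically: the snippet below picks an interfering-transmitter precoder in the null space of the desired receive direction composed with the interference channel, so the received interference lands orthogonal to the desired signal. The antenna configuration and channels are arbitrary assumptions, and power allocation and the steering stage are omitted.

```python
# NumPy sketch: choose precoder v with d^H H_i v = 0 so the interference at the
# interfered Rx is orthogonal to the desired-signal direction d.
import numpy as np

rng = np.random.default_rng(1)
Nr, Nt = 2, 2
H_i = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)  # interference channel
d = rng.normal(size=Nr) + 1j * rng.normal(size=Nr)
d = d / np.linalg.norm(d)                      # desired-signal direction at the interfered Rx

# v must lie in the null space of the 1 x Nt row vector d^H H_i.
row = d.conj() @ H_i
_, _, Vh = np.linalg.svd(row.reshape(1, -1))
v = Vh[-1].conj()                              # right singular vector spanning the null space
v = v / np.linalg.norm(v)

residual = abs(d.conj() @ (H_i @ v))           # interference leaking onto the desired direction
print(f"leakage onto desired direction: {residual:.2e}")  # ~1e-16
```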
With the success of deep learning applications in IoT and edge computing, significant demand has arisen for energy-efficient deep neural networks that can support power-limited embedded devices. An accurate energy prediction approach is critical for providing measurements and guiding optimization. However, current energy prediction approaches lack accuracy and generalization ability due to insufficient research on neural network structure and excessive reliance on customized training datasets. This paper presents a novel energy prediction model, NeurstrucEnergy. NeurstrucEnergy treats neural networks as directed graphs and trains a bi-directional graph neural network on a randomly generated dataset to extract structural features for energy prediction. NeurstrucEnergy has advantages over linear approaches because the bi-directional graph neural network collects structural features from each layer's parents and children. Experimental results show that NeurstrucEnergy establishes state-of-the-art results with a mean absolute percentage error of 2.60%. We also evaluate NeurstrucEnergy on a randomly generated dataset, achieving a mean absolute percentage error of 4.83% over 10 typical convolutional neural networks from recent years and 7 efficient convolutional neural networks created by neural architecture search. Our code is available at https://github.com/NEUSoftGreenAI/NeurstrucEnergy.git.
In this paper, we explore a distributed collaborative caching and computing model to support the distribution of adaptive bitrate video streaming. The aim is to reduce the average initial buffer delay and improve the quality of the user experience. Considering the difference between global and local video popularity and the time-varying characteristics of video popularity, a two-stage caching scheme is proposed to push popular videos closer to users and minimize the average initial buffer delay. Based on both long-term and short-term content popularity, the proposed caching solution is decoupled into a proactive cache stage and a cache update stage. In the proactive cache stage, we develop a proactive cache placement algorithm that can be executed during off-peak periods. In the cache update stage, we propose a reactive cache update algorithm that updates the existing cache policy to minimize the buffer delay. Simulation results verify that the proposed caching algorithms can reduce the initial buffer delay efficiently.
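The cache update stage can be pictured with a simple popularity-aware replacement rule, sketched below; this is only an illustration of the reactive idea, not the paper's algorithm, and the popularity scores and capacity are assumptions.

```python
# Reactive cache update sketch: if a requested, uncached video is now more
# popular (short term) than the least popular cached video, swap them.
def reactive_update(cache, short_term_pop, requested, capacity):
    """cache: set of video ids; short_term_pop: dict of recent popularity scores."""
    if requested in cache:
        return cache                                   # cache hit, nothing to do
    if len(cache) < capacity:
        cache.add(requested)                           # spare room, just insert
        return cache
    coldest = min(cache, key=lambda v: short_term_pop.get(v, 0.0))
    if short_term_pop.get(requested, 0.0) > short_term_pop.get(coldest, 0.0):
        cache.remove(coldest)                          # evict the least popular video
        cache.add(requested)
    return cache


cache = {"v1", "v2", "v3"}                             # filled in the proactive stage
short_term_pop = {"v1": 0.9, "v2": 0.2, "v3": 0.4, "v9": 0.7}
print(reactive_update(cache, short_term_pop, "v9", capacity=3))  # {'v1', 'v3', 'v9'}
```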
In the era of rapid Internet of Things (IoT) development, numerous machine-to-machine technologies have been applied in the industrial domain. Due to the divergence of IoT solutions, industry faces the need to apply various technologies for automation and control. This leads to a demand for an interworking mechanism that allows smooth interoperability between heterogeneous devices. One of the major protocols widely used today in industrial electronic devices is Modbus. However, data generated by Modbus devices cannot be understood by IoT applications using different protocols, so Modbus should be used in combination with an IoT service layer platform. oneM2M, a global IoT standard, can play the role of interconnecting various protocols, as it provides flexible tools suitable for building an interworking framework for industrial services. Therefore, in this paper, we propose an interworking architecture between devices using the Modbus protocol and an IoT platform implemented based on oneM2M standards. In the proposed architecture, we introduce a way to model Modbus data as oneM2M resources, rules for mapping them to each other, procedures required to establish interoperable communication, and optimization methods for this architecture. We analyze our solution and provide an evaluation by implementing it for a solar power management use case. The results demonstrate that our model is feasible and can be applied to real-world scenarios.
Nearly all real-world networks are complex networks and are usually in danger of collapse. Therefore, it is crucial to exploit and understand the mechanisms of network attacks and provide better protection for network functionalities. Network dismantling aims to find the smallest set of nodes whose removal breaks the network into connected components of sub-extensive size. To overcome the limitations and drawbacks of existing network dismantling methods, this paper focuses on the network dismantling problem and proposes a neighbor-loop structure based centrality metric, NL, which achieves a balance between computational efficiency and evaluation accuracy. In addition, we design a novel method combining NL-based node removal, greedy tree-breaking, and reinsertion. Moreover, we compare five baseline methods with our algorithm on ten widely used real-world networks and three types of model networks, including Erdős-Rényi random networks, Watts-Strogatz small-world networks, and Barabási-Albert scale-free networks with different generation parameters. Experimental results demonstrate that our proposed method outperforms most peer methods by obtaining a minimal set of targeted attack nodes. Furthermore, the insights gained from this study may assist future practical research into real-world networks.
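The overall dismantling loop can be sketched as follows with networkx, using degree as a stand-in for the NL centrality and omitting the greedy tree-breaking and reinsertion refinements; the target component fraction is an assumption.

```python
# Dismantling loop sketch: remove the highest-scoring node until the largest
# connected component drops below a target fraction of the original network.
import networkx as nx


def dismantle(G, target_fraction=0.1):
    G = G.copy()
    n0 = G.number_of_nodes()
    removed = []
    while max(len(c) for c in nx.connected_components(G)) > target_fraction * n0:
        node = max(G.degree, key=lambda kv: kv[1])[0]   # stand-in centrality: degree
        G.remove_node(node)
        removed.append(node)
    return removed


G = nx.erdos_renyi_graph(200, 0.03, seed=42)
attack_set = dismantle(G)
print(f"removed {len(attack_set)} of {G.number_of_nodes()} nodes")
```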
The Hybrid Power-line/Visible-light Communication (HPVC) network has been one of the most promising Cooperative Communication (CC) technologies for constructing smart homes due to its superior communication reliability and hardware efficiency. Current research on HPVC networks focuses on performance analysis and optimization of the Physical (PHY) layer, where the Power Line Communication (PLC) component only serves as the backbone to provide power to Light Emitting Diode (LED) devices. Designing a Media Access Control (MAC) protocol that allows both the PLC and Visible Light Communication (VLC) components to transmit data, i.e., to achieve true HPVC network CC, therefore remains a great challenge. To solve this problem, we propose a new HPVC network MAC protocol (HPVC MAC) based on Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) that combines the IEEE 802.15.7 and IEEE 1901 standards. Firstly, we add an Additional Assistance (AA) layer to provide channel selection strategies for sensor stations, so that they can complete data transmission on the selected channel via the corresponding CSMA/CA mechanism. Based on this, we give a detailed working principle of the HPVC MAC, followed by the construction of a joint analytical model for mathematical validation of the HPVC MAC. In the modeling process, the impacts of PHY layer settings (including channel fading types and additive noise features), the CSMA/CA mechanisms of 802.15.7 and 1901, and practical configurations (such as traffic rate and transit buffer size) are comprehensively taken into consideration. Moreover, we prove that the proposed analytical model is solvable. Finally, through extensive simulations, we characterize the performance of the HPVC MAC under different system parameters and verify the correctness of the corresponding analytical model, with an average error rate of 4.62% between the simulation and analytical results.
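For intuition about the contention behavior both standards build on, the sketch below simulates a generic CSMA/CA binary exponential backoff; the contention-window values and collision probability are assumptions, and the AA layer and the joint PLC/VLC analytical model are not represented.

```python
# Simplified CSMA/CA backoff simulation: on each failed attempt the contention
# window doubles (up to a cap) and the station waits a random number of slots.
import random

CW_MIN, CW_MAX, MAX_RETRIES = 8, 256, 6


def csma_ca_attempt(collision_prob=0.3):
    """Return (success, total_backoff_slots) for one frame transmission."""
    cw, total_slots = CW_MIN, 0
    for _ in range(MAX_RETRIES):
        total_slots += random.randrange(cw)        # random backoff before sensing
        if random.random() > collision_prob:       # channel won, frame delivered
            return True, total_slots
        cw = min(cw * 2, CW_MAX)                   # binary exponential backoff
    return False, total_slots


results = [csma_ca_attempt() for _ in range(10_000)]
success_rate = sum(ok for ok, _ in results) / len(results)
print(f"delivery ratio ~ {success_rate:.3f}")
```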
The energy limitation of traditional Wireless Sensor Networks (WSNs) greatly confines network lifetime, as massive sensing data must be generated and processed with a limited battery. The energy harvesting WSN is a novel network architecture that addresses this limitation of traditional WSNs. However, existing coverage and deployment schemes neglect the environmental correlation of sensor nodes and external energy with respect to physical space. Comprehensively considering the spatial correlation of the environment and the uneven distribution of energy in energy harvesting WSNs, we investigate how to deploy a collection of sensor nodes so as to save deployment cost while ensuring perpetual coverage of the targets. The Confident Information Coverage (CIC) model is adopted to formulate the CIC Minimum Deployment Cost Target Perpetual Coverage (CICMTP) problem of minimizing the deployed sensor nodes. As the CICMTP is NP-hard, we devise two approximation algorithms, named Local Greedy Threshold Algorithm based on CIC (LGTA-CIC) and Overall Greedy Search Algorithm based on CIC (OGSA-CIC). The LGTA-CIC has low time complexity, while the OGSA-CIC has a better approximation ratio. Extensive simulation results demonstrate that the OGSA-CIC achieves lower deployment cost and that the proposed algorithms outperform the GRNP, TPNP, and EENP algorithms.
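The greedy flavor of the proposed algorithms can be illustrated with a simplified coverage-deployment sketch, where plain disk coverage stands in for the CIC model and energy-harvesting constraints are ignored; the candidate sites, targets, radius, and uniform per-site cost are assumptions.

```python
# Greedy deployment sketch in the spirit of OGSA-CIC: repeatedly pick the
# candidate site covering the most still-uncovered targets per unit cost.
import math


def greedy_deploy(candidates, targets, radius, cost):
    """candidates/targets: lists of (x, y); returns indices of chosen sites."""
    covers = [{t for t, (tx, ty) in enumerate(targets)
               if math.dist((cx, cy), (tx, ty)) <= radius}
              for (cx, cy) in candidates]
    uncovered, chosen = set(range(len(targets))), []
    while uncovered:
        best = max(range(len(candidates)),
                   key=lambda i: len(covers[i] & uncovered) / cost)
        if not covers[best] & uncovered:
            raise ValueError("some targets cannot be covered")
        chosen.append(best)
        uncovered -= covers[best]
    return chosen


candidates = [(0, 0), (2, 0), (4, 0), (2, 2)]
targets = [(0, 1), (2, 1), (4, 1), (2, 3)]
print(greedy_deploy(candidates, targets, radius=1.5, cost=1.0))  # [3, 0, 2]
```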