Machine learning is now widely used across applications. Training a model requires huge amounts of data, which can pose a threat to user privacy. With growing concern for privacy, the “Right to be Forgotten” has been proposed: users have the right to request that their personal information be removed from machine learning models. Machine unlearning emerged in response to this need. Implementing it is not easy, because simply deleting samples from a database does not make the model “forget” the data. This paper therefore summarises the definition and formulation of machine unlearning, its process, deletion requests, design requirements and validation, algorithms, applications, and future perspectives, in the hope of helping future researchers in machine unlearning.
The study addresses the challenge of dynamic assessment in power systems by proposing a design scheme for an intelligent adaptive power distribution system based on runtime verification. The system architecture is built on cloud-edge-end collaboration, enabling comprehensive monitoring and precise management of the power grid through coordination across levels. Specifically, the study employs an adaptive observer approach that dynamically adjusts observers to reflect requirement updates and ensure system reliability. This method covers both structural and parametric adjustments to specifications, including updating time protection conditions, updating events, and adding or removing responses. The results demonstrate that, with adaptive observers, the system responds more flexibly to changes, significantly enhancing its efficiency. By employing dynamically changing verification specifications, the system achieves real-time, flexible verification. This research provides technical support for the safe, efficient, and reliable operation of electrical power distribution systems.
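The adaptive-observer idea described above can be illustrated with a minimal sketch: a monitor that checks a per-event timing guard and supports parametric updates (changing a deadline) and structural updates (adding or removing a monitored response) at runtime. All class and method names here, and the deadline-guard specification itself, are illustrative assumptions, not the paper's design.

```python
# Hypothetical sketch of an adaptive runtime-verification observer.
# The specification (a per-event deadline guard) and all names are
# illustrative assumptions, not the paper's API.

class AdaptiveObserver:
    def __init__(self, deadlines):
        # deadlines: event name -> max allowed seconds between the
        # request and its response (a simple time protection condition)
        self.deadlines = dict(deadlines)
        self.pending = {}     # event -> time the request was observed
        self.violations = []  # recorded guard violations

    def update_deadline(self, event, seconds):
        """Parametric adjustment: tighten or relax a timing guard."""
        self.deadlines[event] = seconds

    def add_response(self, event, seconds):
        """Structural adjustment: monitor a new request/response pair."""
        self.deadlines[event] = seconds

    def remove_response(self, event):
        """Structural adjustment: stop monitoring an event."""
        self.deadlines.pop(event, None)
        self.pending.pop(event, None)

    def on_request(self, event, t):
        if event in self.deadlines:
            self.pending[event] = t

    def on_response(self, event, t):
        start = self.pending.pop(event, None)
        if start is not None and t - start > self.deadlines[event]:
            self.violations.append((event, t - start))


obs = AdaptiveObserver({"load_shed": 2.0})
obs.on_request("load_shed", t=0.0)
obs.on_response("load_shed", t=1.5)    # within the 2 s guard: ok
obs.update_deadline("load_shed", 1.0)  # requirement update at runtime
obs.on_request("load_shed", t=10.0)
obs.on_response("load_shed", t=11.5)   # exceeds the new 1 s guard
print(obs.violations)                  # [('load_shed', 1.5)]
```

The same specification object is adjusted while monitoring continues, which is the essence of verifying against dynamically changing specifications.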
With the rapid development of mobile internet technology and increasing concerns over data privacy, Federated Learning (FL) has emerged as a significant framework for training machine learning models. Given advances in hardware, User Equipment (UE) can now process multiple computing tasks simultaneously, and since a UE may hold multiple data sources suited to different FL tasks, multi-task FL is a promising way to serve different application requests at the same time. However, running multiple FL tasks simultaneously can strain a device's computation resources and cause excessive energy consumption. Due to factors such as limited battery capacity and device heterogeneity, a UE may fail to complete its local training task efficiently, and some UEs with high-quality data may become stragglers. To alleviate the energy consumption challenge in a multi-task FL environment, we design an automatic Multi-Task FL Deployment (MFLD) algorithm to achieve load balancing and energy consumption goals. The MFLD algorithm leverages Deep Reinforcement Learning (DRL) to automatically select UEs and allocate computation resources according to task requirements. Extensive experiments validate the proposed approach, showing significant improvements in task deployment success rate and energy consumption cost.
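The RL-based deployment idea can be illustrated with a much simpler tabular Q-learning stand-in: the agent learns which UE to deploy each FL task on so that energy cost is minimized. The paper uses deep RL; the tabular method, the energy model, the reward, and all names below are invented for the demo.

```python
import random

# Tabular Q-learning stand-in for a DRL-based deployment policy:
# state = FL task, action = which UE to deploy it on, reward = the
# negated energy cost. All numbers and names are demo assumptions.

def train(energy, episodes=3000, alpha=0.1, eps=0.2, seed=1):
    # energy[t][u]: cost of running task t on UE u
    rng = random.Random(seed)
    n_tasks, n_ues = len(energy), len(energy[0])
    q = [[0.0] * n_ues for _ in range(n_tasks)]
    for _ in range(episodes):
        for t in range(n_tasks):
            # epsilon-greedy: mostly exploit the best-known UE,
            # sometimes explore a random one
            u = (rng.randrange(n_ues) if rng.random() < eps
                 else max(range(n_ues), key=lambda a: q[t][a]))
            reward = -energy[t][u]  # cheaper deployment = higher reward
            q[t][u] += alpha * (reward - q[t][u])
    # greedy deployment plan after training
    return [max(range(n_ues), key=lambda a: q[t][a]) for t in range(n_tasks)]

energy = [[5.0, 2.0, 8.0],   # task 0 is cheapest on UE 1
          [1.0, 4.0, 3.0]]   # task 1 is cheapest on UE 0
print(train(energy))  # expected to converge to [1, 0]
```

With deterministic rewards the Q-values converge to the negated energies, so the greedy plan picks the cheapest UE per task, mirroring the energy-minimization objective at toy scale.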
With the growing demand for data sharing, realizing fine-grained, trusted access control over shared data while protecting data security has become a difficult problem. The ciphertext-policy attribute-based encryption (CP-ABE) model is widely used in cloud data sharing scenarios, but it suffers from problems such as access-policy privacy leakage, irrevocability of users or attributes, key escrow, and trust bottlenecks. We therefore propose a blockchain-assisted CP-ABE (B-CP-ABE) mechanism for trusted data access control. First, we construct a trusted data access control architecture based on the B-CP-ABE, which automates the execution of access policies through smart contracts and guarantees a trusted access process through the blockchain. Then, we define the B-CP-ABE scheme, which supports partial policy hiding, attribute revocation, and resistance to key escrow. The scheme uses a Bloom filter to hide the mapping of sensitive attributes in the access structure, realizes flexible revocation and recovery of users and attributes through a re-encryption algorithm, and solves the key escrow problem through joint authorization by data owners and the attribute authority. Finally, we demonstrate the usability of the B-CP-ABE scheme through security and performance analyses.
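The Bloom-filter step of partial policy hiding can be sketched as follows: the access structure stores only filter bits, so sensitive attribute names never appear in the ciphertext, while anyone holding an attribute can still test membership. The filter size, hash construction, and attribute strings are illustrative assumptions, not the B-CP-ABE parameters.

```python
import hashlib

# Minimal Bloom-filter sketch of partial policy hiding. Sizes and the
# salted-SHA-256 hashing scheme are demo assumptions.

M, K = 256, 4  # filter size in bits, number of hash functions

def _positions(attr):
    # Derive K bit positions from K salted SHA-256 digests
    return [int(hashlib.sha256(f"{i}|{attr}".encode()).hexdigest(), 16) % M
            for i in range(K)]

def build_policy_filter(sensitive_attrs):
    # Insert each sensitive attribute; only the bit array is published
    bits = [0] * M
    for attr in sensitive_attrs:
        for p in _positions(attr):
            bits[p] = 1
    return bits

def may_satisfy(bits, user_attr):
    # Membership test: no false negatives, small false-positive rate
    return all(bits[p] for p in _positions(user_attr))

policy = build_policy_filter(["dept:cardiology", "role:chief_physician"])
print(may_satisfy(policy, "dept:cardiology"))   # True
print(may_satisfy(policy, "dept:finance"))      # almost certainly False
```

The false-positive property is what makes this a *partial* hiding technique: the filter can occasionally admit a non-member attribute, which the cryptographic layer must still reject during decryption.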
The evolution of artificial intelligence has thrust Online Judge (OJ) systems to the forefront of research, particularly in programming education, with a focus on enhancing performance and efficiency. Addressing the shortcomings of current OJ systems, namely coarse-grained defect localization and heavyweight task scheduling, this paper introduces an Integrated Intelligent Defect Localization and Lightweight Task Scheduling Online Judge (IDL-LTSOJ) system. First, to achieve token-level fine-grained defect localization, a Deep Fine-Grained Defect Localization (Deep-FGDL) neural network model is developed. By integrating Bidirectional Long Short-Term Memory (BiLSTM) and Bidirectional Gated Recurrent Unit (BiGRU) layers, the model extracts fine-grained information from the abstract syntax tree (AST) of code, enabling more accurate defect localization. We then propose a lightweight task scheduling architecture to tackle issues such as limited concurrency in task evaluation and high equipment costs. This architecture integrates a Kafka messaging system with an optimized task distribution strategy to enable concurrent execution of evaluation tasks, substantially enhancing evaluation efficiency. Experimental results demonstrate that the Deep-FGDL model improves Top-20 accuracy by 35.9% over traditional machine learning baselines on fine-grained defect localization tasks. Moreover, the lightweight task scheduling strategy reduces response time by nearly 6000 ms when handling 120 tasks, a significant improvement in evaluation efficiency over centralized evaluation methods.
This paper investigates a wireless rechargeable sensor network (WRSN) framework assisted by multiple mobile unmanned vehicles (MUVs) and laser-charged unmanned aerial vehicles (UAVs). On the basis of this framework, we investigate the trajectory optimization of multi-UAVs and multi-MUVs for charging WRSN (TOUM) problem, which aims to design a cooperative travel plan for the UAVs and MUVs such that the remaining energy of each sensor in the WRSN stays at or above a threshold while the completion time of the slowest UAV is minimized. The TOUM problem is proved to be NP-hard. To solve it, we first investigate the multiple-UAV TSP (MUTSP) problem to balance the charging tasks assigned to each UAV. Then, based on the MUTSP problem, we propose the TOUM algorithm (TOUMA) to design the detailed travel plan of UAVs and MUVs. We also present an algorithm named TOUM-DQN that makes intelligent travel-plan decisions for UAVs and MUVs by extracting valuable information from the network. The effectiveness of the proposed algorithms is verified through extensive simulation experiments. The results demonstrate that the TOUMA algorithm outperforms the solar charging method, the base station charging method, and the TOUM-DQN algorithm in time efficiency, while the execution time of the TOUM-DQN algorithm is significantly lower than that of TOUMA.
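The task-balancing goal behind MUTSP, giving every UAV a comparable charging workload so the slowest UAV finishes as early as possible, can be sketched with a standard longest-processing-time greedy heuristic. The task costs, function names, and the heuristic itself are illustrative assumptions, not the TOUMA algorithm.

```python
import heapq

# Greedy sketch of balancing charging tasks across UAVs to reduce the
# makespan (the completion time of the slowest UAV). All numbers and
# names are demo assumptions.

def balance_tasks(task_costs, num_uavs):
    # Longest-processing-time-first: sort tasks by cost descending,
    # then always hand the next task to the least-loaded UAV.
    loads = [(0.0, i, []) for i in range(num_uavs)]  # (load, uav, tasks)
    heapq.heapify(loads)
    for cost in sorted(task_costs, reverse=True):
        load, i, tasks = heapq.heappop(loads)       # least-loaded UAV
        heapq.heappush(loads, (load + cost, i, tasks + [cost]))
    return sorted(loads, key=lambda x: x[1])

plans = balance_tasks([4.0, 3.0, 3.0, 2.0, 2.0, 2.0], num_uavs=2)
makespan = max(load for load, _, _ in plans)
print(makespan)  # 8.0 -- a perfectly balanced split of the 16.0 total
```

This greedy heuristic is a classic makespan approximation; a full solution would additionally order each UAV's tasks as a TSP tour, which is the part the abstract's algorithms address.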
In recent years, the rapid development of Internet of Things (IoT) technology has led to a significant increase in the amount of data stored in the cloud. However, traditional IoT systems rely primarily on cloud data centers for information storage and user access control services. This practice creates the risk of privacy breaches on IoT data sharing platforms, including issues such as data tampering and data breaches. To address these concerns, blockchain technology, with its inherent properties such as tamper resistance and decentralization, has emerged as a promising solution that enables trusted sharing of IoT data. Still, there are challenges to implementing encrypted data search in this context. This paper proposes a novel searchable attribute-based cryptographic access control mechanism that facilitates trusted cloud data sharing. Users can use keywords to efficiently search for specific data and decrypt content keys when their attributes satisfy the access policies. In this way, cloud service providers cannot access any privacy-related information, ensuring secure and trustworthy data sharing and protecting user data privacy. Simulation results show that our approach outperforms existing studies in time overhead: compared to traditional access control schemes, it reduces data encryption time by 33%, decryption time by 5%, and search time by 75%.
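The keyword-search step can be illustrated with a toy symmetric construction: records are tagged with keyed hashes of their keywords, so the cloud can match a user's trapdoor without learning the keyword. This sketches only the search mechanics under simplifying assumptions; the key, the index layout, and the scheme itself are invented here and are far simpler than the paper's attribute-based construction.

```python
import hmac, hashlib

# Toy searchable-index sketch: HMAC tags hide keywords from the cloud
# while still allowing equality matching against a user's trapdoor.

SEARCH_KEY = b"shared-secret-from-key-authority"  # hypothetical key

def tag(keyword):
    # Deterministic keyed tag for a keyword; the cloud sees only tags
    return hmac.new(SEARCH_KEY, keyword.encode(), hashlib.sha256).hexdigest()

# Cloud-side index: record id -> set of keyword tags (keywords hidden)
index = {
    "rec1": {tag("temperature"), tag("factory-A")},
    "rec2": {tag("humidity"), tag("factory-B")},
}

def search(trapdoor):
    # The cloud matches the trapdoor against stored tags blindly
    return [rid for rid, tags in index.items() if trapdoor in tags]

# An authorized user derives a trapdoor for "temperature" and queries
print(search(tag("temperature")))  # ['rec1']
```

In the full scheme, decrypting the matched content key additionally requires the user's attributes to satisfy the ciphertext's access policy, which this sketch omits.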
Unsupervised cross-modal hashing has achieved great success in various information retrieval applications owing to its efficient storage usage and fast retrieval speed. Recent studies have primarily focused on training the hash-encoding networks by calculating a sample-based similarity matrix to improve retrieval performance. However, two issues remain: (1) the current sample-based similarity matrix only considers the similarity between image-text pairs, ignoring the different information densities of each modality, which may introduce noise and fail to mine key information for retrieval; (2) most existing unsupervised cross-modal hashing methods only consider alignment between modalities while ignoring consistency within each modality, resulting in semantic conflicts. To tackle these challenges, we propose a novel Deep High-level Concept-mining Jointing Hashing (DHCJH) model for unsupervised cross-modal retrieval. DHCJH captures essential high-level semantic information from the image modality and integrates it into the text modality to improve the accuracy of the guidance information. Additionally, a new hashing loss with a regularization term is introduced to avoid cross-modal semantic collisions and false-positive pairs. To validate the proposed method, extensive comparison experiments on benchmark datasets are conducted. Experimental findings reveal that DHCJH achieves superior performance in both accuracy and efficiency. The code of DHCJH is available on GitHub.
Caching and sharing content files are critical and fundamental for various future vehicular applications. However, satisfying content demands in a timely manner with limited storage is an open issue owing to the high mobility of vehicles and the unpredictable distribution of dynamic requests. To better serve requests from vehicles, a cache-enabled multi-layer architecture, consisting of a Micro Base Station (MBS) and several Small Base Stations (SBSs), is proposed in this paper. Considering that vehicles usually travel through the coverage of multiple SBSs in a short time period, a cooperative caching and sharing strategy is introduced, which can provide comprehensive and stable cache services to vehicles. In addition, since the content popularity profile is unknown, we model the content caching problem from a Multi-Armed Bandit (MAB) perspective to minimize the total delay while gradually estimating the popularity of content files. Reinforcement learning-based algorithms with a novel Q-value updating module are employed to update the cached files on different timescales for the MBS and SBSs, respectively. Simulation results show that the proposed algorithm outperforms benchmark algorithms under static or time-varying content popularity. In high-speed environments, cooperation between SBSs effectively improves the cache hit rate and further improves service performance.
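The bandit view of caching under unknown popularity can be sketched with the classic UCB1 rule: each content file is an arm, "pulling" an arm means caching that file for one period, and the reward is whether a request hit it. The cache size, file count, and popularity model below are demo assumptions, not the paper's Q-value updating module.

```python
import math, random

# UCB1 sketch of popularity estimation for caching: balance exploiting
# files with high estimated hit rates against exploring rarely-cached
# files. All parameters are illustrative.

def ucb_cache(true_popularity, cache_size, periods, seed=0):
    rng = random.Random(seed)
    n = len(true_popularity)
    counts = [0] * n        # how often each file has been cached
    values = [0.0] * n      # running mean observed hit rate
    chosen = []
    for t in range(1, periods + 1):
        # UCB score: estimated popularity plus an exploration bonus
        # that shrinks as a file accumulates observations
        scores = [values[i] + math.sqrt(2 * math.log(t) / counts[i])
                  if counts[i] else float("inf") for i in range(n)]
        chosen = sorted(range(n), key=lambda i: -scores[i])[:cache_size]
        for i in chosen:
            hit = 1.0 if rng.random() < true_popularity[i] else 0.0
            counts[i] += 1
            values[i] += (hit - values[i]) / counts[i]  # running mean
    return chosen, counts

# Files 0 and 1 are truly popular; the cache should favour them.
final, counts = ucb_cache([0.9, 0.8, 0.2, 0.1, 0.05],
                          cache_size=2, periods=2000)
print(final, counts)
```

Running the MBS and SBS learners on different timescales, as the abstract describes, would amount to updating separate instances of such an estimator at different period lengths.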
The importance of Open Source Software (OSS) has increased in recent years. OSS is software that is jointly developed and maintained globally through open collaboration and knowledge sharing. OSS plays an important role, especially in the Information Technology (IT) field, by increasing the efficiency of software development and reducing costs. However, licensing and security issues may arise when using OSS. Some services analyze source code and provide OSS-related data to address these problems, a representative example being Blackduck. Blackduck inspects the entire source code of a project and reports the OSS components it contains along with related data. This leads to problems such as inefficiency due to full inspection of the source code and difficulty in determining the exact location where OSS is identified. This paper proposes a scheme to analyze source code intuitively through conversion to the Graph Modelling Language (GML) to solve these problems. The study explains the process of converting source code to GML and performing OSS inspection on it, with signcryption applied to the GML so that the inspection is both secure and efficient. Afterward, we compare the capacity and accuracy of text-based and GML-based OSS inspection.
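The source-to-GML conversion step can be sketched by parsing code into an abstract syntax tree and emitting its nodes and parent-child edges in GML syntax. The node labelling and the use of Python's `ast` module are illustrative choices for the demo, not the paper's exact conversion rules.

```python
import ast

# Hypothetical sketch: parse source code into an AST and serialize it
# as Graph Modelling Language (GML) nodes and edges.

def source_to_gml(source):
    tree = ast.parse(source)
    nodes, edges = [], []
    ids = {}
    # Assign an integer id to every AST node and emit it as a GML node
    for i, node in enumerate(ast.walk(tree)):
        ids[id(node)] = i
        nodes.append(f'  node [ id {i} label "{type(node).__name__}" ]')
    # Emit one GML edge per parent-child relation in the tree
    for node in ast.walk(tree):
        for child in ast.iter_child_nodes(node):
            edges.append(f"  edge [ source {ids[id(node)]} "
                         f"target {ids[id(child)]} ]")
    return "graph [\n" + "\n".join(nodes + edges) + "\n]"

gml = source_to_gml("def add(a, b):\n    return a + b\n")
print(gml.splitlines()[1])  # first node: the Module root
```

An OSS inspector could then match graphs (or graph fingerprints) instead of raw text, which is what makes per-location identification and partial inspection feasible.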
Large Language Models (LLMs) are complex artificial intelligence systems that can understand, generate, and translate human languages. By analyzing large amounts of textual data, these models learn language patterns to perform tasks such as writing, conversation, and summarization. Agents built on LLMs (LLM agents) further extend these capabilities, allowing them to process user interactions and perform complex operations in diverse task environments. However, while processing and generating massive amounts of data, LLMs and LLM agents pose a risk of sensitive information leakage, potentially threatening data privacy. This paper aims to demonstrate the data privacy issues associated with LLMs and LLM agents to facilitate a comprehensive understanding. Specifically, we conduct an in-depth survey of privacy threats, encompassing passive privacy leakage and active privacy attacks. Subsequently, we introduce the privacy protection mechanisms employed by LLMs and LLM agents and provide a detailed analysis of their effectiveness. Finally, we explore the privacy protection challenges for LLMs and LLM agents and outline potential directions for future developments in this domain.
Edge computing is becoming ever more relevant for offloading compute-heavy tasks in vehicular networks. In this context, the concept of vehicular micro clouds (VMCs) has been proposed to use compute and storage resources on nearby vehicles to complete computational tasks. As many tasks in this domain are time-critical, offloading to the cloud is prohibitive, and task deadlines must be respected. This paper addresses two main challenges. First, we present a task migration algorithm that supports deadlines in vehicular edge computing. The algorithm follows the earliest-deadline-first model in the presence of dynamic processing resources, i.e., vehicles joining and leaving a VMC. Such task offloading is very sensitive to vehicle mobility in a VMC, i.e., the so-called dwell time a vehicle spends in the VMC. Second, we therefore propose a machine learning-based solution for dwell time prediction, using a random forest to estimate how long a vehicle will stay in a VMC. Our approach is evaluated using mobility traces of a simple artificial intersection scenario as well as real urban traffic in the cities of Luxembourg and Nagoya. The proposed approach realizes low-delay, low-failure task migration under dynamic vehicular conditions, advancing the state of the art in vehicular edge computing.
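The interplay of the two ideas, earliest-deadline-first ordering and dwell-time-aware placement, can be sketched as follows. Tasks are served in deadline order, and a task is only placed on a vehicle whose predicted remaining dwell time covers the task's processing time. The dwell-time numbers stand in for the paper's random-forest predictions; the serial execution model and all names are simplifying assumptions.

```python
import heapq

# Simplified deadline-aware placement sketch for a vehicular micro
# cloud: EDF ordering plus a predicted-dwell-time feasibility check.

def schedule(tasks, vehicles, now=0.0):
    # tasks: (deadline, duration, name); vehicles: name -> predicted
    # remaining dwell time in the micro cloud (stand-in for ML output)
    heap = list(tasks)
    heapq.heapify(heap)  # orders by earliest deadline first
    placed, failed = [], []
    for deadline, duration, name in [heapq.heappop(heap)
                                     for _ in range(len(heap))]:
        # pick any vehicle that will stay long enough to finish the task
        host = next((v for v, dwell in vehicles.items()
                     if dwell >= duration), None)
        if host is None or now + duration > deadline:
            failed.append(name)   # would miss deadline or host leaves
        else:
            vehicles[host] -= duration  # host is busy for this long
            placed.append((name, host))
            now += duration
    return placed, failed

tasks = [(9.0, 3.0, "detect"), (4.0, 2.0, "map"), (20.0, 8.0, "train")]
vehicles = {"car_a": 6.0, "car_b": 9.0}
placed, failed = schedule(tasks, vehicles)
print(placed, failed)
```

A real system would run tasks concurrently and re-plan as vehicles join and leave; this sketch only shows why accurate dwell-time prediction matters, since an over-estimate would place "train" on a vehicle that departs mid-computation.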