Graph neural networks (GNNs) have emerged as a powerful tool for mining and learning from graph-structured data, with applications spanning numerous domains. However, most research focuses on static graphs, neglecting the dynamic nature of real-world networks, whose topologies and attributes evolve over time. By integrating sequence modeling modules into traditional GNN architectures, dynamic GNNs aim to bridge this gap, capturing the temporal dependencies inherent in dynamic graphs for a more faithful depiction of complex networks. This paper provides a comprehensive review of the fundamental concepts, key techniques, and state-of-the-art dynamic GNN models. We present the mainstream dynamic GNN models in detail and categorize them according to how temporal information is incorporated. We also discuss large-scale dynamic GNNs and pre-training techniques. Although dynamic GNNs have shown superior performance, challenges remain in scalability, in handling heterogeneous information, and in the lack of diverse graph datasets. The paper also discusses possible future directions, such as adaptive and memory-enhanced models, inductive learning, and theoretical analysis.
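For readers unfamiliar with the basic design, the following is a minimal sketch of the discrete-time "GNN + sequence module" pattern referred to above; the class name, the single linear message-passing step, and the GRU are illustrative assumptions, not a specific model from the survey.

```python
import torch
import torch.nn as nn

class SnapshotDynamicGNN(nn.Module):
    """Discrete-time dynamic GNN sketch: a shared graph layer encodes each
    snapshot, and a GRU captures temporal dependencies across snapshots."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.graph_linear = nn.Linear(in_dim, hid_dim)            # shared spatial encoder
        self.temporal_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)

    def forward(self, adj_seq, feat_seq):
        # adj_seq: (T, N, N) normalized adjacency per snapshot
        # feat_seq: (T, N, F) node features per snapshot
        spatial = []
        for adj, x in zip(adj_seq, feat_seq):
            h = torch.relu(adj @ self.graph_linear(x))            # one message-passing step
            spatial.append(h)
        h_seq = torch.stack(spatial, dim=1)                       # (N, T, hid_dim)
        out, _ = self.temporal_rnn(h_seq)                         # temporal modeling per node
        return out[:, -1]                                         # latest node embeddings
```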
This paper presents an overview of deep learning (DL)-based algorithms designed for solving the traveling salesman problem (TSP), grouping them into four categories: end-to-end construction algorithms, end-to-end improvement algorithms, direct hybrid algorithms, and large language model (LLM)-based hybrid algorithms. We introduce the principles and methodologies of these algorithms, outlining their strengths and limitations through experimental comparisons. End-to-end construction algorithms employ neural networks to generate solutions from scratch, demonstrating rapid solving speed but often yielding subpar solutions. Conversely, end-to-end improvement algorithms iteratively refine initial solutions, achieving higher-quality outcomes but necessitating longer computation times. Direct hybrid algorithms directly integrate deep learning with heuristic algorithms, showcasing robust solving performance and generalization capability. LLM-based hybrid algorithms leverage LLMs to autonomously generate and refine heuristics, showing promising performance despite being in early developmental stages. In the future, further integration of deep learning techniques, particularly LLMs, with heuristic algorithms and advancements in interpretability and generalization will be pivotal trends in TSP algorithm design. These endeavors aim to tackle larger and more complex real-world instances while enhancing algorithm reliability and practicality. This paper offers insights into the evolving landscape of DL-based TSP solving algorithms and provides a perspective for future research directions.
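As an illustration of the end-to-end construction paradigm, the sketch below builds a tour autoregressively; `score_fn` stands in for a learned neural policy, and the nearest-neighbour placeholder is purely for demonstration, not any algorithm reviewed in the paper.

```python
import numpy as np

def construct_tour(coords, score_fn):
    """Construction-style decoding sketch: build a TSP tour city by city,
    greedily picking the candidate ranked highest by a (learned) policy.
    score_fn(current, candidates, coords) stands in for a neural model."""
    n = len(coords)
    unvisited = set(range(1, n))
    tour = [0]                                   # start arbitrarily at city 0
    while unvisited:
        cands = list(unvisited)
        scores = score_fn(tour[-1], cands, coords)
        nxt = cands[int(np.argmax(scores))]      # greedy decoding (no sampling or beam search)
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Untrained placeholder policy: negative distance, i.e. nearest-neighbour behaviour.
def score_fn(cur, cands, coords):
    return [-np.linalg.norm(coords[cur] - coords[c]) for c in cands]

coords = np.random.rand(20, 2)
print(construct_tour(coords, score_fn))
```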
In recent years, ROS (Robot Operating System) packages have become increasingly popular as a type of software artifact that can be effectively reused in robotic software development. However, finding suitable ROS packages that closely match the software's functional requirements from the vast number of available packages is a nontrivial task with current search methods. Traditional search methods for ROS packages often involve entering keywords related to robotic tasks into general-purpose search engines (e.g., Google) or code hosting platforms (e.g., GitHub) to obtain approximate results covering all potentially suitable ROS packages. However, the accuracy of these search methods remains relatively low because the task-related keywords may not precisely match the functionalities offered by the ROS packages. To improve the search accuracy of ROS packages, this paper presents a novel semantic-based search approach that relies on a semantic-level ROS Package Knowledge Graph (RPKG) to automatically retrieve the most suitable ROS packages. First, to construct the RPKG, we employ multi-dimensional feature extraction techniques to extract semantic concepts, including code file name, category, hardware device, characteristics, and function, from a dataset of ROS package text descriptions. This process yields a substantial number of entities (32,294) and relationships (54,698). We then create a robot domain-specific small corpus and fine-tune a pre-trained language model, BERT-ROS, to generate embeddings that effectively represent the semantics of the extracted features. These embeddings play a crucial role in facilitating semantic-level understanding and comparison during the ROS package search process within the RPKG. Second, we introduce a novel semantic matching-based search algorithm that incorporates the weighted similarities of multiple features from user search queries and retrieves more accurate ROS packages than traditional keyword search methods. To validate the improved accuracy of ROS package searching, we conduct comparative case studies between our semantic-based search approach and four baseline search approaches: ROS Index, GitHub, Google, and ChatGPT. The experimental results demonstrate that our approach achieves higher accuracy in ROS package searching, outperforming the other approaches by at least 21% across five levels: top-1, top-5, top-10, top-15, and top-20.
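To make the matching step concrete, here is a hedged sketch of weighted multi-feature semantic matching over precomputed embeddings; the feature names and weights below are illustrative assumptions and do not reproduce the paper's exact RPKG configuration or the BERT-ROS model.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rank_packages(query_emb, packages, weights):
    """Weighted multi-feature semantic matching sketch.
    query_emb and each package entry map feature name -> embedding vector
    (e.g. produced offline by a fine-tuned BERT encoder); weights maps
    feature name -> importance."""
    scored = []
    for name, pkg_emb in packages.items():
        score = sum(w * cosine(query_emb[f], pkg_emb[f])
                    for f, w in weights.items()
                    if f in pkg_emb and f in query_emb)
        scored.append((score, name))
    return [name for _, name in sorted(scored, reverse=True)]

# Illustrative features and weights (not the paper's actual configuration).
weights = {"function": 0.5, "hardware_device": 0.2, "category": 0.2, "characteristics": 0.1}
```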
Evidential Document-level Event Factuality Identification (EvDEFI) aims to predict the factual nature of an event and to precisely extract evidential sentences from the document. Previous work was usually limited to predicting the factuality of an event with respect to a document and neglected the interpretability of the task. As a more fine-grained and interpretable task, EvDEFI is still at an early stage. The only existing model uses shallow similarity calculation to extract evidence and employs simple attention without lexical features, which is quite coarse-grained. Therefore, we propose a novel EvDEFI model named Heterogeneous and Extractive Graph Attention Network (HEGAT), which updates the representations of events and sentences through multi-view graph attention based on tokens and various lexical features at both local and global levels. Experiments on the EB-DEF-v2 corpus demonstrate that the HEGAT model is superior to several competitive baselines and validate the interpretability of the task.
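As background for the graph attention machinery, the following is a minimal single-head graph attention layer in the spirit of GAT; it is a generic building block, not HEGAT's heterogeneous multi-view architecture, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphAttention(nn.Module):
    """Single-head graph attention sketch: each node aggregates its neighbours'
    representations with learned attention weights. adj is an (N, N) 0/1 matrix
    and is expected to include self-loops so every row has at least one edge."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.proj(x)                                       # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),    # (N, N, 2*out_dim)
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = F.leaky_relu(self.attn(pairs)).squeeze(-1)    # raw pairwise scores
        scores = scores.masked_fill(adj == 0, float("-inf"))   # restrict to graph edges
        alpha = torch.softmax(scores, dim=-1)                  # attention weights per node
        return F.elu(alpha @ h)                                # weighted neighbour aggregation
```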
Synonym discovery is important in a wide variety of concept-related tasks, such as entity/concept mining and industrial knowledge graph (KG) construction. It aims to determine whether two terms refer to the same concept in semantics. Existing methods rely on contexts or KGs; however, they are often impractical in cases where contexts or KGs are not available. Therefore, this paper proposes a context-free prompt-learning-based synonym discovery method called ProSyno, which takes the world's largest freely available dictionary, Wiktionary, as its semantic source. Based on a pre-trained language model (PLM), we employ a prompt learning method to generalize to other datasets without any fine-tuning. Thus, our model is better suited to context-free settings and can be easily transferred to other fields. Experimental results demonstrate its superiority compared with state-of-the-art methods.
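To illustrate the context-free, fine-tuning-free prompting idea, here is a hedged sketch of a cloze-style prompt scored by an off-the-shelf masked LM; the template, the label words "same"/"different", and the use of bert-base-uncased are assumptions made for illustration and are not ProSyno's actual design.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def synonym_score(term_a, gloss_a, term_b, gloss_b):
    """Cloze-prompt sketch: pair each term with its dictionary gloss and let the
    masked LM decide whether the two terms refer to the same concept."""
    prompt = (f"{term_a} means {gloss_a}. {term_b} means {gloss_b}. "
              f"The two terms refer to the {tokenizer.mask_token} concept.")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    probs = logits[0, mask_pos].softmax(-1)
    same = probs[tokenizer.convert_tokens_to_ids("same")]
    diff = probs[tokenizer.convert_tokens_to_ids("different")]
    return float(same / (same + diff))   # higher score => more likely synonyms
```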
Submodular maximization is a significant area of interest in combinatorial optimization, with a wide range of real-world applications. In recent years, streaming algorithms for submodular maximization have gained attention, allowing real-time processing of large data sets by examining each piece of data only once. However, most of the current state-of-the-art algorithms are only applicable to monotone submodular maximization, and there are still significant gaps between the approximation ratios achievable for monotone and non-monotone objective functions.
In this paper, we propose a streaming algorithm framework for non-monotone submodular maximization and use this framework to design deterministic streaming algorithms for the d-knapsack constraint and the knapsack constraint. Our 1-pass streaming algorithm for the d-knapsack constraint has a
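For intuition about the single-pass setting, the sketch below shows a generic threshold-based streaming rule for submodular maximization under a knapsack constraint; it is not the paper's algorithm and carries no approximation guarantee, and the threshold parameter tau is assumed to be given.

```python
def threshold_stream(stream, f, cost, budget, tau):
    """Generic single-pass thresholding sketch for submodular maximization
    under a knapsack constraint (illustrative only). Keep an element if its
    marginal gain per unit cost clears the threshold tau and it still fits
    in the budget."""
    S, spent = [], 0.0
    for e in stream:                       # each element is examined exactly once
        c = cost(e)
        gain = f(S + [e]) - f(S)           # marginal gain w.r.t. the current solution
        if spent + c <= budget and gain >= tau * c:
            S.append(e)
            spent += c
    return S
```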
Clique relaxation problems are important extensions of the maximum clique problem with extensive real-world applications. Although many studies focus on designing local search algorithms for these problems, almost all state-of-the-art local search algorithms adopt a common, general-purpose initialization method. This paper develops a general group-driven initialization method for clique relaxation problems. The proposed method divides vertices into subgroups in two ways, using useful information from the search procedure and structural information from a given instance, and then constructs a good initial solution based on the generated group information. We apply the proposed initialization method to two clique relaxation problems. Experimental results demonstrate that it clearly improves the performance of state-of-the-art algorithms for these problems.
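As a rough illustration of grouping-based construction, the sketch below buckets vertices by degree and grows an initial solution group by group; the degree-based grouping criterion and the clique-style feasibility check are simplifying assumptions, not the paper's actual method, which also exploits search-history information and the relaxation's own feasibility rules.

```python
def group_driven_init(adj, num_groups=3):
    """Illustrative grouping-based construction of an initial solution.
    adj: dict mapping each vertex to the set of its neighbours."""
    degree = {v: len(adj[v]) for v in adj}
    order = sorted(adj, key=degree.get, reverse=True)
    size = max(1, len(order) // num_groups)
    groups = [order[i:i + size] for i in range(0, len(order), size)]

    solution = []
    for group in groups:                              # denser groups are tried first
        for v in group:
            # Simple feasibility rule: v must be adjacent to every chosen vertex.
            # A clique relaxation (e.g. k-plex) would use its own, looser check.
            if all(u in adj[v] for u in solution):
                solution.append(v)
    return solution
```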
EOSIO, a representative of blockchain 3.0 platforms, follows directly in the footsteps of Bitcoin and Ethereum. It raised the largest initial coin offering ever, and its market capitalization has reached up to $14.3 billion. EOSIO innovatively adopts many new features, such as the delegated proof-of-stake consensus algorithm and updatable smart contracts. Not only do these features lead to a prosperous decentralized application (DApp) ecosystem, but they also inevitably introduce loopholes. For example, EOSBet, a famous gambling DApp, was attacked twice within a single month and lost more than $1 million. To the best of our knowledge, little work has surveyed EOSIO from a security researcher's perspective. To fill this gap, we first abstract the complicated EOSIO ecosystem into components with hierarchical relationships, upon which we delve into the root causes of all existing vulnerabilities. We also systematically study possible attacks against and mitigations for these vulnerabilities, and summarize several best practices for developers, the EOSIO official team, and security researchers to shed light on future directions.