This paper reviews recent studies in understanding neural-network representations and in learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability has always been the Achilles' heel of deep neural networks. At present, deep neural networks obtain high discrimination power at the cost of low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from a few annotations, learning via human–computer communications at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs), and revisit the visualization of CNN representations, methods of diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.
Road traffic flow prediction can not only provide travelers with real-time, effective information, but also help them choose optimal routes, reduce travel time, enable road traffic route guidance, and alleviate traffic congestion. This paper proposes a road traffic flow prediction method based on the ARIMA model and the Kalman filtering algorithm. First, a time-series ARIMA model is established from historical road traffic data. Second, the ARIMA model is combined with Kalman filtering to construct a road traffic prediction algorithm, yielding the measurement and update equations of the Kalman filter. Then, the parameters of the algorithm are set based on historical road traffic data. Finally, the proposed method is analyzed using four road sections in Beijing as case studies. Experimental results show that the real-time road traffic state prediction method based on the ARIMA model and Kalman filtering is feasible and can achieve high accuracy.
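The ARIMA-plus-Kalman combination described above can be illustrated with a minimal sketch: the fitted AR dynamics supply the state transition, and the Kalman measurement/update equations correct each one-step prediction. The AR(1) coefficient and noise variances below are hypothetical stand-ins for values that would be estimated from historical traffic data.

```python
import numpy as np

def kalman_ar1(y, phi, q, r):
    """One-dimensional Kalman filter for an AR(1) state x_t = phi*x_{t-1} + w_t,
    observed as y_t = x_t + v_t (a minimal stand-in for the ARIMA state model).
    q and r are the process and measurement noise variances."""
    x, p = y[0], 1.0              # initial state estimate and covariance
    preds = []
    for obs in y[1:]:
        # predict (time update)
        x_pred = phi * x
        p_pred = phi * p * phi + q
        preds.append(x_pred)
        # correct (measurement update)
        k = p_pred / (p_pred + r)             # Kalman gain
        x = x_pred + k * (obs - x_pred)
        p = (1 - k) * p_pred
    return np.array(preds)
```

Each element of the returned array is the one-step-ahead traffic prediction made before the corresponding measurement arrives.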
The Internet based cyber-physical world has profoundly changed the information environment for the development of artificial intelligence (AI), bringing a new wave of AI research and promoting it into the new era of AI 2.0. As one of the most prominent characteristics of research in the AI 2.0 era, crowd intelligence has attracted much attention from both industry and research communities. Specifically, crowd intelligence provides a novel problem-solving paradigm through gathering the intelligence of crowds to address challenges. In particular, due to the rapid development of the sharing economy, crowd intelligence not only becomes a new approach to solving scientific challenges, but has also been integrated into all kinds of application scenarios in daily life, e.g., online-to-offline (O2O) applications, real-time traffic monitoring, and logistics management. In this paper, we survey existing studies of crowd intelligence. First, we describe the concept of crowd intelligence, and explain its relationship to existing related concepts, e.g., crowdsourcing and human computation. Then, we introduce four categories of representative crowd intelligence platforms. We summarize three core research problems and the state-of-the-art techniques of crowd intelligence. Finally, we discuss promising future research directions of crowd intelligence.
In this paper, we review recent emerging theoretical and technological advances of artificial intelligence (AI) in big data settings. We conclude that integrating data-driven machine learning with human knowledge (common priors or implicit intuitions) can effectively lead to explainable, robust, and general AI, as follows: from shallow computation to deep neural reasoning; from merely data-driven models to models that combine data-driven learning with structured logic rules; from task-oriented (domain-specific) intelligence (adherence to explicit instructions) to artificial general intelligence in a general context (the capability to learn from experience). Motivated by such endeavors, the next generation of AI, namely AI 2.0, is positioned to reinvent computing itself, to transform big data into structured knowledge, and to enable better decision-making for our society.
Quantum-dot cellular automata (QCA) is an emerging area of research in reversible computing. It can be used to design nanoscale circuits. In nanocommunication, the detection and correction of errors in a received message is a major factor. Besides, device density and power dissipation are the key issues in nanocommunication architectures. For the first time, QCA-based designs of the reversible low-power odd parity generator and odd parity checker using the Feynman gate have been achieved in this study. Using the proposed parity generator and parity checker circuits, a nanocommunication architecture is proposed. The detection of errors in the received message during transmission is also explored. The proposed QCA Feynman gate outshines the existing ones in terms of area, cell count, and delay. The quantum costs of the proposed conventional reversible circuits and their QCA layouts are calculated and compared, which establishes that the proposed QCA circuits have very low quantum cost compared to conventional designs. The energy dissipation of the layouts is estimated, which confirms the possibility of QCA nano-devices serving as an alternative platform for the implementation of reversible circuits. The stability of the proposed circuits under thermal randomness is analyzed, showing the operational efficiency of the circuits. The simulation results of the proposed designs are tested against theoretical values, showing the accuracy of the circuits. The proposed circuits can be used to design more complex low-power nanoscale lossless nanocommunication architectures such as nano-transmitters and nano-receivers.
The explosive growth of malware variants poses a major threat to information security. Traditional anti-virus systems based on signatures fail to classify unknown malware into their corresponding families and to detect new kinds of malware programs. Therefore, we propose a machine learning based malware analysis system, which is composed of three modules: data processing, decision making, and new malware detection. The data processing module deals with gray-scale images, Opcode n-grams, and import functions, which are employed to extract the features of the malware. The decision-making module uses the features to classify the malware and to identify suspicious malware. Finally, the detection module uses the shared nearest neighbor (SNN) clustering algorithm to discover new malware families. Our approach is evaluated on more than 20 000 malware instances, which were collected by Kingsoft, ESET NOD32, and Anubis. The results show that our system can effectively classify unknown malware with a best accuracy of 98.9%, and successfully detect 86.7% of the new malware.
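The shared nearest neighbor (SNN) similarity at the core of the detection module can be sketched as follows: the similarity of two samples is the number of common members of their k-nearest-neighbor lists, which a clustering step would then threshold to group new malware families. The toy feature vectors and the value of k are illustrative only, not the paper's actual malware features.

```python
import numpy as np

def snn_shared_neighbors(X, k=3):
    """Shared nearest neighbor (SNN) similarity: for each pair of samples,
    count how many of their k nearest neighbors they have in common."""
    n = len(X)
    # pairwise Euclidean distance matrix
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    # k nearest neighbors of each point (index 0 in argsort is the point itself)
    knn = [set(np.argsort(d[i])[1:k + 1]) for i in range(n)]
    sim = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            sim[i, j] = sim[j, i] = len(knn[i] & knn[j])
    return sim
```

Samples from the same family share many neighbors and therefore receive high SNN similarity, while samples from different families share few or none.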
Since the landmark work of R. E. Kalman in the 1960s, considerable efforts have been devoted to time series state space models for a large variety of dynamic estimation problems. In particular, parametric filters that seek analytical estimates based on a closed-form Markov–Bayes recursion, e.g., recursion from a Gaussian or Gaussian mixture (GM) prior to a Gaussian/GM posterior (termed ‘Gaussian conjugacy’ in this paper), form the backbone of general time series filter design. Due to challenges arising from nonlinearity, multimodality (including target maneuver), intractable uncertainties (such as unknown inputs and/or non-Gaussian noises) and constraints (including circular quantities), etc., new theories, algorithms, and technologies have been developed continuously to maintain such a conjugacy, or to approximate it as closely as possible. These efforts have contributed in large part to the development of time series parametric filters over the last six decades. In this paper, we review the state of the art in distinctive categories and highlight some insights that may otherwise be easily overlooked. In particular, specific attention is paid to nonlinear systems with an informative observation, multimodal systems including Gaussian mixture posterior and maneuvers, and intractable unknown inputs and constraints, to fill some gaps in existing reviews and surveys. In addition, we provide some new thoughts on alternatives to the first-order Markov transition model and on filter evaluation with regard to computing complexity.
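For the linear-Gaussian case, the Gaussian conjugacy the review refers to is exactly the closed-form Kalman recursion, sketched here in standard notation ($F_k$, $H_k$, $Q_k$, $R_k$ denote the transition, observation, and noise-covariance matrices, and $z_k$ the measurement):

```latex
\begin{aligned}
&\text{Prediction:} && \hat{x}_{k|k-1} = F_k \hat{x}_{k-1|k-1}, \quad
P_{k|k-1} = F_k P_{k-1|k-1} F_k^{\top} + Q_k,\\
&\text{Gain:} && K_k = P_{k|k-1} H_k^{\top}\left(H_k P_{k|k-1} H_k^{\top} + R_k\right)^{-1},\\
&\text{Update:} && \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - H_k \hat{x}_{k|k-1}\right), \quad
P_{k|k} = \left(I - K_k H_k\right) P_{k|k-1}.
\end{aligned}
```

A Gaussian prior processed through these equations yields a Gaussian posterior, which is the conjugacy that the nonlinear and non-Gaussian extensions surveyed above try to maintain or approximate.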
Deep neural networks have evolved remarkably over the past few years and they are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks continue to increase. This poses a significant challenge to the deployment of such networks, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implementation of deep neural networks, a batch of accelerators based on a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC) have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both algorithm and hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher–student networks, compact network design, and hardware accelerators. Finally, we introduce and discuss a few possible future directions.
Resampling is a critical procedure that is of both theoretical and practical significance for efficient implementation of the particle filter. To gain insight into the resampling process and the filter, this paper contributes in three further respects as a sequel to the tutorial (Li et al., 2015). First, identical distribution (ID) is established as a general principle for resampling design, which requires the distribution of particles before and after resampling to be statistically identical. Three consistent metrics, including the (symmetrical) Kullback-Leibler divergence, the Kolmogorov-Smirnov statistic, and the sampling variance, are introduced for assessment of the ID attribute of resampling, and a corresponding, qualitative ID analysis of representative resampling methods is given. Second, a novel resampling scheme that obtains the optimal ID attribute in the sense of minimum sampling variance is proposed. Third, more than a dozen typical resampling methods are compared via simulations in terms of sample size variation, sampling variance, computing speed, and estimation accuracy. Together, these form a more comprehensive understanding of the algorithm, providing solid guidelines for either selection of existing resampling methods or new implementations.
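The sampling-variance comparison at the heart of the ID principle can be illustrated with two standard schemes: multinomial resampling draws offspring i.i.d. from the weights, while systematic resampling uses a single uniform draw and an equally spaced comb over the CDF, which greatly reduces the sampling variance of the offspring counts. This is a generic sketch of the two classical schemes, not the optimal scheme proposed in the paper.

```python
import numpy as np

def multinomial_resample(weights, rng):
    """Draw N offspring indices i.i.d. from the weight distribution."""
    n = len(weights)
    return rng.choice(n, size=n, p=weights)

def systematic_resample(weights, rng):
    """Systematic resampling: one uniform draw, then a fixed comb of
    N equally spaced positions over the CDF (low sampling variance)."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)
```

With uniform weights, systematic resampling reproduces every particle exactly once (zero sampling variance), whereas multinomial resampling generally duplicates some particles and drops others.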
Intelligent unmanned autonomous systems are some of the most important applications of artificial intelligence (AI). The development of such systems can significantly promote innovation in AI technologies. This paper introduces the trends in the development of intelligent unmanned autonomous systems by summarizing the main achievements in each technological platform. Furthermore, we classify the relevant technologies into seven areas, including AI technologies, unmanned vehicles, unmanned aerial vehicles, service robots, space robots, marine robots, and unmanned workshops/intelligent plants. Current trends and developments in each area are introduced.
Global demand for power has significantly increased, but power generation and transmission capacities have not increased proportionally with this demand. As a result, power consumers suffer from various problems, such as voltage and frequency instability and power quality issues. To overcome these problems, the capacity for available power transfer of a transmission network should be enhanced. Researchers worldwide have addressed this issue by using flexible AC transmission system (FACTS) devices. We have conducted a comprehensive review of how FACTS controllers are used to enhance the available transfer capability (ATC) and power transfer capability (PTC) of power system networks. This review includes a discussion of the classification of different FACTS devices according to different factors. The popularity and applications of these devices are discussed together with relevant statistics. The operating principles of six major FACTS devices and their application in increasing ATC and PTC are also presented. Finally, we evaluate the performance of FACTS devices in ATC and PTC improvement with respect to different control algorithms.
Most state-of-the-art robotic cars’ perception systems are quite different from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, while the machine perception of traffic environments needs to fuse information from several different kinds of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results for its autonomous driving, while an experienced human driver works well with dynamic traffic environments, in which machine perception could easily produce noisy perception results. In this paper, we propose a vision-centered multi-sensor fusing framework for a traffic environment perception approach to autonomous driving, which fuses camera, LIDAR, and GIS information consistently via both geometrical and semantic constraints for efficient self-localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully integrated with the framework and address multiple levels of machine vision techniques, from collecting training data, efficiently processing sensor data, and extracting low-level features, to higher-level object and environment mapping. The proposed framework has been tested extensively in actual urban scenes with our self-developed robotic cars for eight years. The empirical results validate its robustness and efficiency.
Nonlinear oscillators and circuits can be coupled to reach synchronization and consensus. The occurrence of complete synchronization means that all oscillators maintain the same amplitude and phase, and it is often detected between identical oscillators. Phase synchronization, in contrast, means that the coupled oscillators just keep pace in oscillation even though the amplitude of each node may differ. For dimensionless dynamical systems and oscillators, the synchronization approach depends a great deal on the selection of the coupling variable and type. For nonlinear circuits, a resistor is often used to bridge the connection between two or more circuits, so voltage coupling can be activated to generate feedback on the coupled circuits. In this paper, capacitor coupling is applied between two Pikovsky-Rabinovich (PR) circuits, and electric field coupling explains the potential mechanism for differential coupling. Then symmetric coupling and cross coupling are activated to detect synchronization stability, separately. It is found that resistor-based voltage coupling via a single variable can stabilize the synchronization, and the energy flow of the controller decreases when synchronization is realized. Furthermore, by applying appropriate intensity for the coupling capacitor, synchronization is also reached, and the energy flow across the coupling capacitor helps regulate the dynamical behaviors of the coupled circuits, supported by a continuous energy exchange between the capacitors and the inductor. It is also confirmed that the realization of synchronization depends on the selection of the coupling channel. The approach and stability of complete synchronization depend on symmetric coupling, which is activated between the same variables. Cross coupling between different variables triggers only phase synchronization.
The capacitor coupling can avoid energy consumption for the case with resistor coupling, and it can also enhance the energy exchange between two coupled circuits.
By adopting the distributed problem-solving strategy, swarm intelligence algorithms have been successfully applied to many optimization problems that are difficult to deal with using traditional methods. At present, there are many well-implemented algorithms, such as particle swarm optimization, the genetic algorithm, the artificial bee colony algorithm, and ant colony optimization. These algorithms have already shown favorable performance. However, as the target problems become increasingly complex, it is gradually becoming more difficult for these algorithms to meet human demands in terms of accuracy and time. Designing a new algorithm to seek better solutions for optimization problems is becoming increasingly essential. Dolphins have many noteworthy biological characteristics and living habits such as echolocation, information exchange, cooperation, and division of labor. Combining these biological characteristics and living habits with swarm intelligence and bringing them into optimization problems, we propose a brand new algorithm named the ‘dolphin swarm algorithm’ in this paper. We also provide the definitions of the algorithm and specific descriptions of the four pivotal phases in the algorithm, which are the search phase, call phase, reception phase, and predation phase. Ten benchmark functions with different properties are tested using the dolphin swarm algorithm, particle swarm optimization, the genetic algorithm, and the artificial bee colony algorithm. The convergence rates and benchmark function results of these four algorithms are compared to verify the effectiveness of the dolphin swarm algorithm. The results show that in most cases, the dolphin swarm algorithm performs better. The dolphin swarm algorithm possesses some notable features, such as first-slow-then-fast convergence, periodic convergence, local-optimum-free behavior, and no specific demands on benchmark functions.
Moreover, the dolphin swarm algorithm is particularly appropriate for optimization problems that involve more calls of fitness functions and fewer individuals.
Conversational systems have come a long way since their inception in the 1960s. After decades of research and development, we have seen progress from Eliza and Parry in the 1960s and 1970s, to task-completion systems as in the Defense Advanced Research Projects Agency (DARPA) communicator program in the 2000s, to intelligent personal assistants such as Siri in the 2010s, to today’s social chatbots like XiaoIce. Social chatbots’ appeal lies not only in their ability to respond to users’ diverse requests, but also in being able to establish an emotional connection with users. The latter is done by satisfying users’ need for communication, affection, and social belonging. To further the advancement and adoption of social chatbots, their design must focus on user engagement and take both intellectual quotient (IQ) and emotional quotient (EQ) into account. Users should want to engage with a social chatbot; as such, we define the success metric for social chatbots as conversation-turns per session (CPS). Using XiaoIce as an illustrative example, we discuss key technologies in building social chatbots from core chat to visual awareness to skills. We also show how XiaoIce can dynamically recognize emotion and engage the user throughout long conversations with appropriate interpersonal responses. As we become the first generation of humans ever living with artificial intelligence (AI), we have a responsibility to design social chatbots to be both useful and empathetic, so they will become ubiquitous and help society as a whole.
Mobile device manufacturers are rapidly producing miscellaneous Android versions worldwide. Simultaneously, cyber criminals are executing malicious actions, such as tracking user activities, stealing personal data, and committing bank fraud. These criminals gain numerous benefits as too many people use Android for their daily routines, including important communications. With this in mind, security practitioners have conducted static and dynamic analyses to identify malware. This study used static analysis because of its overall code coverage, low resource consumption, and rapid processing. However, static analysis requires a minimum number of features to efficiently classify malware. Therefore, we used genetic search (GS), which is a search based on a genetic algorithm (GA), to select the features among 106 strings. To evaluate the best features determined by GS, we used five machine learning classifiers, namely, Naïve Bayes (NB), functional trees (FT), J48, random forest (RF), and multilayer perceptron (MLP). Among these classifiers, FT gave the highest accuracy (95%) and true positive rate (TPR) (96.7%) with the use of only six features.
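A minimal sketch of the genetic-search idea for feature selection follows: candidate feature subsets are encoded as bitmask chromosomes and evolved by selection, crossover, and mutation toward higher fitness. In the actual study, fitness would come from classifier accuracy over the 106 string features; here the fitness function, population size, and mutation rate are illustrative stand-ins.

```python
import random

def genetic_feature_search(n_features, fitness, pop=20, gens=30, seed=0):
    """Minimal genetic search over feature subsets (bitmask chromosomes):
    tournament selection, single-point crossover, bit-flip mutation."""
    rnd = random.Random(seed)
    population = [[rnd.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop)]

    def pick():  # tournament selection of size 2
        a, b = rnd.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop:
            p1, p2 = pick(), pick()
            cut = rnd.randrange(1, n_features)
            child = p1[:cut] + p2[cut:]          # single-point crossover
            if rnd.random() < 0.1:               # bit-flip mutation
                i = rnd.randrange(n_features)
                child[i] ^= 1
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)
```

In practice the returned mask would select the feature columns fed to the NB, FT, J48, RF, and MLP classifiers for evaluation.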
Emotion recognition via facial expressions (ERFE) has attracted a great deal of interest with recent advances in artificial intelligence and pattern recognition. Most studies are based on 2D images, and their performance is usually computationally expensive. In this paper, we propose a real-time emotion recognition approach based on both 2D and 3D facial expression features captured by Kinect sensors. To capture the deformation of the 3D mesh during facial expression, we combine the features of animation units (AUs) and feature point positions (FPPs) tracked by Kinect. A fusion algorithm based on improved emotional profiles (IEPs) and maximum confidence is proposed to recognize emotions with these real-time facial expression features. Experiments on both an emotion dataset and a real-time video show the superior performance of our method.
Deep reinforcement learning (RL) has become one of the most popular topics in artificial intelligence research. It has been widely used in various fields, such as end-to-end control, robotic control, recommendation systems, and natural language dialogue systems. In this survey, we systematically categorize the deep RL algorithms and applications, and provide a detailed review of existing deep RL algorithms by dividing them into model-based methods, model-free methods, and advanced RL methods. We thoroughly analyze the advances, including exploration, inverse RL, and transfer RL. Finally, we outline the current representative applications, and analyze four open problems for future research.
Cross-media analysis and reasoning is an active research area in computer science, and a promising direction for artificial intelligence. However, to the best of our knowledge, no existing work has summarized the state-of-the-art methods for cross-media analysis and reasoning or presented advances, challenges, and future directions for the field. To address these issues, we provide an overview as follows: (1) theory and model for cross-media uniform representation; (2) cross-media correlation understanding and deep mining; (3) cross-media knowledge graph construction and learning methodologies; (4) cross-media knowledge evolution and reasoning; (5) cross-media description and generation; (6) cross-media intelligent engines; and (7) cross-media intelligent applications. By presenting approaches, advances, and future directions in cross-media analysis and reasoning, our goal is not only to draw more attention to the state-of-the-art advances in the field, but also to provide technical insights by discussing the challenges and research directions in these areas.
With the development of sensor fusion technologies, there has been much research on intelligent ground vehicles, for which obstacle detection is one of the key aspects of driving. Obstacle detection is a complicated task that involves the diversity of obstacles, sensor characteristics, and environmental conditions. While on-road driver assistance systems and autonomous driving systems have been well researched, the methods developed for the structured roads of city scenes may fail in an off-road environment because of its uncertainty and diversity. A single type of sensor can hardly satisfy the needs of obstacle detection because of its limitations in sensing range, signal features, and working conditions, which motivates researchers and engineers to develop multi-sensor fusion and system integration methodologies. This survey aims at summarizing the main considerations for the onboard multi-sensor configuration of intelligent ground vehicles in off-road environments and providing users with a guideline for selecting sensors based on their performance requirements and application environments. State-of-the-art multi-sensor fusion methods and system prototypes are reviewed and associated with the corresponding heterogeneous sensor configurations. Finally, emerging technologies and challenges are discussed for future study.
We investigate a multifunctional n-step honeycomb network which has not been studied before. By adjusting the circuit parameters, such a network can be transformed into several different networks with a variety of functions, such as a regular ladder network and a triangular network. We derive two new formulae for equivalent resistance in the resistor network and equivalent impedance in the LC network, which are in the fractional-order domain. First, we simplify the complex network into a simple equivalent model. Second, using Kirchhoff’s laws, we establish a fractional difference equation. Third, we construct an equivalent transformation method to obtain a general solution for the nonlinear difference equation. In practical applications, several interesting special results are obtained. In particular, an n-step impedance LC network is discussed, and many new characteristics of complex impedance are found.
Behavior-based malware analysis is an important technique for automatically analyzing and detecting malware, and it has received considerable attention from both academic and industrial communities. By considering how malware behaves, we can tackle the malware obfuscation problem, which cannot be processed by traditional static analysis approaches, and we can also derive the as-built behavior specifications and cover the entire behavior space of the malware samples. Although there have been several works focusing on malware behavior analysis, such research is far from mature, and no overviews have been put forward to date to investigate current developments and challenges. In this paper, we conduct a survey on malware behavior description and analysis considering three aspects: malware behavior description, behavior analysis methods, and visualization techniques. First, existing behavior data types and emerging techniques for malware behavior description are explored, especially the goals, principles, characteristics, and classifications of behavior analysis techniques proposed in the existing approaches. Second, the inadequacies and challenges in malware behavior analysis are summarized from different perspectives. Finally, several possible directions are discussed for future research.
Artificial intelligence (AI) has played a significant role in imitating and producing large-scale designs such as e-commerce banners. However, it is less successful at creative and collaborative design outputs. Most humans express their ideas as rough sketches, and lack the professional skills to complete pleasing paintings. Existing AI approaches have failed to convert varied user sketches into artistically beautiful paintings while preserving their semantic concepts. To bridge this gap, we have developed SmartPaint, a co-creative drawing system based on generative adversarial networks (GANs), enabling a machine and a human being to collaborate in cartoon landscape painting. SmartPaint trains a GAN using triples of cartoon images, their corresponding semantic label maps, and edge detection maps. The machine can then simultaneously understand the cartoon style and semantics, along with the spatial relationships among the objects in the landscape images. The trained system receives a sketch as a semantic label map input, and automatically synthesizes its edge map for stable handling of varied sketches. It then outputs a creative and fine painting with the appropriate style corresponding to the human’s sketch. Experiments confirmed that the proposed SmartPaint system successfully generates high-quality cartoon paintings.
Emotion-based features are critical for achieving high performance in a speech emotion recognition (SER) system. In general, it is difficult to develop these features due to the ambiguity of the ground truth. In this paper, we apply several unsupervised feature learning algorithms (including K-means clustering, the sparse auto-encoder, and sparse restricted Boltzmann machines), which have promise for learning task-related features by using unlabeled data, to speech emotion recognition. We then evaluate the performance of the proposed approach and present a detailed analysis of the effect of two important factors in the model setup: the content window size and the number of hidden layer nodes. Experimental results show that larger content windows and more hidden nodes contribute to higher performance. We also show that the two-layer network does not clearly improve performance compared to a single-layer network.
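As a minimal illustration of the unsupervised feature learning idea, the sketch below fits K-means centroids on unlabeled frame vectors and then encodes each frame by its distances to the centroids. The toy data, value of k, and iteration count are illustrative stand-ins; the actual system also evaluates sparse auto-encoders and restricted Boltzmann machines, which are not sketched here.

```python
import numpy as np

def kmeans_features(X, k=4, iters=20, seed=0):
    """Unsupervised K-means feature learning: fit k centroids on unlabeled
    frames, then encode each frame by its distances to the centroids
    (a simple stand-in for learned task-related features)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each frame to its nearest centroid
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned frames
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    # encoding: distance of each sample to every learned centroid
    return np.linalg.norm(X[:, None] - centroids[None], axis=2)
```

The resulting distance vectors replace the raw frames as the input representation for the downstream emotion classifier.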
In this study, we provide an overview of recent advances in multisensor multitarget tracking based on the random finite set (RFS) approach. The fusion that plays a fundamental role in multisensor filtering is classified into data-level multitarget measurement fusion and estimate-level multitarget density fusion, which share and fuse local measurements and posterior densities between sensors, respectively. Important properties of each fusion rule including the optimality and sub-optimality are presented. In particular, two robust multitarget density-averaging approaches, arithmetic- and geometric-average fusion, are addressed in detail for various RFSs. Relevant research topics and remaining challenges are highlighted.
Developing an efficient and robust lightweight graphic user interface (GUI) for industry process monitoring is always a challenging task. Current implementation methods for embedded GUIs suffer from problems with real-time processing and ergonomics performance. To address these issues, an embedded lightweight GUI component library design method based on Qt/Embedded (Qt/E) is proposed. First, an entity-relationship (E-R) model for the GUI library is developed to define the functional framework and data coupling relations. Second, a cross-compilation environment is constructed, and the Qt/E shared library files are tailored to satisfy the requirements of embedded target systems. Third, by using the signal-slot communication interfaces, a message mapping mechanism that does not require a call-back pointer is developed, and the context switching performance is improved. Using a multi-thread method, the parallel task processing capabilities for data collection, calculation, and display are enhanced, and real-time performance and robustness are guaranteed. Finally, the human-computer interaction process is optimized by a scrolling page method, and the ergonomics performance is verified by industrial psychology methods. Two numerical cases and five industrial experiments show that the proposed method can increase real-time read-write correction ratios by more than 26% and 29%, compared with Windows-CE-GUI and Android-GUI, respectively. The component library can be tailored to 900 KB and supports 12 hardware platforms. The average session switch time can be controlled within 0.6 s, and six key indexes for ergonomics are verified by different industrial applications.
We propose a novel approach, called the robust fractional-order proportional-integral-derivative (FOPID) controller, to stabilize a perturbed nonlinear chaotic system on one of its unstable fixed points. The stability analysis of the nonlinear chaotic system is made based on the proportional-integral-derivative actions using the bifurcation diagram. We extract an initial set of controller parameters, which are subsequently optimized using a quadratic criterion. The integral and derivative fractional orders are also identified by this quadratic criterion. By applying numerical simulations on two nonlinear systems, namely the multi-scroll Chen system and the Genesio-Tesi system, we show that the fractional PIλDμ controller provides the best closed-loop system performance in stabilizing the unstable fixed points, even in the presence of random perturbation.
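In standard notation, the fractional PIλDμ controller tuned above has the transfer function below, where λ and μ are the fractional integral and derivative orders identified by the quadratic criterion; setting λ = μ = 1 recovers the classical PID controller:

```latex
C(s) = K_p + \frac{K_i}{s^{\lambda}} + K_d\, s^{\mu}, \qquad \lambda, \mu > 0 .
```

The two extra degrees of freedom (λ, μ) are what allow the FOPID controller to outperform the integer-order PID in the reported stabilization experiments.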
The long-term goal of artificial intelligence is to enable machines to learn and think like humans. Because many problems that humans face are uncertain, fragile, and open-ended, no machine, however intelligent, can completely replace humans. It is therefore necessary to introduce the human role or human cognitive models into artificial intelligence systems, forming hybrid-augmented intelligence, which is a feasible and important growth mode of artificial or machine intelligence. Hybrid-augmented intelligence can be divided into two basic forms: one is human-in-the-loop human-machine collaborative hybrid-augmented intelligence, and the other embeds cognitive models into machine learning systems to form hybrid intelligence based on cognitive computing. This paper discusses the basic framework of human-machine collaborative hybrid-augmented intelligence, as well as the basic elements of hybrid-augmented intelligence based on cognitive computing: intuitive reasoning and causal models, memory, and knowledge evolution. In particular, it discusses the role and basic principles of intuitive reasoning in complex problem solving, and a cognitive learning network for visual scene understanding based on memory and reasoning. It also describes a competitive-adversarial cognitive learning method and discusses its application in autonomous driving. Finally, typical applications of hybrid-augmented intelligence in related fields are presented.
Cryptocurrencies represented by Bitcoin have fully demonstrated their advantages and great potential in payment and monetary systems during the last decade. The mining pool, which is considered the source of Bitcoin, is the cornerstone of market stability. Surveillance of mining pools can help regulators effectively assess the overall health of Bitcoin and identify potential issues. However, the anonymity of mining-pool miners and the difficulty of analyzing large numbers of transactions limit in-depth analysis. It is also a challenge to achieve intuitive and comprehensive monitoring of multi-source heterogeneous data. In this study, we present SuPoolVisor, an interactive visual analytics system that supports surveillance of the mining pool and de-anonymization by visual reasoning. SuPoolVisor is divided into the pool level and the address level. At the pool level, we use a sorted stream graph to illustrate the evolution of the computing power of pools over time, and glyphs are designed in two other views to demonstrate the influence scope of the mining pool and the migration of pool members. At the address level, we use a force-directed graph and a massive sequence view to present the dynamic address network in the mining pool. In particular, these two views, together with the Radviz view, support an iterative visual reasoning process for de-anonymization of pool members and provide interactions for cross-view analysis and identity marking. The effectiveness and usability of SuPoolVisor are demonstrated using three cases, in which we cooperated closely with experts in this field.