Network technology underpins large-scale, high-efficiency network computing, such as supercomputing, cloud computing, big data processing, and artificial intelligence computing. The network technologies of computing systems in different fields not only borrow from one another but also involve targeted design and optimization. Taking a comprehensive view, this paper summarizes three development trends for network technologies across these fields: integration, differentiation, and optimization. Integration reflects the absence of clear boundaries between network technologies in different fields; differentiation reflects unique solutions in particular application fields, or innovative solutions driven by new application requirements; and optimization reflects optimizations made for specific scenarios. This paper can help academic researchers consider what should be done in the future and help industry personnel consider how to build efficient, practical network systems.
Artificial intelligence (AI) has accelerated the advancement of financial services by identifying hidden patterns in data to improve the quality of financial decisions. However, in addition to commonly desired attributes such as model accuracy, financial services demand trustworthy AI with properties that have not yet been adequately realized: interpretability, fairness and inclusiveness, robustness and security, and privacy protection. Here, we review the recent progress and limitations of applying AI to various areas of financial services, including risk management, fraud detection, wealth management, personalized services, and regulatory technology. Based on this progress and these limitations, we introduce FinBrain 2.0, a research framework toward trustworthy AI. We argue that we are still a long way from truly trustworthy AI in financial services and call on the AI and financial-industry communities to join in this effort.
As an interdisciplinary research approach, traditional cognitive science mainly adopts the experiment, induction, modeling, and validation paradigm. Such models are sometimes not applicable in cyber-physical-social systems (CPSSs), where the large number of human users introduces severe heterogeneity and dynamics. To reduce decision-making conflicts between people and machines in human-centered systems, we propose a new research paradigm, called parallel cognition, that uses a system of intelligent techniques to investigate cognitive activities and functions in three stages: descriptive cognition based on artificial cognitive systems (ACSs), predictive cognition with computational deliberation experiments, and prescriptive cognition via parallel behavioral prescription. To keep these stages iterating continuously online, a hybrid learning method based on both a psychological model and user behavioral data is further proposed to adaptively learn an individual’s cognitive knowledge. Preliminary experiments on two representative scenarios, urban travel behavioral prescription and cognitive visual reasoning, indicate that our parallel cognition learning is effective and feasible for human behavioral prescription, and can thus facilitate human-machine cooperation in both complex engineering and social systems.
The goal of decentralized multi-source domain adaptation is to conduct unsupervised multi-source domain adaptation in a data decentralization scenario. The challenge of data decentralization is that the source domains and the target domain cannot collaborate across domains during training. On the unlabeled target domain, the target model needs to acquire supervision knowledge through the collaboration of source models, while the domain gap limits the adaptation performance obtainable from those source models. On the labeled source domains, each source model tends to overfit its own domain data in the data decentralization scenario, which leads to the negative transfer problem. To address these challenges, we propose dual collaboration for decentralized multi-source domain adaptation, training and aggregating the local source models and the local target model in collaboration with each other. On the target domain, we train the local target model by distilling supervision knowledge and fully using the unlabeled target domain data to alleviate the domain shift problem with the collaboration of the local source models. On the source domains, we regularize the local source models in collaboration with the local target model to overcome the negative transfer problem. This forms a dual collaboration between the decentralized source domains and the target domain, which improves domain adaptation performance under the data decentralization scenario. Extensive experiments indicate that our method outperforms state-of-the-art methods by a large margin on standard multi-source domain adaptation datasets.
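As a minimal sketch of the distillation step described above (not the authors' implementation; the aggregation weights, function names, and the use of a KL-divergence objective are our illustrative assumptions), the local target model can be supervised by a weighted combination of the source models' soft predictions on unlabeled target data:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distill_target(source_logits, weights):
    """Aggregate the source models' predictions on one unlabeled target
    sample into a soft pseudo-label (weights sum to 1)."""
    probs = np.stack([softmax(l) for l in source_logits])  # (S, C)
    return np.tensordot(weights, probs, axes=1)            # (C,)

def kl_distillation_loss(target_logits, soft_label, eps=1e-12):
    """KL(soft_label || target_model): the supervision signal distilled
    into the local target model without sharing raw source data."""
    p = np.clip(soft_label, eps, 1.0)
    q = np.clip(softmax(target_logits), eps, 1.0)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

The loss vanishes when the target model reproduces the aggregated source prediction, so minimizing it over unlabeled target samples transfers supervision knowledge without any cross-domain data exchange.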
Traffic signal control is shifting from passive control to proactive control, which enables the controller to direct current traffic flow toward its expected destinations. To this end, an effective prediction model is needed for signal controllers. What to predict, how to predict it, and how to leverage the prediction for control policy optimization are critical problems for proactive traffic signal control. In this paper, we use an image that contains vehicle positions to describe intersection traffic states. Then, inspired by DreamerV2, a model-based reinforcement learning method, we introduce a novel learning-based traffic world model. This world model, which describes traffic dynamics in image form, serves as an abstract alternative to the traffic environment and generates multi-step planning data for control policy optimization. In the execution phase, the optimized traffic controller directly outputs actions in real time based on abstract representations of traffic states, and the world model can also predict the impact of different control behaviors on future traffic conditions. Experimental results indicate that the traffic world model enables the optimized real-time control policy to outperform common baselines, and that the model achieves accurate image-based prediction, showing promise for future traffic signal control.
As an indispensable part of process monitoring, fault classification relies heavily on the sufficiency of process knowledge for its performance. However, data labels are often difficult to acquire because of limited sampling conditions or expensive laboratory analysis, which may lead to deterioration of classification performance. To handle this dilemma, a new semi-supervised fault classification strategy is proposed in which enhanced active learning evaluates the value of each unlabeled sample with respect to a specific labeled dataset. Unlabeled samples with high value serve as supplementary information for the training dataset. In addition, we introduce several reasonable indexes and criteria, so that human labeling effort is greatly reduced. Finally, the fault classification effectiveness of the proposed method is evaluated using a numerical example and the Tennessee Eastman process.
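One common way to score the value of an unlabeled sample in active learning is predictive entropy: a classifier trained on the labeled set is most uncertain about, and hence gains most from, high-entropy samples. The sketch below illustrates this generic idea only; the paper's specific indexes and criteria are not reproduced here.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Uncertainty of a classifier's class-probability vectors."""
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def select_valuable(unlabeled_probs, k):
    """Return indices of the k unlabeled samples with the highest
    predictive entropy -- candidates to augment the training set."""
    scores = predictive_entropy(np.asarray(unlabeled_probs))
    return np.argsort(-scores)[:k]
```

Selected samples would then be labeled (by an analyst, or automatically when the criteria permit) and added to the training set before retraining the fault classifier.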
The sparrow search algorithm (SSA) is a recent meta-heuristic optimization approach with the advantages of simplicity and flexibility. However, SSA still faces the challenges of premature convergence and an imbalance between exploration and exploitation, especially when tackling multimodal optimization problems. To address these problems, we propose an enhanced variant of SSA called the multi-strategy enhanced sparrow search algorithm (MSSSA) in this paper. First, a chaotic map is introduced to obtain a high-quality initial population for SSA, and an opposition-based learning strategy is employed to increase population diversity. Then, an adaptive parameter control strategy is designed to achieve an adequate balance between exploration and exploitation. Finally, a hybrid disturbance mechanism is embedded in the individual update stage to avoid falling into local optima. To validate the effectiveness of the proposed MSSSA, a large number of experiments are conducted, including 40 complex functions from the IEEE CEC2014 and IEEE CEC2019 test suites and 10 classical functions with different dimensions. Experimental results show that the MSSSA achieves competitive performance compared with several state-of-the-art optimization algorithms. The proposed MSSSA is also successfully applied to solve two engineering optimization problems. The results demonstrate the superiority of the MSSSA in addressing practical problems.
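The initialization stage described above can be sketched as follows. This is an illustrative reading, assuming a logistic chaotic map and the standard opposition-based learning rule (keep the fitter of each individual and its opposite point); the paper's exact map and parameters may differ.

```python
import numpy as np

def chaotic_init(pop_size, dim, lb, ub, mu=4.0, seed=0):
    """Logistic-map chaotic initialization of the sparrow population:
    iterate x <- mu * x * (1 - x), then scale into [lb, ub]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.1, 0.9, size=(pop_size, dim))
    for _ in range(50):
        x = mu * x * (1.0 - x)
    return lb + x * (ub - lb)

def opposition_refine(pop, lb, ub, fitness):
    """Opposition-based learning: compare each individual with its
    opposite point lb + ub - x and keep the fitter half (minimization)."""
    opp = lb + ub - pop
    both = np.vstack([pop, opp])
    f = np.apply_along_axis(fitness, 1, both)
    idx = np.argsort(f)[: len(pop)]
    return both[idx]
```

The refined population can only match or improve the total fitness of the chaotic one, since the original individuals remain candidates for selection.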
Analyzing the vulnerability of power systems to cascading failures is generally regarded as a challenging problem. Although existing studies can extract some critical rules, they fail to capture the complex subtleties that arise under different operational conditions. In recent years, several deep learning methods have been applied to address this issue. However, most existing deep learning methods consider only the grid topology of a power system in terms of topological connections, and do not incorporate the power system’s spatial information, such as electrical distance, that could increase the accuracy of graph convolution. In this paper, we construct a novel power-weighted line graph that uses both power system topology and spatial information to optimize the edge weight assignment of the line graph. We then propose a multi-graph convolutional network (MGCN) based on a graph classification task, which preserves a power system’s spatial correlations and captures the relationships among physical components. Our model better handles power systems with parallel lines, maintaining desirable accuracy when modeling systems with these extra topological features. To increase the interpretability of the model, we explain the MGCN using layer-wise relevance propagation and quantify the contributing factors of the model’s classification.
In this paper, observer-based control is studied for fractional-order singular systems with order α (0 < α < 1) and input delay. On the basis of the Smith predictor and the approximation error, the system with input delay is shown to be approximately equivalent to a system without input delay. Furthermore, based on the linear matrix inequality (LMI) technique, a necessary and sufficient condition for observer-based control is proposed. Since this condition is a nonstrict LMI containing an equality constraint, it causes numerical difficulties when solved with standard LMI toolboxes. Thus, an improved strict LMI-based condition is derived in this paper. Finally, a numerical example and a direct-current motor example are given to illustrate the effectiveness of the strict LMI-based condition.
A novel algorithm that combines the generalized labeled multi-Bernoulli (GLMB) filter with signal features of an unknown emitter is proposed in this paper. In complex electromagnetic environments, emitter features (EFs) are often unknown and time-varying. To address the unknown-feature problem, we propose a method for identifying EFs based on dynamic clustering over data fields. Because EFs are time-varying and their probability distribution is unknown, an improved fuzzy C-means algorithm is proposed to calculate the correlation coefficients between the target and the measurements, thereby approximating the EF likelihood function. On this basis, the EF likelihood function is integrated into the recursive GLMB filtering process to obtain new prediction and update equations. Simulation results show that the proposed method can improve the tracking performance for multiple targets, especially in heavy-clutter environments.
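For orientation, the standard fuzzy C-means membership update on which such an improved algorithm builds can be sketched as below; this is the textbook update only (the paper's improvements for time-varying EFs are not reproduced), and all names are illustrative.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-12):
    """Standard fuzzy C-means membership update: u[i, j] is the degree
    to which measurement i belongs to cluster j, computed as
    u_ij = 1 / sum_k (d_ij / d_ik)^(2 / (m - 1))."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    power = 2.0 / (m - 1.0)
    ratio = (d[:, :, None] / d[:, None, :]) ** power
    return 1.0 / ratio.sum(axis=2)
```

Each row of the resulting membership matrix sums to 1, so the memberships can be read as soft correlation coefficients between a measurement and the candidate clusters.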