Symbolic execution is an effective way of systematically exploring the search space of a program and is often used for automatic software testing and bug finding. The program to be analyzed is usually compiled into a binary or an intermediate representation, on which symbolic execution is carried out. During this process, compiler optimizations influence the effectiveness and efficiency of symbolic execution. However, to the best of our knowledge, no existing work recommends compiler optimizations for symbolic execution with respect to (w.r.t.) modified condition/decision coverage (MC/DC), an important testing coverage criterion widely used for mission-critical software. In this study, we use a state-of-the-art symbolic execution tool to carry out extensive experiments on the impact of compiler optimizations on symbolic execution w.r.t. MC/DC. The results indicate that instruction combining (IC) is the dominant optimization for symbolic execution w.r.t. MC/DC. We therefore design and implement a support vector machine (SVM)-based optimization recommendation method w.r.t. IC (denoted as auto). Experiments on two standard benchmarks (Coreutils and NECLA) show that auto achieves the best MC/DC on 67.47% of Coreutils programs and 78.26% of NECLA programs.
General purpose graphics processing units (GPGPUs) can considerably improve computing performance for regular applications. However, irregular memory access exists in many applications, and the benefits of graphics processing units (GPUs) are less substantial for such irregular applications. In recent years, several studies have presented solutions to remove static irregular memory access, but eliminating dynamic irregular memory access in software remains a serious challenge. We propose a pure software solution, requiring neither hardware extensions nor offline profiling, to eliminate dynamic irregular memory access, especially indirect memory access. Data reordering and index redirection are used to reduce the number of memory transactions, thereby improving the performance of GPU kernels. To improve the efficiency of data reordering, the reordering operation is offloaded to the GPU, reducing the overhead of data transfer. By concurrently executing the compute unified device architecture (CUDA) streams of data reordering and of the data-processing kernel, the overhead of data reordering can be hidden. With these optimizations, the volume of memory transactions is reduced by 16.7%–50% compared with cuSPARSE-based benchmarks, and the performance of irregular kernels is improved by 9.64%–34.9% on an NVIDIA Tesla P4 GPU.
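As a rough illustration of why data reordering helps, the sketch below uses a hypothetical cost model (32-thread warps and 32-element memory segments are assumptions for illustration, not figures from the paper) to count the memory segments each warp would touch for an indirect access pattern, before and after sorting the indices:

```python
import numpy as np

def transactions_per_warp(indices, warp=32, seg=32):
    """Hypothetical cost model: count the distinct 32-element memory
    segments touched by each 32-thread warp (both sizes assumed)."""
    total = 0
    for w in range(0, len(indices), warp):
        total += len({i // seg for i in indices[w:w + warp]})
    return total

rng = np.random.default_rng(0)
idx = rng.integers(0, 4096, size=1024).tolist()  # irregular indirect indices
reordered = sorted(idx)                          # data reordering: group nearby accesses
before = transactions_per_warp(idx)
after = transactions_per_warp(reordered)
assert after < before                            # sorted accesses coalesce into fewer segments
```

Sorting groups nearby indices into the same warp, so each warp touches far fewer segments; a companion permutation (index redirection) would map results back to their original positions, and the actual method additionally offloads the reordering to the GPU and overlaps it with computation via CUDA streams.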
The weighted subspace fitting (WSF) algorithm performs better than the multiple signal classification (MUSIC) algorithm at low signal-to-noise ratio (SNR) and when signals are correlated. In this study, we use random matrix theory (RMT) to improve WSF. RMT focuses on the asymptotic behavior of eigenvalues and eigenvectors of random matrices whose dimensions increase at the same rate. An approximate first-order perturbation is applied in WSF when calculating the statistics of the eigenvectors of the sample covariance matrix. Using the asymptotic results from RMT on the norm of the projection of the sample-covariance signal subspace onto the true signal subspace, we obtain an RMT-based method for computing WSF. Numerical results demonstrate the superiority of the RMT-based method in scenarios with few snapshots and a low SNR.
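The subspace quantity at the heart of this analysis can be illustrated numerically. The sketch below (a simplified single-source setup with an assumed 8-element uniform linear array; not the authors' experiment) forms the sample covariance from snapshots and measures the norm of the projection of its dominant eigenvector onto the true steering vector:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 500                        # sensors, snapshots (illustrative values)
theta = np.deg2rad(20.0)             # true direction of arrival
# unit-norm steering vector of a half-wavelength-spaced uniform linear array
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta)) / np.sqrt(M)
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)            # source signal
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = np.outer(a, s) + noise           # array snapshots
R = X @ X.conj().T / N               # sample covariance matrix
w, V = np.linalg.eigh(R)
u = V[:, -1]                         # dominant eigenvector: estimated signal subspace
proj = abs(np.vdot(a, u))            # norm of its projection onto the true subspace
assert proj > 0.9                    # close to 1 when the subspace estimate is good
```

RMT characterizes how this projection norm degrades when the number of snapshots is comparable to the array dimension, which is exactly the regime where the corrected WSF estimator pays off.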
In dense-traffic unmanned aerial vehicle (UAV) ad-hoc networks, traffic congestion can cause increased delay and packet loss, which limit network performance; therefore, a traffic balancing strategy is required to control the traffic. In this study, we propose TQNGPSR, a traffic-aware Q-network enhanced geographic routing protocol based on greedy perimeter stateless routing (GPSR), for UAV ad-hoc networks. The protocol enforces a traffic balancing strategy using the congestion information of neighbors, and evaluates the quality of a wireless link with the Q-network algorithm, a reinforcement learning technique. Based on the evaluation of each wireless link, the protocol makes routing decisions among multiple available choices to reduce delay and packet loss. We simulate TQNGPSR and compare it with AODV, OLSR, GPSR, and QNGPSR. Simulation results show that TQNGPSR obtains higher packet delivery ratios and lower end-to-end delays than GPSR and QNGPSR. In high node-density scenarios, it also outperforms AODV and OLSR in terms of packet delivery ratio, end-to-end delay, and throughput.
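The Q-network in TQNGPSR is a neural approximator; as a toy stand-in, the tabular sketch below applies the same one-step Q-learning update to a next-hop choice between two hypothetical neighbors whose per-hop delays encode congestion (all names and numeric values are illustrative, not from the paper):

```python
import random

# hypothetical toy setting: a node chooses a next hop toward the destination
neighbors = ["A", "B"]
delay = {"A": 1.0, "B": 3.0}   # assumed per-hop delays: A is less congested
Q = {n: 0.0 for n in neighbors}
alpha, gamma, eps = 0.1, 0.9, 0.2
random.seed(0)
for _ in range(500):
    # epsilon-greedy selection over next hops
    n = random.choice(neighbors) if random.random() < eps else max(Q, key=Q.get)
    reward = -delay[n]         # congestion-aware reward: lower delay is better
    # one-step Q update; the next hop reaches the destination, so future value is 0
    Q[n] += alpha * (reward + gamma * 0.0 - Q[n])
assert max(Q, key=Q.get) == "A"   # learning converges on the uncongested neighbor
```

The real protocol replaces the table with a Q-network over link-quality features and folds the learned values into GPSR's greedy geographic forwarding.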
Large-scale datasets are driving the rapid development of deep convolutional neural networks for visual sentiment analysis. However, annotating large-scale datasets is expensive and time-consuming, whereas weakly labeled web images are easy to obtain from the Internet. Noisy labels nevertheless lead to seriously degraded performance when web images are used directly to train networks. To address this drawback, we propose an end-to-end weakly supervised learning network that is robust to mislabeled web images. Specifically, the proposed attention module automatically suppresses samples with incorrect labels by reducing their attention scores during training, while the special-class activation map module stimulates the network by focusing on the significant regions of correctly labeled samples in a weakly supervised manner. In addition to feature learning, regularization is applied to the classifier to minimize the distance between samples within the same class and maximize the distance between different class centroids. Quantitative and qualitative evaluations on well-labeled and mislabeled web image datasets demonstrate that the proposed algorithm outperforms related methods.
Similarity measurement has long played a critical role and attracted great interest in various areas such as pattern recognition and machine perception. Nevertheless, developing an efficient two-dimensional (2D) robust similarity measure for images remains an open issue. Inspired by the properties of subspaces, we develop an effective 2D image similarity measure, named the transformation similarity measure (TSM), for robust face recognition. Specifically, TSM robustly determines the similarity between two well-aligned frontal facial images while weakening interference in face recognition through linear transformation and singular value decomposition. We present the mathematical properties of TSM to reveal its feasible and robust measurement mechanism. The performance of TSM, combined with the nearest neighbor rule, is evaluated on face recognition under different challenges. Experimental results clearly show the advantages of TSM in terms of accuracy and robustness.
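TSM itself is defined in the paper; as a loosely related illustration of SVD-based 2D similarity, the sketch below compares images by the principal angles between their leading left singular subspaces (a hypothetical measure chosen for illustration, not the authors' TSM):

```python
import numpy as np

def subspace_similarity(A, B, k=3):
    """Hypothetical SVD-based 2D similarity: mean cosine of the principal
    angles between the leading k left singular subspaces of two images."""
    Ua = np.linalg.svd(A, full_matrices=False)[0][:, :k]
    Ub = np.linalg.svd(B, full_matrices=False)[0][:, :k]
    s = np.linalg.svd(Ua.T @ Ub, compute_uv=False)  # cosines of principal angles
    return float(s.mean())

rng = np.random.default_rng(2)
img = rng.random((32, 32))
noisy = img + 0.01 * rng.random((32, 32))    # slightly perturbed version of img
other = rng.random((32, 32))                 # unrelated image
# a matrix is maximally similar to a mild perturbation of itself
assert subspace_similarity(img, noisy) > subspace_similarity(img, other)
```

Because the comparison operates on whole 2D matrices rather than flattened vectors, small pixel-level perturbations barely move the leading subspaces, which is the kind of robustness the TSM work targets.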
Opinion question machine reading comprehension (MRC) requires a machine to answer questions by analyzing corresponding passages. Compared with traditional MRC tasks, where the answer to every question is a segment of text in the corresponding passage, opinion question MRC is more challenging because the answer to an opinion question may not appear in the corresponding passage and must instead be deduced from multiple sentences. In this study, a novel framework based on neural networks is proposed to address such problems, in which a new hybrid embedding training method combining text features is used. Furthermore, extra attention and output layers, which generate auxiliary losses, are introduced to jointly train the stacked recurrent neural networks. To deal with dataset imbalance, irrelevant question–passage pairs are used for data augmentation. Experimental results show that the proposed method achieves state-of-the-art performance. Our approach won the biweekly championship in the opinion question MRC task of Artificial Intelligence Challenger 2018 (AIC2018).
To address indeterminism in the bilevel knapsack problem, an uncertain bilevel knapsack problem (UBKP) model is proposed. Then, an uncertain solution for UBKP is proposed by defining the
A robust polynomial observer is designed based on passive synchronization of a given class of fractional-order Colpitts (FOC) systems with mismatched uncertainties and disturbances. The primary objective of the proposed observer is to minimize the effect of unknown bounded disturbances on the estimation errors. A more practicable output-feedback passive controller is proposed using an adaptive polynomial state observer. A continuous frequency distributed model of the FOC system is considered in analyzing the stability of the observer. We then derive stringent conditions for robust passive synchronization using Finsler’s lemma and fractional Lyapunov stability theory. It is shown that the proposed method not only guarantees the asymptotic stability of the controller but also allows the derived adaptation law to reject the uncertainties within the nonlinear plant’s dynamics. The entire passivity-based system is implemented in detail in PSpice to demonstrate the feasibility of the proposed control scheme. The results are illustrated by computer simulations of the control problem for the fractional-order chaotic Colpitts system. The proposed approach provides an efficient and systematic control procedure for a large class of nonlinear systems with fractional derivatives.
Biological neurons can receive inputs and capture a variety of external stimuli, which can be encoded and transmitted as different electric signals; the membrane potential is thus adjusted to activate the appropriate firing modes. Reliable neuron models should therefore take intrinsic biophysical effects and functional encoding into consideration. One fascinating and important question concerns the physical mechanism by which external signals are transcribed: external signals can be transmitted as a transmembrane current or a signal voltage for generating action potentials. We present a photosensitive neuron model to estimate the nonlinear encoding and responses of neurons driven by external optical signals. In the model, a photocell (phototube) is used to activate a simple FitzHugh-Nagumo (FHN) neuron, and external optical signals (illumination) are imposed to excite the photocell, generating a time-varying current/voltage source. The photocell-coupled FHN neuron can therefore capture and encode external optical signals, similar to an artificial eye. We also present a detailed bifurcation analysis to estimate the mode transitions and firing-pattern selection of neuronal electrical activities. The sampled time series can reproduce the main characteristics of biological neurons (quiescent, spiking, bursting, and even chaotic behaviors) by activating the photocell in the neural circuit. These results may provide guidance for studying neurodynamics and for applying neural circuits to detect optical signals.
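A minimal numerical sketch of the driven-neuron idea: the standard FHN equations integrated with explicit Euler, where a constant input current is a crude stand-in for the photocell's response to illumination (all parameter values are common textbook choices, not the paper's):

```python
def fhn(I, dt=0.05, steps=4000, a=0.7, b=0.8, eps=0.08):
    """Integrate the FitzHugh-Nagumo model dv/dt = v - v^3/3 - w + I(t),
    dw/dt = eps*(v + a - b*w) with explicit Euler; returns the v trace."""
    v, w = -1.2, -0.62          # start near the undriven resting state
    vs = []
    for t in range(steps):
        dv = v - v**3 / 3 - w + I(t * dt)
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        vs.append(v)
    return vs

quiet = fhn(lambda t: 0.0)      # no illumination: membrane stays quiescent
driven = fhn(lambda t: 0.5)     # constant "optical" drive: repetitive spiking
assert max(driven) > 1.0        # spikes appear only under drive
assert max(quiet) < -1.0
```

In the paper the drive is not constant but the photocell's time-varying current/voltage output, and varying its amplitude and frequency is what produces the quiescent, spiking, bursting, and chaotic regimes mapped by the bifurcation analysis.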
Run-length limited (RLL) codes can facilitate reliable data transmission and provide flicker-free illumination in visible light communication (VLC) systems. We propose novel high-rate RLL codes that improve error performance and mitigate flicker. Two RLL coding schemes are developed by designing the finite-state machine: the coding gain is enhanced by improving the minimum Hamming distance, and small state numbers are realized using the state-splitting method. In our RLL code design, the construction of the codeword set is critical; this set is designed according to the set-partitioning algorithm criterion. The flicker control and minimum Hamming distance of the proposed RLL codes are described in detail, and the flicker performance of the different codes is compared using histograms. Simulations are conducted to evaluate the proposed RLL codes in on-off keying modulated VLC systems. Simulation results demonstrate that the proposed RLL codes achieve superior error performance compared with existing RLL codes.
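For context, the classic Manchester code, a rate-1/2 RLL code commonly used as a VLC baseline (not one of the codes proposed here), shows how an RLL mapping bounds run lengths and keeps a 50% duty cycle for flicker-free illumination:

```python
# Manchester code: each data bit maps to a DC-balanced 2-chip pattern,
# which bounds runs of identical chips and fixes the average brightness
ENC = {0: (0, 1), 1: (1, 0)}

def manchester_encode(bits):
    out = []
    for b in bits:
        out.extend(ENC[b])
    return out

def max_run(seq):
    """Length of the longest run of identical symbols in seq."""
    longest = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

coded = manchester_encode([0, 0, 0, 0, 1, 1, 1, 1])
assert sum(coded) * 2 == len(coded)   # exactly 50% duty cycle: no flicker
assert max_run(coded) <= 2            # run length limited to 2 chips
```

The proposed codes pursue the same run-length and duty-cycle guarantees at higher rates, while the finite-state machine design additionally enlarges the minimum Hamming distance between codeword sequences for coding gain.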