Jun 2017, Volume 18 Issue 5
    

  • Article
    Hai LI, Xiao-wei LIU, Rui WENG, Hai-feng ZHANG

    Differential capacitive detection has been widely used in the displacement measurement of the proof mass of vibratory gyroscopes, but it has not achieved high resolution in the angle detection of rotational gyroscopes because of restrictions in structure, theory, and interface circuitry. In this paper, a differential capacitive detection structure is presented to measure the tilt angle of the rotor of a novel rotational gyroscope. A mathematical model is built to study how the structure's capacitance changes with the rotor tilt angle. The relationship between differential capacitance and structural parameters is analyzed, and preliminarily optimized size parameters are adopted. A low-noise readout interface circuit is designed to convert differential capacitance changes into voltage signals. Rate table tests of the gyroscope show that the smallest resolvable tilt angle of the rotor is less than 0.47″ (0.00013°), and the nonlinearity of the angle detection structure is 0.33%, which can be further improved. The results indicate that the proposed detection structure and circuitry contribute to the high accuracy of the gyroscope.
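
    To give a rough sense of how a differential pair converts a small tilt into a capacitance change, consider a simplified parallel-plate approximation (an illustrative assumption, not the model derived in the paper): two sense electrodes of area A sit under the rotor at nominal gap d, symmetrically placed at an effective radius r from the tilt axis, so a small tilt θ narrows one gap and widens the other:

        C_1 \approx \frac{\varepsilon_0 A}{d - r\theta}, \qquad
        C_2 \approx \frac{\varepsilon_0 A}{d + r\theta}, \qquad
        \Delta C = C_1 - C_2 \approx \frac{2\varepsilon_0 A r\theta}{d^2} \quad (r\theta \ll d).

    To first order the differential output is linear in the tilt angle, and its sensitivity grows with electrode area while shrinking quadratically with the gap, which is why the size parameters are worth optimizing.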

  • Article
    Gong-jun LI

    We investigate the adaptive tracking problem for the longitudinal dynamics of state-constrained air-breathing hypersonic vehicles, where not only the velocity and the altitude but also the angle of attack (AOA) is required to be tracked. A novel indirect AOA tracking strategy is proposed by viewing the pitch angle as a new output and devising an appropriate pitch angle reference trajectory. Then, based on the redefined outputs (i.e., the velocity, the altitude, and the pitch angle), a modified backstepping design is proposed in which a barrier Lyapunov function is used to handle the state constraints while the control gain of this class of systems is unknown. Stability analysis shows that the tracking objective is achieved, all closed-loop signals are bounded, and all states always satisfy the given constraints. Finally, numerical simulations verify the effectiveness of the proposed approach.
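
    For readers unfamiliar with barrier Lyapunov functions, a commonly used log-type form (an illustrative example, not necessarily the exact function adopted in the paper) for a tracking error z constrained by |z| < k_b is

        V(z) = \frac{1}{2}\ln\frac{k_b^2}{k_b^2 - z^2},

    which is positive definite for |z| < k_b and grows unboundedly as |z| approaches k_b; keeping V bounded along the closed-loop trajectories therefore keeps the constrained state strictly inside its bound.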

  • Article
    Lei-ming ZHANG, Long-hao TANG, Yong LEI

    Controller area network (CAN) based fieldbus technologies have been widely used in networked manufacturing systems. As the information channel of the system, the reliability of the network is crucial to system throughput, product quality, and work crew safety. However, because the nodes' internal states are inaccessible, directly assessing the reliability of CAN nodes from their internal error counters is infeasible. In this paper, a novel CAN node reliability assessment method, which uses a node's time to bus-off as the reliability measure, is proposed. The method estimates the transmit error counter (TEC) of any node in the network based on the network error log and the information provided by the observable nodes whose error counters are accessible. First, a node TEC estimation model is established based on segmented Markov chains, which accounts for the sparse distribution of CAN network errors. Second, by learning the differences between the model estimates and the actual values from the observable nodes, a Bayesian network is developed as the estimation updating mechanism for the observable nodes. This updating mechanism is then transferred to general CAN nodes whose TEC values are not accessible, to update their TEC estimates. Finally, a node reliability assessment method is developed to predict the time for a node to reach the bus-off state. Case studies are carried out to demonstrate the effectiveness of the proposed methodology. Experimental results show that the estimates obtained with the proposed model agree well with actual observations.
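
    The standard CAN fault-confinement rules give a feel for how a TEC drifts toward bus-off: the counter increases by 8 on a transmit error, decreases by 1 on a successful transmission, and the node enters bus-off when the TEC exceeds 255. The following minimal Monte Carlo sketch is illustrative only: the two-state bursty error process and all probabilities are hypothetical, and the paper's segmented Markov chain and Bayesian updating are not reproduced here.

        import random

        def time_to_busoff(frame_period=0.001, p_enter_burst=1e-3, p_exit_burst=0.02,
                           p_err_burst=0.5, p_err_idle=1e-5):
            """CAN fault confinement: TEC +8 on a transmit error, -1 on a successful
            transmission (not below 0), bus-off once the TEC exceeds 255."""
            tec, t, in_burst = 0, 0.0, False
            while tec <= 255:
                # Two-state (idle/burst) process: a crude stand-in for sparse, bursty errors.
                if in_burst:
                    in_burst = random.random() >= p_exit_burst
                else:
                    in_burst = random.random() < p_enter_burst
                p_err = p_err_burst if in_burst else p_err_idle
                tec = tec + 8 if random.random() < p_err else max(tec - 1, 0)
                t += frame_period
            return t

        runs = [time_to_busoff() for _ in range(100)]
        print("mean time to bus-off: %.1f s" % (sum(runs) / len(runs)))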

  • Article
    Wei LIU, Ai-qun HU

    We propose a new narrowband speech watermarking scheme that replaces part of the speech with a scaled and spectrally shaped hidden signal. We prove theoretically that if only a small amount of the host speech is modified, an ideal channel model for hidden communication can be established while high imperceptibility and good intelligibility are maintained. Furthermore, a practical system implementation is proposed. At the embedder, a power normalization criterion is first imposed on the passband watermark signal by forcing its power to match that of the original passband excitation of the cover speech, and a synthesis filter is then used to spectrally shape the scaled watermark signal. At the extractor, a bandpass filter first removes the out-of-band signal, and an analysis filter then compensates for the distortion introduced by the synthesis filter. Experimental results show that the data rate reaches 400 bits/s with good bandwidth efficiency and that good imperceptibility is achieved. Moreover, the method is robust against various attacks encountered in real applications.
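
    The power normalization step can be illustrated in a few lines: scale the passband watermark so that its power matches that of the original passband excitation before it is fed to the synthesis filter. The sketch below uses hypothetical white-noise stand-ins for both signals, and the analysis/synthesis filtering of the codec is omitted.

        import numpy as np

        def power_normalize(watermark, excitation, eps=1e-12):
            """Scale the watermark so its power equals that of the
            original passband excitation of the cover speech."""
            p_wm = np.mean(watermark ** 2)
            p_exc = np.mean(excitation ** 2)
            gain = np.sqrt(p_exc / (p_wm + eps))
            return gain * watermark

        # Hypothetical example: white-noise stand-ins for the two signals.
        rng = np.random.default_rng(0)
        excitation = 0.05 * rng.standard_normal(160)   # one 20-ms frame at 8 kHz
        watermark = rng.standard_normal(160)
        scaled = power_normalize(watermark, excitation)
        print(np.mean(scaled ** 2), np.mean(excitation ** 2))  # powers now match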

  • Article
    Chun-xue WANG, Li-gang LIU

    We present a fully automatic method for finding geometrically consistent correspondences while discarding outliers from candidate point matches between two images. Given a set of candidate matches provided by scale-invariant feature transform (SIFT) descriptors, which may contain many outliers, our goal is to select the subset of these matches that best preserves the geometric consistency induced by a mapping searched in the space of all diffeomorphisms. This problem can be formulated as a constrained optimization involving both a Beltrami coefficient (BC) term and a quasi-conformal map, and solved by an efficient iterative algorithm based on the variable splitting method. In each iteration, we solve two subproblems, namely a linear system and a linearly constrained convex quadratic program. Our algorithm is simple and robust to outliers. Experiments show that it produces more correct correspondences than state-of-the-art approaches.

  • Article
    Yong-ping DU, Chang-qing YAO, Shu-hua HUO, Jing-xuan LIU

    The collaborative filtering (CF) technique has been widely used in recommendation systems. It relies on historical data to make predictions and therefore suffers from the data sparsity problem. We propose a new item-based restricted Boltzmann machine (RBM) approach for CF that uses a deep multilayer RBM network structure, which alleviates the data sparsity problem and has an excellent ability to extract features. Each item is treated as a single RBM, and different items share the same weights and biases. The parameters are learned layer by layer in the deep network, and mini-batch gradient descent is used to speed up convergence. The feature vectors discovered by the multilayer RBM network are very effective for rating prediction. Experimental results on the MovieLens dataset show that the item-based multilayer RBM approach achieves the best performance, with a mean absolute error of 0.6424 and a root-mean-square error of 0.7843.
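
    As background, the core of any RBM-based CF model is the contrastive-divergence update. The sketch below trains a plain Bernoulli RBM with CD-1 and mini-batch updates on hypothetical binary "user liked item" data; the paper's item-based, multilayer variant with weights shared across items is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

        def train_rbm(V, n_hidden=32, lr=0.05, epochs=10, batch=64):
            """CD-1 training of a Bernoulli RBM on binary data V (n_samples x n_visible)."""
            n_vis = V.shape[1]
            W = 0.01 * rng.standard_normal((n_vis, n_hidden))
            b_v, b_h = np.zeros(n_vis), np.zeros(n_hidden)
            for _ in range(epochs):
                for i in range(0, len(V), batch):
                    v0 = V[i:i + batch]
                    h0 = sigmoid(v0 @ W + b_h)                        # positive phase
                    h_sample = (rng.random(h0.shape) < h0).astype(float)
                    v1 = sigmoid(h_sample @ W.T + b_v)                # reconstruction
                    h1 = sigmoid(v1 @ W + b_h)                        # negative phase
                    W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
                    b_v += lr * (v0 - v1).mean(axis=0)
                    b_h += lr * (h0 - h1).mean(axis=0)
            return W, b_v, b_h

        # Hypothetical binary "user liked item" matrix (500 users x 100 items).
        V = (rng.random((500, 100)) < 0.2).astype(float)
        W, b_v, b_h = train_rbm(V)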

  • Article
    Zhao-yun CHEN, Lei LUO, Da-fei HUANG, Mei WEN, Chun-yuan ZHANG

    Recently, correlation filter based trackers have attracted considerable attention for their high computational efficiency. However, they do not handle occlusion and scale variation well. This paper aims to prevent tracking failure in these two situations by integrating depth information into a correlation filter based tracker. Using RGB-D data, we construct a depth context model to reveal the spatial correlation between the target and its surrounding regions. Furthermore, we adopt a region growing method to make the tracker robust to occlusion and scale variation. Additional optimizations, such as a model updating scheme, are applied to improve performance on longer video sequences. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed tracker performs favourably against state-of-the-art algorithms.
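
    For context, the response map of a basic correlation filter tracker is computed entirely in the Fourier domain. The sketch below is a single-channel MOSSE-style filter on synthetic grayscale patches, shown only to illustrate how such trackers localize the target; the depth context model, region growing, and model-update scheme of the paper are not shown.

        import numpy as np

        def gaussian_response(shape, sigma=2.0):
            """Desired response: a Gaussian peak centred on the target."""
            h, w = shape
            ys, xs = np.mgrid[0:h, 0:w]
            return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

        def train_filter(patch, lam=1e-2):
            """MOSSE-style closed form: H* = (G .* conj(F)) / (F .* conj(F) + lam)."""
            F = np.fft.fft2(patch)
            G = np.fft.fft2(gaussian_response(patch.shape))
            return (G * np.conj(F)) / (F * np.conj(F) + lam)

        def detect(H_conj, new_patch):
            """Response map of a new patch; its peak gives the target location."""
            resp = np.real(np.fft.ifft2(np.fft.fft2(new_patch) * H_conj))
            return np.unravel_index(np.argmax(resp), resp.shape)

        # Hypothetical 64x64 patch; the peak shifts by the displacement of the target.
        rng = np.random.default_rng(0)
        patch = rng.random((64, 64))
        H = train_filter(patch)
        print(detect(H, np.roll(patch, (3, 5), axis=(0, 1))))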

  • Article
    Kai ZHU, Gang LIU, Long ZHAO, Wan ZHANG

    Label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. However, achieving both higher accuracy and shorter running time remains a challenge. In this paper we propose a novel patch-based segmentation method that combines a local weighted voting strategy with Bayesian inference. Multiple atlases are registered to a target image using the advanced normalization tools (ANTs) algorithm, and the labels of the atlas images are propagated to the target image to obtain its segmentation. We first adopt an intensity prior and a label prior as the two key metrics of the local weighted voting scheme, and compute both priors at the patch level. Further, we analyze the role of the image background in the label fusion procedure and treat the background as a separate label when estimating the label prior. Finally, taking the Dice score as the criterion for quantitatively assessing segmentation accuracy, we compare our results with those of other methods, including joint fusion, majority voting, local weighted voting, patch-based majority voting, and the widely used FreeSurfer whole-brain segmentation tool. The proposed algorithm clearly provides better results than the other methods. In the experiments, we also explore the influence of different parameters (including patch size, patch area, and the number of training subjects) on segmentation accuracy.
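
    The local weighted voting step can be sketched in a few lines: for each target voxel, registered atlas patches vote for their centre label with weights derived from patch intensity similarity. The sketch below uses a generic Gaussian intensity-similarity weight on hypothetical data; the Bayesian label prior, the ANTs registration, and the background handling described in the paper are omitted.

        import numpy as np

        def local_weighted_vote(target_patch, atlas_patches, atlas_labels, sigma=0.1):
            """Fuse atlas labels for one voxel by intensity-weighted voting.

            target_patch  : (p,) flattened patch around the target voxel
            atlas_patches : (n_atlases, p) corresponding registered atlas patches
            atlas_labels  : (n_atlases,) label of each atlas patch centre
            """
            d2 = np.sum((atlas_patches - target_patch) ** 2, axis=1)
            w = np.exp(-d2 / (2 * sigma ** 2))          # intensity similarity as weight
            votes = {}
            for weight, label in zip(w, atlas_labels):
                votes[label] = votes.get(label, 0.0) + weight
            return max(votes, key=votes.get)

        # Hypothetical example: 5 atlases, flattened 3x3x3 patches, labels 0 or 1.
        rng = np.random.default_rng(0)
        target = rng.random(27)
        atlases = target + 0.05 * rng.standard_normal((5, 27))
        labels = np.array([1, 1, 0, 1, 0])
        print(local_weighted_vote(target, atlases, labels))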

  • Article
    Wen-yan CUI, Xiang-ru MENG, Bin-feng YANG, Huan-huan YANG, Zhi-yuan ZHAO

    Network fault management is crucial for a wireless sensor network (WSN) to maintain a normal running state, because faults (e.g., link failures) often occur. The existing lossy link localization (LLL) approach usually infers the most probable failed link set first and then gives the fault hypothesis set. However, the inferred failed link set contains many possible failures that do not actually occur. This redundant information in the inferred set imposes a high computational burden on fault hypothesis inference, which in turn decreases the evaluation accuracy and increases the failure localization time. To address this issue, we propose conditional information entropy based redundancy elimination (CIERE), a redundant lossy link elimination approach that can eliminate most redundant information while preserving the important information. Specifically, we develop a probabilistically correlated failure model that accurately reflects the correlation between link failures and models the nondeterministic fault propagation. Through several rounds of mathematical derivation, the LLL problem is transformed into a set-covering problem, and a heuristic algorithm is proposed to deduce the failure hypothesis set. We compare the performance of the proposed approach with those of existing LLL methods in simulation and on a real WSN, and validate its efficiency and effectiveness.
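
    Once the LLL problem is cast as a set-covering problem, a heuristic is needed because exact set cover is NP-hard. The sketch below shows the standard greedy set-cover heuristic purely as a generic illustration (the link and symptom sets are hypothetical, and this is not necessarily the heuristic used in the paper): it repeatedly picks the candidate link that explains the most still-unexplained symptoms.

        def greedy_cover(observed_symptoms, candidate_links):
            """Generic greedy set cover.

            observed_symptoms : set of symptom ids that must be explained
            candidate_links   : dict mapping link id -> set of symptoms it can explain
            Returns a small set of links whose symptom sets cover all observations.
            """
            uncovered = set(observed_symptoms)
            hypothesis = []
            while uncovered:
                # Pick the link that explains the most still-uncovered symptoms.
                best = max(candidate_links, key=lambda l: len(candidate_links[l] & uncovered))
                gained = candidate_links[best] & uncovered
                if not gained:          # remaining symptoms cannot be explained
                    break
                hypothesis.append(best)
                uncovered -= gained
            return hypothesis

        # Hypothetical example: 3 candidate lossy links, 4 observed symptoms.
        links = {"l1": {1, 2}, "l2": {2, 3, 4}, "l3": {4}}
        print(greedy_cover({1, 2, 3, 4}, links))   # e.g. ['l2', 'l1']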

  • Article
    Wei ZHANG, Jia-yu ZHUANG, Xi YONG, Jian-kou LI, Wei CHEN, Zhe-min LI

    User-generated content (UGC) such as blogs and tweets is exploding in modern Internet services. In such systems, recommender systems are needed to help people filter the vast amount of UGC generated by other users. However, traditional recommendation models do not exploit the authorship of items. In this paper, we show that with this additional information we can significantly improve recommendation performance. A generative model that combines hierarchical topic modeling and matrix factorization is proposed. Empirical results show that our model outperforms other state-of-the-art models and can provide interpretable topic structures for users and items. Furthermore, since user interests can be inferred from what users produce, recommendations can be made for users who have no ratings, which addresses the cold-start problem.
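
    The matrix factorization half of such a hybrid model is easy to sketch: each user and each item gets a latent vector, ratings are predicted by their inner product, and the vectors are learned with SGD. The sketch below is MF-only on hypothetical toy data; the hierarchical topic model that couples item factors with text and authorship is not shown.

        import numpy as np

        def train_mf(ratings, n_users, n_items, k=16, lr=0.01, reg=0.05, epochs=20):
            """SGD matrix factorization on (user, item, rating) triples."""
            rng = np.random.default_rng(0)
            U = 0.1 * rng.standard_normal((n_users, k))
            V = 0.1 * rng.standard_normal((n_items, k))
            for _ in range(epochs):
                for u, i, r in ratings:
                    err = r - U[u] @ V[i]
                    U[u] += lr * (err * V[i] - reg * U[u])
                    V[i] += lr * (err * U[u] - reg * V[i])
            return U, V

        # Hypothetical toy data: (user, item, rating) triples on a 1-5 scale.
        ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (2, 2, 1), (1, 2, 2)]
        U, V = train_mf(ratings, n_users=3, n_items=3)
        print(U[0] @ V[1])   # predicted rating of item 1 by user 0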

  • Article
    Yue-bin LUO, Bao-sheng WANG, Xiao-feng WANG, Bo-feng ZHANG

    Port address hopping (PAH) communication is a powerful network moving target defense (MTD) mechanism, inspired by frequency hopping in wireless communications. One of the critical and difficult issues in PAH is synchronization. Existing schemes usually provide hops for each session lasting only a few seconds or minutes, making them easily affected by network events such as transmission delays, traffic jams, packet dropouts, reordering, and retransmission. To address these problems, we propose a novel self-synchronization scheme, called 'keyed-hashing based self-synchronization (KHSS)'. The proposed method generates a message authentication code (MAC) based on a keyed-hash MAC (HMAC), which is then used as the synchronization information for port address encoding and decoding. This provides the PAH communication system with one-packet-one-hopping and invisible message authentication capabilities, enabling both clients and servers to constantly change their identities and to perform message authentication over unreliable communication media without transmitting synchronization or authentication information. Theoretical analysis, simulations, and experimental results show that the proposed method is effective in defending against man-in-the-middle (MITM) attacks and network scanning, and that it significantly outperforms existing schemes in terms of both security and hopping efficiency.
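
    The keyed-hashing idea can be sketched with Python's standard hmac module: both ends share a key, compute an HMAC over some per-packet quantity (a simple packet counter is used here as a hypothetical choice; the fields and port range are not taken from the paper), and map part of the digest into the allowed port range, so the port changes on every packet.

        import hmac, hashlib

        def hop_port(key: bytes, counter: int, base_port=10000, port_range=20000):
            """Derive the next destination port from an HMAC over the packet counter."""
            mac = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
            return base_port + int.from_bytes(mac[:4], "big") % port_range

        shared_key = b"example-shared-secret"           # hypothetical pre-shared key
        for pkt in range(5):                            # one hop per packet
            print(pkt, hop_port(shared_key, pkt))

    Because both ends can recompute the same HMAC locally, no synchronization messages need to be exchanged.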

  • Article
    Jun-sheng LV, You LI, Yu-mei ZHOU, Jian-zhong ZHAO, Hai-hua SHEN, Feng ZHANG

    A wide-range tracking technique for a clock and data recovery (CDR) circuit is presented. Compared with the traditional technique, a digital CDR controller with calibration is adopted to extend the tracking range. Because digital circuits are used in the design, the CDR is insensitive to process and power-supply variations. To verify the technique, the whole CDR circuit was implemented in 65-nm CMOS technology. Measurements show that the tracking range of the CDR is greater than ±6×10⁻³ at 5 Gb/s. The receiver has good jitter tolerance and achieves a bit error rate of less than 10⁻¹². The re-timed and re-multiplexed serial data has a root-mean-square jitter of 6.7 ps.

  • Article
    Myung-jae KIM, Il-ho YANG, Min-seok KIM, Ha-jin YU

    We propose a method for histogram equalization using supplement sets to improve the performance of speaker recognition when the training and test utterances are very short. The supplement sets are derived from the background speakers' utterances using the outputs of selection or clustering algorithms. The proposed approach is used as a feature normalization method for building histograms when there are insufficient input utterance samples. In addition, it is used as an i-vector normalization method in an i-vector-based probabilistic linear discriminant analysis (PLDA) system, the current state of the art for speaker verification. The ranks of sample values for histogram equalization are estimated in ascending order from both the input utterances and the supplement set, and new ranks are obtained by summing the different kinds of ranks. The proposed method then determines the cumulative distribution function of the test utterance using the newly defined ranks. The proposed method is compared with conventional feature normalization methods, such as cepstral mean normalization (CMN), cepstral mean and variance normalization (MVN), histogram equalization (HEQ), and the European Telecommunications Standards Institute (ETSI) advanced front-end methods. In addition, performance is compared for the case in which the greedy selection algorithm is used with fuzzy C-means and K-means algorithms. The YOHO and Electronics and Telecommunications Research Institute (ETRI) databases are used for evaluation in the feature space, with test sets simulated by the Opus VoIP codec. We also use the 2008 National Institute of Standards and Technology (NIST) speaker recognition evaluation (SRE) corpus for the i-vector system. The experimental results demonstrate that the proposed method improves average system performance compared with the conventional feature normalization methods.
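
    The rank-based construction can be illustrated in one dimension: features from the short input utterance and the supplement set are pooled, each input value's rank within the pooled data gives its empirical CDF value, and that value is mapped through the inverse CDF of a reference distribution. The sketch below uses a standard normal reference on hypothetical data; the paper's specific rank-combination rule, the selection/clustering of supplement sets, and the i-vector variant are not reproduced.

        import numpy as np
        from scipy.stats import norm, rankdata

        def heq_with_supplement(x, supplement):
            """Histogram-equalize x using ranks computed over x plus a supplement set."""
            pooled = np.concatenate([x, supplement])
            ranks = rankdata(pooled)[: len(x)]              # ranks of x within the pooled data
            cdf = ranks / (len(pooled) + 1)                  # empirical CDF in (0, 1)
            return norm.ppf(cdf)                             # map to a standard normal reference

        # Hypothetical example: a short utterance (20 frames of one cepstral coefficient)
        # equalized with 200 supplement values drawn from background speakers.
        rng = np.random.default_rng(0)
        x = 2.0 + 0.5 * rng.standard_normal(20)
        supplement = 1.8 + 0.6 * rng.standard_normal(200)
        print(heq_with_supplement(x, supplement).round(2))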