Jul 2016, Volume 10 Issue 4
    

  • PERSPECTIVE
    Zhi-Hua ZHOU
  • REVIEW ARTICLE
    Yue WU, Chao LU, Yunji CHEN

    With the rapid development of the semiconductor industry, the number of cores integrated on a chip is increasing quickly, which brings tough challenges such as bandwidth, scalability and power into on-chip interconnection. Against this background, the Network-on-Chip (NoC) has been proposed and is gradually replacing traditional on-chip interconnects such as the shared bus and the crossbar. For the convenience of physical layout, the mesh is the most widely used topology in NoC design. The routing algorithm, which decides the paths of packets, has a significant impact on the latency and throughput of the network, and therefore plays a vital role in a well-performing network. This study focuses on routing algorithms for mesh NoCs. According to whether network information is taken into consideration in routing decisions, NoC routing algorithms can be roughly classified into oblivious routing and adaptive routing. Oblivious routing costs less but offers no adaptiveness, while adaptive routing is the opposite. To combine the advantages of oblivious and adaptive routing algorithms, half-adaptive algorithms were proposed. In this paper, the concepts, taxonomy and features of NoC routing algorithms are introduced. Then the importance of routing algorithms in mesh NoCs is highlighted, and representative routing algorithms with their respective features are reviewed and summarized. Finally, we try to shed light upon future work on NoC routing algorithms.
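
    A concrete instance of the oblivious class mentioned above is dimension-order (XY) routing on a 2D mesh. The sketch below is illustrative only and is not taken from the surveyed papers; the coordinate convention and the returned hop list are our own assumptions.

    ```python
    def xy_route(src, dst):
        """Dimension-order (XY) routing on a 2D mesh.

        An oblivious algorithm: the path depends only on the source and
        destination coordinates, never on network load. The packet first
        travels along the X dimension, then along the Y dimension, which
        keeps the route deadlock-free on a mesh.
        """
        x, y = src
        dx, dy = dst
        hops = []
        while x != dx:                      # correct the X coordinate first
            x += 1 if dx > x else -1
            hops.append((x, y))
        while y != dy:                      # then correct the Y coordinate
            y += 1 if dy > y else -1
            hops.append((x, y))
        return hops

    # Route from router (0, 0) to router (2, 3) on a 4x4 mesh.
    print(xy_route((0, 0), (2, 3)))  # [(1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
    ```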

  • RESEARCH ARTICLE
    Junha LEE, Dae-Kyoo KIM, Suntae KIM, Sooyong PARK

    Cohesion is a design quality that has a great impact on subsequent development and maintenance. As software evolves, the cohesion of the system weakens due to the changes introduced during evolution. Over the course of evolution, a single-responsibility class may be unintentionally assigned other responsibilities, which makes the class less cohesive and more complex and consequently increases the complexity of the entire system. There has been much work on decomposing class responsibilities based on internal class relationships such as method-attribute referencing and internal method calls. However, object-oriented systems also involve significant external class relationships carrying important behavioral semantics, which should be taken into account in identifying class responsibilities. In this paper, we present a novel approach for identifying and decomposing class responsibilities based on method similarity, using both internal and external class relationships. We extend existing work on measuring the similarity of internal class relationships and present a distance-based method for measuring external class relationships. We evaluate the approach using three open source applications: JMeter, JHotDraw, and ArgoUML. The evaluation shows that the presented approach improves precision over existing work. We validate the results using the independent-samples t-test and ANOVA applied to a set of hypotheses. The validation confirms that the results are statistically significant.
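
    As a rough illustration of what a method-similarity measure over internal class relationships can look like (a generic sketch, not the similarity metric defined in the paper), two methods can be compared by the overlap of the attributes they reference:

    ```python
    def attribute_similarity(method_a_refs, method_b_refs):
        """Jaccard overlap of the attributes referenced by two methods.

        A minimal sketch of an internal class relationship measure; the
        paper additionally measures external relationships (interactions
        with other classes) with a distance-based metric.
        """
        a, b = set(method_a_refs), set(method_b_refs)
        if not a and not b:
            return 0.0
        return len(a & b) / len(a | b)

    # Two methods sharing the attribute 'balance' are candidates to stay
    # in the same class after decomposition.
    print(attribute_similarity({"balance", "owner"}, {"balance", "rate"}))  # 0.333...
    ```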

  • RESEARCH ARTICLE
    Xiaofang QI, Jun HE, Peng WANG, Huayang ZHOU

    Reachability testing is an important approach to testing concurrent programs. It generates and exercises synchronization sequences automatically and on the fly, without saving any test history. Existing reachability testing can be classified into exhaustive and t-way testing. Exhaustive testing is impractical in many cases, while t-way testing may decrease the capability of fault detection in some cases. In this paper, we present a variable strength reachability testing strategy, which adopts the dynamic framework of reachability testing and uses a variable strength combinatorial strategy. Different parameter groups are covered with different strengths. Instead of covering all t-way combinations, variable strength testing covers only the necessary combinations of parameters that have mutual interactions in a concurrent program. It is more reasonable than t-way testing because uniform interactions between parameters do not often exist in concurrent systems. We propose a merging algorithm that implements the variable strength combinatorial testing strategy and conduct experiments on several concurrent programs. The experimental results indicate that our variable strength reachability testing achieves a good tradeoff between effectiveness and efficiency: it keeps the same fault detection capability as exhaustive reachability testing while substantially reducing the number of synchronization sequences and decreasing the execution time in most cases.
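
    The difference between covering all t-way combinations and covering only the interacting groups can be illustrated with a small sketch. This is our own illustration; the parameter names, the group notation and the plain enumeration are assumptions and do not reproduce the paper's merging algorithm.

    ```python
    from itertools import product

    def required_combinations(domains, interacting_groups):
        """Enumerate the value combinations a variable strength strategy
        must cover: only the parameter groups declared as interacting are
        covered exhaustively, instead of all t-way combinations.

        `domains` maps a parameter name to its possible values;
        `interacting_groups` lists the groups of mutually interacting
        parameters.
        """
        required = set()
        for group in interacting_groups:
            for values in product(*(domains[p] for p in group)):
                required.add((tuple(group), values))
        return required

    domains = {"p1": [0, 1], "p2": [0, 1], "p3": [0, 1, 2]}
    # Only p1 and p3 interact; p2 is independent, so no p2 combinations are required.
    print(len(required_combinations(domains, [("p1", "p3")])))  # 6
    ```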

  • RESEARCH ARTICLE
    Wenmei LIU, Hui LIU

    Extract method is one of the most popular software refactorings. However, little work has been done to investigate or validate the major motivations for such refactorings. Digging into this issue might help researchers improve tool support for extract method refactorings, e.g., by proposing better tools to recommend refactoring opportunities and to select fragments to be extracted. To this end, we conducted an interview with 25 developers, and our results suggest that current reuse, decomposition of long methods, clone resolution, and future reuse are the major motivations for extract method refactorings. We also validated the results by analyzing the refactoring history of seven open-source applications. The analysis results suggest that current reuse was the primary motivation for 56% of extract method refactorings, decomposition of methods was the primary motivation for 28%, and clone resolution was the primary motivation for 16%. These findings suggest that recommending extract method opportunities by analyzing only the inner structure (e.g., complexity and length) of methods would miss many extract method opportunities. They also suggest that extract method refactorings are often driven by current, immediate reuse. Consequently, recognizing or predicting reuse requirements in a timely manner during software evolution may play a key role in the recommendation and automation of extract method refactorings. We also investigated the likelihood that the extracted methods are reused in the future, and our results suggest that such methods have only a small chance (12%) of being reused later unless the extracted fragment can be reused immediately during software evolution and extracting it resolves existing clones at the same time.

  • RESEARCH ARTICLE
    Samir ZEGHLACHE, Djamel SAIGAA, Kamel KARA

    In this paper, a robust controller for the six-degrees-of-freedom (6-DOF) control of an octorotor helicopter is proposed in the presence of actuator and sensor faults. Neural networks (NN), an interval type-2 fuzzy logic control (IT2FLC) approach and the sliding mode control (SMC) technique are used to design a controller, named the fault tolerant neural network interval type-2 fuzzy sliding mode controller (FTNNIT2FSMC), for each subsystem of the octorotor helicopter. The proposed control scheme avoids difficult modeling, attenuates the chattering effect of the SMC, reduces the number of rules of the fuzzy controller, and guarantees the stability and robustness of the system. The simulation results show that the FTNNIT2FSMC greatly alleviates the chattering effect and tracks well in the presence of actuator and sensor faults.
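
    Chattering attenuation in sliding mode control is commonly obtained by smoothing the discontinuous switching term. The generic, single-axis sketch below only illustrates that idea and is not the octorotor controller of the paper; the gains, the sliding surface and the boundary-layer width are illustrative assumptions.

    ```python
    import numpy as np

    def smc_control(error, error_dot, lam=2.0, k=5.0, phi=0.1):
        """Generic sliding mode control law with a boundary layer.

        s = de/dt + lam * e defines the sliding surface; replacing sign(s)
        by a saturation of width `phi` trades a small tracking error for
        greatly reduced chattering. In the paper the switching and
        equivalent control terms are further shaped by an interval type-2
        fuzzy system and neural networks.
        """
        s = error_dot + lam * error
        sat = np.clip(s / phi, -1.0, 1.0)   # smooth replacement for sign(s)
        return -k * sat

    print(smc_control(error=0.05, error_dot=-0.2))  # 5.0
    ```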

  • RESEARCH ARTICLE
    Xin XU, Wei WANG, Jianhong WANG

    Radar emitter identification has been recognized as an indispensable task for electronic intelligence systems. With the increasing accumulation of radar emitter intelligence and information, one key issue is to rebuild the radar emitter classifier efficiently with newly arrived information. Although existing incremental learning algorithms save significant computational cost by learning incrementally on continuously increasing training samples, they are not adaptable enough when emitter types, features and samples are all increasing dramatically. For instance, the intra-pulse characteristics of emitter signals could be further extracted and thus expand the feature dimension. The same goes for the radar emitter type dimension when samples from new radar emitter types are gathered. In addition, existing incremental classifiers are still problematic in terms of computational cost, sensitivity to data input order, and difficulty in multi-emitter type identification. To address the above problems, we bring forward a three-way incremental learning algorithm (TILA) for radar emitter identification which is adaptable to increases in emitter features, types and samples.

  • RESEARCH ARTICLE
    Kang LI, Fazhi HE, Xiao CHEN

    Recently, compressive tracking (CT) has been widely studied for its efficiency, accuracy and robustness on many challenging sequences. Its appearance model employs non-adaptive random projections that preserve the structure of the image feature space. A very sparse measurement matrix is used to extract features by multiplying it with the feature vector of the image patch. An adaptive Bayes classifier is trained with both positive and negative samples to separate the target from the background. In the CT framework, however, some features used for classification have weak discriminative ability, which reduces the accuracy of the strong classifier. In this paper, we present an online compressive feature selection algorithm (CFS) based on the CT framework. It selects the features that have the largest margin when they are used to classify positive and negative samples. For features that are not selected, we define a random learning rate to update them slowly, so that the corresponding weak classifiers preserve more target information, which relieves drift when the appearance of the target changes heavily. The classifier trained with these discriminative features therefore performs well on many challenging sequences, which leads to a more robust tracker. Numerous experiments show that our tracker achieves superior results compared with many state-of-the-art trackers.
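
    The feature extraction step described above can be sketched as follows; the dimensions, the sparsity parameter and the random seed are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def sparse_measurement_matrix(n_compressed, n_features, s=None, rng=None):
        """Very sparse random projection matrix of the kind used by
        compressive tracking: entries are +sqrt(s), 0, -sqrt(s) with
        probabilities 1/(2s), 1 - 1/s, 1/(2s), so most entries are zero
        and the projection is cheap to compute.
        """
        rng = rng or np.random.default_rng(0)
        s = s or n_features // 4
        return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                          size=(n_compressed, n_features),
                          p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])

    # Compress a 10,000-dimensional feature vector of an image patch to 50 features.
    R = sparse_measurement_matrix(50, 10_000)
    x = np.random.default_rng(1).random(10_000)
    v = R @ x          # compressed features fed to the classifier
    print(v.shape)     # (50,)
    ```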

  • RESEARCH ARTICLE
    Wayne Xin ZHAO, Ji-Rong WEN, Xiaoming LI

    Timeline generation is an important research task which can help users quickly understand the overall evolution of a given topic. Previous methods simply split the time span into fixed, equal time intervals without studying the role of the evolutionary patterns of the underlying topic in timeline generation. In addition, few of these methods take users’ collective interests into consideration when generating timelines.

    We consider utilizing social media attention to address these two problems for two reasons: 1) social media is an important pool of real users’ collective interests; 2) the information cascades generated in it can be good indicators of the boundaries of topic phases. Taking Twitter as a basis, we propose to incorporate topic phases and users’ collective interests, which are learned from social media, into a unified timeline generation algorithm. We construct one informativeness-oriented and three interestingness-oriented evaluation sets over five topics. We demonstrate that our method is very effective in generating both informative and interesting timelines. In addition, our idea naturally leads to a novel presentation of timelines, i.e., phase-based timelines, which can potentially improve the user experience.
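
    One very simple way to turn social media attention into phase-boundary indicators (an illustration of the idea only, not the model proposed in the paper) is to cut the timeline wherever the daily tweet volume of the topic jumps or drops sharply:

    ```python
    def phase_boundaries(daily_counts, ratio=3.0):
        """Mark a phase boundary wherever social media attention changes
        sharply between consecutive days.

        `daily_counts` is a list of tweet counts per day for the topic;
        the threshold `ratio` is an illustrative choice.
        """
        boundaries = []
        for day in range(1, len(daily_counts)):
            prev, cur = daily_counts[day - 1], daily_counts[day]
            if max(prev, cur) >= ratio * max(1, min(prev, cur)):
                boundaries.append(day)
        return boundaries

    # A burst on day 3 and a decay on day 6 split the topic into three phases.
    print(phase_boundaries([20, 25, 22, 180, 170, 160, 30, 28]))  # [3, 6]
    ```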

  • RESEARCH ARTICLE
    Cungen CAO, Yuefei SUI, Zaiyue ZHANG

    Hoare logic is a logic used as a way of specifying the semantics of programming languages; it has been extended to a separation logic to reason about mutable heap structure. In a model M of Hoare logic, each program α induces an M-computable function f^M_α on the universe of M, and the M-recursive functions are defined on M. It is proved that the class of all M-computable functions f^M_α induced by programs is equal to the class of all M-recursive functions. Moreover, each M-recursive function is Σ^{N_M}_1-definable in M, where the universal quantifier is a number quantifier ranging over the standard part of the nonstandard model M.
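
    Written out, the central claim of the abstract reads as follows (our transcription of the statement above; N_M denotes the standard part of the nonstandard model M):

    ```latex
    \[
      \{\, f^{M}_{\alpha} \mid \alpha \text{ is a program} \,\}
      \;=\;
      \{\, f \mid f \text{ is } M\text{-recursive} \,\},
      \qquad
      \text{and every } M\text{-recursive } f \text{ is } \Sigma^{N_M}_{1}\text{-definable in } M.
    \]
    ```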

  • RESEARCH ARTICLE
    Zongzhang ZHANG, Qiming FU, Xiaofang ZHANG, Quan LIU

    Partially observable Markov decision processes (POMDPs) provide a rich mathematical framework for planning tasks in partially observable stochastic environments. The notion of the covering number, a metric that captures the size of the search space of a POMDP planning problem, has been proposed as a complexity measure of approximate POMDP planning. Existing theoretical results are based on POMDPs with finite and discrete state spaces, are measured in the l1-metric space, and assume that any heuristics considered are always admissible. This paper extends the theoretical results on the covering numbers of different search spaces, including the newly defined space reachable under inadmissible heuristics, to ln-metric spaces. We provide a simple but scalable algorithm for estimating covering numbers. Experimentally, we report estimated covering numbers of the search spaces reachable by following different policies on several benchmark problems, and analyze their ability to predict the runtime of POMDP planning algorithms.
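
    A covering number can be estimated with a greedy procedure of the following flavor (a generic sketch under our own assumptions, not necessarily the estimation algorithm proposed in the paper):

    ```python
    import numpy as np

    def greedy_covering_number(points, delta):
        """Greedily estimate the covering number of a sampled search space:
        the number of balls of radius `delta` needed so that every sampled
        point lies in some ball. Distances here use the l1 metric, the
        metric in which the existing theoretical results are stated.
        """
        centers = []
        for p in points:
            if not any(np.abs(p - c).sum() <= delta for c in centers):
                centers.append(p)       # p is not yet covered: open a new ball
        return len(centers)

    # Beliefs over 3 states sampled uniformly from the probability simplex.
    rng = np.random.default_rng(0)
    beliefs = rng.dirichlet(np.ones(3), size=500)
    print(greedy_covering_number(beliefs, delta=0.2))
    ```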

  • RESEARCH ARTICLE
    Hao WANG, Zhen LIU, Zhe LIU, Duncan S. WONG

    Signcryption is a public key cryptographic method that achieves unforgeability and confidentiality simultaneously with significantly smaller overhead than that required by “digital signature followed by public key encryption”. It does this by signing and encrypting a message in a single step. An aggregate signcryption scheme allows individual signcryption ciphertexts intended for the same recipient to be aggregated into a single (shorter) combined ciphertext without losing any of the security guarantees. We present an aggregate signcryption scheme in the identity-based setting using multilinear maps, and provide a proof of security in the standard model. To the best of our knowledge, our new scheme is the first aggregate signcryption scheme that is secure in the standard model.

  • RESEARCH ARTICLE
    Alampallam Ramaswamy VASUDEVAN, Subramanian SELVAKUMAR

    Identification of attacks by a network intrusion detection system (NIDS) is an important task. In signature- or rule-based detection, previously encountered attacks are modeled and signatures/rules are extracted, which are then used to detect such attacks in the future. In anomaly or outlier detection, by contrast, the normal network traffic is modeled, and any deviation from the normal model is deemed an outlier/attack. Data mining and machine learning techniques are widely used in offline NIDS. Unsupervised and supervised learning techniques differ in the way the NIDS dataset is treated: unsupervised learning finds patterns in the data and detects outliers, whereas supervised learning determines a learned function over the input features that generalizes the data instances. The intuition is that combining these two techniques may yield better performance. Hence, in this paper the advantages of unsupervised and supervised techniques are inherited in the proposed hierarchical model, which is devised in three stages to detect attacks in a NIDS dataset. The NIDS dataset is first clustered using Dirichlet process (DP) clustering based on the underlying data distribution. Iteratively, on each cluster, locally denser areas are identified using the local outlier factor (LOF), and the records are discretized into four bins according to their LOF scores. Further, in each bin the normal data instances are modeled using a one-class classifier (OCC). A combination of a density estimation method, a reconstruction method, and a boundary method is used for the OCC model; a product-rule combination of the three methods takes into account the strengths of each method in building a stronger OCC model. Any deviation from this model is considered an attack. Experiments are conducted on the KDD CUP’99 and SSENet-2011 datasets. The results show that the proposed model is able to identify attacks with a higher detection rate and low false alarms.
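
    A minimal sketch of the flavor of this pipeline, using off-the-shelf scikit-learn estimators (the DP clustering stage, the four-bin discretization and the three-method product-rule combination are omitted; the synthetic features, estimators and thresholds below are illustrative assumptions):

    ```python
    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor
    from sklearn.svm import OneClassSVM

    # Toy "normal traffic": two numeric features per connection record.
    rng = np.random.default_rng(0)
    normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

    # Stage 1 (sketch): score local density with LOF; the paper discretizes
    # these scores into four bins, here we simply keep the densest records.
    lof = LocalOutlierFactor(n_neighbors=20)
    lof.fit(normal)
    dense = normal[lof.negative_outlier_factor_ > -1.5]

    # Stage 2 (sketch): model normal behaviour with a one-class classifier.
    occ = OneClassSVM(nu=0.05, gamma="scale").fit(dense)

    # Stage 3: anything the model rejects (-1) is flagged as an attack.
    test = np.vstack([normal[:5], rng.normal(loc=6.0, scale=0.5, size=(5, 2))])
    print(occ.predict(test))   # mostly +1 for normal rows, -1 for the shifted ones
    ```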