Dec 2013, Volume 7 Issue 6
    

  • RESEARCH ARTICLE
    Dongchen JIANG, Wei LI, Jie LUO, Yihua LOU, Zhengzhong LIAO

    This paper proposes a decomposition-based algorithm for revision problems in classical propositional logic. A set of decomposing rules is presented to analyze the satisfiability of formulas: the satisfiability of a formula is reduced to the satisfiability of a set of literal sets. A decomposing function is constructed to calculate all satisfiable literal sets of a given formula; with respect to satisfiability, these literal sets are equivalent to all models that satisfy the formula. A revision algorithm based on this decomposing function is proposed, which can calculate maximal contractions of a given problem. To reduce the memory requirement, a filter function is introduced. The improved algorithm performs well in terms of both time and space.
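    As a rough illustration of the decomposition idea (not the authors' exact rule set), the sketch below enumerates the consistent literal sets of a small propositional formula; the tuple-based formula encoding and function names are assumptions made for this example.

```python
# Toy decomposing function: enumerate consistent literal sets that witness the
# satisfiability of a formula (hypothetical encoding; formulas are nested
# tuples such as ('and', 'p', ('or', ('not', 'q'), 'r'))).

def nnf(f, neg=False):
    """Push negations inward so only atoms are negated."""
    if isinstance(f, str):
        return ('not', f) if neg else f
    op = f[0]
    if op == 'not':
        return nnf(f[1], not neg)
    if op in ('and', 'or'):
        flip = {'and': 'or', 'or': 'and'}[op] if neg else op
        return (flip,) + tuple(nnf(g, neg) for g in f[1:])
    raise ValueError(op)

def literal_sets(f):
    """Decompose an NNF formula into the literal sets that satisfy it."""
    if isinstance(f, str) or f[0] == 'not':
        return [frozenset([f])]
    if f[0] == 'and':
        sets = [frozenset()]
        for g in f[1:]:
            sets = [s | t for s in sets for t in literal_sets(g)]
    else:  # 'or'
        sets = [s for g in f[1:] for s in literal_sets(g)]
    # keep only consistent sets (no atom together with its negation)
    return [s for s in sets
            if not any(('not', a) in s for a in s if isinstance(a, str))]

formula = ('and', 'p', ('or', ('not', 'p'), 'q'))
print(literal_sets(nnf(formula)))   # e.g. [frozenset({'p', 'q'})]
```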

  • RESEARCH ARTICLE
    Dantong OUYANG, Xianji CUI, Yuxin YE

    Extensions of logic-based knowledge bases with integrity constraints are rather popular. We put forward an alternative criterion for analyzing integrity constraints in Web ontology language (OWL) ontologies under the closed world assumption. According to this criterion, grounded circumscription is applied to define integrity constraints in an OWL ontology, and the satisfaction of integrity constraints is determined by minimizing the extensions of the predicates occurring in them. Based on this semantics of integrity constraints, we provide a modified tableau algorithm that is sound and complete for deciding the consistency of an extended ontology. Finally, integrity constraint validation is reduced to checking the consistency of the corresponding extended ontology. Comparing our approach with existing integrity constraint validation approaches, we show that in certain cases its results are more in accordance with user requirements.
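    The closed-world reading of integrity constraints can be illustrated with a deliberately tiny example; the toy assertions and constraint below are hypothetical and do not reproduce the paper's grounded-circumscription tableau.

```python
# Toy illustration of why closed-world integrity constraint validation differs
# from open-world entailment. The data and constraint are hypothetical.

abox = {
    'Employee': {'alice', 'bob'},
    'worksFor': {('alice', 'acme')},   # bob's employer is simply not asserted
}

def violations_cwa(abox):
    """IC: every Employee must have an explicitly known worksFor filler.
    Under the closed world assumption, unasserted facts are treated as false,
    so bob is reported as a violation."""
    known = {s for (s, _) in abox.get('worksFor', set())}
    return sorted(abox.get('Employee', set()) - known)

def violations_owa(abox):
    """Under the open world assumption the same axiom is merely unverified:
    some unknown employer may exist, so nothing is reported."""
    return []

print(violations_cwa(abox))  # ['bob']
print(violations_owa(abox))  # []
```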

  • RESEARCH ARTICLE
    Dunwei GONG, Yan ZHANG

    The aim of software testing is to find faults in a program under test, so generating test data that can expose these faults is very important. To date, studies on generating test data for path coverage do not perform well in detecting low-probability faults on the covered path. This study focuses on the automatic generation of test data for both path coverage and fault detection using genetic algorithms. To this end, the problem is first formulated as a bi-objective optimization problem with one constraint, whose objectives are the number of faults detected in the traversed path and the risk level of these faults, and whose constraint is that the traversed path must be the target path. An evolutionary algorithm is employed to solve the formulated model, and several types of fault-detection methods are given. Finally, the proposed method is applied to several real-world programs and compared with a random method and an evolutionary optimization method in three aspects: the number of generations and the time needed to generate the desired test data, and the success rate of fault detection. The experimental results confirm that the proposed method can effectively generate test data that not only traverse the target path but also detect the faults lying in it.
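    A minimal sketch of such a constrained, bi-objective fitness evaluation is given below; the instrumented program, fault table, and toy search loop are hypothetical stand-ins for the paper's subject programs and evolutionary algorithm.

```python
import random

# The constraint is "the input must traverse the target path"; the objectives
# are the number and risk level of faults detected on that path.

TARGET_PATH = ('n1', 'n2', 'n4')
FAULT_RISK = {'n2': 3, 'n4': 1}          # node -> risk level of a seeded fault

def run_instrumented(x):
    """Hypothetical program under test: returns the executed path and the
    set of nodes whose seeded fault was actually triggered by input x."""
    path = ['n1', 'n2' if x > 0 else 'n3']
    path.append('n4' if x % 7 == 0 else 'n5')
    triggered = {n for n in path if n in FAULT_RISK and x % 2 == 1}
    return tuple(path), triggered

def fitness(x):
    path, faults = run_instrumented(x)
    if path != TARGET_PATH:              # constraint violated
        return (-1, 0, 0)
    return (0, len(faults), sum(FAULT_RISK[n] for n in faults))

# toy search loop standing in for the evolutionary algorithm
best = max(random.sample(range(-100, 100), 50), key=fitness)
print(best, fitness(best))
```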

  • RESEARCH ARTICLE
    Chunping LIU, Yang ZHENG, Shengrong GONG

    Image categorization in massive image databases is an important problem. This paper proposes an approach to image categorization that uses a sparse set of salient semantic regions and a hierarchical semantic label tree (HSLT) model. First, to capture the most critical image semantics, a sparse set of salient regions, taken only at the focuses of visual attention rather than over the entire scene, is formed by the proposed saliency detection model, which incorporates low- and high-level features with Shotton's semantic texton forests (STFs). Second, we propose a new HSLT model that uses this sparse regional semantic information to automatically build a semantic image hierarchy, which explicitly encodes a general-to-specific image relationship. Finally, the image dataset is archived according to this hierarchy, which helps to improve the performance of image organization and browsing. Extensive experimental results show that using semantic hierarchies as an organizing framework provides better image annotation and organization, improves accuracy, and reduces human effort.
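    As a loose illustration of a general-to-specific image hierarchy (not the paper's HSLT construction), the sketch below groups images first by a broad region label and then by a more specific one; the labels and data are hypothetical.

```python
from collections import defaultdict

# Toy two-level hierarchy built from per-image salient-region labels.
images = {
    'img1': ['animal', 'dog'],
    'img2': ['animal', 'cat'],
    'img3': ['vehicle', 'car'],
    'img4': ['animal', 'dog'],
}

def build_hierarchy(images):
    tree = defaultdict(lambda: defaultdict(list))
    for name, (general, specific) in images.items():
        tree[general][specific].append(name)
    return tree

for general, children in build_hierarchy(images).items():
    print(general)
    for specific, members in children.items():
        print(' ', specific, '->', members)
```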

  • RESEARCH ARTICLE
    Zengchang QIN, Tao WAN

    The decision tree (DT) is one of the classical machine learning models, valued for its simplicity and effectiveness in applications. Compared to the DT model, however, probability estimation trees (PETs) give a better estimation of class probability. To obtain a good probability estimation, we usually need large trees, which are undesirable with respect to model transparency. The linguistic decision tree (LDT) is a PET model based on label semantics: fuzzy labels are used to build the tree, and each branch is associated with a probability distribution over classes. If there is no overlap between neighboring fuzzy labels, these fuzzy labels become discrete labels, and an LDT with discrete labels becomes a special case of the PET model. In this paper, two hybrid models combining the naive Bayes classifier and PETs are proposed in order to build a model with good performance without losing too much transparency. The first model uses naive Bayes estimation given a PET, and the second model uses a set of small-sized PETs as estimators by assuming independence between these trees. Empirical studies on discrete and fuzzy labels show that the first model outperforms the PET model at shallow depths, and the second model is equivalent to naive Bayes and the PET model.
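    The second hybrid idea can be sketched with toy single-split trees whose class distributions are multiplied naive-Bayes style under the independence assumption; the data, thresholds, and function names below are hypothetical.

```python
from collections import Counter

# Several small probability estimation trees (here, one-split "stumps") each
# output a class distribution; the distributions are combined multiplicatively,
# as naive Bayes would under an independence assumption.

data = [  # (feature vector, class)
    ((1.0, 0.2), 'pos'), ((0.9, 0.1), 'pos'), ((0.2, 0.8), 'neg'),
    ((0.3, 0.9), 'neg'), ((0.8, 0.7), 'pos'), ((0.1, 0.6), 'neg'),
]
classes = ['pos', 'neg']

def stump_distribution(feature_idx, threshold, x):
    """Class distribution at the leaf of a one-split tree (Laplace-smoothed)."""
    side = x[feature_idx] > threshold
    leaf = [c for (v, c) in data if (v[feature_idx] > threshold) == side]
    counts = Counter(leaf)
    total = len(leaf) + len(classes)
    return {c: (counts[c] + 1) / total for c in classes}

def hybrid_predict(x, stumps):
    """Multiply per-stump class probabilities and renormalize."""
    score = {c: 1.0 for c in classes}
    for idx, thr in stumps:
        dist = stump_distribution(idx, thr, x)
        for c in classes:
            score[c] *= dist[c]
    z = sum(score.values())
    return {c: score[c] / z for c in classes}

print(hybrid_predict((0.85, 0.3), stumps=[(0, 0.5), (1, 0.5)]))
```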

  • RESEARCH ARTICLE
    Ran LIU, Wenjian LUO, Lihua YUE

    Recently, negative databases (NDBs) have been proposed for privacy protection. As with traditional databases, some basic operations can be conducted over NDBs, such as select, intersection, update, and delete. However, neither classification nor clustering in negative databases has yet been studied. Therefore, this paper proposes two algorithms: a k-nearest-neighbor (kNN) classification algorithm and a k-means clustering algorithm over NDBs. The core of both algorithms is a novel method for estimating the Hamming distance between a binary string and an NDB. Experimental results demonstrate that classification and clustering in NDBs are promising.
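    A hedged illustration of distance estimation over a negative database is sketched below; it is not the paper's estimator, but it shows the flavor: a toy NDB hides one string, and each hidden bit is guessed as the complement of the majority specified value in that position.

```python
import random

# Toy NDB: each record over {0,1,*} mismatches the hidden string at its
# specified positions, so the hidden string is never stored explicitly.

def make_toy_ndb(hidden, entries=200, specified=3):
    n, ndb = len(hidden), []
    for _ in range(entries):
        pos = random.sample(range(n), specified)
        rec = ['*'] * n
        for p in pos:
            rec[p] = '1' if hidden[p] == '0' else '0'   # force a mismatch
        ndb.append(''.join(rec))
    return ndb

def estimate_hamming(query, ndb):
    """Guess each hidden bit from the NDB, then count disagreements."""
    n = len(query)
    guess = []
    for i in range(n):
        ones = sum(1 for rec in ndb if rec[i] == '1')
        zeros = sum(1 for rec in ndb if rec[i] == '0')
        guess.append('0' if ones >= zeros else '1')     # complement of majority
    return sum(q != g for q, g in zip(query, guess))

hidden = '1011001110'
ndb = make_toy_ndb(hidden)
print(estimate_hamming('1011001110', ndb))   # should be close to 0
print(estimate_hamming('0100110001', ndb))   # should be close to 10
```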

  • RESEARCH ARTICLE
    Hadi SABOOHI, Sameem ABDUL KAREEM

    A Web service-based system cannot fulfill a user's goal unless a failure recovery approach exists, since it is inevitable that some Web services will perish or fail before or during transactions. The completion of a composite process relies on the smooth execution of all constituent Web services. A mediator acts as an intermediary between providers and consumers to monitor the execution of these services. If a service fails, the mediator has to recover the whole composite process or else jeopardize the intended goals. Atomic replacement of a perished Web service is usually not applicable because locating a matching Web service is unreliable, nor can the system rely on replacing the dead service with a composite service. In this paper, we propose an automatic renovation plan for the failure recovery of composite semantic services based on subdigraph replacement: a replacement subdigraph is substituted for the original subdigraph that contains the failed service. The replacement is done in two separate phases, offline and online, to make recovery faster. The offline phase foresees all possible subdigraphs, pre-calculates them, and ranks the candidate replacements. The online phase compensates for unwanted effects and executes the replacement subdigraph in place of the original one. We evaluated our approach experimentally and found that it recovers more than half of the simulated failures, a significant improvement over current approaches.
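    A minimal sketch of the subdigraph-replacement idea, under simplified assumptions (a dictionary-based digraph, a pre-ranked offline table, and no compensation logic), might look as follows; all graphs and names are hypothetical.

```python
# The composite process is a digraph of services; when one fails, an
# offline-prepared table supplies ranked replacement subdigraphs that
# reconnect the same entry and exit nodes.

composite = {          # adjacency list of the composite process
    'start': ['A'], 'A': ['B'], 'B': ['C'], 'C': ['end'], 'end': [],
}

# offline phase: for each service, pre-computed replacement subdigraphs,
# best-ranked first
REPLACEMENTS = {
    'B': [
        {'graph': {'A': ['B1'], 'B1': ['B2'], 'B2': ['C']}},
        {'graph': {'A': ['Balt'], 'Balt': ['C']}},
    ],
}

def recover(composite, failed):
    """Online phase: splice the best-ranked replacement subdigraph in place
    of the failed service (compensation of side effects is omitted here)."""
    for cand in REPLACEMENTS.get(failed, []):
        patched = {k: [v for v in vs if v != failed]
                   for k, vs in composite.items() if k != failed}
        for node, succs in cand['graph'].items():
            patched[node] = sorted(set(patched.get(node, [])) | set(succs))
        return patched            # first candidate is the best-ranked one
    raise RuntimeError(f'no replacement available for {failed}')

print(recover(composite, 'B'))
```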

  • RESEARCH ARTICLE
    Wenbing JIN, Feng SHI, Qiugui SONG, Yang ZHANG

    In theory, branch predictors with more complicated algorithms and larger data structures provide more accurate predictions. Unfortunately, overly large structures and excessively complicated algorithms cannot be implemented because of their long access delay. To date, many strategies have been proposed to balance delay against accuracy, but none has completely solved the issue. The architecture for ahead branch prediction (A2BP) separates a traditional predictor into two parts. The first is a small table located at the front-end of the pipeline, which keeps the prediction delay short enough even for aggressive processors. The second comprises the complicated algorithms and large data structures needed for accurate prediction, which are all moved to the back-end of the pipeline. An effective mechanism is introduced for ahead branch prediction at the back-end and small-table update at the front-end. To substantially improve prediction accuracy, an indirect branch prediction algorithm based on branch history and target path (BHTP) is implemented in A2BP. Experiments with the standard performance evaluation corporation (SPEC) benchmarks on the gem5/SimpleScalar simulators demonstrate that A2BP improves average performance by 2.92% compared with a commonly used branch target buffer-based predictor. In addition, indirect branch misses with the BHTP algorithm are reduced by an average of 28.98% compared with the traditional algorithm.
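    A rough software sketch of the front-end/back-end split, with hypothetical structures that stand in for the A2BP hardware, is given below.

```python
# A tiny table at the front-end answers predictions with minimal delay, while
# a larger history-based structure at the back-end computes the next
# prediction "ahead" and pushes it into the front-end table.

FRONT_SIZE = 16

class FrontEndTable:
    """Small, fast table consulted at fetch time."""
    def __init__(self):
        self.table = {}                            # pc % FRONT_SIZE -> target
    def predict(self, pc):
        return self.table.get(pc % FRONT_SIZE)     # may miss; that is cheap
    def install(self, pc, target):
        self.table[pc % FRONT_SIZE] = target

class BackEndPredictor:
    """Large, slow structure indexed by (pc, recent target history)."""
    def __init__(self):
        self.history = ()
        self.table = {}
    def update_and_predict(self, pc, actual_target):
        self.table[(pc, self.history)] = actual_target
        self.history = (self.history + (actual_target,))[-4:]
        # prediction for the next occurrence of pc under the new history,
        # falling back to the most recently observed target
        return self.table.get((pc, self.history), actual_target)

front, back = FrontEndTable(), BackEndPredictor()

def on_fetch(pc):
    return front.predict(pc)             # front-end answers immediately

def on_resolve(pc, actual_target):
    ahead = back.update_and_predict(pc, actual_target)  # slow work at the back-end
    front.install(pc, ahead)             # pushed ahead of the next fetch of pc

on_resolve(0x40, 0x80)
print(hex(on_fetch(0x40)))               # 0x80, supplied by the back-end
```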

  • REVIEW ARTICLE
    Chuang LIN, Yiping DENG, Yuming JIANG

    Performance evaluation plays a crucial role in the design of network systems. Many theoretical tools, including queueing theory, effective bandwidth, and network calculus, have been proposed to provide modeling mechanisms and results. While these theories have been widely adopted for performance evaluation, each has its own limitations. As network systems have become more complex and harder to describe, with a great deal of uncertainty and randomness, some compromise is often necessary and helpful to keep their performance evaluation tractable. Stochastic network calculus (SNC) is such a theoretical tool. Although SNC is a relatively new theory, it is gaining increasing interest and popularity. In the current SNC literature, much attention has been paid to the development of the theory itself, and in recent years researchers have also started applying SNC to the performance analysis of various types of systems. The aim of this paper is to provide a tutorial on this new theoretical tool. Specifically, various SNC traffic models and SNC server models are reviewed. The focus is on how to apply SNC, for which four critical steps are formalized and discussed. In addition, a list of SNC application topics and areas with substantial research potential is presented.
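    For orientation, the deterministic special case of network calculus already shows how arrival and service curves yield bounds; the sketch below computes the classical delay and backlog bounds for a token-bucket arrival curve and a rate-latency server (the stochastic models surveyed in the paper add probabilistic slack to these).

```python
# With a token-bucket arrival curve alpha(t) = sigma + rho*t and a
# rate-latency service curve beta(t) = R*(t - T)+ where rho <= R, the
# classical bounds are: delay <= T + sigma/R and backlog <= sigma + rho*T.

def delay_bound(sigma, rho, R, T):
    assert rho <= R, "stability requires the arrival rate not to exceed the service rate"
    return T + sigma / R

def backlog_bound(sigma, rho, R, T):
    assert rho <= R
    return sigma + rho * T

# e.g. 2 KB burst, 1 Mb/s sustained rate, 10 Mb/s server with 1 ms latency
print(delay_bound(sigma=2_000 * 8, rho=1e6, R=10e6, T=0.001))    # seconds
print(backlog_bound(sigma=2_000 * 8, rho=1e6, R=10e6, T=0.001))  # bits
```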

  • RESEARCH ARTICLE
    Iftikhar Ahmed KHAN, Willem-Paul BRINKMAN, Robert HIERONS

    The purpose of this exploratory research was to study the relationship between the mood of computer users and their use of keyboard and mouse, in order to examine the possibility of creating a generic or individualized mood measure. To this end, a field study (n = 26) and a controlled study (n = 16) were conducted. In the field study, interaction data and self-reported mood measurements were collected during normal PC use over several days. In the controlled study, participants worked on a programming task while listening to high- or low-arousal background music; besides subjective mood measurements, galvanic skin response (GSR) data was also collected. The results showed no generic relationship between the interaction data and the mood data. However, both studies found significant average correlations between mood measurements and personalized regression models based on keyboard and mouse interaction data. Together, the results suggest that individualized mood prediction from keyboard and mouse interaction behaviour is possible.
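    A hedged sketch of the individualized modelling idea, fitting one least-squares regression per user from hypothetical keyboard/mouse features to self-reported mood, might look as follows.

```python
import numpy as np

# Fit one regression per user from interaction features (e.g. typing speed,
# backspace rate, mouse speed) to that user's self-reported mood, then
# correlate the predictions with held-out reports. Data are synthetic.

rng = np.random.default_rng(0)

def fit_personal_model(features, mood):
    """features: (n_samples, n_features); mood: (n_samples,) self-reports."""
    X = np.column_stack([np.ones(len(mood)), features])     # add intercept
    coef, *_ = np.linalg.lstsq(X, mood, rcond=None)
    return coef

def predict(coef, features):
    X = np.column_stack([np.ones(len(features)), features])
    return X @ coef

# one user's sessions: typing speed, backspace rate, mean mouse speed
features = rng.normal(size=(30, 3))
mood = 0.8 * features[:, 0] - 0.5 * features[:, 1] + rng.normal(0, 0.3, 30)

coef = fit_personal_model(features[:20], mood[:20])
pred = predict(coef, features[20:])
print(np.corrcoef(pred, mood[20:])[0, 1])   # per-user correlation
```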

  • RESEARCH ARTICLE
    Xiujie ZHANG, Chunxiang XU, Wenzheng ZHANG, Wanpeng LI

    Threshold public key encryption allows a set of servers to decrypt a ciphertext if a given threshold of authorized servers cooperate. In the threshold setting, we consider the question of how to correctly decrypt a ciphertext when all servers continually leak information about their secret keys to an external attacker. Dodis et al. and Akavia et al. present two concrete schemes for storing secrets on continually leaky servers; however, their constructions are only interactive between two servers. To achieve continual leakage security among more than two servers, we give the first threshold public key encryption scheme secure against adaptive chosen-ciphertext attack in the continual leakage model under three static assumptions. In our model, the servers update their keys individually and asynchronously, without any communication between servers. Moreover, the update procedure is re-randomized and the randomness can leak as well.
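    For background, plain (non-leakage-resilient) threshold ElGamal decryption with Shamir-shared keys can be sketched as below; this toy is insecure, uses tiny demo parameters, and does not implement the paper's continual-leakage scheme or its asynchronous key updates.

```python
import random

# Toy threshold ElGamal: the secret key is Shamir-shared among n servers, any
# t of which can jointly decrypt without reconstructing the key in one place.

P = 2039                  # safe prime, P = 2*Q + 1
Q = 1019                  # prime order of the subgroup generated by G
G = 4                     # generator of the order-Q subgroup

def share_secret(secret, n, t):
    """Shamir-share `secret` in Z_Q among n servers with threshold t."""
    coeffs = [secret] + [random.randrange(Q) for _ in range(t - 1)]
    return {i: sum(c * pow(i, k, Q) for k, c in enumerate(coeffs)) % Q
            for i in range(1, n + 1)}

def lagrange_at_zero(i, subset):
    lam = 1
    for j in subset:
        if j != i:
            lam = lam * j % Q * pow((j - i) % Q, -1, Q) % Q
    return lam

def encrypt(pk, m):
    r = random.randrange(1, Q)
    return pow(G, r, P), m * pow(pk, r, P) % P

def partial_decrypt(share, c1):
    return pow(c1, share, P)

def combine(c2, partials, subset):
    acc = 1
    for i in subset:
        acc = acc * pow(partials[i], lagrange_at_zero(i, subset), P) % P
    return c2 * pow(acc, -1, P) % P

sk = random.randrange(1, Q)
pk = pow(G, sk, P)
shares = share_secret(sk, n=5, t=3)

message = pow(G, 42, P)                       # encode the message in the subgroup
c1, c2 = encrypt(pk, message)
subset = [1, 3, 5]                            # any 3 of the 5 servers suffice
partials = {i: partial_decrypt(shares[i], c1) for i in subset}
print(combine(c2, partials, subset) == message)   # True
```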