The need for safety-critical systems (SCS) is both important and urgent, and their evaluation and verification are test-dependent. SCS are usually complex and very large, so manual testing of SCS is infeasible in practice, and developing automatic test approaches for SCS has become an important trend. This paper defines a formal semantics model for the automatic testing of SCS, called AutTMSCS, which describes behaviors in SCS testing. The model accommodates the high-order collaboration, real-time behavior, and temporariness of SCS testing. Testing tasks, test equipment, and products under test are abstracted and architected in three layers, and a method for automatic testing is given. Based on an extended labeled transition system (LTS), the convergence and correctness of the model are proved to demonstrate its computability, indicating that the testing process of SCS can be automated.
In explicit-state model checking, system properties are typically expressed in linear temporal logic (LTL) and translated into a Büchi automaton (BA) to be checked. To improve the performance of the conversion algorithm, some model checkers introduce intermediate automata, such as the generalized Büchi automaton (GBA). De-generalization is the translation from a GBA to a BA. In this paper, we present a conversion algorithm that translates an LTL formula to a BA directly. A labeling, called the acceptance degree, is introduced to record the acceptance conditions satisfied in each state and transition. Acceptance degree is a set of
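For context, the de-generalization step mentioned above is commonly realized by the classical counter construction, in which each BA state pairs a GBA state with an index over the acceptance sets. The sketch below illustrates that textbook construction only; the dict-based automaton encoding and state names are assumptions for illustration, not the paper's direct LTL-to-BA algorithm.

```python
# Classical GBA -> BA de-generalization (counter construction).
# The counter advances past acceptance set i once a state in that set
# is visited; a run is accepting iff the counter cycles forever.

def degeneralize(states, delta, init, acc_sets):
    """states: iterable of GBA states
    delta: dict mapping (state, symbol) -> set of successor states
    init: set of initial states
    acc_sets: list of k acceptance sets (each a set of states)
    Returns (ba_states, ba_delta, ba_init, ba_acc)."""
    k = len(acc_sets)
    ba_states = {(q, i) for q in states for i in range(k)}
    ba_delta = {}
    for (q, a), succs in delta.items():
        for i in range(k):
            # advance the counter only after visiting acceptance set i
            j = (i + 1) % k if q in acc_sets[i] else i
            ba_delta[((q, i), a)] = {(q2, j) for q2 in succs}
    ba_init = {(q, 0) for q in init}
    ba_acc = {(q, 0) for q in acc_sets[0]}
    return ba_states, ba_delta, ba_init, ba_acc
```

With k acceptance sets, the resulting BA has k times as many states as the GBA, which is exactly the blow-up a direct LTL-to-BA translation seeks to avoid.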
Data races are ubiquitous in multi-threaded applications, but they are by no means easy to detect. One of the most important reasons is the complexity of thread interleavings. A large volume of research has been devoted to interleaving-insensitive detection. However, all previous work focuses on uniform detection (agnostic to the characteristics of thread interleavings), making the detection defective in either reporting false positives or suffering prohibitive overhead. To cope with this problem, we propose an efficient, precise, and sound step-by-step resolution based on the characteristics of thread interleavings. We first tease apart the categories of thread interleavings arising from several typical sources of lock synchronization. We then conduct a brief study and find a new and complex pattern that previous work cannot detect. We also reveal that the simple pattern covering the majority of thread interleavings can be resolved by a simple processing step, yielding a large benefit for previous reordering-based designs. Our experimental results demonstrate the effectiveness of our empiricism-based approach, showing that 51.0% of execution time and 52.3% of trace size incurred by the state-of-the-art reordering technique can be saved through a quick filtering of the simple pattern, with a negligible (4.45%) performance overhead introduced on-the-fly.
The internet of things (IoT) attracts great interest in many application domains concerned with the monitoring and control of physical phenomena. However, application development is still one of the main hurdles to wide adoption of IoT technology. Development is done at a low level, very close to the operating system, and requires programmers to focus on low-level system issues. The underlying APIs can be very complicated, and the amount of data collected can be huge, which is hard for developers to deal with. In this paper, we present a runtime-model-based approach to IoT application development. First, the manageability of sensor devices is abstracted as runtime models that are automatically connected with the corresponding systems. Second, a customized model is constructed for a personalized application scenario, and synchronization between the customized model and the sensor-device runtime models is ensured through model transformation. Thus, all application logic can be carried out by executing programs on the customized model. An experiment on a real-world application scenario demonstrates the feasibility, effectiveness, and benefits of the new approach to IoT application development.
Recently, software crowdsourcing has become a viable development paradigm for Software-as-a-Service (SaaS) ecosystems. TopCoder, one of the largest competitive programming communities, enables enterprises to tap into its global talent pool and crowdsource a variety of SaaS development tasks, including requirement analysis, architecture design, coding, and testing. Many researchers have proposed auction-based modelling methods to characterize general software crowdsourcing, but few papers comprehensively analyze the SaaS crowdsourcing process and its developer community.
This paper introduces a holistic analysis framework to model SaaS-oriented software crowdsourcing along two dimensions: individual behavior in crowdsourcing contests and collective competition in the community. The framework includes a game-theoretical model describing the competitive nature of the software crowdsourcing process. Moreover, the framework defines a competition network to characterize the topological properties of a crowdsourcing community for SaaS development. The analysis of this model indicates that the success of a competitive software crowdsourcing project essentially depends upon networks of key participants with sufficient skills and dedication for the project. This is validated by large-scale historical data collected from the TopCoder website over a ten-year period.
Phase change memory (PCM) is a promising candidate to replace DRAM as main memory, thanks to its better scalability and lower static power than DRAM. However, PCM also presents a few drawbacks, such as long write latency and high write power. Moreover, the write command parallelism of PCM is restricted by instantaneous power constraints, which degrades write bandwidth and overall performance. The write power of PCM is asymmetric: writing a zero consumes more power than writing a one. In this paper, we propose a new scheduling policy, write power asymmetry scheduling (WPAS), that exploits this asymmetry. WPAS improves the write command parallelism of PCM memory without violating the power constraint. The evaluation results show that WPAS can improve performance by up to 35.5%, and by 18.5% on average. The effective read latency can be reduced by up to 33.0%, and by 17.1% on average.
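The core idea of exploiting write-power asymmetry can be sketched as a greedy batcher that packs writes under an instantaneous power budget, charging more for zero bits than one bits. The per-bit costs, the budget value, and the first-fit packing rule below are illustrative assumptions, not the actual WPAS policy.

```python
# Sketch: power-aware batching of PCM writes under a power budget.
# Assumed cost model: writing a zero draws twice the power of a one.
P_ZERO, P_ONE = 2.0, 1.0

def write_power(word):
    """Estimate the power of writing one word (a bit string)."""
    return sum(P_ZERO if b == '0' else P_ONE for b in word)

def schedule(pending, budget):
    """Group pending writes into batches whose summed power fits the budget.
    Cheaper (one-heavy) writes are packed first to raise parallelism."""
    batches = []  # each batch: [used_power, [words]]
    for w in sorted(pending, key=write_power):
        for batch in batches:
            if batch[0] + write_power(w) <= budget:
                batch[0] += write_power(w)
                batch[1].append(w)
                break
        else:
            batches.append([write_power(w), [w]])
    return [b[1] for b in batches]
```

Under this toy model, one-heavy words can be issued in parallel while zero-heavy words monopolize the budget, which is the intuition behind scheduling around the asymmetry.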
This paper presents a high-quality very large scale integration (VLSI) global router in the X-architecture, called XGRouter, that relies heavily on integer linear programming (ILP) techniques, a partition strategy, and particle swarm optimization (PSO). A new ILP formulation is proposed that achieves a more uniform routing solution than other formulations and can be solved effectively by the proposed PSO. To use the new ILP formulation effectively, a partition strategy that decomposes a large-sized problem into small-sized sub-problems is adopted, and the routing region is extended progressively from the most congested region. In the post-processing stage of XGRouter, maze routing based on a new routing-edge cost is designed to further optimize the total wire length and maintain congestion uniformity. To the best of our knowledge, XGRouter is the first work to use a concurrent algorithm to solve the global routing problem in the X-architecture. Experimental results show that XGRouter produces solutions of higher quality than other global routers and, like several state-of-the-art global routers, has no overflow.
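For readers unfamiliar with the PSO component, a minimal generic optimizer is sketched below. It is only the textbook algorithm on a continuous objective; the inertia/learning parameters and the test function are assumptions, and XGRouter's discrete routing-specific formulation is not reproduced here.

```python
import random

# Minimal particle swarm optimization: each particle tracks its own best
# position (pbest) and is pulled toward both pbest and the global best.
def pso(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

Applied to routing, each particle would encode a candidate assignment of nets to regions rather than a real vector, but the pbest/gbest update structure stays the same.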
Entity resolution (ER) is the problem of identifying and grouping different manifestations of the same real-world object. Algorithmic approaches have been developed, with supervised learning offering superior performance on most tasks. However, the prohibitive cost of labeling training data is still a huge obstacle for detecting duplicate query records from online sources. Furthermore, unique combinations of noisy data with missing elements make ER tasks more challenging. To address this, transfer learning has been adopted to adaptively share learned common structures of similarity-scoring problems between multiple sources. Although such techniques reduce the labeling cost so that it is linear with respect to the number of sources, their random sampling strategy cannot adequately handle the common sample-imbalance problem. In this paper, we present a novel multi-source active transfer learning framework that jointly selects fewer data instances from all sources to train classifiers with constant precision/recall. The intuition behind our approach is to actively label the most informative samples while adaptively transferring collective knowledge between sources. In this way, the learned classifiers can be both label-economical and flexible, even for imbalanced or quality-diverse sources. We compare our method with state-of-the-art approaches on real-world datasets. Our experimental results demonstrate that our active transfer learning algorithm can achieve impressive performance with far fewer labeled samples for record matching across numerous and varied sources.
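The "actively label the most informative samples" step is commonly realized by uncertainty sampling: querying the instances the current classifier is least sure about. The helper below is a generic sketch of that idea under an assumed probabilistic classifier, not the paper's multi-source selection criterion.

```python
# Uncertainty sampling: pick the unlabeled instances whose predicted
# match probability is closest to 0.5 (maximum classifier uncertainty).
def most_informative(probs, budget):
    """probs: predicted match probabilities for unlabeled record pairs.
    Returns the indices of the `budget` most uncertain pairs."""
    return sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))[:budget]
```

In an active learning loop, these indices would be sent to an annotator, the labels added to the training set, and the classifier retrained, repeating until the labeling budget is exhausted.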
Hashtags, terms prefixed by a hash symbol (#), are widely used and inserted anywhere within short messages (tweets) on micro-blogging systems, as they present rich sentiment information on topics that people are interested in. In this paper, we focus on the problem of hashtag recommendation, considering its personalized and temporal aspects. As far as we know, this is the first work to recommend personalized hashtags by combining long-term and short-term user interest. We introduce three features to capture personal and temporal user interest: 1) hashtag textual information; 2) user behavior; and 3) time. We offer two recommendation models for comparison: a linear-combined model, and an enhanced session-based temporal graph (STG) model, Topic-STG, which uses these features to learn user preferences and subsequently recommend personalized hashtags. Experiments on two real tweet datasets illustrate the effectiveness of the proposed models and algorithms.
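The linear-combined model can be sketched as a weighted mix of a long-term score (from the user's whole history) and a short-term score (from recent tweets). The weighting, score tables, and function name below are illustrative assumptions; the paper's actual features and learned weights are not reproduced.

```python
# Sketch: rank hashtags by a linear combination of long-term and
# short-term interest scores for one user.
def recommend(long_term, short_term, alpha=0.6, k=3):
    """long_term / short_term: {hashtag: score} dicts for one user.
    Returns the top-k hashtags by alpha * long + (1 - alpha) * short."""
    tags = set(long_term) | set(short_term)
    scored = {h: alpha * long_term.get(h, 0.0)
                 + (1 - alpha) * short_term.get(h, 0.0) for h in tags}
    return sorted(scored, key=scored.get, reverse=True)[:k]
```

A larger alpha favors stable interests (e.g., a topic the user always tweets about), while a smaller alpha favors what is trending in the current session.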
Cuckoo search (CS), inspired by the obligate brood-parasitic behavior of some cuckoo species, iteratively uses Lévy flights random walk (LFRW) and biased/selective random walk (BSRW) to search for new solutions. In this study, we seek a simple strategy for setting the scaling factor in LFRW, since varying the scaling factor can achieve better performance but choosing the best value for each problem is intractable. Thus, we propose a varied scaling factor (VSF) strategy that samples a value uniformly at random from the range [0,1] in each iteration. In addition, we integrate the VSF strategy into several advanced CS variants. Extensive experiments are conducted on three groups of benchmark functions: 18 common test functions, 25 functions proposed in CEC 2005, and 28 functions introduced in CEC 2013. Experimental results demonstrate the effectiveness of the VSF strategy.
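The VSF idea fits in one line of the LFRW update: redraw the scaling factor uniformly from [0,1] at every iteration instead of fixing it. The sketch below uses Mantegna's algorithm for the Lévy step and a common CS step form; the beta value and move equation are standard defaults assumed for illustration.

```python
import math, random

def levy_step(beta=1.5):
    """One Levy-stable random step via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def lfrw_move(x, best):
    """One LFRW move of solution x relative to the current best solution."""
    alpha = random.random()  # VSF: scaling factor redrawn uniformly each call
    return [xi + alpha * levy_step() * (xi - bi) for xi, bi in zip(x, best)]
```

Note that under this step form the current best solution is a fixed point of the move, so VSF changes only how aggressively the rest of the population explores around it.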
We propose a method for representing heterogeneous concept lattices as classical concept lattices. In particular, we describe a transformation of a heterogeneous formal context into a binary one such that the corresponding concept lattices are isomorphic. We prove the correctness of this transformation by the basic theorem for heterogeneous as well as classical concept lattices.
With the explosive growth in the number of protein sequences generated in the post-genomic age, research into identifying cytokines from proteins and detecting their biochemical mechanisms becomes increasingly important. Unfortunately, the identification of cytokines from proteins is challenging due to a lack of understanding of the structure space provided by the proteins, and the fact that only a small number of cytokines exist among a massive set of proteins. Given that a protein sequence is conceptually similar to a mapping of words to meaning, the n-gram, a type of probabilistic language model, is explored to extract features from proteins. The second challenge addressed in this work is the protein imbalance problem: a genetic algorithm, a search heuristic that mimics the process of natural selection, is utilized to develop a classifier that overcomes this imbalance and generates precise predictions of cytokines in proteins. Experiments carried out on an imbalanced protein data set show that our methods outperform traditional algorithms in prediction ability.
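N-gram feature extraction over a protein sequence can be sketched directly: count all overlapping length-n substrings and normalize over the full amino-acid alphabet. The 20-letter alphabet and frequency normalization are standard; the paper's exact feature design may differ.

```python
from collections import Counter
from itertools import product

AMINO = "ACDEFGHIKLMNPQRSTVWY"  # standard 20 amino-acid letters

def ngram_features(seq, n=2):
    """Return a normalized frequency vector over all length-n amino-acid
    grams, in lexicographic order (20**n dimensions)."""
    grams = [seq[i:i + n] for i in range(len(seq) - n + 1)]
    counts = Counter(grams)
    total = max(len(grams), 1)
    return [counts[''.join(g)] / total for g in product(AMINO, repeat=n)]
```

For n=2 this yields a 400-dimensional vector per protein, which can then feed the GA-tuned classifier described above.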
High-throughput RNA sequencing (RNA-seq) has emerged as a revolutionary and powerful technology for expression profiling. Most proposed methods for detecting differentially expressed (DE) genes from RNA-seq data are based on statistics that compare normalized read counts between conditions. However, few methods incorporate expression measurement uncertainty into DE detection. Moreover, most methods are only capable of detecting DE genes, and few are available for detecting DE isoforms. In this paper, a Bayesian framework (BDSeq) is proposed to detect DE genes and isoforms while accounting for expression measurement uncertainty. This uncertainty provides useful information that helps improve the performance of DE detection. Three real RNA-seq data sets are used to evaluate the performance of BDSeq, and results show that the inclusion of expression measurement uncertainty improves the accuracy of detecting DE genes and isoforms. Finally, we develop a GamSeq-BDSeq RNA-seq analysis pipeline to facilitate users.