For many applications, sensors are deployed over belt regions so that every movement crossing the sensor barrier is detected in real time with high accuracy, minimizing the need for human supervision. The barrier coverage problem was introduced to model these requirements and has been examined thoroughly over the past decades. In this survey, we state the problem definitions and systematically consider sensing models, design issues, and challenges in the barrier coverage problem. We also review representative algorithms, and we discuss extensions and variants of barrier coverage problems.
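The core model can be made concrete: a barrier exists when the sensing disks, together with the two ends of the belt, form a connected chain that no crossing path can avoid. The sketch below (disk coordinates, the belt geometry, and the overlap test are illustrative assumptions, not drawn from any particular surveyed algorithm) reduces the check to graph reachability:

```python
import math
from collections import deque

def has_barrier(sensors, belt_length):
    """Check 1-barrier coverage of a belt of the given length.

    sensors: list of (x, y, r) sensing disks.
    Returns True if the disks form a connected chain from the left
    end (x = 0) to the right end (x = belt_length) of the belt, so
    every top-to-bottom crossing must enter some sensing disk.
    """
    n = len(sensors)
    LEFT, RIGHT = n, n + 1                      # virtual boundary nodes
    adj = {i: set() for i in range(n + 2)}
    for i, (xi, yi, ri) in enumerate(sensors):
        if xi - ri <= 0:                        # disk reaches left end
            adj[i].add(LEFT); adj[LEFT].add(i)
        if xi + ri >= belt_length:              # disk reaches right end
            adj[i].add(RIGHT); adj[RIGHT].add(i)
        for j in range(i + 1, n):
            xj, yj, rj = sensors[j]
            if math.hypot(xi - xj, yi - yj) <= ri + rj:   # disks overlap
                adj[i].add(j); adj[j].add(i)
    seen, queue = {LEFT}, deque([LEFT])         # BFS from the left end
    while queue:
        u = queue.popleft()
        if u == RIGHT:
            return True
        for v in adj[u] - seen:
            seen.add(v); queue.append(v)
    return False
```

Three overlapping disks spanning a belt of length 6, e.g. `[(1, 1, 1.5), (3, 1, 1.5), (5, 1, 1.5)]`, form a barrier; removing the middle disk leaves a gap an intruder can slip through.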
As 3D technology, including computer graphics, virtual reality, and 3D printing, has developed rapidly in recent years, demand for 3D models has grown enormously. Traditional 3D modeling platforms such as Maya and ZBrush rely on “windows, icons, menus, pointers” (WIMP) interface paradigms to provide the fine-grained control needed to construct detailed models. However, the modeling process can be tedious and frustrating, and is therefore difficult for a novice user or even a well-trained artist, so a more intuitive interface is needed. Sketching, an intuitive communication and modeling tool for human beings, has become the first choice of the modeling community. So far, various sketch-based modeling systems have been created and studied. In this paper, we show how these systems work and give a comprehensive survey. We review and categorize the systems along four aspects: the input, the knowledge they use, the modeling approach, and the output. We also discuss inherent challenges and open problems for future research.
In this paper, we propose and design DPHK (data prediction based on HMM according to activity-pattern knowledge mined from trajectories), a real-time distributed predictive data collection system that addresses the congestion and data loss caused by too many simultaneous connections to the sink node in indoor smart environments (such as smart homes and smart wireless healthcare). Instead of transmitting, in several rounds, the triggered data of the sensor nodes a person is about to pass, DPHK predicts those nodes and sends their data in a single transmission. First, our system learns the transition probabilities among sensor nodes from historical binary motion data through data mining. Second, it stores the corresponding knowledge in each sensor node using a dedicated storage mechanism. Third, each sensor node applies the hidden Markov model (HMM) algorithm to predict, from the received messages, which sensor node locations the person will reach next. Finally, these sensor nodes send both their triggered data and the predicted data to the sink node. The significance of DPHK is twofold: (a) the procedure is fully distributed; (b) it effectively reduces the number of connections between sensor nodes and the sink node. The time complexities of the proposed algorithms are analyzed, and the performance is evaluated through experiments in a smart environment.
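The prediction step can be sketched as follows. The node names, transition matrix, and sensor noise model below are invented for illustration; in DPHK these probabilities are mined from historical trajectories, and the HMM machinery is richer than this minimal forward step:

```python
# Illustrative transition probabilities between motion-sensor nodes.
trans = {
    'door':    {'hall': 0.8, 'kitchen': 0.2},
    'hall':    {'kitchen': 0.6, 'bedroom': 0.4},
    'kitchen': {'hall': 1.0},
    'bedroom': {'hall': 1.0},
}
# Each node's binary motion sensor fires with prob 0.9 when occupied.
emit = {node: {1: 0.9, 0: 0.1} for node in trans}

def forward_step(belief, obs_by_node):
    """One HMM forward step: propagate the location belief through the
    transition matrix, then weight by each node's sensor reading."""
    new = {}
    for node in trans:
        p = sum(belief.get(prev, 0.0) * trans[prev].get(node, 0.0)
                for prev in trans)
        new[node] = p * emit[node][obs_by_node.get(node, 0)]
    z = sum(new.values()) or 1.0
    return {n: p / z for n, p in new.items()}    # normalise

def predict_next(belief):
    """Most likely next node: propagate the belief one more step."""
    nxt = {}
    for prev, p in belief.items():
        for node, t in trans.get(prev, {}).items():
            nxt[node] = nxt.get(node, 0.0) + p * t
    return max(nxt, key=nxt.get)

belief = {'door': 1.0}                       # person just entered
belief = forward_step(belief, {'hall': 1})   # the hall sensor fired
next_node = predict_next(belief)             # node to pre-send data for
```

The predicted node can then push its data to the sink together with the currently triggered readings, which is the single-transmission saving the abstract describes.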
This paper proposes adaptive genetic algorithms guided by structural knowledge derived from decomposition methods for solving PCSPs. The family of algorithms, called AGAGD_x_y, is designed to be doubly generic: any decomposition method and different heuristics for the genetic operators can be considered. To validate the approach, the decomposition algorithm due to Newman was used, and several crossover operators based on structural knowledge, such as the cluster, the separator, and the cut, were tested. The experimental results obtained on the most challenging Minimum Interference FAP problems of the CALMA instances are very promising and open interesting perspectives for future work.
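As a sketch of how structural knowledge can guide a genetic operator, a cluster-based crossover can inherit each decomposition cluster wholesale from one parent, so the blocks found by the decomposition survive recombination. The flat-list representation and cluster layout below are illustrative assumptions, not the paper's exact operator:

```python
import random

def cluster_crossover(parent_a, parent_b, clusters, rng=random):
    """Cluster-guided crossover: the child takes each cluster's
    variable assignments as one block from a randomly chosen parent,
    preserving the structure found by the decomposition method."""
    child = list(parent_a)
    for cluster in clusters:
        donor = parent_a if rng.random() < 0.5 else parent_b
        for var in cluster:
            child[var] = donor[var]
    return child
```

In AGAGD_x_y terms, a decomposition algorithm such as Newman's would supply `clusters`; separator- and cut-based operators would partition the variables differently but follow the same block-inheritance idea.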
Person name disambiguation aims to identify the different entities sharing the same person name by linking each document to the correct entity. The traditional disambiguation approach uses the words in a document as features to distinguish different entities. Because it ignores word order and makes limited use of external knowledge, the traditional approach has performance limitations. This paper presents an approach to named entity disambiguation through entity linking, based on a multi-kernel function and Internet verification, to improve Chinese person name disambiguation. The proposed approach extends a linear kernel over in-document word features by adding a string kernel to construct a multi-kernel function. This multi-kernel then calculates the similarities between an input document and the entity descriptions in a person-name knowledge base to form a ranked list of candidate entities. Furthermore, Internet search results based on keywords extracted from the input document and from the entity descriptions in the knowledge base are used to train classifiers for verification. Evaluations on the CIPS-SIGHAN 2012 person name disambiguation bakeoff dataset show that exploiting word order and Internet knowledge through a multi-kernel function improves both precision and recall, and our system achieves state-of-the-art performance.
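The kernel-combination idea can be sketched compactly. Here a character n-gram overlap serves as a simple, order-sensitive stand-in for the paper's string kernel, and the mixing weight `alpha` is an assumed tunable parameter:

```python
import math
from collections import Counter

def _cosine(a, b):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def linear_kernel(doc_a, doc_b):
    """Bag-of-words kernel: ignores word order entirely."""
    return _cosine(Counter(doc_a.split()), Counter(doc_b.split()))

def string_kernel(doc_a, doc_b, n=3):
    """Character n-gram kernel: sensitive to word order/spelling."""
    def grams(s):
        return Counter(s[i:i + n] for i in range(len(s) - n + 1))
    return _cosine(grams(doc_a), grams(doc_b))

def multi_kernel(doc_a, doc_b, alpha=0.5):
    """Convex combination of the two kernels (alpha is tunable)."""
    return (alpha * linear_kernel(doc_a, doc_b)
            + (1 - alpha) * string_kernel(doc_a, doc_b))
```

Ranking a knowledge base's entity descriptions by `multi_kernel` against the input document yields the candidate list; the Internet-verification classifiers then re-check the top candidates.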
Distinguishing negative or speculative narrative fragments from facts is crucial for deep understanding in natural language processing (NLP). In this paper, we first construct a Chinese corpus consisting of three sub-corpora drawn from different resources. We also present a general framework for Chinese negation and speculation identification. In our method, we first propose a feature-based sequence labeling model to detect negative or speculative cues, together with a cross-lingual cue expansion strategy that increases coverage in cue detection. On this basis, we present a new syntactic structure-based framework to identify the linguistic scope of a negative or speculative cue, replacing the traditional chunking-based framework. Experimental results confirm the usefulness of our Chinese corpus and the appropriateness of our syntactic structure-based framework, which shows significant improvement over the state of the art in Chinese negation and speculation identification.
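The intuition behind syntactic scope resolution can be shown on a toy example: take the scope of a cue to be the smallest phrase that contains the cue together with further material. The sentence, tree encoding, and rule below are invented for illustration and are far simpler than the paper's framework:

```python
# Toy constituency tree: a leaf is a token string; an internal node
# is a (label, children) pair.
tree = ('S', [('NP', ['I']),
              ('VP', [('ADV', ['never']),
                      ('VP', [('V', ['saw']), ('NP', ['him'])])])])

def leaves(node):
    """All tokens under a node, left to right."""
    if isinstance(node, str):
        return [node]
    return [tok for child in node[1] for tok in leaves(child)]

def scope_of(node, cue):
    """Smallest phrase containing the cue plus at least one more
    token -- a toy stand-in for syntactic scope resolution."""
    if isinstance(node, str):
        return None
    for child in node[1]:
        hit = scope_of(child, cue)
        if hit is not None:
            return hit
    toks = leaves(node)
    return toks if cue in toks and len(toks) > 1 else None
```

For the cue "never", this returns the verb phrase it modifies rather than, say, a fixed-length chunk window, which is the advantage the syntactic framework claims over chunking.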
The order-preserving submatrix (OPSM) has become an important model for biologically meaningful subspace clusters, capturing the general tendency of gene expression across a subset of conditions. With advances in microarray and analysis techniques, large volumes of gene expression datasets and OPSM mining results are produced. OPSM queries can efficiently retrieve relevant OPSMs from these huge OPSM datasets. However, improving OPSM query relevancy remains difficult in real-life exploratory data analysis. First, it is hard to capture subjective interestingness aspects, e.g., the analyst's expectations given her/his domain knowledge. Second, even when these expectations can be declaratively specified, it is still challenging to use them during the computation of OPSM queries. To the best of our knowledge, existing methods mainly focus on batch OPSM mining, while few works address OPSM queries. To solve these problems, this paper proposes two constrained OPSM query methods, which exploit user-defined constraints to search for relevant results over two kinds of indices. Extensive experiments are conducted on real datasets, and the results demonstrate that queries based on the multi-dimension index (cIndex) and the enumerating sequence index (esIndex) outperform brute-force search.
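The property these queries are built around can be stated compactly: a submatrix is order-preserving when every selected row induces the same ranking of the selected columns. The check below uses an invented toy expression matrix; the cIndex/esIndex structures themselves are beyond this sketch:

```python
def is_opsm(matrix, rows, cols):
    """True iff every selected row ranks the selected columns
    identically -- the defining property of an OPSM."""
    ref = None
    for r in rows:
        order = tuple(sorted(cols, key=lambda c: matrix[r][c]))
        if ref is None:
            ref = order                 # ranking from the first row
        elif order != ref:
            return False                # some row breaks the ranking
    return True

# Toy expression matrix: rows = genes, columns = conditions.
expr = [[1.0, 5.0, 3.0],
        [2.0, 9.0, 4.0],
        [7.0, 1.0, 5.0]]
```

Rows 0 and 1 both rank the columns as 0 < 2 < 1, so they form an OPSM over all three conditions, while row 2 does not fit; a constrained query would additionally filter candidates by the user's declared expectations (e.g., conditions that must appear).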
The significance of the preprocessing stage in any data mining task is well known. Before attempting medical data classification, the characteristics of medical datasets, including noise, incompleteness, and the existence of multiple and possibly irrelevant features, need to be addressed. In this paper, we show that selecting the right combination of preprocessing methods has a considerable impact on the classification potential of a dataset. The preprocessing operations considered include the discretization of numeric attributes, the selection of attribute subsets, and the handling of missing values. The classification is performed by an ant colony optimization algorithm as a case study. Experimental results on 25 real-world medical datasets show a significant relative improvement in predictive accuracy, exceeding 60% in some cases.
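Two of the preprocessing operations mentioned can be illustrated in a few lines; the column values, the mean-imputation choice, and the bin count are invented for the example, and real pipelines would choose among several such methods per dataset:

```python
def impute_mean(column):
    """Replace missing values (None) with the column mean."""
    known = [v for v in column if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in column]

def discretize(column, bins=3):
    """Equal-width discretization into integer bin labels 0..bins-1."""
    lo, hi = min(column), max(column)
    width = (hi - lo) / bins or 1.0     # guard against a constant column
    return [min(int((v - lo) / width), bins - 1) for v in column]

raw = [4.0, None, 6.0, 10.0, 8.0]       # a noisy numeric attribute
clean = discretize(impute_mean(raw), bins=3)
```

The paper's point is that the *combination* matters: the same classifier can behave very differently depending on whether imputation precedes discretization, which attributes are kept, and so on.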
In the last decade, functional-structural plant modelling (FSPM) has become a more widely accepted paradigm in crop and tree production, as 3D models for the most important crops have been proposed. Given the wider portfolio of available models, it is now appropriate to enter the next level of FSPM development by introducing more efficient methods for model development. This includes model reuse (through modularisation), combination, and comparison, as well as the enhancement of existing models. To facilitate this process, standards for design and communication need to be defined and established. We present a first step towards an efficient and general (i.e., not species-specific) FSPM, presently restricted to annual or biennial plants but with the potential for extension and further generalisation.
The model structure is hierarchical and object-oriented, with plant organs as the base-level objects and the plant individual and canopy as higher-level objects. Modules for the majority of physiological processes (e.g., photosynthesis, organ formation, and growth) are incorporated, more than in other platforms with a similar aim. Simulation runs with several general parameter sets adopted from the literature show that the present prototype reproduces a plausible output range for different crops (rapeseed, barley, etc.) in terms of both the dynamics and the final values (at harvest time) of model state variables such as assimilate production, organ biomass, leaf area, and architecture.
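The hierarchical, object-oriented structure described above can be sketched as a class hierarchy. The class names, the toy photosynthesis rate, and the uniform assimilate sharing are illustrative assumptions, not the platform's actual process equations:

```python
class Organ:
    """Base-level object; subclasses add organ-specific physiology."""
    def __init__(self, biomass=0.0):
        self.biomass = biomass

    def grow(self, assimilate):
        self.biomass += assimilate

class Leaf(Organ):
    def __init__(self, area=0.01, **kw):
        super().__init__(**kw)
        self.area = area

    def photosynthesis(self, light):
        return 0.05 * self.area * light     # toy assimilation rate

class Plant:
    """Higher-level object aggregating its organs."""
    def __init__(self, organs):
        self.organs = organs

    def step(self, light):
        gain = sum(o.photosynthesis(light) for o in self.organs
                   if isinstance(o, Leaf))
        share = gain / len(self.organs)     # naive allocation rule
        for o in self.organs:
            o.grow(share)

class Canopy:
    """Top-level object: a stand of plant individuals."""
    def __init__(self, plants):
        self.plants = plants

    def step(self, light):
        for p in self.plants:
            p.step(light)
```

Modularisation then amounts to swapping in alternative `photosynthesis` or allocation implementations without touching the rest of the hierarchy.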
To investigate the robustness of face recognition algorithms under complicated variations of illumination, facial expression, and pose, the advantages and disadvantages of seven typical algorithms for extracting global and local features are studied experimentally on the Olivetti Research Laboratory database and three further databases (subsets for illumination, expression, and pose, constructed by selecting images from several existing face databases). Building on these experimental results, two face recognition schemes based on the decision fusion of two-dimensional linear discriminant analysis (2DLDA) and the local binary pattern (LBP) are proposed in this paper to raise the recognition rates. In addition, partitioning a face non-uniformly for its LBP histograms is employed to further improve performance. Our experimental results demonstrate the complementarity of the two kinds of features, 2DLDA and LBP, and verify the effectiveness of the proposed fusion algorithms.
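The decision-fusion step can be sketched as a weighted sum of min-max-normalised matching scores from the two feature channels. The identities, scores, and equal weight below are invented; the paper's actual fusion rule may differ:

```python
def fuse_and_decide(scores_a, scores_b, w=0.5):
    """Weighted-sum decision fusion of two recognisers' per-identity
    scores; min-max normalisation puts both channels on one scale."""
    def norm(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {k: (v - lo) / span for k, v in scores.items()}
    a, b = norm(scores_a), norm(scores_b)
    fused = {k: w * a[k] + (1 - w) * b[k] for k in a}
    return max(fused, key=fused.get)

# Global-feature channel (e.g. 2DLDA) vs local channel (e.g. LBP):
global_scores = {'alice': 0.8, 'bob': 0.9, 'carol': 0.1}
local_scores  = {'alice': 0.95, 'bob': 0.3, 'carol': 0.2}
```

Here the global channel alone would pick "bob", but the strongly confident local channel tips the fused decision to "alice", which is exactly the complementarity the abstract reports.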
An IP covert timing channel (IPCTC) is an unconventional communication channel that attaches timing information to the packets of an overt channel as the message carrier, e.g., using different inter-packet delays to transmit messages in a packet-switched network. Although IPCTCs employ many different communication methods, we categorize their base communication models into three types based on the concept of time, and then use signal processing theory to build their mathematical models. As a result, the basic characteristics of the IPCTC base models are formally derived; hence, the characteristics of any IPCTC can be derived from the base models that compose it. Furthermore, a set of approaches is devised to implement the IPCTC base models in a TCP/IP network. Experimental results confirm the correctness of the proposed base models.
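The simplest inter-packet-delay scheme encodes each covert bit as one of two delays between packets of the overt channel. The delay values and decoding threshold below are invented for illustration, and the paper's three base models are more general than this single scheme:

```python
DELAY_0, DELAY_1 = 0.05, 0.20   # seconds; illustrative values

def encode(bits):
    """Covert sender: choose each inter-packet delay by the bit to hide."""
    return [DELAY_1 if b else DELAY_0 for b in bits]

def decode(delays, threshold=0.125):
    """Covert receiver: threshold the observed inter-arrival times."""
    return [1 if d > threshold else 0 for d in delays]
```

In a real network, jitter perturbs the observed inter-arrival times, so the receiver's threshold (and the delay spacing) must leave enough margin for noise; that is precisely where the signal-processing analysis of the base models becomes useful.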
Composing independent cloud services requires coordinating their access control policies; otherwise, unauthorized access to the composite cloud service can occur whenever the providers' access control policies conflict, leading to serious data security and privacy issues. In this paper, we propose Packet, a novel access control policy composition method that can detect and resolve policy conflicts in cloud service composition, including conflicts related to privacy-aware purposes and conditions. The Packet method consists of four steps. First, heterogeneous policies are transformed into a unified attribute-based format. Second, to improve conflict detection efficiency, policy conflicts on the same resource are eliminated with a cosine similarity-based algorithm. Third, using a hierarchical structure approach, policy conflicts related to different resources or to privacy-aware purposes and conditions are detected. Fourth, different conflict resolution techniques are applied according to the conflict type. We have implemented the Packet method on the OpenStack platform. Comprehensive experiments demonstrate the effectiveness of the proposed method compared with an existing XACML-based system in conflict detection and resolution performance.
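The cosine-similarity step can be sketched once policies are in the unified attribute-based format: represent each rule as an attribute-weight vector and flag pairs that target (nearly) the same attribute combination with opposite effects. The attribute vocabulary, threshold, and rule shape below are illustrative assumptions, not Packet's exact data model:

```python
import math

def cosine(a, b):
    """Cosine similarity of two sparse attribute-weight vectors."""
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def conflicts(p1, p2, threshold=0.8):
    """Flag a potential conflict: the rules cover (nearly) the same
    attribute combination but with opposite effects."""
    return (p1['effect'] != p2['effect']
            and cosine(p1['attrs'], p2['attrs']) >= threshold)

# Two providers' rules after translation to the unified format:
p_permit = {'effect': 'permit',
            'attrs': {'role:doctor': 1, 'res:record': 1,
                      'purpose:treatment': 1}}
p_deny   = {'effect': 'deny',
            'attrs': {'role:doctor': 1, 'res:record': 1,
                      'purpose:treatment': 1}}
p_other  = {'effect': 'deny',
            'attrs': {'role:nurse': 1, 'res:billing': 1}}
```

Pairs flagged here go on to the hierarchical detection and resolution steps; dissimilar pairs are pruned early, which is the efficiency gain the second step aims for.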