
Frontiers of Computer Science

Volume 13, Issue 1
Survey of visual just noticeable difference estimation
Jinjian WU, Guangming SHI, Weisi LIN
Front. Comput. Sci., 2019, 13 (1): 4-15.
Abstract   PDF (725KB)

The concept of just noticeable difference (JND), which accounts for the visibility threshold (visual redundancy) of the human visual system, is useful in perception-oriented signal processing systems. In this work, we present a comprehensive review of JND estimation technology. First, the visual mechanisms and their corresponding computational modules are illustrated, including luminance adaptation, contrast masking, pattern masking, and the contrast sensitivity function. Next, the existing pixel-domain and subband-domain JND models are presented and analyzed. Finally, the challenges associated with JND estimation are discussed.
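As an illustration of the luminance-adaptation module mentioned above, here is a minimal sketch of the classic pixel-domain threshold curve in the style of Chou and Li's model. The constants are the commonly cited ones for 8-bit images and are not necessarily those of any model surveyed in the article:

```python
import math

def luminance_jnd(bg: float) -> float:
    """Pixel-domain JND visibility threshold given background luminance.

    Classic piecewise luminance-adaptation curve: thresholds are high
    in dark regions (where the eye is less sensitive) and grow slowly
    again in bright regions. `bg` is the average background luminance
    in [0, 255].
    """
    if bg <= 127:
        return 17.0 * (1.0 - math.sqrt(bg / 127.0)) + 3.0
    return 3.0 / 128.0 * (bg - 127.0) + 3.0
```

A distortion smaller than `luminance_jnd(bg)` at a pixel with background luminance `bg` is, under this model, invisible and hence redundant.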

References | Supplementary Material | Related Articles | Metrics
A high-performance and endurable SSD cache for parity-based RAID
Chu LI, Dan FENG, Yu HUA, Fang WANG
Front. Comput. Sci., 2019, 13 (1): 16-34.
Abstract   PDF (1049KB)

Solid-state drives (SSDs) have been widely used as a caching tier for disk-based RAID systems to speed up data-intensive applications. However, traditional cache schemes fail to effectively boost parity-based RAID storage systems (e.g., RAID-5/6), which have poor random write performance due to the small-write problem. Worse still, intensive cache writes can wear out the SSD quickly, causing performance degradation and increased cost. In this article, we present the design and implementation of KDD, an efficient SSD-based caching system that Keeps Data and Deltas in SSD. When write requests hit in the cache, KDD dispatches the data to the RAID storage without updating the parity blocks to mitigate the small-write penalty, and compactly stores the compressed deltas in the SSD to reduce cache write traffic while guaranteeing reliability in case of disk failures. In addition, KDD organizes the metadata partition on the SSD as a circular log to make the cache persistent with low overhead. We evaluate the performance of KDD via both simulations and prototype implementations. Experimental results show that KDD effectively reduces the small-write penalty while extending the lifetime of the SSD-based cache by up to 6.85 times.
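The compressed-delta idea can be sketched as follows. This is an illustrative reconstruction, not KDD's actual implementation: it assumes XOR deltas compressed with zlib, and the function names are hypothetical:

```python
import zlib

def make_delta(old_block: bytes, new_block: bytes) -> bytes:
    """XOR the old and new data blocks, then compress the result.

    Unchanged bytes XOR to zero, so a small write inside a large block
    yields a highly compressible delta, reducing SSD cache write traffic.
    """
    assert len(old_block) == len(new_block)
    xored = bytes(a ^ b for a, b in zip(old_block, new_block))
    return zlib.compress(xored)

def apply_delta(old_block: bytes, delta: bytes) -> bytes:
    """Recover the new block from the old block and a stored delta,
    e.g., when reconstructing data after a disk failure."""
    xored = zlib.decompress(delta)
    return bytes(a ^ b for a, b in zip(old_block, xored))
```

Storing the delta instead of the full block is what lets the cache absorb write hits without rewriting whole blocks (or parity) on every update.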

References | Supplementary Material | Related Articles | Metrics
Pinpointing and scheduling access conflicts to improve internal resource utilization in solid-state drives
Xuchao XIE, Liquan XIAO, Dengping WEI, Qiong LI, Zhenlong SONG, Xiongzi GE
Front. Comput. Sci., 2019, 13 (1): 35-50.
Abstract   PDF (1019KB)

Modern solid-state drives (SSDs) integrate ever more internal resources to achieve higher capacity. Parallelizing accesses across these internal resources can potentially enhance SSD performance. However, exploiting parallelism inside SSDs is challenging owing to real-time access conflicts. In this paper, we propose a highly parallelizable I/O scheduler (PIOS) to improve internal resource utilization in SSDs from the perspective of I/O scheduling. Specifically, we first pinpoint conflicting flash requests precisely during address translation in the flash translation layer (FTL). Then, we introduce conflict-eliminated requests (CERs) to reorganize the I/O requests in the device-level queue, dispatching conflicting flash requests to different CERs. Owing to the significant performance discrepancy between flash read and write operations, PIOS employs differentiated scheduling schemes for the read and write CER queues so that internal resources are always allocated to the more valuable conflicting CERs. The small-dominant-size prioritized scheduling policy for the write queue significantly decreases average write latency, while the high-parallelism-density prioritized scheduling policy for the read queue better utilizes resources by aggressively exploiting internal parallelism. Our evaluation results show that PIOS achieves better SSD performance than existing I/O schedulers implemented in both SSD devices and operating systems.
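The reorganization into CERs can be sketched as follows. `build_cers` and the `(chip_id, page_addr)` request encoding are hypothetical simplifications of the FTL-level bookkeeping described above; a real implementation would detect conflicts during address translation:

```python
def build_cers(flash_requests):
    """Split a request stream into conflict-eliminated requests (CERs).

    Each flash request is a (chip_id, page_addr) pair. Within one CER
    every request targets a distinct chip, so all of its requests can be
    serviced in parallel; conflicting requests on the same chip are
    pushed into later CERs.
    """
    cers = []
    for chip, page in flash_requests:
        # Place the request in the first CER with no request on this chip.
        for cer in cers:
            if chip not in cer:
                cer[chip] = page
                break
        else:
            cers.append({chip: page})
    return cers
```

The scheduler can then rank whole CERs (e.g., by size or parallelism density) instead of individual flash requests.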

References | Supplementary Material | Related Articles | Metrics
A divide & conquer approach to liveness model checking under fairness & anti-fairness assumptions
Kazuhiro OGATA
Front. Comput. Sci., 2019, 13 (1): 51-72.
Abstract   PDF (626KB)

This paper proposes an approach to making liveness model checking problems under fairness feasible. The proposed method divides such a problem into smaller ones that can be conquered. It is not superior to existing tools dedicated to model checking liveness properties under fairness assumptions in terms of model checking performance but has the following positive aspects: 1) the approach can be used to model check liveness properties under anti-fairness assumptions as well as fairness assumptions, 2) the approach can help humans better understand the reason why they need to use fairness and/or anti-fairness assumptions, and 3) the approach makes it possible to use existing linear temporal logic model checkers to model check liveness properties under fairness and/or anti-fairness assumptions.

References | Supplementary Material | Related Articles | Metrics
FunctionFlow: coordinating parallel tasks
Xuepeng FAN, Xiaofei LIAO, Hai JIN
Front. Comput. Sci., 2019, 13 (1): 73-85.
Abstract   PDF (504KB)

Despite the growing popularity of task-based parallel programming, today's task-parallel programming libraries and languages still offer only limited support for coordinating parallel tasks. This limitation forces programmers to use additional independent components to coordinate the parallel tasks; these components can be third-party libraries or additional components in the same programming library or language. Moreover, mixing tasks and coordination components increases the difficulty of task-based programming and blinds schedulers to the dependencies among tasks.

In this paper, we propose a task-based parallel programming library, FunctionFlow, which coordinates tasks itself, avoiding additional independent coordination components. First, we use dependency expressions to represent common patterns of task termination. The key idea behind dependency expressions is to use && to wait for the termination of all tasks and || to wait for the termination of any task, together with combinations of dependency expressions. Second, as runtime support, we use a lightweight representation for dependency expressions, along with a suspended-task queue to schedule tasks whose prerequisites have not yet been met.

Finally, we demonstrate FunctionFlow's effectiveness in two respects: a case study implementing popular parallel patterns with FunctionFlow, and a performance comparison with the state-of-the-art practice, TBB. Our demonstration shows that FunctionFlow can generally coordinate parallel tasks without involving additional components, while delivering performance comparable to TBB.
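The && and || dependency expressions can be approximated with standard futures. The sketch below uses Python's concurrent.futures rather than FunctionFlow's actual API, and `all_of`/`any_of` are hypothetical names standing in for the two operators:

```python
from concurrent.futures import (ThreadPoolExecutor, wait,
                                ALL_COMPLETED, FIRST_COMPLETED)

def all_of(futures):
    """Block until every prerequisite task has terminated (the && case)."""
    wait(futures, return_when=ALL_COMPLETED)

def any_of(futures):
    """Block until at least one prerequisite task has terminated (||)."""
    wait(futures, return_when=FIRST_COMPLETED)

with ThreadPoolExecutor() as pool:
    a = pool.submit(lambda: 1)
    b = pool.submit(lambda: 2)
    all_of([a, b])            # successor runs only after both a and b finish
    total = a.result() + b.result()
```

In FunctionFlow the runtime tracks such expressions itself (via the suspended-task queue) instead of blocking a thread, but the dependency semantics are the same.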

References | Supplementary Material | Related Articles | Metrics
Empirical investigation of stochastic local search for maximum satisfiability
Yi CHU, Chuan LUO, Shaowei CAI, Haihang YOU
Front. Comput. Sci., 2019, 13 (1): 86-98.
Abstract   PDF (612KB)

The maximum satisfiability (MAX-SAT) problem is an important NP-hard problem in theory and has a broad range of applications in practice. Stochastic local search (SLS) is becoming an increasingly popular method for solving MAX-SAT. Recently, a powerful SLS algorithm called CCLS has shown efficiency in solving random and crafted MAX-SAT instances. However, the performance of CCLS on industrial MAX-SAT instances lags far behind. In this paper, we focus on experimentally analyzing the performance of SLS algorithms on industrial MAX-SAT instances. First, we conduct experiments to analyze why CCLS performs poorly on industrial instances. We then propose a new strategy called additive BMS (Best from Multiple Selections) to alleviate this issue. By integrating CCLS and additive BMS, we develop a new SLS algorithm for MAX-SAT called CCABMS, and related experiments indicate the efficiency of CCABMS. We also experimentally analyze the effectiveness of initialization methods for SLS MAX-SAT algorithms and combine an effective initialization method with CCABMS, resulting in an enhanced algorithm. Experimental results show that our enhanced algorithm performs better than its state-of-the-art SLS competitors on a large number of industrial MAX-SAT instances.
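The basic BMS idea can be sketched as follows. `bms_pick` and its scoring function are illustrative names, and the additive variant proposed in the paper is not reproduced here:

```python
import random

def bms_pick(candidates, score, k=50, rng=random):
    """Best from Multiple Selections (BMS): sample k candidates with
    replacement and return the best-scoring one, avoiding a full scan
    of a huge candidate set as found in industrial instances.

    `score` is a hypothetical scoring function, e.g., the gain in
    satisfied clause weight from flipping a variable.
    """
    best = rng.choice(candidates)
    for _ in range(k - 1):
        cand = rng.choice(candidates)
        if score(cand) > score(best):
            best = cand
    return best
```

With k much smaller than the candidate set, each step costs O(k) instead of O(n) while still picking a near-best candidate with high probability.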

References | Supplementary Material | Related Articles | Metrics
Towards making co-training suffer less from insufficient views
Xiangyu GUO, Wei WANG
Front. Comput. Sci., 2019, 13 (1): 99-105.
Abstract   PDF (728KB)

Co-training is a well-known semi-supervised learning algorithm that exploits unlabeled data to improve learning performance. It generally works in a two-view setting (the input examples have two naturally disjoint feature sets), under the assumption that each view is sufficient to predict the label. However, in real-world applications, feature corruption or feature noise may render both views insufficient, and co-training will suffer from these insufficient views. In this paper, we propose a novel algorithm named Weighted Co-training to deal with this problem. It identifies the newly labeled examples that are probably harmful to the other view and decreases their weights in the training set to avoid the risk. The experimental results show that Weighted Co-training performs better than the state-of-the-art co-training algorithms on several benchmarks.

References | Supplementary Material | Related Articles | Metrics
Efficient reinforcement learning in continuous state and action spaces with Dyna and policy approximation
Shan ZHONG, Quan LIU, Zongzhang ZHANG, Qiming FU
Front. Comput. Sci., 2019, 13 (1): 106-126.
Abstract   PDF (1290KB)

Dyna is an effective reinforcement learning (RL) approach that combines value function evaluation with model learning. However, existing works on Dyna mostly discuss only its efficiency in RL problems with discrete action spaces. This paper proposes a novel Dyna variant, called Dyna-LSTD-PA, aiming to handle problems with continuous action spaces. Dyna-LSTD-PA stands for Dyna based on least-squares temporal difference (LSTD) and policy approximation. Dyna-LSTD-PA consists of two simultaneous, interacting processes. The learning process determines the probability distribution over action spaces using the Gaussian distribution; estimates the underlying value function, policy, and model by linear representation; and updates their parameter vectors online by LSTD(λ). The planning process updates the parameter vector of the value function again by using offline LSTD(λ). Dyna-LSTD-PA also uses the Sherman–Morrison formula to improve the efficiency of LSTD(λ), and weights the parameter vector of the value function to bring the two processes together. Theoretically, the global error bound is derived by considering approximation, estimation, and model errors. Experimentally, Dyna-LSTD-PA outperforms two representative methods in terms of convergence rate, success rate, and stability performance on four benchmark RL problems.
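The Sherman-Morrison step that keeps LSTD(λ) efficient is the standard rank-1 inverse update; a minimal sketch (function name illustrative):

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Compute (A + u v^T)^{-1} from A^{-1} via the Sherman-Morrison
    formula. LSTD maintains A^{-1} directly this way, replacing an
    O(n^3) re-inversion per sample with an O(n^2) update.
    """
    Au = A_inv @ u                 # A^{-1} u
    vA = v @ A_inv                 # v^T A^{-1}
    denom = 1.0 + v @ Au           # 1 + v^T A^{-1} u
    return A_inv - np.outer(Au, vA) / denom
```

In LSTD(λ), `u` is the eligibility trace and `v` the feature difference between successive states, so each observed transition updates the inverse incrementally.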

References | Supplementary Material | Related Articles | Metrics
Stance detection via sentiment information and neural network model
Qingying SUN, Zhongqing WANG, Shoushan LI, Qiaoming ZHU, Guodong ZHOU
Front. Comput. Sci., 2019, 13 (1): 127-138.
Abstract   PDF (581KB)

Stance detection aims to automatically determine whether the author is in favor of or against a given target. In principle, the sentiment information of a post highly influences the stance. In this study, we aim to leverage the sentiment information of a post to improve the performance of stance detection. However, conventional discrete models with sentiment features can cause error propagation. We thus propose a joint neural network model to predict the stance and sentiment of a post simultaneously, because a neural network model can learn the representations of stance and sentiment, and the interaction between them, collectively. Specifically, we first learn a deep shared representation between stance and sentiment information, and then use a neural stacking model to leverage sentiment information for the stance detection task. Empirical studies demonstrate the effectiveness of our proposed joint neural model.
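At its smallest, the joint architecture is one shared representation feeding two task-specific softmax heads. The dimensions and random weights below are illustrative stand-ins, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 50-d post representation and a 32-d
# shared layer; these weights stand in for learned parameters.
W_shared = rng.normal(size=(32, 50))
W_stance = rng.normal(size=(3, 32))   # favor / against / neither
W_senti = rng.normal(size=(2, 32))    # positive / negative

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x):
    """A single shared representation feeds both heads, so stance and
    sentiment are predicted jointly and can share learned structure."""
    h = np.tanh(W_shared @ x)         # deep shared representation
    return softmax(W_stance @ h), softmax(W_senti @ h)
```

During training, gradients from both losses flow back into `W_shared`, which is what lets sentiment supervision improve stance detection.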

References | Supplementary Material | Related Articles | Metrics
ComR: a combined OWL reasoner for ontology classification
Changlong WANG, Zhiyong FENG, Xiaowang ZHANG, Xin WANG, Guozheng RAO, Daoxun FU
Front. Comput. Sci., 2019, 13 (1): 139-156.
Abstract   PDF (406KB)

Ontology classification, the problem of computing the subsumption hierarchies for classes (atomic concepts), is a core reasoning service provided by Web Ontology Language (OWL) reasoners. Although general-purpose OWL 2 reasoners employ sophisticated optimizations for classification, they are still not efficient owing to the high complexity of tableau algorithms for expressive ontologies. Profile-specific OWL 2 EL reasoners are efficient; however, they become incomplete even if the ontology contains only a small number of axioms that are outside the OWL 2 EL fragment. In this paper, we present a technique that combines an OWL 2 EL reasoner with an OWL 2 reasoner for ontology classification of expressive SROIQ. To optimize the workload, we propose a task decomposition strategy for identifying the minimal non-EL subontology that contains only the axioms necessary to ensure completeness. During ontology classification, the bulk of the workload is delegated to an efficient OWL 2 EL reasoner, and only the minimal non-EL subontology is handled by a less efficient OWL 2 reasoner. The proposed approach is implemented in a prototype, ComR, and experimental results show that our approach offers a substantial speedup in ontology classification. For the well-known ontology NCI, the classification time is reduced by 96.9% (resp. 83.7%) compared against the standard reasoner Pellet (resp. the modular reasoner MORe).

References | Supplementary Material | Related Articles | Metrics
EnAli: entity alignment across multiple heterogeneous data sources
Chao KONG, Ming GAO, Chen XU, Yunbin FU, Weining QIAN, Aoying ZHOU
Front. Comput. Sci., 2019, 13 (1): 157-169.
Abstract   PDF (543KB)

Entity alignment is the problem of identifying which entities in one data source refer to the same real-world entities as those in the others. Identifying entities across heterogeneous data sources is paramount to many research fields, such as data cleaning, data integration, information retrieval, and machine learning. The aligning process is not only overwhelmingly expensive for large data sources, since it involves all tuples from two or more data sources, but must also handle heterogeneous entity attributes. In this paper, we propose an unsupervised approach, called EnAli, to match entities across two or more heterogeneous data sources. EnAli employs a generative probabilistic model that incorporates the heterogeneous entity attributes via the exponential family and handles missing values, and it utilizes a locality-sensitive hashing scheme to reduce the candidate tuples and speed up the aligning process. EnAli is highly accurate and efficient even without any ground-truth tuples. We illustrate the performance of EnAli on re-identifying entities from the same data source, as well as on aligning entities across three real data sources. Our experimental results show that our proposed approach outperforms the comparable baselines.
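The locality-sensitive hashing step can be sketched with MinHash banding, a standard LSH scheme for set similarity. The signature length and banding parameters below are illustrative choices, not EnAli's actual configuration:

```python
import hashlib

def minhash_signature(tokens, num_hashes=8):
    """MinHash signature of a tuple's token set: tuples with similar
    token sets get similar signatures with high probability."""
    sig = []
    for i in range(num_hashes):
        # Seeded MD5 stands in for a family of independent hash functions.
        sig.append(min(
            int(hashlib.md5(f"{i}:{t}".encode()).hexdigest(), 16)
            for t in tokens))
    return tuple(sig)

def lsh_buckets(records, bands=4, rows=2):
    """Band each signature; tuples sharing any band land in one bucket,
    so only bucket-mates become candidate pairs for alignment."""
    buckets = {}
    for rid, tokens in records.items():
        sig = minhash_signature(tokens, bands * rows)
        for b in range(bands):
            key = (b, sig[b * rows:(b + 1) * rows])
            buckets.setdefault(key, set()).add(rid)
    return buckets
```

Only pairs that co-occur in some bucket are scored by the generative model, which is what reduces the candidate tuples from all cross-source pairs to near-matches.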

References | Supplementary Material | Related Articles | Metrics
A fast registration algorithm of rock point cloud based on spherical projection and feature extraction
Yaru XIAN, Jun XIAO, Ying WANG
Front. Comput. Sci., 2019, 13 (1): 170-182.
Abstract   PDF (785KB)

Point cloud registration is an essential step in the process of 3D reconstruction. In this paper, a fast registration algorithm for rock mass point clouds is proposed based on the improved iterative closest point (ICP) algorithm. In our proposed algorithm, the point cloud data from a single-station scanner are transformed into digital images via spherical polar coordinates; image features are then extracted using the scale-invariant feature transform (SIFT), and edge points are removed. By analyzing the correspondence between the digital images and the 3D points, 3D feature points are extracted, from which we search for two-way correspondences as candidates. After false matches are eliminated by an exhaustive search based on random sampling, the transformation is computed via the Levenberg-Marquardt iterative closest point (LM-ICP) algorithm. Experiments on real rock mass data show that the proposed algorithm achieves similar accuracy and better registration efficiency compared with the ICP algorithm and other algorithms.
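The spherical-projection step can be sketched as follows. The one-degree-per-pixel resolution is an illustrative choice, not the paper's parameterization:

```python
import math

def to_spherical_pixel(x, y, z, width=360, height=180):
    """Map a 3D point from a single-station scan to image coordinates
    via spherical polar angles: azimuth selects the column, elevation
    the row. Points at the scanner origin are not handled."""
    azimuth = math.atan2(y, x)                    # [-pi, pi]
    r = math.sqrt(x * x + y * y + z * z)
    elevation = math.asin(z / r)                  # [-pi/2, pi/2]
    col = int((azimuth + math.pi) / (2 * math.pi) * (width - 1))
    row = int((math.pi / 2 - elevation) / math.pi * (height - 1))
    return row, col
```

Storing each point's range (or intensity) at its `(row, col)` yields the digital image on which 2D features such as SIFT can then be extracted.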

References | Supplementary Material | Related Articles | Metrics
Saliency-based framework for facial expression recognition
Rizwan Ahmed KHAN, Alexandre MEYER, Hubert KONIK, Saida BOUAKAZ
Front. Comput. Sci., 2019, 13 (1): 183-198.
Abstract   PDF (912KB)

This article proposes a novel framework for the recognition of the six universal facial expressions. The framework is based on three sets of features extracted from a face image: entropy, brightness, and local binary pattern. First, saliency maps are obtained using the state-of-the-art saliency detection algorithm “frequency-tuned salient region detection”. The idea is to use saliency maps to determine appropriate weights or values for the extracted features (i.e., brightness and entropy). We performed a visual experiment to validate the performance of the saliency detection algorithm against the human visual system. Eye movements of 15 subjects were recorded with an eye-tracker in free-viewing conditions while they watched a collection of 54 videos selected from the Cohn-Kanade facial expression database. The results of the visual experiment demonstrate that the obtained saliency maps are consistent with the data on human fixations. Finally, the performance of the proposed framework is demonstrated via satisfactory classification results achieved on the Cohn-Kanade database, the FG-NET FEED database, and the Dartmouth database of children’s faces.
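The local binary pattern feature can be sketched on a single 3x3 patch; the clockwise bit ordering below is one common convention, not necessarily the paper's:

```python
def lbp_code(patch):
    """8-neighbour local binary pattern of the centre pixel of a 3x3
    patch: each neighbour contributes one bit (1 if its value is >= the
    centre's), read clockwise from the top-left. The resulting 0-255
    code describes local texture independently of absolute brightness.
    """
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code
```

A histogram of these codes over a face region gives the LBP feature vector used alongside entropy and brightness.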

References | Supplementary Material | Related Articles | Metrics
Visual tracking using discriminative representation with ℓ2 regularization
Haijun WANG, Hongjuan GE
Front. Comput. Sci., 2019, 13 (1): 199-211.
Abstract   PDF (1639KB)

In this paper, we propose a novel visual tracking method using a discriminative representation under a Bayesian framework. First, we exploit the histogram of oriented gradients (HOG) to generate texture features of the target templates and candidates. Second, we introduce a novel discriminative representation and an ℓ2-regularized least squares method to solve the proposed representation model. The proposed model has a closed-form solution and very high computational efficiency. Third, a novel likelihood function and an update scheme considering the occlusion factor are adopted to improve the tracking performance of our method. Both qualitative and quantitative evaluations on 15 challenging video sequences demonstrate that our method achieves more robust tracking results in terms of overlap rate and center location error.
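The closed-form solution behind an ℓ2-regularized least-squares representation is standard ridge regression. `ridge_coefficients` is an illustrative name, and the paper's full representation model adds discriminative terms not shown here:

```python
import numpy as np

def ridge_coefficients(D, y, lam=0.01):
    """Closed-form solution of  min_w ||y - D w||^2 + lam ||w||^2,
    namely  w = (D^T D + lam I)^{-1} D^T y.

    D holds the template features column-wise and y is a candidate's
    feature vector; the closed form is what makes the representation
    cheap to evaluate for every tracked candidate.
    """
    n = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
```

Candidates whose reconstruction error `||y - D w||` is small are likely the target, which feeds directly into the Bayesian likelihood.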

References | Supplementary Material | Related Articles | Metrics
Statistical relational learning based automatic data cleaning
Weibang LI, Ling LI, Zhanhuai LI, Mengtian CUI
Front. Comput. Sci., 2019, 13 (1): 215-217.
Abstract   PDF (218KB)
References | Supplementary Material | Related Articles | Metrics
17 articles