
Frontiers of Computer Science, Volume 13 Issue 3

REVIEW ARTICLE
A survey on fast simulation of elastic objects
Jin HUANG, Jiong CHEN, Weiwei XU, Hujun BAO
Front. Comput. Sci. 2019, 13 (3): 443-459. https://doi.org/10.1007/s11704-018-8081-1
Abstract   PDF (872KB)

Elastic simulation plays an important role in computer graphics and has been widely applied in the film and game industries. It is also closely related to virtual reality and computational fabrication applications. The balance between accuracy and performance is the most important challenge in the design of an elastic simulation algorithm. This survey begins with the basics of elastic simulation and then investigates two major acceleration techniques for it. From the viewpoint of deformation energy, we introduce typical linearization and reduction ideas for acceleration. We also introduce recent progress in projective and position-based dynamics, which mainly relies on special numerical methods. In addition, optimal control for elastic objects and typical collision resolution techniques are discussed. Finally, we discuss several possible directions for future work on integrating elastic simulation into virtual reality and 3D printing applications.
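For readers new to the area, the inner loop being accelerated is ordinary time integration of an elastic energy. A minimal sketch (a toy 1-D mass-spring stepped with semi-implicit Euler, not any specific method from the survey):

```python
def simulate_spring(x0, v0, k=10.0, m=1.0, rest=1.0, dt=0.01, steps=1000):
    """Semi-implicit (symplectic) Euler for a single 1-D mass-spring.
    Fast elastic solvers accelerate this kind of time stepping for
    millions of coupled elements."""
    x, v = x0, v0
    for _ in range(steps):
        f = -k * (x - rest)      # Hooke's law restoring force
        v += dt * f / m          # velocity first: keeps the energy bounded
        x += dt * v
    return x, v
```

Because the update is symplectic, the oscillation amplitude stays bounded instead of blowing up as it would with explicit Euler.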

References | Supplementary Material | Related Articles | Metrics
RESEARCH ARTICLE
Integrating GPS trajectory and topics from Twitter stream for human mobility estimation
Satoshi MIYAZAWA, Xuan SONG, Tianqi XIA, Ryosuke SHIBASAKI, Hodaka KANEDA
Front. Comput. Sci. 2019, 13 (3): 460-470. https://doi.org/10.1007/s11704-017-6464-3
Abstract   PDF (1241KB)

Understanding urban dynamics and large-scale human mobility will play a vital role in building smart cities and achieving sustainable urbanization. Existing research in this domain mainly focuses on a single data source (e.g., GPS data or CDR data). In this study, we collect big, heterogeneous data and aim to investigate and discover the relationship between spatiotemporal topics found in geo-tagged tweets and GPS traces from smartphones. We employ Latent Dirichlet Allocation-based topic modeling on geo-tagged tweets to extract and classify the topics. The extracted topics from tweets and the temporal population distribution from GPS traces are then jointly used to model urban dynamics and human crowd flow. The experimental results and validations demonstrate the efficiency of our approach and suggest that the fusion of cross-domain data for urban dynamics modeling is more practical than previously thought.
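The topic-extraction step is standard Latent Dirichlet Allocation. A minimal collapsed Gibbs sampler for LDA over tokenized documents (an illustrative sketch only; the paper's pipeline, priors, and tweet preprocessing are its own):

```python
import random

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA over tokenized documents.
    Returns doc-topic counts, topic-word counts, and the vocabulary."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}
    ndk = [[0] * n_topics for _ in docs]        # doc-topic counts
    nkw = [[0] * V for _ in range(n_topics)]    # topic-word counts
    nk = [0] * n_topics                         # tokens per topic
    z = []                                      # topic assignment per token
    for d, doc in enumerate(docs):
        zs = []
        for w in doc:
            t = rng.randrange(n_topics)
            zs.append(t)
            ndk[d][t] += 1
            nkw[t][widx[w]] += 1
            nk[t] += 1
        z.append(zs)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t, wi = z[d][i], widx[w]
                ndk[d][t] -= 1; nkw[t][wi] -= 1; nk[t] -= 1
                # full conditional p(z = k | all other assignments)
                weights = [(ndk[d][k] + alpha) * (nkw[k][wi] + beta) / (nk[k] + V * beta)
                           for k in range(n_topics)]
                t = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][wi] += 1; nk[t] += 1
    return ndk, nkw, vocab
```

The resulting doc-topic counts give each tweet's topic mixture, which is the kind of signal the paper fuses with GPS-derived population distributions.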

The time model for event processing in internet of things
Chunjie ZHOU, Xiaoling WANG, Zhiwang ZHANG, Zhenxing ZHANG, Haiping QU
Front. Comput. Sci. 2019, 13 (3): 471-488. https://doi.org/10.1007/s11704-018-7378-4
Abstract   PDF (833KB)

Time management for event processing in the internet of things imposes special and important requirements. Many events in real-world applications are long-lasting events with different time granularities, arriving in order or out of order. The temporal relationships among those events are often complex. An important issue in complex event processing is to extract patterns from event streams to support decision making in real time. However, current time management models do not offer a unified solution for time granularity, time intervals, time disorder, and the differences between the workday calendar systems of different organizations. In this work, we analyze the preliminaries of the temporal semantics of events. A tree-plan model of out-of-order durable events is proposed, and a hybrid solution is correspondingly introduced. A case study illustrates the time constraints and the time optimization. Extensive experimental studies demonstrate the efficiency of our approach.
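The temporal relationships among intervals are commonly described with Allen's interval algebra. A small sketch classifying a few of those relations between two event intervals (illustrative only; the paper's model additionally handles granularity, calendars, and out-of-order arrival):

```python
def interval_relation(a, b):
    """Classify the temporal relation between closed intervals a = (start, end)
    and b = (start, end), using a subset of Allen's interval algebra."""
    (as_, ae), (bs, be) = a, b
    if ae < bs:
        return "before"          # a ends before b starts
    if be < as_:
        return "after"
    if ae == bs:
        return "meets"           # a's end touches b's start
    if be == as_:
        return "met-by"
    if as_ == bs and ae == be:
        return "equals"
    if as_ >= bs and ae <= be:
        return "during"          # a lies inside b
    if bs >= as_ and be <= ae:
        return "contains"
    return "overlaps"
```

A pattern matcher over event streams would evaluate such predicates between the intervals of long-lasting events.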

A novel index system describing program runtime characteristics for workload consolidation
Lin WANG, Depei QIAN, Rui WANG, Zhongzhi LUAN, Hailong YANG, Huaxiang ZHANG
Front. Comput. Sci. 2019, 13 (3): 489-499. https://doi.org/10.1007/s11704-018-6614-2
Abstract   PDF (778KB)

Workload consolidation is a common method to improve resource utilization in clusters and data centers. To achieve efficient workload consolidation, the runtime characteristics of a program should be taken into consideration in scheduling. In this paper, we propose a novel index system for efficiently describing program runtime characteristics. With the help of this index system, programs can be classified by the following runtime characteristics: 1) dependence on multi-dimensional resources, including CPU, disk I/O, memory, and network I/O; and 2) impact of and vulnerability to resource sharing, embodied by resource usage and resource sensitivity. To verify the effectiveness of this index system in workload consolidation, we propose Sche-index, a scheduling strategy that uses the new index system for workload consolidation. Experiment results show that, compared with the traditional least-loaded scheduling strategy, Sche-index significantly improves both program performance and system resource utilization.
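As a hypothetical illustration of how resource usage and sensitivity could jointly drive placement (this is not Sche-index itself; the paper defines its own index system and strategy), a scheduler might score each host by how contended the program's sensitive resources already are:

```python
def pick_host(hosts, usage, sensitivity):
    """Toy consolidation heuristic: prefer the host where the program's
    sensitive resources are least contended after placement.
    hosts: {name: {resource: load in [0, 1]}}
    usage, sensitivity: {resource: value in [0, 1]} for the program."""
    def score(load):
        # weight projected per-resource load by the program's sensitivity
        return sum(sensitivity[r] * (load[r] + usage[r]) for r in usage)
    return min(hosts, key=lambda h: score(hosts[h]))
```

A least-loaded scheduler would ignore the sensitivity weights; the point of a runtime-characteristics index is precisely to supply them.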

Prefetch-aware fingerprint cache management for data deduplication systems
Mei LI, Hongjun ZHANG, Yanjun WU, Chen ZHAO
Front. Comput. Sci. 2019, 13 (3): 500-515. https://doi.org/10.1007/s11704-017-7119-0
Abstract   PDF (853KB)

Data deduplication has been widely utilized in large-scale storage systems, particularly backup systems. Data deduplication systems typically divide data streams into chunks and identify redundant chunks by comparing chunk fingerprints. Maintaining all fingerprints in memory is not cost-effective because fingerprint indexes are typically very large. Many data deduplication systems therefore maintain a fingerprint cache in memory and exploit fingerprint prefetching to accelerate the deduplication process. Although fingerprint prefetching can improve the performance of data deduplication systems by leveraging the locality of workloads, inaccurately prefetched fingerprints may pollute the cache by evicting useful fingerprints. We observed that most of the prefetched fingerprints in a wide variety of applications are never used or are used only once, which severely limits the performance of data deduplication systems. We introduce a prefetch-aware fingerprint cache management scheme for data deduplication systems (PreCache) to alleviate prefetch-related cache pollution. We propose three prefetch-aware fingerprint cache replacement policies (PreCache-UNU, PreCache-UOO, and PreCache-MIX) to handle different types of cache pollution. Additionally, we propose an adaptive policy selector to select suitable policies for prefetch requests. We implement PreCache on two representative data deduplication systems (Block Locality Caching and SiLo) and evaluate its performance using three real-world workloads (Kernel, MacOS, and Homes). The experimental results reveal that PreCache improves deduplication throughput by up to 32.22% by reducing on-disk fingerprint index lookups and mitigating prefetch-related fingerprint cache pollution, which improves the deduplication ratio.
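The core idea, preferring to evict prefetched fingerprints that were never used, can be sketched as a toy cache (hypothetical code; the actual PreCache-UNU/UOO/MIX policies and the adaptive selector are defined in the paper):

```python
from collections import OrderedDict

class PrefetchAwareCache:
    """Toy fingerprint cache: entries brought in by prefetch and never hit
    are evicted before demand-loaded or reused entries."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # fingerprint -> prefetched_and_never_used

    def _evict(self):
        # prefer evicting a prefetched, never-used entry (oldest first)
        for fp, unused in self.entries.items():
            if unused:
                del self.entries[fp]
                return
        self.entries.popitem(last=False)  # otherwise fall back to plain LRU

    def insert(self, fp, prefetched=False):
        if fp in self.entries:
            self.entries.move_to_end(fp)
            return
        if len(self.entries) >= self.capacity:
            self._evict()
        self.entries[fp] = prefetched

    def lookup(self, fp):
        if fp in self.entries:
            self.entries[fp] = False      # used at least once: protect it
            self.entries.move_to_end(fp)
            return True
        return False
```

Under plain LRU, the demand-loaded `f1`-style victims would be chosen by recency alone; here unused prefetches are sacrificed first, which is the pollution-avoidance intuition.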

AADL+: a simulation-based methodology for cyber-physical systems
Jing LIU, Tengfei LI, Zuohua DING, Yuqing QIAN, Haiying SUN, Jifeng HE
Front. Comput. Sci. 2019, 13 (3): 516-538. https://doi.org/10.1007/s11704-018-7039-7
Abstract   PDF (840KB)

AADL (architecture analysis and design language) concentrates on the modeling and analysis of application system architectures. It is popular for its simple syntax, powerful functionality, and extensibility, and has been widely applied in embedded systems. However, AADL is not sufficient for modeling cyber-physical systems (CPS), mainly because it cannot model continuous dynamic behaviors. This paper proposes an approach to construct AADL+, a new sublanguage of AADL, to facilitate modeling not only the discrete and continuous behaviors of CPS but also the interaction between cyber components and physical components. The syntax and semantics of the sublanguage are provided to describe the behaviors of such systems. Moreover, we develop a plug-in for OSATE (open-source AADL tool environment) for the modeling of CPS. The plug-in supports syntax checking and simulation of the system model by linking with Modelica. Finally, the AADL+ annex is successfully applied to model a lunar rover control system.

An effective framework for asynchronous incremental graph processing
Xinqiao LV, Wei XIAO, Yu ZHANG, Xiaofei LIAO, Hai JIN, Qiangsheng HUA
Front. Comput. Sci. 2019, 13 (3): 539-551. https://doi.org/10.1007/s11704-018-7443-z
Abstract   PDF (477KB)

Although many graph processing systems have been proposed, real-world graphs are often dynamic, and it is important to keep the results of graph computation up-to-date. Incremental computation has been demonstrated to be an efficient solution for updating calculated results. Recently, many incremental graph processing systems have been proposed to handle dynamic graphs in an asynchronous way, and they are able to achieve better performance than systems that process graphs synchronously. However, these solutions still suffer from suboptimal convergence speed due to their slow propagation of important vertex state (i.e., state that matters to convergence speed) and poor locality. To solve these problems, we propose a novel graph processing framework. It introduces a dynamic partition method that gathers the important vertices for high locality, and then uses a priority-based scheduling algorithm to assign them a higher priority for an effective processing order. By these means, it is able to reduce the number of updates and increase locality, thereby reducing the convergence time. Experimental results show that, compared with state-of-the-art systems, our method reduces the number of updates by 30% and the total execution time by 35%.
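Priority-based scheduling of vertex updates can be illustrated with a residual-based PageRank in which the vertex with the largest pending change is always processed first (a generic sketch, not this framework's partitioning or scheduling algorithm):

```python
import heapq

def prioritized_pagerank(graph, damping=0.85, tol=1e-6):
    """Residual-based PageRank: always propagate the largest pending residual
    first. graph: {vertex: [out-neighbors]}; every vertex must be a key."""
    rank = {v: 1.0 - damping for v in graph}
    delta = {v: 1.0 - damping for v in graph}   # pending residual per vertex
    heap = [(-delta[v], v) for v in graph]      # max-heap via negated keys
    heapq.heapify(heap)
    while heap:
        d, v = heapq.heappop(heap)
        d = -d
        if d <= tol or d != delta[v]:           # converged, or a stale entry
            continue
        delta[v] = 0.0
        out = graph[v]
        if not out:
            continue
        share = damping * d / len(out)          # spread the residual downstream
        for u in out:
            rank[u] += share
            delta[u] += share
            if delta[u] > tol:
                heapq.heappush(heap, (-delta[u], u))
    return rank
```

Processing large residuals first is exactly the "important vertex state propagates faster" intuition: the same fixed point is reached with fewer total updates than a round-robin order.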

An improved fireworks algorithm for the capacitated vehicle routing problem
Weibo YANG, Liangjun KE
Front. Comput. Sci. 2019, 13 (3): 552-564. https://doi.org/10.1007/s11704-017-6418-9
Abstract   PDF (500KB)

The capacitated vehicle routing problem (CVRP), which aims at minimizing travel costs, is a well-known NP-hard combinatorial optimization problem. Owing to its hardness, many heuristic search algorithms have been proposed to tackle it. This paper explores a recently proposed heuristic algorithm, the fireworks algorithm (FWA), which is a swarm intelligence algorithm. We adapt FWA to the combinatorial CVRP with several modifications of the original FWA: it employs a new method to generate “sparks” according to the selection rule, and it uses a new method to determine the explosion amplitude of each firework. The proposed algorithm is compared with several heuristic search methods on some classical benchmark CVRP instances. The experimental results show the promising performance of the proposed method. We also discuss the strengths and weaknesses of our algorithm in contrast to traditional algorithms.
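For orientation, the continuous FWA kernel that the paper adapts to the discrete CVRP works roughly as follows: better fireworks emit more sparks within a smaller explosion amplitude, and the best points survive (a minimal sketch with hand-picked constants, not the paper's modified operators):

```python
import random

def fireworks_minimize(f, dim, bounds, n_fireworks=5, n_iter=150, seed=0):
    """Continuous fireworks algorithm kernel: each firework explodes into
    sparks; lower-f (fitter) fireworks get more sparks and a smaller
    amplitude, and elitist selection keeps the best points."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_fireworks)]
    for _ in range(n_iter):
        fits = [f(x) for x in pop]
        fmin, fmax = min(fits), max(fits)
        spread = fmax - fmin + 1e-12
        sparks = []
        for x, fit in zip(pop, fits):
            n_sparks = 1 + int(4 * (fmax - fit) / spread)         # fitter -> more sparks
            amp = (hi - lo) * 0.1 * (fit - fmin) / spread + 1e-3  # fitter -> smaller amp
            for _ in range(n_sparks):
                sparks.append([min(hi, max(lo, xi + rng.uniform(-amp, amp)))
                               for xi in x])
        pop = sorted(pop + sparks, key=f)[:n_fireworks]           # elitist selection
    best = pop[0]
    return best, f(best)
```

Adapting this to CVRP means redefining "position", spark generation, and amplitude over routes rather than real vectors, which is where the paper's modifications come in.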

CodeAttention: translating source code to comments by exploiting the code constructs
Wenhao ZHENG, Hongyu ZHOU, Ming LI, Jianxin WU
Front. Comput. Sci. 2019, 13 (3): 565-578. https://doi.org/10.1007/s11704-018-7457-6
Abstract   PDF (522KB)

Appropriate comments for code snippets provide insight into code functionality and are helpful for program comprehension. However, due to the high cost of authoring comments, many code projects do not contain adequate comments. Automatic comment generation techniques have been proposed to generate comments from pieces of code in order to alleviate the human effort in annotating code. Most existing approaches attempt to exploit certain correlations (usually manually specified) between code and generated comments, which can easily be violated if coding patterns change, causing the performance of comment generation to decline. In addition, recent approaches ignore the code constructs and treat code snippets as plain text. Furthermore, previous datasets are too small to validate these methods and show their advantages. In this paper, we propose a new attention mechanism called CodeAttention to translate code to comments; it is able to utilize code constructs such as critical statements, symbols, and keywords. By focusing on these specific points, CodeAttention can understand the semantic meaning of code better than previous methods. To verify our approach on a wider range of coding patterns, we build a large dataset from open projects on GitHub. Experimental results on this large dataset demonstrate that the proposed method outperforms existing approaches in both objective and subjective evaluations. We also perform ablation studies to determine the effects of different parts of CodeAttention.
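At its core, any attention mechanism assigns softmax weights over input tokens. A toy sketch that hand-biases the weights toward code keywords (CodeAttention itself learns where to attend; this only illustrates the idea of favoring code constructs over plain text):

```python
import math

def code_attention(tokens, scores, keyword_bonus=1.0,
                   keywords=frozenset({"if", "for", "while", "return", "def"})):
    """Softmax attention over code tokens with an extra logit for keywords.
    tokens: token strings; scores: one base logit per token."""
    logits = [s + (keyword_bonus if t in keywords else 0.0)
              for t, s in zip(tokens, scores)]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

In a learned model the bonus would come from trained parameters conditioned on the code constructs rather than a fixed keyword set.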

Crowd counting via learning perspective for multi-scale multi-view Web images
Chong SHANG, Haizhou AI, Yi YANG
Front. Comput. Sci. 2019, 13 (3): 579-587. https://doi.org/10.1007/s11704-017-6598-3
Abstract   PDF (822KB)

Estimating the number of people in Web images remains a challenging problem owing to perspective variation, different views, and diverse backgrounds. Existing deep learning models still have difficulties in dealing with scenarios where the size of a person is either extremely large or extremely small. In this paper, we propose a novel perspective-aware architecture to estimate the number of people in a crowd in Web images. Specifically, we use a two-stage framework, in which we first learn a policy network to infer the perspective of the target scene, which outputs a scale label for the subsequent perspective normalization. Next, given the aligned inputs, we further adjust the scale-specific counting network to regress the final count. Experiments on challenging datasets demonstrate that our approach can deal with large perspective variation and achieves state-of-the-art results.

A self-adaptive correction method for perspective distortions of image
Lihua WU, Qinghua SHANG, Yupeng SUN, Xu BAI
Front. Comput. Sci. 2019, 13 (3): 588-598. https://doi.org/10.1007/s11704-018-7269-8
Abstract   PDF (765KB)

Frequently, the shooting angles available to photograph an object are limited, and the resulting images contain perspective distortions. These distortions make it more difficult to perform subsequent tasks such as feature extraction and information identification. This paper proposes a perspective correction method that automatically extracts distortion features through edge detection. Results show that this method is effective in correcting images with perspective distortions: the corrected image has virtually no missing information, clear features, and a high recovery rate.

REVIEW ARTICLE
Cloud service selection using cloud service brokers: approaches and challenges
Meysam VAKILI, Neda JAHANGIRI, Mohsen SHARIFI
Front. Comput. Sci. 2019, 13 (3): 599-617. https://doi.org/10.1007/s11704-017-6124-7
Abstract   PDF (879KB)

Cloud computing users are faced with a wide variety of services to choose from. Consequently, a number of cloud service brokers (CSBs) have emerged to help users in their service selection process. This paper reviews the recent approaches that have been introduced and used for cloud service brokerage and discusses their challenges accordingly. We propose a set of attributes for a CSB to be considered effective. Different CSBs’ approaches are classified as either single service or multiple service models. The CSBs are then assessed, analyzed, and compared with respect to the proposed set of attributes. Based on our studies, CSBs with multiple service models that support more of the proposed effective CSB attributes have wider application in cloud computing environments.

RESEARCH ARTICLE
Enabling entity discovery in indoor commercial environments without pre-deployed infrastructure
Bo YUAN, Xiaolei ZHOU, Xiaoqiang TENG, Deke GUO
Front. Comput. Sci. 2019, 13 (3): 618-636. https://doi.org/10.1007/s11704-017-6601-z
Abstract   PDF (1054KB)

Finding entities of interest in indoor commercial places, such as merchandise in supermarkets and shopping malls, is an essential issue for customers, especially when they are unfamiliar with the indoor environment. This type of location-based indoor service requires comprehensive knowledge of indoor entities, including their locations as well as their semantic information. However, existing indoor localization approaches fail to directly localize these general entities without dedicated devices. This paper first focuses on the problem of discovering large-scale general entities of interest in indoor commercial spaces without pre-deployed infrastructure. We present a unique entity localization approach that leverages the localization results of multiple independent users to accurately determine the locations of the corresponding entities. Our key idea is to exploit short-distance estimation with dead reckoning to guarantee the accuracy of entity localization. We develop a prototype system based on the crowdsourcing method, iScan, and test it in one of the biggest supermarkets in Changsha, China, to validate the performance of our design. Extensive experimental results show that our approach can achieve meter-level accuracy in a single day with 70 participants. Moreover, in a monthly evaluation with 500 effective participants, iScan discovered more than 200 entities and localized approximately 75% of them within 2 m.
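The short-distance dead reckoning the system relies on can be sketched as stride-and-heading integration (a minimal sketch; the 0.7 m stride is an assumed default, and real systems calibrate stride length per user and fuse multiple sensors):

```python
import math

def dead_reckon(start, headings, stride=0.7):
    """Pedestrian dead reckoning: advance one stride per detected step
    along that step's compass heading (in radians).
    Returns the full (x, y) track in meters, starting at `start`."""
    x, y = start
    track = [(x, y)]
    for h in headings:
        x += stride * math.cos(h)
        y += stride * math.sin(h)
        track.append((x, y))
    return track
```

Because heading and stride errors accumulate, dead reckoning is only trustworthy over short distances, which is why the paper restricts it to short-distance estimation between localization fixes.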

Fingerprinting Android malware families
Nannan XIE, Xing WANG, Wei WANG, Jiqiang LIU
Front. Comput. Sci. 2019, 13 (3): 637-646. https://doi.org/10.1007/s11704-017-6493-y
Abstract   PDF (331KB)

The domination of the Android operating system in the market share of smart terminals has engendered increasing threats from malicious applications (apps). Research on Android malware detection has received considerable attention in academia and industry. In particular, studies of malware families have been beneficial to malware detection and behavior analysis. However, identifying the characteristics of malware families and the features that can describe a particular family have been less frequently discussed in existing work. In this paper, we explore the key features that can classify and describe the behaviors of Android malware families, in order to fingerprint the malware families with these features. We present a framework for signature-based key feature construction. In addition, we propose a frequency-based feature elimination algorithm to select the key features. Finally, we construct the fingerprints of ten malware families, comprising twenty key features in three categories. Results of extensive experiments using a Support Vector Machine demonstrate that malware family classification achieves an accuracy of 92% to 99%. The typical behaviors of the malware families are analyzed based on the selected key features. The results demonstrate the feasibility and effectiveness of the presented algorithm and fingerprinting method.
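A frequency-based selection step can be sketched as ranking features by how strongly their occurrence frequency separates one family from the rest (hypothetical code; the paper's elimination algorithm and signature construction differ in detail):

```python
def select_key_features(family_apps, other_apps, top_k=5):
    """Rank features by the gap between their occurrence frequency in one
    malware family versus all other samples; keep the top_k widest gaps.
    Each app is represented as a set of feature names (e.g., permissions)."""
    feats = set().union(*family_apps, *other_apps)

    def freq(apps, f):
        return sum(f in a for a in apps) / len(apps)

    gap = {f: abs(freq(family_apps, f) - freq(other_apps, f)) for f in feats}
    return sorted(feats, key=lambda f: -gap[f])[:top_k]
```

Features that are equally common inside and outside the family carry no discriminative signal and are the natural candidates for elimination.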

Distributed top-k similarity query on big trajectory streams
Zhigang ZHANG, Xiaodong QI, Yilin WANG, Cheqing JIN, Jiali MAO, Aoying ZHOU
Front. Comput. Sci. 2019, 13 (3): 647-664. https://doi.org/10.1007/s11704-018-7234-6
Abstract   PDF (742KB)

Recently, big trajectory data streams have been generated in distributed environments with the popularity of smartphones and other mobile devices. Distributed top-k similarity query, which finds the k trajectories that are most similar to a given query trajectory across all remote sites, is critical in this field. The key challenge in such a query is reducing the communication cost imposed by the limited network bandwidth. This query can be solved naively by sending the query trajectory to all the remote sites, where the pairwise similarities are computed precisely. However, the overall cost, O(n · m), is huge when n or m is large, where n is the size of the query trajectory and m is the number of remote sites. Fortunately, there are cheap ways to estimate pairwise similarity that filter out some trajectories in advance without precise computation. To overcome this challenge, we devise two general frameworks into which concrete distance measures can be plugged. The former uses two bounds (an upper and a lower bound), while the latter uses only the lower bound. Moreover, we introduce detailed implementations for two representative distance measures, the Euclidean and DTW distances, after deriving the lower and upper bounds for the former framework and the lower bound for the latter. Theoretical analysis and extensive experiments on real-world datasets demonstrate the efficiency of the proposed methods.
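The filter-then-verify idea behind both frameworks can be shown in single-machine form: compute the exact DTW distance only when a cheap lower bound fails to prune the candidate (an illustrative sketch; the paper's bounds and distributed protocol are its own):

```python
def dtw(a, b):
    """Exact DTW with squared point cost, O(n*m) dynamic programming."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = (a[i - 1] - b[j - 1]) ** 2 + min(D[i - 1][j], D[i][j - 1],
                                                       D[i - 1][j - 1])
    return D[n][m]

def lb_cheap(q, c):
    """O(n) lower bound on dtw(q, c): every q[i] must align with some c[j],
    so it pays at least its squared distance to the range [min(c), max(c)]."""
    lo, hi = min(c), max(c)
    s = 0.0
    for x in q:
        if x > hi:
            s += (x - hi) ** 2
        elif x < lo:
            s += (x - lo) ** 2
    return s

def topk_filtered(query, candidates, k):
    """Top-k most similar 1-D trajectories, computing exact DTW only when
    the lower bound cannot rule a candidate out."""
    best = []  # sorted list of (distance, index), at most k entries
    for idx, c in enumerate(candidates):
        if len(best) == k and lb_cheap(query, c) >= best[-1][0]:
            continue  # bound already beats the current k-th best: prune
        d = dtw(query, c)
        best = sorted(best + [(d, idx)])[:k]
    return best
```

In the distributed setting the same bounds let a remote site skip transmitting or precisely comparing trajectories that cannot make the top k, which is where the communication savings come from.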

LETTER
16 articles
