Jan 2016, Volume 10 Issue 1
    

  • EDITORIAL
    Zhi-Hua ZHOU
  • REVIEW ARTICLE
    Zhaoning ZHANG, Dongsheng LI, Kui WU

    The global data center market has grown explosively in recent years. As the market grows, so does the demand for fast provisioning of virtual resources to support elastic, manageable, and economical computing over the cloud. Fast provisioning of large-scale virtual machines (VMs), in particular, is critical to guaranteeing quality of service (QoS). In this paper, we systematically review the existing VM provisioning schemes and classify them into three main categories. We discuss the features and research status of each category, and introduce two recent solutions, VMThunder and VMThunder+, both of which can provision hundreds of VMs in seconds.

  • REVIEW ARTICLE
    Yingying ZHU, Cong YAO, Xiang BAI

    Text, as one of the most influential inventions of humanity, has played an important role in human life from ancient times to the present. The rich and precise information embodied in text is very useful in a wide range of vision-based applications; therefore, text detection and recognition in natural scenes have become important and active research topics in computer vision and document analysis. In recent years especially, the community has seen a surge of research efforts and substantial progress in these fields, though a variety of challenges (e.g., noise, blur, distortion, occlusion, and variation) still remain. The purposes of this survey are three-fold: 1) to introduce up-to-date works, 2) to identify state-of-the-art algorithms, and 3) to predict potential research directions in the future. Moreover, this paper provides comprehensive links to publicly available resources, including benchmark datasets, source code, and online demos. In summary, this literature review can serve as a good reference for researchers in the areas of scene text detection and recognition.

  • RESEARCH ARTICLE
    Zhibin YANG, Jean-Paul BODEVEIX, Mamoun FILALI, Kai HU, Yongwang ZHAO, Dianfu MA

    SIGNAL belongs to the family of synchronous languages, which are widely used in the design of safety-critical real-time systems such as avionics, space systems, and nuclear power plants. This paper reports a compiler prototype for SIGNAL. Compared with the existing SIGNAL compiler, we propose a new intermediate representation (named S-CGA, a variant of clocked guarded actions) so that more synchronous programs can be integrated into our compiler prototype in the future. The front-end of the compiler, i.e., the translation from SIGNAL to S-CGA, is presented, and the proof of semantics preservation is mechanized in the theorem prover Coq. Moreover, we present the back-end of the compiler, including sequential code generation and multi-threaded code generation with time-predictable properties. With the rising importance of multi-core processors in safety-critical embedded systems and cyber-physical systems (CPS), there is a growing need for model-driven generation of multi-threaded code and thus for mapping onto multi-core platforms. We propose a time-predictable multi-core architecture model in the architecture analysis and design language (AADL), and map the multi-threaded code onto this model.

  • RESEARCH ARTICLE
    Yang ZHANG, Xinyu FENG

    The happens-before memory model (HMM) is the basis of the Java memory model (JMM). Although HMM itself is simple, some complex axioms have to be introduced in JMM to prevent causality loops, which cause absurd out-of-thin-air reads that may break the type safety and security guarantees of Java. The resulting JMM is complex and difficult to understand. It also exhibits many counter-intuitive behaviors, as demonstrated by the “ugly examples” of Aspinall and Ševčík [1]. Furthermore, HMM (and JMM) specifies only which execution traces are acceptable, but says nothing about how these traces are generated. This gap makes static reasoning about programs difficult.

    In this paper we present OHMM, an operational variation of HMM. The model is specified by giving an operational semantics to a language running on an abstract machine designed to simulate HMM. Thanks to its generative nature, the model naturally prevents out-of-thin-air reads. On the other hand, it uses a novel replay mechanism to allow instructions to be executed multiple times, which can be used to model many useful speculations and optimizations. The model is weaker than JMM for lockless programs, and thus can accommodate more optimizations, such as the reordering of independent memory accesses, which is not valid in JMM. Program behaviors are more natural in this model than in JMM, and many of the counter-intuitive examples in JMM are no longer valid here. We hope OHMM can serve as the basis for new memory models for Java-like languages.
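    For readers unfamiliar with out-of-thin-air reads, the sketch below reproduces the classic two-thread illustration from the memory-model literature (it is not code from the paper; the value 42 is the usual arbitrary placeholder). A plain happens-before model cannot exclude the outcome in which both reads return 42, since each read can be justified circularly by the other thread's write; a generative model such as OHMM never produces such a trace.

    ```python
    # Illustrative only: Python threads do not actually exhibit this behavior;
    # the point is the shape of the program and the outcome that a pure
    # happens-before model fails to rule out.
    import threading

    x = 0
    y = 0

    def thread1():
        global y
        r1 = x      # read x
        y = r1      # publish the value just read

    def thread2():
        global x
        r2 = y      # read y
        x = r2      # publish the value just read

    # Out-of-thin-air outcome that must be forbidden: r1 == r2 == 42,
    # even though no instruction in the program ever writes 42.
    t1 = threading.Thread(target=thread1)
    t2 = threading.Thread(target=thread2)
    t1.start(); t2.start(); t1.join(); t2.join()
    ```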

  • RESEARCH ARTICLE
    Dingding LI, Xiaofei LIAO, Hai JIN, Yong TANG, Gansen ZHAO

    Storage class memory (SCM) has the potential to revolutionize the memory landscape with its non-volatile and byte-addressable properties. However, there is little published work exploring its use in modern virtualized cloud infrastructure. We propose SCM-vWrite, a novel architecture designed around SCM, to ease the performance interference in the virtualized storage subsystem. Through a case study on a typical virtualized cloud system, we first describe why current writeback mechanisms are not suitable for a virtualized environment, and then design and implement SCM-vWrite to address this problem. We also use typical benchmarks and realistic workloads to evaluate its performance. Compared with the traditional method on a conventional architecture, the experimental results show that SCM-vWrite coordinates writeback flows more effectively among multiple co-located guest operating systems, achieving better disk I/O performance without any loss of reliability.

  • RESEARCH ARTICLE
    Yinglong ZHANG, Cuiping LI, Chengwang XIE, Hong CHEN

    Link-based similarity measures play a significant role in many graph-based applications; consequently, measuring node similarity in a graph is a fundamental problem of graph data mining. Personalized PageRank (PPR) and SimRank (SR) have emerged as the most popular and influential link-based similarity measures. Recently, a novel link-based similarity measure, penetrating rank (P-Rank), which enriches SR, was proposed. In practice, PPR, SR, and P-Rank scores are calculated by iterative methods, and the overhead of the calculation grows with the number of iterations. Ideally, computing similarity within the minimum number of iterations should be sufficient to guarantee a desired accuracy. However, the existing upper bounds are too coarse to be useful in general. Therefore, in this paper we focus on designing accurate and tight upper bounds for PPR, SR, and P-Rank. Our upper bounds are designed based on the following intuition: the smaller the difference between two consecutive iteration steps, the smaller the difference between the theoretical and iterative similarity scores. Furthermore, we demonstrate the effectiveness of our upper bounds in the scenario of top-k similar node queries, where they help accelerate query processing. We also run a comprehensive set of experiments on real-world data sets to verify the effectiveness and efficiency of our upper bounds.
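    As a concrete reference point, here is a minimal sketch of the standard PPR power iteration with a stopping rule driven by the gap between consecutive iterates, the quantity the intuition above is built on. The function name, the restart probability c, and the column-stochastic formulation are illustrative assumptions, not the paper's notation or its bounds.

    ```python
    import numpy as np

    def personalized_pagerank(P, s, c=0.15, eps=1e-8, max_iter=1000):
        """Iterative PPR sketch.

        P : column-stochastic transition matrix (n x n)
        s : restart (personalization) vector summing to 1
        c : restart probability
        """
        x = s.copy()
        for _ in range(max_iter):
            x_next = (1.0 - c) * (P @ x) + c * s
            gap = np.abs(x_next - x).sum()  # difference between consecutive iterates
            x = x_next
            if gap < eps:                   # small gap => iterate close to the exact score
                break
        return x
    ```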

  • RESEARCH ARTICLE
    Dong LIU, Quanyuan WU, Weihong HAN, Bin ZHOU

    Users of social media sites can hold more than one account. These identities are pseudo-anonymous, and some users abuse multiple accounts to perform undesirable actions, such as posting false or misleading remarks, or comments that praise or defame the work of others. Detecting multiple user accounts that are controlled by a single individual or organization is therefore important. Herein, we define this problem as sockpuppet gang (SPG) detection. First, we analyze user sentiment orientation towards topics based on emotional phrases extracted from their posted comments. Then we evaluate the similarity between the sentiment orientations of pairs of user accounts, and build a similar-orientation network (SON), in which each vertex represents a user account on a social media site. In an SON, an edge exists only if the two user accounts have similar sentiment orientations towards most topics. Because the boundary between detected SPGs may be indistinct, we analyze account posting behavior features and propose a multiple random walk method to iteratively re-measure the weight of each edge. Finally, we adopt multiple community detection algorithms to detect SPGs in the network. User accounts in the same SPG are considered to be controlled by the same individual or organization. In our experiments on real-world datasets, our method shows better performance than other contemporary methods.
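    A minimal sketch of building a similar-orientation network from per-topic sentiment orientations, under illustrative assumptions: orientations are encoded as +1/0/-1 per topic, and a hypothetical agreement threshold stands in for "similar orientations towards most topics". The paper's actual similarity measure and edge rule may differ.

    ```python
    import numpy as np
    import networkx as nx

    def build_son(orientations, agree_fraction=0.7):
        """Similar-orientation network (SON) sketch.

        orientations   : dict mapping account id -> vector of per-topic
                         sentiment orientations (+1, 0, or -1)
        agree_fraction : hypothetical threshold; connect two accounts only if
                         they agree on at least this fraction of the topics
                         on which both expressed an orientation
        """
        g = nx.Graph()
        g.add_nodes_from(orientations)
        accounts = list(orientations)
        for i, u in enumerate(accounts):
            for v in accounts[i + 1:]:
                a, b = np.asarray(orientations[u]), np.asarray(orientations[v])
                shared = (a != 0) & (b != 0)            # topics both commented on
                if shared.sum() == 0:
                    continue
                agree = (a[shared] == b[shared]).mean() # fraction of shared topics with same orientation
                if agree >= agree_fraction:
                    g.add_edge(u, v, weight=agree)
        return g
    ```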

  • RESEARCH ARTICLE
    Han XUE, Bing QIN, Ting LIU, Shen LIU

    Existing studies on hierarchy construction mainly focus on text corpora and indiscriminately mix numerous topics, thus increasing the possibility of knowledge acquisition bottlenecks and misconceptions. To address these problems and provide a comprehensive and in-depth representation of domain-specific topics, we propose a novel topic hierarchy construction method with real-time updates. This method combines heterogeneous evidence from multiple sources, including folksonomy and encyclopedia, separately in both the initial topic hierarchy construction and the topic hierarchy improvement. Results of comprehensive experiments indicate that the proposed method significantly outperforms state-of-the-art methods (t-test, p-value < 0.0001); in particular, recall improves by 20.4% to 38.7%.

  • RESEARCH ARTICLE
    Xuan DONG, Jiangtao WEN

    We study the problem of low-lighting image enhancement. Previous enhancement methods for images captured under low lighting conditions usually fail to consider image degradation during image formation; as a result, the lost contrast cannot be recovered after enhancement. This paper adaptively recovers the contrast and adjusts the exposure of low-lighting images. Our first contribution is a model of image degradation under low lighting conditions. Second, we propose the local maximum color value prior: in most regions of a well-exposed image, the local maximum color value of a pixel is very high. By combining the image degradation model with the local maximum color value prior, we recover the un-degraded images under low lighting conditions. Last, an adaptive exposure adjustment module is proposed to obtain the final result. We show that our approach achieves better enhancement than popular image editing tools and academic algorithms.
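    A minimal sketch of the quantity the prior is stated over: the maximum value over the three color channels within a local patch. The patch size and function name are illustrative assumptions rather than the paper's parameters; the point is that this map stays close to 1 in well-exposed regions, so low values indicate the degradation the method tries to invert.

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter

    def local_max_color(img, patch=15):
        """Local maximum color value map.

        img   : H x W x 3 image with values in [0, 1]
        patch : hypothetical local neighborhood size
        """
        max_channel = img.max(axis=2)                    # per-pixel max over R, G, B
        return maximum_filter(max_channel, size=patch)   # max over the local patch
    ```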

  • RESEARCH ARTICLE
    Sio Kei IM, Mohammad Mahdi GHANDI

    Many modern video encoders use the Lagrangian rate-distortion optimization (RDO) algorithm for mode decisions during the compression procedure. For each encoding stage, this approach involves minimizing a cost that is a function of rate, distortion, and a multiplier called Lambda. This paper proposes to improve the RDO process by applying two modifications. The first modification increases the accuracy of rate estimation by computing a non-integer number of bits for the arithmetic coding of the syntax elements; this leads to a more accurate cost computation and therefore a better mode decision. The second modification searches for and adjusts the value of Lambda based on the characteristics of each coding stage. For the encoder used, this paper proposes to search multiple values of Lambda for the intra-4×4 mode decision. Moreover, a simple shift in the Lambda value is proposed for motion estimation. Each of these modifications offers a certain gain in RDO performance, and, when all are combined, an average bit-rate saving of up to 7.0% can be achieved for the H.264/AVC codec, while the same concept is applicable to the H.265/HEVC codec as well. The extra complexity is contained to a certain level and is adjustable according to the processing resources available.
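    For reference, the Lagrangian mode decision referred to above picks the candidate minimizing the standard cost J = D + Lambda * R. The sketch below uses generic names, and the fractional rate estimate is an illustrative assumption in the spirit of the first modification; it is not the encoder's actual implementation.

    ```python
    def best_mode(candidates, lam):
        """Lagrangian mode decision sketch.

        candidates : iterable of (mode, distortion, rate_in_bits) tuples,
                     where rate_in_bits may be a fractional estimate of the
                     arithmetic-coded bits (per the paper's first modification)
        lam        : the Lagrange multiplier (Lambda)
        """
        # Minimize J = D + lam * R over all candidate modes.
        return min(candidates, key=lambda m: m[1] + lam * m[2])

    # Usage sketch: the second modification would call best_mode with several
    # lam values per coding stage and keep the best overall result.
    ```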

  • REVIEW ARTICLE
    Ahmad ALI, Abdul JALIL, Jianwei NIU, Xiaoke ZHAO, Saima RATHORE, Javed AHMED, Muhammad AKSAM IFTIKHAR

    Visual object tracking (VOT) is an important subfield of computer vision. It has widespread application domains and is considered an important part of surveillance and security systems. VOT aims to find the position of the target in the image coordinates of video frames. In doing so, it faces many challenges, such as noise, clutter, occlusion, rapid changes in object appearance, highly maneuvered (complex) object motion, and illumination changes. In recent years, VOT has made significant progress thanks to the availability of low-cost, high-quality video cameras as well as fast computational resources, and many modern techniques have been proposed to handle these challenges. This article introduces the readers to 1) VOT and its applications in other domains, 2) the different issues that arise in it, 3) various classical as well as contemporary approaches to object tracking, 4) evaluation methodologies for VOT, and 5) online resources, i.e., annotated datasets and source code available for various tracking techniques.

  • RESEARCH ARTICLE
    Juanjuan ZHAO, Guohua JI, Xiaohong HAN, Yan QIANG, Xiaolei LIAO

    To address the problem of incomplete pulmonary parenchyma segmentation with traditional methods, this paper proposes a novel automated segmentation method based on an eight-neighbor region growing algorithm with left-right scanning and four-corner rotating and scanning. The proposed method consists of four main stages: image binarization, rough segmentation of the lung, image denoising, and lung contour refining. First, the images are binarized and the regions of interest are extracted. After that, rough segmentation of the lung is performed with a general region growing method. Then the improved eight-neighbor region growing is used to remove noise in the upper, middle, and bottom regions of the lung. Finally, erosion and dilation operations are utilized to smooth the lung boundary. The proposed method was validated on chest positron emission tomography-computed tomography (PET-CT) data of 30 cases from a hospital in Shanxi, China. Experimental results show that our method achieves an average volume overlap ratio of 96.21±0.39% with the manual segmentation results. Compared with existing methods, the proposed algorithm segments the lung in PET-CT images more efficiently and accurately.
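    A minimal sketch of plain eight-neighbor region growing on a binarized image, the building block the method extends with left-right and four-corner scanning; seed selection and the scanning scheme itself are omitted, and the function name is illustrative.

    ```python
    from collections import deque
    import numpy as np

    def region_grow_8(binary, seed):
        """Generic 8-neighbor region growing on a 2-D binary mask.

        binary : 2-D array of 0/1 values after image binarization
        seed   : (row, col) starting point inside the region of interest
        """
        h, w = binary.shape
        grown = np.zeros_like(binary, dtype=bool)
        grown[seed] = True
        queue = deque([seed])
        # The eight neighbor offsets define the connectivity used for growing.
        neighbors = [(-1, -1), (-1, 0), (-1, 1),
                     ( 0, -1),          ( 0, 1),
                     ( 1, -1), ( 1, 0), ( 1, 1)]
        while queue:
            r, c = queue.popleft()
            for dr, dc in neighbors:
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and binary[nr, nc] and not grown[nr, nc]:
                    grown[nr, nc] = True
                    queue.append((nr, nc))
        return grown
    ```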