Nov 2015, Volume 9 Issue 6
    

  • REVIEW ARTICLE
    Yili GONG, Wei HUANG, Wenjie WANG, Yingchun LEI

    Software defined networking (SDN) achieves network routing management with logically centralized control software that decouples the network data plane from the control plane. This new design paradigm greatly accelerates network innovation. This paper introduces the background of SDN technology and its design principles, explains how SDN differs from traditional network architectures, and summarizes research efforts on SDN architecture, components, and applications. Based on observations of current SDN development, this paper analyzes the potential driving forces behind SDN deployment and its future trends.

  • RESEARCH ARTICLE
    Zixiao JIA, Jiwei HUANG, Chuang LIN

    Content-centric network (CCN) is a new Internet architecture in which content is treated as the primitive of communication. In CCN, routers are equipped with content stores that act as caches for frequently requested content. With this design, the Internet can provide content distribution services without any application-layer support.

    In addition, as caches are integrated into routers, the overall performance of CCN is deeply affected by caching efficiency. In this paper, we aim to gain insight into how caches should be designed to maintain high performance in a cost-efficient way. We model the two-layer cache hierarchy composed of CCN routers as a two-dimensional discrete-time Markov chain, and develop an efficient algorithm to calculate the hit ratios of these caches. Simulations validate the accuracy of our modeling method and convey meaningful information that helps us better understand the caching mechanism of CCN.
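
    To make the two-layer caching idea concrete, the following is a minimal sketch that simulates an edge cache in front of a core cache under a Zipf request stream and reports per-layer hit ratios. It is an illustrative baseline only; the paper's own method is an analytical two-dimensional discrete-time Markov chain, which this sketch does not reproduce, and all parameters (cache sizes, Zipf exponent) are assumed.

    ```python
    # Minimal sketch: per-layer hit ratios of a two-layer LRU cache hierarchy
    # (edge cache in front of a core cache) under a Zipf request stream.
    import bisect
    import random
    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = OrderedDict()

        def access(self, key):
            """Return True on a hit; on a miss, insert key and evict the LRU entry."""
            if key in self.store:
                self.store.move_to_end(key)
                return True
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)
            self.store[key] = True
            return False

    def make_zipf_sampler(n_items, alpha, seed=0):
        # Precompute cumulative Zipf weights; sample by binary search.
        rng = random.Random(seed)
        cum, total = [], 0.0
        for i in range(1, n_items + 1):
            total += 1.0 / i ** alpha
            cum.append(total)
        return lambda: bisect.bisect(cum, rng.random() * total)

    sample = make_zipf_sampler(n_items=5000, alpha=0.8)
    edge, core = LRUCache(50), LRUCache(200)
    hits_edge = hits_core = n_req = 0
    for _ in range(100_000):
        item = sample()
        n_req += 1
        if edge.access(item):
            hits_edge += 1
        elif core.access(item):   # edge misses fall through to the core cache
            hits_core += 1
    print(f"edge hit ratio {hits_edge/n_req:.3f}, core hit ratio {hits_core/n_req:.3f}")
    ```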

  • RESEARCH ARTICLE
    Lailong LUO, Deke GUO, Wenxin LI, Tian ZHANG, Junjie XIE, Xiaolei ZHOU

    In large-scale data centers, many servers are interconnected via a dedicated networking structure so as to satisfy specific design goals, such as low equipment cost, high network capacity, and incremental expansion. The topological properties of a networking structure are critical factors that dominate the performance of the entire data center. Existing networking structures are either fully random or completely structured. Although such structures exhibit advantages in given aspects, they suffer obvious shortcomings in other essential respects. In this paper, we design a hybrid topology, called R3, which is a compound graph of structured and random topologies. It employs a random regular graph as a unit cluster and connects many such clusters by means of a structured topology, the generalized hypercube. Consequently, the hybrid topology seamlessly combines the advantages of structured and random topologies. Meanwhile, a coloring-based algorithm is proposed for R3 to enable fast and accurate routing. R3 possesses many attractive characteristics, such as modularity and expansibility, at the cost of increasing the degree of any node by at most one. Comprehensive evaluation results show that our hybrid topology possesses excellent topological properties and network performance.
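
    As an illustration of the compound construction, the sketch below (using networkx) joins random regular graphs along the edges of a plain binary hypercube, dedicating one local node per dimension so that no node's degree grows by more than one. The generalized-hypercube wiring and the coloring-based routing of R3 are not reproduced here; the cluster count, cluster size, and degree are assumed for illustration.

    ```python
    # Assumption-heavy sketch in the spirit of R3: random regular graphs as
    # unit clusters, joined along the edges of a binary hypercube.
    import itertools
    import networkx as nx

    def r3_like(dims=3, cluster_size=8, degree=3, seed=0):
        G = nx.Graph()
        for c in range(2 ** dims):
            # One random regular graph per cluster; nodes tagged (cluster, id).
            rg = nx.random_regular_graph(degree, cluster_size, seed=seed + c)
            G.add_edges_from(((c, u), (c, v)) for u, v in rg.edges)
        # Link clusters whose ids differ in exactly one bit; dedicating one
        # local node per dimension raises any node's degree by at most one.
        for a, b in itertools.combinations(range(2 ** dims), 2):
            if bin(a ^ b).count("1") == 1:
                dim = (a ^ b).bit_length() - 1
                G.add_edge((a, dim), (b, dim))
        return G

    G = r3_like()
    print(G.number_of_nodes(), G.number_of_edges(), nx.is_connected(G))
    ```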

  • RESEARCH ARTICLE
    Bing YU, Yanni HAN, Hanning YUAN, Xu ZHOU, Zhen XU

    Cloud computing, as an emerging technology, promises to provide reliable services on demand. However, serving mobile requirements without dynamic and adaptive migration may hurt the performance of deployed services. In this paper, we propose MAMOC, a cost-effective approach for selecting servers and migrating services to attain enhanced QoS more economically. The goal of MAMOC is to minimize the total operating cost while guaranteeing the constraints on resource demands, storage capacity, access latency, and economic factors, including selling price and reputation grade. First, we devise an optimization model with multiple constraints, describing the relationship between the operating cost and the above constraints. Second, a normalization method is adopted to calculate the operating cost of each candidate VM. We then present in detail the online algorithm MAMOC, which determines the optimal server. To evaluate the performance of our proposal, we conducted extensive simulations on three typical network topologies and a realistic data center network. The results show that MAMOC is scalable and robust as the numbers of requests and VMs grow in a cloud environment. Moreover, MAMOC decreases the competitive ratio by identifying optimal migration paths, while satisfying the SLA constraints as far as possible.
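
    The following is a minimal sketch of the normalization step described above: several attributes are folded into a single normalized operating cost per candidate VM, and the cheapest candidate is selected. The attribute names, values, and weights are assumed for illustration; this is not MAMOC's online algorithm.

    ```python
    # Minimal sketch (hypothetical attributes and weights): score candidate
    # VMs by a normalized weighted cost and pick the cheapest.
    def normalize(values, lower_is_better=True):
        lo, hi = min(values), max(values)
        if hi == lo:
            return [0.0] * len(values)
        scaled = [(v - lo) / (hi - lo) for v in values]
        return scaled if lower_is_better else [1.0 - s for s in scaled]

    candidates = [  # (vm_id, access_latency_ms, selling_price, reputation_grade)
        ("vm-a", 20.0, 0.12, 4.5),
        ("vm-b", 35.0, 0.08, 3.9),
        ("vm-c", 15.0, 0.20, 4.8),
    ]
    weights = (0.5, 0.3, 0.2)  # assumed weights: latency, price, reputation
    lat = normalize([c[1] for c in candidates])
    price = normalize([c[2] for c in candidates])
    rep = normalize([c[3] for c in candidates], lower_is_better=False)
    costs = {c[0]: weights[0] * lat[i] + weights[1] * price[i] + weights[2] * rep[i]
             for i, c in enumerate(candidates)}
    best = min(costs, key=costs.get)
    print(costs, "->", best)
    ```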

  • RESEARCH ARTICLE
    Hua MA, Zhigang HU

    Discovering trustworthy services is a challenge for potential users because of the deficiency of usage experience and the information overload of quality-of-experience (QoE) evaluations from consumers. Aiming at the limitations of traditional interval numbers in measuring the trustworthiness of services, this paper proposes a novel service recommendation approach for potential users based on interval numbers of four parameters (INF). In this approach, a trustworthiness cloud model is established to identify the eigenvalues of INF via a backward cloud generator, and a new formula for the INF possibility degree, based on geometric analysis, is presented to ensure high calculation precision. To select highly valuable QoE evaluations, the similarity of client-side features between the potential user and consumers is calculated, and the multi-attribute trustworthiness values are aggregated into INF by the fuzzy analytic hierarchy process. By ranking the INF, the trustworthiness of candidate services is ordered, and trustworthy services are chosen to recommend to the potential user. Experiments on a real-world dataset show that our approach improves the recommendation accuracy of trustworthy services compared with other approaches, which contributes to solving the cold-start and information-overload problems in service recommendation.
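
    As a simplified illustration of possibility-degree ranking, the sketch below ranks services by the classical possibility degree for two-parameter intervals [lo, hi]. The paper's INF uses four parameters and a geometry-based formula, neither of which is reproduced here; the service intervals are made up.

    ```python
    # Minimal sketch: possibility degree for ranking classical two-parameter
    # interval numbers, a simplification of the paper's four-parameter INF.
    def possibility_degree(a, b):
        """P(A >= B) for intervals A = (a_lo, a_hi), B = (b_lo, b_hi)."""
        (a_lo, a_hi), (b_lo, b_hi) = a, b
        span = (a_hi - a_lo) + (b_hi - b_lo)
        if span == 0:                        # both intervals are points
            return 1.0 if a_lo >= b_lo else 0.0
        return min(max((a_hi - b_lo) / span, 0.0), 1.0)

    services = {"s1": (0.60, 0.80), "s2": (0.55, 0.70), "s3": (0.75, 0.90)}
    # Rank services by their average possibility of dominating the others.
    rank = {k: sum(possibility_degree(v, w)
                   for j, w in services.items() if j != k) / (len(services) - 1)
            for k, v in services.items()}
    print(sorted(rank.items(), key=lambda kv: -kv[1]))
    ```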

  • RESEARCH ARTICLE
    Quanqing XU, Rajesh Vellore ARUMUGAM, Khai Leong YONG, Yonggang WEN, Yew-Soon ONG, Weiya XI

    Big data is an emerging term in the storage industry, referring to data analytics on big storage, i.e., Cloud-scale storage. In Cloud-scale (or EB-scale) file systems, balancing request workloads across a metadata server cluster is critical for avoiding performance bottlenecks and improving quality of service. Many good approaches have been proposed for load balancing in distributed file systems. Some of them focus on global namespace balancing, making the metadata distribution across metadata servers as uniform as possible. However, they do not work well under skewed request distributions, which impair load balancing but simultaneously increase the effectiveness of caching and replication. In this paper, we propose Cloud Cache (C2), an adaptive and scalable load balancing scheme for metadata server clusters in EB-scale file systems. It combines an adaptive cache diffusion scheme with an adaptive replication scheme to cope with the request load balancing problem, and it can be integrated into existing distributed metadata management approaches to efficiently improve their load balancing performance. C2 runs as follows: 1) adaptive cache diffusion runs first: if a node is overloaded, load-shedding is used; otherwise, load-stealing is used; 2) the adaptive replication scheme runs second: if one very popular metadata item (or at least two such items) causes a node to be overloaded, the adaptive replication scheme is applied, in which the very popular item is not split across several nodes by cache diffusion because of its knapsack property. Experimental results from trace-driven simulations demonstrate the efficiency and scalability of C2.
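
    The sketch below is a self-contained, assumption-heavy illustration of the dispatch order described above: diffusion first (shed load when overloaded, steal load when underloaded), then replication when a single very popular item alone overloads a node. All thresholds, data structures, and policies are hypothetical stand-ins for C2's.

    ```python
    # Minimal sketch (all names and thresholds hypothetical): the dispatch
    # order of diffusion vs. replication that the abstract describes.
    class Node:
        def __init__(self, name, items):
            self.name, self.items = name, dict(items)   # item -> request load
            self.replicas = set()

        def load(self):
            return sum(self.items.values())

    def balance(nodes, high=100.0, low=20.0, hot_fraction=0.5):
        for node in nodes:
            load = node.load()
            if load > high:
                hot, hot_load = max(node.items.items(), key=lambda kv: kv[1])
                if hot_load >= hot_fraction * load:
                    # A single huge item cannot usefully be diffused (its
                    # knapsack property), so replicate it on a cold node.
                    target = min(nodes, key=Node.load)
                    target.replicas.add(hot)
                    node.items[hot] = hot_load / 2      # replica absorbs half
                else:
                    # Load-shedding: move the coldest half of the items away.
                    cold = sorted(node.items, key=node.items.get)[:len(node.items) // 2]
                    target = min(nodes, key=Node.load)
                    for it in cold:
                        target.items[it] = node.items.pop(it)
            elif load < low:
                # Load-stealing: pull the hottest item from the busiest node.
                victim = max(nodes, key=Node.load)
                if victim is not node and victim.items:
                    it = max(victim.items, key=victim.items.get)
                    node.items[it] = victim.items.pop(it)

    cluster = [Node("n1", {"a": 90, "b": 30}), Node("n2", {"c": 5}), Node("n3", {"d": 40})]
    balance(cluster)
    print({n.name: n.load() for n in cluster})
    ```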

  • RESEARCH ARTICLE
    Xiaoyan WANG, Tao YANG, Jinchuan CHEN, Long HE, Xiaoyong DU

    The volume of RDF data has increased dramatically in recent years, and cloud computing platforms like Hadoop are considered a good choice for processing queries over huge data sets because of their excellent scalability. Previous work on evaluating SPARQL queries with Hadoop has mainly focused on reducing the number of joins through careful splitting of HDFS files and algorithms for generating Map/Reduce jobs. However, the way RDF data is partitioned also affects system performance. Specifically, a good partitioning solution greatly reduces, or even totally avoids, cross-node joins, and significantly cuts the cost of query evaluation. Based on HadoopDB, this work processes SPARQL queries in a hybrid architecture, where Map/Reduce takes charge of the computing tasks and RDF query engines like RDF-3X store the data and execute join operations. Based on an analysis of query workloads, this work proposes a novel algorithm for automatically partitioning RDF data, together with an approximate solution for physically placing the partitions in order to reduce data redundancy. It also discusses how to achieve a good trade-off between query evaluation efficiency and data redundancy. All of the proposed approaches have been evaluated by extensive experiments over large RDF data sets.
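
    As a point of reference for the partitioning problem, the sketch below shows the common subject-hash baseline: all triples sharing a subject land on one node, so star-shaped (subject-centered) joins never cross nodes. The paper's workload-aware partitioning and redundancy-reducing placement go beyond this baseline.

    ```python
    # Minimal sketch: subject-based hash partitioning of RDF triples, the
    # usual baseline for avoiding cross-node star joins.
    import hashlib

    def partition(triple, n_nodes):
        subject, _predicate, _object = triple
        digest = hashlib.md5(subject.encode()).hexdigest()
        return int(digest, 16) % n_nodes

    triples = [
        ("ex:alice", "ex:knows",   "ex:bob"),
        ("ex:alice", "ex:worksAt", "ex:acme"),
        ("ex:bob",   "ex:knows",   "ex:carol"),
    ]
    shards = {}
    for t in triples:
        shards.setdefault(partition(t, n_nodes=4), []).append(t)
    print(shards)   # both ex:alice triples land on the same shard
    ```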

  • REVIEW ARTICLE
    Jian HU, Tun LI, Sikun LI

    The increasing complexity of digital systems has led designers to adopt abstract system-level modeling (SLM). However, SLM brings new challenges for verification engineers, who must guarantee functional equivalence between SLM specifications and lower-level implementations such as transaction-level modeling (TLM). This paper proposes a novel method for equivalence checking between SLM and TLM based on coverage-directed simulation. Our method randomly simulates an SLM model and uses a satisfiability modulo theories (SMT) solver to generate stimuli for the uncovered areas, guided by a composite coverage metric (code coverage and functional coverage). We then run all the generated stimuli (random and directed) on both the SLM and TLM designs, while comparing selected observation variables to evaluate the equivalence between SLM and TLM. Promising experimental results show that our equivalence checking method is more efficient, with lower simulation cost.
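
    To illustrate the solver-directed step, the sketch below uses the z3 SMT solver (pip install z3-solver) to produce a stimulus for a branch that random simulation failed to cover. The path condition is a made-up example; the paper's encoding of SLM designs and its composite coverage metric are not reproduced here.

    ```python
    # Minimal sketch: ask an SMT solver for an input that reaches an
    # uncovered branch, the essence of coverage-directed stimulus generation.
    from z3 import Solver, Int, And, sat

    # Suppose random stimuli never reached the branch guarded by
    #   if (a > 100 and a + b == 42): ...
    a, b = Int("a"), Int("b")
    s = Solver()
    s.add(And(a > 100, a + b == 42))    # path condition of the uncovered branch
    if s.check() == sat:
        m = s.model()
        stimulus = {"a": m[a].as_long(), "b": m[b].as_long()}
        print("directed stimulus:", stimulus)
    else:
        print("branch is unreachable")
    ```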

  • RESEARCH ARTICLE
    Xi CHANG, Zhuo ZHANG, Peng ZHANG, Jianxin XUE, Jianjun ZHAO

    Predictive trace analysis (PTA), a static trace analysis technique for concurrent programs, offers powerful support for finding concurrency errors unseen in a previous program execution. Existing PTA techniques face considerable challenges in scaling to large traces that contain numerous critical events. One main reason is that an analyzed trace includes redundant memory access events and threads that cannot contribute to discovering any errors beyond the candidates already found, as well as many residual synchronization events that, even after the redundant events are removed, still burden PTA when it checks whether the candidate errors are feasible. Removing both kinds of events from the trace can significantly improve the scalability of PTA without affecting the quality of its results. In this paper, we propose a biphasic trace filter approach, BIFER for short, to filter out these redundant and residual events and thereby improve the scalability of PTA in exposing general concurrency errors. In addition, we design a model that captures the lock history and the happens-before history of each thread in two ways, enabling efficient filtering. We implement a prototype tool, BIFER, for Java programs on top of a predictive trace analysis framework. Experiments show that BIFER improves the scalability of PTA across all of the analyzed traces.
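
    The sketch below illustrates one classical redundancy filter of the kind described above: memory-access events on locations touched by only one thread are dropped, since they cannot participate in any cross-thread error, while synchronization events are kept. The event format is assumed; BIFER's lock-history and happens-before model is not reproduced here.

    ```python
    # Minimal sketch: drop thread-local memory accesses from a trace, a
    # classical redundancy filter for predictive trace analysis.
    from collections import defaultdict

    def filter_thread_local(trace):
        """trace: list of (thread_id, op, location); op in {'r','w','acq','rel'}."""
        threads_per_loc = defaultdict(set)
        for tid, op, loc in trace:
            if op in ("r", "w"):
                threads_per_loc[loc].add(tid)
        return [(tid, op, loc) for tid, op, loc in trace
                if op not in ("r", "w") or len(threads_per_loc[loc]) > 1]

    trace = [
        ("t1", "w", "x"), ("t1", "acq", "m"), ("t1", "w", "y"), ("t1", "rel", "m"),
        ("t2", "acq", "m"), ("t2", "r", "y"), ("t2", "rel", "m"), ("t2", "w", "z"),
    ]
    print(filter_thread_local(trace))   # accesses to x and z are filtered out
    ```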

  • RESEARCH ARTICLE
    Franco RONCHETTI, Facundo QUIROGA, Laura LANZARINI, Cesar ESTREBOU

    Human action recognition from skeletal data is an important and active area of research in which the state of the art has not yet achieved near-perfect accuracy on many well-known datasets. In this paper, we introduce the Distribution of Action Movements descriptor, a novel action descriptor based on the distribution of the directions of joint motions between frames, over the set of all possible motions in the dataset. The descriptor is computed as a normalized histogram over a set of representative directions of the joints, which are in turn obtained via clustering. While the descriptor is global in the sense that it represents the overall distribution of the movement directions of an action, it partially retains the temporal structure of the action by applying a windowing scheme.

    The descriptor, together with a standard classifier, outperforms several state-of-the-art techniques on many well-known datasets.
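
    A minimal sketch of a descriptor in the same spirit, assuming skeletal data as (frames, joints, 3) arrays and using synthetic input: inter-frame joint motion directions are clustered over the dataset (k-means standing in for the paper's clustering), and each action becomes a normalized histogram over the cluster centers. Computing such histograms per window and concatenating them would partially retain temporal structure.

    ```python
    # Minimal sketch: normalized histogram of clustered joint-motion
    # directions as a global action descriptor (synthetic data).
    import numpy as np
    from sklearn.cluster import KMeans

    def motion_directions(skeleton):
        # skeleton: array of shape (frames, joints, 3)
        d = np.diff(skeleton, axis=0).reshape(-1, 3)
        n = np.linalg.norm(d, axis=1)
        keep = n > 1e-6
        return d[keep] / n[keep, None]    # unit direction per joint per frame

    rng = np.random.default_rng(0)
    dataset = [rng.standard_normal((30, 20, 3)).cumsum(axis=0) for _ in range(10)]
    all_dirs = np.vstack([motion_directions(s) for s in dataset])
    km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_dirs)

    def descriptor(skeleton):
        labels = km.predict(motion_directions(skeleton))
        hist = np.bincount(labels, minlength=km.n_clusters).astype(float)
        return hist / hist.sum()          # normalized histogram descriptor

    print(descriptor(dataset[0]).round(3))
    ```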

  • RESEARCH ARTICLE
    Hongbo NI, Shu WU, Bessam ABDULRAZAK, Daqing ZHANG, Xiaojuan MA, Xingshe ZHOU

    The quality of sleep may reflect an elderly individual’s state of health, and the sleep pattern is an important measurement of it. Recognizing sleep patterns is itself a challenging issue, especially in the elderly-care community, due to both privacy concerns and technical limitations. We propose a novel multi-parametric sensing system called the sleep pattern recognition system (SPRS). This system, equipped with a combination of various non-invasive sensors, can monitor an elderly user’s sleep behavior. It accumulates sensing data from a pressure sensor matrix and ultra-wideband (UWB) tags. Based on these two types of complementary sensing data, SPRS assesses the user’s sleep pattern automatically via machine learning algorithms. Compared to existing systems, SPRS operates without disrupting the user’s sleep and can be deployed in normal households with minimal effort. Results of tests in our real assistive apartment at the Smart Elder-care Lab are also presented in this paper.
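
    A minimal sketch of the pipeline's shape, on synthetic data with hypothetical features: per-epoch features from the pressure-sensor matrix and the UWB tags are fused into one feature vector and classified with an off-the-shelf learner. The feature definitions, labels, and classifier choice are all assumptions, not SPRS's.

    ```python
    # Minimal sketch (synthetic data): fuse pressure-matrix and UWB features
    # per epoch and classify the sleep state with a standard learner.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 600
    pressure = rng.standard_normal((n, 4))  # e.g. mat mean/var/centroid-x/centroid-y (assumed)
    uwb = rng.standard_normal((n, 2))       # e.g. tag displacement and velocity (assumed)
    X = np.hstack([pressure, uwb])
    y = rng.integers(0, 3, n)               # 0=out of bed, 1=still, 2=restless (assumed labels)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data
    ```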

  • RESEARCH ARTICLE
    Chuanping HU, Zheng XU, Yunhuai LIU, Lin MEI

    The increasing number of video-based applications highlights the importance of parsing and organizing the content in videos. However, accurate understanding and management of video content at the semantic level is still insufficient, and the semantic gap between low-level features and high-level semantics cannot be bridged by manual or semi-automatic methods alone. In this paper, a semantics-based model named video structural description (VSD) is proposed for representing and organizing the content in videos. VSD aims at parsing video content into text information using spatiotemporal segmentation, feature selection, object recognition, and semantic web technology. The proposed model uses predefined ontologies, including concepts and their semantic relations, to represent the contents of videos; the defined ontologies can then be used to retrieve and organize videos unambiguously. In addition, the semantic relations between videos are mined, and the video resources are linked and organized according to these relations.
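
    As an illustration of ontology-backed organization, the sketch below (using rdflib, pip install rdflib) stores parsed video content as triples and retrieves videos by a semantic relation via SPARQL. The ontology terms and URIs are hypothetical, not VSD's.

    ```python
    # Minimal sketch: represent parsed video content as ontology-backed
    # triples and query them unambiguously with SPARQL.
    from rdflib import Graph, Literal, Namespace, URIRef

    VSD = Namespace("http://example.org/vsd#")   # hypothetical ontology namespace
    g = Graph()
    clip = URIRef("http://example.org/videos/clip42")
    g.add((clip, VSD.containsObject, VSD.Car))
    g.add((clip, VSD.occursAt, Literal("crossroad")))
    g.add((clip, VSD.relatedTo, URIRef("http://example.org/videos/clip17")))

    # Retrieve every video containing a car.
    q = """SELECT ?v WHERE {
             ?v <http://example.org/vsd#containsObject> <http://example.org/vsd#Car>
           }"""
    for row in g.query(q):
        print(row.v)
    ```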

  • RESEARCH ARTICLE
    Vahid MEHRDAD, Hossein EBRAHIMNEZHAD

    In this paper, a content-based descriptor is proposed to retrieve 3D models, employing the histogram of local orientation (HLO) as a geometric property of the shape. The proposed 3D model descriptor scheme consists of three steps. In the first step, the Poisson equation is utilized to define a 3D model signature. Next, the local orientation is calculated for each voxel of the model using the Hessian matrix. In the final step, a histogram-based 3D model descriptor is extracted by accumulating the values of the local orientation in bins. Owing to the efficiency of the Poisson equation in describing models with various structures, the proposed descriptor can discriminate between these models accurately. Since the inner voxels make the dominant contribution to the descriptor, and noise mostly influences the boundary voxels, sufficient robustness against noise is achieved. Furthermore, we improve the retrieval performance using the support vector machine based one-shot score (SVM-OSS) similarity measure, which is more efficient than conventional methods for computing the distance between feature vectors. Rotation normalization is performed using principal component analysis. To demonstrate the applicability of HLO, we present experimental evaluations of precision-recall curves on the ESB, PSB, and WM-SHREC databases of 3D models. The experimental results validate the effectiveness of the proposed descriptor compared to several current methods.
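
    The sketch below computes a histogram-of-local-orientation style descriptor for a voxelized model, estimating orientation from plain gradients as a simplification; the paper instead derives its signature from the Poisson equation and takes the principal direction of the Hessian at each voxel. The toy input is a voxelized ball.

    ```python
    # Minimal sketch: histogram of local orientations over a voxel grid,
    # using gradient directions as a stand-in for Hessian principal directions.
    import numpy as np

    def hlo_like_descriptor(vox, n_bins=16):
        """vox: 3D float array (a voxelized model signature)."""
        gx, gy, gz = np.gradient(vox)
        mag = np.sqrt(gx**2 + gy**2 + gz**2)
        mask = mag > 1e-6
        theta = np.arctan2(gy[mask], gx[mask])                  # azimuth in [-pi, pi]
        phi = np.arccos(np.clip(gz[mask] / mag[mask], -1, 1))   # inclination in [0, pi]
        # Bin the two angles jointly, then normalize into a descriptor.
        hist, _, _ = np.histogram2d(theta, phi, bins=(n_bins, n_bins // 2),
                                    range=[[-np.pi, np.pi], [0, np.pi]])
        h = hist.ravel()
        return h / h.sum()

    # Toy "model": a solid ball in a 32^3 grid.
    idx = np.indices((32, 32, 32)) - 16
    vox = (np.sqrt((idx**2).sum(axis=0)) < 10).astype(float)
    print(hlo_like_descriptor(vox).shape)   # (128,)
    ```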