Dec 2016, Volume 4 Issue 4
    

Cover illustration

  • Single-cell RNA sequencing (scRNA-seq) is an emerging technology that enables high-resolution detection of heterogeneities between cells. One important application of scRNA-seq data is to detect differential expression (DE) of genes between different types or subtypes of cells, or between cells in different conditions. Currently, many researchers still use DE analysis methods developed for bulk RNA-seq data on single-cell data, and some new methods for scRNA-seq data have also been developed.


  • RESEARCH ARTICLE
    Zhun Miao, Xuegong Zhang

    Background: Single-cell RNA sequencing (scRNA-seq) is an emerging technology that enables high-resolution detection of heterogeneities between cells. One important application of scRNA-seq data is to detect differential expression (DE) of genes. Currently, some researchers still use DE analysis methods developed for bulk RNA-seq data on single-cell data, while new methods designed specifically for scRNA-seq data have also been developed. Because bulk and single-cell RNA-seq data have different characteristics, a systematic evaluation of the two types of methods on scRNA-seq data is needed.

    Results: In this study, we conducted a series of experiments on scRNA-seq data to quantitatively evaluate 14 popular DE analysis methods, including both traditional methods developed for bulk RNA-seq data and new methods designed specifically for scRNA-seq data. From these experiments we derived observations and recommendations for the methods under different situations (a toy per-gene test is sketched after this abstract).

    Conclusions: DE analysis methods for scRNA-seq data should be chosen with great caution according to the characteristics of the data. Different strategies should be adopted for data with different sample sizes and/or different strengths of the expected signals. Several methods designed for scRNA-seq data show advantages in some respects, and DEGSeq tends to outperform the other methods with respect to the consistency, reproducibility and accuracy of predictions on scRNA-seq data.
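
    As a minimal, hypothetical illustration of the kind of per-gene DE test such evaluations compare (not the article's pipeline and not one of the 14 benchmarked methods), the sketch below applies a Wilcoxon rank-sum test to each gene of a toy gene-by-cell count matrix and corrects the p-values for multiple testing; the matrix, group labels and FDR threshold are all assumptions made for illustration.

```python
# Hypothetical sketch: a naive per-gene DE test between two groups of cells.
import numpy as np
from scipy.stats import ranksums
from statsmodels.stats.multitest import multipletests

def naive_de_test(counts, group_a, group_b, alpha=0.05):
    """counts: genes x cells matrix; group_a / group_b: column indices of the two cell groups."""
    pvals = np.array([ranksums(row[group_a], row[group_b]).pvalue for row in counts])
    reject, qvals, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return pvals, qvals, reject

# Toy data: 100 genes x 40 cells; the first 10 genes are shifted upward in group A.
rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(100, 40)).astype(float)
counts[:10, :20] += 10
_, _, sig = naive_de_test(counts, np.arange(20), np.arange(20, 40))
print("genes called DE at FDR 0.05:", int(sig.sum()))
```

    Real scRNA-seq analyses additionally have to handle normalization, dropouts and overdispersion, which is exactly where methods designed for bulk data and methods designed for single-cell data diverge.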

  • RESEARCH ARTICLE
    Petr Kloucek, Armin von Gunten

    Background: Identification of human subjects through a geometric approach to the complexity analysis of behavioural data is intended to provide a basis for more precise diagnoses, leading towards personalised medicine.

    Methods: The approach is based on capturing behavioural time-series that can be characterized by a fractional dimension, using non-invasive, longer-term acquisitions of heart rate, perfusion, blood oxygenation, skin temperature, relative movement and step frequency. The geometry-based approach consists of analysing the area and centroid of the convex hulls that encapsulate the behavioural data, represented in Euclidean index spaces constructed from the scaling properties of the self-similar, normally distributed behavioural time-series of the above-mentioned quantities (a minimal sketch of these hull computations follows this abstract).

    Results: An example demonstrating the presented approach to behavioural fingerprinting is provided using sensory data from eight healthy human subjects, based on approximately fifteen hours of data acquisition. Our results show that healthy subjects can be separated into different similarity groups based on a particular choice of convex hull in the corresponding Euclidean space. One result indicates that, from the geometric comparison point of view, healthy subjects share only a small part of the convex hull pertaining to a highly trained individual. Similarly, the presented pairwise individual geometric similarity measure indicates large differences among the subjects, suggesting the possibility of neuro-fingerprinting.

    Conclusions: Recently introduced multi-channel, body-attached sensors make it possible to acquire behavioural time-series that can be analysed mathematically to obtain various objective measures of behavioural patterns, yielding behavioural diagnoses that favour personalised treatment of, e.g., neuropathologies or aging.
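
    As a rough illustration of the geometric ingredients named in the Methods above (the area and centroid of a convex hull of behavioural feature points), the sketch below computes both for a 2D point cloud. The random points stand in for one subject's features; the actual construction of the Euclidean index spaces from the time-series scaling analysis is not reproduced.

```python
# Hedged sketch: area and centroid of the convex hull of one subject's 2D feature points.
import numpy as np
from scipy.spatial import ConvexHull

def hull_area_and_centroid(points):
    """points: (n, 2) array of feature coordinates for one subject."""
    hull = ConvexHull(points)
    verts = points[hull.vertices]            # hull vertices, counter-clockwise for 2D input
    x, y = verts[:, 0], verts[:, 1]
    x_next, y_next = np.roll(x, -1), np.roll(y, -1)
    cross = x * y_next - x_next * y          # shoelace terms
    signed_area = 0.5 * cross.sum()
    cx = ((x + x_next) * cross).sum() / (6.0 * signed_area)
    cy = ((y + y_next) * cross).sum() / (6.0 * signed_area)
    return abs(signed_area), np.array([cx, cy])

rng = np.random.default_rng(1)
subject_features = rng.normal(size=(500, 2))  # placeholder feature cloud for one subject
area, centroid = hull_area_and_centroid(subject_features)
print(f"hull area = {area:.3f}, centroid = {centroid}")
```

    A pairwise similarity between subjects could then be defined from the overlap of their hulls, in the spirit of the pairwise geometric measure mentioned in the Results.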

  • RESEARCH ARTICLE
    Sebastián Torcida, Paula Gonzalez, Federico Lotto

    Background: Symmetry of biological structures can be thought of as the repetition of their parts in different positions and orientations. Asymmetry analysis therefore focuses on identifying and measuring the location and extent of departures from symmetry in such structures. In the context of geometric morphometrics, a key step when studying morphological variation is the estimation of the symmetric shape. The standard procedure uses the least-squares Procrustes superimposition, which, by averaging shape differences, often underestimates the departures from symmetry and thus leads to an inaccurate description of the asymmetry pattern. Moreover, the corresponding asymmetry values are neither geometrically intuitive nor visually perceivable.

    Methods: In this work, a resistant method for landmark-based asymmetry analysis of individual bilateral symmetric structures in 2D is introduced. A geometric derivation of this new approach is offered, and its advantages over the standard method are examined and discussed through a few illustrative examples (the standard least-squares baseline is sketched after this abstract).

    Results: Experimental tests on both artificial and real data show that asymmetry is measured more effectively with the resistant method because the underlying symmetric shape is better estimated. The most asymmetric (respectively, most symmetric) landmarks are therefore better identified through their large (respectively, small) residuals. The new method additionally provides the percentage of asymmetry accounted for by each landmark, a revealing measure that agrees with the displayed results and helps in their biological interpretation.

    Conclusions: The resistant method is a useful exploratory tool for analyzing shape asymmetry in 2D, and it may be the preferable method whenever a non-homogeneous deformation of bilateral symmetric structures is possible. By offering a more detailed and rather exhaustive description of the asymmetry pattern, this new approach will hopefully contribute to improving the quality of biological and developmental inferences.
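
    For reference, the standard least-squares baseline that the resistant method is compared against can be sketched as follows: reflect the landmark configuration, relabel the paired landmarks, Procrustes-align the reflected copy to the original, and average the two; per-landmark residuals then quantify asymmetry. The landmark pairing is an assumed input, and the authors' resistant estimator itself is not reproduced here.

```python
# Minimal numpy sketch of the standard least-squares (Procrustes) estimate of the
# symmetric shape for a 2D bilateral structure. The article's resistant method
# replaces the least-squares fit and is not shown.
import numpy as np

def procrustes_align(X, Y):
    """Rotate and translate Y onto X in the least-squares sense (rows = landmarks)."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt
    if np.linalg.det(R) < 0:                  # enforce a proper rotation
        U[:, -1] *= -1
        R = U @ Vt
    return Yc @ R + X.mean(axis=0)

def ls_symmetric_shape(landmarks, pairing):
    """landmarks: (n, 2); pairing[i] = index of landmark i's mirror partner
    (midline landmarks map to themselves)."""
    reflected = landmarks * np.array([-1.0, 1.0])   # mirror across the vertical axis
    reflected = reflected[pairing]                  # relabel left/right pairs
    aligned = procrustes_align(landmarks, reflected)
    symmetric = 0.5 * (landmarks + aligned)
    residuals = np.linalg.norm(landmarks - symmetric, axis=1)
    return symmetric, residuals
```

    The per-landmark residuals correspond to the large/small residuals mentioned in the Results; the resistant variant aims to keep a few strongly asymmetric landmarks from distorting the estimate of the symmetric shape.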

  • REVIEW
    Jing Qin, Bin Yan, Yaohua Hu, Panwen Wang, Junwen Wang

    Background: Functional genomics employs dozens of OMICs technologies to explore the functions of DNA, RNA and protein regulators in gene regulation processes. Although each of these technologies is a powerful tool on its own, like the parable of the blind men and the elephant, any single technology has only a limited ability to depict the complex regulatory system. Integrative OMICs approaches have therefore emerged and become an important area in biology and medicine, providing a precise and effective way to study gene regulation.

    Results: This article reviews currently popular OMICs technologies, OMICs data integration strategies, and the bioinformatics tools used for multi-dimensional data integration. We highlight the advantages of these methods, particularly in elucidating the molecular basis of biological regulatory mechanisms.

    Conclusions: To better understand the complexity of biological processes, we need powerful bioinformatics tools to integrate these OMICs data. Integrating multi-dimensional OMICs data will generate novel insights into system-level gene regulation and serve as a foundation for further hypothesis-driven research.

  • REVIEW
    Bingxiang Xu, Zhihua Zhang

    Background: Chromosomes are packed in the cell’s nucleus, and chromosomal conformation is critical to nearly all intranuclear biological reactions, including gene transcription and DNA replication. Nevertheless, chromosomal conformation is largely a mystery in terms of its formation and the regulatory machinery that accesses it.

    Results: Thanks to recent technological developments, we can now probe chromatin interactions in substantial detail, boosting research interest in modeling the spatial organization of the genome. Here, we review current computational models that simulate chromosome dynamics and explain the physical and topological properties of chromosomal conformation inferred from these newly generated data (one common modeling idea is sketched after this abstract).

    Conclusion: Novel models will need to be developed in the near future to address questions beyond the averaged structure.
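
    To make the modeling theme concrete, here is a hedged sketch of one widely used idea (not a specific model from this review): convert a Hi-C contact-frequency matrix into pairwise distances through an assumed inverse power law, then embed the loci in 3D by classical multidimensional scaling. The exponent and the toy contact matrix are illustrative assumptions.

```python
# Hedged sketch: contact frequencies -> distances (assumed power law) -> 3D embedding (MDS).
import numpy as np

def contacts_to_coordinates(contacts, alpha=1.0, eps=1e-6):
    """contacts: symmetric (n, n) contact-frequency matrix; returns (n, 3) coordinates."""
    D = 1.0 / np.power(contacts + eps, alpha)   # assumed distance ~ frequency**(-alpha)
    np.fill_diagonal(D, 0.0)
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n         # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                 # classical MDS (Torgerson) Gram matrix
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:3]               # three largest eigen-directions
    return V[:, top] * np.sqrt(np.clip(w[top], 0.0, None))

# Toy chain: contact frequency decays with genomic separation.
n = 50
sep = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
coords = contacts_to_coordinates(1.0 / (sep + 1.0))
print(coords.shape)   # (50, 3)
```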

  • REVIEW
    Guanghui Zhu, Xing-Ming Zhao, Jun Wu

    Background: Identifying biomarkers for accurate diagnosis and prognosis of diseases is important for the prevention of disease development. The molecular networks that describe the functional relationships among molecules provide a global view of the complex biological systems. With the molecular networks, the molecular mechanisms underlying diseases can be unveiled, which helps identify biomarkers in a systematic way.

    Results: In this survey, we report recent progress on identifying biomarkers based on the topology of molecular networks, and we categorize those biomarkers into three groups: node biomarkers, edge biomarkers and network biomarkers. These distinct types of biomarkers can be detected under different conditions, depending on the data available (a toy node-centrality ranking is sketched after this abstract).

    Conclusions: The biomarkers identified based on molecular networks can provide more accurate diagnosis and prognosis. The pros and cons of different types of biomarkers as well as future directions to improve the methods for identifying biomarkers are also discussed.
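
    As a toy illustration of the node-biomarker category, the sketch below ranks genes in a small protein-interaction-style network by simple topological scores (degree and betweenness centrality). The edge list and the score combination are arbitrary assumptions made for illustration, not a method endorsed by the survey.

```python
# Toy sketch: rank candidate node biomarkers by network centrality (requires networkx).
import networkx as nx

edges = [("TP53", "MDM2"), ("TP53", "EGFR"), ("EGFR", "GRB2"),
         ("GRB2", "SOS1"), ("MDM2", "UBC"), ("EGFR", "STAT3")]
G = nx.Graph(edges)

degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)

# Combine the two centralities into a simple, arbitrary ranking score.
score = {node: degree[node] + 10.0 * betweenness[node] for node in G}
for node, s in sorted(score.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{node}\t{s:.2f}")
```

    Edge and network biomarkers would instead score interactions or whole subnetworks, typically by combining topology with expression data.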

  • REVIEW
    Yasen Jiao, Pufeng Du

    Background: Many existing bioinformatics predictors are based on machine-learning technology. When applying these predictors in practical studies, their predictive performance should be well understood. Different performance measures, as well as different evaluation methods, are applied across studies. Even for the same performance measure, different terms, nomenclatures or notations may appear in different contexts.

    Results: We carried out a review of the most commonly used performance measures and evaluation methods for bioinformatics predictors (the most common measures are sketched after this abstract).

    Conclusions: It is important in bioinformatics to correctly understand and interpret predictive performance, as this is the key to rigorously comparing different predictors and to choosing the right one.
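
    The measures such a review typically covers can be written down compactly; the sketch below computes the most common ones from a binary confusion matrix (the counts are made up for illustration).

```python
# Common performance measures for a binary predictor, computed from a confusion matrix.
import math

def performance_measures(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                 # recall, true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    precision   = tp / (tp + fp)
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    f1          = 2 * precision * sensitivity / (precision + sensitivity)
    mcc_den     = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc         = (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0
    return {"Sn": sensitivity, "Sp": specificity, "Precision": precision,
            "Acc": accuracy, "F1": f1, "MCC": mcc}

print(performance_measures(tp=80, fp=20, tn=90, fn=10))
```

    Threshold-free measures such as the AUC, and evaluation protocols such as k-fold cross-validation and independent test sets, are the other standard ingredients such reviews discuss.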