Proteomics has become an important research area in the life sciences since the completion of the Human Genome Project. It studies the characteristics of proteins at a large-scale data level in order to gain a holistic and comprehensive understanding of disease occurrence and cell metabolism at the protein level. A key issue in proteomics is how to efficiently analyze the massive amounts of protein data produced by high-throughput technologies. Low-cost, short-cycle computational technologies are becoming the preferred methods for solving several important problems of the post-genome era, such as predicting protein-protein interactions (PPIs). In this review, we focus on computational methods for PPI detection and present recent advances in this critical area from multiple aspects. First, we analyze in detail the challenges faced by computational methods for predicting PPIs and summarize the available PPI data sources. Second, we describe the state-of-the-art computational methods recently proposed on this topic. Finally, we discuss some important technologies that can promote PPI prediction and the development of computational proteomics.
In this paper we present a designated verifier-set signature (DVSS), in which the signer is allowed to designate many verifiers rather than a single one, and each designated verifier can verify the validity of the signature by himself. Our research starts from an identity-based aggregator (IBA) that compresses a designated set of verifiers' identities into a constant-size random string in the cryptographic space. The IBA is constructed by mapping the hash of each verifier's identity into a zero or pole of a target curve, and extracting one point on the curve as the result of aggregation according to a specific secret. Depending on the type of target curve, these two IBAs are called the zeros-based aggregator and the poles-based aggregator, respectively. Based on them, we propose a practical DVSS scheme constructed from the zero-pole cancellation method, which eliminates the common elements between the zeros-based aggregator and the poles-based aggregator. Owing to this design, our DVSS scheme has some distinct advantages: (1) the signature supports arbitrary dynamic verifier sets drawn from a large number of users; and (2) the signature has a short, constant length. We rigorously prove that our DVSS scheme satisfies the security properties of correctness, consistency, unforgeability and exclusivity.
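To make the zero-pole cancellation idea concrete, the following is a minimal polynomial-style illustration; the symbols (hash H, verifier sets S and T) are our own, and the paper's actual construction over the target curve may differ:

```latex
% Illustrative sketch only, not the paper's exact construction.
% Zeros-based aggregator over verifier set S, poles-based aggregator over T:
Z_S(x) = \prod_{\mathrm{id} \in S} \bigl(x - H(\mathrm{id})\bigr),
\qquad
P_T(x) = \prod_{\mathrm{id} \in T} \bigl(x - H(\mathrm{id})\bigr)^{-1}
```

In the product Z_S(x) P_T(x), the factors contributed by identities appearing in both S and T cancel, which is the sense in which the common elements of the two aggregators are eliminated.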
Cyber-physical space is a spatial environment that integrates the cyber world and the physical world, aiming to provide an intelligent environment for users to conduct their day-to-day activities. The interplay between the cyber space and the physical space imposes specific security requirements that are not captured by traditional access control frameworks. On one hand, the security of both the physical space and the cyber space must be considered in a cyber-physical space. On the other hand, the adverse consequences caused by a failure to enforce policies securely may directly affect the controlled physical world. In this paper, we propose an effective access control framework for the cyber-physical space. Firstly, a topology-aware access control (TAAC) model is proposed. It can express cyber access control, physical access control, and interaction access control simultaneously. Secondly, a risk assessment approach is proposed for the policy enforcement phase. It is used to evaluate user behavior and ensures that suspicious behaviors executed by authorized users are handled correctly. Thirdly, we propose a role activation algorithm to ensure that objects are accessed only by legal and honest users. Finally, we evaluate our approach using an illustrative example and a performance analysis. The results demonstrate the feasibility of our approach.
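As a rough illustration of how a topology-aware policy check combined with a risk threshold for role activation might look, here is a hedged Python sketch; all class and field names are hypothetical and are not taken from the paper:

```python
# Hypothetical sketch of a topology-aware access check: a request is granted
# only if some policy covers both the user's physical location and the cyber
# object, and the user's accumulated risk score stays below a threshold.
from dataclasses import dataclass

@dataclass
class Policy:
    role: str              # role required by the policy
    allowed_rooms: set     # physical locations from which access is permitted
    allowed_objects: set   # cyber objects covered by the policy

@dataclass
class User:
    name: str
    roles: set
    location: str
    risk: float = 0.0      # accumulated from observed suspicious behaviour

def authorize(user: User, obj: str, policies: list, risk_threshold: float = 0.7) -> bool:
    """Grant access only if a policy matches role, location and object,
    and the user's risk score is below the role-activation threshold."""
    if user.risk >= risk_threshold:
        return False       # role activation denied for risky users
    return any(
        p.role in user.roles
        and user.location in p.allowed_rooms
        and obj in p.allowed_objects
        for p in policies
    )

# Example: a doctor inside the ward may read the monitor; outside, access is denied.
policies = [Policy("doctor", {"ward-3"}, {"patient-monitor"})]
alice = User("alice", {"doctor"}, "ward-3")
print(authorize(alice, "patient-monitor", policies))   # True
alice.location = "lobby"
print(authorize(alice, "patient-monitor", policies))   # False
```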
Leakage of private information, including users' private keys, has become a threat to the security of computing systems. It is now a common security requirement that a cryptographic scheme should withstand various leakage attacks. In real life, an adversary can break the security of a cryptographic primitive by performing continuous leakage attacks. Although some research on leakage-resilient cryptography has been carried out, previous attempts still leave several issues open. Existing identity-based encryption (IBE) constructions were designed in the bounded-leakage model and might not meet their claimed security under continuous-leakage attacks. In real applications, the leakage is unbounded; that is, a practical cryptographic scheme should retain its original security in the continuous leakage setting. Previous continuous leakage-resilient IBE schemes either achieve only chosen-plaintext attack security or prove chosen-ciphertext attack (CCA) security in the selective identity model. To solve these problems, we show in this paper how to construct a continuous leakage-resilient IBE scheme whose adaptive CCA security is proved in the standard model based on the hardness of the decisional bilinear Diffie-Hellman exponent assumption. For any adversary, all elements in the ciphertext are random, and an adversary cannot obtain any leakage on a user's private key from the corresponding ciphertext. Moreover, the leakage parameter of our proposal is independent of the plaintext space and has a constant size.
Recently, the stacked hourglass network has shown outstanding performance in human pose estimation. However, repeated bottom-up and top-down strided convolution operations in deep convolutional neural networks lead to a significant loss of the initial image resolution. To address this problem, we propose to incorporate an affinage module and a residual attention module into the stacked hourglass network for human pose estimation. This paper introduces a novel network architecture that replaces the up-sampling operation of the stacked hourglass network to obtain high-resolution features. We refer to this architecture as the affinage module, which is critical to improving the performance of the stacked hourglass network. Additionally, we propose a novel residual attention module to increase the supervision of the upsampling process. The effectiveness of the introduced modules is evaluated on standard benchmarks. Various experimental results demonstrate that our method achieves more accurate and more robust human pose estimation in images with complex backgrounds.
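The following PyTorch-style sketch illustrates the two ingredients described above, a learned upsampling block and a residual attention gate; the layer choices and module names are assumptions rather than the paper's exact architecture:

```python
# Hypothetical PyTorch sketch: an "affinage"-style learned upsampling block that
# restores resolution, plus a residual attention gate on the upsampled features.
import torch
import torch.nn as nn

class AffinageBlock(nn.Module):
    """Upsample low-resolution features and refine them with a convolution."""
    def __init__(self, channels: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.refine(self.up(x))

class ResidualAttention(nn.Module):
    """Residual attention gate: features are re-weighted by a learned mask."""
    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * (1 + self.mask(x))   # residual form: identity plus attended features

# Example: restore an 8x8 feature map to 16x16 and apply attention.
feats = torch.randn(1, 256, 8, 8)
out = ResidualAttention(256)(AffinageBlock(256)(feats))
print(out.shape)   # torch.Size([1, 256, 16, 16])
```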
Recently, in the area of big data, popular applications such as web search engines and recommendation systems face the problem of diversifying results during query processing. It is therefore both significant and essential to propose methods that deal with big data so as to increase the diversity of the result set. In this paper, we first define the diversity of a set and the ability of an element to improve the overall diversity. Based on these definitions, we propose a diversification framework with good performance in terms of effectiveness and efficiency, together with a theoretical guarantee on its probability of success. Second, we design implementation algorithms based on this framework for both numerical and string data. Third, for numerical and string data respectively, we carry out extensive experiments on real data to verify the performance of the proposed framework, and we also perform scalability experiments on synthetic data.
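As a point of reference for what such a framework computes, here is a generic greedy max-min diversification heuristic for numerical data; it is a common baseline sketch, not the framework proposed in the paper:

```python
# Generic greedy diversification baseline: repeatedly add the element that
# maximizes its minimum distance to the already chosen set.
def greedy_diversify(points, k, dist):
    """Select k points; each new point maximizes its distance to the chosen set."""
    chosen = [points[0]]
    while len(chosen) < k and len(chosen) < len(points):
        best = max(
            (p for p in points if p not in chosen),
            key=lambda p: min(dist(p, c) for c in chosen),
        )
        chosen.append(best)
    return chosen

data = [1.0, 1.1, 5.0, 9.8, 10.0, 4.9]
print(greedy_diversify(data, 3, lambda a, b: abs(a - b)))  # [1.0, 10.0, 5.0]
```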
With the rapid development of software-defined networking (SDN), numerous studies have been conducted to maximize SDN performance. Currently, flow tables are used in OpenFlow switches for routing. Owing to the space limitation of the flow table and the switch capacity, various issues arise in handling flows. Existing schemes typically employ a reactive approach in which evicted entries are selected only when a timeout or a table miss occurs. In this paper, a proactive approach is proposed based on predicting the matching probability of the entries. Here, eviction occurs proactively when the utilization of the flow table exceeds a threshold, and the flow entry with the lowest matching probability is evicted. The matching probability is estimated using a hidden Markov model (HMM). Computer simulation reveals that the proposed scheme significantly enhances prediction accuracy and decreases the number of table misses compared to the standard hard timeout scheme and the Flow master scheme.
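The eviction policy itself can be sketched as follows; the two-state match/idle predictor below merely stands in for the paper's HMM, and all names and thresholds are illustrative assumptions:

```python
# Hypothetical sketch of proactive eviction: when flow-table utilization passes
# a threshold, evict the entry whose predicted matching probability is lowest.
def predict_match_prob(history, p_stay_match=0.8, p_become_match=0.3):
    """Probability that the next packet matches, from a simple two-state Markov
    chain whose state is whether the entry matched in the most recent interval."""
    if not history:
        return p_become_match
    return p_stay_match if history[-1] else p_become_match

def maybe_evict(flow_table, capacity, threshold=0.9):
    """Proactively evict the least promising entry when utilization is high.
    flow_table maps entry-id -> list of recent match/no-match observations."""
    if len(flow_table) / capacity < threshold:
        return None
    victim = min(flow_table, key=lambda e: predict_match_prob(flow_table[e]))
    del flow_table[victim]
    return victim

table = {"f1": [1, 1, 1], "f2": [0, 0, 0], "f3": [1, 0, 1]}
print(maybe_evict(table, capacity=3))   # evicts "f2", the least likely to match
```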
With the advent of 5G, multi-homing will be an increasingly common scenario; it is expected to increase transmission rates, improve transmission reliability, and reduce costs for users. However, current routing methods cannot fully utilize network resources to achieve high-performance data transmission for multi-homed devices. In the current routing mechanism, a packet forwarded to a multi-homed host carries only one destination address, so it is difficult for the packet to adjust its path on the fly according to the status of the network. In this paper, we present an efficient routing scheme for the multi-homing scenario based on protocol-oblivious forwarding (POF). In the proposed scheme, a packet forwarded to a multi-homed host carries multiple destination addresses, giving it the ability to switch transmission paths; meanwhile, the router dynamically adjusts the path of the packet according to its perception of the network status. Experimental results show that our scheme properly utilizes the alternative paths and significantly improves transmission efficiency.
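A hedged sketch of the path-switching idea: the forwarding element inspects the candidate destination addresses carried by the packet and picks the one whose path currently looks best. The link metric and data structures are assumptions for illustration only:

```python
# Hypothetical sketch: choose among the multiple destination addresses carried
# by a packet, based on a measured per-destination path metric.
def select_destination(packet_dsts, link_status):
    """Pick the candidate destination whose outgoing path has the lowest
    measured delay (link_status maps destination address -> delay in ms)."""
    reachable = [d for d in packet_dsts if d in link_status]
    if not reachable:
        return None                              # fall back to default routing
    return min(reachable, key=lambda d: link_status[d])

packet_dsts = ["10.0.1.5", "192.168.2.5"]        # two addresses of one multi-homed host
link_status = {"10.0.1.5": 42.0, "192.168.2.5": 12.5}
print(select_destination(packet_dsts, link_status))   # "192.168.2.5"
```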
Applications such as identifying different customers from their unique buying behaviours or determining the ratings of a product given by users based on different sets of features require classification using class-specific subsets of features. Most existing state-of-the-art classifiers for multivariate data use the complete feature set for classification regardless of the class label. A decision tree classifier can produce class-wise subsets of features; however, none of these classifiers model the relationships between features, which may enhance classification accuracy. We call the class-specific subsets of features, together with the features' interrelationships, class signatures. In this work, we propose to map the original input space of multivariate data to a feature space characterized by connected graphs, since graphs can easily model entities, their attributes, and the relationships among attributes. Graphs are mostly used to model entities where they occur naturally, for example chemical compounds; they do not occur naturally in multivariate data, so extracting class signatures from multivariate data is a challenging task. We propose several feature selection heuristics to obtain class-specific prominent subgraph signatures, and two variants of a class-signature-based classifier: 1) maximum matching signature (gMM), and 2) score and size of matched signatures (gSM). The effectiveness of the proposed approach on real-world and synthetic datasets has been studied and compared with other established classifiers. Experimental results confirm the superiority of the proposed class-signature-based classifier on most of the datasets.
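The decision rule of a signature-based classifier can be sketched as follows, loosely in the spirit of the gMM variant; signatures are reduced here to plain edge sets, and the sketch omits the subgraph mining that the paper actually performs:

```python
# Heavily simplified sketch: each class has a set of "signature" subgraphs
# (reduced to frozensets of edges); a test instance goes to the class whose
# signatures it matches most often.
def classify_by_signatures(instance_edges, class_signatures):
    """instance_edges: set of edges of the test graph.
    class_signatures: class label -> list of signature edge-sets."""
    scores = {
        label: sum(sig.issubset(instance_edges) for sig in sigs)
        for label, sigs in class_signatures.items()
    }
    return max(scores, key=scores.get)

signatures = {
    "A": [frozenset({("age", "income")}), frozenset({("age", "spend")})],
    "B": [frozenset({("income", "spend")})],
}
test = {("age", "income"), ("age", "spend")}
print(classify_by_signatures(test, signatures))   # "A"
```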
Sparse representation has been widely used in signal processing, pattern recognition, computer vision, etc. Excellent achievements have been made in both theoretical research and practical applications. However, there are two limitations on its application to classification: one is that sufficient training samples are required for each class, and the other is that the samples should be uncorrupted. To alleviate these problems, a sparse and dense hybrid representation (SDR) framework has been proposed, in which the training dictionary is decomposed into a class-specific dictionary and a non-class-specific dictionary. SDR puts
Extracting justifications for Web Ontology Language (OWL) ontologies is an important task in ontology engineering. In this paper, we focus on black-box techniques, which are based on ontology reasoners. Through a recursive expansion procedure, all elements of the justification, called critical axioms, are explored one by one. In this detection procedure, an axiom selection function is used to avoid testing irrelevant axioms. In addition, an incremental reasoning procedure is proposed to replace series of standard satisfiability tests; it employs a pseudo model to detect "obvious" satisfiability directly. The experimental results show that our strategy of extracting justifications for OWL ontologies by incremental expansion is superior to traditional black-box methods in terms of efficiency and performance.
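For orientation, a classic black-box justification procedure alternates expansion and contraction around a reasoner oracle, roughly as in the hedged sketch below; the selection function and the incremental, pseudo-model-based satisfiability checks described above would replace the plain `entails` calls:

```python
# Sketch of a classic black-box justification procedure (not the paper's exact
# algorithm): expand a working set until the entailment holds, then contract by
# removing axioms that are not needed. `entails` stands in for a reasoner call.
def extract_justification(ontology_axioms, entailment, entails, select):
    """Return a minimal subset of axioms from which `entailment` still follows.
    entails(axioms, entailment) -> bool is the black-box reasoner oracle.
    select(remaining, working) picks the next candidate axiom for expansion."""
    working = []
    remaining = list(ontology_axioms)
    # Expansion: add relevant axioms until the entailment is reproduced.
    while not entails(working, entailment) and remaining:
        axiom = select(remaining, working)
        remaining.remove(axiom)
        working.append(axiom)
    # Contraction: drop every axiom that is not critical.
    for axiom in list(working):
        trial = [a for a in working if a is not axiom]
        if entails(trial, entailment):
            working = trial
    return working
```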
An iterative procedure introduced in MacKay's evidence framework is often used for estimating the hyperparameter in empirical Bayes. Together with a particular form of prior, the estimation of the hyperparameter reduces to an automatic relevance determination model, which provides a soft way of pruning model parameters. Despite the effectiveness of this estimation procedure, it has remained primarily a heuristic to date, and its application to deep neural networks has not yet been explored. This paper formally investigates the mathematical nature of the procedure and justifies it as a well-principled algorithmic framework, which we call the MacKay algorithm. As an application, we demonstrate its use in deep neural networks, which typically have complicated structures with millions of parameters and can be pruned to reduce memory requirements and boost computational efficiency. In our experiments, we adopt the MacKay algorithm to prune the parameters of simple networks such as LeNet, deep convolutional VGG-like networks, and residual networks for a large-scale image classification task. Experimental results show that the algorithm can compress neural networks to a high level of sparsity with little loss of prediction accuracy, which is comparable with the state of the art.
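For reference, the textbook form of the evidence-framework re-estimation that underlies automatic relevance determination is shown below; the notation is ours and the paper's exact formulation may differ:

```latex
% Textbook evidence-framework updates (our notation, not necessarily the paper's).
% m_N, Sigma_N: posterior mean and covariance; lambda_i: data-term Hessian eigenvalues.
\gamma = \sum_i \frac{\lambda_i}{\lambda_i + \alpha},
\qquad
\alpha^{\text{new}} = \frac{\gamma}{\lVert \mathbf{m}_N \rVert^2},
\qquad
\alpha_i^{\text{new}} = \frac{1 - \alpha_i \Sigma_{N,ii}}{m_{N,i}^{2}}
```

In the per-parameter (ARD) form, weights whose posterior mean stays small drive their precision α_i toward infinity, which is the soft pruning effect exploited for network compression.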
Principal component analysis (PCA) is a widely used method for multivariate data analysis that projects the original high-dimensional data onto a low-dimensional subspace of maximum variance. In practice, however, we are more likely to obtain a few compressed sensing (CS) measurements than the complete high-dimensional data, due to the high cost of data acquisition and storage. In this paper, we propose a novel Bayesian algorithm for learning the PCA solutions of the original data directly from these CS measurements. To this end, we utilize a generative latent variable model incorporating a structured prior to model both the sparsity of the original data and the effective dimensionality of the latent space. The proposed algorithm enjoys two important advantages: 1) the effective dimensionality of the latent space is determined automatically and need not be pre-specified; 2) the sparsity modeling makes it unnecessary to employ multiple measurement matrices to cover the original data space; a single one suffices, which is storage efficient. Experimental results on synthetic and real-world datasets show that the proposed algorithm accurately learns the PCA solutions of the original data, which can in turn be applied to reconstruction tasks with favorable results.
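One plausible reading of the generative latent variable model sketched in this abstract is the following; the symbols (latent code z, loading matrix W, CS matrix Φ) are our assumptions:

```latex
% Assumed notation: z is the latent code, W the loadings, Phi the CS matrix.
\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_q), \qquad
\mathbf{x} = \mathbf{W}\mathbf{z} + \boldsymbol{\mu} + \boldsymbol{\varepsilon}, \quad
\boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^{2}\mathbf{I}), \qquad
\mathbf{y} = \boldsymbol{\Phi}\mathbf{x} + \mathbf{n}
```

A structured (for example ARD-style) prior on the columns of W would then let irrelevant latent dimensions be switched off automatically, matching the first advantage claimed above.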