The main challenge in reinforcement learning is scaling up to larger and more complex problems. To address this scaling problem, we propose a scalable reinforcement learning method, DCS-SRL, based on a divide-and-conquer strategy, and prove its convergence. This method decomposes a learning problem over a large or continuous state space into multiple smaller subproblems. Given a specific learning algorithm, each subproblem can be solved independently with limited available resources, and the component solutions can then be recombined to obtain the desired result. To prioritize subproblems in the scheduler, we propose a weighted priority scheduling algorithm, which ensures that computation is focused on the regions of the problem space expected to be most productive. To expedite the learning process, a parallel method, DCS-SPRL, is derived by combining DCS-SRL with a parallel scheduling architecture; the subproblems are distributed among processors that can work in parallel. Experimental results show that learning based on DCS-SPRL converges quickly and scales well.
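The weighted priority scheduling idea can be illustrated with a minimal sketch. This is not the paper's algorithm: the subproblem decomposition, the toy value-iteration "learning step", and the use of the latest Bellman change as the productivity weight are all assumptions made for illustration.

```python
import heapq

# Sketch of a weighted priority scheduler over subproblems. Each
# subproblem's weight estimates how productive further computation on
# it is expected to be; here we use its most recent value change.

class Subproblem:
    def __init__(self, name, states):
        self.name = name
        self.states = states
        self.value = {s: 0.0 for s in states}

    def learning_step(self):
        """One value-iteration sweep over this region; returns max change."""
        delta = 0.0
        for s in self.states:
            old = self.value[s]
            reward = 1.0 if s == self.states[-1] else 0.0  # toy goal reward
            nxt = min(s + 1, self.states[-1])              # toy chain dynamics
            self.value[s] = reward + 0.9 * self.value[nxt]
            delta = max(delta, abs(self.value[s] - old))
        return delta

def schedule(subproblems, budget):
    """Pop the subproblem expected to be most productive (max-heap via
    negated weights), run one learning step, requeue with the observed
    improvement as its new weight."""
    heap = [(-1.0, i) for i in range(len(subproblems))]
    heapq.heapify(heap)
    for _ in range(budget):
        _, i = heapq.heappop(heap)
        delta = subproblems[i].learning_step()
        heapq.heappush(heap, (-delta, i))

# Decompose a 10-state chain into two subproblems and schedule them.
subs = [Subproblem("A", list(range(0, 5))), Subproblem("B", list(range(5, 10)))]
schedule(subs, budget=40)
print(all(v > 0 for v in subs[0].value.values()))
```

As a subproblem's values converge its weight shrinks, so the scheduler automatically shifts computation toward regions that are still improving.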
Listwise approaches are an important class of learning-to-rank methods that use automatic learning techniques to discover useful information. Most previous research on listwise approaches has focused on optimizing ranking models over weights, using imprecisely labeled training data; optimizing ranking models over features was largely ignored, which hindered further performance improvement of these approaches. To address these limitations, we propose a quasi-KNN model to discover the ranking of features and employ a rank-addition rule to calculate combination weights. On this basis, we propose three listwise algorithms: FeatureRank, BLFeatureRank, and DiffRank. Experimental results show that the proposed algorithms can be applied to strictly ordered ranking training sets and achieve better performance than state-of-the-art listwise algorithms.
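The general flavor of combining per-feature rankings by rank addition can be sketched as a Borda-style aggregation. This is a generic illustration under assumed conventions, not the paper's quasi-KNN model or its exact rank-addition rule.

```python
# Hypothetical sketch: combine the rankings induced by individual
# features by summing each document's rank position across features;
# a lower total rank means a better combined position.

def rank_positions(scores):
    """Map each document index to its rank under one feature (0 = best)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return {doc: pos for pos, doc in enumerate(order)}

def rank_addition(feature_scores):
    """Aggregate per-feature rankings by adding rank positions."""
    n_docs = len(feature_scores[0])
    totals = [0] * n_docs
    for scores in feature_scores:
        pos = rank_positions(scores)
        for doc in range(n_docs):
            totals[doc] += pos[doc]
    return sorted(range(n_docs), key=lambda d: totals[d])

# Three documents scored by two features.
features = [[0.9, 0.4, 0.7], [0.8, 0.3, 0.5]]
print(rank_addition(features))  # → [0, 2, 1]
```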
Powerful storage, high performance, and scalability are the most important issues for analytical databases. These three factors interact with one another: powerful storage needs less scalability but higher performance; high performance means less reliance on indexes and other materializations in storage and fewer processing nodes; larger scale relieves the pressure on powerful storage and on the high-performance processing engine. Some analytical databases (ParAccel, Teradata) bind their performance to advanced hardware support, some (Asterdata, Greenplum) rely on the highly scalable MapReduce framework, and some (MonetDB, Sybase IQ, Vertica) emphasize the performance of the processing and storage engines. All these approaches can be integrated into a storage-performance-scalability (SPS) model, and future large-scale analytical processing can be built on moderate clusters to minimize dependence on expensive hardware. Most importantly, a simple software framework is fundamental to keeping pace with the development of hardware technologies. In this paper, we propose a schema-aware on-line analytical processing (OLAP) model with deep optimization based on native features of the star or snowflake schema. The OLAP model divides the whole process into several stages, each of which pipes its output to the next; we minimize the size of the output data at each stage, whether in centralized or clustered processing. We extend this mechanism to cluster processing using two major techniques: one uses NetMemory as a broadcast-protocol-based dimension-mirror synchronizing buffer; the other is a predicate-vector-based DDTA-OLAP cluster model that minimizes the data dependency of star joins using bitmap vectors.
Our OLAP model aims to minimize the network transmission cost (MiNT for short) of OLAP clusters and supports a scalable but simple distributed storage model for large-scale cluster processing. Finally, experimental results demonstrate its speedup and scalability.
(Received June 24, 2011; accepted August 16, 2012. E-mail: shingle@ruc.edu.cn)
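The predicate-vector idea behind the star-join optimization can be sketched in a few lines. This is an illustrative toy, not the DDTA-OLAP implementation: the schema, predicates, and use of Python sets as stand-ins for bitmap vectors are all assumptions.

```python
# Sketch of a predicate-vector star join: evaluate predicates on each
# dimension table once, record the qualifying keys as a bitmap (here a
# set of keys), then scan the fact table and keep only rows whose
# foreign keys hit every bitmap. No intermediate join result is
# materialized, which keeps each stage's output small.

# Toy star schema: fact rows reference dimension keys.
dim_date = {1: "2011", 2: "2012", 3: "2013"}
dim_store = {10: "north", 11: "south"}
fact = [  # (date_key, store_key, revenue)
    (1, 10, 100.0),
    (2, 11, 250.0),
    (2, 10, 80.0),
    (3, 11, 40.0),
]

def predicate_vector(dim, pred):
    """Bitmap (as a key set) of dimension rows satisfying a predicate."""
    return {k for k, v in dim.items() if pred(v)}

# Predicates: year >= 2012 and store == "north".
date_bits = predicate_vector(dim_date, lambda y: y >= "2012")
store_bits = predicate_vector(dim_store, lambda s: s == "north")

# The star join reduces to bitmap membership tests on the fact scan.
total = sum(r for d, s, r in fact if d in date_bits and s in store_bits)
print(total)  # → 80.0
```

In a clustered setting, shipping only the compact predicate vectors to the nodes holding fact partitions is what keeps network transmission cost low.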
To manage dynamic access control and deter pirate attacks on outsourced databases, we propose a dynamic access control scheme with tracing. Our scheme introduces the traitor-tracing idea into outsourced databases and employs a polynomial function and a filter function as the basic means of constructing the encryption and decryption procedures, reducing computation, communication, and storage overheads. Compared with previous access control schemes for outsourced databases, our scheme not only protects sensitive data from leaking and performs scalable encryption at the server side without shipping the outsourced data back to the data owner when group membership changes, but also provides trace-and-revoke features. When malicious users clone and sell their decryption keys for profit, our scheme can trace the decryption keys back to those users and revoke them. Furthermore, our scheme avoids massive message exchanges for establishing the decryption key between the data owner and the user. Compared with previously proposed public-key traitor-tracing schemes, our scheme simultaneously achieves full collusion resistance, full recoverability, full revocation, and black-box traceability. The security proof and performance analysis show that our scheme is secure and efficient.
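As background, a generic illustration of the polynomial primitive that many traitor-tracing constructions build on: each user holds a distinct point on a secret polynomial, so a leaked key identifies its holder. This sketch is not the paper's scheme (and omits all cryptographic hardening); the field modulus, degree, and key format are assumptions.

```python
# Generic polynomial-based tracing illustration (not this paper's
# scheme): users hold distinct points (x, f(x)) on a secret polynomial
# over a prime field; the x-coordinate of a leaked point names the
# traitor, and the polynomial's constant term is the shared secret.

P = 2**31 - 1  # a Mersenne prime used as the field modulus

def poly_eval(coeffs, x):
    """Evaluate a polynomial (constant term first) at x, mod P (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

secret = 123456789
coeffs = [secret, 9876543, 1112223]  # degree-2 polynomial, f(0) = secret
user_keys = {uid: (uid, poly_eval(coeffs, uid)) for uid in (1, 2, 3)}

def trace(leaked_point):
    """A leaked key is a point (x, y); match it to the user holding it."""
    for uid, key in user_keys.items():
        if key == leaked_point:
            return uid
    return None

print(trace(user_keys[2]))  # → 2
```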
In this work, we propose several new methods for detecting photographic composites using circles. In particular, we focus on three kinds of scene: (1) two coplanar circles with the same radius; (2) a single circle with a discriminable center; (3) a single circle with geometric constraints for camera calibration. In the two-circle case, we first estimate the focal length based on the equal sizes of the two coplanar circles, and then estimate the normal vector of the world circle plane. Inconsistencies in the angles among the normal vectors (each circle determines a normal vector) are used as evidence of tampering. For the single-circle case, we warp the circle to make metric measurements. To demonstrate the effectiveness of the approach, we present results on synthetic and visually plausible composite images.
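The consistency test on the recovered normals can be sketched as follows. This is a hypothetical illustration: the recovery of each normal from the imaged conic and focal length is omitted, and the example vectors and 5-degree threshold are assumptions, not values from the paper.

```python
import math

# Sketch: if two imaged circles are genuinely coplanar in the world,
# the plane normals recovered from them should agree; a large angle
# between the two normals is evidence of compositing.

def unit(v):
    """Scale a 3D vector to unit length."""
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def normal_angle_deg(n1, n2):
    """Angle in degrees between two plane normals (sign-ambiguous, so
    the absolute dot product is used)."""
    a, b = unit(n1), unit(n2)
    dot = abs(sum(x * y for x, y in zip(a, b)))
    return math.degrees(math.acos(min(1.0, dot)))

# Nearly consistent normals vs. a clearly inconsistent pair (made-up data).
authentic = normal_angle_deg((0.0, 0.0, 1.0), (0.02, 0.0, 1.0))
tampered = normal_angle_deg((0.0, 0.0, 1.0), (0.5, 0.0, 0.866))
print(authentic < 5.0 < tampered)  # → True
```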
Creating realistic 3D tree models in a convenient way is a challenge in game design and movie making because of the diversity and occlusion of tree structures. Current sketch-based and image-based approaches to fast tree modeling have limitations in quality and speed, and they generally involve complex parameter adjustment, which poses difficulties for novices. In this paper, we present a simple method for quickly generating various 3D tree models from freehand sketches without parameter adjustment. On two input images, the user draws strokes representing the main branches and crown silhouettes of a tree, and the system automatically produces a 3D tree at high speed. First, two 2D skeletons are built from the strokes, and a 3D tree structure resembling the input sketches is built by branch retrieval from the 2D skeletons. Small branches are then generated within the sketched 2D crown silhouettes based on self-similarity and angle restrictions. The system is demonstrated on a variety of examples. It preserves the main features of a tree, namely the main branch structure and crown shape, and can serve as a convenient tool for tree simulation and design.
In recent years, geometry-based image and video processing methods have attracted significant interest. This paper surveys progress from four perspectives: geometric characteristics and shape, geometric transformations, embedded geometric structure, and differential-geometry methods. Current research trends are also pointed out.