In this paper, we first give a direct construction of the
A tolerance interval is an interval constructed to contain, with a fixed confidence level, at least a given proportion of the population. It is widely needed in industrial and business practice, such as product design, reliability analysis, and quality inspection. However, compared with its wide range of applications, research on it remains quite limited. In this paper, we propose a numerical method to compute the tolerance interval for the exponential distribution. As the simulation study illustrates, our method performs consistently well as the sample size varies. In particular, its good performance for small samples gives it broad potential usefulness in practice.
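The paper's numerical method is not reproduced here. As background, for the exponential distribution the classical one-sided (β, γ) upper tolerance limit has an exact closed form through chi-square quantiles; a minimal sketch (assuming SciPy is available):

```python
import math
from scipy.stats import chi2

def exp_upper_tolerance_limit(data, beta=0.90, gamma=0.95):
    """Exact upper (beta, gamma) tolerance limit for an exponential sample.

    With T = sum(data) and 2T/theta ~ chi2(2n), a gamma-level upper
    confidence bound for the mean theta is 2T / chi2.ppf(1 - gamma, 2n);
    scaling it by -log(1 - beta) yields a limit covering proportion beta
    of the population with confidence gamma.
    """
    n = len(data)
    T = sum(data)
    theta_upper = 2.0 * T / chi2.ppf(1.0 - gamma, 2 * n)
    return -math.log(1.0 - beta) * theta_upper

# Example: 10 observations with sample mean 1
limit = exp_upper_tolerance_limit([1.0] * 10)
```

This exact limit is a natural benchmark against which a small-sample numerical method can be compared.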
In this paper, we establish deviation inequalities and moderate deviation principles for the least squares estimators of the parameters in the threshold autoregressive model, under the assumption that the noise random variable satisfies a logarithmic Sobolev inequality.
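For concreteness, a generic first-order, two-regime threshold autoregressive (TAR) specification — the paper's exact parameterization may differ — reads:

```latex
X_t = \phi_1 X_{t-1}\,\mathbf{1}\{X_{t-1} \le r\}
    + \phi_2 X_{t-1}\,\mathbf{1}\{X_{t-1} > r\}
    + \varepsilon_t ,
```

where $r$ is the threshold, $(\phi_1, \phi_2)$ are the regime coefficients estimated by least squares, and the i.i.d. noise $\varepsilon_t$ is assumed to satisfy a logarithmic Sobolev inequality.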
Latin hypercube designs are a good choice for computer experiments. In this paper, we construct a new class of Latin hypercube designs with high-dimensional hidden projective uniformity. The construction is based on a new class of orthogonal arrays of strength two that contain higher-strength orthogonal arrays after their levels are collapsed. As a result, the obtained Latin hypercube designs achieve higher-dimensional uniformity when projected onto the columns corresponding to the higher-strength orthogonal arrays, in addition to two-dimensional projective uniformity. A simulation study shows that the constructed Latin hypercube designs are significantly superior to currently used designs in terms of how often the significant effects are correctly identified.
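The orthogonal-array-based construction is not reproduced here. As a baseline for comparison, a plain random Latin hypercube design — which guarantees only one-dimensional projective uniformity, not the hidden higher-dimensional uniformity described above — can be sketched as follows:

```python
import random

def random_lhd(n, k, seed=0):
    """n-run, k-factor random Latin hypercube design on [0, 1)^k.

    Each column independently permutes the n strata and jitters one
    point within each stratum, so every one-dimensional projection
    has exactly one point per stratum.
    """
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(level + rng.random()) / n for level in perm])
    # Transpose columns into an n x k list of design points.
    return [[cols[j][i] for j in range(k)] for i in range(n)]

D = random_lhd(8, 3)
```

Projecting any single factor of `D` onto its n strata recovers one point per stratum, the defining Latin hypercube property.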
This paper considers the optimal control of a large financial company with debt liability under a bankruptcy probability constraint. The company, which faces constant liability payments and can choose among production/business policies from an available set of control policies with different expected profits and risks, controls the business policy and the dividend payout process to maximize the expected present value of the dividends until the time of bankruptcy. However, if the dividend payout barrier is too low, it may soon lead to the company's bankruptcy. In order to protect the shareholders' profits, the management of the company imposes a reasonable constraint on the dividend strategy: the bankruptcy probability associated with the optimal dividend payout barrier should be smaller than a given risk level within a fixed time horizon. This paper works out the optimal control policy, as well as the optimal return function, for the company under the bankruptcy probability constraint via stochastic analysis, partial differential equations, and a variational inequality approach. Moreover, we establish a risk-based capital standard, using numerical analysis, to ensure that the capital requirement can cover the total given risk, and we give a reasonable economic interpretation of the results.
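In this class of models the objective typically takes the following form (an illustrative formulation with hypothetical notation; the paper's precise setup may differ):

```latex
V(x) = \sup_{\pi,\, L}\; \mathbb{E}_x\!\left[ \int_0^{\tau} e^{-c t}\, \mathrm{d}L_t \right],
\qquad \text{subject to}\quad \mathbb{P}_x(\tau \le T) \le \varepsilon ,
```

where $x$ is the initial reserve, $\pi$ the business policy, $L_t$ the cumulative dividends paid by time $t$, $\tau$ the bankruptcy time, $c$ the discount rate, and $\varepsilon$ the given risk level over the horizon $T$.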
This work presents two methods, a least-squares method and a Bayesian method, to solve the multiple mapping problem in extracting gene expression profiles from next-generation sequencing. We align the tag sequences to the genome in parallel and partition them to improve the methods' efficiency. The essential feature of these methods is that they can resolve the multiple mapping problem between genes and short reads, while producing almost the same estimates as the traditional approaches in the single-mapping situation. The two methods are compared by simulation and on a real example, generated from radiation-induced lung cancer cells (A549), by mapping short reads to the human ncRNA database. The results show that the Bayesian method, realized by a Gibbs sampler, is more efficient and robust than the least-squares method.
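The paper's Gibbs sampler is not shown here. The least-squares idea — allocating multi-mapped read counts by fitting gene expression levels to the observed counts under a linear mapping structure — can be sketched with hypothetical data (the matrix and counts below are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical mapping matrix: rows are read classes, columns are genes;
# A[i, j] = 1 if reads in class i map to gene j (a multi-mapping class
# has more than one 1 in its row).
A = np.array([[1.0, 0.0],   # reads mapping uniquely to gene 1
              [0.0, 1.0],   # reads mapping uniquely to gene 2
              [1.0, 1.0]])  # reads mapping ambiguously to both genes
counts = np.array([30.0, 10.0, 20.0])  # observed counts per read class

# Nonnegative least squares recovers expression levels consistent with
# both the unique and the multi-mapped counts.
expr, residual = nnls(A, counts)
```

When every read class maps uniquely (an identity-like `A`), this reduces to the traditional per-gene count, matching the abstract's remark about the single-mapping situation.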
In this paper, we study an ergodic theorem for a parabolic Anderson model driven by Lévy noise. Under the assumption that
This paper extends the model and analysis of Vandaele and Vanmaele [Insurance: Mathematics and Economics, 2008, 42: 1128-1137]. We assume that the parameters of the Lévy process which models the dynamics of the risky asset in the financial market depend on a finite-state Markov chain. The state of the Markov chain can be interpreted as the state of the economy. Under the regime-switching Lévy model, we obtain the locally risk-minimizing hedging strategies for some unit-linked life insurance products, including both the pure endowment policy and the term insurance contract.
Affymetrix single-nucleotide polymorphism (SNP) arrays have been widely used for SNP genotype calling and copy number variation (CNV) studies, both of which depend heavily on accurate DNA copy number estimation. However, methods for copy number estimation may suffer from several kinds of difficulties: probe-dependent binding affinity, cross-hybridization of probes, and the whole genome amplification (WGA) of DNA sequences. The probe intensity composite representation (PICR) model, a previously established approach, can cope with most of these complexities and achieves high accuracy in SNP genotyping. Nevertheless, the copy numbers estimated by the PICR model still show array- and site-dependent biases in CNV studies. In this paper, we propose a procedure to adjust these biases and then make CNV inferences based on both the PICR model and our method. The comparison indicates that our correction of copy numbers is necessary for CNV studies.
Spatio-temporal models are widely used for inference in statistics and many applied areas. In such contexts, interest often lies in the fractal nature of the sample surfaces and in the rate of change of the spatial surface at a given location in a given direction. In this paper, we apply the theory of Yaglom (1957) to construct a large class of space-time Gaussian models with stationary increments, establish bounds on the prediction errors, and determine the smoothness and fractal properties of this class of Gaussian models. Our results can be applied directly to analyze the stationary space-time models introduced by Cressie and Huang (1999), Gneiting (2002), and Stein (2005), respectively.
Principal strata are defined by the potential values of a post-treatment variable, and a principal effect is a causal effect within a principal stratum. Identifying the principal effect within every principal stratum is quite challenging. In this paper, we propose an approach for identifying principal effects on a binary outcome via a pre-treatment covariate. We prove identifiability with a single post-treatment intervention under the monotonicity assumption. Furthermore, we discuss local identifiability with a multicomponent intervention. Simulations are performed to evaluate our approach. We also apply it to a real data set from the Improving Mood-Promoting Access to Collaborative Treatment program.
In this paper, we introduce a saddlepoint approximation method for higher-order moments like