Model Selection Consistency of Lasso for Empirical Data

Yuehan Yang, Hu Yang

Chinese Annals of Mathematics, Series B, 2018, Vol. 39, Issue 4: 607-620.

DOI: 10.1007/s11401-018-0084-6

Abstract

Large-scale empirical data, in which both the sample size and the dimension are high, often exhibit various characteristics. For example, the noise term may follow an unknown distribution, or the model may be so sparse that the number of critical variables is fixed while the dimensionality grows with n. The authors consider the model selection problem of the lasso for this kind of data. They provide both theoretical guarantees and simulations, and show that the lasso is robust for various kinds of data.
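The model selection property discussed in the abstract can be illustrated with a small numerical sketch. The snippet below is not the authors' code; it is a minimal pure-numpy coordinate-descent lasso on a synthetic sparse model (a fixed number of active variables among many noise variables), which checks whether the estimated active set recovers the true one. The design, noise level, and penalty parameter are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent lasso: minimize (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # per-coordinate curvature x_j'x_j / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]     # partial residual excluding coordinate j
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

# Sparse model: only s of the p coefficients are nonzero (illustrative values)
rng = np.random.default_rng(0)
n, p, s = 200, 50, 3
beta = np.zeros(p)
beta[:s] = [3.0, -2.0, 1.5]
X = rng.standard_normal((n, p))
y = X @ beta + 0.5 * rng.standard_normal(n)

b_hat = lasso_cd(X, y, lam=0.2)
selected = np.flatnonzero(np.abs(b_hat) > 1e-8)
print(selected)  # indices of the variables the lasso selects
```

With a penalty level above the noise scale but below the smallest nonzero signal, the selected set typically coincides with the true active set, which is the model selection consistency phenomenon the paper studies.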

Keywords

Lasso / Model selection / Empirical data

Cite this article

Yuehan Yang, Hu Yang. Model Selection Consistency of Lasso for Empirical Data. Chinese Annals of Mathematics, Series B, 2018, 39(4): 607-620. DOI: 10.1007/s11401-018-0084-6
