M. Bogdan, E. van den Berg, C. Sabatti, W. Su and E. J. Candès, SLOPE – Adaptive variable selection via convex optimization, Ann. Appl. Statist. 9 (2015), 1103–1140.
M. Bogdan, E. van den Berg, W. Su and E. J. Candès, Statistical estimation and testing via the sorted ℓ1 norm, arXiv:1310.1969 (2013).
M. Bogdan, X. Dupuis, P. Graczyk, B. Kołodziejek, T. Skalski, P. Tardivel and M. Wilczyński, Pattern recovery by SLOPE, arXiv:2203.12086 (2022).
H. D. Bondell and B. J. Reich, Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR, Biometrics 64 (2008), 115–123.
H. D. Bondell and B. J. Reich, Simultaneous factor selection and collapsing levels in ANOVA, Biometrics 65 (2009), 169–177.
S. Sh. Chen and D. L. Donoho, Basis pursuit, in: Proc. 28th Asilomar Conference on Signals, Systems and Computers, IEEE, 1994, 41–44.
S. Sh. Chen, D. L. Donoho and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput. 20 (1998), 33–61.
X. Dupuis and P. Tardivel, Proximal operator for the sorted ℓ1 norm: Application to testing procedures based on SLOPE, hal-03177108v2 (2021).
K. Ewald and U. Schneider, Uniformly valid confidence sets based on the Lasso, Electron. J. Statist. 12 (2018), 1358–1387.
M. A. T. Figueiredo and R. Nowak, Ordered weighted ℓ1 regularized regression with strongly correlated covariates: Theoretical aspects, in: Proc. 19th Int. Conf. on Artificial Intelligence and Statistics, Proc. Mach. Learning Res. 51, 2016, 930–938.
J. Gertheiss and G. Tutz, Sparse modeling of categorial explanatory variables, Ann. Appl. Statist. 4 (2010), 2150–2180.
P. Kremer, D. Brzyski, M. Bogdan and S. Paterlini, Sparse index clones via the sorted ℓ1-norm, Quant. Finance 22 (2022), 349–366.
A. Maj-Kańska, P. Pokarowski and A. Prochenka, Delete or merge regressors for linear model selection, Electron. J. Statist. 9 (2015), 1749–1778.
K. Minami, Degrees of freedom in submodular regularization: a computational perspective of Stein's unbiased risk estimate, J. Multivariate Anal. 175 (2020), art. 104546, 22 pp.
R. Negrinho and A. F. T. Martins, Orbit regularization, in: Advances in Neural Information Processing Systems 27, 2014, 9 pp.
Sz. Nowakowski, P. Pokarowski and W. Rejchel, Group Lasso merger for sparse prediction with high-dimensional categorical data, arXiv:2112.11114 (2021).
M.-R. Oelker, J. Gertheiss and G. Tutz, Regularization and model selection with categorical predictors and effect modifiers in generalized linear models, Statist. Model. 14 (2014), 157–177.
K. R. Rao, N. Ahmed and M. A. Narasimhan, Orthogonal transforms for digital signal processing, in: Proc. 18th Midwest Symposium on Circuits and Systems, 1975, 1–6.
U. Schneider and P. Tardivel, The geometry of uniqueness, sparsity and clustering in penalized estimation, arXiv:2004.09106 (2020).
B. G. Stokell, R. D. Shah and R. J. Tibshirani, Modelling high-dimensional categorical data using nonconvex fusion penalties, arXiv:2002.12606 (2021).
P. Tardivel, R. Servien and D. Concordet, Simple expression of the LASSO and SLOPE estimators in low-dimension, Statistics 54 (2020), 340–352.
P. Tardivel, T. Skalski, P. Graczyk and U. Schneider, The geometry of model recovery by penalized and thresholded estimators, hal-03262087 (2021).
R. Tibshirani, Regression shrinkage and selection via the lasso, J. R. Statist. Soc. Ser. B 58 (1996), 267–288.
X. Zeng and M. A. T. Figueiredo, Decreasing weighted sorted ℓ1 regularization, IEEE Signal Process. Lett. 21 (2014), 1240–1244.
P. Zhao and B. Yu, On model selection consistency of Lasso, J. Mach. Learn. Res. 7 (2006), 2541–2563.
H. Zou, The adaptive lasso and its oracle properties, J. Amer. Statist. Assoc. 101 (2006), 1418–1429.