Statistics and Its Interface
Volume 14 (2021)
Generalized Newton–Raphson algorithm for high dimensional LASSO regression
Pages: 339–350
The least absolute shrinkage and selection operator (LASSO) penalized regression is a state-of-the-art statistical method for high dimensional data analysis, where the number of predictors exceeds the number of observations. The commonly used Newton–Raphson algorithm is not well suited to the non-smooth optimization problem in LASSO. In this paper, we propose a fast generalized Newton–Raphson (GNR) algorithm for LASSO-type problems. The proposed algorithm, derived from suitable Karush–Kuhn–Tucker (KKT) conditions via generalized Newton derivatives, is a non-smooth Newton-type method. We first establish the local one-step convergence of GNR and then show that it is very efficient and accurate when coupled with a continuation strategy. We also develop a novel parameter selection method. Numerical studies on simulated and real data suggest that the GNR algorithm, with better (or comparable) accuracy, is faster than the algorithm implemented in the popular glmnet package.
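The abstract's starting point, the KKT conditions of the LASSO problem, can be illustrated with a standard iterative soft-thresholding (ISTA) solver: a coefficient vector solves the LASSO if and only if it is a fixed point of the soft-thresholded gradient step. The sketch below is not the paper's GNR algorithm, only a minimal baseline showing the same KKT characterization; the function names and the (1/2n) loss scaling are assumptions for illustration.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding operator S_t(z) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(X, y, lam, max_iter=5000, tol=1e-8):
    """Solve min_beta (1/2n)||y - X beta||^2 + lam * ||beta||_1 by ISTA.

    A fixed point beta = S_{step*lam}(beta - step * grad) satisfies
    the LASSO KKT conditions, the same characterization the GNR
    method attacks with a non-smooth Newton step.
    """
    n, p = X.shape
    # Step size 1/L, with L the Lipschitz constant of the smooth part's gradient.
    step = n / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(max_iter):
        grad = X.T @ (X @ beta - y) / n
        beta_new = soft_threshold(beta - step * grad, step * lam)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

ISTA converges only linearly at best, which is one motivation the abstract gives for a Newton-type method with local one-step convergence.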
LASSO, generalized Newton–Raphson, continuation, local one-step convergence, voting
2010 Mathematics Subject Classification
Primary 62F12. Secondary 62J05, 62J07.
Yueyong Shi is supported by the National Natural Science Foundation of China (Grant Nos. 11801531 and 11701571), Yuling Jiao is supported by the National Natural Science Foundation of China (Grant No. 11871474), and Hu Zhang is supported by the National Social Science Fund of China (Grant No. 17BTJ017).
Received 24 July 2019
Accepted 7 October 2020
Published 9 February 2021