Regularized Linear Programming Discriminant Rule with Folded Concave Penalty for Ultrahigh-Dimensional Data

Journal contribution posted on 2022-12-01, authored by Changcheng Li, Runze Li, Jiawei Wen, Songshan Yang, Xiang Zhan

We propose the regularized linear programming discriminant (LPD) rule with a folded concave penalty in the ultrahigh-dimensional regime. We use the local linear approximation (LLA) algorithm to convert the folded-concave-penalized problem into a weighted ℓ1-penalized problem, and we verify the strong oracle property of the solution constructed by the one-step LLA algorithm. In addition, we propose efficient and parallelizable algorithms based on feature space splitting to address the computational challenges posed by ultrahigh dimensionality. The proposed feature-split algorithm is compared with existing methods in both numerical simulations and applications to real data examples. The numerical comparisons suggest that the proposed method works well in ultrahigh dimensions, whereas a direct linear programming solver and the alternating direction method of multipliers (ADMM) algorithm may fail at such dimensions. Supplementary materials for this article are available online.
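To make the one-step LLA construction concrete, the following minimal Python sketch shows the general recipe under stated assumptions: a SCAD penalty with a = 3.7, the standard LPD constraint ||Σ̂β − δ̂||_∞ ≤ λ (with sigma_hat the pooled sample covariance and delta_hat the difference of sample means), and an off-the-shelf LP solver. It is not the paper's feature-split implementation, and the function names are hypothetical. Starting from a plain ℓ1 LPD estimate, the folded concave penalty is linearized at that estimate, giving coordinate-wise weights equal to the penalty derivative, and the resulting weighted ℓ1 problem is solved as a linear program.

    import numpy as np
    from scipy.optimize import linprog

    def scad_derivative(t, lam, a=3.7):
        """Derivative of the SCAD penalty, p'_lam(|t|); used as LLA weights."""
        t = np.abs(t)
        return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

    def weighted_lpd(sigma_hat, delta_hat, lam, weights):
        """Solve min sum_j w_j |beta_j|  s.t.  ||sigma_hat @ beta - delta_hat||_inf <= lam
        as a standard LP via beta = u - v with u, v >= 0."""
        p = sigma_hat.shape[0]
        c = np.concatenate([weights, weights])            # objective: sum_j w_j (u_j + v_j)
        A = np.vstack([np.hstack([sigma_hat, -sigma_hat]),  # sigma_hat beta - delta_hat <=  lam
                       np.hstack([-sigma_hat, sigma_hat])]) # delta_hat - sigma_hat beta <=  lam
        b = np.concatenate([lam + delta_hat, lam - delta_hat])
        res = linprog(c, A_ub=A, b_ub=b,
                      bounds=[(0, None)] * (2 * p), method="highs")
        u, v = res.x[:p], res.x[p:]
        return u - v

    def one_step_lla_lpd(sigma_hat, delta_hat, lam, a=3.7):
        """One-step LLA: initialize with the unweighted l1 LPD solution,
        then re-solve once with SCAD-derivative weights."""
        p = sigma_hat.shape[0]
        beta0 = weighted_lpd(sigma_hat, delta_hat, lam, np.ones(p))  # initial l1 estimate
        w = scad_derivative(beta0, lam, a)                           # adaptive LLA weights
        return weighted_lpd(sigma_hat, delta_hat, lam, w)

In the ultrahigh-dimensional setting the article targets, the generic LP solver used in this sketch is precisely the component reported to break down, which is what motivates the parallelizable feature-split algorithm proposed in the paper.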

Funding

Runze Li’s work was supported by NSF grant DMS-1820702.
