Regularized Linear Programming Discriminant Rule with Folded Concave Penalty for Ultrahigh-Dimensional Data
We propose a regularized linear programming discriminant (LPD) rule with a folded concave penalty for the ultrahigh-dimensional regime. We use the local linear approximation (LLA) algorithm to convert the folded concave penalized model into a weighted ℓ1 model. We establish the strong oracle property of the solution produced by the one-step LLA algorithm. In addition, we propose efficient and parallelizable algorithms based on splitting the feature space to address the computational challenges posed by ultrahigh dimensionality. The proposed feature-split algorithm is compared with existing methods through both numerical simulations and real data applications. The numerical comparisons suggest that the proposed method works well in ultrahigh dimensions, whereas a direct linear programming solver and the alternating direction method of multipliers (ADMM) algorithm may fail at such dimensions. Supplementary materials for this article are available online.
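To illustrate the LLA step mentioned above, here is a minimal sketch (the notation is illustrative, not necessarily the paper's): given a folded concave penalty p_λ (e.g., SCAD) and an initial estimate \hat{\beta}^{(0)}, the penalty is linearized around the initial estimate,

\[
p_{\lambda}\bigl(\lvert \beta_j \rvert\bigr) \;\approx\; p_{\lambda}\bigl(\lvert \hat{\beta}^{(0)}_j \rvert\bigr) + p'_{\lambda}\bigl(\lvert \hat{\beta}^{(0)}_j \rvert\bigr)\,\bigl(\lvert \beta_j \rvert - \lvert \hat{\beta}^{(0)}_j \rvert\bigr),
\]

so that, up to constants, the penalized problem reduces to a weighted ℓ1 problem with weights w_j = p'_{\lambda}(\lvert \hat{\beta}^{(0)}_j \rvert); the one-step LLA solution is obtained by solving this weighted ℓ1 problem once.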