TY - DATA
T1 - Sparse Partially Linear Additive Models
PY - 2015/09/22
AU - Yin Lou
AU - Jacob Bien
AU - Rich Caruana
AU - Johannes Gehrke
UR - https://tandf.figshare.com/articles/journal_contribution/Sparse_Partially_Linear_Additive_Models/1569799
DO - 10.6084/m9.figshare.1569799.v1
L4 - https://ndownloader.figshare.com/files/2352540
KW - additive model
KW - GPLAM
KW - model selection challenges
KW - Sparse Partially Linear Additive Models
KW - SPLAM
N2 - The generalized partially linear additive model (GPLAM) is a flexible and interpretable approach to building predictive models. It combines features in an additive manner, allowing each to have either a linear or nonlinear effect on the response. However, the choice of which features to treat as linear or nonlinear is typically assumed known. Thus, to make a GPLAM a viable approach in situations in which little is known a priori about the features, one must overcome two primary model selection challenges: deciding which features to include in the model and determining which of these features to treat nonlinearly. We introduce the sparse partially linear additive model (SPLAM), which combines model fitting and both of these model selection challenges into a single convex optimization problem. SPLAM provides a bridge between the Lasso and sparse additive models. Through a statistical oracle inequality and thorough simulation, we demonstrate that SPLAM can outperform other methods across a broad spectrum of statistical regimes, including the high-dimensional (p ≫ N) setting. We develop efficient algorithms that are applied to real data sets with half a million samples and over 45,000 features with excellent predictive performance.
ER -