
Optimal Subsampling for Large Sample Logistic Regression

Dataset posted on 2018-06-06, 22:31. Authored by HaiYing Wang, Rong Zhu, and Ping Ma.

For massive data, subsampling algorithms are widely used to downsize the data volume and reduce the computational burden. Existing studies focus on approximating the ordinary least-squares estimate in linear regression, where statistical leverage scores are often used to define subsampling probabilities. In this article, we propose fast subsampling algorithms to efficiently approximate the maximum likelihood estimate in logistic regression. We first establish consistency and asymptotic normality of the estimator from a general subsampling algorithm, and then derive optimal subsampling probabilities that minimize the asymptotic mean squared error of the resultant estimator. An alternative minimization criterion is also proposed to further reduce the computational cost. Because the optimal subsampling probabilities depend on the full-data estimate, we develop a two-step algorithm to approximate the optimal subsampling procedure. This algorithm is computationally efficient and reduces computing time substantially compared with the full-data approach. Consistency and asymptotic normality of the estimator from the two-step algorithm are also established. Synthetic and real datasets are used to evaluate the practical performance of the proposed method. Supplementary materials for this article are available online.
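As an informal illustration of the two-step procedure summarized above, the Python sketch below draws a small uniform pilot subsample, fits a pilot logistic regression, forms data-dependent subsampling probabilities from the pilot fit, and then solves an inverse-probability-weighted maximum likelihood problem on the second-stage subsample. The function name two_step_subsample_logistic, the subsample sizes r0 and r, and the probabilities proportional to |y_i - p_i| * ||x_i|| (a simplified criterion) are illustrative assumptions rather than the paper's exact algorithm; in particular, the paper's two-step estimator also combines the pilot and second-stage subsamples, which this sketch omits.

import numpy as np

def two_step_subsample_logistic(X, y, r0=500, r=2000, seed=None):
    """Illustrative two-step subsampling estimator for logistic regression.

    Step 1: fit a pilot estimate on a small uniform subsample.
    Step 2: compute subsampling probabilities from the pilot fit, draw the
    second-stage subsample, and solve an inverse-probability-weighted MLE.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape

    def weighted_mle(Xs, ys, w, beta0, iters=100, tol=1e-8):
        # Newton iterations for the weighted logistic log-likelihood.
        beta = beta0.copy()
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-Xs @ beta))
            grad = Xs.T @ (w * (ys - p))
            hess = (Xs * (w * p * (1.0 - p))[:, None]).T @ Xs
            step = np.linalg.solve(hess, grad)
            beta = beta + step
            if np.max(np.abs(step)) < tol:
                break
        return beta

    # Step 1: uniform pilot subsample and pilot estimate.
    idx0 = rng.choice(n, size=r0, replace=True)
    beta_pilot = weighted_mle(X[idx0], y[idx0], np.ones(r0), np.zeros(d))

    # Step 2: subsampling probabilities proportional to |y_i - p_i| * ||x_i||
    # (a simplified criterion; the paper's optimal probabilities differ).
    p_hat = 1.0 / (1.0 + np.exp(-X @ beta_pilot))
    scores = np.abs(y - p_hat) * np.linalg.norm(X, axis=1)
    pi = scores / scores.sum()

    # Draw the second-stage subsample and weight each point by 1 / (n * pi_i)
    # so the weighted score function is unbiased for the full-data score.
    idx = rng.choice(n, size=r, replace=True, p=pi)
    w = 1.0 / (n * pi[idx])
    return weighted_mle(X[idx], y[idx], w, beta_pilot)

Under these assumptions, a call such as beta_hat = two_step_subsample_logistic(X, y, r0=500, r=5000) returns the second-stage estimate, with r0 and r trading accuracy against computing time; see the article and its supplementary materials for the exact optimal probabilities and the accompanying theory.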

Funding

Zhu's work was partially supported by National Natural Science Foundation of China grants 11301514 and 71532013. Ma's work was partially supported by National Science Foundation grants DMS-1440037 (1222718), DMS-1438957 (1055815), and DMS-1440038 (1228288), and by National Institutes of Health grants R01GM113242 and R01GM122080. Wang's work was supported by a Microsoft Azure for Research Award and a Simons Foundation Collaboration Grant for Mathematicians (515599).
