
False Discovery Rate Smoothing

Version 2 2018-06-05, 21:01
Version 1 2017-06-26, 15:48
Dataset posted on 2018-06-05, 21:01, authored by Wesley Tansey, Oluwasanmi Koyejo, Russell A. Poldrack, James G. Scott

We present false discovery rate (FDR) smoothing, an empirical-Bayes method for exploiting spatial structure in large multiple-testing problems. FDR smoothing automatically finds spatially localized regions of significant test statistics. It then relaxes the threshold of statistical significance within these regions, and tightens it elsewhere, in a manner that controls the overall false discovery rate at a given level. This results in increased power and cleaner spatial separation of signals from noise. The approach requires solving a nonstandard high-dimensional optimization problem, for which an efficient augmented-Lagrangian algorithm is presented. In simulation studies, FDR smoothing exhibits state-of-the-art performance at modest computational cost. In particular, it is shown to be far more robust than existing methods for spatially dependent multiple testing. We also apply the method to a dataset from an fMRI experiment on spatial working memory, where it detects patterns that are much more biologically plausible than those detected by standard FDR-controlling methods. All code for FDR smoothing is publicly available in Python and R (https://github.com/tansey/smoothfdr). Supplementary materials for this article are available online.
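The core idea of the abstract, adapting the significance threshold via a spatially varying prior in the two-groups model, can be illustrated with a minimal sketch. This is not the authors' algorithm (which estimates the prior via an augmented-Lagrangian solver; see the linked repository); here the spatially varying prior, the alternative density N(3, 1), and the signal region are all assumed for illustration.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 1-D illustration of the two-groups model behind FDR smoothing.
# Each site s has a z-statistic drawn from the null N(0,1) or the
# alternative N(3,1); the prior signal probability pi1_s varies over sites.

rng = np.random.default_rng(0)
n = 200
sites = np.arange(n)
# assumed "signal region" in the middle of the grid
pi1 = np.where((sites >= 80) & (sites < 120), 0.8, 0.05)
is_signal = rng.random(n) < pi1
z = np.where(is_signal, rng.normal(3.0, 1.0, n), rng.normal(0.0, 1.0, n))

f0 = norm.pdf(z, 0.0, 1.0)  # null density
f1 = norm.pdf(z, 3.0, 1.0)  # alternative density (assumed known here)

def local_fdr(prior1):
    """Posterior probability that each site is null, given prior signal prob."""
    return (1 - prior1) * f0 / ((1 - prior1) * f0 + prior1 * f1)

flat = local_fdr(np.full(n, pi1.mean()))   # one global prior for every site
smooth = local_fdr(pi1)                    # spatially varying prior

# Inside the signal region the spatial prior lowers the local FDR,
# which is how a region-aware method effectively relaxes the
# significance threshold there and tightens it elsewhere.
print(flat[80:120].mean() > smooth[80:120].mean())
```

The actual method estimates the spatially varying prior from the data itself, rather than assuming it as above.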

Funding

The research described here was partially supported by NSF CAREER grant DMS-1255187 (JGS).
