
The Perils of Balance Testing in Experimental Design: Messy Analyses of Clean Data

Supplementary file: utas_a_1322143_sm0407.pdf (754.69 kB)
Version 2 2018-05-10, 22:33
Version 1 2017-06-26, 20:28
Journal contribution posted on 2018-05-10, 22:33. Authored by Diana C. Mutz, Robin Pemantle, and Philip Pham.

Widespread concern over the credibility of published results has led to scrutiny of statistical practices. We address one aspect of this problem that stems from the use of balance tests in conjunction with experimental data. When random assignment is botched, due either to mistakes in implementation or differential attrition, balance tests can be an important tool in determining whether to treat the data as observational versus experimental. Unfortunately, the use of balance tests has become commonplace in analyses of “clean” data, that is, data for which random assignment can be stipulated. Here, we show that balance tests can destroy the basis on which scientific conclusions are formed, and can lead to erroneous and even fraudulent conclusions. We conclude by advocating that scientists and journal editors resist the use of balance tests in all analyses of clean data. Supplementary materials for this article are available online.
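
A purely illustrative sketch (not drawn from the article or its supplementary materials): under flawless random assignment, a covariate balance test still "fails" at roughly its nominal rate, so a significant balance test in clean data is expected noise rather than evidence of botched assignment. The sample size, number of simulated experiments, and the choice of a two-sample t-test as the balance test are all assumptions of this sketch.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sims, alpha = 200, 5000, 0.05  # assumed settings for the illustration

false_alarms = 0
for _ in range(n_sims):
    covariate = rng.normal(size=n)           # pre-treatment covariate
    treat = rng.permutation(n) < n // 2      # clean random assignment: exactly half treated
    # "Balance test": two-sample t-test comparing covariate means across arms
    _, p_value = stats.ttest_ind(covariate[treat], covariate[~treat])
    false_alarms += p_value < alpha

print(f"Share of clean experiments 'failing' the balance test: {false_alarms / n_sims:.3f}")
# With flawless randomization this share should land near alpha (about 0.05).

Run as written, the printed share should be close to 0.05, which is simply the false-positive rate built into the test rather than a signal that anything went wrong with assignment.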

Funding

Diana C. Mutz’s research is supported in part by the Institute for the Study of Citizens and Politics. Robin Pemantle’s research is supported in part by National Science Foundation grants DMS-1209117 and DMS-1612674.
