Second letter to editor in response to author response published in J. Air Waste Manage. Assoc. 64: 1218–1220

Dear Editor,
We appreciate JA&WMA's willingness to enable discussion on this topic of national importance. We also continue to appreciate and respect Jeff Quick's honest and rigorous engagement in the course of this discussion. Our review of Jeff Quick's rebuttal, however, further reinforces our initial conclusion that the analysis he presents contains flaws.

On the Use of Attenuation Bias to Identify Measurement Error
In order to quantify the attenuation bias claimed in Quick's paper, he starts from the premise that the Clean Air Markets Division (CAMD) data exhibits greater random measurement error than the Energy Information Administration (EIA) data. His rebuttal states (emphasis added): "However, Figure 1 of Quick (2014) shows that the differences between the EIA and CAMD CO2 emission measurements are normally distributed around a mean near zero, which is clear evidence of random measurement error." And later, "the greater variability of the CAMD emission rates is a necessary and inevitable outcome of greater random CAMD measurement error." These statements and Figure 1 of the rebuttal are helpful in clarifying the misinterpretation of the data used in Quick's study. If there were many repeat measurements at a single power plant from each of the CAMD and EIA data sets, we might be persuaded to Quick's position on measurement error. But this is not what is available in these two data sets. There is only one measurement at each power plant from each data set. A single measurement, by itself, cannot be used to assess random measurement error. One cannot pool those single measurements across all power plants analyzed to assess random measurement error because the "true" emission amount is different at each facility. The greater CAMD variance could just as easily be explained by the CAMD data perfectly reflecting the true emissions and the EIA randomly varying about that true emission amount. We provide supplementary material with a statistical analysis of this issue, written by colleague Dr. Bo Li (Professor of Statistics, University of Illinois).
Since there is no statistical basis to support greater measurement error in one of the two datasets, the slope and standard error of the regressed relationships provide no information on attenuation bias.
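The identifiability problem can be illustrated with a minimal simulation sketch (all numbers are hypothetical and chosen only for illustration, not drawn from either data set): two data-generating scenarios, only one of which places the random error in the CAMD data, both reproduce zero-mean, normally distributed differences together with greater CAMD variance.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000                              # hypothetical number of power plants
true_em = rng.normal(2000, 400, n)    # hypothetical true emission rates

# Scenario A (Quick's premise): CAMD carries random error, EIA is exact.
camd_a = true_em + rng.normal(0, 100, n)
eia_a = true_em

# Scenario B: CAMD is exact; the EIA calculation shrinks true
# plant-to-plant variation toward the mean and adds its own noise.
camd_b = true_em
eia_b = true_em.mean() + 0.9 * (true_em - true_em.mean()) + rng.normal(0, 90, n)

for label, camd, eia in (("A", camd_a, eia_a), ("B", camd_b, eia_b)):
    diff = camd - eia
    print(f"Scenario {label}: mean(diff) = {diff.mean():6.1f}, "
          f"sd(CAMD) = {camd.std():5.1f}, sd(EIA) = {eia.std():5.1f}")
```

In both scenarios the differences are distributed around zero and the CAMD series has the larger variance, so neither of those observable facts identifies which data set carries the error.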
On the Transformation of Power Plant Emissions (tons CO2) to Emission Rates (lbs CO2/MWh)
Quick misses the critical element in our argument against the use of data transformation. Our concern does not rest on whether or not the gross generation is related to the CAMD data or the EIA data. Our concern is that Quick has normalized the CAMD and EIA emissions by another measured quantity whose relationship to the two compared variables is not quantified. Hence, this confounds one's ability to analyze and compare the CAMD and EIA variances. Since Quick's assertion of greater measurement error in the CAMD data rests on a comparison to the EIA data variance, this transformation further confounds what was already a misinterpretation of the data.
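The confounding can be made concrete with a hypothetical sketch: give the CAMD and EIA tonnage figures identical error magnitudes, but let the generation figure used for normalization share its error with the CAMD reporting (an unquantified relationship of exactly the kind we describe). The rate variances then diverge sharply even though neither emissions data set is noisier than the other.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
gen_true = rng.uniform(1.0, 8.0, n)    # hypothetical generation (millions of MWh)
rate_true = 2100.0                     # flat hypothetical true rate (lb CO2/MWh)
tons_true = rate_true * gen_true

shared = rng.normal(0, 0.02, n)                        # error shared with generation
camd_tons = tons_true * (1 + shared)                   # 2% CAMD tonnage error
eia_tons = tons_true * (1 + rng.normal(0, 0.02, n))    # 2% EIA tonnage error
gen_meas = gen_true * (1 + shared)                     # generation error tracks CAMD

camd_rate = camd_tons / gen_meas    # shared error cancels: rates look "clean"
eia_rate = eia_tons / gen_meas      # independent error does not cancel

print(f"relative tonnage error sd: CAMD {np.std(camd_tons / tons_true):.3f}, "
      f"EIA {np.std(eia_tons / tons_true):.3f}")
print(f"rate sd: CAMD {camd_rate.std():.1f}, EIA {eia_rate.std():.1f}")
```

With equal tonnage errors, the normalized CAMD rates appear nearly noise-free while the EIA rates appear noisy, purely because of the unmodeled error correlation introduced by the transformation.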
Quick rejects our recommendation of a log transformation, stating, "The suggested log transformation is not useful for the purpose of my study which was to determine why the EIA and CAMD CO2 emissions tallies differ." However, Quick's rationale for transforming the emissions is unambiguously stated in Quick's (2014) supplementary material, page S2: to achieve homoscedastic residuals in a regression between the two data sets such that they can be used in the attenuation bias analysis. The log transformation we suggested is a viable candidate for that purpose. Indeed, we are not arguing for a transformation in order to analyze the two datasets. We merely point out that the log transformation meets Quick's stated statistical needs but undermines his argument for attenuation bias, the core element in his conclusion that the CAMD data has greater measurement error than the EIA data.
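A brief sketch (with hypothetical numbers) of why a log transformation meets the stated homoscedasticity goal: multiplicative scatter between two paired measurements produces residuals whose spread grows with plant size on the raw scale but is constant on the log scale.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
eia = rng.lognormal(mean=14, sigma=1.2, size=n)   # hypothetical tons CO2, skewed sizes
camd = eia * np.exp(rng.normal(0, 0.05, n))       # hypothetical 5% multiplicative scatter

small = eia < np.median(eia)
large = ~small

# Raw scale: residual spread grows with plant size (heteroscedastic).
resid_raw = camd - eia
print(f"raw residual sd: small {resid_raw[small].std():.0f}, "
      f"large {resid_raw[large].std():.0f}")

# Log scale: the same multiplicative scatter has constant variance.
resid_log = np.log(camd) - np.log(eia)
print(f"log residual sd: small {resid_log[small].std():.3f}, "
      f"large {resid_log[large].std():.3f}")
```

The log-scale residual spread is essentially identical for small and large plants, which is the homoscedasticity the supplementary material of Quick (2014) requires.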

Uncertainty and Propagation of Errors
We do not question whether or not an assumed EIA error can be used to deduce a CAMD error through a root mean square calculation; we question the initial EIA error value. As with the reasoning in the previous two sections, the calculation is structured to provide an answer that is already asserted at the outset, an example of circular reasoning. The assumption of a ±1.6% (or rounded ±2%) EIA error contains multiple terms not based on any empirical evidence but simply assumed from recommended standards. For example, Quick uses an error of ±0.25% for scales measuring delivered coal. However, this is not a reflection of actual empirical data but the NIST recommended standard. Similarly, the "stockpile survey error is thought to be about ±5%" references conference proceedings at which the presenter was responding to "How accurate do you feel the inventory is?" [emphasis added]. To be fair, some of the elements in Quick's error propagation for the EIA data appear to be empirically based. But the final EIA uncertainty, so precariously calculated, cannot be reliably used to deduce the CAMD error by differencing.
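The sensitivity of the differencing step to the assumed EIA error can be sketched directly (the numbers below are hypothetical, not Quick's): if the EIA and CAMD errors are assumed independent, the observed spread of the differences satisfies sd_diff^2 = err_EIA^2 + err_CAMD^2, so the deduced CAMD error is sqrt(sd_diff^2 - err_EIA^2) and swings widely as the assumed EIA error moves.

```python
import math

sd_diff = 3.0   # hypothetical sd of the EIA-CAMD differences, in percent

# Deduce the CAMD error by differencing for several assumed EIA errors.
for err_eia in (1.0, 1.6, 2.0, 2.5, 2.9):
    err_camd = math.sqrt(sd_diff**2 - err_eia**2)
    print(f"assumed EIA error {err_eia:.1f}% -> deduced CAMD error {err_camd:.2f}%")
```

Under these hypothetical figures, moving the assumed EIA error from ±1.0% to ±2.9% drives the deduced CAMD error from about 2.8% down to below 0.8%, so a precariously estimated EIA uncertainty propagates directly into the conclusion.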
In closing, we want to emphasize that we consider the evidence insufficient to conclude which of these data sets is more or less responsible for their disagreement. We continue to pursue better data and analysis to develop an understanding of which measured or calculated quantities are leading to the divergence.