
Prediction and model evaluation for space–time data

Journal contribution posted on 2023-09-03 by G. L. Watson, C. E. Reid, M. Jerrett, D. Telesca

Evaluation metrics for prediction error, model selection, and model averaging on space–time data are understudied and poorly understood. The absence of independent replication makes prediction ambiguous as a concept and renders evaluation procedures developed for independent data inappropriate for most space–time prediction problems. Motivated by air pollution data collected during the 2008 California wildfires, this manuscript formalizes the true prediction error associated with spatial interpolation. We investigate a variety of cross-validation (CV) procedures, employing both simulations and case studies, to provide insight into the estimand targeted by alternative data-partition strategies. Consistent with recent best practice, we find that location-based cross-validation is appropriate for estimating spatial interpolation error, as in our analysis of the California wildfire data. Interestingly, commonly held notions of the bias–variance trade-off associated with CV fold size do not trivially apply to dependent data, and we recommend leave-one-location-out (LOLO) CV as the preferred prediction error metric for spatial interpolation.
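To make the LOLO CV idea concrete, here is a minimal sketch in Python. The synthetic data, the random-forest model, and all variable names are illustrative assumptions, not the authors' implementation; the key step is that each fold holds out every observation from one monitor location, so the error estimate reflects interpolation to unseen sites rather than prediction at sites the model has already seen.

```python
# A minimal sketch of leave-one-location-out (LOLO) cross-validation.
# Assumptions: synthetic data and a random forest stand in for the paper's
# wildfire PM2.5 data and models. scikit-learn's LeaveOneGroupOut holds out
# all observations sharing one group label (here, a monitor location) per fold.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)

# Synthetic space-time data: 20 monitor locations, 50 time points each.
n_locations, n_times = 20, 50
coords = rng.uniform(0, 10, size=(n_locations, 2))        # lon/lat per monitor
location_id = np.repeat(np.arange(n_locations), n_times)  # group label per row
X = np.column_stack([
    coords[location_id],                                  # spatial coordinates
    np.tile(np.arange(n_times), n_locations),             # time index
])
# Outcome with a smooth spatial signal plus noise (a stand-in for PM2.5).
y = np.sin(X[:, 0]) + np.cos(X[:, 1]) + 0.1 * rng.standard_normal(len(X))

# Each fold drops every observation from one location, forcing the model to
# interpolate to an unseen site instead of memorizing that site's time series.
errors = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=location_id):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    errors.append(np.mean((pred - y[test_idx]) ** 2))

print(f"LOLO CV estimate of spatial interpolation MSE: {np.mean(errors):.4f}")
```

By contrast, a naive random (observation-level) split would place measurements from the same location in both the training and test sets, letting spatial dependence leak across the partition and understating the interpolation error.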
