A simulation study to examine the sensitivity of the Pettitt test to detect abrupt changes in mean

2015-12-09T10:54:11Z (GMT) by Gabriele Villarini Iman Mallakpour
<div><p>The Pettitt test is a non-parametric test that has been used in a number of hydroclimatological studies to detect abrupt changes in the mean of the distribution of the variable of interest. The test is based on the Mann-Whitney two-sample (rank-based) test, and allows the detection of a single shift at an unknown point in time. Because it makes no distributional assumptions, it is often used to detect shifts in extremes. However, the downside of not specifying a distribution is that the Pettitt test may be inefficient in detecting breaks when dealing with extremes. Here we adopt a Monte Carlo approach to examine the sensitivity of the Pettitt test in detecting shifts in the mean under different conditions (location of the break within the series, magnitude of the shift, record length, level of variability in the data, extreme <i>vs</i> non-extreme records, and pre-assigned significance level). The simulation results show that the sensitivity of this test in detecting abrupt changes increases with the magnitude of the shift and with record length. The number of detections is higher when the time series represents the central part of the distribution (e.g. changes in the time series of medians), while the skill decreases as we move toward either low or high extremes (e.g. changes in the time series of maxima). Furthermore, the number of detections decreases as the variability in the data increases. Finally, abrupt changes are more easily detected when they occur toward the center of the time series.</p><p><b>Editor</b> D. Koutsoyiannis <b>Associate editor</b> K. Hamed</p></div>
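The experimental design described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the function names, the Gaussian series with an additive step change, and the specific parameter values are assumptions chosen for the example. The Pettitt statistic U_t is computed via its rank form, U_t = 2·Σ_{i≤t} R_i − t(n+1), with the usual approximate significance level p ≈ 2·exp(−6K²/(n³+n²)) for K = max_t |U_t|.

```python
import numpy as np


def pettitt_test(x):
    """Pettitt change-point test for a single shift at an unknown time.

    Returns (tau, K, p): the most probable break index (1-based),
    the statistic K = max_t |U_t|, and the approximate p-value
    p ~ 2*exp(-6*K**2 / (n**3 + n**2)).
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    # Ranks 1..n (ties are negligible for continuous simulated data).
    ranks = np.argsort(np.argsort(x)) + 1
    # U_t = 2 * sum_{i<=t} R_i - t*(n+1), for t = 1..n.
    u = 2 * np.cumsum(ranks) - np.arange(1, n + 1) * (n + 1)
    tau = int(np.argmax(np.abs(u))) + 1
    k = float(np.abs(u).max())
    p = 2.0 * np.exp(-6.0 * k**2 / (n**3 + n**2))
    return tau, k, min(p, 1.0)


def detection_rate(n=100, break_frac=0.5, shift=1.0, sigma=1.0,
                   alpha=0.05, n_sim=1000, seed=0):
    """Monte Carlo estimate of the Pettitt test's detection rate.

    Simulates Gaussian series of length n with an abrupt shift in the
    mean at index n*break_frac, and returns the fraction of series for
    which the test rejects at significance level alpha.
    """
    rng = np.random.default_rng(seed)
    tau0 = int(n * break_frac)
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(0.0, sigma, n)
        x[tau0:] += shift  # abrupt change in the mean
        _, _, p = pettitt_test(x)
        if p < alpha:
            hits += 1
    return hits / n_sim
```

Varying `shift`, `n`, `sigma`, `break_frac`, and `alpha` in `detection_rate` reproduces the kind of sensitivity experiment the abstract describes, e.g. detection rates rising with the magnitude of the shift and the record length; the extreme-value cases would replace the Gaussian draws with block maxima or minima of a simulated parent series.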