Validity and power of minimization algorithm in longitudinal analysis of clinical trials
We studied the validity of longitudinal statistical inferences in clinical trials that use minimization, a dynamic randomization algorithm designed to minimize treatment imbalance across prognostic factors. Repeated measures analysis of covariance and random intercept and slope models were used to simulate longitudinal clinical trials randomized by minimization or by simple randomization. The simulations covered a wide range of analyses encountered in real-world trials, including missing data caused by dropouts, unequal allocation between treatment arms, and efficacy analyses on either the original outcome or its change from baseline. We also analyzed the database from the Dominantly Inherited Alzheimer Network (DIAN) and used the estimated parameters to simulate the ongoing DIAN trial. Our analyses demonstrated that minimization had conservative type I errors when the prognostic factor used in the minimization algorithm was relatively strongly correlated with the outcome and was not adjusted for in the analyses. In contrast, tests adjusting for the prognostic factor as a covariate had type I errors close to the nominal significance level. In many simulation scenarios, the adjusted tests under minimization had slightly greater statistical power than those under simple randomization, whereas in the remaining scenarios the power of the adjusted tests was almost indistinguishable between the two randomization methods.
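The minimization algorithm referred to above (in the Pocock-Simon style) can be sketched as follows; this is an illustrative assumption about the general technique, not the paper's specific implementation, and the factor names and the biased-coin probability `p_best` are hypothetical.

```python
import random

def minimize_assign(counts, patient_levels, arms=(0, 1), p_best=0.8, rng=None):
    """Assign a new patient to the arm that minimizes marginal imbalance
    across prognostic factors (two-arm Pocock-Simon minimization sketch).

    counts[factor][level] is a list of per-arm counts of patients already
    assigned at that factor level; patient_levels[factor] is the new
    patient's level. With probability p_best the less-imbalanced arm is
    chosen (biased coin); exact ties fall back to pure randomization.
    """
    rng = rng or random.Random()
    # Imbalance score per arm: total count, over all factors, of patients
    # sharing this patient's factor levels already in that arm.
    scores = [sum(counts[f][patient_levels[f]][arm] for f in counts)
              for arm in arms]
    best = 0 if scores[0] <= scores[1] else 1
    if scores[0] == scores[1]:
        choice = rng.choice(arms)      # tie: simple randomization
    elif rng.random() < p_best:
        choice = arms[best]            # favor the lagging arm
    else:
        choice = arms[1 - best]
    # Record the assignment so future patients see the updated counts.
    for f in counts:
        counts[f][patient_levels[f]][choice] += 1
    return choice
```

For example, if arm 0 already holds more patients matching the new patient's levels, setting `p_best=1.0` makes the assignment deterministic toward arm 1; values below 1 keep the allocation partly random, which is what distinguishes minimization from deterministic balancing.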