
Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation

Dataset posted on 2018-04-23, 16:36, authored by D. Angus Clark and Ryan P. Bowles

In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker–Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.
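The fit statistics named above are all functions of a model's chi-square value and degrees of freedom (RMSEA additionally uses sample size, and CFI and TLI compare against a baseline independence model). As a hedged illustration of how these indices and their conventional cutoffs (roughly RMSEA ≤ .06, CFI/TLI ≥ .95, following common practice) are computed, here is a minimal Python sketch using standard textbook formulas; the chi-square values, degrees of freedom, and sample size below are hypothetical and not taken from the study's simulations:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (Steiger-Lind formula)."""
    return math.sqrt(max(0.0, (chi2 - df) / (df * (n - 1))))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: fitted model vs. baseline (independence) model."""
    num = max(0.0, chi2_m - df_m)
    den = max(num, chi2_b - df_b)
    return 1.0 - num / den if den > 0 else 1.0

def tli(chi2_m, df_m, chi2_b, df_b):
    """Tucker-Lewis index (non-normed fit index)."""
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)

# Hypothetical values for a fitted model and its baseline model.
chi2_m, df_m, n = 120.0, 87, 500
chi2_b, df_b = 2400.0, 105

r = rmsea(chi2_m, df_m, n)
c = cfi(chi2_m, df_m, chi2_b, df_b)
t = tli(chi2_m, df_m, chi2_b, df_b)

# Conventional thresholds developed for continuous-indicator CFA models;
# the study's point is that these cutoffs may not transfer to IFA.
print(f"RMSEA={r:.3f} (<= .06: {r <= 0.06})")
print(f"CFI={c:.3f}   (>= .95: {c >= 0.95})")
print(f"TLI={t:.3f}   (>= .95: {t >= 0.95})")
```

Note that these closed-form expressions show why the indices move together with chi-square: any factor (such as categorical indicators or minor unmodeled factors) that inflates or deflates chi-square propagates into all three derived indices, which is one reason fixed cutoffs can behave inconsistently across conditions.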

Funding

This work was supported by Grants R305A110293 and R324A150063 from the Institute of Education Sciences, U.S. Department of Education.


    Multivariate Behavioral Research
