A general practice workplace-based assessment instrument: Content and construct validity

Introduction: Relatively few general practice (GP) workplace-based assessment (WBA) instruments have been psychometrically evaluated. This study aims to establish the content validity and internal consistency of the General Practice Registrar Competency Assessment Grid (GPR-CAG).

Methods: The GPR-CAG was constructed as a formative assessment instrument for Australian GP registrars (trainees). GPR-CAG items were determined through an iterative process of literature review, expert opinion and pilot testing. Validation data were collected between 2014 and 2016, during routine clinical teaching visits, from registrars across New South Wales and the Australian Capital Territory in their first two general practice training terms (GPT1 and GPT2). Factor analysis and expert consensus were used to refine items and establish the GPR-CAG’s internal structure. GPT1 and GPT2 competencies were analysed separately.

Results: Data from 555 registrars undertaking GPT1 and 537 registrars undertaking GPT2 were included in the analyses. A four-factor, 16-item solution was identified for GPT1 competencies (Cronbach’s alpha range: 0.71–0.83) and a seven-factor, 27-item solution for GPT2 competencies (Cronbach’s alpha range: 0.63–0.84). The emergent factor structures were clinically interpretable and consistent with existing medical education competency frameworks.

Discussion: This study provides initial evidence for the content validity and internal consistency of the GPR-CAG. The GPR-CAG appears to have utility as a formative WBA instrument in GP training.