Valid, reliable and standardised assessment formats and procedures, suited to application in the workplace, are important for meaningful and consistent assessment of the clinical performance of physiotherapy students. The choice of clinical assessment instruments for physiotherapy programs in Australia has typically been influenced by historical precedents and the personal experience of assessors rather than by the known strengths and weaknesses of an assessment instrument, a situation similar to that observed in medical programs (E. D. Newble, Jolly, & Wakeford, 1994). The Queensland Health Clinical Education Project (2005) acknowledged the variability of procedures and instruments for assessment of physiotherapy practices across different universities in Australia. At that time there were 16 entry-level physiotherapy programs in Australia, all accredited by the Australian Physiotherapy Council (APC). Each physiotherapy program was required to demonstrate that graduates met the performance standards outlined in the Australian Standards for Physiotherapy (2006). Despite each program having a curriculum designed to meet the same standards, when this thesis commenced each physiotherapy program used unique clinical assessment forms and assessment criteria. The Queensland Health Clinical Education Project emphasised that the diversity of assessment forms and supporting documentation placed a substantial and unnecessary burden on assessors who were required to use multiple assessment instruments. In addition, the measurement properties of these assessment instruments were unknown, undermining confidence in the reliability and validity of decisions based on these assessment approaches. As new physiotherapy programs commenced, this burden multiplied. This thesis describes the development of a standardised assessment instrument to meet the needs of physiotherapy students and educators and provide valid and reliable measurements of clinical performance.
The need for this research was identified by university-based physiotherapy programs across Australia and New Zealand, physiotherapy educators and supervisors, and the APC, which is responsible for accreditation of physiotherapy programs within Australian universities. Funding was provided by the Australian Learning and Teaching Council (formerly The Carrick Institute) to commence work on the development of an assessment instrument. The research in this thesis is reported in chronological order, with each phase informing subsequent steps. Streiner and Norman (2003) proposed that the first step in the development of a new instrument is to become fully informed about existing scales and their quality before embarking on the development of a new instrument. This work therefore began with a systematic review of methods used to assess professional competence in physiotherapy practice (Chapter One). The systematic review found a number of reports of research into assessment of competence in physiotherapy practice; these varied in design and methodological quality (see Chapter One). Eight instruments developed to assess the professional competence of physiotherapy students within the clinical environment were located. The review failed to identify convincing evidence sufficient to support the merits of one instrument above the others. In addition, the psychometric properties of these instruments had been investigated using the Classical Test Theory (CTT) approach rather than Item Response Theory (IRT) or the Rasch Measurement Model (RMM). The thesis argues for the need to investigate instrument properties using IRT or RMM; these approaches offer substantial clinical and scientific advantages over traditional psychometric methods in the development and evaluation of rating scales, and in the analysis of rating scale data (Andrich, 1988; J. Hobart & Cano, 2009; Wilson, 2005; Wright, 1996a; Wright & Mok, 2000).
Chapter Two describes and defends the plan for instrument development. The first phase (Chapter Three) involved development of the assessment instrument content, format and processes. The research was guided by the Standards for Educational and Psychological Testing (American Educational Research Association, 1999). The process of test design was based on the ‘four building blocks’ approach outlined by Wilson (2005), which comprised construct mapping, items design, outcome space and measurement model. Once development of the first version of the instrument, with the working title Clinical Assessment of Physiotherapy Skills (CAPS), was complete, cycles of action and reflection on outcomes (an action research approach) were utilised. The iterative research cycles included preliminary information gathering, instrument development, pilot trial and field test stages, and continuous refinement of the instrument based on evaluation throughout the different phases, following recommendations for best practice in research of this nature (Coghlan & Brannick, 2001). A pilot trial (Chapters Four and Five) was conducted, using the instrument to assess 295 third and fourth year physiotherapy students. Rasch analysis of outcome data showed an overall fit to the Rasch model. The difficulty of the items was well matched to the abilities of the persons being assessed, and the five-level rating scale performed as intended. The results of the pilot trial supported the continuation of the research into field tests one and two.
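The claim that the five-level rating scale "performed as intended" rests on the category probabilities the Rasch model assigns to each rating level. As an illustrative sketch only (the thesis analyses would have been run in dedicated Rasch software, not code like this), the category probabilities of the Partial Credit Model for a single item can be computed as follows; the ability and threshold values are invented for illustration:

```python
import math

def pcm_probs(theta, thresholds):
    """Category probabilities for one item under the Rasch Partial
    Credit Model.

    theta      -- person ability in logits
    thresholds -- item step difficulties delta_1..delta_K in logits
    Returns probabilities for scoring in categories 0..K.
    """
    # Cumulative sums of (theta - delta_j); category 0 contributes 0.
    cum = [0.0]
    for delta in thresholds:
        cum.append(cum[-1] + (theta - delta))
    numerators = [math.exp(c) for c in cum]
    total = sum(numerators)
    return [v / total for v in numerators]

# A person whose ability sits at the centre of a five-level item
# (four ordered step difficulties) is most likely to receive a
# middle-category rating -- the behaviour a well-functioning
# rating scale should show.
probs = pcm_probs(0.0, [-1.5, -0.5, 0.5, 1.5])
print([round(p, 3) for p in probs])
```

When the observed category frequencies depart sharply from these model-implied probabilities (for example, a middle category that is never the most probable response), the rating scale is said to be malfunctioning, which is what the pilot-trial analysis checked.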
The results of both field tests (Chapters Seven and Eight) supported the findings of the pilot trial: the APP data had adequate fit to the chosen measurement model (the Rasch Partial Credit Model); the Person Separation Index (0.96) demonstrated that the scale was internally consistent, discriminating between four groups of students with different levels of professional competence; the items targeted the intended construct (professional competence); and the instrument demonstrated unidimensionality. Additionally, differential item functioning (DIF) studies demonstrated that there was no item bias in either field test for the variables student age, gender and level of clinical experience; clinical educator age, gender and experience as an educator; facility type; and clinical area. Qualitative data (Chapters Six and Nine) provided evidence of the acceptability of the instrument for use within the workplace by educators and students. Further research investigating how educators interpreted and scored written communication, and the impact of the assessment process on student learning, was recommended. Ongoing evaluation and refinement of training methods and resources was also advocated. The results of field testing provided data supporting the validity of APP instrument scores and the acceptability of the instrument for use within the workplace. These data enabled the final phase of research, an investigation of inter-rater reliability, to proceed (Chapter Ten). Thirty pairs of clinical educators (60 independent educators) and 30 third and fourth year physiotherapy students from five universities participated in the reliability trial. Both correlational coefficients and metricated errors were estimated to provide a comprehensive analysis of the likely utility of APP scores and to enable interpretation of scores and change scores.
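The Person Separation Index reported above (0.96) is a reliability-like statistic from Rasch analysis: the proportion of variance in the estimated person abilities that is not attributable to measurement error. A minimal sketch of that computation, using invented ability estimates and standard errors purely for illustration:

```python
def person_separation_index(abilities, standard_errors):
    """Person Separation Index: share of observed person-estimate
    variance that is not measurement error (interpreted like a
    reliability coefficient such as Cronbach's alpha).

    abilities       -- person ability estimates (logits)
    standard_errors -- standard error of each estimate (logits)
    """
    n = len(abilities)
    mean = sum(abilities) / n
    observed_var = sum((a - mean) ** 2 for a in abilities) / (n - 1)
    error_var = sum(se ** 2 for se in standard_errors) / n
    return (observed_var - error_var) / observed_var

# Hypothetical values: widely spread abilities measured with small
# standard errors yield a PSI close to 1, meaning the instrument can
# reliably separate students into distinct levels of competence.
psi = person_separation_index([-2.0, -0.5, 1.0, 2.5],
                              [0.2, 0.2, 0.2, 0.2])
print(round(psi, 3))
```

A PSI near 0.96, as found in the field tests, indicates the APP can distinguish several statistically distinct strata of student performance rather than merely ranking students with large overlapping error bands.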
The Intraclass Correlation Coefficient, ICC(2,1) (two-way random effects model), for total APP scores across the two raters was 0.92 (95% CI 0.84 to 0.96), and the ICC(2,1) for global rating scale scores was 0.72 (95% CI 0.50 to 0.86). The 95% confidence band around a single score for these data was 6.5 APP points. With a scale width of 0 to 80, an error margin of 6.5 points (95% CI) was considered acceptable. This error enables a high level of accuracy in ranking student performance, as evidenced by a test/retest correlation of 0.92. For the APP, the magnitude of change in scores required to conclude that real change has occurred is in the order of 7.8 points, which compared favourably to other instruments used to assess the professional competence of physiotherapy students (Coote, et al., 2007; Meldrum, et al., 2008; Task Force for the Development of Student Clinical Performance Instruments, 2002). Overall, the physiotherapy clinical educators demonstrated a high level of reliability in the assessment and marking of physiotherapy students’ performance on clinical placements when using the APP. This was found despite the variability anticipated due to different areas of practice, types of facilities and a spectrum of educator experience. The final step in the research was to evaluate the evidence for validity of APP scores. Using the five sources of validity evidence presented in the American Educational Research Association (1999) standards, data from multiple sources were accumulated to establish the likely validity of interpretations made based on the instrument scores. The validity of scores for workplace-based professional competence awarded by educators to pre-entry level physiotherapy students using the APP was evaluated through Rasch analysis, parametric statistical evaluation, and qualitative data obtained from multiple sources. This approach enabled triangulation and reinforcement of decisions based on both quantitative and qualitative data.
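The two reliability quantities in this paragraph, ICC(2,1) and the minimal detectable change, follow standard formulas (Shrout and Fleiss's two-way random effects ICC for absolute agreement by a single rater, and MDC95 = 1.96 × √2 × SEM with SEM = SD × √(1 − ICC)). The sketch below is illustrative only: the paired scores and the score SD are invented, not the thesis data, so it reproduces the formulas rather than the reported 0.92 and 7.8-point figures.

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single
    rater (Shrout & Fleiss, 1979). `scores` is a list of per-subject
    lists, one score per rater (n subjects x k raters)."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_err = (ss_total - (n - 1) * ms_rows - (k - 1) * ms_cols) / (
        (n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence:
    MDC95 = 1.96 * sqrt(2) * SEM, where SEM = SD * sqrt(1 - ICC)."""
    sem = sd * (1 - icc) ** 0.5
    return 1.96 * (2 ** 0.5) * sem

# Hypothetical paired ratings: 5 students each scored by 2 educators.
scores = [[50, 52], [60, 61], [45, 44], [70, 68], [55, 56]]
icc = icc_2_1(scores)
change = mdc95(sd=8.0, icc=icc)  # sd=8.0 is an assumed score SD
print(round(icc, 2), round(change, 1))
```

With the thesis's own SD and reliability estimates, the same MDC95 formula yields the reported threshold of approximately 7.8 APP points for concluding that real change in student performance has occurred.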
The APP was found to have strong validity characteristics across all five sources of validity evidence, as described in Chapter Eleven. The APP was developed and applied within the constraints of a dynamic and unpredictable clinical environment, which is a key strength of the assessment instrument. The research has delivered an important benefit for physiotherapy education in that a single instrument with known validity and reliability is now available to replace the twenty-five distinct assessment forms formerly in use. To date, 17 of the 18 universities in Australia and New Zealand have adopted the APP as their sole assessment form, and a further three new programs commencing within the next two years are also adopting the instrument.