Developing specialized and reliable rubrics to evaluate data

Presentation posted on 2024-05-08, 14:08, authored by Zoe DeKruif, Emily Dux Speltz, Evgeny Chukharev

In social science research, there is a common need to develop specialized rubrics for evaluating the data in a given study. These methods of evaluation are often difficult to develop and adhere to, given obstacles such as achieving adequate inter-rater reliability, making subjective decisions as a group, and internalizing one's own understanding of the rubric criteria. The ProWrite project, an initiative that aims to provide writing feedback to college students, needed a specific rubric to qualitatively evaluate hundreds of essays collected from ISU introductory English composition classes. The lengthy process of creating this rubric began with four researchers and a basic list of essay criteria. To increase inter-rater reliability, the rating team iteratively evaluated small sets of test texts and used Krippendorff's alpha to calculate agreement for each rating criterion. This process allowed explicit rules to be articulated, and the rubric was refined until sufficient inter-rater reliability was achieved before the full data set was evaluated. This presentation thus details the importance and process of developing thorough rubrics to evaluate data in social science research.
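As a minimal sketch of the agreement check described above, the snippet below computes Krippendorff's alpha for nominal ratings on one criterion, with raters' scores grouped by essay and missing ratings allowed. The category labels ("strong"/"weak") and the example scores are hypothetical illustrations, not the project's actual rubric levels or data; the abstract does not specify which measurement level (nominal, ordinal, etc.) was used.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data.

    `ratings` is a list of units (e.g., essays); each unit is the list of
    category labels assigned by the raters who rated it. Missing ratings are
    given as None; units with fewer than two ratings are ignored.
    """
    # Coincidence matrix: every ordered pair of ratings from different raters
    # within a unit contributes 1 / (m_u - 1), where m_u is that unit's count.
    coincidences = Counter()
    for unit in ratings:
        values = [v for v in unit if v is not None]
        m = len(values)
        if m < 2:
            continue
        for c, k in permutations(values, 2):
            coincidences[(c, k)] += 1.0 / (m - 1)

    # Marginal totals n_c and grand total n of pairable ratings.
    n_c = Counter()
    for (c, _), w in coincidences.items():
        n_c[c] += w
    n = sum(n_c.values())
    if n <= 1:
        return float("nan")

    # Nominal disagreement counts only mismatching category pairs.
    d_observed = sum(w for (c, k), w in coincidences.items() if c != k)
    d_expected = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - d_observed / d_expected if d_expected else 1.0

# Hypothetical example: four raters scoring five essays on one criterion.
ratings = [
    ["strong", "strong", "strong", "weak"],
    ["weak", "weak", "weak", "weak"],
    ["strong", "strong", "strong", "strong"],
    ["weak", "strong", "weak", "weak"],
    ["strong", "strong", None, "strong"],  # one missing rating
]
print(f"alpha = {krippendorff_alpha_nominal(ratings):.3f}")
```

In practice, an alpha of this kind would be computed per rating criterion after each calibration round, and criteria falling below the team's reliability threshold would prompt further rule refinement before rating the full data set.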

Presented at the 2024 ISU Symposium on Undergraduate Research and Creative Expression on April 16, 2024.

To cite this presentation: DeKruif, Z., Dux Speltz, E., & Chukharev, E. (2024, April 16). Developing specialized and reliable rubrics to evaluate data. ISU Symposium on Undergraduate Research and Creative Expression. doi.org/10.6084/m9.figshare.25704741

Funding

National Science Foundation Award #2016868
