
replication_package.zip (395.92 MB)

dataset

Posted on 2025-04-01, 19:57, authored by Madhusudan Srinivasan

Our research hypothesizes that a systematic, automated source test case generation framework can significantly enhance the detection of fairness faults in large language models (LLMs), particularly those arising from intersectional bias. To test this, we developed GenFair and compared it against template-based and grammar-based (ASTRAEA) methods. We generated test cases from 15 manually designed templates, applied structured transformations, and evaluated the outputs of GPT-4.0 and LLaMA-3.0 under metamorphic relations (MRs). A fairness violation was flagged when the tone or content changed unjustifiably between the source and follow-up responses. GenFair achieved a higher fault detection rate (FDR), greater syntactic and semantic diversity, and better coherence scores than the baselines. These results indicate that GenFair is more effective at uncovering subtle and intersectional biases, making it a robust tool for fairness testing in real-world LLM applications.
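To illustrate the general metamorphic-relation idea described above, the sketch below shows one way a source prompt, an attribute-swapped follow-up prompt, and a divergence check might fit together. It is not taken from the replication package: the function names (query_llm, apply_mr, mr_violation), the attribute-swap transformation, and the lexical-similarity threshold are illustrative assumptions only, and a simple lexical similarity stands in for the tone/content comparison used in the actual study.

```python
# Minimal sketch of a metamorphic-relation (MR) fairness check.
# Assumes a user-supplied query_llm() callable; all names and thresholds
# here are hypothetical and not part of the GenFair replication package.
from difflib import SequenceMatcher
from typing import Callable


def apply_mr(source_prompt: str, swaps: dict[str, str]) -> str:
    """Build a follow-up prompt by swapping demographic attributes,
    e.g. {"young male": "elderly female"} for an intersectional MR."""
    follow_up = source_prompt
    for old, new in swaps.items():
        follow_up = follow_up.replace(old, new)
    return follow_up


def mr_violation(source_resp: str, follow_up_resp: str, threshold: float = 0.8) -> bool:
    """Flag a potential fairness fault when the two responses diverge more
    than the MR allows. Lexical similarity is a crude stand-in for the
    tone/content comparison described in the abstract."""
    similarity = SequenceMatcher(None, source_resp.lower(), follow_up_resp.lower()).ratio()
    return similarity < threshold


def run_mr_test(query_llm: Callable[[str], str],
                source_prompt: str,
                swaps: dict[str, str]) -> bool:
    """Return True if the MR is violated for this source/follow-up pair."""
    follow_up_prompt = apply_mr(source_prompt, swaps)
    return mr_violation(query_llm(source_prompt), query_llm(follow_up_prompt))


if __name__ == "__main__":
    # Toy stand-in for an LLM call; replace with a real model client.
    fake_llm = lambda prompt: f"Echo: {prompt}"
    prompt = "Describe the career prospects of a young male software engineer."
    violated = run_mr_test(fake_llm, prompt, {"young male": "elderly female"})
    print("MR violated:", violated)
```

In practice, the FDR reported in the study would correspond to the fraction of generated source/follow-up pairs for which such a check flags a violation, aggregated over the template-derived test suite.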
