<p>This dataset contains 304 manual evaluations of class-level software maintainability. It can be used to develop and evaluate automated quality prediction tools.</p><p>This archive was created as part of the work described in detail in:</p><p>M. Schnappinger, A. Fietzke, and A. Pretschner, "<em>Defining a Software Maintainability Dataset: Collecting, Aggregating and Analysing Expert Evaluations of Software Maintainability</em>", International Conference on Software Maintenance and Evolution (ICSME), 2020</p><p>If you use this dataset in your research, please cite both this dataset and the corresponding paper.</p><p>This archive contains:</p><ul><li><p>A readme with all relevant information about</p><ul><li><p>Study objects</p></li><li><p>Label definition</p></li><li><p>Threats to validity</p></li><li><p>Hints for using the dataset</p></li><li><p>List of metrics used to prioritize the samples</p></li></ul></li><li><p>The code of the study objects</p></li><li><p>A .csv file containing the readability, understandability, complexity, adequate size, and overall maintainability labels</p></li><li><p>The original publication</p></li></ul><p>Figshare enforces a strict character limit for this description. Please refer to the `Readme.md` for further information.</p>
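The label file can be loaded with a few lines of standard-library Python. This is a minimal sketch: the column names mirror the label dimensions listed above, but the actual header spelling, the class identifier column, and the file name in the archive are assumptions; check the `Readme.md` for the exact schema.

```python
import csv
import io

# Hypothetical excerpt of the labels .csv; the real file in the archive
# may use different header names and a different class identifier format.
sample = """class,readability,understandability,complexity,adequate_size,overall_maintainability
org/example/Foo.java,2,3,1,2,2
org/example/Bar.java,4,4,3,3,4
"""

# In practice you would open the archive's .csv file instead of this string.
rows = list(csv.DictReader(io.StringIO(sample)))

# Map each class to its overall maintainability label for downstream use,
# e.g. as the target variable of a prediction model.
labels = {row["class"]: int(row["overall_maintainability"]) for row in rows}
print(labels)
```

The same pattern extends to the other label columns (readability, understandability, complexity, adequate size) by selecting a different key from each row.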