<p dir="ltr">The VegQual dataset contains <b>4,736</b> high-quality, annotated images of <b>14</b> commonly used vegetables, captured under real-world conditions. The images include variations in angle, background, distance, and lighting, providing a diverse and challenging resource for training and evaluating deep learning–based object detection models. Each image has been carefully annotated with bounding boxes in TXT (YOLO) format, which records a class ID and normalized bounding box coordinates for every object. These annotations enable precise object localization and are fully compatible with major deep learning frameworks. All annotations were created using the Roboflow platform, ensuring consistent, accurate, and high-quality labeling.</p>
<p dir="ltr">To support effective model training and evaluation, the dataset is split into three subsets: training, validation, and testing. The training subset constitutes <b>70%</b> of the dataset, with validation and testing at <b>20%</b> and <b>10%</b>, respectively. All images are resized to a standard <b>640x640</b> pixels and auto-oriented to correct rotation inconsistencies introduced by EXIF metadata. The dataset is organized into three top-level folders (train, valid, and test), each containing images/ and labels/ subfolders. Every image has a corresponding label file listing the class ID and bounding box coordinates of each annotated object.</p>
<p dir="ltr">In total, the dataset contains <b>11,407</b> labeled instances across the 14 categories. The dataset provides a valuable benchmark for research in computer vision, deep learning, agricultural automation, and food quality assessment. It supports advancements in real-time classification and defect detection of vegetables, contributing to innovation in sustainable food production and intelligent agricultural systems.</p>
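<p dir="ltr">As a minimal sketch of how the YOLO TXT annotations can be consumed, the snippet below parses one annotation line (class ID followed by normalized center-x, center-y, width, and height) and converts it to pixel coordinates for the dataset's 640x640 images. The sample line and values are illustrative, not taken from the dataset itself.</p>

```python
# Minimal sketch: parse a YOLO-format TXT annotation line and convert its
# normalized coordinates to pixel coordinates. The 640x640 image size matches
# the dataset description; the sample annotation line is hypothetical.

def parse_yolo_line(line, img_w=640, img_h=640):
    """Convert 'class_id x_center y_center width height' (all box values
    normalized to [0, 1]) into (class_id, x_min, y_min, x_max, y_max) in pixels."""
    class_id, xc, yc, w, h = line.split()
    xc = float(xc) * img_w   # box center, x (pixels)
    yc = float(yc) * img_h   # box center, y (pixels)
    w = float(w) * img_w     # box width (pixels)
    h = float(h) * img_h     # box height (pixels)
    x_min = xc - w / 2
    y_min = yc - h / 2
    return int(class_id), x_min, y_min, x_min + w, y_min + h

# Illustrative line: class 3, box centered in the image,
# covering half the width and half the height.
sample = "3 0.5 0.5 0.5 0.5"
print(parse_yolo_line(sample))  # (3, 160.0, 160.0, 480.0, 480.0)
```

<p dir="ltr">Each label file in the labels/ subfolders contains one such line per annotated object, so a full file can be read line by line with this function.</p>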