Posted on 2025-10-23, 06:24. Authored by xin chen, Jianwen Deng.
# OHID-FF

The OHID-FF dataset contains high-resolution remote sensing images (5056 × 5056 px) collected from OHS. This repository contains the original imagery and a prepared YOLO-style sliced dataset used for object detection and binary fire/non-fire classification experiments.

## Repository layout

- `tif/fire/`
  Original high-resolution TIFF images (22 files, 5056 × 5056 px each).

- `YOLODataset/`
  - `images/` — sliced 512 × 512 images used for training.
  - `labels/` — YOLO-format label files (normalized `xywh`) matching each sliced image.
  - `viz/` — visualizations of labels overlaid on slices.
  - `classes.txt` — category names (one per line).
  - `dataset.yaml` — dataset configuration providing paths to images and labels. Update the paths to match your environment.

- `train val scripts/`
  Classification experiments and training scripts for fire/non-fire models (see the folder README for usage details).

- `Split_dataset.ipynb`
  Notebook to create train/val/test splits and produce `file_list.csv`.

## Label file naming convention

Example filename: `HEM1_20200623235326_0005_L1B_CMOS2_0_8_512_512_1.txt`

- `HEM1_20200623235326_0005_L1B_CMOS2` — original source image identifier.
- `0_8_512_512` — position and size of the slice inside the original image (`x_y_w_h`).
- The final digit indicates whether the slice contains the target category (1 = contains, 0 = does not).
- Label coordinates use the normalized YOLO `xywh` format (`center_x center_y width height`).

## Quick start

1. Install dependencies (for the classification scripts):

   ```bash
   pip install -r "train val scripts/requirements.txt"
   ```

2. Prepare the YOLODataset structure (if you need to rebuild it):

   ```bash
   python "train val scripts/prepare_data.py"
   ```

3. Check and update dataset paths in:
   - `YOLODataset/dataset.yaml`
   - `train val scripts/config.py` (if used by your training scripts)

4. Run training / experiments:
   - For object detection with your chosen YOLO implementation, point the trainer at `YOLODataset/dataset.yaml` and `YOLODataset/classes.txt`.
   - For binary classification experiments with the included scripts:

     ```bash
     python "train val scripts/main.py"
     ```

Results and logs from training runs are saved under `results/` (see the scripts folder README for details).

## Dataset splitting

- Use `Split_dataset.ipynb` to generate `file_list.csv` and produce train/val/test splits.
- The notebook uses stratified sampling to preserve class balance; adjust the notebook's parameters if you need a different split ratio.

## Dataset summary (as provided)

- Original images: 22 TIFFs at 5056 × 5056 px
- Sliced images: 512 × 512 px in `YOLODataset/images/`
- Labels: YOLO-format labels in `YOLODataset/labels/`
- Classes file: `YOLODataset/classes.txt`

(From the classification experiments folder: dataset size = 1,197 images at 512 × 512; class distribution: 647 fire / 550 non-fire.)

## Contributing

Contributions, issues, and feature requests are welcome. If you add scripts or tools that change dataset paths or formats, please update `YOLODataset/dataset.yaml` and this README accordingly.

## License

Add a LICENSE file to the repository to specify licensing terms. If no license file exists in the repo, default repository copyright applies.

## Contact

Maintainer: hrnavy

## Changes in this update

- Clarified repository layout and dataset paths.
- Added quick-start instructions for both YOLO-style detection use and the binary classification scripts.
- Emphasized the need to update `dataset.yaml` to match local paths.
- Pointed to the `train val scripts/` README for model-specific commands and dependencies.
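## Example: dataset.yaml shape

If you need to recreate `YOLODataset/dataset.yaml`, it typically follows the Ultralytics-style key layout sketched below. The split subfolder names and the class name shown are assumptions for illustration; mirror the actual entries in `YOLODataset/classes.txt` and your on-disk layout.

```yaml
# Sketch of YOLODataset/dataset.yaml (Ultralytics-style keys).
# Paths and split subfolders are placeholders — update for your environment.
path: /absolute/path/to/YOLODataset
train: images/train
val: images/val
names:
  0: fire   # class ids/names must mirror classes.txt
```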
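## Example: parsing slice filenames and labels

The filename convention and normalized label format described above can be handled with a few lines of Python. This is a minimal sketch, not part of the repository's scripts; the helper names `parse_slice_name` and `yolo_to_pixels` are illustrative.

```python
def parse_slice_name(stem: str):
    """Split a slice filename stem into source id, slice geometry, and target flag.

    Convention (from this README): <source_id>_<x>_<y>_<w>_<h>_<flag>
    """
    parts = stem.split("_")
    x, y, w, h, flag = (int(p) for p in parts[-5:])
    source_id = "_".join(parts[:-5])
    return source_id, (x, y, w, h), flag == 1


def yolo_to_pixels(cx, cy, w, h, img_w=512, img_h=512):
    """Convert one normalized YOLO xywh box to pixel corner coordinates (x0, y0, x1, y1)."""
    bw, bh = w * img_w, h * img_h
    x0 = cx * img_w - bw / 2
    y0 = cy * img_h - bh / 2
    return x0, y0, x0 + bw, y0 + bh


# Example filename from the naming-convention section (extension stripped).
src, (x, y, w, h), has_fire = parse_slice_name(
    "HEM1_20200623235326_0005_L1B_CMOS2_0_8_512_512_1")
```

For the example filename this yields the source identifier `HEM1_20200623235326_0005_L1B_CMOS2`, the slice geometry `(0, 8, 512, 512)`, and `has_fire == True`.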
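## Example: stratified splitting

The splitting logic lives in `Split_dataset.ipynb`; as a rough stdlib-only sketch of stratified sampling on this dataset, you can group filename stems by the trailing contains-target digit and split each group proportionally. The function name `stratified_split` and the split ratios are illustrative, not taken from the notebook.

```python
import random
from collections import defaultdict


def stratified_split(stems, val_frac=0.1, test_frac=0.1, seed=0):
    """Split filename stems into train/val/test while preserving the
    fire/non-fire ratio, stratifying on the trailing contains-target digit."""
    by_class = defaultdict(list)
    for stem in stems:
        # Last underscore-separated field: '1' = contains target, '0' = does not.
        by_class[stem.rsplit("_", 1)[1]].append(stem)

    rng = random.Random(seed)
    splits = {"train": [], "val": [], "test": []}
    for items in by_class.values():
        rng.shuffle(items)
        n_val = round(len(items) * val_frac)
        n_test = round(len(items) * test_frac)
        splits["val"] += items[:n_val]
        splits["test"] += items[n_val:n_val + n_test]
        splits["train"] += items[n_val + n_test:]
    return splits
```

With the reported class distribution (647 fire / 550 non-fire) and a 10%/10% hold-out, each split keeps roughly the same fire/non-fire ratio as the full dataset.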