Supplemental Synthetic Images (outdated)

Version 2 2021-05-07, 02:30
Version 1 2021-01-08, 09:27

Dataset posted on 2021-05-07, 02:30, authored by Duke Bass Connections Deep Learning for Rare Energy Infrastructure 2020-2021
Overview
This is a set of synthetic overhead images of wind turbines created with CityEngine. Each image has a corresponding label file that gives, for every wind turbine, the class and the x, y, width, and height of its ground-truth bounding box in YOLOv3 format. Label files share their image's name (e.g., image.png has the label image.txt).
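For reference, a YOLOv3 label line is typically "class x_center y_center width height", with coordinates normalized to the image dimensions. A minimal sketch of reading one of these label files (the filenames here are hypothetical):

```python
# Minimal sketch: parse a YOLOv3-format label file for one image.
# Assumes each line is "class x_center y_center width height",
# with coordinates normalized to [0, 1].
from pathlib import Path

def read_yolo_labels(label_path):
    """Return a list of (class_id, x_center, y_center, width, height) tuples."""
    boxes = []
    for line in Path(label_path).read_text().splitlines():
        if not line.strip():
            continue
        cls, xc, yc, w, h = line.split()
        boxes.append((int(cls), float(xc), float(yc), float(w), float(h)))
    return boxes

# Example: "image.png" has its boxes in "image.txt" (hypothetical names).
for box in read_yolo_labels("image.txt"):
    print(box)
```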

Use
This dataset is meant to supplement the training data of an object detection model for overhead images of wind turbines. Adding it to a model's training set can potentially improve performance on real overhead imagery.
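One possible way to do this (a sketch only; the directory layout and filenames are assumptions, not part of the dataset) is to append the synthetic image paths to the list of real training images that a YOLOv3-style trainer reads:

```python
# Sketch: build a combined training list of real + synthetic images.
# Directory names are hypothetical; adjust to your own layout.
from pathlib import Path

real_images = sorted(Path("data/real/images").glob("*.png"))
synthetic_images = sorted(Path("data/synthetic/images").glob("*.png"))

# Many YOLOv3 trainers consume a plain-text file of image paths.
with open("train.txt", "w") as f:
    for path in real_images + synthetic_images:
        f.write(f"{path}\n")
```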

Why
This dataset was created to examine whether adding synthetic imagery to the training set of an object detection model improves performance on rare objects. Because wind turbines are both few in number and geographically sparse, acquiring real training data is costly. Synthetic imagery addresses this by automating the generation of new training data. It can also help with cross-domain testing, where a model lacks training data from a particular region and consequently struggles when applied there.

Method
The dataset was created by first selecting background images from NAIP imagery available on Earth OnDemand. These backgrounds were randomly sampled from seven geographies: forest, farmland, grassland, water, urban/suburban, mountains, and desert. No consideration was given to whether a background was a realistic setting for a wind turbine; the goal was to see whether varied contexts would help the model detect wind turbines regardless of their surroundings, which would help when applying the model to novel geographies. A script then selected backgrounds at random, generated 3D models of large wind turbines uniformly over each image, and positioned the virtual camera to save four 608x608-pixel images per scene. The process was repeated with the same random seed but with no background image and the wind turbines rendered black. Finally, these black-and-white images were converted into ground-truth labels by grouping the black pixels into bounding boxes (see the sketch below).
