Perceiving Multidimensional Disaster Damages from Street-View Images Using Visual-Language Models

Dataset posted on 2025-04-15 by Yifan Yang

Post-disaster street-view imagery has become a critical resource for ground-level damage assessment and disaster perception classification, contributing significantly to disaster reporting. However, existing approaches face notable limitations, such as the need for extensive manual annotation and the limited interpretability of pretrained image classification models. Recently, Large Language Models (LLMs) have attracted considerable interest for their rich natural-disaster domain knowledge, powerful text generation abilities, and advanced visual comprehension. This study explores and expands the potential of LLMs for perceiving multi-dimensional disaster impacts through street-view imagery. Specifically, we collected 2,556 post-hurricane street-view images from regions in Florida. For the experiments, these images were annotated directly by GPT-4o mini (via prompt engineering) to quantify disaster impacts, alongside parallel annotations by human experts.
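The abstract does not include the exact prompt or damage rubric used for annotation, so the following is only a minimal sketch of the kind of prompt-engineered, image-based annotation pipeline it describes, written against the OpenAI Python SDK with the gpt-4o-mini model. The damage dimensions, the 0-3 rating scale, and the example file name are illustrative assumptions, not the dataset's actual rubric.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def encode_image(path: str) -> str:
    """Read an image file and return its base64 encoding for the API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


# Hypothetical rubric: the paper's actual dimensions and scale may differ.
PROMPT = (
    "You are assessing post-hurricane damage in a street-view photo. "
    "Rate each dimension on a 0-3 scale (0 = none, 3 = severe) and reply "
    "as JSON with keys: building_damage, vegetation_damage, road_blockage, debris."
)


def annotate(image_path: str) -> str:
    """Send one street-view image plus the rubric prompt to gpt-4o-mini."""
    b64 = encode_image(image_path)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": PROMPT},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            }
        ],
        temperature=0,  # keep labels as deterministic as possible
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # "street_view_001.jpg" is a placeholder file name.
    print(annotate("street_view_001.jpg"))
```

In a full pipeline, the JSON replies would be parsed per image and compared against the parallel human-expert labels, e.g. via agreement statistics.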
