Pre-VFall: Vision Sensor Simulated Early Signs of Fall Dataset
The Pre-VFall dataset is a multimodal dataset comprising images, keygradient vector magnitude features, and keygradient vector direction features, made available to researchers for advancing the robustness of fall detection systems. It is intended for use by the machine learning community to identify pattern cues that signal the onset of falls, and it provides new insights into how frailty states in older adults may serve as precursors to fall incidents. These cues can help fall detection systems account for the irregularities in movement and behavior that indicate early signs of a fall.

The dataset consists of around 22K images selected from recorded videos of nine healthy young adult participants. Each participant's videos and corresponding images are organized in folders named after each video session: confusion_delirium, confusion_nph, dizzy_fall_forward, dizzy_fall_side, weakness_fall_forward, and weakness_fall_side. Weakness and dizziness sessions ended in falls, which is why “fall” appears in their labels; the terms “forward” and “side” indicate the direction of the fall. Each folder contains videos recorded with RGB cameras positioned at 90° and 45°, with forward-view and side-view recordings included so that the camera angle can be identified. Video frames were manually selected and sorted into their respective categories: pre-fall activities such as weakness, dizziness, delirium-confusion, and NPH-confusion were categorized as “Abnormal,” while states of falling and actual falls were categorized as “Fall.” The dataset therefore encompasses three activity classes: normal, abnormal, and fall.
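The session and class organization described above maps naturally onto a simple indexing routine. The sketch below is a minimal example, assuming a hypothetical <root>/<participant>/<session>/<class>/ directory layout with JPEG frames; the folder depth, file extension, and root path are illustrative assumptions, and the actual layout should be taken from the dataset's documentation.

```python
from pathlib import Path

# Session folder names as described for Pre-VFall.
SESSIONS = [
    "confusion_delirium",
    "confusion_nph",
    "dizzy_fall_forward",
    "dizzy_fall_side",
    "weakness_fall_forward",
    "weakness_fall_side",
]

# The three activity classes; assumed here to be subfolders of each session.
CLASSES = ("normal", "abnormal", "fall")


def index_images(root: str) -> list[tuple[str, str, str]]:
    """Walk an assumed <root>/<participant>/<session>/<class>/ layout and
    return (image_path, session, class) triples for every frame found."""
    records = []
    for participant in sorted(Path(root).iterdir()):
        if not participant.is_dir():
            continue
        for session in SESSIONS:
            session_dir = participant / session
            if not session_dir.is_dir():
                continue
            for cls in CLASSES:
                class_dir = session_dir / cls
                if not class_dir.is_dir():
                    continue
                for img in sorted(class_dir.glob("*.jpg")):  # extension is an assumption
                    records.append((str(img), session, cls))
    return records


if __name__ == "__main__":
    dataset = index_images("Pre-VFall")  # root path is a placeholder
    print(f"Indexed {len(dataset)} images")
```

Filtering the returned records by session name or class label then yields whatever splits a downstream training or evaluation pipeline requires.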