A multimodal dataset of spontaneous speech and movement production on object affordances

2016-01-19T11:30:32Z (GMT) by Argiro Vatakis Katerina Pastra
<p>This is the description of the multimodal dataset of spontaneous speech and movement production on object affordances.</p>
<p>All data are provided in .rar archives, one per experiment conducted (refer to the paper).</p>
<p>This resource contains an Excel file (Experimental_Information.xls) with information on: a) the participant’s assigned number and experiment (e.g., PN#_E#, where PN corresponds to the participant number and E to the experiment), which serves as a guide to the corresponding video, audio, and transcription files; b) basic demographic information (e.g., gender, age); and c) the available data files for each participant, their size (in MB) and duration (in seconds), and any problems with these files. These problems are mostly due to dropped frames in one of the cameras and, in rare cases, to missing files. The Excel file comprises three sheets, one per experiment conducted (refer to the Methods section of the paper).</p>
<p>The audiovisual recordings (.mp4), audio files (.aac), and transcription files (.trs) are organized by experiment and participant. Each participant’s set of files contains the frontal (F) and profile (P) video recordings (e.g., PN1_E1_F refers to participant 1, experiment 1, frontal view), along with the transcription and audio files. The videos are also labelled by condition: ‘NH’ when the object is presented in isolation, ‘H’ when the object is held by an agent, and ‘T’ when the actual, physical object is presented (e.g., PN1_E1_F_H.mp4 refers to participant 1, experiment 1, frontal view, object held by an agent). These files are compressed in .rar format.</p>
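As a convenience, the naming convention above can be parsed programmatically. The following is a minimal sketch (not part of the dataset itself; the helper name and field names are our own) that extracts the participant number, experiment, camera view, and condition from a file name such as PN1_E1_F_H.mp4:

```python
import re

# Hypothetical helper: parses file names following the dataset's naming
# convention, e.g. "PN1_E1_F_H.mp4" -> participant 1, experiment 1,
# frontal view, object held by an agent.
FILENAME_RE = re.compile(
    r"PN(?P<participant>\d+)_E(?P<experiment>\d+)"
    r"_(?P<view>[FP])"              # F = frontal, P = profile
    r"(?:_(?P<condition>NH|H|T))?"  # condition tag (may be absent, e.g. on .trs files)
    r"\.(?P<ext>mp4|aac|trs)$"
)

# Condition labels as described in the resource text.
CONDITIONS = {
    "NH": "object in isolation",
    "H": "object held by an agent",
    "T": "actual, physical object presented",
}

def parse_filename(name: str) -> dict:
    """Return the fields encoded in a dataset file name, or raise ValueError."""
    m = FILENAME_RE.match(name)
    if m is None:
        raise ValueError(f"Unrecognized file name: {name!r}")
    info = m.groupdict()
    info["participant"] = int(info["participant"])
    info["experiment"] = int(info["experiment"])
    info["condition_label"] = CONDITIONS.get(info["condition"] or "")
    return info
```

For example, parse_filename("PN1_E1_F_H.mp4") yields participant 1, experiment 1, view "F", and the condition label "object held by an agent".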