
Speaker Gaze and Trust

Dataset posted on 2016-10-04, authored by Helene Kreysa, Luise Kessler, and Stefan R. Schweinberger.
The data and supplementary materials presented here form the basis of our paper "Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements" (H. Kreysa, L. Kessler, & S. R. Schweinberger, 2016, PLoS ONE, 11(9), e0162291, doi:10.1371/journal.pone.0162291).

As described in the paper (downloadable here), 35 student participants indicated by button press whether or not they believed a truth-ambiguous statement uttered by a speaker in one of 36 short video clips. Importantly, the speaker sometimes looked directly into the camera; at other times, she averted her gaze.

1. Data
We present four datasets as tab-delimited text:

1.1 Gaze_RESPrts.txt
(responses and RTs for the main experiment), with the following variables (a Python loading sketch follows the list):
- SubjectCode (N = 35)
- Video (.avi)
- Item (N = 36)
- RT from response screen in ms
- Response (yes/no)
- Orientation (direct gaze / averted right / averted left)
- debrief (mention of gaze direction in the debriefing questionnaire)
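
A minimal loading sketch in Python with pandas, assuming the file has a header row, that the columns are named as in the list above (in particular an "RT" column), and that Response is coded with the literal strings "yes"/"no":

    import pandas as pd

    # Load the main-experiment responses (tab-delimited; column names
    # assumed to match the variable list above)
    resp = pd.read_csv("Gaze_RESPrts.txt", sep="\t")

    # Proportion of "yes" responses per gaze orientation
    yes_rate = (
        resp.assign(yes=resp["Response"].eq("yes"))
            .groupby("Orientation")["yes"]
            .mean()
    )
    print(yes_rate)

    # Mean response time (ms) per orientation
    print(resp.groupby("Orientation")["RT"].mean())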

1.2 Audioonly_RESPrts.txt
(responses and RTs for the control experiment), with the following variables (a summary sketch follows the list):
- SubjectCode (N = 37)
- Audio (.wav)
- Item (N = 36)
- RT from response screen in ms
- Response (yes/no)
- Orientation_original (direct gaze / averted)
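
Along the same lines, a sketch (same assumptions about header row and column names) tabulating per-item "yes" rates in the control experiment, split by the gaze orientation of the original video:

    import pandas as pd

    # Control experiment: audio-only versions of the same 36 items
    audio = pd.read_csv("Audioonly_RESPrts.txt", sep="\t")

    # Per-item proportion of "yes" responses, split by whether the
    # original video had direct or averted gaze
    item_yes = (
        audio.assign(yes=audio["Response"].eq("yes"))
             .groupby(["Item", "Orientation_original"])["yes"]
             .mean()
             .unstack("Orientation_original")
    )
    print(item_yes.head())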

1.3 Gaze_ratings.txt
(post-experimental ratings of the speaker's attributes, main experiment), with the following variables (a summary sketch follows the list):
- SubjectCode (N = 35)
- Attribute (6 levels)
- Rating-recoded (1 = lowest to 6 = highest)
- Response_time
- Rating_trial (6 per participant)
- Num.yes (number of yes-responses in main experiment per participant, out of 36)
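
A sketch of how these ratings might be summarized (same assumptions; "Rating-recoded" and "Num.yes" need bracket notation because of the hyphen and dot):

    import pandas as pd

    ratings = pd.read_csv("Gaze_ratings.txt", sep="\t")

    # Mean rating per speaker attribute, across participants
    print(ratings.groupby("Attribute")["Rating-recoded"].mean())

    # Collapse to one row per participant and relate the mean rating
    # to the number of "yes" responses in the main experiment
    per_subj = ratings.groupby("SubjectCode").agg(
        mean_rating=("Rating-recoded", "mean"),
        num_yes=("Num.yes", "first"),
    )
    print(per_subj["mean_rating"].corr(per_subj["num_yes"]))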

1.4 Gaze_fixations.txt
(total fixation time on each AoI during video presentation), with the following variables (an aggregation sketch follows the list):
- SubjectCode (N = 35)
- Video (.avi)
- AoI (Area of Interest assigned in SMI BeGaze, see attached example image "AOIs.bmp": eyes, bottomright, bottomleft, topright, topleft, whitespace)
- Fixtime (total fixation time per region)
Eye movements were recorded using an SMI iViewX Hi-Speed 500 tracker, and fixation events were extracted for each participant using SMI BeGaze (v. 3.4.52).
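
A corresponding aggregation sketch for the fixation data (same assumptions about header row and column names):

    import pandas as pd

    fix = pd.read_csv("Gaze_fixations.txt", sep="\t")

    # Total fixation time per Area of Interest, summed across
    # participants and videos
    print(fix.groupby("AoI")["Fixtime"].sum().sort_values(ascending=False))

    # Per-participant mean fixation time on the speaker's eye region
    eyes = fix[fix["AoI"] == "eyes"]
    print(eyes.groupby("SubjectCode")["Fixtime"].mean())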

2. Stimulus examples
Two videos, one with direct gaze (1_Hund_direct.avi) and one with averted gaze (mirrored to create right-averted gaze: 1_Hund_right.avi).

3. Further Material

- AOIs.bmp: image showing assignment of areas of interest on the videos
- HK_Poster_gaze&trust_PPRU2015.pdf: poster presented at Workshop XI of the Person Perception Research Unit, "Human Communication: From Person Perception to Social Action", Jena, Germany, April 9-10, 2015

Please email me if you require any further information (helene.kreysa@uni-jena.de).

Funding

Deutsche Forschungsgemeinschaft (http://www.dfg.de/en/), grant FOR1097
