
Prediction in a visual language: real-time sentence processing in American Sign Language across development

Journal contribution posted on 2017-12-09 by Amy M. Lieberman, Arielle Borovsky, and Rachel I. Mayberry

Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eye-tracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4–8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction while optimising visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process; theoretical implications are discussed.

Funding

This work was supported by the National Institute on Deafness and Other Communication Disorders [grant numbers R01DC015272 (AL), R03DC013638 (AB), and R01DC012797 (RM)].
