Auditory–visual perception of Down syndrome speech (Hennequin et al., 2018)

Dataset posted on 2018-04-10, 21:54, authored by Alexandre Hennequin, Amélie Rochet-Capellan, Silvain Gerber, Marion Dohen
Purpose: This work evaluates whether seeing the speaker’s face could improve the speech intelligibility of adults with Down syndrome (DS). This is not straightforward because DS induces a number of anatomical and motor anomalies affecting the orofacial zone.
Method: A speech-in-noise perception test was used to evaluate the intelligibility of 16 consonants (Cs) produced in a vowel–consonant–vowel context (V = /a/) by 4 speakers with DS and 4 control speakers. Forty-eight naïve participants were asked to identify the stimuli in 3 modalities: auditory (A), visual (V), and auditory–visual (AV). The probability of correct response was analyzed, along with AV gain, confusions, and transmitted information, as a function of modality and phonetic features.
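The three measures named above can be computed from a consonant confusion matrix (rows = presented stimuli, columns = responses). The sketch below is illustrative only: it uses a toy matrix rather than the dataset, computes transmitted information as stimulus–response mutual information in the Miller–Nicely tradition, and uses one common definition of AV gain, (AV − A) / (1 − A); the paper's exact formulas are detailed in Supplemental Material S2.

```python
# Hedged sketch of the metrics named in the Method section.
# The confusion-matrix values are invented for illustration, not from the dataset.
import math

def prob_correct(counts):
    """Probability of correct identification: diagonal mass / total responses."""
    total = sum(sum(row) for row in counts)
    return sum(counts[i][i] for i in range(len(counts))) / total

def transmitted_information(counts):
    """Mutual information I(stimulus; response) in bits."""
    total = sum(sum(row) for row in counts)
    row_p = [sum(row) / total for row in counts]                 # P(stimulus)
    col_p = [sum(counts[i][j] for i in range(len(counts))) / total
             for j in range(len(counts[0]))]                      # P(response)
    info = 0.0
    for i, row in enumerate(counts):
        for j, c in enumerate(row):
            if c > 0:
                p = c / total                                     # P(stim, resp)
                info += p * math.log2(p / (row_p[i] * col_p[j]))
    return info

def relative_transmitted_information(counts):
    """Transmitted information normalized by stimulus entropy (1.0 = perfect)."""
    total = sum(sum(row) for row in counts)
    row_p = [sum(row) / total for row in counts]
    h_stim = -sum(p * math.log2(p) for p in row_p if p > 0)
    return transmitted_information(counts) / h_stim

def av_gain(p_av, p_a):
    """One common visual-gain measure, (AV - A) / (1 - A); other variants exist."""
    return (p_av - p_a) / (1.0 - p_a)

# Toy 3-consonant confusion matrix (illustrative only).
toy = [[8, 1, 1],
       [2, 7, 1],
       [1, 1, 8]]
```

With a perfectly diagonal confusion matrix, `prob_correct` returns 1.0 and `relative_transmitted_information` returns 1.0, which is a useful sanity check when adapting this to real response data.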
Results: The probability of correct response follows the trend AV > A > V, with smaller values for the DS speakers than for the control speakers in A and AV but not in V. This trend depended on the consonant: visual information particularly improved the transmission of place of articulation and, to a lesser extent, of manner, whereas voicing remained specifically altered in DS speech.
Conclusions: The results suggest that the visual information in the speech of people with DS is intact and improves the perception of some phonetic features of consonants in a similar way as for control speakers. This result has implications for further studies, rehabilitation protocols, and specific training of caregivers.

Supplemental Material S1. General information about intelligibility and orofacial specificities of the four speakers with Down syndrome (DS).

Supplemental Material S2. Details of transmitted information computation.

Supplemental Material S3. Effect of presentation order on Prob_correct_VCV.

Supplemental Material S4. Analysis of the probability of correct consonant identification as a function of experimental condition.

Hennequin, A., Rochet-Capellan, A., Gerber, S., & Dohen, M. (2018). Does the visual channel improve the perception of consonants produced by speakers of French with Down syndrome? Journal of Speech, Language, and Hearing Research, 61, 957–972. https://doi.org/10.1044/2017_JSLHR-H-17-0112

Funding

This research received funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013, Grant Agreement no. 339152, "Speech Unit(e)s," awarded to PI Jean-Luc Schwartz) and from the FIRAH foundation (International Foundation of Applied Disability Research), awarded to PIs Marion Dohen and Amélie Rochet-Capellan.
