Social Dual Task Developmental Dataset and Analyses

2017-09-05T18:08:43Z (GMT) by Kathryn Mills Sarah-Jayne Blakemore

Project abstract:

Multitasking is part of the everyday lives of both adolescents and adults. We often multitask during social interactions by simultaneously keeping track of other, non-social information. Here, we examined how keeping track of non-social information impacts the ability to navigate social interactions in adolescents and adults. Participants aged 11-17 and 22-30 years old were instructed to carry out two tasks, one social and one non-social, within each trial. The social task involved referential communication, requiring participants to use social cues to guide their decisions, which sometimes required taking a different perspective. The non-social task manipulated cognitive load by requiring participants to remember non-social information in the form of one two-digit number (low load) or three two-digit numbers (high load) presented before each social task stimulus. Participants showed performance deficits when under high cognitive load and when the social task involved taking a different perspective, and individual differences in both trait perspective taking and working memory capacity predicted performance. Overall, adolescents were less adept at multitasking than adults when under high cognitive load. These results suggest that multitasking during social interactions incurs performance deficits, and that adolescents, compared to adults, are more sensitive to the effects of cognitive load while multitasking.



These files include our dataset, as well as the scripts used to analyse the data. You will need to download R (https://www.r-project.org/) to use these files. Data are from 37 adolescents (11-17 years) and 30 adults (22-30 years). Exact ages are not provided to preserve anonymity. Participants completed an adapted version of the “Director Task” (Dumontheil, Hillebrandt, Apperly, & Blakemore, 2012) with an embedded working memory (WM) task component. Afterwards, participants completed a verbal reverse digit-span task as a measure of WM capacity and the Interpersonal Reactivity Index questionnaire to assess individual differences in trait perspective taking (Davis, 1980). We have also included information about the errors made by individual participants, as well as the Director Task stimuli.


Data Analysis:

We used mixed effects modelling to determine which factors best predicted multitasking performance. Accuracy was determined on a trial-by-trial basis, where a trial was considered accurate only if participants correctly performed both the Director Task and the embedded WM Task. As our main interest was performance during social interactions, and not recall of non-social information, we analysed Director Task RT (correct trials only). We used the lme4 package in R (Bates, Maechler, & Bolker, 2013) to perform a linear mixed effects analysis of the relationship between our factors of interest and multitasking performance (accuracy and RT). Our factors of interest included three task factors (cognitive load: low vs. high; condition: DA vs. DP; perspective: same vs. different) and two individual traits: WM capacity and trait perspective taking (TPT). As we hypothesised that an interaction between WM capacity and TPT would relate to our task, we included a combined measure of these two individual traits, computed by calculating and summing the ratios of TPT and WM capacity (Combined Traits). We determined which factors best predicted performance for our measures of interest by testing global models that included our factors of interest as fixed effects. Each model included a random intercept for each participant. Because of computational limitations, we performed a two-step procedure involving five global models. First, all possible combinations of the variables within each of the five global models were tested using an automated model selection procedure (MuMIn 1.9.0; Barton, 2013). Models were ranked using the second-order Akaike Information Criterion (AICc; Burnham & Anderson, 2002). Second, the best-fitting models for each of the five global models were compared and ranked using AIC and likelihood ratio tests.
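The two-step procedure above can be sketched in R as follows. This is an illustrative outline only, not the deposited analysis script: the data file name and the column names (rt, load, condition, perspective, wm, tpt, subject) are assumptions, and the actual global models in the dataset's scripts may differ in their fixed-effect structure.

```r
# Sketch of the model-selection workflow, assuming hypothetical column names.
library(lme4)
library(MuMIn)

dat <- read.csv("dual_task_data.csv")  # assumed filename

# Step 0: one global model with the fixed effects of interest and a
# per-participant random intercept. dredge() requires na.action = na.fail.
global <- lmer(rt ~ load * condition * perspective + wm + tpt + (1 | subject),
               data = dat, REML = FALSE, na.action = na.fail)

# Step 1: automated all-subsets model selection, ranked by AICc.
dd   <- dredge(global)
best <- get.models(dd, subset = 1)[[1]]  # best-fitting model for this global model

# Step 2: compare candidate best models (one per global model)
# using AIC and a likelihood ratio test.
anova(best, global)
```

In the actual analysis this step-1/step-2 cycle would be run for each of the five global models, and the five resulting best-fitting models compared against one another.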


The object_positions.xls file shows the object positions (slot number) for each of the 48 Director Task stimuli. For example, in a trial using picture number 1, the participant would need to select the object in slot number 3 during 3-object trials in which the Director shared the same perspective as the participant. However, the participant would need to select the object in slot number 11 during 3-object trials in which the Director had a different perspective from the participant (i.e., the Director is behind the shelf array). Finally, on 1-object trials, the participant would need to select the object in slot number 6. You can use this file to perform error analyses with the provided dataset to see what kinds of errors participants made in the Director Task.
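An error analysis of the kind described above might look like the sketch below. The sheet layout and column names (picture, same_3obj, diff_3obj, one_obj) are assumptions for illustration; check the actual headers in object_positions.xls before adapting this.

```r
# Illustrative error classification against object_positions.xls.
# Column names are hypothetical, not the file's actual headers.
library(readxl)

positions <- read_excel("object_positions.xls")

# Classify one response: which slot was chosen, for which picture,
# on which trial type ("same_3", "diff_3", or "one_obj").
classify_response <- function(chosen_slot, picture, trial_type) {
  row <- positions[positions$picture == picture, ]
  key <- switch(trial_type,
                same_3  = row$same_3obj,
                diff_3  = row$diff_3obj,
                one_obj = row$one_obj)
  if (chosen_slot == key) return("correct")
  # On different-perspective trials, choosing the slot that would be correct
  # from the participant's own view is a perspective-taking ("egocentric") error.
  if (trial_type == "diff_3" && chosen_slot == row$same_3obj) {
    return("egocentric_error")
  }
  "other_error"
}
```

Distinguishing egocentric errors from other errors in this way is what the slot-number key makes possible: an incorrect response on a different-perspective trial is only interpretable once you know which slot corresponds to the participant's own perspective.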