
Attention and Cognitive Workload

Dataset posted on 2025-04-09, 12:14, authored by Rui Varandas, Inês Silveira, Hugo Gamboa

1. Attention and Cognitive Workload

1.1. Experimental design

Two standard cognitive tasks, N-Back and mental subtraction, were administered using PsychoPy. The N-Back task is a working-memory task in which participants are presented with a sequence of stimuli and must indicate when the current stimulus matches the one shown 'n' steps earlier in the sequence, with 'n' varying across levels. The N-Back task was divided into 4 levels, each consisting of 60 trials. The mental subtraction task involved 20 periods of 10 seconds each, during which participants continuously subtracted a given number from the result of the previous subtraction while a visual cue was displayed. To avoid interference from reading the instructions, 60-second rest periods were included before, between, and after the two main tasks, along with a 20-second rest period between the explanation of each task and its execution. In addition, a 10-second rest period separated the different difficulty levels of the N-Back task and the individual subtraction periods.
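As an illustration of the N-Back logic described above (not the actual PsychoPy script used in the experiment), the following sketch generates a letter sequence for a given n and marks the trials on which the current stimulus matches the one shown n steps earlier. The letter set, target rate, and function name are assumptions made for the example.

import random

def nback_sequence(n, n_trials=60, letters='BCDFGH', target_rate=0.3):
    # Illustrative sketch for n >= 1; the letter set and target rate are arbitrary choices.
    seq = []
    for i in range(n_trials):
        if i >= n and random.random() < target_rate:
            seq.append(seq[i - n])            # force a match -> expected answer 'y'
        else:
            seq.append(random.choice(letters))
    # Expected key per trial: 'y' if the letter matches the one n steps back, else 'n'.
    answers = ['y' if i >= n and seq[i] == seq[i - n] else 'n'
               for i in range(n_trials)]
    return seq, answers

stimuli, expected = nback_sequence(n=2)       # e.g. a 2-back block of 60 trials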

In the final stage, participants carried out a practical learning task in which they completed a Python tutorial covering both theoretical concepts and practical examples. During this phase of the data collection, physiological signals were recorded and human-computer interaction (HCI) was also tracked.

1.2. Data recording

Data were collected from 8 volunteers (4 female), aged between 20 and 27 years (mean age = 22.9, standard deviation = 2.1). All participants were right-handed, reported no psychological or neurological conditions, and were not taking any medication other than contraceptive pills.

Some subjects deviate from the standard folder structure. The data for subject 2 do not include the 2nd part of the acquisition (Python task) because the equipment stopped acquiring. For subject 3, the 1st part (N-Back and mental subtraction) and the 2nd part (Python tutorial) are stored together in the first-part folder; the file D1_S3_PB_description.json indicates the start and end of each task. Subject 4 only has the mental subtraction task in the 1st part of the acquisition, and for subject 8 the subtraction-task data are included in the 2nd part of the acquisition, together with the Python task.
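Because the PB description files mark the start and end of each task, they can be used to locate the relevant portions of the signals. A minimal sketch for inspecting one of these files is shown below; the exact JSON structure is an assumption here, since only its purpose is described above.

import json

# Minimal sketch: inspect a pushbutton description file before segmenting signals.
# Assumption: the file maps each task to its start and end timestamps.
with open('D1_S3_PB_description.json', 'r') as f:
    task_bounds = json.load(f)

for task, bounds in task_bounds.items():
    print(task, bounds)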

1.3. Data labelling

Data labelling can be performed in two ways. To categorize data into cognitive workload levels and baseline, either the PB description JSON files or the task_results.csv files can be used. Separately, data were labelled into cognitive states every 10 seconds by biomedical engineering researchers, who used image captures of the participants at various instants of the experiment, response times, and the respiration signal to label each subject's state as bored, frustrated, interested, or at rest. These cognitive-state labels are stored in the cognitive_states_labels.txt files located in each subject's folder.
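Because the cognitive-state labels were assigned on a fixed 10-second grid, one way to align them with a recording is to treat each label as covering the next 10-second window. The sketch below illustrates this; the one-label-per-line file format, the example file name, and timestamps expressed in seconds are assumptions, and only the 10-second interval comes from the description above.

# Assumption: the labels file holds one cognitive-state label per line,
# each covering the next 10-second window of the recording.
with open('D1_S1_cognitive_states_labels.txt') as f:
    labels = [line.strip() for line in f if line.strip()]

# Given the timestamp of the first sample (here assumed to be in seconds),
# label i covers the interval [t0 + 10*i, t0 + 10*(i + 1)).
t0 = 0.0
windows = [(t0 + 10 * i, t0 + 10 * (i + 1), label) for i, label in enumerate(labels)]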

1.4. Data description

Biosignals include EEG, fNIRS (not converted to oxy- and deoxyhaemoglobin), ECG, EDA, respiration (RIP), accelerometer (ACC), and push-button (PB) data. All signals have already been converted to physical units. In each biosignal file, the first column corresponds to the timestamps. For the first dataset, the Biosignals folder is split into two parts: part 1 corresponds to the N-Back and mental subtraction tasks, and part 2 to the Python tutorial. When there are two PB files instead of one, each part of the Biosignals folder contains its own PB file.
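Since every biosignal file stores timestamps in its first column and values already converted to physical units, loading a file and separating time from channels is straightforward. The sketch below assumes comma-separated text files with a header row and uses pandas; the delimiter, file name, and column headers are assumptions, as only the timestamp column is documented above.

import pandas as pd

# Assumption: biosignal files are delimited text with a header row;
# only the fact that column 0 holds timestamps comes from the description.
eeg = pd.read_csv('path/to/eeg_file.csv')
timestamps = eeg.iloc[:, 0]       # first column: timestamps
channels = eeg.iloc[:, 1:]        # remaining columns: signal channels in physical units

print(timestamps.head(), channels.shape)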


HCI features encompass keyboard, mouse, and screenshot data. A Python snippet for extracting the screenshot images from the screenshots CSV file is shown below.

import base64
from os import makedirs
from os.path import join

file = '...'  # path to the screenshots CSV file

# Read all rows; the first line is assumed to be a header.
with open(file, 'r') as f:
    lines = f.readlines()

# Create the output directory once, before the loop.
makedirs('screenshot', exist_ok=True)

for line in lines[1:]:
    # Each row holds a timestamp and a base64-encoded JPEG image.
    timestamp = line.split(',')[0]
    code = line.split(',')[-1][:-2]  # drop the trailing characters after the encoded image
    imgdata = base64.b64decode(code)
    filename = str(timestamp) + '.jpeg'

    # Write the decoded image, named after its timestamp.
    with open(join('screenshot', filename), 'wb') as f:
        f.write(imgdata)


A characterization file containing age and gender information for all subjects in each dataset is provided within the respective dataset folder (e.g., D1_subject-info.csv). Other complementary files include (i) a description of the push-buttons to help segment the signals (e.g., D1_S2_PB_description.json) and (ii) the labelling files (e.g., D1_S2_cognitive_states_labels.txt). The D1_Sx_task_results.csv files show the results of the N-Back task: a result of -1 means no answer, 0 a wrong answer, and 1 a right answer. For difficulty, 0 corresponds to baseline or rest periods, 1 to the 0-back task, 2 to 1-back, 3 to 2-back, and 4 to 3-back. For the mental subtraction task, only rest (0) and task (1) are distinguished. The response time is the time the subject took to respond, and the key answer is the key the subject pressed: 'y' for yes (e.g., in the 0-back task, the letter shown on the screen was identical to the previous one), 'n' for no, and 'None' if there was no response. This file also provides the information needed to segment the signals into the different tasks and baseline periods.
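For segmentation, the numeric codes above can be mapped to readable labels before cutting the signals. The sketch below reads a task-results file with pandas and applies the code tables from this section; the column names 'difficulty' and 'result' are assumptions, since only the codes themselves are documented here.

import pandas as pd

# Code tables taken from the description above.
DIFFICULTY = {0: 'rest', 1: '0-back', 2: '1-back', 3: '2-back', 4: '3-back'}
RESULT = {-1: 'no answer', 0: 'wrong', 1: 'right'}

results = pd.read_csv('D1_S2_task_results.csv')

# Assumption: columns named 'difficulty' and 'result'; adjust to the actual header.
results['difficulty_label'] = results['difficulty'].map(DIFFICULTY)
results['result_label'] = results['result'].map(RESULT)

print(results.head())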

Funding

PD/BDE/150304/2019
