Data_Sheet_1_A Robust Screen-Free Brain-Computer Interface for Robotic Object Selection.pdf (1.44 MB)

Dataset posted on 2020-03-31, 14:42, authored by Henrich Kolkhorst, Joseline Veit, Wolfram Burgard, Michael Tangermann

Brain signals represent a communication modality that can allow users of assistive robots to specify high-level goals, such as the object to fetch and deliver. In this paper, we consider a screen-free Brain-Computer Interface (BCI), where the robot highlights candidate objects in the environment using a laser pointer, and the user's goal is decoded from the evoked responses in the electroencephalogram (EEG). Having the robot present stimuli in the environment allows for more direct commands than traditional BCIs that require the use of graphical user interfaces. Yet bypassing a screen entails less control over stimulus appearance. In realistic environments, this leads to heterogeneous brain responses for dissimilar objects—posing a challenge for reliable EEG classification. We model object instances as subclasses to train specialized classifiers in the Riemannian tangent space, each of which is regularized by incorporating data from other objects. In multiple experiments with a total of 19 healthy participants, we show that our approach not only increases classification performance but is also robust to both heterogeneous and homogeneous objects. While especially useful in the case of a screen-free BCI, our approach can naturally be applied to other experimental paradigms with potential subclass structure.
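The core idea of classifying EEG epochs in the Riemannian tangent space can be illustrated with a minimal sketch. This is not the authors' implementation: it uses synthetic epoch covariances, an arithmetic mean as the tangent-space reference point (a simplification of the Riemannian mean), and a nearest-class-mean classifier in place of the regularized subclass-specific classifiers described in the abstract. All function names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def spd_log(M):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def spd_inv_sqrt(M):
    """Inverse matrix square root of an SPD matrix."""
    w, V = np.linalg.eigh(M)
    return (V * (w ** -0.5)) @ V.T

def tangent_vectors(covs, C_ref):
    """Project covariance matrices onto the tangent space at C_ref and
    vectorize the upper triangle (off-diagonal entries scaled by sqrt(2))."""
    P = spd_inv_sqrt(C_ref)
    iu = np.triu_indices(C_ref.shape[0])
    scale = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return np.array([(spd_log(P @ C @ P))[iu] * scale for C in covs])

# Toy data: per-epoch covariances for two stimulus classes, produced by
# different (hypothetical) spatial mixing of the underlying sources.
n_ch, n_t = 4, 256
def epoch_covs(mix, n_ep):
    covs = []
    for _ in range(n_ep):
        X = mix @ rng.standard_normal((n_ch, n_t))
        covs.append(X @ X.T / n_t)
    return covs

covs = epoch_covs(np.eye(n_ch), 20) + \
       epoch_covs(np.diag([2.0, 1.0, 1.0, 0.5]), 20)
labels = np.array([0] * 20 + [1] * 20)

C_ref = np.mean(covs, axis=0)        # arithmetic mean as reference point
feats = tangent_vectors(covs, C_ref)

# Nearest-class-mean classifier operating on tangent-space features.
means = np.array([feats[labels == k].mean(axis=0) for k in (0, 1)])
pred = np.argmin(
    np.linalg.norm(feats[:, None, :] - means[None, :, :], axis=2), axis=1)
print("training accuracy:", (pred == labels).mean())
```

In a subclass-aware variant, one such tangent-space classifier would be trained per object instance, with its covariance estimate shrunk toward data pooled from the other objects.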
