
Supplementary Material for "Evaluating Different Strategies for Visuo-Haptic Object Recognition"

This directory contains the supplementary material (the dataset) for the journal paper "Evaluating Different Strategies for Visuo-Haptic Object Recognition" (to be published).

Dataset

This dataset contains the visual and haptic information extracted with a NAO robot from 11 everyday objects. It was collected to compare different strategies for integrating the information from the two modalities: two that are commonly used in existing work on visuo-haptic object recognition and a proposed, brain-inspired one.

The data was collected under two conditions. First, 10 observations were collected for every object in the object set under ideal lab conditions. The observations for each object were then split in a 70:30 ratio between the training and test sets. Another 3 observations per object were collected under uncontrolled real-world conditions; these additional observations were assigned to the training and test sets in the same way. The dataset therefore contains a total of 143 observations, 99 of them in the training set and the remaining 44 in the test set.
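
As a quick sanity check, the counts above can be reproduced with the following minimal Python sketch. It only restates the arithmetic from the paragraph, under the assumption that the 70:30 split is applied per object:

    # Reproduce the observation counts stated above.
    # Assumption: the 70:30 split is applied per object
    # (7/3 for the lab observations, 2/1 for the real-world ones).
    n_objects = 11
    train_per_object = 7 + 2   # lab + real-world observations per object
    test_per_object = 3 + 1

    n_train = n_objects * train_per_object    # 99
    n_test = n_objects * test_per_object      # 44
    print(n_train, n_test, n_train + n_test)  # 99 44 143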

The visual features that are extracted from the objects are shape, color and texture, while the haptic features are shape, weight, texture and hardness. The information for each object property (and, in the case of haptic texture and hardness, for every sensor placement) is contained in a separate text (.txt) file in the training and test subfolders. The labels for the observations are also provided in a separate .txt file.

The total dimensionality of the data is 44 949, with the dimensionality of each feature as follows (a loading sketch is given after the list):
- visual_shape.txt : 7
- color.txt : 768
- visual_texture.txt : 26
- haptic_shape.txt : 12
- weight.txt : 36
- haptic_texture.txt : 22 050
- hardness.txt : 22 050
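
As an illustration, the feature files for one split could be loaded and concatenated into a single feature matrix, e.g. with NumPy. This is a minimal sketch only: the split folder name ("training"), the labels file name ("labels.txt") and the use of a single whitespace-delimited file per feature are assumptions, so adjust them to the actual archive layout (in particular, haptic texture and hardness may be spread over one file per sensor placement).

    # Minimal loading sketch (assumptions: whitespace-delimited .txt files with
    # one observation per row, a labels file named "labels.txt", and a split
    # folder named "training"; adjust to the actual archive contents).
    import os
    import numpy as np

    FEATURE_FILES = [
        "visual_shape.txt",    # 7 dims
        "color.txt",           # 768 dims
        "visual_texture.txt",  # 26 dims
        "haptic_shape.txt",    # 12 dims
        "weight.txt",          # 36 dims
        "haptic_texture.txt",  # 22 050 dims
        "hardness.txt",        # 22 050 dims
    ]

    def load_split(split_dir):
        """Load all feature files of one split and stack them column-wise."""
        blocks = [np.loadtxt(os.path.join(split_dir, name))
                  for name in FEATURE_FILES]
        X = np.hstack(blocks)
        y = np.loadtxt(os.path.join(split_dir, "labels.txt"))  # assumed name
        return X, y

    X_train, y_train = load_split("dataset/training")  # assumed folder name
    print(X_train.shape)  # expected (99, 44949) if all features are concatenated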

Contact

For more information, please refer to the paper. For specific questions regarding the paper, please contact Sibel Toprak.