SNL2015_Barres&Arbib[FINAL].pdf (4.15 MB)

Visual attention, meaning, and grammar: neuro-computational modeling of situated language use.

poster
posted on 2015-10-23 by Victor Barres

How does our neural system orchestrate the interactions between visuo-attentional and language processing?

The current line of work expands the schema-level Template Construction Grammar (TCG) model of language production developed by Arbib and Lee (2012) to offer:
(1) a novel implementation of the production model that is fully dynamic and distributed in both its operations and its architecture (the focus of this poster);
(2) a unified model of the processes supporting the interaction of vision and language in both the production and comprehension of visual scene descriptions (see future directions).

Here we give a first overview of the novel production model and present an example of how it smoothly integrates visuo-attentional saccadic dynamics with online grammatical processing during a visual scene description task, simulating the impact of eye movements on grammatical structure (Gleitman et al., 2007).
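The Gleitman et al. (2007) effect referenced here can be illustrated with a toy sketch: which scene participant is fixated first biases whether the speaker produces an active or a passive construction. This is not the TCG implementation itself; all names and the selection rule below are hypothetical simplifications for illustration only.

```python
# Toy illustration (not the actual TCG model): fixation order biases
# the choice between an active and a passive construction, in the
# spirit of Gleitman et al. (2007). All identifiers are hypothetical.

def describe_scene(fixations, agent, patient,
                   verb_active="chases", verb_passive="is chased by"):
    """Return a description whose subject is the first-fixated participant.

    fixations: ordered list of fixated scene entities (eye-movement trace).
    agent / patient: the two event participants in the scene.
    """
    # Find the first fixation that lands on an event participant.
    first = next(f for f in fixations if f in (agent, patient))
    if first == agent:
        # Agent fixated first -> agent mapped to subject -> active voice.
        return f"{agent} {verb_active} {patient}"
    # Patient fixated first -> patient mapped to subject -> passive voice.
    return f"{patient} {verb_passive} {agent}"


# Same scene, different saccadic sequences, different grammatical structure:
print(describe_scene(["dog", "man"], agent="dog", patient="man"))
# -> dog chases man
print(describe_scene(["sky", "man", "dog"], agent="dog", patient="man"))
# -> man is chased by dog
```

In the full model this choice would emerge from the competitive dynamics among construction schemas rather than from an explicit rule, but the sketch captures the input-output relation the poster simulates.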

 
