Approximating Bayesian inference with a sparse distributed memory system

Poster, posted on 2013-08-09 by Jessica Hamrick, Joshua Abbott, and Thomas Griffiths

Probabilistic models of cognition have enjoyed recent success in explaining how people make inductive inferences. Yet the difficult computations over structured representations that these models often require seem incompatible with the continuous and distributed nature of human minds. To address this tension, and to understand the implications of such constraints on probabilistic models, we formalize the mechanisms by which cognitive and neural processes could approximate Bayesian inference. Specifically, we show that an associative memory system using sparse, distributed representations can be reinterpreted as an importance sampler, a Monte Carlo method for approximating Bayesian inference. This capacity is illustrated through two case studies: a simple letter reconstruction task, and the classic problem of property induction. Broadly, our work demonstrates that probabilistic models can be implemented in a practical, distributed manner, and helps bridge the gap between algorithmic- and computational-level models of cognition.
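As a rough illustration of the importance-sampling idea referenced in the abstract (a minimal sketch under assumed Gaussian prior and likelihood, not the poster's sparse distributed memory model or its letter-reconstruction and property-induction tasks), a self-normalized importance sampler estimates a posterior expectation by drawing hypotheses from a proposal distribution and weighting each sample by prior times likelihood over proposal density:

```python
import numpy as np

# Minimal self-normalized importance-sampling sketch (illustrative only).
# We approximate the posterior mean E[h | d] by sampling hypotheses h from
# a proposal q and weighting each sample by prior * likelihood / proposal.

rng = np.random.default_rng(0)

def prior_logpdf(h):
    # Hypothetical standard-normal prior over a scalar hypothesis h.
    return -0.5 * h**2 - 0.5 * np.log(2 * np.pi)

def likelihood_logpdf(d, h):
    # Hypothetical Gaussian likelihood: datum d observed around h with unit noise.
    return -0.5 * (d - h)**2 - 0.5 * np.log(2 * np.pi)

def importance_sample_posterior_mean(d, n=10_000):
    # Proposal q: a broad Gaussian (an assumption, loosely standing in for
    # the distribution of stored memory traces in the poster's interpretation).
    proposal_scale = 3.0
    h = rng.normal(0.0, proposal_scale, size=n)
    proposal_logpdf = (-0.5 * (h / proposal_scale) ** 2
                       - np.log(proposal_scale) - 0.5 * np.log(2 * np.pi))

    # Unnormalized log importance weights: log prior + log likelihood - log proposal.
    log_w = prior_logpdf(h) + likelihood_logpdf(d, h) - proposal_logpdf
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Self-normalized estimate of the posterior mean of h given d.
    return np.sum(w * h)

# With a N(0, 1) prior and unit-variance likelihood, the analytic posterior
# mean for d = 1.5 is d / 2 = 0.75; the estimate should land close to that.
print(importance_sample_posterior_mean(d=1.5))
```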
