File(s) stored somewhere else

Please note: Linked content is NOT stored at Carnegie Mellon University, and we cannot guarantee its availability, quality, or security, or accept any liability.

Sequence Encoders Enable Large-Scale Lexical Modeling: Reply to Bowers and Davis (2009).

journal contribution
posted on 2009-01-01, 00:00 authored by Daragh E. Sibley, Christopher T. Kello, David Plaut, Jeffrey L. Elman

Sibley, Kello, Plaut, and Elman (2008) proposed the sequence encoder as a model that learns fixed-width distributed representations of variable-length sequences. In doing so, the sequence encoder overcomes problems that have restricted models of word reading and recognition to processing only monosyllabic words. Bowers and Davis (in press) recently claimed that the sequence encoder does not actually overcome the relevant problems, and hence is not a useful component of large-scale word reading models. In this reply, it is noted that the sequence encoder has facilitated the creation of large-scale word reading models. The reasons for this success are explained, and stand as counterarguments to claims made by Bowers and Davis.
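
For readers unfamiliar with the general idea, the sketch below illustrates how a variable-length symbol sequence can be mapped to a fixed-width distributed code and decoded back. This is only an illustrative sketch, assuming PyTorch and a generic GRU-based autoencoder; it is not the authors' implementation, and the actual sequence encoder's architecture, representations, and training procedure differ.

    # Illustrative sketch only: a generic sequence autoencoder mapping
    # variable-length letter sequences to a fixed-width vector and back.
    # Layer types, sizes, and the use of PyTorch GRUs are assumptions
    # made for illustration, not the authors' model.
    import torch
    import torch.nn as nn

    class SequenceEncoderSketch(nn.Module):
        def __init__(self, n_symbols=27, hidden=64, code=50):
            super().__init__()
            self.embed = nn.Embedding(n_symbols, hidden)
            self.encoder = nn.GRU(hidden, code, batch_first=True)   # sequence -> fixed-width code
            self.decoder = nn.GRU(code, hidden, batch_first=True)   # code -> per-step states
            self.readout = nn.Linear(hidden, n_symbols)              # states -> symbol logits

        def forward(self, seq):                      # seq: (batch, length) of symbol ids
            emb = self.embed(seq)
            _, code = self.encoder(emb)              # code: (1, batch, code), fixed width
            steps = code.transpose(0, 1).repeat(1, seq.size(1), 1)
            out, _ = self.decoder(steps)
            return self.readout(out), code           # reconstruction logits + the code

    # Usage: encode the letter string of a word of any length into one vector.
    model = SequenceEncoderSketch()
    word = torch.tensor([[3, 1, 20]])                # e.g. hypothetical ids for "c", "a", "t"
    logits, fixed_width_code = model(word)

The point of such an architecture, as the abstract notes, is that the resulting code has the same width regardless of word length, which is what allows lexical models to move beyond monosyllabic words.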

History

Date

2009-01-01
