Xavier Hinaut

Language Acquisition using Echo State Networks

Oberseminar NatS, 17.11.2015

Primates can learn complex sequences that can be represented in the form of categories, and even more complex hierarchical structures such as language. The prefrontal cortex is involved in representing the context of such sequential structures, and the striatum permits the association of this context with appropriate behaviours. In this work, the Reservoir Computing paradigm is used: a recurrent neural network with fixed random connections models the prefrontal cortex, and the read-out layer (i.e. output layer) models the striatum. The model was applied to syntactic language processing, in particular thematic role assignment: given a sentence, this corresponds to answering the question "Who did what to whom?". The model processes categories (i.e. abstractions) of sentences called "grammatical constructions". The model is able to (1) correctly process the majority of grammatical constructions that were not learned, demonstrating generalization capabilities, and (2) make online predictions while processing a grammatical construction. Moreover, we observed that when the model processes less frequent constructions, a marked shift in output predictions occurs. It is proposed that such a significant revision of predictions within a short period of time is responsible for generating Event-Related Potentials (ERPs) like the P600, which typically occurs when unusual sentences are processed. The use of the model for complex abstract sequence processing shows that the brain's faculty for representing and learning sequences may be based on highly recurrent connections. This work suggests that artificial recurrent neural networks can provide insight into the underlying mechanisms of human cortico-striatal function in sentence processing. Finally, to show the model's ability to deal with a real-world application, it was successfully applied in the framework of human-robot interaction for both sentence comprehension and production. The sentence production model was obtained by "reversing" the sentence comprehension model: it processes meanings as input and produces sequences of words as output. More recently, an incremental version of the model with Hebbian-like learning has also been developed, demonstrating the biological plausibility of the underlying mechanisms. This incremental version could be used in a developmental learning scheme for robots.
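To make the Reservoir Computing setup concrete, below is a minimal Echo State Network sketch: a fixed random recurrent "reservoir" (the prefrontal cortex analogue) whose states are read out by a trained linear layer (the striatum analogue). The dimensions, spectral radius, ridge regularisation, and toy data are illustrative assumptions, not the values used in the talk's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 10, 300, 4   # hypothetical input, reservoir, output sizes

# Fixed random weights: neither matrix is trained.
W_in = rng.uniform(-1, 1, (n_res, n_in))     # input -> reservoir
W = rng.uniform(-0.5, 0.5, (n_res, n_res))   # recurrent reservoir weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # scale spectral radius below 1

def run_reservoir(inputs):
    """Collect reservoir states while driving the network with a sequence."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)        # leaky integration omitted for brevity
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Train only the read-out layer, via ridge regression on collected states."""
    return np.linalg.solve(
        states.T @ states + ridge * np.eye(n_res), states.T @ targets
    ).T

# Toy usage: a random input sequence with one-hot "role" targets per time step.
seq = rng.uniform(-1, 1, (20, n_in))
targets = np.eye(n_out)[rng.integers(0, n_out, 20)]
states = run_reservoir(seq)
W_out = train_readout(states, targets)
predictions = states @ W_out.T               # read-out activations at each step
```

Because the read-out is linear and applied at every time step, its activations can be inspected word by word, which is how online predictions (and their abrupt shifts on infrequent constructions) can be observed in such a model.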
 