Processing natural language dialogues involves much more than simply processing a sequence of sentences. Similarly, processing spoken dialogues involves much more than handling just another type of input.
Our working hypotheses (relying on at least minimal cognitive claims) are that the integration of speech and language must be incremental, synchronous, and more or less deterministic. In other words, processing must

- incrementally follow the ongoing signal interpretation of low-level recognition modules (in contrast to sentence-oriented processing), even though the relevant units to be analysed on each level are of different sizes,
- synchronously proceed with the ongoing reception of the speech signal, such that processing time and elapsed real time always stand in a defined linear relation to each other (ideally 1:1),
- deterministically end up with one unambiguous interpretation of the speech input, without resort to backtracking or the need to reanalyse previous intervals.

These principles first of all require a much more elaborate architecture for the interconnection of all modules on all levels, not merely another (spoken-language) lexicon and a few additional specific modules. A sequential architecture in which modules follow each other in a fixed order (say, according to a phonetic/linguistic layer model) and act as filters on the set of hypotheses from the lower level will not meet the principles stated above.
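The incremental principle above can be contrasted with sentence-oriented processing in a minimal sketch. The names used here (`incremental_pipeline`, the chunk strings) are illustrative assumptions, not part of the ASL system itself: the point is only that an interpretation is committed for each incoming signal interval, monotonically extended, and never revised by backtracking.

```python
from typing import Iterable, Iterator


def incremental_pipeline(signal_chunks: Iterable[str]) -> Iterator[str]:
    """Consume recognizer output chunk by chunk and commit one
    interpretation per chunk, never reanalysing earlier intervals.

    A sentence-oriented processor would instead buffer all chunks
    and produce a single interpretation at the sentence boundary.
    """
    committed: list[str] = []
    for chunk in signal_chunks:
        # Higher-level modules refine the hypothesis as soon as it
        # arrives, rather than waiting for the end of the utterance.
        committed.append(chunk.lower())
        yield " ".join(committed)


# Usage: partial interpretations grow monotonically with the signal.
partials = list(incremental_pipeline(["GOOD", "MORNING"]))
# partials == ["good", "good morning"]
```

Determinism here is the property that each yielded interpretation is a prefix-extension of the previous one; a real system would of course carry hypothesis lattices rather than strings.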
This paper describes the principles and decisions concerning the system architecture within the project ASL (Architectures for Integrative Speech and Language Processing). We do not report on those project activities that contrast parallel and connectionist architectures with explicit software architectures. Two examples will sketch the requirements of such a speech/language system: