Kerstin Fischer


Particles (interjections, hesitation markers, discourse and modal particles, as well as conjunctions)

In recent years, my interests have concerned the development of a unified account of the functional polysemy of discourse particles. Discourse particles, such as well, yes, oh, uhm, and uhuh, are among the most frequent words in spontaneous spoken-language dialogues. They fulfil a great many pragmatic functions with respect to a large number of linguistic and interactional domains. An account of their different interpretations that attempts to describe these functions in relation to a particular discourse particle lexeme faces a dilemma which Hentschel and Weydt (1989) call the particle paradox: previous approaches either identified an invariant component for each discourse particle but then could not relate this lexical meaning to the large number of possible functions; or they listed the different possible readings without being able to explain how a particular discourse particle gets its different interpretations, how these readings are related, and why a given discourse particle fulfils exactly these pragmatic functions and not others.
As a solution to this problem, a computational lexicon is proposed which incorporates three types of information: the invariant contribution of each discourse particle lexeme; a number of linguistic constructions which describe the general relations between certain surface properties, such as position with respect to turn and utterance or the intonation contour, and functional interpretation; and a conceptual background frame which constitutes a model of the tasks speakers attend to regarding their hearers. Methodologically, conversation analysis and various methods from lexical semantics, such as field analysis and semantic decomposition, as well as contrastive studies, are combined with statistical analyses of large corpora, simulation experiments involving supervised learning in artificial neural networks, and the representation of the results in a computational lexicon.
The resulting cognitive semantic model accounts for the whole range of functions that English and German discourse particles can fulfil, how these functions are related, why discourse particles fulfil just these functions and not others, and what factors condition their interpretation.

Currently I am editing a book on Approaches to Discourse Particles.

Emotionality in human-computer communication

Speech processing systems often do not work the way they should, and irritation caused by the system may lead to speaker reactions that are themselves difficult to process. Recent studies (Levow 1998; Oviatt et al. 1998) indicate that, because of increasing recognition error rates, research is needed that deals with the linguistic features of the utterances of dissatisfied or even angry users.
To approach this problem, Wizard-of-Oz dialogues are recorded in order to determine how speakers react when the system repeatedly misinterprets their utterances. As a methodology for controlling inter- and intrapersonal variation, a fixed dialogue schema has been created which guides the utterances made by the system. The dialogues are therefore comparable and furthermore allow us to determine those differences which arise from different attitudes towards the system, such as increasing dissatisfaction. Recursively recurring dialogue phases are defined so that the same sequences of utterances can be analysed in different phases of the dialogue. This methodology allows the comparison of the lexical, conversational, and prosodic properties of pairs (or triplets, etc.) of utterances from different dialogue phases. It also allows us to analyse the changing attitude of the speakers toward the computer, their conceptualisation of their communication partner and its influence on how they formulate their utterances, and interactive negotiation processes between humans and computers.
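The pairing of utterances across recurring dialogue phases could be sketched roughly as follows; this is only a hypothetical illustration, with invented phase labels and example utterances rather than actual corpus data:

```python
# Hypothetical sketch: grouping utterances from recursively recurring
# dialogue phases so that reactions to the "same" system utterance can be
# compared across successive occurrences of a phase. Phase labels and
# example texts are invented for illustration.
from collections import defaultdict

# Each transcribed speaker utterance is tagged with the schema phase it
# belongs to and the occurrence index of that phase within the dialogue.
utterances = [
    {"phase": "confirmation", "occurrence": 1, "text": "yes, that's right"},
    {"phase": "confirmation", "occurrence": 2, "text": "yes. that is RIGHT."},
    {"phase": "correction",   "occurrence": 1, "text": "no, I said Hamburg"},
    {"phase": "correction",   "occurrence": 2, "text": "NO. HAM-BURG."},
]

def group_by_phase(utts):
    """Collect, for each schema phase, the utterances from its successive
    occurrences, ordered so they can be compared pairwise (first vs. later
    reactions in the same phase)."""
    groups = defaultdict(list)
    for u in sorted(utts, key=lambda u: u["occurrence"]):
        groups[u["phase"]].append(u["text"])
    return dict(groups)

pairs = group_by_phase(utterances)
print(pairs["correction"])  # the speaker's first vs. repeated correction
```

Once utterances are grouped this way, each pair can be annotated for lexical, conversational, and prosodic properties and the two members compared.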

Human-Robot Communication

Analysing human-robot communication, even more than human-computer interaction, can reveal what we normally take for granted in conversations among communication partners similar to ourselves: what we perceive, what we understand as `the situation', what linguistic code we can use, what counts as a valid argument, etc. Since all of these aspects may demand explicit and implicit negotiation with a robot, an artificial communication partner that interacts with its environment, the human speakers' insecurity and problems in the interaction with such a communication partner reveal indirectly what they can usually rely on. Investigating human-robot communication can thus provide information about what we usually regard as `the context' of an interaction.

This term I am teaching a corpus linguistics seminar.