Patrick Sturt.
The time-course of the application of binding constraints in
reference resolution.
Journal of Memory and Language, 48(3):542--562, 2003.
[ .pdf ]
Abstract: We report two experiments which examined
the role of binding theory in on-line sentence processing.
Participants' eye movements were recorded while they read
short texts which included anaphoric references with
reflexive anaphors (himself or herself). In each of the
experiments, two characters were introduced into the
discourse before the anaphor, and only one of these
characters was a grammatical antecedent for the anaphor in
terms of binding theory. Both experiments showed that
Principle A of the binding theory operates at the very
earliest stages of processing; early eye-movement measures
showed evidence of processing difficulty when the gender of
the reflexive anaphor mismatched the stereotypical gender of
the grammatical antecedent. However, the gender of the
ungrammatical antecedent had no effect on early processing,
although it affected processing during later stages in
Experiment 1. An additional experiment showed that the gender
of the ungrammatical antecedent also affected the likelihood
of participants settling on an ungrammatical final
interpretation. The results are interpreted in relation to
the notions of bonding and resolution in reference
processing.
Jesse Snedeker and John Trueswell.
Using prosody to avoid ambiguity: Effects of speaker awareness and
referential context.
Journal of Memory and Language, 48(1):103--130, 2003.
[ .pdf ]
Abstract: In three experiments, a referential
communication task was used to determine the conditions under
which speakers produce and listeners use prosodic cues to
distinguish alternative meanings of a syntactically ambiguous
phrase. Analyses of the actions and utterances from
Experiments 1 and 2 indicated that Speakers chose to produce
effective prosodic cues to disambiguation only when the
referential scene provided support for both interpretations
of the phrase. In Experiment 3, on-line measures of parsing
commitments were obtained by recording the Listener's eye
movements to objects as the Speaker gave the instructions.
Results supported the previous experiments but also showed
that the Speaker's prosody affected the Listener's
interpretation prior to the onset of the ambiguous phrase,
thus demonstrating that prosodic cues not only influence
initial parsing but can also be used to predict material
which has yet to be spoken. The findings suggest that
informative prosodic cues depend upon speakers' knowledge of
the situation: speakers provide prosodic cues when needed;
listeners use these prosodic cues when present.
Yuki Kamide, Gerry Altmann, and Sarah L. Haywood.
The time-course of prediction in incremental sentence processing:
Evidence from anticipatory eye movements.
Journal of Memory and Language, 49:133--156, 2003.
see Corrigendum.
[ .pdf ]
Abstract: Three eye-tracking experiments using the
visual-world paradigm are described that explore the basis by
which thematic dependencies can be evaluated in advance of
linguistic input that unambiguously signals those
dependencies. Following Altmann and Kamide (1999), who found
that selectional information conveyed by a verb can be used
to anticipate an upcoming Theme, we attempt here to draw a
more precise picture of the basis for such anticipatory
processing. Our data from two studies in English and one in
Japanese suggest that (a) verb-based information is not
limited to anticipating the immediately following
(grammatical) object, but can also anticipate later occurring
objects (e.g., Goals), (b) in combination with information
conveyed by the verb, a pre-verbal argument (Agent) can
constrain the anticipation of a subsequent Theme, and (c) in
a head-final construction such as that typically found in
Japanese, both syntactic and semantic constraints extracted
from pre-verbal arguments can enable the anticipation, in
effect, of a further forthcoming argument in the absence of
their head (the verb). We suggest that such processing is the
hallmark of an incremental processor that is able to draw on
different sources of information (some non-linguistic) at the
earliest possible opportunity to establish the fullest
possible interpretation of the input at each moment in time.
Jennifer E. Arnold, Maria Fagnano, and Michael K. Tanenhaus.
Disfluencies signal theee, um, new information.
Journal of Psycholinguistic Research, 32(1):25--36, 2003.
[ .pdf ]
Abstract: Speakers are often disfluent, for example,
saying "theee uh candle" instead of "the candle".
Production data show that disfluencies occur more often
during references to things that are discourse-new, rather
than given. An eyetracking experiment shows that this
correlation between disfluency and discourse status affects
speech comprehension. Subjects viewed scenes containing four
objects, including two cohort competitors (e.g., camel,
candle), and followed spoken instructions to move the
objects. The first instruction established one cohort as
discourse-given; the other was discourse-new. The second
instruction was either fluent or disfluent, and referred to
either the given or new cohort. Fluent instructions led to
more initial fixations on the given cohort object
(replicating Dahan et al., 2002). By contrast, disfluent
instructions resulted in more fixations on the new cohort.
This shows that discourse-new information can be accessible
under some circumstances. More generally, it suggests that
disfluency affects core language comprehension processes.
Matthew J. Traxler, Robin K. Morris, and Rachel E. Seely.
Processing subject and object relative clauses: Evidence from
eye-movements.
Journal of Memory and Language, 47:69--90, 2002.
[ .pdf ]
Abstract: Three eye-movement-monitoring experiments
investigated processing of sentences containing
subject-relative and object-relative clauses. The first
experiment showed that sentences containing object-relative
clauses were more difficult to process than sentences
containing subject-relative clauses during the relative
clause and the matrix verb. The second experiment manipulated
the plausibility of the sentential subject and the noun
within the relative clause as the agent of the action
represented by the verb in the relative clause. Readers
experienced greater difficulty during processing of sentences
containing object-relative clauses than subject-relative
clauses. The third experiment manipulated the animacy of the
sentential subject and the noun within the relative clause.
This experiment demonstrated that the difficulty associated
with object-relative clauses was greatly reduced when the
sentential subject was inanimate. We interpret the results
with respect to theories of syntactic parsing.
Michael J. Spivey, Michael K. Tanenhaus, Kathleen M. Eberhard, and Julie C.
Sedivy.
Eye movements and spoken language comprehension: Effects of visual
context on syntactic ambiguity resolution.
Cognitive Psychology, 45(4):447--481, 2002.
[ .pdf ]
Abstract: When participants follow spoken
instructions to pick up and move objects in a visual
workspace, their eye movements to the objects are closely
time-locked to referential expressions in the instructions.
Two experiments used this methodology to investigate the
processing of the temporary ambiguities that arise because
spoken language unfolds over time. Experiment 1 examined the
processing of sentences with a temporarily ambiguous
prepositional phrase (e.g., Put the apple on the towel in the
box) using visual contexts that supported either the normally
preferred initial interpretation (the apple should be put on
the towel) or the less-preferred interpretation (the apple is
already on the towel and should be put in the box). Eye
movement patterns clearly established that the initial
interpretation of the ambiguous phrase was the one consistent
with the context. Experiment 2 replicated these results using
prerecorded digitized speech to eliminate any possibility of
prosodic differences across conditions or experimenter
demand. Overall, the findings are consistent with a broad
theoretical framework in which real-time language
comprehension immediately takes into account a rich array of
relevant nonlinguistic context.
Martin J. Pickering, Matthew J. Traxler, and Matthew W. Crocker.
Ambiguity resolution in sentence processing: Evidence against
frequency-based accounts.
Journal of Memory and Language, 43(3):447--475, 2000.
[ .pdf ]
Abstract: Three eye-tracking experiments investigated
two frequency-based processing accounts: the serial
lexical-guidance account, in which people adopt the analysis
compatible with the most likely subcategorization of a verb;
and the serial-likelihood account, in which people adopt the
analysis that they would regard as the most likely analysis,
given the information available at the point of ambiguity.
The results demonstrate that neither of these accounts
explains readers' performance. Instead, people preferred to
attach noun phrases as arguments of verbs even when such
analyses were unlikely to be correct. We suggest that these
results fit well with a model in which the processor
initially favors informative analyses.