Institut für Linguistik / Allgemeine Sprachwissenschaft
The aim of the present thesis is to determine to what degree the processes involved in sentence comprehension are sensitive to task demands. A central phenomenon in this regard is the so-called ambiguity advantage: the finding that ambiguous sentences can be easier to process than unambiguous ones. This finding may appear counterintuitive, because computing more meanings should require more effort. Currently, two theories exist that can explain this finding.
The Unrestricted Race Model (URM) by van Gompel et al. (2001) assumes that several sentence interpretations are computed in parallel, whenever possible, and that the first interpretation to be computed is assigned to the sentence. Because the duration of each structure-building process varies from trial to trial, the parallelism in structure-building predicts that ambiguous sentences should be processed faster. This is because when two structures are permissible, the chances that some interpretation will be computed quickly are higher than when only one specific structure is permissible. Importantly, the URM is not sensitive to task demands such as the type of comprehension questions being asked.
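The URM's core prediction follows from a simple statistical fact: the minimum of two random completion times is, on average, smaller than either one alone. A short simulation can make this concrete (a purely illustrative sketch; the exponential distribution and its mean are arbitrary assumptions, not parameters from the thesis):

```python
import random

# Model each structure-building process as a random completion time,
# here drawn from an exponential distribution with a mean of 400
# (arbitrary units; illustrative only).
random.seed(0)

def build_time(mean=400.0):
    """Completion time of one structure-building attempt."""
    return random.expovariate(1.0 / mean)

TRIALS = 100_000

# Unambiguous sentence: only one structure is permissible.
unambiguous = sum(build_time() for _ in range(TRIALS)) / TRIALS

# Ambiguous sentence: two structures race; the first one computed wins.
ambiguous = sum(min(build_time(), build_time()) for _ in range(TRIALS)) / TRIALS

print(f"mean time, one structure: {unambiguous:.0f}")
print(f"mean time, race of two:   {ambiguous:.0f}")
```

With two structures racing, the expected winning time comes out at roughly half the single-structure time, which is the processing advantage for ambiguous sentences that the URM predicts.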
A radically different proposal is the strategic underspecification model by Swets et al. (2008). It assumes that readers do not attempt to resolve ambiguities unless absolutely necessary; in other words, they underspecify. According to the strategic underspecification hypothesis, all attested replications of the ambiguity advantage arise because, in those experiments, readers were not required to fully understand the sentence.
In this thesis, these two models of the parser’s actions at choice-points in the sentence are presented and evaluated. First, it is argued that Swets et al.’s (2008) evidence against the URM and in favor of underspecification is inconclusive. Next, the precise predictions of the URM as well as the underspecification model are refined. Subsequently, a self-paced reading experiment involving the attachment of pre-nominal relative clauses in Turkish is presented, which provides evidence against strategic underspecification. A further experiment is presented which investigated relative clause attachment in German using the speed-accuracy tradeoff (SAT) paradigm. The experiment provides evidence against strategic underspecification and in favor of the URM. Furthermore, the results of the experiment are used to argue that human sentence comprehension is fallible, and that theories of parsing should be able to account for that fact. Finally, a third experiment is presented, which provides evidence for sensitivity to task demands in the treatment of ambiguities. Because this finding is incompatible with the URM, and because the strategic underspecification model has been ruled out, a new model of ambiguity resolution is proposed: the stochastic multiple-channel model of ambiguity resolution (SMCM). It is further shown that the quantitative predictions of the SMCM are in agreement with experimental data.
In conclusion, it is argued that the human sentence comprehension system is parallel and fallible, and that it is sensitive to task demands.
This dissertation addresses the question of how linguistic structures can be represented in working memory. We propose a memory-based computational model that derives offline and online complexity profiles in terms of a top-down parser for minimalist grammars (Stabler, 2011). The complexity metric reflects the amount of time an item is stored in memory. The presented architecture links grammatical representations stored in memory directly to cognitive behavior by deriving predictions about sentence processing difficulty.
Results from five different sentence comprehension experiments were used to evaluate the model's assumptions about memory limitations. The predictions of the complexity metric were compared to the locality (integration and storage) cost metric of Dependency Locality Theory (Gibson, 2000). Both metrics make comparable offline and online predictions for four of the five phenomena. The key difference between the two metrics is that the proposed complexity metric accounts for the structural complexity of intervening material. In contrast, DLT's integration cost metric considers the number of discourse referents, not the syntactic structural complexity.
We conclude that the syntactic analysis plays a significant role in memory requirements of parsing. An incremental top-down parser based on a grammar formalism easily computes offline and online complexity profiles, which can be used to derive predictions about sentence processing difficulty.
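The general idea of a storage-time metric can be sketched in a few lines (a toy illustration under assumed simplifications: this is not the actual minimalist-grammar parser, and the item intervals are invented):

```python
# Toy sketch of a tenure-based complexity metric: each parser item is
# tracked from the word index at which it enters memory to the word
# index at which it is discharged; complexity sums the storage times.

def tenure_complexity(items):
    """items: list of (entered_at, discharged_at) word-index pairs."""
    return sum(out - into for into, out in items)

# Hypothetical profile: one item (e.g. a predicted long-distance
# dependency) is held across four words; two local items are held briefly.
items = [(0, 4), (1, 2), (2, 3)]
print(tenure_complexity(items))  # → 6
```

On such a metric, structurally complex intervening material matters because it keeps predicted items in memory for longer, whereas DLT's integration cost counts only intervening discourse referents.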
Pronoun resolution normally takes place without conscious effort or awareness, yet the processes behind it are far from straightforward. A large number of cues and constraints have previously been recognised as playing a role in the identification and integration of potential antecedents, yet there is considerable debate over how these operate within the resolution process. The aim of this thesis is to investigate how the parser handles multiple antecedents, in order to better understand the role that certain information sources play during pronoun resolution. I consider how both structural information and information provided by the prior discourse are used during online processing. This is investigated through several eye-tracking-during-reading experiments, complemented by a number of offline questionnaire experiments. I begin by considering how condition B of the Binding Theory (Chomsky 1981; 1986) has been captured in pronoun processing models: some researchers have claimed that processing is faithful to syntactic constraints from the beginning of the search (e.g. Nicol and Swinney 1989), while others have claimed that potential antecedents which are ruled out on structural grounds nonetheless affect processing, because the parser must also pay attention to a potential antecedent’s features (e.g. Badecker and Straub 2002). My experimental findings demonstrate that the parser is sensitive to the subtle changes in syntactic configuration which either allow or disallow pronoun reference to a local antecedent, and indicate that the parser is normally faithful to condition B at all stages of processing. Secondly, I test the Primitives of Binding hypothesis proposed by Koornneef (2008), based on work by Reuland (2001), a modular approach to pronoun resolution in which variable binding (a semantic relationship between pronoun and antecedent) takes place before coreference.
I demonstrate that a variable-binding (VB) antecedent is not systematically considered earlier than a coreference (CR) antecedent online. I then explore whether these findings could be attributed to the linear order of the antecedents, and uncover a robust recency preference both online and offline. I consider what role recency plays in pronoun resolution and how it can be reconciled with the first-mention advantage (Gernsbacher and Hargreaves 1988; Arnold 2001; Arnold et al. 2007). Finally, I investigate how aspects of the prior discourse affect pronoun resolution. Prior discourse status clearly had an effect on pronoun resolution, but an antecedent’s appearance in the previous context was not always facilitative; I propose that this is due to the number of topic switches that a reader must make, leading to a lack of discourse coherence which has a detrimental effect on pronoun resolution. The sensitivity of the parser to structural cues does not entail that cue types can be easily separated into distinct sequential stages; I therefore propose that the parser is structurally sensitive but not modular. Aspects of pronoun resolution can be captured within a parallel constraints model; however, such a model should be sensitive to the activation of potential antecedents based on discourse factors, and structural cues should be strongly weighted.
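A parallel constraints model with strongly weighted structural cues can be caricatured as a weighted cue-combination scorer. The cue names and weights below are invented purely for illustration and are not taken from the thesis:

```python
# Hypothetical parallel-constraints scorer: each candidate antecedent
# accumulates weighted cue activations; the structural (condition B)
# cue carries by far the largest weight (all values assumed).
WEIGHTS = {
    "structural_ok": 5.0,   # licensed by condition B
    "recency": 1.5,         # most recently mentioned
    "first_mention": 1.0,   # first-mention advantage
    "discourse_topic": 1.0, # topical in the prior discourse
}

def activation(cues):
    """cues: dict of cue name -> activation in [0, 1]."""
    return sum(WEIGHTS[name] * value for name, value in cues.items())

# A recent but structurally disallowed local antecedent loses to a
# structurally licensed, first-mentioned, topical one.
local = activation({"structural_ok": 0.0, "recency": 1.0,
                    "first_mention": 0.0, "discourse_topic": 0.0})
nonlocal_ante = activation({"structural_ok": 1.0, "recency": 0.0,
                            "first_mention": 1.0, "discourse_topic": 1.0})
print(nonlocal_ante > local)
```

The point of the sketch is the design choice argued for above: structural and discourse cues compete in parallel, but the structural weight is large enough that condition B effectively decides the outcome.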
Production of regular and non-regular verbs: evidence for a lexical entry complexity account
(2010)
The incredible productivity and creativity of language depend on two fundamental resources: a mental lexicon and a mental grammar. Rules of grammar enable us to produce and understand complex phrases we have not encountered before and at the same time constrain the computation of complex expressions. The concepts of the mental lexicon and mental grammar have been thoroughly tested by comparing the use of regular versus non-regular word forms. Regular verbs (e.g. walk-walked) are computed using a suffixation rule in a neural system for grammatical processing; non-regular verbs (e.g. run-ran) are retrieved from associative memory. The role of regularity has only been explored for the past tense, where regularity is overtly visible. To explore the representation and encoding of regularity as well as the inflectional processes involved in the production of regular and non-regular verbs, this dissertation investigated three groups of German verbs: regular, irregular and hybrid verbs. Hybrid verbs in German have completely regular conjugation in the present tense and irregular conjugation in the past tense. Articulation latencies were measured while participants named pictures of actions, producing the 3rd person singular of regular, hybrid, and irregular verbs in present and past tense. Studying the production of German verbs in past and present tense, this dissertation explored the complexity of lexical entries as a decisive factor in the production of verbs.
This is the first issue of a series in which affiliates of the Institute of Linguistics report the results of their experimental work. Generative linguists usually rely on the method of native speaker judgements in order to test their hypotheses. If a hypothesis rules out a set of sentences, linguists can ask native speakers whether they feel these sentences are indeed ungrammatical in their language. There are, however, circumstances where this method is unreliable. In such cases a more elaborate method to test a hypothesis is called for. All papers in this series, and hence all papers in this volume, deal with issues that cannot be reliably tested with native speaker judgements. This volume contains 7 papers, all using different methods and finding answers to very different questions. This heterogeneity, by the way, reflects the various interests and research programs of the institute. The paper by Trutkowski, Zugck, Blaszczak, Fanselow, Fischer and Vogel deals with superiority in 10 Indo-European languages. The papers by Schlesewsky, Fanselow and Frisch and by Schlesewsky and Frisch deal with the role of case in processing German sentences. The paper by Vogel and Frisch deals with resolving case conflicts, as does the paper by Vogel and Zugck. The nature of partitive case is the topic of the paper by Fischer. The paper by Kügler deals with the realization of question intonation in two German dialects. We hope that you enjoy reading the papers!