The acquisition of wh-questions is one area of children’s syntactic development, which largely takes place within the first three years of life. Two movement operations play a central role: the movement of the interrogative pronoun to the first position of the wh-question and the movement of the verb to the second position. Three studies investigated, on the one hand, whether German-speaking children who cannot yet produce wh-questions are able to distinguish grammatical from ungrammatical wh-questions, and, on the other hand, how typically developing and language-impaired German-speaking children perform when comprehending and correcting wh-questions of varying complexity (positive and negative wh-questions). The results point to early syntactic knowledge of wh-questions in language acquisition and thus support the assumption of continuity between child grammar and the adult standard language. Moreover, language-impaired children do not appear to differ qualitatively from typically developing children in the acquisition of wh-questions; they merely produce correct wh-questions later. In both populations, a syntactic economy effect was observed, indicating that verb movement is implemented later than movement of the wh-element.
Wortartige Zwischenfälle
(2004)
There is evidence that infants start extracting words from fluent speech around 7.5 months of age (e.g., Jusczyk & Aslin, 1995) and that they use at least two mechanisms to segment word forms from fluent speech: prosodic information (e.g., Jusczyk, Cutler & Redanz, 1993) and statistical information (e.g., Saffran, Aslin & Newport, 1996). However, how these two mechanisms interact and whether they change during development is still not fully understood.
The main aim of the present work is to understand in what way different cues to word segmentation are exploited by infants when learning the language in their environment, as well as to explore whether this ability is related to later language skills. In Chapter 3 we sought to determine the reliability of the method used in most of the experiments in the present thesis (the Headturn Preference Procedure), as well as to examine correlations and individual differences between infants’ performance and later language outcomes. In Chapter 4 we investigated how German-speaking adults weigh statistical and prosodic information for word segmentation. We familiarized adults with an auditory string in which statistical and prosodic information indicated different word boundaries and obtained both behavioral and pupillometry responses. We then conducted further experiments to understand in what way different cues to word segmentation are exploited by 9-month-old German-learning infants (Chapter 5) and by 6-month-old German-learning infants (Chapter 6). In addition, we conducted follow-up questionnaires with the infants and obtained language outcomes at later stages of development.
Our findings revealed that (1) German-speaking adults weight prosodic cues strongly, at least for the materials used in this study, and that (2) German-learning infants weigh these two kinds of cues differently depending on age and/or language experience. Unlike English-learning infants, 6-month-olds relied more strongly on prosodic cues, whereas 9-month-olds showed no preference for either cue in the word segmentation task. From the present results it remains unclear whether the ability to use prosodic cues for word segmentation relates to later vocabulary. We speculate that prosody provides infants with their first window into the specific acoustic regularities of the signal, enabling them to master the specific stress pattern of German rapidly. Our findings are a step forward in understanding the early impact of native prosody, relative to statistical learning, on early word segmentation.
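The statistical mechanism referred to above (Saffran, Aslin & Newport, 1996) is commonly operationalized as transitional probabilities between adjacent syllables, with word boundaries posited where predictability drops. A minimal sketch, using made-up syllables rather than the thesis's actual familiarization materials:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(a -> b) = count(a b) / count(a), over a syllable stream."""
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Toy stream: two trisyllabic "words" concatenated without pauses,
# in the style of a statistical-learning familiarization phase.
words = [["go", "la", "bu"], ["tu", "pi", "ro"]]
order = [0, 1, 0, 1, 0, 0, 1, 1]
stream = [s for i in order for s in words[i]]
tps = transitional_probabilities(stream)
# Within-word transitions (go->la, la->bu) are perfectly predictable,
# while word-spanning transitions (bu->tu) are not, marking a
# statistically inferable word boundary.
```

Prosodic cues (e.g., trochaic stress in German) would then either converge with or contradict these statistical boundaries, which is the conflict the thesis exploits.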
This thesis explores word order variability in verb-final languages. Verb-final languages have a reputation for a high degree of word order variability. However, that reputation amounts to an urban myth, owing to a lack of systematic investigation. This thesis provides such a systematic investigation by presenting original data from several verb-final languages, with a focus on four Uralic ones: Estonian, Udmurt, Meadow Mari, and South Sámi. As with every urban myth, there is a kernel of truth: many unrelated verb-final languages share a particular kind of word order variability, A-scrambling, in which the fronted elements do not receive a special information-structural role, such as topic or contrastive focus. That word order variability goes hand in hand with placing focused phrases further to the right, in the position directly in front of the verb. Variations on this pattern are exemplified by Uyghur, Standard Dargwa, Eastern Armenian, and three of the Uralic languages, Estonian, Udmurt, and Meadow Mari. So much for the kernel of truth; the fourth Uralic language, South Sámi, is comparatively rigid and does not feature this particular kind of word order variability. Further such comparatively rigid, non-scrambling verb-final languages are Dutch, Afrikaans, Amharic, and Korean. In contrast to scrambling languages, non-scrambling languages feature obligatory subject movement, causing word order rigidity alongside other typical EPP effects.
The EPP is a defining feature of South Sámi clause structure in general. South Sámi exhibits a one-of-a-kind alternation between SOV and SAuxOV order that is captured by the assumption of the EPP and obligatory movement of auxiliaries but not lexical verbs. Other languages that allow for SAuxOV order either lack an alternation because the auxiliary is obligatorily present (Macro-Sudan SAuxOVX languages), or feature an alternation between SVO and SAuxOV (Kru languages; V2 with underlying OV as a fringe case). In the SVO–SAuxOV languages, both auxiliaries and lexical verbs move. Hence, South Sámi shows that the textbook difference between the VO languages English and French, whether verb movement is restricted to auxiliaries, also extends to OV languages. SAuxOV languages are an outlier among OV languages in general but are united by the presence of the EPP.
Word order variability is not restricted to the preverbal field in verb-final languages, as most of them feature postverbal elements (PVE). PVE challenge the notion of verb-finality in a language. Strictly verb-final languages without any clause-internal PVE are rare. This thesis charts the first structural and descriptive typology of PVE. Verb-final languages vary in the categories they allow as PVE. Allowing for non-oblique PVE is a pivotal threshold: when non-oblique PVE are allowed, PVE can be used for information-structural effects. Many areally and genetically unrelated languages only allow for given PVE but differ in whether the PVE are contrastive. In those languages, verb-finality is not at stake since verb-medial orders are marked. In contrast, the Uralic languages Estonian and Udmurt allow for any PVE, including information focus. Verb-medial orders can be used in the same contexts as verb-final orders without semantic and pragmatic differences. As such, verb placement is subject to actual free variation. The underlying verb-finality of Estonian and Udmurt can only be inferred from a range of diagnostics indicating optional verb movement in both languages. In general, it is not possible to account for PVE with a uniform analysis: rightwards merge, leftward verb movement, and rightwards phrasal movement are required to capture the cross- and intralinguistic variation.
Knowing that a language is verb-final does not allow one to draw conclusions about word order variability in that language. There are patterns of homogeneity, such as the word order variability driven by directly preverbal focus and the givenness of postverbal elements, but those are not brought about by verb-finality alone. Preverbal word order variability is restricted by the more abstract property of obligatory subject movement, whereas the determinant of postverbal word order variability remains to be identified.
This thesis is concerned with the knowledge-based modeling of audio signal classifiers (ASC) for bioacoustics. It addresses an interdisciplinary problem with many facets, including species-specific bioacoustic questions, mathematical-algorithmic details, and problems of representing expert knowledge. A universal, practically applicable method for the knowledge-based modeling of bioacoustic ASC is presented and evaluated. Throughout, the problem of modeling ASC is viewed from a KDD perspective (Knowledge Discovery in Databases). The basic approach is to substantially ease the modeling of ASC by means of modified KDD methods and data-mining techniques. The established KDD paradigm is transferred to the domain of ASC modeling via a detailed formal model. Nineteen elementary KDD procedures form the basis of a comprehensive system for the knowledge-based modeling of ASC. The method and algorithms are evaluated by applying them to a very large collection of acoustic signals of the bottlenose dolphin, recorded specifically for this thesis in Eilat (Israel). On the basis of this audio material, four empirical studies are carried out:
- Based on oscillographic and spectrographic representations, a phenomenological classification system for the diverse calls of the bottlenose dolphin is presented.
- Using a corpus of semi-synthetic audio data, several basic procedures for modeling and applying ASC are examined with respect to their accuracy and robustness.
- With a specially developed clustering procedure, several thousand natural whistles of the bottlenose dolphin are analyzed; the results are visualized and discussed.
- Using machine pattern-recognition-based acoustic monitoring, the emission dynamics of different call types are tracked over four weeks; approximately 2.5 million click sounds are subsequently analyzed for their spectral characteristics.
The described method and algorithms can be extended in many respects without changing their basic architecture, and they can readily be applied across the entire field of bioacoustics. They thus hold great potential for neighboring disciplines as well, since precise knowledge of animals' acoustic communication and sonar systems will play an important role in theoretical biology, in the cognitive sciences, and also in practical conservation.
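The thesis's clustering procedure for whistles is specially developed and not reproduced here; as a generic sketch of the underlying idea (resampling whistle frequency contours to a fixed length so they become comparable vectors, then clustering them), with made-up contour values:

```python
import random

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def resample(contour, n=8):
    """Linearly interpolate a frequency contour (Hz over time) to n
    points so whistles of different durations become comparable vectors."""
    m = len(contour)
    out = []
    for i in range(n):
        t = i * (m - 1) / (n - 1)
        lo = int(t)
        hi = min(lo + 1, m - 1)
        out.append(contour[lo] + (contour[hi] - contour[lo]) * (t - lo))
    return out

def kmeans(vectors, k, iters=25, seed=0):
    """Plain k-means; returns one cluster label per input vector."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    labels = [0] * len(vectors)
    for _ in range(iters):
        for i, v in enumerate(vectors):
            labels[i] = min(range(k), key=lambda j: euclid(v, centroids[j]))
        for j in range(k):
            members = [v for i, v in enumerate(vectors) if labels[i] == j]
            if members:  # keep the old centroid if a cluster empties
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Toy demo: five rising and five falling whistle contours (made up).
rising = [[4000 + 300 * t + 10 * i for t in range(10)] for i in range(5)]
falling = [[9000 - 300 * t + 10 * i for t in range(10)] for i in range(5)]
labels = kmeans([resample(c) for c in rising + falling], k=2)
```

On such well-separated toy contours the two shape families end up in two distinct clusters; real whistle data would require a more robust distance and feature representation.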
Wie interpretieren Kinder nur? : Experimentelle Untersuchungen zum Erwerb von Informationsstruktur
(2010)
The central question of this thesis is how six-year-old monolingual German children interpret sentences with the focus particle nur ('only'). Five experiments investigated how the surface position of the focus particle affects sentence comprehension and whether contextual embedding of the nur-sentences leads to a target-like interpretation. In contrast to the results of previous studies (e.g., Crain et al., 1994; Paterson et al., 2003), the data show that the children tested interpreted the presented nur-sentences in a target-like way when the sentences were embedded in an adequate context. The children also made more errors interpreting sentences with nur before the subject (Nur die Maus hat einen Ball, 'Only the mouse has a ball') than with nur before the object (Die Maus hat nur einen Ball, 'The mouse has only a ball'). Contrary to the syntactically based account of Crain et al. (1994) and the semantic-pragmatic account of Paterson et al. (2003), the thesis attributes the observed performance to information-structural properties of, and differences between, the nur-sentences. The topic-default account proposed here assumes that children always analyze the subject of a sentence as its topic, which yields an incorrect information-structural representation for sentences with nur before the subject. Based on these results and the topic-default account, the thesis concludes by proposing and discussing an acquisition model for the comprehension of sentences with the focus particle nur.
This dissertation examines the integration of incongruent visual-scene and morphological-case information (“cues”) in building thematic-role representations of spoken relative clauses in German.
Addressing the mutual influence of visual and linguistic processing, the Coordinated Interplay Account (CIA) describes a two-step mechanism supporting visuo-linguistic integration (Knoeferle & Crocker, 2006, Cog Sci). However, the outcomes and dynamics of integrating incongruent thematic-role representations from distinct sources have scarcely been investigated. Further, there is evidence that both second-language (L2) and older speakers may rely relatively more on non-syntactic cues than first-language (L1)/young speakers. Yet, the role of visual information for thematic-role comprehension has not been measured in L2 speakers, and only to a limited extent across the adult lifespan.
Thematically unambiguous canonically ordered (subject-extracted) and noncanonically ordered (object-extracted) spoken relative clauses in German (see 1a-b) were presented in isolation and alongside visual scenes conveying either the same (congruent) or the opposite (incongruent) thematic relations as the sentence did.
1 a Das ist der Koch, der die Braut verfolgt.
This is the.NOM cook who.NOM the.ACC bride follows
This is the cook who is following the bride.
b Das ist der Koch, den die Braut verfolgt.
This is the.NOM cook whom.ACC the.NOM bride follows
This is the cook whom the bride is following.
The relative contribution of each cue to thematic-role representations was assessed with agent identification. Accuracy and latency data were collected post-sentence from a sample of L1 and L2 speakers (Zona & Felser, 2023), and from a sample of L1 speakers from across the adult lifespan (Zona & Reifegerste, under review). In addition, the moment-by-moment dynamics of thematic-role assignment were investigated with mouse tracking in a young L1 sample (Zona, under review).
The following questions were addressed: (1) How do visual scenes influence thematic-role representations of canonical and noncanonical sentences? (2) How does reliance on visual-scene, case, and word-order cues vary in L1 and L2 speakers? (3) How does reliance on visual-scene, case, and word-order cues change across the lifespan?
The results showed reliable effects of incongruence between visually and linguistically conveyed thematic relations on thematic-role representations. Incongruent (vs. congruent) scenes yielded slower and less accurate responses to agent-identification probes presented post-sentence. The recently inspected agent was considered the most likely agent from ~300 ms after trial onset, and the convergence of visual scenes and word order enabled comprehenders to assign thematic roles predictively.
L2 (vs. L1) participants relied more on word order overall. In response to noncanonical clauses presented with incongruent visual scenes, sensitivity to case predicted the size of incongruence effects better than L1-L2 grouping. These results suggest that the individual’s ability to exploit specific cues might predict their weighting.
Sensitivity to case was stable throughout the lifespan, while visual effects increased with increasing age and were modulated by individual interference-inhibition levels. Thus, age-related changes in comprehension may stem from stronger reliance on visually (vs. linguistically) conveyed meaning.
These patterns represent evidence for a recent-role preference – i.e., a tendency to re-assign visually conveyed thematic roles to the same referents in temporally coordinated utterances. The findings (i) extend the generalizability of CIA predictions across stimuli, tasks, populations, and measures of interest, (ii) contribute to specifying the outcomes and mechanisms of detecting and indexing incongruent representations within the CIA, and (iii) speak to current efforts to understand the sources of variability in sentence comprehension.
Verbales Arbeitsgedächtnis und die Verarbeitung lexikalisch ambiger Wörter in Wort- und Satzkontexten
(2003)
The aim of the present thesis is to answer the question to what degree the processes involved in sentence comprehension are sensitive to task demands. A central phenomenon in this regard is the so-called ambiguity advantage, which is the finding that ambiguous sentences can be easier to process than unambiguous sentences. This finding may appear counterintuitive, because more meanings should be associated with a higher computational effort. Currently, two theories exist that can explain this finding.
The Unrestricted Race Model (URM) by van Gompel et al. (2001) assumes that several sentence interpretations are computed in parallel, whenever possible, and that the first interpretation to be computed is assigned to the sentence. Because the duration of each structure-building process varies from trial to trial, the parallelism in structure-building predicts that ambiguous sentences should be processed faster. This is because when two structures are permissible, the chances that some interpretation will be computed quickly are higher than when only one specific structure is permissible. Importantly, the URM is not sensitive to task demands such as the type of comprehension questions being asked.
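The URM's core prediction can be illustrated with a toy Monte Carlo simulation: the finishing time of the faster of two racers is, on average, below that of a lone racer. The lognormal duration distribution and its parameters are illustrative assumptions; any positively skewed distribution makes the same point.

```python
import random

def finish_time(n_parses, rng):
    """The parse assigned to the sentence is whichever of the n
    parallel structure-building races finishes first; per-race
    durations are drawn from a lognormal (an assumption)."""
    return min(rng.lognormvariate(0.0, 0.5) for _ in range(n_parses))

def mean_rt(n_parses, trials=20000, seed=1):
    rng = random.Random(seed)
    return sum(finish_time(n_parses, rng) for _ in range(trials)) / trials

# An ambiguous sentence licenses two structures (two racers), an
# unambiguous one only a single structure, so on average the
# ambiguous sentence finishes earlier: the ambiguity advantage.
```

Note that nothing in this sketch depends on the comprehension question asked, which is exactly the task-insensitivity the thesis puts to the test.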
A radically different proposal is the strategic underspecification model by Swets et al. (2008). It assumes that readers do not attempt to resolve ambiguities unless it is absolutely necessary. In other words, they underspecify. According to the strategic underspecification hypothesis, all attested replications of the ambiguity advantage are due to the fact that in those experiments, readers were not required to fully understand the sentence.
In this thesis, these two models of the parser’s actions at choice-points in the sentence are presented and evaluated. First, it is argued that Swets et al.’s (2008) evidence against the URM and in favor of underspecification is inconclusive. Next, the precise predictions of the URM as well as the underspecification model are refined. Subsequently, a self-paced reading experiment involving the attachment of pre-nominal relative clauses in Turkish is presented, which provides evidence against strategic underspecification. A further experiment is presented, which investigated relative clause attachment in German using the speed-accuracy tradeoff (SAT) paradigm. This experiment provides evidence against strategic underspecification and in favor of the URM. Furthermore, the results of the experiment are used to argue that human sentence comprehension is fallible, and that theories of parsing should be able to account for that fact. Finally, a third experiment is presented, which provides evidence for sensitivity to task demands in the treatment of ambiguities. Because this finding is incompatible with the URM, and because the strategic underspecification model has been ruled out, a new model of ambiguity resolution is proposed: the stochastic multiple-channel model of ambiguity resolution (SMCM). It is further shown that the quantitative predictions of the SMCM are in agreement with experimental data.
In conclusion, it is argued that the human sentence comprehension system is parallel and fallible, and that it is sensitive to task-demands.
Background: Individuals with aphasia after stroke (IWA) often present with working memory (WM) deficits. Research investigating the relationship between WM and language abilities has led to the promising hypothesis that treatments of WM could lead to improvements in language, a phenomenon known as transfer. Although recent treatment protocols have been successful in improving WM, the evidence to date is scarce, and the extent to which improvements in trained tasks of WM transfer to untrained memory tasks, spoken sentence comprehension, and functional communication is as yet poorly understood.
Aims: We aimed at (a) investigating whether WM can be improved through an adaptive n-back training in IWA (Study 1–3); (b) testing whether WM training leads to near transfer to unpracticed WM tasks (Study 1–3), and far transfer to spoken sentence comprehension (Study 1–3), functional communication (Study 2–3), and memory in daily life in IWA (Study 2–3); and (c) evaluating the methodological quality of existing WM treatments in IWA (Study 3). To address these goals, we conducted two empirical studies – a case-control study with Hungarian-speaking IWA (Study 1) and a multiple baseline study with German-speaking IWA (Study 2) – and a systematic review (Study 3).
Methods: In Study 1 and 2 participants with chronic, post-stroke aphasia performed an adaptive, computerized n-back training. ‘Adaptivity’ was implemented by adjusting the tasks’ difficulty level according to the participants’ performance, ensuring that they always practiced at an optimal level of difficulty. To assess the specificity of transfer effects and to better understand the underlying mechanisms of transfer on spoken sentence comprehension, we included an outcome measure testing specific syntactic structures that have been proposed to involve WM processes (e.g., non-canonical structures with varying complexity).
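The adaptivity described above can be sketched as a simple accuracy-based staircase. The thresholds and step size below are illustrative assumptions, not the parameters used in the studies:

```python
def adjust_level(level, accuracy, up=0.9, down=0.7):
    """One adaptive step: raise the n-back level after a
    high-accuracy block, lower it after a low-accuracy block
    (never below 1), otherwise keep it unchanged."""
    if accuracy >= up:
        return level + 1
    if accuracy < down and level > 1:
        return level - 1
    return level

# A participant's level trajectory over four training blocks:
# 1 -> 2 -> 3 -> 2 -> 2, always hovering near their capacity limit.
level = 1
for block_accuracy in [0.95, 0.95, 0.60, 0.80]:
    level = adjust_level(level, block_accuracy)
```

The point of such a staircase is that every participant, regardless of baseline WM capacity, spends most of the training at an individually optimal level of difficulty.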
Results: We detected a mixed pattern of training and transfer effects across individuals: five participants out of six significantly improved in the n-back training. Our most important finding is that all six participants improved significantly in spoken sentence comprehension (i.e., far transfer effects). In addition, we also found far transfer to functional communication (in two participants out of three in Study 2) and everyday memory functioning (in all three participants in Study 2), and near transfer to unpracticed n-back tasks (in four participants out of six). Pooled data analysis of Study 1 and 2 showed a significant negative relationship between initial spoken sentence comprehension and the amount of improvement in this ability, suggesting that the more severe the participants’ spoken sentence comprehension deficit was at the beginning of training, the more they improved after training. Taken together, we detected both near and far transfer effects in our studies, but the effects varied across participants. The systematic review evaluating the methodological quality of existing WM treatments in stroke IWA (Study 3) showed poor internal and external validity across the 17 included studies. Poor internal validity was mainly due to the use of an inappropriate design, lack of randomization of study phases, lack of blinding of participants and/or assessors, and insufficient sampling. Low external validity was mainly related to incomplete information on the setting, lack of an appropriate analysis or of justification for the suitability of the analysis procedure used, and lack of replication across participants and/or behaviors. Results in terms of WM, spoken sentence comprehension, and reading are promising, but further studies with more rigorous methodology and stronger experimental control are needed to determine the beneficial effects of WM intervention.
Conclusions: Results of the empirical studies suggest that WM can be improved with a computerized, adaptive WM training, and that improvements can lead to transfer effects to spoken sentence comprehension and functional communication in some individuals with chronic post-stroke aphasia. The fact that improvements in spoken sentence comprehension were not specific to certain syntactic structures (i.e., non-canonical complex sentences) suggests that WM is not involved in the online, automatic processing of syntactic information (i.e., parsing and interpretation), but plays a more general role in the later stage of spoken sentence comprehension (i.e., post-interpretive comprehension). The individual differences in treatment outcomes call for future research to clarify how far these results generalize to the population of IWA. Future studies are needed to identify mechanisms that may generalize to at least a subpopulation of IWA, as well as to investigate baseline non-linguistic cognitive and language abilities that may play a role in transfer effects and their maintenance. These may require larger yet homogeneous samples.
In experiments investigating sentence processing, eye movement measures such as fixation durations and regression proportions while reading are commonly used to draw conclusions about processing difficulties. However, these measures are the result of an interaction of multiple cognitive levels and processing strategies and thus are only indirect indicators of processing difficulty. In order to properly interpret an eye movement response, one has to understand the underlying principles of adaptive processing such as trade-off mechanisms between reading speed and depth of comprehension that interact with task demands and individual differences. Therefore, it is necessary to establish explicit models of the respective mechanisms as well as their causal relationship with observable behavior. There are models of lexical processing and eye movement control on the one side and models on sentence parsing and memory processes on the other. However, no model so far combines both sides with explicitly defined linking assumptions.
In this thesis, a model is developed that integrates oculomotor control with a parsing mechanism and a theory of cue-based memory retrieval. On the basis of previous empirical findings and independently motivated principles, adaptive, resource-preserving mechanisms of underspecification are proposed both on the level of memory access and on the level of syntactic parsing. The thesis first investigates the model of cue-based retrieval in sentence comprehension of Lewis & Vasishth (2005) with a comprehensive literature review and computational modeling of retrieval interference in dependency processing. The results reveal a great variability in the data that is not explained by the theory. Therefore, two principles, 'distractor prominence' and 'cue confusion', are proposed as an extension to the theory, thus providing a more adequate description of systematic variance in empirical results as a consequence of experimental design, linguistic environment, and individual differences. In the remainder of the thesis, four interfaces between parsing and eye movement control are defined: Time Out, Reanalysis, Underspecification, and Subvocalization. By comparing computationally derived predictions with experimental results from the literature, it is investigated to what extent these four interfaces constitute an appropriate elementary set of assumptions for explaining specific eye movement patterns during sentence processing. Through simulations, it is shown how this system of in itself simple assumptions results in predictions of complex, adaptive behavior.
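The cue-based retrieval theory under investigation (Lewis & Vasishth, 2005) can be sketched in miniature: an item's activation is its base level plus spreading activation from the retrieval cues it matches, and retrieval latency falls exponentially with activation. This is a heavily simplified sketch (no noise, no fan effect, illustrative parameter values), not the thesis's implementation:

```python
import math

def activation(base, cues_matched, n_cues, max_assoc=1.5):
    """ACT-R-style activation: base level plus spreading activation
    from each matching retrieval cue; the cue-weight budget is
    shared equally among the n_cues cues in the retrieval request."""
    weight = 1.0 / n_cues
    return base + cues_matched * weight * max_assoc

def latency(act, factor=0.2):
    """Retrieval latency shrinks exponentially with activation."""
    return factor * math.exp(-act)

# A target matching both cues (e.g. +subject, +animate) is retrieved
# faster than a distractor matching only one; similarity-based
# interference arises as the distractor's cue match increases.
```

The 'distractor prominence' and 'cue confusion' extensions proposed in the thesis modulate exactly these quantities, which is why the sketch stops at the unextended baseline.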
In conclusion, it is argued that, on all levels, the sentence comprehension mechanism seeks a balance between necessary processing effort and reading speed on the basis of experience, task demands, and resource limitations. Theories of linguistic processing therefore need to be explicitly defined and implemented, in particular with respect to linking assumptions between observable behavior and underlying cognitive processes. The comprehensive model developed here integrates multiple levels of sentence processing that hitherto have only been studied in isolation. The model is made publicly available as an expandable framework for future studies of the interactions between parsing, memory access, and eye movement control.
This study presents new insights into null subjects, topic drop and the interpretation of topic-dropped elements. Besides providing an empirical data survey, it offers explanations to well-known problems, e.g. syncretisms in the context of null-subject licensing or the marginality of dropping an element which carries oblique case. The book constitutes a valuable source for both empirically and theoretically interested (generative) linguists.
This dissertation focuses on the handling of time in dialogue. Specifically, it investigates how humans bridge time, or “buy time”, when they are expected to convey information that is not yet available to them (e.g. a travel agent searching for a flight in a long list while the customer is on the line, waiting). It also explores the feasibility of modeling such time-bridging behavior in spoken dialogue systems, and it examines how endowing such systems with more human-like time-bridging capabilities may affect humans’ perception of them.
The relevance of time-bridging in human-human dialogue seems to stem largely from a need to avoid lengthy pauses, as these may cause both confusion and discomfort among the participants of a conversation (Levinson, 1983; Lundholm Fors, 2015). However, this avoidance of prolonged silence is at odds with the incremental nature of speech production in dialogue (Schlangen and Skantze, 2011): Speakers often start to verbalize their contribution before it is fully formulated, and sometimes even before they possess the information they need to provide, which may result in them running out of content mid-turn.
In this work, we elicit conversational data from humans, to learn how they avoid being silent while they search for information to convey to their interlocutor. We identify commonalities in the types of resources employed by different speakers, and we propose a classification scheme. We explore ways of modeling human time-buying behavior computationally, and we evaluate the effect on human listeners of embedding this behavior in a spoken dialogue system.
Our results suggest that a system using conversational speech to bridge time while searching for information to convey (as humans do) can provide a better experience in several respects than one which remains silent for a long period of time. However, not all speech serves this purpose equally: Our experiments also show that a system whose time-buying behavior is more varied (i.e., which exploits several categories from the classification scheme we developed and samples them based on information from human data) can prevent overestimation of waiting time when compared, for example, with a system that repeatedly asks the interlocutor to wait (even if these requests for waiting are phrased differently each time). Finally, this research shows that it is possible to model human time-buying behavior on a relatively small corpus, and that a system using such a model can be preferred by participants over one employing a simpler strategy, such as randomly choosing utterances to produce during the wait, even when the utterances used by both strategies are the same.
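The data-driven sampling strategy described above can be sketched as frequency-weighted sampling over time-buying categories. The category names and counts below are hypothetical placeholders, not the classification scheme or corpus statistics from this work:

```python
import random

def sample_time_buyer(counts, rng):
    """Pick a time-bridging move with probability proportional to its
    frequency in human dialogue data (sorted for determinism)."""
    categories, weights = zip(*sorted(counts.items()))
    return rng.choices(categories, weights=weights, k=1)[0]

# Hypothetical frequency table from a (made-up) human corpus.
corpus_counts = {"filled_pause": 40, "status_update": 25,
                 "small_talk": 20, "wait_request": 15}

rng = random.Random(7)
draws = [sample_time_buyer(corpus_counts, rng) for _ in range(2000)]
# The system's behavior mirrors the corpus distribution instead of
# repeating a single move (e.g. asking the caller to wait every time).
```

The contrast with the baseline strategy in the experiments is precisely this: varied, corpus-weighted moves versus a single repeated (if rephrased) wait request.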
Thematic role assignment and word order preferences in the child language acquisition of Tagalog
(2018)
A critical task in daily communications is identifying who did what to whom in an utterance, or assigning the thematic roles agent and patient in a sentence. This dissertation is concerned with Tagalog-speaking children’s use of word order and morphosyntactic markers for thematic role assignment. It aims to explain children’s difficulties in interpreting sentences with a non-canonical order of arguments (i.e., patient-before-agent) by testing the predictions of the following accounts: the frequency account (Demuth, 1989), the Competition model (MacWhinney & Bates, 1989), and the incremental processing account (Trueswell & Gleitman, 2004). Moreover, the experiments in this dissertation test the influence of a word order strategy in a language like Tagalog, where the thematic roles are always unambiguous in a sentence, due to its verb-initial order and its voice-marking system. In Tagalog’s voice-marking system, the inflection on the verb indicates the thematic role of the noun marked by 'ang.' First, the possible basis for a word order strategy in Tagalog was established using a sentence completion experiment given to adults and 5- and 7-year-old children (Chapter 2) and a child-directed speech corpus analysis (Chapter 3). In general, adults and children showed an agent-before-patient preference, although adults’ preference was also affected by sentence voice. Children’s comprehension was then examined through a self-paced listening and picture verification task (Chapter 3) and an eye-tracking and picture selection task (Chapter 4), where word order (agent-initial or patient-initial) and voice (agent voice or patient voice) were manipulated. Offline (i.e., accuracy) and online (i.e., listening times, looks to the target) measures revealed that 5- and 7-year-old Tagalog-speaking children had a bias to interpret the first noun as the agent. Additionally, the use of word order and morphosyntactic markers was found to be modulated by voice. 
In the agent voice, children relied more on a word order strategy, while in the patient voice, they relied on the morphosyntactic markers. These results are only partially explained by the accounts tested in this dissertation. Instead, the findings support computational accounts of incremental word prediction and learning, such as Chang, Dell, and Bock’s (2006) model.
There are many factors which make speaking and understanding a second language (L2) a highly complex challenge. Skills and competencies in both linguistic and metalinguistic areas emerge as parts of a multi-faceted, flexible concept underlying bilingual/multilingual communication. On the linguistic level, a combination of an extended knowledge of idiomatic expressions, a broad lexical familiarity, a large vocabulary size, and the ability to deal with phonetic distinctions and fine phonetic detail has been argued to be necessary for effective nonnative comprehension of spoken language. The scientific interest in these factors has also led to more interest in the L2’s information structure, the way in which information is organised and packaged into informational units, both within and between clauses. On a practical level, the information structure of a language can offer the means to assign focus to a certain element considered important. Speakers can draw from a rich pool of linguistic means to express this focus, and listeners can in turn interpret these to guide them to the highlighted information, which facilitates comprehension and results in an appropriate understanding of what has been said. If a speaker does not follow the principles of information structure, and the main accent in a sentence is placed on an unimportant word, inappropriate information transfer and misunderstandings may arise within the discourse. The concept of focus as part of the information structure of a language, the linguistic means used to express it, and the differential use of focus in native and nonnative language processing are central to this dissertation. Languages exhibit a wide range of ways of directing focus, including by prosodic means, by syntactic constructions, and by lexical means. The general principles underlying information structure seem to contrast structurally across different languages, and languages can also differ in the way they express focus.
In the context of L2 acquisition, characteristics of the L1 linguistic system are argued to influence the acquisition of the L2. Similarly, the conceptual patterns of information structure of the L1 may influence the organization of information in the L2. However, strategies and patterns used to exploit information structure for successful language comprehension in the L1 may not apply at all, or may work in different ways or to different degrees in the L2. This means that L2 learners ideally have to understand the way that information structure is expressed in the L2 to fully use the information-structural benefit in the L2. The knowledge of information-structural requirements in the L2 could also imply that the learner has to make adjustments regarding the use of information-structural devices in the L2. The general question is whether the various means to mark focus in the learners’ native language are also accessible in the nonnative language, and whether an L1-L2 transfer of their usage should be considered desirable. The current work explores how information structure helps the listener to discover and structure the forms and meanings of the L2. The central hypothesis is that the ability to access information structure has an impact on the level of the learners’ appropriateness and linguistic competence in the L2. Ultimately, the ability to make use of information structure in the L2 is believed to underpin the L2 learners’ ability to communicate effectively in the L2. The present study investigated how the use of focus markers affects processing speed and word recall in a native-nonnative language comparison. The predominant research question was whether the type of focus marking leads to more efficient and accurate word processing in marked structures than in unmarked structures, and whether differences in processing patterns can be observed between the two language conditions.
Three perception studies were conducted, each concentrating on one of the following linguistic parameters: 1. Prosodic prominence: Does prosodic focus conveyed by sentence accent and by word position facilitate word recognition? 2. Syntactic means: Do cleft constructions result in faster and more accurate word processing? 3. Lexical means: Does focus conveyed by the particles even/only (German: sogar/nur) facilitate word processing and word recall? Experiments 2 and 3 additionally investigated the contribution of context in the form of preceding questions. Furthermore, they considered accent and its facilitative effect on the processing of words which are in the scope of syntactic or lexical focus marking. All three experiments tested German learners of English in a native German language condition and in English as their L2. Native English speakers were included as a control for the English language condition. Test materials consisted of single sentences, all dealing with bird life. Experiment 1 tested word recognition in three focus conditions (broad focus, narrow focus on the target, and narrow focus on a constituent other than the target), in one condition using natural unmanipulated sentences, and in the other two conditions using spliced sentences. Experiment 2 (effect of syntactic focus marking) and Experiment 3 (effect of lexical focus marking) used phoneme monitoring as a measure for the speed of word processing. Additionally, a word recall test (4AFC) was conducted to assess the effective entry of target-bearing words in the listeners’ memory.

Experiment 1: Focus marking by prosodic means. Prosodic focus marking by pitch accent was found to highlight important information (Bolinger, 1972), making the accented word perceptually more prominent (Klatt, 1976; van Santen & Olive, 1990; Eefting, 1991; Koopmans-van Beinum & van Bergem, 1989). However, accent structure seems to be processed faster in native than in nonnative listening (Akker & Cutler, 2003, Expt. 3).
Therefore, it is expected that prosodically marked words are better recognised than unmarked words, and that listeners can exploit accent structure better for accurate word recognition in their L1 than they do in the L2 (L1 > L2). Altogether, a difference in word recognition performance in L1 listening is expected between different focus conditions (narrow focus > broad focus). Results of Experiment 1 show that words were better recognized in native listening than in nonnative listening. Focal accent, however, does not seem to help the German subjects recognize accented words more accurately, in either the L1 or the L2. This could be due to the focus conditions not being acoustically distinctive enough. Results of experiments with spliced materials suggest that the surrounding prosodic sentence contour made listeners remember a target word, rather than the local, prosodic realization of the word. Prosody does indeed seem to direct listeners’ attention to the focus of the sentence (see Cutler, 1976). Regarding the salience of word position, VanPatten (2002; 2004) postulated a sentence location principle for L2 processing, stating a ranking of initial > final > medial word position. Other evidence mentions a processing advantage of items occurring late in the sentence (Akker & Cutler, 2003), and Rast (2003) observed in an English L2 production study a trend towards an advantage for items occurring at the outer ends of the sentence. The current Experiment 1 aimed to keep the sentences at an acceptable length, mainly to keep the task in the nonnative language condition feasible. Word length showed an effect only in combination with word position (Rast, 2003; Rast & Dommergues, 2003). Therefore, word length was included in the current experiment as a secondary factor and without hypotheses. Results of Experiment 1 revealed that the length of a word does not seem to be important for its accurate recognition.
Word position, specifically the final position, clearly seems to facilitate accurate word recognition in German. A similar trend emerges in the English L2 condition, confirming Klein (1984) and Slobin (1985). The results do not support the sentence location principle of VanPatten (2002; 2004). The salience of the final position is interpreted as a recency effect (Murdock, 1962). In addition, the advantage of the final position may benefit from the discourse convention that relevant background information is referred to first, and what is novel later (Haviland & Clark, 1974). This structure is assumed to cue the listener as to what the speaker considers to be important information, and listeners might have reacted according to this convention.

Experiment 2: Focus marking by syntactic means. Atypical syntactic structures often draw listeners’ attention to certain information in an utterance, and the cleft structure as a focus marking device appears to be a common surface feature in many languages (Lambrecht, 2001). Surface structure influences sentence processing (Foss & Lynch, 1969; Langford & Holmes, 1979), which leads to competing hypotheses in Experiment 2: on the one hand, the focusing effect of the cleft construction might reduce processing times. On the other, cleft constructions were found to be used less to mark focus in German than in English (Ahlemeyer & Kohlhof, 1999; Doherty, 1999; E. Klein, 1988). The complexity of the constructions and the experience from the native language might work against an advantage of the focus effect in the L2. Results of Experiment 2 show that the cleft structure is an effective device to mark focus in German L1. The processing advantage is explained by the low degree of structural markedness of cleft structures: listeners use the focus function of sentence types headed by the dummy subject es (English: it) due to reliance on 'safe' subject-prominent SVO structures.
The benefit of clefts is enhanced when the sentences are presented with context, suggesting a substantial benefit when the focus effects of syntactic surface structure and the coherence relation between sentences are integrated. Clefts facilitate word processing for English native speakers. Contrary to German L1, the marked cleft construction does not reduce processing times in English L2. The L1-L2 difference was interpreted as a learner problem of applying specific linguistic structures according to the principles of information structure in the target language. Focus marking by cleft did not help German learners in native or in nonnative word recall. This could be attributed to the phonological similarity of the multiple-choice options (Conrad & Hull, 1964), and to the long time span between listening and recall (Birch & Garnsey, 1995; McKoon et al., 1993).

Experiment 3: Focus marking by lexical means. Focus particles are elements of structure that can indicate focus (König, 1991), and their function is to emphasize a certain part of the sentence (Paterson et al., 1999). I argue that the focus particles even/only (German: sogar/nur) evoke contrast sets of alternatives or complements to the element in focus (Ni et al., 1996), which invites context-based interpretation. Therefore, lexical focus marking is not expected to lead to faster word processing. However, since different mechanisms of encoding seem to underlie word memory, a benefit of the focusing function of particles is expected to show in the recall task: because focus particles are a preferred and well-used feature for native speakers of German, a transfer of this habitual use is expected, resulting in better recall of focused words. The results indicated that focus particles seem to be the weakest option for marking focus: focus marking by lexical particles does not seem to reduce word processing times in German L1, English L2, or English L1.
The presence of focus particles is likely to instantiate a complex discourse model which makes the listener await further modifying information (Liversedge et al., 2002). This semantic complexity might slow down processing. There are no indications that focus particles facilitate native language word recall in German L1 and English L1. This could be because focus particles open up sets of conditions and contexts that enlarge the set of representations in listeners rather than narrowing it down to the element in the scope of the focus particle. In word recall, the facilitative effect of focus particles emerges only in the nonnative language condition. It is suggested that L2 learners, when faced with more demanding tasks in an L2, use a broad variety of means that identify focus for a better representation of novel words in memory. In Experiments 2 and 3, evidence suggests that accent is an important factor for efficient word processing and accurate recall in German L1 and English L1, but less so in English L2. This underlines the function of accent as a core speech parameter and consistent cue to the perception of prominence in native language use (see Cutler & Fodor, 1979; Pitt & Samuel, 1990a; Eriksson et al., 2002; Akker & Cutler, 2003); the L1-L2 difference is attributed to patterns of expectation that are employed in the L1 but not (yet?) in the L2. There seems to exist a fine-tuned sensitivity to how accents are distributed in the native language: listeners expect an appropriate distribution and interpret it accordingly (Eefting, 1991). This argues for accent placement being extremely important to L2 proficiency; the current results also suggest that accent and its relationship with other speech parameters have to be newly established in the L2 to fully reveal their benefits for efficient processing of speech.
There is evidence that additional context facilitates the processing of complex syntactic structures, but that a surplus of information has no effect if the sentence construction is less challenging for the listener. The increased amount of information to be processed seems to impede rather than improve word recall, particularly in the L2. Altogether, it seems that focus marking devices and context can combine to form an advantageous alliance: a substantial benefit in processing efficiency is found when parameters of focus marking and sentence coherence are integrated. L2 research advocates the beneficial aspects of providing context for efficient L2 word learning (Lawson & Hogben, 1996). The current thesis promotes the view that a context which offers more semantic, prosodic, or lexical connections might compensate for the additional processing load that context constitutes for the listener. A methodological consideration concerns the order in which language conditions are presented to listeners, i.e., L1-L2 or L2-L1. Findings suggest that presentation order could induce a learning bias, with the performance in the second experiment being influenced by knowledge acquired in the first (see Akker & Cutler, 2003). To conclude this work: the results of the present study suggest that information structure is more accessible in the native language than in the nonnative language. There is, however, some evidence that L2 learners have an understanding of the significance of some information-structural parameters of focus marking. This has a beneficial effect on processing efficiency and recall accuracy; on the cognitive side, it illustrates the benefits and also the need of a dynamic exchange of information-structural organization between L1 and L2. The findings of the current thesis encourage the view that an understanding of information structure can help the learner to discover and categorise forms and meanings of the L2.
Information structure thus emerges as a valuable resource to advance proficiency in a second language.
Adopting a minimalist framework, the dissertation provides an analysis of the syntactic structure of comparatives, with special attention paid to the derivation of the subclause. The proposed account explains how the comparative subclause is connected to the matrix clause, how the subclause is formed in the syntax, and what additional processes contribute to its final structure. In addition, it casts light upon these problems in cross-linguistic terms and provides a model that allows for synchronic and diachronic differences. This also enables one to give a more adequate explanation for the phenomena found in English comparatives, since the properties of English structures can then be linked to general settings of the language and hence need no longer be considered idiosyncratic features of the grammar of English. First, the dissertation provides a unified analysis of degree expressions, relating the structure of comparatives to that of other degrees. It is shown that gradable adjectives are located within a degree phrase (DegP), which in turn projects a quantifier phrase (QP), and that these two functional layers are always present, irrespective of whether there is a phonologically visible element in these layers. Second, the dissertation presents a novel analysis of Comparative Deletion by reducing it to an overtness constraint holding on operators: in this way, it is reduced to morphological differences, and cross-linguistic variation is not conditioned by postulating an arbitrary parameter. Cross-linguistic differences are ultimately dependent on whether a language has overt operators equipped with the relevant – [+compr] and [+rel] – features. Third, the dissertation provides an adequate explanation for the phenomenon of Attributive Comparative Deletion, as attested in English, by relating it to the regular mechanism of Comparative Deletion.
I assume that Attributive Comparative Deletion is not a universal phenomenon, and that its presence in English can be conditioned by independent, more general rules, while the absence of such restrictions leads to its absence in other languages. Fourth, the dissertation accounts for certain phenomena related to diachronic changes, examining how changes in the status of comparative operators led to changes in whether Comparative Deletion is attested in a given language: I argue that only operators without a lexical XP can be grammaticalised. The underlying mechanisms are essentially general economy principles, and hence the processes are not language-specific or exceptional. Fifth, the dissertation accounts for optional ellipsis processes that play a crucial role in the derivation of typical comparative subclauses. These processes are not directly related to the structure of degree expressions and hence the elimination of the quantified expression from the subclause; nevertheless, they are shown to interact with the mechanisms underlying Comparative Deletion or the absence thereof.
Pronoun resolution normally takes place without conscious effort or awareness, yet the processes behind it are far from straightforward. A large number of cues and constraints have previously been recognised as playing a role in the identification and integration of potential antecedents, yet there is considerable debate over how these operate within the resolution process. The aim of this thesis is to investigate how the parser handles multiple antecedents in order to understand more about how certain information sources play a role during pronoun resolution. I consider how both structural information and information provided by the prior discourse are used during online processing. This is investigated through several experiments using eye tracking during reading, complemented by a number of offline questionnaire experiments. I begin by considering how condition B of the Binding Theory (Chomsky 1981; 1986) has been captured in pronoun processing models; some researchers have claimed that processing is faithful to syntactic constraints from the beginning of the search (e.g. Nicol and Swinney 1989), while others have claimed that potential antecedents which are ruled out on structural grounds nonetheless affect processing, because the parser must also pay attention to a potential antecedent’s features (e.g. Badecker and Straub 2002). My experimental findings demonstrate that the parser is sensitive to the subtle changes in syntactic configuration which either allow or disallow pronoun reference to a local antecedent, and indicate that the parser is normally faithful to condition B at all stages of processing. Secondly, I test the Primitives of Binding hypothesis proposed by Koornneef (2008), based on work by Reuland (2001), which is a modular approach to pronoun resolution in which variable binding (a semantic relationship between pronoun and antecedent) takes place before coreference.
I demonstrate that a variable-binding (VB) antecedent is not systematically considered earlier than a coreference (CR) antecedent online. I then go on to explore whether these findings could be attributed to the linear order of the antecedents, and uncover a robust recency preference both online and offline. I consider what role the factor of recency plays in pronoun resolution and how it can be reconciled with the first-mention advantage (Gernsbacher and Hargreaves 1988; Arnold 2001; Arnold et al., 2007). Finally, I investigate how aspects of the prior discourse affect pronoun resolution. Prior discourse status clearly had an effect on pronoun resolution, but an antecedent’s appearance in the previous context was not always facilitative; I propose that this is due to the number of topic switches that a reader must make, leading to a lack of discourse coherence which has a detrimental effect on pronoun resolution. The sensitivity of the parser to structural cues does not entail that cue types can be easily separated into distinct sequential stages, and I therefore propose that the parser is structurally sensitive but not modular. Aspects of pronoun resolution can be captured within a parallel constraints model of pronoun resolution, however, such a model should be sensitive to the activation of potential antecedents based on discourse factors, and structural cues should be strongly weighted.
It is a common finding that preschoolers have difficulties in identifying who is doing what to whom in non-canonical sentences, such as object-verb-subject (OVS) and passive sentences in German. This dissertation investigates how German monolingual and German-Italian simultaneous bilingual children process German OVS sentences in Study 1 and German passives in Study 2. Offline data (i.e., accuracy data) and online data (i.e., eye-gaze and pupillometry data) were analyzed to explore whether children can assign thematic roles during sentence comprehension and processing. Executive functions, language-internal and language-external factors were investigated as potential predictors of children’s sentence comprehension and processing.
Throughout the literature, there are contradictory findings on the relation between language and executive functions. While some results show a bilingual cognitive advantage over monolingual speakers, others suggest there is no relationship between bilingualism and executive functions. If bilingual children possess more advanced executive function abilities than monolingual children, then this might also be reflected in a better performance on linguistic tasks. In the current studies, monolingual and bilingual children were tested by means of two executive function tasks: the Flanker task and the task-switching paradigm. However, the findings showed no bilingual cognitive advantage and no better performance by bilingual children in the linguistic tasks. The performance was rather comparable between bilingual and monolingual children, or even better for the monolingual group. This may be due to cross-linguistic influences and language experience (i.e., language input and output). Italian was used because it does not syntactically overlap with the structure of German OVS sentences, and it overlapped with only one of the two sentence conditions used in the passive study, with respect to subject-(finite) verb alignment. The findings showed a better performance of bilingual children in the passive sentence structure that syntactically overlapped in the two languages, providing evidence for cross-linguistic influences.
Further factors in children’s sentence comprehension were considered. The parents’ education, the number of older siblings and language experience variables were derived from a language background questionnaire completed by parents. Scores of receptive vocabulary and grammar, visual and short-term memory and reasoning ability were measured by means of standardized tests. It was shown that higher German language experience in bilinguals correlates with better accuracy in German OVS sentences but not in passive sentences. Memory capacity had a positive effect on the comprehension of OVS and passive sentences in the bilingual group. Additionally, executive function abilities played a role in the comprehension of OVS sentences but not of passive sentences. It is suggested that executive function abilities might help children in the sentence comprehension task when the linguistic structures are not yet fully mastered.
Altogether, these findings show that bilinguals’ poorer performance in the comprehension and processing of German OVS is mainly due to reduced language experience in German, and that the different performance of bilingual children with the two types of passives is mainly due to cross-linguistic influences.
The present dissertation focuses on the question whether and under which conditions infants recognise clauses in fluent speech and the role a prosodic marker such as a pause may have in the segmentation process. In the speech signal, syntactic clauses often coincide with intonational phrases (IPhs) (Nespor & Vogel, 1986, p. 190), the boundaries of which are marked by changes in fundamental frequency (e.g., Price, Ostendorf, Shattuck-Hufnagel & Fong, 1991), lengthening of the final syllable (e.g., Cooper & Paccia-Cooper, 1980) and the occurrence of a pause (Nespor & Vogel, 1986, p. 188). Thus, IPhs seem to be reliably marked in the speech stream and infants may use these cues to recognise them. Furthermore, corpus studies on the occurrence and distribution of pauses have revealed that there is a strong correlation between the duration of a pause and the type of boundary it marks (e.g., Butcher, 1981, for German). Pauses between words are either non-existent or short, pauses between phrases are a bit longer, and pauses between clauses and at sentence boundaries further increase in duration. This suggests the existence of a natural pause hierarchy that complements the prosodic hierarchy described by Nespor and Vogel (1986). These hierarchies on the side of the speech signal correspond to the syntactic hierarchy of a language. In the present study, five experiments using the Headturn preference paradigm (Hirsh-Pasek, Kemler Nelson, Jusczyk, Cassidy, Druss & Kennedy, 1987) were conducted to investigate German-learning 6- and 8-month-olds’ use of pauses to recognise clauses in the signal and their sensitivity to the natural pause hierarchy. Previous studies on English-learning infants’ recognition of clauses (Hirsh-Pasek et al., 1987; Nazzi, Kemler Nelson, Jusczyk & Jusczyk, 2000) have found that infants as young as 6 months recognise clauses in fluent speech. 
Recently, Seidl and colleagues have begun to investigate the status the pause may have in this process (Seidl, 2007; Johnson & Seidl, 2008; Seidl & Cristià, 2008). However, none of these studies investigated infants’ sensitivity to the natural pause hierarchy, and especially the sensitivity to the correlation between pause durations and the respective boundary types (within-sentence clause boundaries vs. sentence boundaries). To address these questions, highly controlled stimuli were used. In all five experiments the stimuli were sentences consisting of two IPhs, each of which coincided with a syntactic clause. In the first three experiments pauses were inserted either at clause and sentence boundaries or within the first clause and at the sentence boundaries. The duration of the pauses varied between the experiments. The results show that German-learning 6-month-olds recognise clauses in the speech stream, but only in a condition in which the duration of the pauses conforms to the mean duration of pauses found at the respective boundaries in German. Experiments 4 and 5 explicitly addressed the question of infants’ sensitivity to the natural pause hierarchy by inserting pauses at the clause and sentence boundaries only. Their durations either conformed to the natural pause hierarchy or were reversed. The results of these experiments provide evidence that 8-, but not 6-month-olds seem to be sensitive to the correlation between the duration of pauses and the type of boundary they demarcate. The present study provides first evidence that infants not only use pauses to recognise clause and sentence boundaries, but are also sensitive to the duration and distribution of pauses in their native language, as reflected in the natural pause hierarchy.
The individual’s mental lexicon comprises all known words as well as related information on semantics, orthography and phonology. Moreover, entries connect due to similarities in these language domains, building a large network structure. The access to lexical information is crucial for the processing of words and sentences. Thus, a lack of information inhibits retrieval and can cause language processing difficulties. Hence, the composition of the mental lexicon is essential for language skills, and its assessment is a central topic of linguistic and educational research.
In early childhood, measurement of the mental lexicon is uncomplicated, for example through parental questionnaires or the analysis of speech samples. However, with growing content the measurement becomes more challenging: with more and more words in the mental lexicon, the inclusion of all possibly known words into a test or questionnaire becomes impossible. That is why there is a lack of methods to assess the mental lexicon of school children and adults. For the same reason, there are only few findings on the course of lexical development during the school years, as well as on its specific effect on other language skills. This dissertation is intended to close this gap by pursuing two major goals: First, I wanted to develop a method to assess lexical features, namely lexicon size and lexical structure, for children of different age groups. Second, I aimed to describe the results of this method in terms of the lexical development of size and structure. The findings were intended to help in understanding the mechanisms of lexical acquisition and to inform theories of vocabulary growth.
The approach is based on the dictionary method, where a sample of words out of a dictionary is tested and the results are projected onto the whole dictionary to determine an individual’s lexicon size. In the present study, the childLex corpus, a German written-language corpus for children, served as the basis for lexicon size estimation. The corpus is assumed to comprise all words children attending primary school could know. Testing a sample of words out of the corpus enables projection of the results onto the whole corpus. For this purpose, a vocabulary test based on the corpus was developed. Afterwards, the test performance of virtual participants was simulated by drawing lexicons of different sizes from the corpus and checking whether each test item was included in the lexicon or not. This allowed the determination of the relation between test performance and total lexicon size, which could then be transferred to a sample of real participants. Besides lexicon size, lexical content could be approximated with this approach and analyzed in terms of lexical structure.
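The simulation logic described above can be sketched roughly as follows; the toy corpus, the uniform random draw, and all names here are illustrative assumptions rather than the dissertation's actual implementation (the real study presumably sampled lexicons in a frequency-informed way from childLex):

```python
import random

def simulate_score_by_size(corpus, test_items, sizes, n_virtual=200, seed=1):
    """For each candidate lexicon size, simulate virtual participants whose
    lexicon is a random draw of that many words from the corpus, and record
    the mean proportion of test items found in their simulated lexicons."""
    rng = random.Random(seed)
    relation = {}
    for size in sizes:
        scores = []
        for _ in range(n_virtual):
            lexicon = set(rng.sample(corpus, size))  # hypothetical uniform draw
            scores.append(sum(w in lexicon for w in test_items) / len(test_items))
        relation[size] = sum(scores) / n_virtual
    return relation

def estimate_size(score, relation):
    """Map an observed test score to the lexicon size whose simulated
    mean score is closest (a simple inversion of the relation)."""
    return min(relation, key=lambda size: abs(relation[size] - score))

# toy example with an invented 1,000-word corpus and a 40-item test
corpus = [f"word{i}" for i in range(1000)]
test_items = corpus[::25]  # every 25th word serves as a test item
relation = simulate_score_by_size(corpus, test_items, sizes=[100, 500, 900])
```

Because larger simulated lexicons contain more of the test items, the resulting relation is monotone in lexicon size, which is what allows a real participant's test score to be read back as an estimated total lexicon size.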
To pursue these aims and establish the sampling method, I conducted three consecutive studies. Study 1 comprises the development of a vocabulary test based on the childLex corpus. The test uses the yes/no format and includes three versions for different age groups. Its validation, grounded in the Rasch model, shows that it is a valid instrument for measuring the vocabulary of primary school children in German. In Study 2, I established the method to estimate lexicon sizes and present results on lexical development during primary school. The results are plausible and demonstrate that lexical growth follows a quadratic function, starting with about 6,000 words at the beginning of school and reaching about 73,000 words on average for young adults. Moreover, the study revealed large interindividual differences. Study 3 focused on the analysis of network structures in the mental lexicon based on orthographic similarities and on their development. It demonstrates that these networks possess small-world characteristics and decrease in interconnectivity with age.
Taken together, this dissertation provides an innovative approach to assessing and describing the development of the mental lexicon from primary school onwards. The studies provide results on lexical acquisition in different age groups that were missing before. They underline the importance of this period and reveal extensive interindividual differences in lexical development; addressing the causes and prevention of these differences is a central task for future research. In addition, applying the method in further research (e.g., adapting it for other target groups) and for teaching purposes (e.g., adapting texts to different target groups) appears promising.
This dissertation investigates the development of the prosodic structure of simplex words and compounds in German. Longitudinally collected production data from four monolingual children aged 12 to 26 months are analyzed. Four developmental stages are assumed, within which, however, no uniform outputs are produced. The asymmetries between the different words are systematically traced back to the structure of the target word. An Optimality Theory analysis shows that the developmental stages result from the reranking of constraints, and that the same ranking predicts the variation between word types at a given developmental stage.
This thesis investigates the comprehension of the passive voice in three distinct populations. First, the comprehension of passives by adult German speakers was studied, followed by an examination of how German-speaking children comprehend the structure. Finally, bilingual Mandarin-English speakers were tested on their comprehension of the passive voice in English, their L2. An integral part of testing comprehension in all three populations is the use of structural priming, which served a specific purpose in each of the three parts of the research. In the study involving adult German speakers, productive and receptive structural priming were directly compared, with the goal of assessing the effect of the two priming modalities on language comprehension. In the study on German-acquiring children, structural priming was an important tool for answering the question of the delayed acquisition of the passive voice. Finally, in the study on the bilingual population, cross-linguistic priming was used to investigate the importance of word order for the priming effect, since Mandarin and English have different word orders in passive voice sentences.
The comprehension of figurative language : electrophysiological evidence on the processing of irony
(2008)
This dissertation investigates the comprehension of figurative language, in particular the temporal processing of verbal irony. In six experiments using event-related potentials (ERPs), brain activity during the comprehension of ironic utterances was measured and analyzed in relation to equivalent non-ironic utterances. Moreover, the impact of various language-accompanying cues, e.g., prosody or the use of punctuation marks, as well as of non-verbal cues such as pragmatic knowledge, was examined with respect to the processing of irony. On the basis of these findings, different models of figurative language comprehension, i.e., the 'standard pragmatic model', the 'graded salience hypothesis', and the 'direct access view', are discussed.
This thesis investigates temporal and aspectual reference in the typologically unrelated African languages Hausa (Chadic, Afro–Asiatic) and Medumba (Grassfields Bantu).
It argues that Hausa is a genuinely tenseless language and compares the interpretation of temporally unmarked sentences in Hausa to that of morphologically tenseless sentences in Medumba, where tense marking is optional and graded.
The empirical behavior of the optional temporal morphemes in Medumba motivates an analysis as existential quantifiers over times and thus provides new evidence suggesting that languages vary in whether their (past) tense is pronominal or quantificational (see also Sharvit 2014).
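The pronominal/quantificational contrast can be made concrete with textbook-style denotations (these are standard formulations from the formal-semantics literature, not necessarily the thesis's exact analysis):

```latex
% A quantificational past existentially binds the reference time;
% a pronominal past picks it up from the assignment g (Partee-style).
\[
\llbracket \textsc{past}_{\exists} \rrbracket^{g,t_0}
  = \lambda P_{\langle i,t\rangle}.\,\exists t\,[\,t \prec t_0 \wedge P(t)\,]
\]
\[
\llbracket \textsc{past}_i \rrbracket^{g,t_0} = g(i),
  \quad \text{defined only if } g(i) \prec t_0
\]
```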
The thesis proposes for both Hausa and Medumba that the alleged future tense marker is a modal element that obligatorily combines with a prospective future shifter (which is covert in Medumba). Cross-linguistic variation in whether or not a future marker is compatible with non-future interpretation is proposed to be predictable from the aspectual architecture of the given language.
Since the beginnings of empirical neuroscientific research, language competence has primarily been regarded as an achievement of the cerebral cortex. However, particularly in the wake of improving imaging techniques, aphasic syndromes have also been demonstrated after lesions of subcortical brain regions, in particular the basal ganglia and the thalamus. These structures lie deep within the brain and communicate with the cortex via widely branching fiber connections. The basal ganglia are primarily assigned sensorimotor control functions; accordingly, various diseases characterized by disturbances of physiological movement (e.g., Parkinson's disease, Huntington's disease) are attributed to functional defects of these structures. The thalamus is often conceived of as a relay station for the exchange of information between anatomically distant areas of the nervous system. Beyond this, however, the basal ganglia and thalamus have also been credited with further functions, e.g., in providing, maintaining, and shifting attention during the performance of cognitive tasks. The present work used electrophysiological methods to investigate whether cognitive language functions, specifically syntactic and semantic processing, can be demonstrated at the level of the thalamus and basal ganglia, and to what extent subcortical language processing may differ from cortical language processing. The investigation of specific language functions of the basal ganglia and the thalamus is possible in the context of the surgical treatment of movement-disorder patients with deep brain stimulation (DBS). Here, stimulation electrodes are implanted in the subthalamic nucleus (STN) of patients with Parkinson's disease; in patients with generalized dystonia, the implantation targets the globus pallidus internus (GPI), and in patients with essential tremor the nucleus ventralis intermedius (VIM).
The STN and GPI are core areas of the basal ganglia; the VIM is part of the motor system. After implantation, it is possible to record electroencephalographic (EEG) signals directly from these electrodes and to compare them with simultaneously recorded surface EEG. In this work, DBS patients from all of the above groups were examined with respect to language comprehension. In addition to correct sentences, the patients heard sentences containing syntactic or semantic violations. Various studies have described ERP components (ERP = event-related potential) at the scalp surface that are associated with the processing of such violations: syntactic phrase-structure violations elicit an early left-anterior negativity (ELAN); this component is followed by a late positivity (P600), which is associated with reanalysis and repair mechanisms; semantic violations evoke a broad negativity around 400 ms (N400). In the thalamic recordings, two additional syntactic violation-related components were found, occurring (i) about 80 ms after the scalp ELAN and (ii) about 70 ms before the scalp P600. For semantic violations, a violation-related potential was demonstrated in the thalamus that ran largely parallel to the pattern found at the scalp. From the results of the present study it follows that the thalamus performs specific language functions. Components reflecting language processing could not be identified in the basal ganglia structures STN and GPI. On the basis of the data, two separate networks are assumed for the processing of syntactic and semantic violations, respectively, in which the thalamus appears to take on specific tasks. In a 'syntax network', frontal brain structures communicate with parietal brain structures with the involvement of the thalamus.
The thalamus was attributed a mediating function in syntactic reanalysis. In a 'semantics network', no clearly attributable processes were demonstrable at the thalamic level; instead, a diffuse but nonetheless specific activation of the thalamus was shown over the entire period of cortical semantic analysis, which was interpreted as the integration of different analysis mechanisms.
This dissertation addresses the topic of synonymy in the particular case of phraseological units. It is a corpus-based, empirical investigation concerned above all with the question of whether and to what extent it is possible to approach typically semantic categories such as meaning, idiomaticity, figurativeness, etc. through the study of typical usage patterns. This research question is motivated by fundamental aspects of the discussion of meaning and synonymy that remain contested in the linguistic literature. On the one hand, the study is intended as a contribution to a better understanding of the relationship between usage data and meaning, and of the status of traditional dictionary-style meaning descriptions within a usage-based semantics; on the other hand, it aims to gain detailed insights into the transferability of the concept of synonymy from single lexemes to phraseological units. Under the assumption that human learning and inference of meaning works primarily empirically, analyzing the range of variance in the actual contextual behavior of phraseological units with the same or similar meaning is suited to yield detailed insights into the correlation between aspects of meaning and usage, as well as into the influence of phraseology-specific properties. The starting point of the investigation is a group of phraseological units that are classified in dictionaries as similar in meaning or synonymous. Among these phraseological units are expressions of varying figurativeness, idiomaticity, and morphosyntactic structure, some of which consist of several content words, such as jmd. schüttelt etw. aus dem Ärmel, jmd. hat etw. mit der Muttermilch eingesogen, and jmd. weiß Bescheid, while others are merely combinations of one content word with one or more function words and the verb haben or sein.
The latter group includes expressions such as jmd. hat das Zeug zu etw., jmd. ist auf Zack, and jmd. ist vom Fach. This heterogeneity of the expressions under investigation with respect to their phraseological status makes them a suitable object for a differentiated examination of the role of phraseology-specific properties in the description of meaning and synonymy. In a first step, the investigation consists of a detailed annotation of all occurrence contexts of the phraseological units in the DWDS corpus (Berlin-Brandenburgische Akademie der Wissenschaften, www.dwds.de) on the basis of a set of predefined analysis criteria. In a second step, a description of the typical usage features of each individual lexical unit is derived from this annotation. At the same time, meaning descriptions are produced according to a fixed method, in the form of paraphrases reduced to elementary meaning components. These descriptions of the meaning and usage of the phraseological units under investigation form the basis for the third part of the work, in which the two levels of description are related to each other. As a result, the work shows that (a) identifiable systematic correlations exist between the usage and the meaning of lexical (phraseological) units, enabling a data-centered approach to the investigation and description of lexical semantics; (b) within a group of synonyms, as well as within an extended set of quasi-synonymous expressions, each individual element occupies its own place that distinguishes it from all other elements; and (c) the usage data of a phraseological unit provide positive evidence for the individual degree of relevance of the features fixedness, idiomaticity, and motivation of that unit.
Language comprehension with a cochlear implant: ERP studies with postlingually deafened adult CI users
(2004)
The present study addresses the question of how German vowels are perceived and produced by Polish learners of German as a Foreign Language. It comprises three main experiments: a discrimination experiment, a production experiment, and an identification experiment. With the exception of the discrimination task, the experiments further investigated the influence of orthographic marking on the perception and production of German vowel length. It was assumed that explicit markings such as the Dehnungs-h ("lengthening h") could help Polish GFL learners in perceiving and producing German words more correctly.
The discrimination experiment with manipulated nonce words showed that Polish GFL learners detect pure length differences in German vowels less accurately than German native speakers, while this was not the case for pure quality differences. The results of the identification experiment contrast with the results of the discrimination task in that Polish GFL learners were better at judging incorrect vowel length than incorrect vowel quality in manipulated real words. However, orthographic marking did not turn out to be the driving factor and it is suggested that metalinguistic awareness can explain the asymmetry between the two perception experiments. The production experiment supported the results of the identification task in that lengthening h did not help Polish learners in producing German vowel length more correctly. Yet, as far as vowel quality productions are concerned, it is argued that orthography does influence L2 sound productions because Polish learners seem to be negatively influenced by their native grapheme-to-phoneme correspondences.
It is concluded that it is important to differentiate between the influence of the L1 and L2 orthographic system. On the one hand, the investigation of the influence of orthographic vowel length markers in German suggests that Polish GFL learners do not make use of length information provided by the L2 orthographic system. On the other hand, the vowel quality data suggest that the L1 orthographic system plays a crucial role in the acquisition of a foreign language. It is therefore proposed that orthography influences the acquisition of foreign sounds, but not in the way it was originally assumed.
The immense popularity of online communication services in the last decade has not only upended our lives (with news spreading like wildfire on the Web, presidents announcing their decisions on Twitter, and the outcome of political elections being determined on Facebook) but also dramatically increased the amount of data exchanged on these platforms. Therefore, if we wish to understand the needs of modern society better and want to protect it from new threats, we urgently need more robust, higher-quality natural language processing (NLP) applications that can recognize such necessities and menaces automatically, by analyzing uncensored texts. Unfortunately, most NLP programs today have been created for standard language, as we know it from newspapers, or, in the best case, adapted to the specifics of English social media.
This thesis reduces the existing deficit by entering the new frontier of German online communication and addressing one of its most prolific forms: users' conversations on Twitter. In particular, it explores the ways and means by which people express their opinions on this service, examines current approaches to automatic mining of these feelings, and proposes novel methods, which outperform state-of-the-art techniques. For this purpose, I introduce a new corpus of German tweets that have been manually annotated with sentiments, their targets and holders, as well as lexical polarity items and their contextual modifiers. Using these data, I explore four major areas of sentiment research: (i) generation of sentiment lexicons, (ii) fine-grained opinion mining, (iii) message-level polarity classification, and (iv) discourse-aware sentiment analysis. In the first task, I compare three popular groups of lexicon generation methods: dictionary-, corpus-, and word-embedding–based ones, finding that dictionary-based systems generally yield better polarity lists than the last two groups. Apart from this, I propose a linear projection algorithm whose results surpass many existing automatically generated lexicons. Afterwards, in the second task, I examine two common approaches to automatic prediction of sentiment spans, their sources, and targets: conditional random fields (CRFs) and recurrent neural networks, obtaining higher scores with the former model and improving these results even further by redefining the structure of the CRF graphs. When dealing with message-level polarity classification, I juxtapose three major sentiment paradigms: lexicon-, machine-learning–, and deep-learning–based systems, and try to unite the first and last of these method groups by introducing a bidirectional neural network with lexicon-based attention.
Finally, in order to make the new classifier aware of microblogs' discourse structure, I let it separately analyze the elementary discourse units of each tweet and infer the overall polarity of a message from the scores of its EDUs with the help of two new approaches: latent-marginalized CRFs and Recursive Dirichlet Process.
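To make the lexicon-based paradigm mentioned above concrete, here is a deliberately minimal sketch of a message-level polarity classifier with a simple contextual modifier (negation flip). The lexicon entries, negator list, and scoring rule are illustrative assumptions, not the thesis's actual resources or method.

```python
# Toy lexicon-based polarity classifier: sum the lexicon weights of
# the tokens, flipping the sign of a polar term that directly follows
# a negator, and let the sign of the sum decide the message polarity.
# Lexicon and negator list are hypothetical examples.
POLARITY_LEXICON = {"gut": 1.0, "super": 1.5, "schlecht": -1.0, "furchtbar": -1.5}
NEGATORS = {"nicht", "kein", "keine"}

def classify_polarity(tokens):
    """Return 'positive', 'negative', or 'neutral' for a token list."""
    score = 0.0
    for i, tok in enumerate(tokens):
        weight = POLARITY_LEXICON.get(tok.lower())
        if weight is None:
            continue  # token not in the polarity lexicon
        if i > 0 and tokens[i - 1].lower() in NEGATORS:
            weight = -weight  # contextual modifier: simple negation flip
        score += weight
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

Real systems of this kind add intensifiers, scope handling, and emoticon entries; machine-learning and deep-learning paradigms instead learn such weights from annotated corpora.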
Semantic representation, obligatory activation, and verbal production of arithmetic facts
(2006)
The present work is devoted to the representation and processing of arithmetic facts. This domain of semantic knowledge is particularly well suited as an object of research because not only its individual components but also the relations between these components can be defined exceptionally well. Cognitive models can thus be developed with a degree of precision that is hardly attainable in other domains. Most current models agree in describing the representation of arithmetic facts as an associative, network-like structure in declarative memory. Despite this basic agreement, a number of questions remain open. The studies presented here address such open questions in three areas: (1) the neuroanatomical correlates, (2) neighborhood-consistency effects in verbal production, and (3) the automatic activation of arithmetic facts. In a combined fMRI and behavioral study, for example, the question of the neurofunctional correlates of arithmetic fact acquisition in adults was investigated. The starting point for this study was the triple-code model of Dehaene and Cohen, since it is the only model that also makes claims about the neuroanatomical correlates of numerical abilities. The triple-code model assumes that the retrieval of arithmetic facts requires a 'perisylvian' region of the left hemisphere, with the involvement of the basal ganglia and the angular gyrus (Dehaene & Cohen, 1995; Dehaene & Cohen, 1997; Dehaene, Piazza, Pinel, & Cohen, 2003). In the present study, healthy adults intensively practiced complex multiplication problems for about a week, so that answering them became increasingly automatized.
Solving these trained problems should thus rely increasingly on fact retrieval rather than on the application of procedures and strategies, whereas untrained problems, compared to trained ones, should place higher demands on executive functions, including working memory. After training, as expected, participants answered trained problems considerably faster and more accurately than untrained ones. They were additionally examined with magnetic resonance imaging. This first confirmed that solving multiplication problems is generally supported by a predominantly left-hemispheric network of frontal and parietal areas. The most important result, however, is a shift of brain activation from rather frontal activation patterns to a rather parietal activation and, within the parietal lobe, from the intraparietal sulcus to the angular gyrus for trained compared to untrained problems. This again underscored the central importance of working memory and planning for complex untrained calculation. In terms of the triple-code model, the shift within the parietal lobe may indicate a change from quantity-based calculation (intraparietal sulcus) to automatized fact retrieval (left angular gyrus). Are there neighborhood-consistency effects in the verbal production of arithmetic facts similar to those described in language processing? Such effects are predicted by the current 'triangle model' of Verguts & Fias (2004) for the representation of multiplication facts. According to this model, correct answers should be easier to produce when they share digits with as many semantically close incorrect answers as possible.
Possibly, then, incorrect answers should also be produced with greater probability when they share a digit with the correct answer. According to the triangle model, even the classic problem-size effect in simple multiplication (Zbrodoff & Logan, 2004) would be attributable to the consistency relations of the correct answer with semantically neighboring incorrect answers. In a reanalysis of error data from healthy participants (Campbell, 1997) and from a patient (Domahs, Bartha, & Delazer, 2003), evidence for decade-consistency effects in solving simple multiplication problems was indeed found: the participants and the patient had produced incorrect answers sharing the decade digit of the correct result significantly more often than otherwise comparable errors. This supports the assumption that the decade and unit digits of two-digit numbers have separate representations, in multiplication (Verguts & Fias, 2004) as well as in numerical processing in general (Nuerk, Weger, & Willmes, 2001; Nuerk & Willmes, 2005). In addition, a regression analysis of the error counts provided, for the first time, empirical evidence for the hypothesis that the classic problem-size effect in multiplication fact retrieval can be traced back to decade-consistency effects: although problem size entered the model as the first predictor, this variable was discarded as soon as a measure of the neighborhood consistency of the correct answer was added to the model. Finally, a further study investigated the automatic activation of multiplication facts in healthy participants using a number-identification task (Galfano, Rusconi, & Umilta, 2003; Lefevre, Bisanz, & Mrkonjic, 1988; Thibodeau, Lefevre, & Bisanz, 1996).
This study addressed, for the first time, the question of how the automatic activation of the actual multiplication results (Thibodeau et al., 1996) relates to the activation of neighboring incorrect answers (Galfano et al., 2003). Furthermore, presentation at different SOAs was used to clarify the temporal course of these activations. Overall, the results of this study can be taken as evidence for the existence and the automatic, obligatory activation of a network of arithmetic facts in healthy, educated adults, in which the correct products are more strongly associated with the operands than neighboring products (operand errors). Products of small problems lead to stronger interference than products of large problems, and operand errors of large problems lead to stronger interference than operand errors of small problems. Such an activation pattern fits well with the predictions of Siegler's distribution-of-associations model (Lemaire & Siegler, 1995; Siegler, 1988), in which small problems show a narrow distribution of associations around the correct result, whereas large problems show a broad distribution. The present work should thus have shed some more light on hitherto largely neglected aspects of the representation and retrieval of arithmetic facts: the neural correlates of their acquisition, the consequences of their embedding in the base-10 place-value system, and the specific effects of their associative semantic representation on their automatic activation.
References:
Campbell, J. I. (1997). On the relation between skilled performance of simple division and multiplication. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 1140-1159.
Dehaene, S. & Cohen, L. (1995). Towards an anatomical and functional model of number processing. Mathematical Cognition, 1, 83-120.
Dehaene, S. & Cohen, L. (1997). Cerebral pathways for calculation: double dissociation between rote verbal and quantitative knowledge of arithmetic. Cortex, 33, 219-250.
Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20, 487-506.
Domahs, F., Bartha, L., & Delazer, M. (2003). Rehabilitation of arithmetic abilities: Different intervention strategies for multiplication. Brain and Language, 87, 165-166.
Galfano, G., Rusconi, E., & Umilta, C. (2003). Automatic activation of multiplication facts: evidence from the nodes adjacent to the product. Quarterly Journal of Experimental Psychology A, 56, 31-61.
Lefevre, J. A., Bisanz, J., & Mrkonjic, L. (1988). Cognitive arithmetic: evidence for obligatory activation of arithmetic facts. Memory and Cognition, 16, 45-53.
Lemaire, P. & Siegler, R. S. (1995). Four aspects of strategic change: contributions to children's learning of multiplication. Journal of Experimental Psychology: General, 124, 83-97.
Nuerk, H. C., Weger, U., & Willmes, K. (2001). Decade breaks in the mental number line? Putting the tens and units back in different bins. Cognition, 82, B25-B33.
Nuerk, H. C. & Willmes, K. (2005). On the magnitude representations of two-digit numbers. Psychology Science, 47, 52-72.
Siegler, R. S. (1988). Strategy choice procedures and the development of multiplication skill. Journal of Experimental Psychology: General, 117, 258-275.
Thibodeau, M. H., Lefevre, J. A., & Bisanz, J. (1996). The extension of the interference effect to multiplication. Canadian Journal of Experimental Psychology, 50, 393-396.
Verguts, T. & Fias, W. (2004). Neighborhood effects in mental arithmetic. Psychology Science.
Zbrodoff, N. J. & Logan, G. D. (2004). What everyone finds: The problem-size effect. In J. I. D. Campbell (Ed.), Handbook of Mathematical Cognition (pp. 331-345). New York, NY: Psychology Press.
The study investigates the assumption of a differential weighting of distinctive encyclopedic, functional, and sensory features within the representations of objects of the living and nonliving semantic domains. For this purpose, a reaction-time experiment on feature verification was conducted. Beforehand, German norms of the estimated age of acquisition were collected for 244 stimuli from the corpus of Snodgrass & Vanderwart (1980), and a database of feature norms for 80 concrete object concepts was compiled. In total, two reaction-time experiments were conducted, which differed only in the presentation duration of the concept word: the concept word was presented for either 1000 ms (long presentation) or 250 ms (short presentation) before the semantic feature to be verified appeared. With long presentation of the object word, objects of the nonliving domain showed faster reaction times for verifying distinctive functional features than for verifying distinctive encyclopedic features. This effect was replicated with short presentation of the concept word. With short presentation, objects of the nonliving domain additionally showed shorter reaction times for verifying distinctive functional features than for verifying distinctive sensory features. For objects of the living domain, no reaction-time differences in verifying the semantic features were observed after either short or long presentation of the object word. The results are discussed against the background of current neurolinguistic models of the organization of semantic memory. They suggest that within the object representations of living objects all three feature types are intercorrelated.
For objects of the nonliving domain, strong intercorrelations between functional and sensory features are assumed. In addition, distinctive functional features are assumed to be particularly strongly weighted within the representations of nonliving objects.
In this thesis, I develop a theoretical implementation of prosodic reconstruction and apply it to the empirical domain of German sentences in which part of a focus or contrastive topic is fronted.
Prosodic reconstruction refers to the idea that sentences involving syntactic movement show prosodic parallels with corresponding simpler structures without movement. I propose to model this recurrent observation by ordering syntax-prosody mapping before copy deletion.
In order to account for the partial fronting data, the idea is extended to the mapping between prosody and information structure. This assumption helps to explain why object-initial sentences containing a broad focus or broad contrastive topic show prosodic and interpretive restrictions similar to those of sentences with canonical word order.
The empirical adequacy of the model is tested against a set of gradient acceptability judgments.