Does the foreign policy of the Grand Coalition suffer from a "lack of profile"? Marieluise Beck supports this claim by pointing to the German reaction to the US President's initiative for a world free of nuclear weapons. The Grand Coalition also falls short of the demands posed by the debates on human rights, the foreign-policy approach to China and Russia, and the use of targeted sanctions.
Given the political and economic interests at stake, the geopolitical realities in Eastern Europe and the historical background, cooperative relations with Russia are a matter of consensus within the German political class. German-Russian relations are repeatedly characterised as a "strategic partnership", meaning that this cooperation is intended to extend far beyond Germany's ordinary interests and to attain a global political dimension.
German foreign policy has undergone many changes from the 19th century to the present. For a long time Germany stood in a tense relationship to the European orders; today it is a key actor in the international community. Manfred Görtemaker, professor of modern history in Potsdam, traces the most important stages of this development in his contribution.
Thanks to its reliable foreign policy, Germany is a respected partner worldwide. At present, however, German foreign policy is failing to keep pace with developments in the globalised world. On important issues, the German government has too rarely attempted to take a stand. The capacity for peaceful conflict resolution must be the central concern of our foreign policy. As its foundation, a continuous dialogue must be pursued, including with countries that do not share our values.
Karsten Voigt, coordinator for German-American cooperation at the Federal Foreign Office, credits the foreign policy of the Grand Coalition with successful, solid work. It continues the internationally valued German continuity and reliability, including in the relations with Russia and the USA, which have recently been perceived as difficult.
Europäische Union als globale Macht : Plädoyer für eine wertbestimmte interregionale Ordnungsmacht
(2009)
Many regard the EU as a lumbering colossus paralysed by conflicting interests. The author, however, puts the critics in their place. In his passionate plea for the EU, he illustrates its positive external impact and concludes that Europe's tasks as a universal community of values lie in crisis management and peace-building.
For the author, defence policy spokesman of DIE LINKE, the fact that the Grand Coalition continues the tradition of German foreign policy is a sign of stagnation, even of failure. He accuses the federal government of a lack of imagination, insufficient engagement and cold interest-driven politics. Alongside this comprehensive critique, however, he also presents new approaches, based on expectations of the new US administration.
How stable is the foreign policy of the Grand Coalition? Her position as deputy foreign-policy spokeswoman of the SPD allows the author to highlight not only the common ground but also the decisive differences. Drawing on several examples, such as Turkey's accession to the EU, she shows that the consensus between the CDU/CSU and the SPD is fragile.
"Germany is a hard sell. German cultural foreign policy does what it can. The Goethe-Institut thus fights for Germany as a global peace power in all corners of the world. Yet when it comes to its statutory goal of 'conveying a comprehensive image of Germany', the only entry it can come up with for the letter C in its online glossary on Germany is 'Cluster'. Perhaps because the Germans lack 'Charisma'? [...]"
Thailand in der Dauerkrise
(2009)
With the forced cancellation of the ASEAN summit in April 2009, the political power struggle in Thailand reached a new climax. Despite the withdrawal of the insurgents, no end to the conflict is in sight. The political calm hoped for after the election of Abhisit Vejjajiva as Thailand's new head of government has not materialised in the deeply divided country.
Like no German government before it, the black-red coalition confidently represents German and European interests without making a laboured show of that confidence. In recent years Germany has quite naturally claimed for itself the role of driving force in European politics. In doing so, however, Merkel and Steinmeier have proven to be realistic pragmatists rather than visionaries.
Contents: Artem Polyvyanyy, Sergey Smirnov, and Mathias Weske The Triconnected Abstraction of Process Models 1 Introduction 2 Business Process Model Abstraction 3 Preliminaries 4 Triconnected Decomposition 4.1 Basic Approach for Process Component Discovery 4.2 SPQR-Tree Decomposition 4.3 SPQR-Tree Fragments in the Context of Process Models 5 Triconnected Abstraction 5.1 Abstraction Rules 5.2 Abstraction Algorithm 6 Related Work and Conclusions
The complex system of strike-slip and thrust faults in the Alborz Mountains, northern Iran, is not yet well understood; so far mainly structural and geomorphic data are available. To provide a broader basis for seismotectonic studies and seismic hazard analysis, we plan a comprehensive seismic moment tensor study that also includes smaller magnitudes (M < 4.5), for which we are developing a new algorithm. Here we present first preliminary results.
It has long been enigmatic which processes control the accretion of North American terranes to the Pacific plate and the landward migration of the San Andreas plate boundary. One theory suggests that the Pacific plate first cools and captures the upwelling mantle in the slab window, and that this then causes the accretion of the continental crustal blocks. The alternative theory attributes the accretion to the capture of Farallon plate fragments (microplates) stalled in the ceased Farallon-North America subduction zone. Quantitative judgement between these two end-member concepts requires 3D thermomechanical numerical modeling; however, a software tool suited for such modeling is not currently available in the geodynamic modeling community. The presented work therefore comprises two interconnected tasks. The first is the development and testing of a research finite element code with sufficiently advanced facilities to perform three-dimensional, geological-timescale simulations of lithospheric deformation. The second is the application of the developed tool to the Neogene deformation of the crust and mantle along the San Andreas Fault System in central and northern California. Geological-timescale modeling of lithospheric deformation poses numerous conceptual and implementation challenges, among them the necessity to handle the brittle-ductile transition within a single computational domain, to adequately represent rock rheology over a broad range of temperatures and stresses, and to resolve extreme deformations of the free surface and internal boundaries. In the framework of this thesis, the new finite element code SLIM3D has been successfully developed and tested.
This code includes a coupled thermomechanical treatment of deformation processes and allows for an elasto-visco-plastic rheology with diffusion, dislocation and Peierls creep mechanisms and Mohr-Coulomb plasticity. The code incorporates an Arbitrary Lagrangian-Eulerian formulation with a free surface and Winkler boundary conditions. The modeling technique developed is used to study the factors influencing the Neogene lithospheric deformation in central and northern California. The model setup focuses on the interaction between the three major tectonic elements in the region: the North America plate, the Pacific plate and the Gorda plate, which join near the Mendocino Triple Junction. Among the modeled effects is the influence of asthenosphere upwelling in the opening slab window on the overlying North American plate. The models also incorporate the captured microplate remnants in the fossil Farallon subduction zone, a simplified subducting Gorda slab, and prominent crustal heterogeneity such as the Salinian block. The results show that heating of the mantle roots beneath the older fault zones and the transpression related to fault stepping together render cooling in the slab window alone incapable of explaining the eastward migration of the plate boundary. From the viewpoint of thermomechanical modeling, the results confirm the geological concept that a series of microplate capture events has been the primary reason for the inland migration of the San Andreas plate boundary over the last 20 Ma. The remnants of the Farallon slab, stalled in the fossil subduction zone, create much stronger heterogeneity in the mantle than the cooling of the upwelling asthenosphere, providing a more efficient and direct way of transferring the North American terranes to the Pacific plate. The models also demonstrate that a high effective friction coefficient on major faults fails to predict the distinct zones of strain localization in the brittle crust.
The friction coefficient inferred from the modeling is about 0.075, far less than the typical values of 0.6-0.8 obtained from a variety of borehole stress measurements and laboratory data. The model results presented in this thesis therefore provide an additional independent constraint supporting the "weak-fault" hypothesis in the long-running debate over the strength of major faults in the SAFS.
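The "weak-fault" result can be read directly off the Mohr-Coulomb criterion underlying the plasticity model; stated here in its generic textbook form (not copied from the thesis):

```latex
\tau = C + \mu \, \sigma_n
```

where \(\tau\) is the shear strength, \(C\) the cohesion, \(\mu\) the friction coefficient and \(\sigma_n\) the effective normal stress. With \(\mu \approx 0.075\) instead of the Byerlee range of 0.6-0.8, a fault yields at roughly a tenth of the shear stress, which is what makes the inferred faults "weak".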
Contents: 1. Introduction 2. Research questions 3. Methods 3.1 Methodological approach: interdisciplinary tracheal cannula weaning and decannulation decisions in the Basel approach 3.2 Methodological approach: participants and measurement procedures 4. Results 4.1 Effectiveness and efficiency of the multidisciplinary approach: decannulation and complication rates and therapy duration until decannulation 4.2 Influence of decannulation on the course of rehabilitation of functional abilities: comparison of functional independence before vs. after decannulation 4.3 Development of swallowing function and oral food intake after decannulation 5. Discussion 6. Conclusion 7. References 8. Acknowledgements
Contents: 1. Introduction 1.1 Eye movements in reading 1.2 Cognitive control and distributed processing 2. Research questions and hypotheses 3. Methods 3.1 Participants 3.2 Materials 3.3 Procedure and analysis 4. Results 4.1 Differences in effects of word predictability 4.2 Differences in effects of word frequency 5. Discussion 6. References
Contents: 1. Introduction 2. Background 2.1 The prosodic organisation of German 2.2 Implications for the acquisition of word prosody in German 3. Method 3.1 Data collection 3.2 Empirical analysis 4. Results: the development of the prosodic word in German 5. Analysis of the empirical data 5.1 Basic assumptions 5.2 Analysis of the developmental stages 6. Summary and discussion 7. References
Contents: 1. Introduction 1.1 Methods for investigating linguistic abilities 1.2 The beginnings of research on multilingualism 2. Functional imaging 2.1 Influence of age of acquisition 2.2 Influence of language proficiency 3. Electrophysiological data 3.1 Influence of age of acquisition 3.2 Influence of language proficiency 4. Neurocognitive models 4.1 Lexical-semantic models 4.2 Lexical-grammatical model 4.3 Implicit-explicit model 5. Conclusion 6. References
"Spektrum Patholinguistik" (volume 2) is the proceedings volume of the 2nd Herbsttreffen Patholinguistik, held by the Verband für Patholinguistik (vpl) e.V. at the University of Potsdam on 22 November 2008. On the main topic "One head, two languages: multilingualism in research and therapy", the three keynote lectures and four abstracts of poster presentations are published. The volume also contains free contributions, including papers on sentence processing and agrammatism, reading strategies and developmental dyslexia, prosodic development, childhood aphasia, dysphagia therapy, and cognitive deficits in older adults.
Overweight and obesity lead to insulin resistance and markedly increase the risk of developing type 2 diabetes and cardiovascular disease. Both obesity and susceptibility to diabetes are to a considerable extent genetically determined. The relevant risk genes, their interaction with the environment, in particular with dietary components, and the pathomechanisms leading to insulin resistance and diabetes have not been fully elucidated. The present work used gene expression analyses of white adipose tissue (WAT) and the islets of Langerhans to study the onset and progression of obesity and type 2 diabetes, with the aim of identifying relevant pathomechanisms and new candidate genes. For this purpose, dietary intervention studies were carried out with NZO and related NZL mice, two polygenic mouse models of the human metabolic syndrome. A carbohydrate-containing high-fat diet (HF: 14.6% fat) led to early obesity, insulin resistance and type 2 diabetes in both mouse models. A fat-reduced standard diet (SD: 3.3% fat), which strongly delays the onset of obesity and diabetes, and a diabetes-protective carbohydrate-free high-fat diet (CHF: 30.2% fat) served as control diets. Genome-wide expression profiles of the WAT were generated using microarray technology. Pancreatic islets were isolated by laser capture microdissection (LCM) and likewise analysed for their expression profiles. Differentially expressed genes were validated by real-time PCR. In the WAT of NZO mice, the HF diet reduced the expression of nuclear genes of oxidative phosphorylation and of lipogenic enzymes, pointing to inadequate fat storage and utilisation in these animals.
The reduction in fat storage and oxidation is specific to the obese NZO model and was not observed in the lean SJL mouse, suggesting a possible involvement in the development of insulin resistance. In addition, it was confirmed that the expansion of adipose tissue in the obese NZO mouse triggers a delayed infiltration of macrophages into the WAT and a local immune response there. Furthermore, the LCM method was established and used to obtain highly enriched RNA from the islets of Langerhans. In the first genome-wide expression analyses of their kind, the influence of a diabetogenic HF diet and a diabetes-protective CHF diet on the expression profile of pancreatic islet cells was compared at an early stage of diabetes development. In contrast to the WAT, the diabetogenic HF diet increased the expression in islet cells of nuclear genes for oxidative phosphorylation on the one hand and of genes associated with cell proliferation on the other. In addition, 37 already annotated genes were identified whose differential expression correlates with diabetes development. The peptide hormone cholecystokinin (Cck, 11.8-fold increased by the HF diet) is one of the most strongly upregulated genes. The strong enrichment of Cck mRNA in islet cells points to a previously unknown function of the hormone in the regulation of islet cell proliferation. The transcription factor Mlxipl (ChREBP; 3.8-fold decreased by the HF diet) is one of the most strongly downregulated genes in the islets of Langerhans. Furthermore, ChREBP, whose function as a glucose-regulated transcription factor for lipogenic enzymes had previously been demonstrated in the liver but not in islet cells, was detected immunohistochemically in islet cells for the first time.
This points to a new, previously unknown regulatory function of ChREBP in the glucose-sensing mechanism of islet cells. Correlating the differentially expressed islet-cell genes associated with diabetes development with gene variants from human genome-wide association studies for type 2 diabetes (WTCCC, Broad DGI T2D study) enabled the identification of 24 novel diabetes candidate genes. The results of these genome-wide expression studies, performed for the first time in the polygenic NZO mouse model, confirm previous findings from mouse models of obesity and diabetes (e.g. ob/ob and db/db mice), but in some cases also reveal differences. Particularly with respect to oxidative phosphorylation, the results may be relevant for understanding the pathogenesis of the polygenic human metabolic syndrome.
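Fold changes like those reported for Cck (11.8-fold up) and Mlxipl (3.8-fold down) are typically derived from normalised expression values across replicates. As an illustrative sketch only (the helper name and sample intensities are hypothetical, not from the study), a log2 fold change can be computed as:

```python
import math

def log2_fold_change(expr_treated, expr_control):
    """Log2 ratio of mean expression under treatment vs. control.
    Positive values indicate upregulation, negative downregulation."""
    mean_t = sum(expr_treated) / len(expr_treated)
    mean_c = sum(expr_control) / len(expr_control)
    return math.log2(mean_t / mean_c)

# Hypothetical normalised intensities for one gene (HF diet vs. control)
hf = [118.0, 122.0, 114.0]
ctrl = [10.0, 9.5, 10.5]
lfc = log2_fold_change(hf, ctrl)
print(round(2 ** lfc, 1))  # linear fold change
```

Working on the log2 scale makes up- and downregulation symmetric (a 3.8-fold decrease is simply a negative log2 fold change), which is why microarray analyses usually report it this way.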
This thesis presents investigations of sediments from two African lakes that have been recording changes in their surrounding environmental and climate conditions for more than 200,000 years. The focus of this work is the last Glacial and the Holocene (the last ~100,000 years before present [in the following, 100 kyr BP]). One important precondition for this kind of research is a good understanding of the present ecosystems in and around the lakes and of sediment formation under modern climate conditions; both studies therefore include investigations of the modern environment (organisms, soils, rocks, lake water and sediments). A 90 m long sediment sequence from Lake Tswaing (north-eastern South Africa) was investigated using geochemical analyses. These investigations document alternating periods of high detrital input with low (especially autochthonous) organic matter content and periods of low detrital input with carbonatic or evaporitic sedimentation and high autochthonous organic matter content. These alternations are interpreted as changes between relatively humid and arid conditions, respectively. Before c. 75 kyr BP they seem to follow changes in local insolation, whereas afterwards they appear to be acyclic and are probably caused by changes in ocean circulation and/or in the mean position of the Inter-Tropical Convergence Zone (ITCZ). Today, these factors exert the main influence on precipitation in this area, where rainfall occurs almost exclusively during austral summer. All modern organisms were analysed for their biomarker composition and their bulk and compound-specific stable carbon isotope composition. The same investigations on sediments from the modern lake floor document the mixed input from the individual organisms investigated and reveal an additional influence of methanotrophic bacteria.
A comparison of the characteristics of modern sediments with those covering 14 to 2 kyr BP shows changes in the productivity of the lake and in the surrounding vegetation that are best explained by changes in hydrology. More humid conditions are indicated for times older than 10 kyr BP and younger than 7.5 kyr BP, whereas arid conditions prevailed in between. These observations agree with the results from the sediment composition and with indications from other nearby climate archives. The second lake study deals with Lake Challa, a small, deep crater lake at the foot of Mount Kilimanjaro. In this lake, mm-scale laminated sediments form, which were analysed by micro-XRF scanning for changes in element composition. By comparing these results with thin-section investigations, results from ongoing sediment-trap studies, meteorological data, and investigations of the surrounding rocks and soils, I develop a model for seasonal variability in the limnology and sedimentation of Lake Challa. The lake appears to be stratified during the warm rain seasons (October-December and March-May), during which detrital material is delivered to the lake and carbonates precipitate; a dark lamina with high Fe and Ti contents and high Ca/Al but low Mn/Fe ratios forms on the lake floor. Diatoms bloom during the cool and windy season (June-September), when mixing down to c. 60 m depth provides easily bio-available nutrients. Contemporaneously, Fe and Mn oxides precipitate, causing high Mn/Fe ratios in the light, diatom-rich laminae of the sediments. Trends in the Mn/Fe ratio of the sediments are interpreted to reflect changes in the intensity or duration of seasonal mixing in Lake Challa. This interpretation is supported by parallel changes in the organic matter and biogenic silica contents observed in the 22 m long profile recovered from Lake Challa, which covers the last 25 kyr BP.
It documents a transition around 16 kyr BP from relatively well-mixed conditions with high detrital input during glacial times to more strongly stratified conditions, probably related to rising lake levels in Challa and generally more humid conditions in East Africa. Intensified mixing is recorded for the Younger Dryas and for the period between 11.4 and 10.7 kyr BP. For these periods, a reduced intensity of the SW monsoon and an intensified NE monsoon are reported from archives of the Indian-Asian monsoon region, arguing for the latter as a probable driver of wind mixing in Lake Challa. This connection is probably also responsible for contemporaneous events in the Mn/Fe ratios of the Lake Challa sediments and in other records of northern-hemisphere monsoon intensity during the Holocene, and underlines the close interaction of global low-latitude atmospheric circulation.
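The Mn/Fe ratio used as a mixing proxy in the Lake Challa record is simply an element-by-element ratio of micro-XRF count series. A minimal sketch of the computation (the count values below are hypothetical, invented for illustration):

```python
def element_ratio(counts_a, counts_b):
    """Ratio of two element count series, lamina by lamina.
    In the Lake Challa record, high Mn/Fe marks the light,
    diatom-rich laminae deposited under seasonal mixing."""
    return [a / b for a, b in zip(counts_a, counts_b)]

# Hypothetical micro-XRF counts for four successive laminae
mn = [120, 300, 110, 280]
fe = [4000, 3000, 4200, 2800]
print(element_ratio(mn, fe))
```

Using ratios rather than raw counts suppresses effects such as varying water content or measurement geometry that scale all element intensities together, which is why XRF core-scanning studies usually interpret ratios.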
The bibliographic project 'Renaissance Linguistics Archive' (R.L.A.) aimed at establishing a comprehensive database of secondary sources covering the linguistic ideas developed by Renaissance scholars in Europe. The database project, founded in 1986 by Mirko Tavoni (Pisa) and transferred in 1994 to Gerda Haßler (Potsdam), has so far resulted in three print-outs, each containing 1000 records. The aim of this website is to publish the results of the collective efforts undertaken thus far (R.L.A. 1.0, 1986-1999).
Contents: 1. Field of investigation 2. General information on the surveyed municipalities 2.1 Budget deficit/surplus 2.2 Debt level as of 31 December 2004 and interest expenditure 2.3 Average interest rate 3. Municipal debt management 3.1 Objectives 3.2 Credit management 3.3 Derivatives management 3.4 Organisational aspects 4. Conclusion
Content: 1 The Typology 1.1 Object Placement 2 Treatment of StG in terms of LF Movement – with and without Head Movement 3 An OT-solution in terms of linearisation (‘LF-to-PF-Mapping’) 3.1 The trigger for additional orders: Focus 3.2 Competitions 3.3 Summary 4 RP 4.1 LF Movement – with and without Head Movement 4.2 The OT-account for RP 4.3 Competitions 5 Summary
Content: 0 Introduction 1 Elements that block verb raising – a discussion 1.1 Haider’s observation 1.2 The other constructions 1.3 A possible explanation 1.4 Riemsdijk’s grafting approach as a possible alternative? 1.5 Intermediate Summary 2 Parsing problems with speech act adverbials in the pre-field
Content: 1 Introduction 2 A restrictive theory of head movement 2.1 Preliminary Remarks 2.2 Theoretical Problems of Head Movement 2.3 Remnant Phrasal Movement 2.4 Münchhausen Style Head Movement 3 Verb Second Movement 3.1 Introductory Remarks 3.2 Problems of V/2 constructions: Does V really move to Comp? 3.3 The preverbal position 3.4 The Second Position 4 References
Counting Markedness
(2003)
This paper reports the results of a corpus investigation on case conflicts in German argument free relative constructions. We investigate how corpus frequencies reflect the relative markedness of free relative and correlative constructions, the relative markedness of different case conflict configurations, and the relative markedness of different conflict resolution strategies. Section 1 introduces the conception of markedness as used in Optimality Theory. Section 2 introduces the facts about German free relative clauses, and section 3 presents the results of the corpus study. By and large, markedness and frequency go hand in hand. However, configurations at the highest end of the markedness scale rarely show up in corpus data, and for the configuration at the lowest end we found an unexpected outcome: the more marked structure is preferred.
The present paper addresses a current view in the psycholinguistic literature that case exhibits processing properties distinct from those of other morphological features such as number (cf. Fodor & Inoue, 2000; Meng & Bader, 2000a/b). In a speeded-acceptability judgement experiment, we show that the low performance previously found for case in contrast to number violations is limited to nominative case, whereas violations involving accusative and dative are judged more accurately. The data thus do not support the proposal that case per se is associated with special properties (in contrast to other features such as number) in reanalysis processes. Rather, there are significant judgement differences between the object cases accusative and dative on the one hand and the subject nominative case on the other. This may be explained by the fact that nominative has a specific status in German (and many other languages) as a default case.
In the recent literature there is a hypothesis that the human parser uses number and case information in different ways to resolve an initially incorrect case assignment. This paper investigates what role morphological case information plays during the parser's detection of an ungrammaticality or its recognition that a reanalysis is necessary. First, we compare double nominative with double accusative ungrammaticalities in a word-by-word speeded grammaticality task and in this way show that only double nominatives lead to a so-called "illusion of grammaticality" (a low rate of ungrammaticality detection). This illusion was found to disappear when the second argument was realized by a pronoun rather than by a full definite determiner phrase, i.e. when the saliency of the second argument was increased. Thus, the accuracy in recognizing an ungrammaticality induced by the case feature of the second argument depends on the type of this argument. Furthermore, we found that the accuracy in detecting such case ungrammaticalities is distance-sensitive insofar as a shorter distance leads to higher accuracy. The results are taken as support for an "expectation-driven" parse strategy in which the way the parser uses the information of a current input item depends on the expectation resulting from the parse carried out so far. By contrast, "input-driven" parse strategies, such as the diagnosis model (Fodor & Inoue, 1999), are unable to explain the data presented here.
This work presents mathematical and computational approaches covering various aspects of metabolic network modelling, especially in view of the limited availability of detailed kinetic knowledge of reaction rates. It is shown that precise mathematical formulations of problems are needed (i) to find appropriate and, if possible, efficient algorithms to solve them, and (ii) to determine the quality of the approximate solutions found. Furthermore, some means are introduced to gain insights into the dynamic properties of metabolic networks, either directly from the network structure or by additionally incorporating steady-state information. Finally, an approach to identifying key reactions in a metabolic network is introduced, which helps to develop simple yet useful kinetic models. The rise of novel techniques is making genome sequencing increasingly fast and cheap; in the near future, this will make it possible to analyze biological networks not only for species but also for individuals. Automatic reconstruction of metabolic networks therefore suggests itself as a means of evaluating this huge amount of experimental data. A mathematical formulation as an optimization problem is presented that takes into account existing knowledge and experimental data as well as the probabilistic predictions of various bioinformatic methods. The reconstructed networks are optimized for large connected components of high accuracy, thus avoiding fragmentation into small isolated subnetworks. The usefulness of this formalism is exemplified by the reconstruction of the sucrose biosynthesis pathway in Chlamydomonas reinhardtii. The problem is shown to be computationally demanding and therefore necessitates efficient approximation algorithms. The problem of minimal nutrient requirements for genome-scale metabolic networks is also analyzed.
Given a metabolic network and a set of target metabolites, the inverse scope problem has as its objective to determine a minimal set of metabolites that have to be provided in order to produce the target metabolites. These target metabolites might stem from experimental measurements and therefore be known to be produced by the metabolic network under study, or be given as the desired end-products of a biotechnological application. The inverse scope problem is shown to be computationally hard to solve. However, I conjecture that the complexity strongly depends on the number of directed cycles within the metabolic network, which might guide the development of efficient approximation algorithms. Assuming mass-action kinetics, chemical reaction network theory (CRNT) allows conclusions about multistability to be drawn directly from the structure of metabolic networks. Although CRNT was originally based on mass-action kinetics, it is shown how further reaction schemes can be incorporated by emulating molecular enzyme mechanisms. CRNT is used to compare several models of the Calvin cycle, which differ in size and level of abstraction. Definite results are obtained for small models, but the available set of theorems and algorithms provided by CRNT cannot be applied to larger models because of the computational limitations of the currently available implementations. Given the stoichiometry of a metabolic network together with steady-state fluxes and concentrations, structural kinetic modelling makes it possible to analyze the dynamic behavior of the metabolic network even if the explicit rate equations are not known. In particular, this sampling approach is used to study the stabilizing effects of allosteric regulation in a model of human erythrocytes. Furthermore, the reactions of that model can be ranked according to their impact on the stability of the steady state.
The most important reactions in that respect are identified as hexokinase, phosphofructokinase and pyruvate kinase, which are known to be highly regulated and almost irreversible. Kinetic modelling approaches using standard rate equations are compared and evaluated against reference models for erythrocytes and hepatocytes. These simplified kinetic models reproduce the temporal behavior acceptably for small changes around a given steady state, but fail to capture important characteristics for larger changes. The aforementioned approach of ranking reactions according to their influence on stability is used to identify a small number of key reactions. These reactions are modelled in detail, including knowledge about allosteric regulation, while all other reactions are still described by simplified rate equations. These so-called hybrid models capture the characteristics of the reference models significantly better than the simplified models alone. The resulting hybrid models might serve as a good starting point for kinetic modelling of genome-scale metabolic networks, as they provide reasonable results in the absence of experimental data regarding, for instance, allosteric regulation, for the vast majority of enzymatic reactions.
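The scope concept behind the inverse scope problem can be illustrated with a simple forward-closure routine: starting from a seed set of metabolites, every reaction whose substrates are all available fires and contributes its products, and this is repeated until the set of available metabolites no longer grows. The sketch below is a minimal illustration only; the toy reactions and metabolite names are hypothetical and not taken from the thesis:

```python
def scope(reactions, seed):
    """Forward closure: the set of metabolites producible from a seed set.

    reactions: iterable of (substrates, products) pairs, each a set of names.
    seed: initial set of available metabolites.
    """
    available = set(seed)
    changed = True
    while changed:
        changed = False
        for substrates, products in reactions:
            # A reaction fires once all of its substrates are available.
            if substrates <= available and not products <= available:
                available |= products
                changed = True
    return available

# Hypothetical toy network: A + B -> C, C -> D, E -> F
toy = [({"A", "B"}, {"C"}), ({"C"}, {"D"}), ({"E"}, {"F"})]
print(sorted(scope(toy, {"A", "B"})))  # ['A', 'B', 'C', 'D']
```

The inverse scope problem then asks for a minimal seed set whose scope contains the given targets; even this small forward routine makes clear why naive enumeration over candidate seed sets scales exponentially with network size.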
Do we know the answer?
(2003)
The Earth’s magnetic field (EMF) is generated by convection in the electrically conducting, liquid, iron-rich outer core, modified by the Earth’s rotation. A drastic manifestation of the dynamics of this fluid body is the occurrence of geomagnetic field reversals in the Earth’s history, but also of geomagnetic excursions, which are more frequent features of otherwise stable polarity chrons yet are often poorly constrained in the geological record. To better understand the origin of the field, we need to know how the field has varied on different geological timescales. This includes not only information about changes in the ancient field’s direction but also about its absolute intensity (palaeointensity) and age. This palaeointensity record is needed for compiling a full-vector description of the field. Palaeomagnetic and palaeointensity studies on lava flows allow insights into the evolution of the EMF through time and space. However, constraining the EMF evolution over different geological timescales remains a difficult objective due to the paucity of available palaeointensity data. One new alternative approach in palaeointensity studies is the recently proposed multispecimen parallel differential pTRM (MS) method, which has several potential advantages over the commonly used Thellier method: it is in theory independent of magnetic domain state, less prone to biasing effects such as thermal alteration, and significantly faster to perform in the laboratory. A study of highly active volcanic regions, such as the Trans-Mexican Volcanic Belt, seems promising when attempting a full-vector reconstruction or when looking for field excursions. One aim of this thesis was to gain new information about the occurrence and global validity of geomagnetic excursions from the Brunhes or Matuyama Chron. For this purpose, some 75 lava flows from within the Trans-Mexican Volcanic Belt were sampled for palaeomagnetic analyses.
The scatter of virtual geomagnetic poles from lavas younger than 1.7 Ma was used for estimating palaeosecular variation and was found to be consistent with the latitude-dependent Model G and other high-quality palaeomagnetic data from Mexico. The palaeomagnetic mean vectors of 56 lavas were correlated to the Geomagnetic Polarity Timescale, supplemented with information on geomagnetic excursions. On the grounds of their associated radioisotopic ages, four lavas were tentatively correlated with known excursions from marine records. Two lava flows dating from the Brunhes Chron were associated with the Big Lost and Delta/Stage 17 excursions, respectively. Of two further flows dating from the Matuyama Chron, one was associated with either the Santa Rosa or Kamikatsura excursion, while the other could have been emplaced during the Gilsa excursion. The most significant outcome was the finding that both Brunhes excursional flows display nearly fully reversed directions that deviate by almost 180° from the expected normal polarity direction. This observation could indicate that in particular the Big Lost and Delta/Stage 17 excursions may represent further short periods during which the field completed a full reversal for a short time, as was previously found for other, older cryptochrons or tiny wiggles. Another focus of this thesis was set on estimating the feasibility of the new MS method for routine palaeointensity determination. This was accomplished by applying the MS method to samples from 11 historical lava flows from Mexico and Iceland for which the actual field intensity was either known from contemporary observatory data or deduced from magnetic field models. Comparing observed with expected intensity values allowed testing the accuracy of the MS method. It was found that the majority of palaeointensity estimates obtained with the MS method were very close to, or within the range of uncertainty indistinguishable from, the expected values.
However, a general trend towards an overestimate of the palaeointensity was also observed, which, on the grounds of corroborating rock magnetic analyses, was associated with multidomain material. This observation was taken as first evidence that the MS method is not entirely independent of magnetic domain state, as was originally claimed. A second experiment, in which a modification of the most widely used Thellier method was applied to sister samples from 5 Icelandic flows, revealed that, in comparison to the MS method, the Thellier-type method produced more accurate and statistically better defined palaeointensities. Nevertheless, from these first results the MS method appeared to be a viable alternative for future palaeointensity studies. Subsequently, it was attempted to corroborate the directional record from the Mexican lavas with palaeointensity data. It was possible to acquire palaeointensity estimates for 32 out of 51 investigated lava flows. These new results revealed that the new MS palaeointensities for Mexico are, with a high degree of statistical significance, around 30% higher than expected. The generally high palaeointensities seem to corroborate the results obtained from the historical lava flows in this study and from previous studies on synthetic samples, where domain-state effects were found to cause overestimates of up to 30% in MS palaeointensities. The primary process leading to this overestimate is attributed to an asymmetry between the demagnetisation and remagnetisation processes. Yet this overestimate is expected to be no larger than what might be expected from Thellier experiments performed on samples with a given degree of multidomain behaviour.
Modern anthropogenic forcing of atmospheric chemistry poses the question of how the Earth System will respond as thousands of gigatons of greenhouse gases are rapidly added to the atmosphere. A similar, albeit non-anthropogenic, situation occurred during the early Paleogene, when a catastrophic release of carbon to the atmosphere triggered an abrupt increase in global temperatures. The best documented of these events is the Paleocene-Eocene Thermal Maximum (PETM, ~55 Ma), when the magnitude of carbon addition to the oceans and atmosphere was similar to that expected for the future. This event initiated global warming, changes in hydrological cycles, biotic extinctions and migrations. A recently proposed hypothesis concerning changes in marine ecosystems suggests that this global warming strongly influenced the shallow-water biosphere, triggering extinctions and turnover in the Larger Foraminifera (LF) community and the demise of corals. The successions of the Adriatic Carbonate Platform (SW Slovenia) represent an ideal location to test the hypothesis of a possible causal link between the PETM and the evolution of shallow-water organisms, because they record continuous sedimentation from the Late Paleocene to the Early Eocene and are characterized by a rich biota, especially LF, fundamental for detailed biostratigraphic studies. In order to reconstruct the paleoenvironmental conditions during deposition, I focused on sedimentological analysis and a paleoecological study of the benthic assemblages. During the Late Paleocene to earliest Eocene, sedimentation occurred on a shallow-water carbonate ramp system characterized by enhanced nutrient levels. LF represent the common constituent of the benthic assemblages that thrived in this setting throughout the Late Paleocene to the Early Eocene.
With detailed biostratigraphic and chemostratigraphic analyses documenting the most complete record available to date for the PETM event in a shallow-water marine environment, I correlated for the first time the evolution of LF chemostratigraphically with the δ¹³C curves. This correlation demonstrated that no major turnover in the LF communities occurred synchronously with the PETM; thus the evolution of LF was mainly controlled by endogenous biotic forces. The study of Late Thanetian metre-sized microbialite-coral mounds, which developed in the middle part of the ramp, documented the first Cenozoic occurrence of microbially cemented mounds. The development of these mounds, with temporary dominance of microbial communities over corals, suggests environmentally triggered “phase shifts” related to frequent fluctuations of nutrient/turbidity levels during recurrent wet phases which preceded the extreme greenhouse conditions of the PETM. The paleoecological study of the coral community in the microbialite-coral mounds, the study of corals from an Early Eocene platform in SW France, and a critical, extensive literature review of Late Paleocene to Early Eocene coral occurrences from the Tethyan, Atlantic and Caribbean realms suggested that these coral types, even if not forming extensive reefs, are common in the biofacies as small isolated colonies, piles of rubble or small patch reefs. These corals might have developed ‘alternative’ life strategies to cope with harsh conditions (high/fluctuating nutrients/turbidity, extreme temperatures, perturbation of the aragonite saturation state) during the greenhouse times of the early Paleogene, representing a good fossil analogue to modern corals thriving close to their thresholds for survival. These results demonstrate the complexity of biological responses to extreme conditions, not only in terms of temperature but also of nutrient supply, physical disturbance and their temporal variability and oscillating character.
Dietary antioxidants are believed to play an important role in the prevention and treatment of a variety of diseases associated with oxidative stress. Although there is a wide range of dietary antioxidants, the bulk of the research to date has focused on the nutrient antioxidants vitamins C and E and the carotenoids. Certain relatively uncommon antioxidants, such as lipoic acid (LA) and phenolic compounds such as (-)-epicatechin (EC), (-)-epigallocatechin (EGC), (-)-epicatechin gallate (ECG) and (-)-epigallocatechin gallate (EGCG), have not been extensively investigated, although they may exert greater antioxidant potency than carotenoids and vitamins. Extracts from selected plants and plant byproducts may represent rich sources of one or more such antioxidants and may therefore exhibit stronger effects than a single antioxidant, owing to synergistic interactions between these compounds. In the last decade, a number of epidemiological, animal and in vitro studies have suggested a protective and therapeutic potency of these antioxidants in a broad range of diseases such as cancer, diabetes, atherosclerosis, cataract, and acute and chronic neurological disorders. Inflammation, the response of the host to infection or injury, plays a central role in the development of many chronic diseases. Several lines of evidence demonstrate that different types of cancer can arise at sites of inflammation, suggesting that reactive oxygen species and certain cytokines generated in inflamed tissues can cause injury to DNA and ultimately lead to carcinogenesis. Diethylnitrosamine (DEN) is one of the most important environmental carcinogens; it is present in a variety of foods, alcoholic beverages and tobacco smoke, and it can also be synthesized endogenously. In addition to the liver, it can induce carcinogenesis in other organs such as the kidney, trachea, lung, esophagus, forestomach and nasal cavity.
Several epidemiological and laboratory studies indicate that nitroso compounds, including DEN, may induce hyperplasia and chronic inflammation, which is closely associated with the development of hepatocellular carcinoma. Despite increasing evidence on the potential of antioxidants to modulate the etiology of chronic diseases, little is known about their role in inflammation and the acute phase response (APR). Therefore, the aim of the present work was to study the protective effect of water and solvent extracts of eight plants and plant byproducts, including green tea, artichoke, spinach, broccoli, onion, eggplant, and orange and potato peels, as well as of eight antioxidant agents, including EC, EGC, ECG, EGCG, ascorbic acid (AA), N-acetylcysteine (NAC), α-LA and alpha-tocopherol (α-TOC), against acute inflammation induced by interleukin-6 (IL-6) and hepatotoxicity induced by DEN in vitro. The negative acute phase proteins (APP) transthyretin (TTR) and retinol-binding protein (RBP) were used as inflammatory biomarkers and analyzed by ELISA, whereas the neutral red assay was used for evaluating cytotoxicity. All experiments were performed in vitro using the human hepatocarcinoma cell line HepG2. Additionally, antioxidant activity was measured by the TEAC and FRAP assays, and phenolic content was measured by the Folin–Ciocalteu method and characterized by HPLC. Moreover, the microheterogeneity of TTR was examined using an immunoprecipitation assay combined with SELDI-TOF MS. The results of the present study showed that HepG2 cells provide a simple, sensitive in vitro system for studying the regulation of the negative APP TTR and RBP under basal and inflammatory conditions. IL-6, a potent proinflammatory cytokine, at a concentration of 25 ng/ml was able to reduce TTR and RBP secretion by approximately 50-60% after 24 h of incubation.
With the exception of broccoli and the water extract of onion, which showed pro-inflammatory effects in this study, all other plant extracts, at specific concentrations, were able to elevate TTR secretion under normal conditions and even under IL-6 treatment, where the effect was considerably lower. Green tea, followed by artichoke and potato peel, exhibited the highest elevation of TTR concentration, reaching 1.1- and 2.5-fold of the control in the presence and absence of IL-6, respectively. In general, the plant extracts were ranked by anti-inflammatory potency as follows: water extracts, green tea > artichoke > potato peel > orange peel > spinach > eggplant peel; solvent extracts, green tea > artichoke > potato peel > spinach > eggplant peel > onion > orange peel. The anti-inflammatory effects of the water extracts of green tea, artichoke and orange peel were significantly higher than those of their corresponding solvent extracts, whereas the water extracts of eggplant peel, potato peel and spinach showed lower effects than their solvent extracts. On the other hand, α-LA, followed by EGCG and ECG, exhibited the highest elevation of TTR concentration compared to the other antioxidants. The relation between anti-inflammatory potential on the one hand and antioxidant activity and phenolic content on the other was generally weak for the investigated substances, which may suggest the involvement of mechanisms other than antioxidant properties in the observed effects. TTR secreted by HepG2 cells has a molecular structure quite similar to the purified standard and to serum TTR, in which all three main variants are contained: native, S-cysteinylated and S-glutathionylated TTR. Interestingly, a variant with a molecular mass of 13453.8 ± 8.3 Da was detected only in TTR secreted by HepG2 cells. Among all investigated antioxidants and plant extracts, six substances were able to elevate the preferred native TTR variant. The potency of these substances can be ranked as follows: α-LA > NAC > onion > AA > EGCG > green tea.
A weak correlation between the elevation of TTR and the shift to the native form was observed. A similarly weak correlation was observed between antioxidant activity and the elevation of native TTR. Although DEN was able to induce cell death in a concentration-dependent manner, it required considerably higher concentrations for its effects, especially after 24 h. This may be attributed to a lack of cytochrome P450 enzymes in HepG2 cells. At selected concentrations, some antioxidants and plant extracts significantly attenuated DEN cytotoxicity, in the following order: spinach > α-LA > artichoke > orange peel > eggplant peel > α-TOC > onion > AA. In contrast, all other substances, especially green tea, broccoli, potato peel and ECG, stimulated DEN toxicity. In conclusion, this study demonstrated that selected antioxidants and plant extracts may attenuate the inflammatory process, not only through their antioxidant potency but also through other mechanisms that remain unclear. They may also play a vital role in stabilizing the tetrameric structure of TTR and thereby prevent amyloidosis. In this study, lipoic acid showed a unique protective function against both inflammation and hepatotoxicity. Despite the protective effects demonstrated by the investigated substances, attention should also be given to the pro-oxidant and potentially cytotoxic effects produced at higher concentrations.
Holmberg (1997, 1999) assumes that Holmberg's Generalisation (HG) is derivational, prohibiting Object Shift (OS) across an intervening non-adverbial element at any point in the derivation. Counterexamples to this hypothesis are given in Fox & Pesetsky (2005), who show that remnant VP-topicalisations are possible in Scandinavian as long as the VP-internal order relations are maintained. Extending the empirical basis concerning remnant VP-topicalisations, we argue that HG and the restrictions on object stranding result from the same, more general condition on order preservation. Taking this condition to be violable and to interact with various constraints on movement in an Optimality-theoretic fashion, we suggest an account of various asymmetries in the interaction between remnant VP-topicalisations and both OS and other movement operations (especially subject raising) with respect to their order-preserving characteristics and stranding abilities.
The main claim of this paper is that the minimalist framework and Optimality Theory adopt more or less the same architecture of grammar: both assume that a generator defines a set S of potentially well-formed expressions that can be generated on the basis of a given input, and that an evaluator selects the expressions from S that are actually grammatical in a given language L. The paper therefore proposes a model of grammar in which the strengths of the two frameworks are combined: more specifically, it is argued that the computational system of human language (C_HL) from the Minimalist Program creates a set S of potentially well-formed expressions, which are subsequently evaluated in an Optimality-theoretic fashion.
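The generator-evaluator architecture described here can be sketched in a few lines: the generator supplies candidate expressions annotated with constraint-violation counts, and an Optimality-theoretic evaluator selects the candidate whose violation profile is lexicographically smallest under the constraint ranking. The candidate names and constraints below are hypothetical placeholders, not examples from the paper:

```python
def optimal(candidates, ranking):
    """OT-style evaluation over a set of generated candidates.

    candidates: list of (name, violations) pairs, where violations maps a
        constraint name to a violation count.
    ranking: constraint names ordered from highest- to lowest-ranked.
    Returns the name of the candidate with the lexicographically smallest
    violation profile, i.e. the optimal (grammatical) output.
    """
    def profile(candidate):
        return tuple(candidate[1].get(c, 0) for c in ranking)
    return min(candidates, key=profile)[0]

# Hypothetical toy tableau: C_high outranks C_low.
cands = [
    ("cand-A", {"C_high": 0, "C_low": 2}),
    ("cand-B", {"C_high": 1, "C_low": 0}),
]
print(optimal(cands, ["C_high", "C_low"]))  # cand-A
```

Reversing the ranking to `["C_low", "C_high"]` makes cand-B optimal, which mirrors how OT derives cross-linguistic variation from constraint re-ranking over a fixed candidate set S.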
The simple generator
(2006)
I argue that the shift of explanatory burden from the generator to the evaluator in OT syntax, together with the difficulties that arise when we try to formulate a working theory of the interfaces of syntax, leads to a number of assumptions about syntactic structures in OT which are quite different from those typical of minimalist syntax: formal features, as driving forces behind syntactic movement, are useless, and derivational and representational economy are problematic for both empirical and conceptual reasons. The notion of markedness, central to Optimality Theory, is not fully compatible with the idea of syntactic economy. Even more so, seemingly obvious cases of blocking by structural economy do not seem to result from grammar proper, but reflect (economical) aspects of language use.
Natural law
(2006)
This work concentrates on the requirements of the computational system of human language, developing the idea that Natural Law (NL) applies to universal syntactic principles. Systems of efficient growth favor the continuation of motion and maximal distance between the elements. The condition of maximization accounts for the properties of syntactic trees: binary branching, labeling, and the EPP. NL justifies the basic principle of organization in Merge: it provides a functional explanation of phase formation and thematic domains. In Optimality Theory, it accounts for the selection of a particular word order in languages. A comprehensive and definitive understanding of the principles underlying the Minimalist Program (MP) will eventually lead to a more advanced design of OT.