Despite all efforts toward equal opportunity, far fewer women than men choose a STEM-related degree program or career. In the current generation of German schoolgirls, too, the motivation to take up a career in the natural sciences lies below the average of German schoolboys. Large-scale school assessments show that girls in lower secondary education in particular report markedly less interest in science subjects, especially physics, than boys of the same age. This study therefore addresses the question of whether a gender-typical difference in interest in the subject of physics already exists among girls and boys at the end of primary school. Sixth-grade students in the state of Brandenburg (N=235) took part in a written survey. Data were collected with a purpose-built measurement instrument (.52≤α≤.79). With effect sizes of |d|_1=.38, |d|_2=.27, |d|_3=.18 and |d|_4=.28, differences of partly small practical significance in favor of the boys surveyed were found. In addition, the results suggest that both boys and girls who believe that their own gender is generally more interested in physics do in fact themselves show more interest than the respective other gender. An interpretation of the results as well as limitations and implications of the study are discussed.
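The reported |d| values are Cohen's d effect sizes. A minimal sketch of how such a value is computed from two group summaries using the pooled standard deviation; the group means, standard deviations and sample sizes below are invented for illustration, not the study's data:

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Cohen's d with pooled standard deviation."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Hypothetical interest-scale summaries for boys (a) and girls (b):
d = cohens_d(mean_a=3.4, sd_a=0.9, n_a=120, mean_b=3.1, sd_b=0.8, n_b=115)
print(round(abs(d), 2))  # → 0.35, a small effect by conventional benchmarks
```

Values of |d| around .2 are conventionally read as small effects, which matches the "partly small practical significance" stated above.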
In the past decades, development cooperation (DC) led by conventional bi- and multilateral donors has been joined by a large number of small, private or public-private donors. This pluralism of actors raises the question of whether these new donors implement projects more or less effectively than their conventional counterparts. In contrast to their predecessors, the new donors have committed themselves to being more pragmatic, innovative and flexible in their development cooperation measures. However, they are also criticized for weakening the role of local civil society and have the reputation of being an opaque and often controversial alternative to public services. With additional financial resources and their new approach to development, the new donors have been described in the literature as playing a controversial role in transforming development cooperation. This dissertation compares the effectiveness of initiatives by new and conventional donors with regard to the provision of public goods and services to the poor in the water and sanitation sector in India.
India is an emerging economy, yet it experiences high poverty rates and poor water supply, predominantly in rural areas. It lends itself to analyzing this research theme, as a large number of actors and approaches are currently seeking solutions to these challenges.
In the theoretical framework of this dissertation, four governance configurations are derived from the interaction of varying actor types with regard to hierarchical and non-hierarchical steering of their interactions. These four configurations differ in decision-making responsibilities, accountability, the delegation of tasks and the direction of information flow. In the empirical investigation, the assumptions on actor relationships and steering are supplemented by possible alternative explanations, such as resource availability, structures and institutions inherited from previous projects in a project context, acceptance by beneficiaries (local legitimacy) as a door opener, and power asymmetries in the project context.
Case study evidence from seven projects reveals that the actors' relationship is important for successful project delivery. The results also show a systematic difference between conventional and new donors. Projects led by conventional donors were consistently more successful, owing to an actor relationship that placed responsibility in the hands of the recipient actors and benefited from the trust and reputation built in long-term cooperation. This trust and reputation of conventional donors was consistently backed at the federal level and trickled down to local-level implementation. Furthermore, charismatic leaders, as well as structures and institutions inherited from predecessor projects, proved to be positive factors for successful project implementation.
Despite the mixed results of the seven case studies, central recommendations for action can be derived for the various actors involved in development cooperation. For example, new donors could take on a supplementary function alongside conventional donors by developing innovative project approaches in pilot studies and then implementing them as a complement to conventional donors' projects on the ground. In return, conventional donors would have to make room for the new donors by integrating their approaches into existing programs in order to promote donor harmonization. It is also important to identify and occupy niches for activities and to promote harmonization among donors at both state and federal levels.
The empirical results demonstrate the need for a harmonization strategy across donor types in order to prevent duplication, over-experimentation and the failure of development programs. A transformation to successful and sustainable development cooperation can only be achieved through more coordination and greater national ownership.
In the research training group NatRiskChange at the University of Potsdam and other research institutions, observed and potential future changes in natural hazards are investigated. Part of the structured doctoral program are so-called task-force deployments, in which the doctoral researchers analyze a current event within a limited time frame. As part of this activity, the flash flood of 29 May 2016 in Braunsbach (Baden-Württemberg) was investigated.
This report presents first analyses concerning the classification of the rainfall, the hydrological and geomorphological processes in the catchment of the Orlacher Bach, and the damage caused.
The region was at the center of extreme rainfall on the order of 100 mm within 2 hours. The small 6 km² catchment has a very fast response time, especially with pre-saturated soil. In the steep valley, several smaller and larger landslides delivered more than 8000 m³ of boulders, debris and driftwood into the stream, possibly causing short-lived dammings and outbursts. In addition to the large volumes of water, with a peak discharge on the order of 100 m³/s, it was above all the bedload that caused major damage to buildings along the stream in Braunsbach.
The establishment of the European Stability Mechanism (ESM) by an international treaty outside the EU Treaties entails several disadvantages. The fragmentation of legal sources weakens the European institutions and their legitimacy. Nor can the ESM readily draw on the structures and staff of the European Commission. Integrating the ESM into Community law is therefore advisable. This requires a legal basis. Taking into account the German and French positions as well as the relevant case law of the ECJ, the thesis concludes that the existing EU Treaties contain no legal basis for integrating the ESM into Community law. Against this background, an amendment of the TFEU under the ordinary treaty revision procedure is indispensable. It should create an explicit legal basis for a stability mechanism under Community law. The thesis develops a concrete drafting proposal for such a legal basis, on which the Council could establish a Community-law stability mechanism by regulation. It also develops the essential structural principles for such a mechanism with regard to its institutional form, governance, financing, democratic accountability and the available financial assistance instruments.
In this thesis, the planetary boundary layer at Ny-Ålesund, Spitsbergen, is investigated both with respect to small-scale ("micrometeorological") effects and in its coupling with the synoptic scale. Various observational data from the atmospheric column and near the surface are combined and assessed. The resulting datasets are then used to validate a non-hydrostatic regional climate model. Furthermore, orographic influences, surface properties and local surface heterogeneity are examined. For this purpose, meteorological quantities such as temperature variability and, in particular, the annual near-surface wind distribution are analyzed, and the in-situ measured turbulent fluxes from the eddy-covariance complexes at Ny-Ålesund and in the Bayelva valley are compared under the same aspect. The eddy-covariance complex in the Bayelva valley turns out to be strongly affected by orographic channeling of the flow and is unsuitable for comparisons with regional climate models at horizontal resolutions of <1 km. The high soil moisture in the Bayelva valley also leads to a considerably smaller Bowen ratio than would be expected for this region. The eddy-covariance complex at Ny-Ålesund, by contrast, proves better suited for such model comparisons owing to its typical coastal wind distribution and representative footprint. The latter is established by deriving the footprint climatology for 2013 with a current footprint model.
Furthermore, the effect of (anti)cyclones over the archipelago on the temporal variability of local boundary-layer properties is investigated and assessed. A cyclone detection algorithm is applied to ERA-Interim reanalysis data to determine the frequency of nearly ideally concentric high- and low-pressure systems over three years. From this distribution, three periods of interest in different seasons are selected, and the local near-surface meteorological measurements, the turbulent surface exchange and the boundary-layer dynamics in the column are examined in process studies. The temporal variability of the dynamic boundary-layer stability in the column is analyzed using highly time-resolved vertical profiles of the bulk Richardson number derived from composite profiles of remote-sensing instruments (radiometer, wind lidar) and mast data (BSRN mast), and the boundary-layer height is determined. These analyses reveal a clear dependence of the thermal stability on the passage of fronts, along with a correspondingly strong dependence of the boundary-layer dynamics, the boundary-layer height and the turbulent exchange on the temporal variability of the wind speed in the column.
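The bulk-Richardson-number approach to boundary-layer height can be sketched as follows. The formula and the critical value Ri_c ≈ 0.25 are the standard textbook definitions, not necessarily the exact implementation of the thesis, and the profile values below are invented:

```python
G = 9.81  # gravitational acceleration, m/s²

def bulk_richardson(theta_v0, theta_v, u, v, z):
    """Bulk Richardson number at height z relative to the surface.

    theta_v0: surface virtual potential temperature [K]
    theta_v, u, v: values at height z [K, m/s, m/s]
    """
    wind_sq = u**2 + v**2
    if wind_sq == 0.0:
        return float("inf")
    return (G / theta_v0) * (theta_v - theta_v0) * z / wind_sq

def boundary_layer_height(theta_v0, profile, ri_crit=0.25):
    """Lowest level at which Ri_b first exceeds the critical value."""
    for z, theta_v, u, v in profile:
        if bulk_richardson(theta_v0, theta_v, u, v, z) > ri_crit:
            return z
    return None  # boundary layer deeper than the profile

# Hypothetical stably stratified levels: (z [m], theta_v [K], u, v [m/s])
profile = [(50, 265.2, 4.0, 1.0), (150, 266.0, 6.0, 2.0), (300, 268.5, 6.5, 2.0)]
print(boundary_layer_height(265.0, profile))  # → 300
```

Applied to time series of composite profiles, this yields the temporal variability of boundary-layer height described above.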
Based on the site analyses and process studies, the near-surface measurements and the column observations, both from the remote-sensing instruments mentioned and from in-situ measurements (radiosondes), are compared with the non-hydrostatic regional climate model WRF (ARW) for the period of a radiosonde campaign. Addressing the question of how well current schemes can reproduce boundary-layer characteristics in strongly structured orographic terrain in the Arctic, two boundary-layer parameterization schemes with different closure orders are validated. To this end, the temporal variability of temperature, humidity and the wind field in the column up to 2000 m in the simulations is compared with the observational data. It is shown that, with modified initial-condition fields, very good agreement between simulations and observations can be achieved already at a horizontal resolution of 1 km, and that the choice of boundary-layer scheme has only a minor influence. From this, approaches for further developing the parameterizations as well as recommendations regarding the initial-condition fields, such as the land mask and the orography, are proposed.
Background: Low back pain (LBP) is one of the worldwide leading causes of limited activity and disability. Impaired motor control has been identified as one possible factor in the development or persistence of LBP. In particular, motor control strategies appear to be altered in situations requiring reactive trunk responses to counteract sudden external forces. However, muscular responses have mostly been assessed in (quasi-)static testing situations under simplified laboratory conditions. Comprehensive investigations of motor control strategies during dynamic everyday situations are lacking. The present research project aimed to investigate muscular compensation strategies following unexpected gait perturbations in people with and without LBP. A novel treadmill stumbling protocol was tested for its validity and reliability in provoking muscular reflex responses at the trunk and the lower extremities (study 1). Thereafter, motor control strategies in response to sudden perturbations were compared between people with LBP and asymptomatic controls (CTRL) (study 2). In accordance with more recent concepts of motor adaptation to pain, it was hypothesized that pain may have profound consequences for motor control strategies in LBP. It was therefore investigated whether differences in compensation strategies consisted of changes local to the painful area at the trunk, or were also present in remote areas such as the lower extremities.
Methods: All investigations were performed on a custom-built split-belt treadmill simulating trip-like events by unexpected rapid deceleration impulses (amplitude: 2 m/s; duration: 100 ms; 200 ms after heel contact) at a baseline velocity of 1 m/s. A total of 5 (study 1) and 15 (study 2) right-sided perturbations were applied during walking trials. Muscular activities were assessed by surface electromyography (EMG), recorded at 12 trunk muscles and 10 (study 1) or 5 (study 2) leg muscles. EMG onset latencies [ms] were retrieved by a semi-automatic detection method. EMG amplitudes (root mean square, RMS) were assessed within 200 ms post perturbation and normalized to full strides prior to any perturbation [RMS%]. Latency and amplitude analyses were performed for each muscle individually, as well as for pooled data of muscles grouped by location. Characteristic pain intensity scores (CPIS; 0-100 points, von Korff), based on mean intensity ratings for current, worst and average pain over the last three months, were used to allocate participants to LBP (≥30 points) or CTRL (≤10 points). Test-retest reproducibility between measurements was determined by a compilation of reliability measures. Differences in muscular activities between LBP and CTRL were analysed descriptively for individual muscles; differences based on grouped muscles were tested statistically using a multivariate analysis of variance (MANOVA, α=0.05).
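The amplitude normalization described in the methods (RMS of the 200 ms post-perturbation window expressed relative to unperturbed strides) can be sketched as follows; the sample values are hypothetical rectified EMG amplitudes, not study data:

```python
import math

def rms(signal):
    """Root mean square of a sampled signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def normalized_rms(post_window, baseline_strides):
    """EMG amplitude of the post-perturbation window as a percentage
    of the mean RMS of unperturbed baseline strides [RMS%]."""
    baseline = sum(rms(s) for s in baseline_strides) / len(baseline_strides)
    return 100.0 * rms(post_window) / baseline

# Hypothetical rectified EMG samples (arbitrary units):
post = [0.8, 1.2, 1.0, 0.9]                      # 200 ms window after perturbation
baseline = [[0.2, 0.2, 0.2, 0.2],                # two unperturbed strides
            [0.3, 0.1, 0.2, 0.2]]
print(normalized_rms(post, baseline))            # well above 100% → reflex response
```

A value far above 100 RMS% indicates activity several-fold that of normal gait, as reported for the perturbation responses in study 1.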
Results: Thirteen individuals were included in the analysis of study 1. EMG latencies revealed reflex muscle activities following the perturbation (mean: 89 ms). The corresponding EMG amplitudes averaged five times those assessed in unperturbed strides, though with high inter-subject variability. Test-retest reliability of muscle latencies showed high reproducibility, both for trunk and leg muscles. In contrast, reproducibility of amplitudes was only weak to moderate for individual muscles, but increased when assessed as a location-specific summary of grouped muscles. Seventy-six individuals were eligible for data analysis in study 2. Group allocation according to CPIS resulted in n=25 for LBP and n=29 for CTRL. Descriptive analysis of activity onsets revealed longer delays for all muscles in LBP compared to CTRL (trunk muscles: mean 10 ms; leg muscles: mean 3 ms). Onset latencies of grouped muscles revealed statistically significant differences between LBP and CTRL for the right (p=0.009) and left (p=0.007) abdominal muscle groups. EMG amplitude analysis showed high variability in activation levels between individuals, independent of group assignment or location. Statistical testing of grouped muscles indicated no significant difference in amplitudes between LBP and CTRL.
Discussion: The present research project showed that perturbed treadmill walking is suitable for provoking comprehensive reflex responses at the trunk and lower extremities, both in terms of onsets and amplitudes of reflex activity. Moreover, it demonstrated that sudden loadings under dynamic conditions provoke altered reflex timing of the muscles surrounding the trunk in people with LBP compared to CTRL. In line with previous investigations, compensation strategies seemed to be deployed in a task-specific manner, with differences between LBP and CTRL evident predominantly on the ventral side. No muscular alterations beyond the trunk were found during the automated task of locomotion. While rehabilitation programs tailored to LBP are still under debate, these findings argue for incorporating dynamic sudden trunk-loading incidents to enhance motor control and thereby improve spinal protection. Moreover, given the consistently observed task specificity of muscular compensation strategies, such a rehabilitation program should be rich in variety.
Prevalence of Achilles tendinopathy increases with age, leading to a weaker tendon with a predisposition to rupture. Previous studies investigating Achilles tendon (AT) properties are restricted to standardized isometric conditions. Knowledge regarding the influence of age and pathology on the AT response during functional tasks remains limited. Therefore, the aim of the thesis was to investigate the influence of age and pathology on AT properties during a single-leg vertical jump.
Healthy children, asymptomatic adults and patients with Achilles tendinopathy participated. Ultrasonography was used to assess AT length, AT cross-sectional area and AT elongation. The reliability of the methodology was evaluated both intra- and inter-rater, at rest and during maximal isometric plantar-flexion contraction, and the method was then used to investigate tendon properties during a functional task. For the functional task, a single-leg vertical jump was performed on a force plate while AT elongation and vertical ground reaction forces were recorded simultaneously. AT compliance [mm/N] (elongation/force) and AT strain [%] (elongation/length) were calculated. Differences between groups were evaluated with respect to age (children vs. adults) and pathology (asymptomatic adults vs. patients).
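The two outcome measures are defined directly in the abstract (compliance = elongation/force, strain = elongation/length). A minimal sketch with invented values, not the study's measurements:

```python
def at_compliance(elongation_mm, force_n):
    """AT compliance [mm/N] = elongation / force."""
    return elongation_mm / force_n

def at_strain(elongation_mm, rest_length_mm):
    """AT strain [%] = elongation / resting length × 100."""
    return 100.0 * elongation_mm / rest_length_mm

# Hypothetical single jump: 12 mm elongation, 2400 N tendon force,
# 200 mm resting tendon length
print(at_compliance(12.0, 2400.0))  # → 0.005 mm/N
print(at_strain(12.0, 200.0))       # → 6.0 %
```

For a given force, a more compliant (higher mm/N) tendon elongates more, which is the property reported below as higher in children.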
Good to excellent reliability with low variability was achieved in the assessment of AT properties. During the jumps, AT elongation was statistically significantly higher in children. However, no statistically significant difference in force was found among the groups. AT compliance and strain were statistically significantly higher only in children. No significant differences were found between asymptomatic adults and patients with tendinopathy.
The methodology used to assess AT properties is reliable, allowing its implementation into further investigations. Higher AT-compliance in children might be considered as a protective factor against load-related injuries. During functional task, when higher forces are acting on the AT, tendinopathy does not result in a weaker tendon.
This thesis is situated in the field of quality management (QM) in public organizations. Specifically, it asks which factors influence an effective implementation of the QM system Common Assessment Framework (CAF) in German federal agencies. Drawing on sociological neo-institutionalism, hypotheses about possible influencing factors were formulated. A systematic case selection led to the following organizations being examined: the Bundeskartellamt, the Bundeszentralamt für Steuern and the Staatsbibliothek zu Berlin. For the empirical part of the thesis, semi-structured guided interviews were conducted with experts from the selected organizations. These were evaluated in a qualitative content analysis and then analyzed, guided by theory, using a "cross-case synthesis" following Yin (2014).
Ultimately, three decisive conditions for effective CAF implementation in federal agencies can be derived. First, the formal support of the respective top management, which should take an active role within the CAF project and purposefully involve all middle management, for example by having it take over the QM project lead. Second, for all members of the organization to act coherently toward shared goals, the various steering instruments need to be interlinked within a medium-term overall strategy and thus formally institutionalized. Third, the formal institutionalization of a QM unit, located close to top management and outside the line departments, is recommended. The case studies examined showed that such units have greater potential to develop into QM and CAF competence centers and to shield the workforce from unnecessary tasks that would diminish its commitment to CAF.
With these results, the thesis makes two key contributions. The research landscape of QM and CAF research in public organizations, especially at the federal level, previously showed numerous blank spots, some of which this thesis has been able to fill. Moreover, on the basis of this research it is now possible to give administrative practitioners concrete recommendations for action, whether they want to implement CAF in their organization for the first time or to make adjustments where the QM instrument has already been introduced.
Difficulties with object relative clauses (ORC), as compared to subject relative clauses (SR), are widely attested across different languages, both in adults and in children. This SR-ORC asymmetry is reduced, or even eliminated, when the embedded constituent in the ORC is a pronoun rather than a lexical noun phrase. The studies included in this thesis were designed to explore under what circumstances the pronoun facilitation occurs; whether all pronouns have the same effect; whether SRs are also affected by embedded pronouns; whether children perform like adults on such structures; and whether performance is related to cognitive abilities such as memory or grammatical knowledge. Several theoretical approaches that explain the pronoun facilitation in relative clauses are evaluated. The experimental data were collected in three languages (German, Italian and Hebrew), from both children and adults.
In the German study (Chapter 2), ORCs with embedded 1st- or 3rd-person pronouns are compared to ORCs with an embedded lexical noun phrase. Eye-movement data from 5-year-old children show that the 1st-person pronoun facilitates processing, but not the 3rd-person pronoun. Moreover, children’s performance is modulated by additive effects of their memory and grammatical skills. In the Italian study (Chapter 3), the 1st-person pronoun advantage over the 3rd-person pronoun is tested in ORCs and SRs that display a similar word order. Eye-movement data from 5-year-olds and adult controls and reading times data from adults are pitted against the outcome of a corpus analysis, showing that the 1st-/3rd-person pronoun asymmetry emerges in the two relative clause types to an equal extent. In the Hebrew study (Chapter 4), the goal is to test the effect of a special kind of pronoun–a non-referential arbitrary subject pronoun–on ORC comprehension, in the light of potential confounds in previous studies that used this pronoun. Data from a referent-identification task with 4- to 5-year-olds indicate that, when the experimental material is controlled, the non-referential pronoun does not necessarily facilitate ORC comprehension. Importantly, however, children have even more difficulties when the embedded constituent is a referential pronoun. The non-referentiality / referentiality asymmetry is emphasized by the relation between children’s performance on the experimental task and their memory skills.
Together, the data presented in this thesis indicate that sentence processing is not only driven by structural (or syntactic) factors, but also by discourse-related ones, like pronouns’ referential properties or their discourse accessibility mechanism, which is defined as the level of ease or difficulty with which referents of pronouns are identified and retrieved from the discourse model. Although independent in essence, these structural and discourse factors can in some cases interact in a way that affects sentence processing. Moreover, both types of factors appear to be strongly related to memory. The data also support the idea that, from early on, children are sensitive to the same factors that affect adults’ sentence processing, and that the processing strategies of both populations are qualitatively similar.
In sum, this thesis suggests that a comprehensive theory of human sentence processing needs to account for effects that are due to both structural and discourse-related factors, which operate as a function of memory capacity.
Quantitative thermodynamic and geochemical modeling is today applied in a variety of geological environments, from the petrogenesis of igneous rocks to the oceanic realm. Thermodynamic calculations are used, for example, to gain better insight into lithosphere dynamics, to constrain melting processes in crust and mantle, and to study fluid-rock interaction. The development of thermodynamic databases and of computer programs to calculate equilibrium phase diagrams has greatly advanced our ability to model geodynamic processes from subduction to orogenesis. However, a well-known problem is that, despite its broad application, the use and interpretation of thermodynamic models applied to natural rocks is far from straightforward. For example, chemical disequilibrium and/or unknown rock properties, such as fluid activities, complicate the application of equilibrium thermodynamics.
One major aspect of the publications presented in this Habilitationsschrift is a set of new approaches to unraveling the dynamic and chemical histories of rocks, including applications to chemically open-system behaviour. This approach is especially important in rocks affected by element fractionation due to fractional crystallisation and fluid loss during dehydration reactions. Furthermore, chemically open-system behaviour also has to be considered when studying fluid-rock interaction processes and when extracting information from compositionally zoned metamorphic minerals. In this Habilitationsschrift several publications are presented in which I incorporate such open-system behaviour in the forward models by incrementing the calculations and considering the changing reactive rock composition during metamorphism. I apply thermodynamic forward modelling incorporating the effects of element fractionation to a variety of geodynamic and geochemical applications in order to better understand lithosphere dynamics and mass transfer in solid rocks.
In three of the presented publications I combine thermodynamic forward models with trace element calculations in order to enlarge the application of geochemical numerical forward modeling. In these publications a combination of thermodynamic and trace element forward modeling is used to study and quantify processes in metamorphic petrology at spatial scales from µm to km. In the thermodynamic forward models I utilize Gibbs energy minimization to quantify mineralogical changes along a reaction path of a chemically open fluid/rock system. These results are combined with mass balanced trace element calculations to determine the trace element distribution between rock and melt/fluid during the metamorphic evolution. Thus, effects of mineral reactions, fluid-rock interaction and element transport in metamorphic rocks on the trace element and isotopic composition of minerals, rocks and percolating fluids or melts can be predicted.
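The mass-balanced distribution of a trace element among the stable phases predicted by Gibbs-energy minimization can be sketched with a standard batch-partitioning balance. This is a generic formulation with invented modes and partition coefficients, not the exact scheme of the publications:

```python
def distribute_trace_element(c_bulk, modes, partition):
    """Mass-balance distribution of a trace element among stable phases.

    c_bulk:    bulk-rock concentration [ppm]
    modes:     {phase: mass fraction} from the Gibbs-minimization step
    partition: {phase: D} partition coefficients relative to a reference
    Returns {phase: concentration [ppm]} such that
    sum(mode * concentration) recovers c_bulk (mass balance).
    """
    denom = sum(modes[p] * partition[p] for p in modes)
    return {p: c_bulk * partition[p] / denom for p in modes}

# Hypothetical garnet/matrix split for a heavy rare earth element:
modes = {"garnet": 0.10, "matrix": 0.90}
partition = {"garnet": 50.0, "matrix": 1.0}
conc = distribute_trace_element(5.0, modes, partition)
print(conc)  # garnet strongly enriched relative to the matrix
```

Repeating this balance at each increment of the reaction path, as the modes change, yields the predicted trace element evolution of rock, minerals and liberated fluid or melt.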
One of the included publications shows that trace element growth zonations in metamorphic garnet porphyroblasts can be used to obtain crucial information about the reaction path of the investigated sample. In order to interpret the major and trace element distribution and zoning patterns in terms of the reaction history of the samples, we combined thermodynamic forward models with mass-balance rare earth element calculations. Such combined thermodynamic and mass-balance calculations of the rare earth element distribution among the modelled stable phases yielded characteristic zonation patterns in garnet that closely resemble those in the natural samples. In that paper we show that garnet growth and trace element incorporation occurred in near thermodynamic equilibrium with the matrix phases during subduction, and that the rare earth element patterns in garnet exhibit distinct enrichment zones that fingerprint the minerals involved in the garnet-forming reactions.
In two of the presented publications I illustrate the capacities of combined thermodynamic-geochemical modeling based on examples relevant to mass transfer in subduction zones. The first example focuses on fluid-rock interaction in and around a blueschist-facies shear zone in felsic gneisses, where fluid-induced mineral reactions and their effects on boron (B) concentrations and isotopic compositions in white mica are modeled. In the second example, fluid release from a subducted slab and associated transport of B and variations in B concentrations and isotopic compositions in liberated fluids and residual rocks are modeled. I show that, combined with experimental data on elemental partitioning and isotopic fractionation, thermodynamic forward modeling unfolds enormous capacities that are far from exhausted.
In the publications presented in this Habilitationsschrift I compare the modeled results to geochemical data from natural minerals and rocks and demonstrate that the combination of thermodynamic and geochemical models enables a quantification of metamorphic processes and insights into element cycling that would otherwise have been unattainable.
Thus, the contributions to the scientific community presented in this Habilitationsschrift concern the fields of petrology, geochemistry and geochronology, but also ore geology, all of which use thermodynamic and geochemical models to solve various problems related to geo-materials.
Intracontinental deformation is usually the result of tectonic forces associated with distant plate collisions. In general, the evolution of mountain ranges and basins in this environment is strongly controlled by the distribution and geometries of preexisting structures. Thus, predictive models usually fail to forecast the deformation evolution in these kinds of settings. Detailed information on each range and basin-fill is therefore vital to comprehend the evolution of intracontinental mountain belts and basins. In this dissertation, I have investigated the complex Cenozoic tectonic evolution of the western Tien Shan in Central Asia, which is one of the most active intracontinental ranges in the world. The work presented here combines a broad array of datasets, including thermo- and geochronology, paleoenvironmental interpretations, sediment provenance, and subsurface interpretations, in order to track changes in tectonic deformation. Most of the identified changes are connected and can be related to regional-scale processes that governed the evolution of the western Tien Shan.
The NW-SE trending Talas-Fergana fault (TFF) separates the western from the central Tien Shan and constitutes a world-class example of the influence of preexisting anisotropies on the subsequent structural development of a contractional orogen. While to the east most ranges and basins have a sub-parallel E-W trend, the triangular-shaped Fergana basin forms a substantial feature in the western Tien Shan morphology, with ranges on all three sides. In this thesis, I present 55 new thermochronologic ages (apatite fission track and zircon (U-Th)/He) used to constrain the exhumation histories of several mountain ranges in the western Tien Shan. At the same time, I analyzed the Fergana basin-fill, looking for progressive changes in sedimentary paleoenvironments, source areas, and stratal geometries in the subsurface and in outcrops.
The data presented in this thesis suggest that low cooling rates (<1 °C Myr⁻¹), calm depositional environments, and low depositional rates (<10 m Myr⁻¹) were widely distributed across the western Tien Shan, describing a quiescent tectonic period throughout the Paleogene. Increased cooling rates in the late Cenozoic occurred diachronously and with variable magnitudes in different ranges. This rapid cooling stage is interpreted to represent increased erosion caused by active deformation and constrains the onset of Cenozoic deformation in the western Tien Shan. Time-temperature histories derived from the northwestern Tien Shan samples show an increase in cooling rates by ~25 Ma. This event is correlated with a synchronous pulse
in the South Tien Shan. I suggest that strike-slip motion along the TFF commenced at the Oligo-Miocene boundary, facilitating counterclockwise (CCW) rotation of the Fergana basin and enabling exhumation of the linked horsetail splays. Higher depositional rates (~150 m Myr⁻¹) in the Oligo-Miocene section (Massaget Fm.) of the Fergana basin suggest synchronous deformation in the surrounding ranges. The central Alai Range also experienced rapid cooling around this time, suggesting that the onset of intramontane basin fragmentation and isolation was coeval. These results point to deformation starting simultaneously in the late Oligocene – early Miocene in geographically distant mountain ranges. I suggest that these early uplifts were controlled by reactivated structures (like the TFF), which are probably the frictionally weakest and most suitably oriented for accommodating and transferring N-S horizontal shortening along the western Tien Shan.
Afterwards, in the late Miocene (~10 Ma), a period of renewed rapid cooling affected the Tien Shan, and most mountain ranges and inherited structures started to actively deform. This episode is widely distributed, and an increase in exhumation is inferred for most of the sampled ranges. Moreover, the Pliocene section in the basin subsurface shows the highest depositional rates (>180 m Myr⁻¹) and higher-energy facies. The increase in deformation and exhumation further contributed to intramontane basin partitioning. Overall, the interpretation is that the Tien Shan and much of Central Asia experienced a widespread increase in the rate of horizontal crustal shortening. Previously, stress transfer along the rigid Tarim block or Pamir indentation has been proposed to account for Himalayan hinterland deformation. However, the extent of this episode requires a different and broader geodynamic driver.
Kritische Anthropologie?
(2016)
This article compares Max Horkheimer’s and Theodor W. Adorno’s foundation of the Frankfurt Critical Theory with Helmuth Plessner’s foundation of Philosophical Anthropology. While Horkheimer’s and Plessner’s paradigms are mutually incompatible, Adorno’s „negative dialectics“ and Plessner’s „negative anthropology“ (G. Gamm) can be seen as complementing one another. Jürgen Habermas at one point sketched a complementary relationship between his own public-communicative theory of modern society and Plessner’s philosophy of nature and human expressivity; although he subsequently came to doubt this, he later reaffirmed it. Faced with the „life power“ in „high capitalism“ (Plessner), the ambitions for a public democracy in a pluralistic society have to be broadened from an argumentative focus (Habermas) to include the human condition and the expressive modes of our experience as essentially embodied persons. The article discusses some possible aspects of this complementarity under the title of a „critical anthropology“ (H. Schnädelbach).
This article is a response to calls in prior research that we need more longitudinal analyses to better understand the foundations of PSM and related prosocial values. There is wide agreement that it is crucial for theory-building but also for tailoring hiring practices and human resource development programs to sort out whether PSM-related values are stable or developable. The article summarizes existent theoretical expectations, which turn out to be partially conflicting, and tests them against multiple waves of data from the German Socio-Economic Panel Study which covers a time period of sixteen years. It finds that PSM-related values of public employees are stable rather than dynamic but tend to increase with age and decrease with organizational membership. The article also examines cohort effects, which have been neglected in prior work, and finds moderate evidence that there are differences between those born during the Second World War and later generations.
Precision horticulture encompasses site- or tree-specific management in fruit plantations. Spatially resolved data (that is, data for each individual tree) from the production site are of decisive importance, since they may enable customized and, therefore, resource-efficient production measures.
The present thesis involves an examination of the apparent electrical conductivity of the soil (ECa), the plant water status spatially measured by means of the crop water stress index (CWSI), and the fruit quality (e.g. fruit size) for Prunus domestica L. (plums) and Citrus x aurantium, Syn. Citrus paradisi (grapefruit). The goals of the present work were i) characterization of the 3D distribution of the apparent electrical conductivity of the soil and variability of the plant’s water status; ii) investigation of the interaction between ECa, CWSI, and fruit quality; and iii) an approach for delineating management zones with respect to managing trees individually.
To that end, the main investigations took place in a plum orchard. The plantation has a slope of 3° on Pleistocene and post-Pleistocene substrates in a semi-humid climate (Potsdam, Germany) and covers an area of 0.37 ha with 156 trees of the cultivar 'Tophit Plus' on a 'Wavit' rootstock. The plantation was established in 2009 with one- and two-year-old trees spaced 4 m apart along the irrigation line and 5 m between the rows. The trees were watered three times a week by a drip irrigation system positioned 50 cm above ground level, providing 1.6 l per tree per event. With the help of geoelectric measurements, the apparent electrical conductivity of the upper soil (0.25 m) was measured for each tree with an electrode spacing of 0.5 m (4-point light hp). In this manner, the plantation was spatially mapped with respect to the soil ECa. Additionally, tomography measurements were performed for 3D mapping of the soil ECa, and drilled cores with a profile of up to 1 m were spot-checked. Vegetative, generative, and fruit quality data were collected for each tree. The instantaneous plant water status was determined in spot checks with the established Scholander method for water potential analysis (Scholander pressure bomb) as well as by thermal imaging. An infrared camera (ThermaCam SC 500) was used for the thermal imaging, mounted on a tractor 3.3 m above ground level. The thermal images (320 x 240 px) of the canopy surface were taken with an aperture angle of 45° and a geometric resolution of 8.54 x 6.41 mm. The crop water stress index (CWSI) was calculated from the canopy temperature readings in the thermal images, cross-checked with manual temperature measurements of a dry and a wet reference leaf. Adjustments to the CWSI for measurements in a semi-humid climate were developed, whereby the reference temperatures were extracted automatically from the thermal images.
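The CWSI derived from canopy and reference temperatures follows the standard empirical definition, normalizing the canopy temperature between a fully transpiring (wet) and a non-transpiring (dry) reference. A minimal sketch (variable names are illustrative, not taken from the thesis):

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop water stress index from canopy temperature and wet/dry
    reference leaf temperatures (all in deg C).
    0 -> unstressed (fully transpiring), 1 -> stressed (non-transpiring)."""
    if t_dry <= t_wet:
        raise ValueError("dry reference must be warmer than wet reference")
    return (t_canopy - t_wet) / (t_dry - t_wet)

# Illustrative values, not measurements from the plum orchard:
print(cwsi(27.0, 24.0, 32.0))  # 0.375
```

In the setup described above, `t_wet` and `t_dry` would come from the automatically extracted reference temperatures in each thermal image.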
The rating (Bonitur) data were transformed into a normal distribution with the help of a variance-stabilizing transformation. The statistical analyses as well as the automatic evaluation routine were implemented in several MATLAB® scripts (R2010b and R2016a) and a free toolbox (spatialtoolbox). The hot-spot analysis served to check whether an observed spatial pattern is statistically significant. The method was evaluated against an established k-means analysis. To test the hot-spot analysis by comparison, data from a grapefruit plantation (Adana, Turkey) were collected, including soil ECa, trunk circumference, and yield data. That plantation had 179 trees on a soil of the Xerofluvent type with clay and clay-loam texture. The interaction between the critical values from the soil and plant water status information and the vegetative and generative plant growth variables was examined using ANOVA.
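Hot-spot analysis of this kind is commonly implemented as the Getis-Ord Gi* statistic; the following minimal sketch assumes that statistic (the text does not name it explicitly) and computes the z-score for one tree given a binary neighbourhood:

```python
import math

def getis_ord_gi_star(values, weights):
    """Getis-Ord Gi* z-score for one observation.
    values:  attribute (e.g. yield) for all n trees
    weights: that observation's row of the spatial weights matrix
             (here binary: self plus neighbours).
    Large positive -> hot spot, large negative -> cold spot."""
    n = len(values)
    xbar = sum(values) / n
    s = math.sqrt(sum(v * v for v in values) / n - xbar ** 2)
    w_sum = sum(weights)
    w_sq = sum(w * w for w in weights)
    num = sum(w * v for w, v in zip(weights, values)) - xbar * w_sum
    den = s * math.sqrt((n * w_sq - w_sum ** 2) / (n - 1))
    return num / den

# Toy example: tree 0 and its two neighbours carry high yields.
vals = [9.0, 8.5, 9.2, 2.0, 1.5, 2.2, 2.0, 1.8]
w = [1, 1, 1, 0, 0, 0, 0, 0]  # binary weights: self + 2 neighbours
print(round(getis_ord_gi_star(vals, w), 2))
```

Trees whose z-scores exceed a significance threshold would fall into the hot-spot or cold-spot zones, the rest into the random zone.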
The study indicates that the variability of soil and plant information in fruit production is high, even in small orchards. The spatial patterns found in the soil ECa remained constant over the years (r = 0.88 in 2011-2012 and r = 0.71 in 2012-2013). It was also demonstrated that CWSI determination is feasible in a semi-humid climate: a correlation (r = -0.65, p < 0.0001) with the established method of leaf water potential analysis was found. The interaction between the ECa from various depths and the plant variables produced a highly significant relationship with the topsoil, in which the irrigation system was located. A correlation between yield and the ECa of the topsoil of r = 0.52 was determined. By using the hot-spot analysis, extreme values in the spatial data could be identified. These extremes served to delineate zones (cold-spot, random, hot-spot). The random zone showed the highest correlation with the plant variables.
In summary, it may be said that the cumulative water use efficiency (WUEc) was enhanced at high crop load. While the CWSI had no effect on fruit quality, the interaction of CWSI and WUEc even outweighed the impact of soil ECa on fruit quality in the irrigated production system. In the plum orchard, irrigation was relevant for obtaining high-quality produce even in the semi-humid climate.
Kommunikative Vernunft
(2016)
Jürgen Habermas explicates the concept of communicative reason. He explains the key assumptions of the philosophy of language and social theory associated with this concept. Also discussed are the category of the life-world and the role of the body-mind difference for the consciousness of exclusivity in our access to subjective experience, as well as the role of emotions and perceptions in the context of a theory of communicative action. The question of the redemption of the various validity claims associated with the performance of speech acts is related to processes of social learning and to the role of negative experiences. Finally, the interview deals with the relationship between religion and reason and the importance of religion in modern, post-secular societies. Questions about the philosophical culture of our present times are discussed at the end of the conversation.
This article explores a recent performance of excerpts from T.S. Eliot’s Four Quartets (1935/36–1942) entitled Engaging Eliot: Four Quartets in Word, Color, and Sound as an example of live poetry. In this context, Eliot’s poem can be analysed as an auditory artefact that interacts strongly with other oral performances (welcome addresses and artists’ conversations), as well as with the musical performance of Christopher Theofanidis’s quintet “At the Still Point” at the end of the opening of Engaging Eliot. The event served as an introduction to a 13-day art exhibition and engaged in a re-evaluation of Eliot’s poem after 9/11: while its first part emphasises the connection between Eliot’s poem and Christian doctrine, its second part – especially the combination of poetry reading and musical performance – highlights the philosophical and spiritual dimensions of Four Quartets.
This article first outlines different ways in which psycholinguists have dealt with linguistic diversity and illustrates these approaches with three familiar cases from research on language processing, language acquisition, and language disorders. The second part focuses on the role of morphology and morphological variability across languages for psycholinguistic research. The specific phenomena examined concern stem-formation morphology and inflectional classes; they illustrate how experimental research informed by linguistic typology can lead to new insights.
The present study approaches the Spanish postposed constructions creo Ø and creo yo ‘[p], [I] think’ from a cognitive-constructionist perspective. It is argued that the two constructions are to be distinguished from one another because creo Ø has a subjective function, while in creo yo the intersubjective dimension is particularly prominent. The present investigation takes both a qualitative and a quantitative perspective. With regard to the latter, the problem of quantitative representativity is addressed. The discussion raises the question of how empirical research can feed back into theory, more precisely into the framework of Cognitive Construction Grammar. The data analyzed here are retrieved from the corpora Corpus de Referencia del Español Actual and Corpus del Español.
Complex networks are ubiquitous in nature and society. They appear in vastly different domains, for instance as social networks, biological interactions or communication networks. Yet in spite of their different origins, these networks share many structural characteristics. For instance, their degree distribution typically follows a power law. This means that the fraction of vertices of degree k is proportional to k^(−β) for some constant β, making these networks highly inhomogeneous. Furthermore, they also typically have high clustering, meaning that a link between two nodes is more likely to appear if they have a neighbor in common.
To mathematically study the behavior of such networks, they are often modeled as random graphs. Many popular models, like inhomogeneous random graphs or preferential attachment, excel at producing a power law degree distribution. Clustering, on the other hand, is either absent from these models or artificially enforced.
Hyperbolic random graphs bridge this gap by assuming an underlying geometry to the graph: Each vertex is assigned coordinates in the hyperbolic plane, and two vertices are connected if they are nearby. Clustering then emerges as a natural consequence: Two nodes joined by an edge are close by and therefore have many neighbors in common. On the other hand, the exponential expansion of space in the hyperbolic plane naturally produces a power law degree sequence. Due to the hyperbolic geometry, however, rigorous mathematical treatment of this model can quickly become mathematically challenging.
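The generative model just described can be sketched directly: sample polar coordinates in a hyperbolic disk of radius R and connect every pair whose hyperbolic distance is below R. A minimal, unoptimized sketch; the parameter names (α controlling the radial density, with β = 2α + 1) follow common conventions for this model, not the thesis itself:

```python
import math, random

def hyperbolic_random_graph(n, alpha, R):
    """Sample n vertices in a hyperbolic disk of radius R and connect
    pairs at hyperbolic distance < R. alpha controls the radial density
    (power-law exponent beta = 2*alpha + 1)."""
    pts = []
    for _ in range(n):
        theta = random.uniform(0, 2 * math.pi)
        # inverse-CDF sampling of the radial density ~ sinh(alpha * r)
        u = random.random()
        r = math.acosh(1 + u * (math.cosh(alpha * R) - 1)) / alpha
        pts.append((r, theta))
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            r1, t1 = pts[i]
            r2, t2 = pts[j]
            dtheta = math.pi - abs(math.pi - abs(t1 - t2))
            # hyperbolic law of cosines for the distance between the points
            arg = (math.cosh(r1) * math.cosh(r2)
                   - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
            if math.acosh(max(arg, 1.0)) < R:
                edges.append((i, j))
    return pts, edges

random.seed(1)
pts, edges = hyperbolic_random_graph(200, alpha=0.75, R=2 * math.log(200))
print(len(edges))
```

With R on the order of 2 ln n, vertices near the disk center get very high degree (the power-law tail), while peripheral vertices connect mostly to angular neighbors, which produces the clustering.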
In this thesis, we improve the understanding of hyperbolic random graphs by studying their structural and algorithmic properties. Our main contribution is threefold. First, we analyze the emergence of cliques in this model. We find that whenever the power-law exponent β satisfies 2 < β < 3, there exists a clique of polynomial size in n. On the other hand, for β ≥ 3, the size of the largest clique is logarithmic, which sharply contrasts with previous models, where the largest clique has constant size in this case. We also provide efficient algorithms for finding cliques if the hyperbolic node coordinates are known. Second, we analyze the diameter, i.e., the longest shortest path in the graph. We find
that it is of order O(polylog(n)) if 2 < β < 3 and O(log n) if β > 3. To complement
these findings, we also show that the diameter is of order at least Ω(log n). Third, we provide an algorithm for embedding a real-world graph into the hyperbolic plane using only its graph structure. To ensure a good quality of the embedding, we perform extensive computational experiments on generated hyperbolic random graphs. Further, as a proof of concept, we embed the Amazon product recommendation network and observe that products from the same category are mapped close together.
During the last decade, high-intensity interval training (HIIT) has been used as an alternative to endurance (END) exercise, since it requires less time to produce similar physiological adaptations. Previous literature has focused on HIIT-induced changes in aerobic metabolism and cardiorespiratory fitness; however, there are currently no studies focusing on its neuromuscular adaptations.
Therefore, this thesis aimed to compare the neuromuscular adaptations to HIIT and END after a two-week training intervention, using a novel technology called high-density surface electromyography (HDEMG) motor unit decomposition. The project consisted of two experiments, for which healthy young men (aged 18 to 35 years) were recruited. In experiment one, the reliability of HDEMG motor unit variables (mean discharge rate, peak-to-peak amplitude, conduction velocity, and discharge rate variability) was tested (Study 1), a new method to track the same motor units longitudinally was proposed (Study 2), and the level of low- (<5 Hz) and high-frequency (>5 Hz) motor unit coherence between the vastus medialis (VM) and lateralis (VL) knee extensor muscles was measured (Study 4). In experiment two, a two-week HIIT and END intervention was conducted in which cardiorespiratory fitness parameters (e.g. peak oxygen uptake) and motor unit variables from the VM and VL muscles were assessed pre and post intervention (Study 3).
The results showed that HDEMG is reliable to monitor changes in motor unit activity and also allows the tracking of the same motor units across different testing sessions. As expected, both HIIT and END improved cardiorespiratory fitness parameters similarly. However, the neuromuscular adaptations of the two types of training differed after the intervention: HIIT showed a significant increase in knee extensor muscle strength that was accompanied by increased VM and VL motor unit discharge rates and HDEMG amplitude at the highest force levels (50 and 70% of the maximum voluntary contraction force, MVC), while END training induced a marked increase in time to task failure at lower force levels (30% MVC), without any influence on HDEMG amplitude and discharge rates. Additionally, the results showed that the VM and VL muscles share most of their synaptic input, since they present a large amount of low- and high-frequency motor unit coherence, which can explain the findings of the training intervention, where both muscles showed similar changes in HDEMG amplitude and discharge rates.
Taken together, the findings of the current thesis show that despite similar improvements in cardiopulmonary fitness, HIIT and END induced opposite adjustments in motor unit behavior. These results suggest that HIIT and END show specific neuromuscular adaptations, possibly related to their differences in exercise load intensity and training volume.
This work reports new high-resolution imaging and spectroscopic observations of solar type III radio bursts at low radio frequencies in the range from 30 to 80 MHz. Solar type III radio bursts are understood as the result of the beam-plasma interaction of electron beams in the corona. The Sun provides a unique opportunity to study these plasma processes of an active star. Its activity appears in eruptive events like flares, coronal mass ejections, and radio bursts, which are all accompanied by enhanced radio emission. Therefore, solar radio emission carries important information about plasma processes associated with the Sun’s activity. Moreover, the Sun’s atmosphere is a unique plasma laboratory, with plasma processes under conditions not found in terrestrial laboratories. Because of the Sun’s proximity to Earth, it can be studied in greater detail than any other star, and new knowledge about the Sun can be transferred to other stars. This “solar stellar connection” is important for the understanding of processes on other stars.
The novel radio interferometer LOFAR provides imaging and spectroscopic capabilities to study these processes at low frequencies. Here it was used for solar observations.
LOFAR, the characteristics of its solar data, and the processing and analysis of the latter with the Solar Imaging Pipeline and the Solar Data Center are described. The Solar Imaging Pipeline is the central software that enables the use of LOFAR for solar observations; its development was a prerequisite for the analysis of solar LOFAR data and was realized here. Moreover, a new density model including heat conduction and Alfvén waves was developed, which provides the distance of radio bursts from the Sun on the basis of dynamic radio spectra.
Its application to the dynamic spectrum of a type III burst observed on March 16, 2016 by LOFAR shows a nonuniform radial propagation velocity of the radio emission. The analysis of an imaging observation of type III bursts on June 23, 2012 resolves a burst as a bright, compact region localized in the corona, propagating in the radial direction along magnetic field lines with an average velocity of 0.23c. A nonuniform propagation velocity is revealed here as well. A new beam model is presented that explains the nonuniform motion of the radio source as a propagation effect of an electron ensemble with a spread velocity distribution and rules out a monoenergetic electron distribution. The coronal electron number density is derived in the region from 1.5 to 2.5 R☉ and fitted with the newly developed density model, which determines the plasma density in the interplanetary space between Sun and Earth. The values correspond to a 1.25- and a 5-fold Newkirk model for harmonic and fundamental emission, respectively. In comparison with data from other radio instruments, the LOFAR data show a high sensitivity and resolution in space, time, and frequency.
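The mapping from an observed burst frequency to a coronal height used in such analyses rests on the electron plasma frequency combined with a density model. A minimal sketch using the classical N-fold Newkirk model (the thesis' own model, with heat conduction and Alfvén waves, is more elaborate):

```python
import math

def newkirk_density(r, fold=1.0):
    """Electron number density (cm^-3) of the N-fold Newkirk model
    at heliocentric distance r (in solar radii)."""
    return fold * 4.2e4 * 10 ** (4.32 / r)

def plasma_frequency(n_e):
    """Electron plasma frequency in MHz for density n_e in cm^-3."""
    return 8.98e-3 * math.sqrt(n_e)

def burst_height(f_mhz, fold=1.0, harmonic=1):
    """Invert the model: heliocentric distance (solar radii) at which
    emission at f_mhz arises, for fundamental (1) or harmonic (2) emission."""
    f_p = f_mhz / harmonic
    n_e = (f_p / 8.98e-3) ** 2
    return 4.32 / math.log10(n_e / (fold * 4.2e4))

# e.g. a 30 MHz burst, fundamental emission, 5-fold Newkirk model:
print(round(burst_height(30.0, fold=5.0, harmonic=1), 2))  # ~2.5 solar radii
```

Consistently with the text, the 30-80 MHz observing band maps onto roughly 1.5-2.5 R☉ for fundamental emission in a 5-fold Newkirk model.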
The new results from LOFAR’s high-resolution imaging spectroscopy are consistent with current theories of solar type III radio bursts and demonstrate LOFAR’s capability to track fast-moving radio sources in the corona. LOFAR solar data are found to be a valuable source for solar radio physics and open a new window for studying plasma processes associated with highly energetic electrons in the solar corona.
Proteins are natural polypeptides produced by cells; they can be found in both animals and plants, and possess a variety of functions. One of these functions is to provide structural support to the surrounding cells and tissues. For example, collagen (which is found in skin, cartilage, tendons and bones) and keratin (which is found in hair and nails) are structural proteins. When a tissue is damaged, however, the supporting matrix formed by structural proteins cannot always spontaneously regenerate. Tailor-made synthetic polypeptides can be used to help heal and restore tissue formation.
Synthetic polypeptides are typically synthesized by the so-called ring-opening polymerization (ROP) of α-amino acid N-carboxyanhydrides (NCA). Such synthetic polypeptides are generally non-sequence-controlled and thus less complex than proteins. As such, synthetic polypeptides are rarely as efficient as proteins in their ability to self-assemble and form hierarchical or structural supramolecular assemblies in water, and thus often require rational design. In this doctoral work, two types of amino acids, γ-benzyl-L/D-glutamate (BLG / BDG) and allylglycine (AG), were selected to synthesize a series of (co)polypeptides of different compositions and molar masses.
A new and versatile synthetic route to prepare polypeptides was developed, and its mechanism and kinetics were investigated. The polypeptide properties were thoroughly studied and new materials were developed from them. In particular, these polypeptides were able to aggregate (or self-assemble) in solution into microscopic fibres, very similar to those formed by collagen. By doing so, they formed robust physical networks and organogels which could be processed into high water-content, pH-responsive hydrogels. Particles with highly regular and chiral spiral morphologies were also obtained by emulsifying these polypeptides. Such polypeptides and the materials derived from them are, therefore, promising candidates for biomedical applications.
Extreme hydro-meteorological events, such as severe droughts or heavy rainstorms, constitute primary manifestations of climate variability and exert a critical impact on the natural environment and human society. This is particularly true for high-mountain areas, such as the eastern flank of the southern Central Andes of NW Argentina, a region impacted by deep convection processes that form the basis of extreme events, often resulting in floods, a variety of mass movements, and hillslope processes. This region is characterized by pronounced E-W gradients in topography, precipitation, and vegetation cover, spanning from humid, densely vegetated areas at low to medium elevations to arid, sparsely vegetated environments at high elevations. This strong E-W gradient is mirrored by differences in the efficiency of surface processes, which mobilize and transport large amounts of sediment through the fluvial system, from the steep hillslopes to the intermontane basins and further to the foreland. In a highly sensitive high-mountain environment like this, even small changes in the spatiotemporal distribution, magnitude, and rates of extreme events may strongly impact environmental conditions, anthropogenic activity, and the well-being of mountain communities and beyond. However, although the NW Argentine Andes comprise the catchments of the La Plata river, which traverses one of the most populated and economically most relevant areas of South America, there are only a few detailed investigations of climate variability and extreme hydro-meteorological events.
In this thesis, I focus on deciphering the spatiotemporal variability of rainfall and river discharge, with particular emphasis on extreme hydro-meteorological events in the subtropical southern Central Andes of NW Argentina during the past seven decades. I employ various methods to assess and quantify statistically significant trend patterns of rainfall and river discharge, integrating high-quality daily time series from gauging stations (40 rainfall and 8 river discharge stations) with gridded datasets (CPC-uni and TRMM 3B42 V7) for the period between 1940 and 2015. Evidence for a general intensification of the hydrological cycle at intermediate elevations (~0.5–3 km asl) on the eastern flank of the southern Central Andes is found in both the rainfall and the river-discharge time-series analyses for the period from 1940 to 2015. This intensification is associated with an increase in the annual total amount of rainfall and in the mean annual discharge. However, the most pronounced trends are found at high percentiles, i.e. for extreme hydro-meteorological events, particularly during the wet season from December to February. An important outcome of my studies is the recognition of a rapid increase in river discharge during the period between 1971 and 1977, most likely linked to the 1976-77 global climate shift, which is associated with North Pacific Ocean sea surface temperature variability. Interestingly, after this rapid increase, both rainfall and river discharge decreased at low and intermediate elevations along the eastern flank of the Andes. In contrast, during the same time interval, extensive areas at high elevations on the arid Puna de Atacama plateau recorded increasing annual rainfall totals. This has been associated with more intense extreme hydro-meteorological events from 1979 to 2014.
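Trend significance in such hydro-meteorological series is typically assessed with a rank-based test; a minimal sketch of the Mann-Kendall test follows (named here as an assumption, since the text does not specify which test was applied; normal approximation, no tie correction):

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns (S, z).
    S > 0 indicates an upward trend; |z| > 1.96 is significant at ~5%."""
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Illustrative series with a clear upward trend (not real discharge data):
series = [1.0, 1.3, 1.1, 1.6, 1.8, 1.7, 2.1, 2.4, 2.3, 2.9]
s, z = mann_kendall(series)
print(s, round(z, 2))
```

Applied per station and per percentile band, such a test yields exactly the kind of trend maps described above.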
This part of the study reveals that low-, intermediate, and high-elevation sectors in the Andes of NW Argentina respond differently to changing climate conditions.
Possible forcing mechanisms of the pronounced hydro-meteorological variability observed in the study area are also investigated. For the period between 1940 and 2015, I analyzed modes of oscillation of river discharge from small to medium drainage basins (10² to 10⁴ km²) located on the eastern flank of the orogen. First, I decomposed the relevant monthly time series using the Hilbert-Huang Transform, which is particularly appropriate for non-stationary time series that result from non-linear natural processes. I observed that in the study region discharge variability can be described by five quasi-periodic oscillatory modes on timescales varying from 1 to ~20 years. Secondly, I tested the link between river-discharge variations and large-scale climate modes of variability, using different climate indices, such as the BEST ENSO (Bivariate El Niño-Southern Oscillation Time-series) index. This analysis reveals that, although most of the variance on the annual timescale is associated with the South American Monsoon System, a relatively large part of river-discharge variability is linked to Pacific Ocean variability (PDO phases) at multi-decadal timescales (~20 years). To a lesser degree, river-discharge variability is also linked to the Tropical South Atlantic (TSA) sea surface temperature anomaly at multi-annual timescales (~2-5 years).
Taken together, these findings exemplify the high degree of sensitivity of high-mountain environments to climatic variability and change. This is particularly true for the topographic transitions between the humid low to moderate elevations and the semi-arid to arid highlands of the southern Central Andes. Even subtle changes in the hydro-meteorological regime of these areas of the mountain belt have major impacts on erosional hillslope processes and generate mass movements that fundamentally affect the transport capacity of mountain streams. With more severe storms in these areas, the fluvial system is characterized by pronounced variability of stream power on different timescales, leading to cycles of sediment aggradation, the loss of agriculturally used land, and severe impacts on infrastructure.
Solar-like stars maintain their magnetic fields thanks to a dynamo mechanism. The Babcock-Leighton dynamo is one possible mechanism, with the particularity that it requires magnetic flux tubes. Magnetic flux tubes are assumed to form at the bottom of the convective zone and to rise buoyantly to the surface. A delayed dynamo model has been suggested in which the delay accounts for the rise time of the magnetic flux tubes, a time that had been ignored by earlier studies.
The present thesis aims to study the applicability of the flux tube/Babcock-Leighton dynamo to other stars. To do so, we attempt to constrain the rise time of magnetic flux tubes by means of the first fully compressible MHD simulations of rising magnetic flux tubes in stratified rotating spherical shells.
Such simulations are restricted to an unrealistic parameter space; a scaling relation is therefore required to extrapolate the results to realistic physical regimes. We extended earlier work on 2D scaling relations and derived a general scaling law valid in both 2D and 3D. We then carried out two large series of numerical experiments and verified that the scaling law we derived indeed applies in the fully non-linear case. This allowed us to extract a constraint on the rise time of magnetic flux tubes that is valid for any solar-like star. We finally introduced this constraint into a delayed dynamo model.
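The extraction of such a scaling law from a series of runs typically reduces to a power-law fit in log-log space. A minimal sketch with fabricated data follows; the control parameter, exponent, and prefactor are invented for illustration and do not come from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical experiment: measured rise times t for several values of a
# control parameter b, assumed to follow t = C * b**(-alpha) up to scatter
b = np.logspace(-2, 0, 20)                 # control parameter of each run
alpha_true, C_true = 0.5, 3.0
t = C_true * b ** (-alpha_true) * (1.0 + 0.01 * rng.standard_normal(20))

# least-squares fit of log t = log C - alpha * log b
slope, intercept = np.polyfit(np.log(b), np.log(t), 1)
alpha_fit, C_fit = -slope, np.exp(intercept)
print(alpha_fit, C_fit)
```

The fitted exponent and prefactor recover the assumed values to within the imposed scatter, which is the sense in which a scaling law "collapses" a family of simulations onto one curve.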
By carrying out simulations of a mean-field, delayed, flux tube/Babcock-Leighton dynamo, we were able to identify a new dynamo regime resulting from the delay. This regime requires delays of about an entire cycle and exhibits subequipartition magnetic activity. The existence of this regime shows that even for long delays the flux tube/Babcock-Leighton dynamo can still deliver non-decaying solutions and remains a good candidate for a wide range of solar-like stars.
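The role of a delay term can be illustrated with a minimal delay differential equation integrated with a history buffer. This toy linear DDE only stands in for the (far richer) delayed dynamo model; the equation and all parameters are invented:

```python
import numpy as np

def integrate_dde(tau, decay=1.0, t_end=40.0, dt=0.001):
    """Euler integration of x'(t) = -decay * x(t - tau), with the constant
    history x(t) = 1 for t <= 0 stored at the front of the array."""
    n_delay = int(round(tau / dt))
    n_steps = int(round(t_end / dt))
    x = np.ones(n_delay + n_steps + 1)      # first n_delay+1 entries: history
    for i in range(n_delay, n_delay + n_steps):
        x[i + 1] = x[i] - dt * decay * x[i - n_delay]
    return x[n_delay:]                       # solution from t = 0 onward

# for this equation, delays below pi/2 give (oscillatory) decay, while
# larger delays destabilize the solution -- the qualitative point being
# that the delay itself changes the character of the dynamo solution
x = integrate_dde(tau=0.5)
print(abs(x[-1]))
```

Sweeping `tau` in such a model is how one probes whether long delays still admit non-decaying solutions.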
This thesis deals with the preparation and characterization of thermoresponsive films on gold electrodes by immobilizing a previously synthesized thermoresponsive polymer. Three different copolymers (polymers I, II, and III) from the group of thermally switchable poly(oligo(ethylene glycol) methacrylates) served as the basis for developing the responsive interface.
Turbidimetric measurements of the copolymers in solution showed that the cloud point depends on the pH value, the presence of salts, and the ionic strength of the solution. After the characterization of the polymers in solution, experiments on the covalent coupling of polymers I to III to the surface of the gold electrodes were carried out. While the coupling of polymers I and II was based on amide linkage, a photoinduced attachment with simultaneous cross-linking was chosen as an alternative immobilization method for polymer III. Successful coupling was verified electrochemically for all polymers by cyclic voltammetry and impedance spectroscopy in K3/4[Fe(CN)6] solutions. As ellipsometry measurements showed, the resulting polymer films differed in thickness. Coupling via amide linkage yielded thin films (10–15 nm), whereas the photo-cross-linked film was considerably thicker (70–80 nm) and insulated the underlying surface relatively well.
Electrochemical temperature experiments on polymer-modified surfaces in solutions containing K3/4[Fe(CN)6] showed that the immobilized polymers I to III also exhibit responsive temperature behavior. For electrodes with immobilized polymers I and II, the temperature dependence of the measured parameters is discontinuous: above a critical point (37 °C for polymer I and 45 °C for polymer II), the initially slow increase of the peak currents becomes markedly faster. The temperature behavior of polymer III, in contrast, is continuous up to 50 °C; here the peak current decreases throughout.
Furthermore, the electrodes based on polymers II and III were investigated with regard to their use as a responsive matrix for biorecognition reactions. Small bioreceptors, TAG peptides, were coupled to polymer II- and polymer III-modified electrodes. The hydrophilic FLAG-TAG peptide changes the temperature behavior of the polymer II film only insignificantly, since it does not affect the hydrophilicity of the network. In addition, the effect of coupling ANTI-FLAG-TAG antibodies to FLAG-TAG-modified polymer II films was examined. It could be shown that the antibodies bind specifically to FLAG-TAG-modified polymer II; no unspecific binding of ANTI-FLAG-TAG to polymer II was observed. The temperature experiments showed that the thermal restructuring of the polymer II-FLAG-TAG film still takes place after antibody coupling. The influence of the ANTI-FLAG-TAG coupling is small, since the difference in hydrophilicity between polymer II and FLAG-TAG or ANTI-FLAG-TAG is too small.
For the investigations with polymer III electrodes, the considerably more hydrophobic HA-TAG peptide was selected in addition to the hydrophilic FLAG-TAG peptide. As with the polymer II electrode, the coupled FLAG-TAG peptide influences the temperature behavior of the polymer III network only slightly. The measured current values are lower than for the bare polymer III electrode. The temperature behavior of the FLAG-TAG electrode resembles that of the pure polymer III electrode: the current values decrease continuously until a temperature of about 40 °C is reached, at which a plateau is observed. Evidently, FLAG-TAG does not substantially change the hydrophilicity of the polymer III network in this case either. The hydrophobic HA-TAG peptide coupled to polymer III electrodes, by contrast, strongly influences the swelling state of the network. The currents for the HA-TAG electrodes are markedly lower than those for the FLAG-TAG-polymer III electrodes, which is attributable to a lower water content and a thicker film. The current values already begin to rise at 30 °C, which is not observed for polymer III or polymer III-FLAG-TAG electrodes. The coupled hydrophobic HA-TAG peptide displaces water from the polymer III network, resulting in a compression of the film already at room temperature. As a result, the film hardly compresses any further as the temperature rises, and the current values increase in accordance with the temperature-dependent diffusion of the redox couple. These investigations show that the HA-TAG peptide is considerably better suited as an anchor molecule for a potential use of polymer III films for sensing purposes, since it differs markedly from polymer III in hydrophilicity.
Numerous reports of relatively rapid climate changes over the past century make a clear case for the impact of aerosols and clouds, which have been identified as the largest sources of uncertainty in climate projections. Earth's radiation balance is altered by aerosols depending on their size, morphology, and chemical composition. Competing effects in the atmosphere can be further studied by investigating the evolution of aerosol microphysical properties, which are the focus of the present work.
The aerosol size distribution, the refractive index, and the single scattering albedo are commonly used microphysical properties linked to aerosol type and radiative forcing. Highly advanced lidars (light detection and ranging) have turned aerosol monitoring and optical profiling into a routine process. Lidar data have been widely used to retrieve the size distribution through the inversion of the so-called Lorenz-Mie model (LMM). While this model offers a reasonable treatment for particles approximated as spheres, it does not provide a viable description for other naturally occurring, arbitrarily shaped particles, such as dust particles. On the other hand, non-spherical geometries as simple as spheroids reproduce certain optical properties with enhanced accuracy. Motivated by this, we adapt the LMM to accommodate the spheroid-particle approximation, introducing the notion of a two-dimensional (2D) shape-size distribution.
Inverting only a few optical data points to retrieve the shape-size distribution is classified as a non-linear ill-posed problem. A brief mathematical analysis is presented which reveals the inherent tendency toward highly oscillatory solutions, explores the available options for a generalized solution through regularization methods, and quantifies the ill-posedness. The latter improves our understanding of the main cause of instability in the produced solution spaces. The new approach facilitates the exploitation of additional lidar data points from depolarization measurements, associated with particle non-sphericity. However, the generalization of the LMM vastly increases the complexity of the problem. The underlying theory for the calculation of the involved optical cross sections (T-matrix theory) is so computationally costly that it would render a retrieval analysis impractical. Moreover, the discretization of the model equation by a 2D collocation method, proposed in this work, involves double integrations, which are additionally time-consuming. We overcome these difficulties by using precalculated databases and a sophisticated retrieval software package (SphInX: Spheroidal Inversion eXperiments) especially developed for our purposes, capable of performing multiple-dataset inversions and producing a wide range of microphysical retrieval outputs.
Hybrid regularization in conjunction with minimization processes is used as a basis for our algorithms. Synthetic data retrievals simulating various atmospheric scenarios are performed in order to test the efficiency of different regularization methods. A major concern here is the gap in the contemporary literature regarding full sets of uncertainties across a wide variety of numerical instances. To this end, the most appropriate methods are identified through a thorough analysis of their overall behavior with respect to accuracy and stability. The general trend of the initial size distributions is captured in our numerical experiments, and the reconstruction quality depends on the data error level. Moreover, the need for more or fewer depolarization points is explored for the first time from the point of view of the microphysical retrieval. Finally, our approach is tested in various measurement cases, giving further insight for future algorithm improvements.
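The regularization step at the heart of such retrievals can be sketched with plain Tikhonov damping on a deliberately ill-conditioned toy kernel. The kernel, grids, noise level, and regularization parameter below are illustrative assumptions, not the SphInX configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# discretized Fredholm integral equation: a smooth Gaussian kernel makes
# the inversion severely ill-posed
n = 50
s = np.linspace(0.0, 1.0, n)
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.05 ** 2)) / n
x_true = np.exp(-((s - 0.5) ** 2) / (2 * 0.1 ** 2))   # stand-in "size distribution"
b = K @ x_true + 1e-4 * rng.standard_normal(n)        # noisy "optical data"

# naive least squares amplifies the noise in the small singular components ...
x_naive = np.linalg.lstsq(K, b, rcond=None)[0]

# ... while Tikhonov regularization damps the oscillatory components:
# minimize ||K x - b||^2 + lam * ||x||^2
lam = 1e-6
x_reg = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ b)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(err_naive, err_reg)
```

In practice the regularization parameter is chosen by a parameter-choice rule rather than fixed a priori, and hybrid methods combine such damping with iterative projection onto a small subspace.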
Opportunities for promoting small and medium-sized enterprises through the design and practice of public procurement law
(2016)
The worthiness and eligibility of small and medium-sized enterprises (SMEs) for support is an economic policy concern across Europe. This is evidenced, on the one hand, by numerous provisions in primary, secondary, constitutional, and ordinary statutory law and, on the other hand, by the importance of SMEs in the economic and social fabric. Within the European Union, not only does the slogan "Vorfahrt für KMU" ("priority for SMEs") prevail, but the procurement directives adopted in spring 2014 also paid particular attention to improving SMEs' access to the public procurement market. For, measured against the steering and guiding potential of public procurement, its influence on the innovation activity of the economy, and its effects on economic activity and competition on the one hand, and the overall economic importance of SMEs on the other, SMEs remain underrepresented in procurement procedures despite numerous European and national initiatives. In addition to the opaque regulatory structure of German procurement law, SMEs face particular difficulties from the beginning to the end of the procurement procedure. These initial findings were taken as an occasion to re-examine the possibilities of promoting SMEs through the design and practice of procurement law.
Infants' lexical processing is modulated by featural manipulations made to words, suggesting that early lexical representations are sufficiently specified to establish a match with the corresponding label. However, the precise degree of detail in early words requires further investigation due to equivocal findings. We studied this question by assessing children’s sensitivity to the degree of featural manipulation (Chapters 2 and 3), and sensitivity to the featural makeup of homorganic and heterorganic consonant clusters (Chapter 4). Gradient sensitivity on the one hand and sensitivity to homorganicity on the other hand would suggest that lexical processing makes use of sub-phonemic information, which in turn would indicate that early words contain sub-phonemic detail. The studies presented in this thesis assess children’s sensitivity to sub-phonemic detail using minimally demanding online paradigms suitable for infants: single-picture pupillometry and intermodal preferential looking. Such paradigms have the potential to uncover lexical knowledge that may be masked otherwise due to cognitive limitations. The study reported in Chapter 2 obtained a differential response in pupil dilation to the degree of featural manipulation, a result consistent with gradient sensitivity. The study reported in Chapter 3 obtained a differential response in proportion of looking time and pupil dilation to the degree of featural manipulation, a result again consistent with gradient sensitivity. The study reported in Chapter 4 obtained a differential response to the manipulation of homorganic and heterorganic consonant clusters, a result consistent with sensitivity to homorganicity. These results suggest that infants' lexical representations are not only specific, but also detailed to the extent that they contain sub-phonemic information.
Taming the Unruly
(2016)
In school practice as in the scholarly literature, teachers' actions are ascribed a substantial influence on the quality of classroom teaching. Yet even though extensive normative ideas about good teaching exist, little is known about the reasons teachers have for their pedagogical actions. Teachers' actions can only be adequately understood if education is conceived, on the one hand, as the transmission of culture to the next generation and, on the other, as a process of self- and world-understanding originating in the learning subject. The resulting demands on teachers necessarily stand in contradiction to one another; this is especially true in a society with great cultural and social heterogeneity. In the search for relationships between personality, pedagogical knowledge, or competencies and classroom action, such action is frequently assumed to be determined by these factors and is reduced to cognitive aspects and characteristics oriented toward external norms. More fruitful for answering the question about reasons are studies that describe professionalism as a way of relating to a particular structural framework that is marked by contradictions and requires decisions within the tension fields of pedagogical relationships. Subject-scientific learning theory offers a basis for understanding learning in institutional contexts starting from the learning interests of the pupils. On this basis, teaching can be understood as supporting processes of self- and world-understanding through appreciation, understanding, and the offer of alternative horizons of meaning. Teachers' actions can then be understood as a meaning-giving response to the resulting, as well as institutional, demands by means of societal structures of meaning.
The acting subject makes sense of itself and the world with the help of meanings. These can be understood as reinterpretations of societal structures of meaning, owed to the particularities of biography, social position, and life situation. In the empirical procedure, by moving from sequential to comparative analyses, positionings can be reconstructed as thematically specific meaning-reason relations that reach beyond the concrete situation of action. From these, situation-independent structural moments of the object "teaching at vocational schools" as well as complex, situation-related subjective meaning-reason patterns are derived. With the help of additional theoretical lenses, the key categories "interpretive power" and "instrumental pedagogical relationship" can be developed from the empirical material as essential structural features. Since interpretive power depends on acceptance, and since in instrumental relationships a cooperative engagement with the object of teaching and learning occurs at best sporadically, these categories make it possible to understand asymmetric, metastable arrangements between a teacher and pupils. Empirically, interpretive power appears in the variants "absolute claim", "acceptance of fragility", and "acceptance of the legitimacy of being called into question". For the second key category, the variants "structural shaping", "unspecific, generally human character", and "external shaping" of the instrumental pedagogical relationship occur. The meaning-reason patterns partly show inconsistencies and transitions in the positionings with respect to the variants described. Only for some of the patterns can efforts toward appreciation and understanding of the pupils be plausibly derived; the same holds for an openness to revising the patterns.
Patterns such as "assertive-enduring readjustment", "directive-personalizing practice", or "regulating-flexible managing" are to be understood as modes of coping with the contingent pedagogical (conflict) situations to which the case descriptions refer. The respective teacher used this pattern in the case described, which, however, allows no statement about which patterns the teacher would draw on in other cases. The results of this thesis are suitable as a heuristic or theoretical lens that can support teachers in making sense of their own pedagogical actions, for instance in an in-service training designed as case consultation. Connections to other theoretical approaches to teachers' actions are possible, as is a changed classification of those approaches. The options for capturing such action through scholarly approaches are thereby expanded.
Eye movements serve as a window into ongoing visual-cognitive processes and can thus be used to investigate how people perceive real-world scenes. A key issue for understanding eye-movement control during scene viewing concerns the roles of central and peripheral vision, which process information differently and are therefore specialized for different tasks (object identification and peripheral target selection, respectively). Yet rather little is known about the contributions of central and peripheral processing to gaze control and how they are coordinated within a fixation during scene viewing. Additionally, the factors determining fixation durations have long been neglected, as scene perception research has mainly focused on the factors determining fixation locations. The present thesis aimed at increasing the knowledge of how central and peripheral vision contribute to spatial and, in particular, to temporal aspects of eye-movement control during scene viewing. In a series of five experiments, we varied processing difficulty in the central or the peripheral visual field by attenuating selective parts of the spatial-frequency spectrum within these regions. Furthermore, we developed a computational model of how foveal and peripheral processing might be coordinated for the control of fixation duration. The thesis provides three main findings. First, the experiments indicate that increasing processing demands in central or peripheral vision do not necessarily prolong fixation durations; instead, stimulus-independent timing is adapted when processing becomes too difficult. Second, peripheral vision seems to play a prominent role in the control of fixation durations, a notion also implemented in the computational model. The model assumes that foveal and peripheral processing proceed largely in parallel and independently during fixation, but can interact to modulate fixation duration.
Thus, we propose that the variation in fixation durations can in part be accounted for by the interaction between central and peripheral processing. Third, the experiments indicate that saccadic behavior largely adapts to processing demands, with a bias of avoiding spatial-frequency filtered scene regions as saccade targets. We demonstrate that the observed saccade amplitude patterns reflect corresponding modulations of visual attention. The present work highlights the individual contributions and the interplay of central and peripheral vision for gaze control during scene viewing, particularly for the control of fixation duration. Our results entail new implications for computational models and for experimental research on scene perception.
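The proposed parallel-but-interacting control of fixation duration can be caricatured with a toy simulation in which two gamma-stage timers run independently but are coupled when peripheral processing lags behind. The distributions, parameters, and coupling rule are invented for illustration and are not the thesis's computational model:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_fixations(n, fov_mean=200.0, per_mean=150.0, coupling=0.3):
    """Fixation ends when foveal processing finishes, but unfinished
    peripheral processing (target selection) prolongs it via the coupling."""
    shape = 9  # number of gamma stages -> realistically skewed durations (ms)
    foveal = rng.gamma(shape, fov_mean / shape, n)
    peripheral = rng.gamma(shape, per_mean / shape, n)
    return foveal + coupling * np.maximum(peripheral - foveal, 0.0)

easy = simulate_fixations(20000, per_mean=150.0)
hard = simulate_fixations(20000, per_mean=300.0)  # harder peripheral analysis
print(easy.mean(), hard.mean())
```

In such a scheme, mean fixation duration rises when peripheral processing is slowed, while the foveal timer alone sets the floor, which is one way independent streams can nevertheless jointly shape the observed durations.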
Water scarcity, adaptation to climate change, and risk assessment of droughts and floods are critical topics for science and society these days. Monitoring and modeling of the hydrological cycle are a prerequisite to understand and predict the consequences for weather and agriculture. As soil water storage plays a key role in partitioning water fluxes between the atmosphere, biosphere, and lithosphere, measurement techniques are required to estimate soil moisture states from small to large scales.
The method of cosmic-ray neutron sensing (CRNS) promises to close the gap between point-scale and remote-sensing observations, as its footprint was reported to be 30 ha. However, the methodology is rather young and requires highly interdisciplinary research to understand and interpret the response of neutrons to soil moisture. In this work, the signal of nine detectors has been systematically compared, and correction approaches have been revised to account for meteorological and geomagnetic variations. Neutron transport simulations have been consulted to precisely characterize the sensitive footprint area, which turned out to be 6--18 ha, highly local, and temporally dynamic. These results have been experimentally confirmed by the significant influence of water bodies and dry roads. Furthermore, mobile measurements on agricultural fields and across different land use types were able to accurately capture the various soil moisture states. It has been further demonstrated that the corresponding spatial and temporal neutron data can be beneficial for mesoscale hydrological modeling. Finally, first tests with a gyrocopter have proven the concept of airborne neutron sensing, where increased footprints are able to overcome local effects.
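The meteorological corrections revised in this work follow the multiplicative form commonly used in the CRNS literature, with factors for barometric pressure, air humidity, and incoming cosmic-ray intensity; the reference values and attenuation length below are illustrative assumptions:

```python
import math

def correct_neutron_counts(n_raw, pressure, abs_humidity, incoming,
                           p_ref=1013.25, h_ref=0.0, i_ref=150.0,
                           beta=130.0):
    """Multiplicative corrections of raw neutron counts, as commonly used
    in the CRNS literature: barometric pressure (hPa, attenuation length
    beta), absolute air humidity (g/m^3), and incoming cosmic-ray
    intensity (neutron-monitor counts). Reference values are illustrative."""
    f_p = math.exp((pressure - p_ref) / beta)     # pressure correction
    f_h = 1.0 + 0.0054 * (abs_humidity - h_ref)   # humidity correction
    f_i = i_ref / incoming                        # incoming-intensity correction
    return n_raw * f_p * f_h * f_i

# at reference conditions all factors equal 1, so the counts pass unchanged
print(correct_neutron_counts(1200.0, 1013.25, 0.0, 150.0))  # → 1200.0
```

Only after such corrections can the remaining variation in neutron counts be attributed to soil moisture, which is why systematic detector comparison and revised correction approaches matter.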
This dissertation not only bridges the gap between scales of soil moisture measurements. It also establishes a close connection between the two worlds of observers and modelers, and further aims to combine the disciplines of particle physics, geophysics, and soil hydrology to thoroughly explore the potential and limits of the CRNS method.
This two-wave longitudinal study examined how developmental changes in students' mastery goal orientation, academic effort, and intrinsic motivation were predicted by student-perceived motivational support (support for autonomy, competence, and relatedness) in secondary classrooms. The study extends previous knowledge showing that motivational support in class is related to students' intrinsic motivation, as it focused on the developmental changes of a set of different motivational variables and the relations of these changes to student-perceived motivational support in class. Thus, differential classroom effects on students' motivational development were investigated. A sample of 1088 German students was assessed at the beginning of the school year when students were in grade 8 (mean age = 13.70, SD = 0.53, 54% girls) and again at the end of the next school year when students were in grade 9. Results of latent change models showed a tendency toward decline in mastery goal orientation and a significant decrease in academic effort from grade 8 to 9. Intrinsic motivation did not decrease significantly across time. Student-perceived support of competence in class predicted the level of and change in students' academic effort. The findings emphasize that it is beneficial to create classroom learning environments that enhance students' perceptions of competence in class when aiming to enhance students' academic effort in secondary school classrooms.
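Latent change models estimate change at the construct level; as a crude observed-score stand-in, the two-wave change and its prediction from perceived support can be sketched as follows. The data are fabricated and the simple regression is only an illustration, not the study's analysis:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1088  # sample size as reported in the study

# fabricated two-wave data: effort declines on average, and the decline is
# weaker for students perceiving more competence support (support -> change)
support = rng.normal(0.0, 1.0, n)           # perceived competence support (z)
effort_t1 = rng.normal(3.2, 0.6, n)         # effort in grade 8
effort_t2 = effort_t1 - 0.25 + 0.15 * support + rng.normal(0.0, 0.4, n)

change = effort_t2 - effort_t1              # observed change score
# slope of change on perceived support (simple bivariate regression)
slope = np.cov(support, change)[0, 1] / np.var(support)
print(change.mean(), slope)
```

A negative mean change with a positive slope reproduces the reported pattern qualitatively: effort decreases overall, but less so where competence support is perceived to be high. Latent change models additionally separate true change from measurement error, which difference scores cannot do.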
Subcultures creating culture
(2016)
The purpose of this work is to apply the methods of textual semiotics to subcultures, in particular to the little-known glam subculture. Subcultures have been the main research field of the Birmingham Centre for Contemporary Cultural Studies, known for its interdisciplinary approach and for its focus on the creative aspects of subculture. Hebdige, in particular, introduced many semiotic elements into his work, such as aberrant decoding (after Eco) and cultural creativity via bricolage (after Lévi-Strauss). His definition of subculture as symbolic resistance has been criticized by subsequent post-subcultural researchers for its abstractness and lack of cohesion.
Semiotics was eventually expelled from the set of tools used in sociology for the analysis of subcultures. Nowadays, studies of subcultures have a strong ethnographic focus; owing to terminological proliferation and a descriptive approach, they are difficult to compare on a common basis.
Textual semiotics, through the concept of the semiosphere developed by Lotman, makes it possible to return to Hebdige's intuitions, organizing the semiotic elements already present in his work into a wider system of interpretation. The semiosphere offers a coherent theoretical horizon as a basis for further analysis and a new methodological perspective focused on the cultural. In this thesis, the work of Lotman is applied to the study of a subculture for the first time.
Dietary approaches contribute to the prevention and treatment of type 2 diabetes. High protein diets were shown to exert beneficial as well as adverse effects on metabolism. However, it is unclear whether the protein origin plays a role in these effects. The LeguAN study investigated in detail the effects of two high protein diets, either from plant or animal origin, in type 2 diabetic patients. Both diets contained 30 EN% protein, 40 EN% carbohydrates, and 30 EN% fat. Fiber content, glycemic index, and composition of dietary fats were similar in both diets. In comparison to previous dietary habits, the fat content was exchanged for protein, while the carbohydrate intake was not modified. Overall, both high protein diets led to improvements of glycemic control, insulin sensitivity, liver fat, and cardiovascular risk markers without remarkable differences between the protein types.
Fasting glucose together with indices of insulin resistance were ameliorated by both interventions to varying extents, but without significant differences between protein types. The decline in HbA1c was more pronounced in the plant protein group, whereas the improvement in insulin sensitivity was more pronounced in the animal protein group. The high protein intake had only a slight influence on postprandial metabolism, as seen for free fatty acids and indices of insulin secretion, sensitivity, and degradation. Except for GIP release, ingestion of animal and plant meals did not provoke differential metabolic and hormonal responses despite diverse circulating amino acid levels.
The animal protein diet led to a selective increase in fat-free mass and a decrease in total fat mass, which was not significantly different from the plant protein diet. Moreover, the high protein diets potently decreased liver fat content by 42% on average, which was linked to significantly diminished lipogenesis, free fatty acid flux, and lipolysis in adipose tissue. A moderate decline in circulating liver enzymes was induced by both interventions. The liver fat reduction was associated with improved glucose homeostasis and insulin sensitivity, which underlines the protective effect of the diets.
Blood lipid profiles improved in all subjects and were probably related to the lower fat intake. Reductions in uric acid and markers of inflammation further argued for metabolic benefits of both high protein diets. Systolic and diastolic blood pressure declined only in the plant protein (PP) group, pointing to a possible role of arginine.
Kidney function was not altered by high protein consumption over 6 weeks. The rapid decrease of serum creatinine in the PP group was noteworthy and should be further investigated. Protein type did not seem to play a role, but long-term studies are warranted to fully elucidate the safety of high protein regimens.
Varying the source of dietary protein did not affect the mTOR pathway in adipose tissue or blood cells in either acute or chronic settings. The enhancement of whole-body insulin sensitivity also suggested no alteration of mTOR signaling and no impairment of insulin sensitivity in skeletal muscle.
A remarkable outcome was the extensive reduction of FGF21, a critical regulator of metabolic processes, by approximately 50%, independently of protein type. Whether hepatic ER stress, ammonia flux, or rather macronutrient preferences lie behind this paradoxical finding remains to be investigated in detail.
Contrary to initial expectations and previous reports, the plant protein based diet had no clear advantage over animal proteins. The pronounced beneficial effect of animal protein on insulin homeostasis despite high BCAA and methionine intake was certainly unexpected, suggesting that more complex metabolic adaptations occur upon prolonged consumption. In addition, the reduced fat intake may have also contributed to the overall improvements in both groups.
Taking the above study results into account, a short-term diet containing 30 EN% protein (from either plant or animal origin), 40 EN% carbohydrates, and 30 EN% fat with a lower SFA content leads to metabolic improvements in diabetic patients, regardless of protein source.
The present work is a case study contributing to the major planning project "SuedLink". It is structured as follows: first, in a theoretical part, the underlying theories of social acceptance (Wüstenhagen et al., 2007), steps of participation (Münnich, 2014), and governance theory (Benz and Dose, 2011) are elaborated. Second, the relevant methods are discussed. Third, in a qualitative analytical part, the information gathered from the expert interviews is analyzed using the aforementioned theories. Fourth, an empirical quantitative analysis of data regarding public acceptance of SuedLink is presented.
In this case study, using qualitative and quantitative methods, two questions are answered: first, which governance aspects were relevant for the priority given to underground cables in the construction of high voltage direct current transmission lines? For this question, intensive document analysis and several expert interviews were conducted. Second, the central question of the present work is whether local and/or individual factors affect public acceptance of SuedLink. Here it is particularly interesting to analyze whether the priority use of underground cables affected people's acceptance of SuedLink. To answer both questions, an online survey was conducted among citizen initiatives, district administrators, and individuals on social media between March and July 2016. The data was then analyzed using descriptive quantitative methods. The data show that underground cables do not necessarily increase public acceptance (see also Menges and Beyer, 2013). On the contrary, individual and local criteria were relevant for the survey respondents: for example, the quality of participation, the distance between home and transmission lines, and the additional financial burden (taxes, higher electricity prices) were important for the evaluation. In addition, survey respondents who participated in citizen initiatives were more critical of the priority use of underground cables and of SuedLink in general. Likewise, residential homeowners rejected every form of transmission line.
Owing to the rising prevalence of metabolic disorders and diseases in the world population, medicine and the life sciences are increasingly searching for prevention strategies and targets that promote health, help prevent disease, and thereby also ease the overall burden on health-care systems. Nutrition is regarded as one such target, since the consumption of saturated fats in particular appears to affect health adversely. What is often overlooked, however, is that many studies do not adequately separate the effects of high-fat diets from those of a hypercaloric energy intake exceeding demand, so the evidence on the influence of (saturated) fats on metabolism at constant energy intake is still insufficient.
In the NUtriGenomic Analysis in Twins (NUGAT) study, 46 twin pairs (34 monozygotic, 12 dizygotic) were first standardized in their dietary behavior for six weeks on a carbohydrate-rich, low-fat diet following the guidelines of the German Nutrition Society (Deutsche Gesellschaft für Ernährung), before switching to a low-carbohydrate, high-fat diet rich in particular in saturated fats for another six weeks. Both diets were adjusted to the participants’ individual energy requirements, so that metabolic changes attributable to the increased intake of (saturated) fats could be observed both acutely after one week and in the longer term after six weeks.
The data sets generated through the detailed characterization of the participants on the clinical examination days were analyzed with statistical and mathematical methods (e.g. linear mixed models) suited to the size of the data sets and thus to their information content.
It could be shown that the metabolically healthy and relatively young participants, who showed good compliance, were able to adapt with respect to their glucose metabolism: the acute response after one week in fasting insulin and in the insulin resistance index was compensated over the following five weeks.
Lipid metabolism, in terms of classical markers such as total cholesterol, LDL, and HDL, was affected more strongly and remained clearly elevated even after six weeks in total.
The latter finding supports the observation in the transcriptome of white subcutaneous adipose tissue, in which an activation of subclinical inflammation mediated via Toll-like receptors and the inflammasome was observed.
The changes in the concentration and composition of the plasma lipidome likewise showed only a partial counter-regulation restricted to certain species.
It can therefore be concluded that even the isocaloric intake of (saturated) fats leads to changes in metabolism, although the consequences need to be examined more closely in further (long-term) studies and experiments. Of particular interest would be a longer period under isocaloric conditions and the study of participants with pre-existing metabolic conditions (e.g. insulin resistance).
Beyond this, NUGAT also showed that nutrigenetics and nutrigenomics are two factors that cannot be neglected: among other findings, the concentrations of several lipid species showed strong heritability and dependence on the diet.
Moreover, the results suggest that current and planned prevention strategies and medical treatments must involve the patient as an individual much more strongly, since the data analysis identified inter-individual differences and provided evidence that some participants could compensate the adverse metabolic effects of a high-fat diet better than others.
The Case of Rachel Dolezal
(2016)
Until 2015, the American Rachel Dolezal was known as an African American. As an activist of the National Association for the Advancement of Colored People she campaigned for the rights of the African American population, lived in a Black community, and taught African American studies at a university. “I identify as black,” she replied when an American television host asked whether she was African American. Her colleagues and her immediate circle likewise identified her as such. Only when regional journalists took notice of her and her parents spoke out did it become clear that Dolezal is in fact a white woman. Dolezal’s parents confirmed this by publishing childhood photographs of a light-skinned, blonde Rachel. Dolezal’s conduct subsequently sparked a lively media debate about her person in the context of ethnicity and »race«.
The author takes up Dolezal’s case as an example to pursue the question of whether a doing race at will is possible. May Dolezal identify as Black even though she has no African ancestors? Which societal stocks of knowledge constrain this choice, and what consequences follow from it? The author pursues these questions through a discourse analysis of American newspaper articles. Here, »race« and ethnicity are treated as social constructions, based on the concept of Stephen Cornell and Douglas Hartmann.
The optical properties of semiconductor nanocrystals (SC NCs) are largely controlled by their size and surface chemistry, i.e., the chemical composition and thickness of inorganic passivation shells and the chemical nature and number of surface ligands as well as the strength of their bonds to surface atoms. The latter is particularly important for CdTe NCs, which – together with alloyed CdxHg1−xTe – are the only SC NCs that can be prepared in water in high quality without the need for an additional inorganic passivation shell. Aiming at a better understanding of the role of stabilizing ligands for the control of the application-relevant fluorescence features of SC NCs, we assessed the influence of two of the most commonly used monodentate thiol ligands, thioglycolic acid (TGA) and mercaptopropionic acid (MPA), on the colloidal stability, photoluminescence (PL) quantum yield (QY), and PL decay behavior of a set of CdTe NC colloids. As an indirect measure for the strength of the coordinative bond of the ligands to SC NC surface atoms, the influence of the pH (pD) and the concentration on the PL properties of these colloids was examined in water and D2O and compared to the results from previous dilution studies with a set of thiol-capped Cd1−xHgxTe SC NCs in D2O. As a prerequisite for these studies, the number of surface ligands was determined photometrically at different steps of purification after SC NC synthesis with Ellman's test. Our results demonstrate ligand control of the pH-dependent PL of these SC NCs, with MPA-stabilized CdTe NCs being less prone to luminescence quenching than TGA-capped ones. For both types of CdTe colloids, ligand desorption is more pronounced in H2O compared to D2O, underlining also the role of hydrogen bonding and solvent molecules.
The collision of bathymetric anomalies, such as oceanic spreading centers, at convergent plate margins can profoundly affect subduction dynamics, magmatism, and the structural and geomorphic evolution of the overriding plate. The Southern Patagonian Andes of South America are a prime example for sustained oceanic ridge collision and the successive formation and widening of an extensive asthenospheric slab window since the Middle Miocene. Several of the predicted upper-plate geologic manifestations of such deep-seated geodynamic processes have been studied in this region, but many topics remain highly debated. One of the main controversial topics is the interpretation of the regional low-temperature thermochronology exhumational record and its relationship with tectonic and/or climate-driven processes, ultimately manifested and recorded in the landscape evolution of the Patagonian Andes. The prominent along-strike variance in the topographic characteristics of the Andes, combined with coupled trends in low-temperature thermochronometer cooling ages have been interpreted in very contrasting ways, considering either purely climatic (i.e. glacial erosion) or geodynamic (slab-window related) controlling factors.
This thesis focuses on two main aspects of these controversial topics. First, based on field observations and bedrock low-temperature thermochronology data, the thesis addresses an existing research gap with respect to the neotectonic activity of the upper plate in response to ridge collision - a mechanism that has been shown to affect the upper plate topography and exhumational patterns in similar tectonic settings. Secondly, the qualitative interpretation of my new and existing thermochronological data from this region is extended by inverse thermal modelling to define thermal histories recorded in the data and evaluate the relative importance of surface vs. geodynamic factors and their possible relationship with the regional cooling record.
My research is centered on the Northern Patagonian Icefield (NPI) region of the Southern Patagonian Andes. This site is located inboard of the present-day location of the Chile Triple Junction - the juncture between the colliding Chile Rise spreading center and the Nazca and Antarctic Plates along the South American convergent margin. As such, this study area represents the region of most recent oceanic-ridge collision and associated slab window formation. Importantly, this location also coincides with the abrupt rise in summit elevations and relief characteristics in the Southern Patagonian Andes. Field observations, based on geological, structural and geomorphic mapping, are combined with bedrock apatite (U-Th)/He and apatite fission track (AHe and AFT) cooling ages sampled along elevation transects across the orogen. These new data reveal the existence of hitherto unrecognized neotectonic deformation along the flanks of the range capped by the NPI.
This deformation is associated with the closely spaced oblique collision of successive oceanic-ridge segments in this region over the past 6 Ma. I interpret that this has caused a crustal-scale partitioning of deformation and the decoupling, margin-parallel migration, and localized uplift of a large crustal sliver (the NPI block) along the subduction margin. The location of this uplift coincides with a major increase of summit elevations and relief at the northern edge of the NPI massif. This mechanism is compatible with possible extensional processes along the topographically subdued trailing edge of the NPI block as documented by very recent and possibly still active normal faulting. Taken together, these findings suggest a major structural control on short-wavelength variations in topography in the Southern Patagonian Andes - the region affected by ridge collision and slab window formation.
The second research topic addressed here focuses on using my new and existing bedrock low-temperature cooling ages in forward and inverse thermal modeling. The data was implemented in the HeFTy and QTQt modeling platforms to constrain the late Cenozoic thermal history of the Southern Patagonian Andes in the upper-plate sectors of most recent ridge collision. The data set combines AHe and AFT data from three elevation transects in the region of the Northern Patagonian Icefield. Previous similar studies invoked far-reaching thermal effects of the approaching ridge collision and slab window to explain patterns of Late Miocene reheating in the modelled thermal histories. In contrast, my results show that the currently available data can be explained with a simpler thermal history than previously proposed. Accordingly, a reheating event is not needed to reproduce the observations. Instead, the analyzed ensemble of modelled thermal histories defines protracted Late Miocene cooling and Pliocene-to-recent stepwise exhumation. These findings agree with the geological record of this region. Specifically, this record indicates an Early Miocene phase of active mountain building associated with surface uplift and an active fold-and-thrust belt, followed by a period of stagnating deformation, peneplanation, and a lack of synorogenic deposition in the Patagonian foreland. The subsequent period of stepwise exhumation likely resulted from a combination of pulsed glacial erosion and coeval neotectonic activity. The differences between the present and the previously published interpretations of the cooling record can be traced back to important inconsistencies in the previously used model setups. These include mainly the insufficient convergence of the models and improper assumptions regarding the geothermal conditions in the region.
This analysis puts a methodological emphasis on the prime importance of the model setup and the need for its thorough examination to evaluate the robustness of the final outcome.
In this paper we report an experimental and computational study of liquid acetonitrile (H3C–C≡N) by resonant inelastic X-ray scattering (RIXS) at the N K-edge. The experimental spectra exhibit clear signatures of the electronic structure of the valence states at the N site, and a dependence on the incident-beam polarization is observed as well. Moreover, we find fine structure in the quasielastic line that is assigned to finite scattering duration and nuclear relaxation. We present a simple, computationally light model for the RIXS maps and analyze the experimental data using this model combined with ab initio molecular dynamics simulations. In addition to polarization-dependence and scattering-duration effects, we pinpoint the effects of different types of chemical bonding on the RIXS spectrum and conclude that the H2C–C=NH isomer, suggested in the literature, does not exist in detectable quantities. We study solution effects on the scattering spectra with simulations in the liquid and in vacuum. The presented model for RIXS proved to be light enough to allow phase-space sampling and still accurate enough for the identification of transition lines in physical-chemistry research by RIXS.
The excitation of localized surface plasmons in noble metal nanoparticles (NPs) results in different nanoscale effects such as electric field enhancement, the generation of hot electrons and a temperature increase close to the NP surface. These effects are typically exploited in diverse fields such as surface-enhanced Raman scattering (SERS), NP catalysis and photothermal therapy (PTT). Halogenated nucleobases are applied as radiosensitizers in conventional radiation cancer therapy due to their high reactivity towards secondary electrons. Here, we use SERS to study the transformation of 8-bromoadenine (8BrA) into adenine on the surface of Au and AgNPs upon irradiation with a low-power continuous wave laser at 532, 633 and 785 nm, respectively. The dissociation of 8BrA is ascribed to a hot-electron transfer reaction and the underlying kinetics are carefully explored. The reaction proceeds within seconds or even milliseconds. Similar dissociation reactions might also occur with other electrophilic molecules, which must be considered in the interpretation of respective SERS spectra. Furthermore, we suggest that hot-electron transfer induced dissociation of radiosensitizers such as 8BrA can be applied in the future in PTT to enhance the damage of tumor tissue upon irradiation.
Macrocycles based on L-cystine were synthesized by ring-closing metathesis (RCM) and subsequently polymerized by entropy-driven ring-opening metathesis polymerization (ED-ROMP). Monomer conversion reached ∼80% in equilibrium, and the produced poly(ester-amine-disulfide-alkene)s exhibited apparent weight-average molar masses (M_w,app) of up to 80 kDa and dispersities (Đ) of ∼2. The polymers can be further functionalized with acid anhydrides and degraded by reductive cleavage of the main-chain disulfides.
Savannas cover a broad geographical range across continents and are a biome best described by a mix of herbaceous and woody plants. The former create a more or less continuous layer while the latter should be sparse enough to leave an open canopy. What has long intrigued ecologists is how these two competing plant life forms of vegetation coexist.
Initially attributed to resource competition, coexistence was considered the stable outcome of a root niche differentiation between trees and grasses. The importance of environmental factors became evident later, when data from moister environments demonstrated that tree cover was often lower than what the rainfall conditions would allow for. Our current understanding relies on the interaction of competition and disturbances in space and time. Hence, the influence of grazing and fire and the corresponding feedbacks they generate have been keenly investigated. Grazing removes grass cover, initiating a self-reinforcing process propagating tree cover expansion. This is known as the encroachment phenomenon. Fire, on the other hand, imposes a bottleneck on the tree population by halting the recruitment of young trees into adulthood. Since grasses fuel fires, a feedback linking grazing, grass cover, fire, and tree cover is created. In African savannas, which are the focus of this dissertation, these feedbacks play a major role in the dynamics.
The importance of these feedbacks came into sharp focus when the notion of alternative states began to be applied to savannas. Alternative states in ecology arise when different states of an ecosystem can occur under the same conditions. According to this, an open savanna and a tree-dominated savanna can be classified as alternative states, since both can occur under the same climatic conditions. The aforementioned feedbacks are critical in the creation of alternative states. The grass-fire feedback can preserve an open canopy as long as fire intensity and frequency remain above a certain threshold. Conversely, crossing a grazing threshold can force an open savanna to shift to a tree-dominated state. Critically, transitions between such alternative states can produce hysteresis, where a return to pre-transition conditions will not suffice to restore the ecosystem to its original state.
In the chapters that follow, I will cover aspects relating to the coexistence mechanisms and the role of feedbacks in tree-grass interactions. Coming back to the coexistence question, due to the overwhelming focus on competition and disturbance another important ecological process was neglected: facilitation. Therefore, in the first study within this dissertation I examine how facilitation can expand the tree-grass coexistence range into drier conditions. For the second study I focus on another aspect of savanna dynamics which remains underrepresented in the literature: the impacts of inter-annual rainfall variability upon savanna trees and the resilience of the savanna state. In the third and final study within this dissertation I approach the well-researched encroachment phenomenon from a new perspective: I search for an early warning indicator of the process to be used as a prevention tool for savanna conservation. In order to perform all this work I developed a mathematical ecohydrological model of Ordinary Differential Equations (ODEs) with three variables: soil moisture content, grass cover and tree cover.
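A model of this general shape can be sketched as a small system of coupled ODEs. The functional forms, coefficients, and parameter names below are illustrative assumptions for a minimal sketch, not the dissertation's actual ecohydrological model:

```python
# Minimal illustrative sketch of a three-variable savanna model (soil
# moisture, grass cover, tree cover) as coupled ODEs. All functional
# forms and parameter values are hypothetical stand-ins, NOT the
# dissertation's calibrated equations.
from scipy.integrate import solve_ivp

def savanna(t, y, rain=0.6, grazing=0.2, fire=0.3):
    s, g, w = y  # soil moisture content, grass cover, woody (tree) cover
    ds = rain * (1.0 - s) - s * (0.5 * g + 0.4 * w)   # infiltration minus plant water uptake
    dg = 0.8 * s * g * (1.0 - g - w) - grazing * g    # moisture-limited growth minus grazing losses
    dw = 0.4 * s * w * (1.0 - w) - fire * g * w       # tree growth minus grass-fuelled fire losses
    return [ds, dg, dw]

# Integrate from an initial open-savanna state to a long-run state.
sol = solve_ivp(savanna, (0.0, 200.0), [0.5, 0.4, 0.1])
s_eq, g_eq, w_eq = sol.y[:, -1]
print(f"long-run state: moisture={s_eq:.2f}, grass={g_eq:.2f}, trees={w_eq:.2f}")
```

In a structure like this, the grazing and fire parameters are the natural levers for exploring the disturbance feedbacks discussed above, e.g. by sweeping `grazing` to look for alternative stable states.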
Facilitation: Results showed that the removal of grass cover through grazing was detrimental to trees under arid conditions, contrary to expectation based on resource competition. The reason was that grasses preserved moisture in the soil through infiltration and shading, thus ameliorating the harsh conditions for trees in accordance with the Stress Gradient Hypothesis. The exclusion of grasses from the model further demonstrated this: tree cover was lower in the absence of grasses, indicating that the benefits of grass facilitation outweighed the costs of grass competition for trees. Thus, facilitation expanded the climatic range where savannas persisted into drier conditions.
Rainfall variability: By adjusting the model to current rainfall patterns in East Africa, I simulated conditions of increasing inter-annual rainfall variability for two distinct mean rainfall scenarios: semi-arid and mesic. Alternative states of tree-less grassland and tree-dominated savanna emerged in both cases. Increasing variability reduced semi-arid savanna tree cover to the point that at high variability the savanna state was eliminated, because variability intensified resource competition and strengthened the fire disturbance during high rainfall years. Mesic savannas, on the other hand, became more resilient along the variability gradient: increasing rainfall variability created more opportunities for the rapid growth of trees to overcome the fire disturbance, boosting the chances of savannas persisting and thus increasing mesic savanna resilience.
Preventing encroachment: The breakdown in the grass-fire feedback caused by heavy grazing promoted the expansion of woody cover. This could be irreversible due to the presence of alternative states of encroached and open savanna, which I found along a simulated grazing gradient. When I simulated different short term heavy grazing treatments followed by a reduction to the original grazing conditions, certain cases converged to the encroached state. Utilising woody cover changes only during the heavy grazing treatment, I developed an early warning indicator which identified these cases with a high risk of such hysteresis and successfully distinguished them from those with a low risk. Furthermore, after validating the indicator on encroachment data, I demonstrated that it appeared early enough for encroachment to be prevented through realistic grazing-reduction treatments.
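The logic of an indicator built from woody cover changes during the treatment can be illustrated as follows; the simple relative-growth-rate criterion and the threshold value are hypothetical stand-ins, not the dissertation's validated indicator:

```python
# Hypothetical sketch of an encroachment early-warning check: flag a
# high hysteresis risk when woody cover increases faster than some
# threshold during a short heavy-grazing treatment. Both the criterion
# (mean annual cover change) and the threshold are illustrative
# assumptions, NOT the dissertation's calibrated indicator.
def encroachment_risk(cover_series, years, threshold=0.02):
    """Classify risk from woody-cover observations over a treatment of `years` years."""
    rate = (cover_series[-1] - cover_series[0]) / years  # mean annual change in cover fraction
    return "high" if rate > threshold else "low"

print(encroachment_risk([0.15, 0.18, 0.24], years=2))  # rapid expansion during treatment
print(encroachment_risk([0.15, 0.15, 0.16], years=2))  # near-static woody cover
```

The point of such a tool is that it only uses data observable during the treatment itself, so a manager could reduce grazing before the system commits to the encroached state.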
Though this dissertation is rooted in the theory of savanna dynamics, its results can have significant applications in savanna conservation. Facilitation has only recently become a topic of interest within the savanna literature. Given the threat of increasing droughts and a general anticipation of drier conditions in parts of Africa, insights stemming from this research may provide clues for preserving arid savannas. The impacts of rainfall variability on savannas have not yet been thoroughly studied either; conflicting results appear owing to the lack of a robust theoretical understanding of plant interactions under variable conditions. My work and other recent studies argue that such conditions may increase the importance of fast resource acquisition, creating a ‘temporal niche’. Woody encroachment has been extensively studied as a phenomenon, though not from the perspective of its early identification and prevention. The development of an encroachment forecasting tool, such as the one presented in this work, could protect both the savanna biome and the societies that depend on it for (economic) survival. All the studies which follow are bound by the attempt to broaden the horizons of savanna-related research in order to deal with extreme conditions and phenomena; be it through the enhancement of the coexistence debate, the study of an imminent external threat, or the development of a management-oriented tool for the conservation of savannas.
We present new experimental data of the low-temperature metastable region of liquid water derived from high-density synthetic fluid inclusions (996–916 kg m−3) in quartz. Microthermometric measurements include: (i) prograde (upon heating) and retrograde (upon cooling) liquid–vapour homogenisation. We used single ultrashort laser pulses to stimulate vapour bubble nucleation in initially monophase liquid inclusions. Water densities were calculated based on prograde homogenisation temperatures using the IAPWS-95 formulation. We found retrograde liquid–vapour homogenisation temperatures in excellent agreement with IAPWS-95. (ii) Retrograde ice nucleation. Raman spectroscopy was used to determine the nucleation of ice in the absence of the vapour bubble. Our ice nucleation data in the doubly metastable region are inconsistent with the low-temperature trend of the spinodal predicted by IAPWS-95, as liquid water with a density of 921 kg m−3 remains in a homogeneous state during cooling down to a temperature of −30.5 °C, where it is transformed into ice whose density corresponds to zero pressure. (iii) Ice melting. Ice melting temperatures of up to 6.8 °C were measured in the absence of the vapour bubble, i.e. in the negative pressure region. (iv) Spontaneous retrograde and, for the first time, prograde vapour bubble nucleation. Prograde bubble nucleation occurred upon heating at temperatures above ice melting. The occurrence of prograde and retrograde vapour bubble nucleation in the same inclusions indicates a maximum of the bubble nucleation curve in the ϱ–T plane at around 40 °C. The new experimental data represent valuable benchmarks to evaluate and further improve theoretical models describing the p–V–T properties of metastable water in the low-temperature region.
We present results on ultrafast gas electron diffraction (UGED) experiments with femtosecond resolution using the MeV electron gun at SLAC National Accelerator Laboratory. UGED is a promising method to investigate molecular dynamics in the gas phase because electron pulses can probe the structure with a high spatial resolution. Until recently, however, it was not possible for UGED to reach the relevant timescale for the motion of the nuclei during a molecular reaction. Using MeV electron pulses has allowed us to overcome the main challenges in reaching femtosecond resolution, namely delivering short electron pulses on a gas target, overcoming the effect of velocity mismatch between pump laser pulses and the probe electron pulses, and maintaining a low timing jitter. At electron kinetic energies above 3 MeV, the velocity mismatch between laser and electron pulses becomes negligible. The relativistic electrons are also less susceptible to temporal broadening due to the Coulomb force. One of the challenges of diffraction with relativistic electrons is that the small de Broglie wavelength results in very small diffraction angles. In this paper we describe the new setup and its characterization, including capturing static diffraction patterns of molecules in the gas phase, finding time-zero with sub-picosecond accuracy and first time-resolved diffraction experiments. The new device can achieve a temporal resolution of 100 fs root-mean-square, and sub-angstrom spatial resolution. The collimation of the beam is sufficient to measure the diffraction pattern, and the transverse coherence is on the order of 2 nm. Currently, the temporal resolution is limited both by the pulse duration of the electron pulse on target and by the timing jitter, while the spatial resolution is limited by the average electron beam current and the signal-to-noise ratio of the detection system. We also discuss plans for improving both the temporal resolution and the spatial resolution.
In this contribution, we study using first principles the co-adsorption and catalytic behaviors of CO and O2 on a single gold atom deposited at defective magnesium oxide surfaces. Using cluster models and point charge embedding within a density functional theory framework, we simulate the CO oxidation reaction for Au1 on differently charged oxygen vacancies of MgO(001) to rationalize its experimentally observed lack of catalytic activity. Our results show that: (1) co-adsorption is weakly supported at F0 and F2+ defects but not at F1+ sites, (2) electron redistribution from the F0 vacancy via the Au1 cluster to the adsorbed molecular oxygen weakens the O2 bond, as required for a sustainable catalytic cycle, (3) a metastable carbonate intermediate can form on defects of the F0 type, (4) only a small activation barrier exists for the highly favorable dissociation of CO2 from F0, and (5) the moderate adsorption energy of the gold atom on the F0 defect cannot prevent insertion of molecular oxygen inside the defect. Due to the lack of protection of the color centers, the surface becomes invariably repaired by the surrounding oxygen and the catalytic cycle is irreversibly broken in the first oxidation step.
Information about the strength of donor–acceptor interactions in push–pull alkenes is valuable, as this so-called “push–pull effect” influences their chemical reactivity and dynamic behaviour. In this paper, we discuss the applicability of NMR spectral data and barriers to rotation around the C=C double bond to quantify the push–pull effect in biologically important 2-alkylidene-4-oxothiazolidines. While olefinic proton chemical shifts and differences in the 13C NMR chemical shifts of the two carbons constituting the C=C double bond fail to give the correct trend in the electron-withdrawing ability of the substituents attached to the exocyclic carbon of the double bond, barriers to rotation prove to be a reliable quantity for providing information about the extent of donor–acceptor interactions in the push–pull systems studied. In particular, all relevant kinetic data, that is the Arrhenius parameters (apparent activation energy Ea and frequency factor A) and the activation parameters (ΔS‡, ΔH‡ and ΔG‡), were determined from the experimentally studied configurational isomerization of (E)-9a. These results were compared to previously published data for two other compounds, (Z)-1b and (2E,5Z)-7, showing that experimentally determined ΔG‡ values are a good indicator of the strength of the push–pull character. Theoretical calculations of the rotational barriers of eight selected derivatives correlate excellently with the calculated C=C bond lengths and corroborate the applicability of ΔG‡ for estimating the strength of the push–pull effect in these and related systems.
Coupling of attention and saccades when viewing scenes with central and peripheral degradation
(2016)
Degrading real-world scenes in the central or the peripheral visual field yields a characteristic pattern: Mean saccade amplitudes increase with central and decrease with peripheral degradation. Does this pattern reflect corresponding modulations of selective attention? If so, the observed saccade amplitude pattern should reflect more focused attention in the central region with peripheral degradation and an attentional bias toward the periphery with central degradation. To investigate this hypothesis, we measured the detectability of peripheral (Experiment 1) or central targets (Experiment 2) during scene viewing when low or high spatial frequencies were gaze-contingently filtered in the central or the peripheral visual field. Relative to an unfiltered control condition, peripheral filtering induced a decrease of the detection probability for peripheral but not for central targets (tunnel vision). Central filtering decreased the detectability of central but not of peripheral targets. Additional post hoc analyses are compatible with the interpretation that saccade amplitudes and direction are computed in partial independence. Our experimental results indicate that task-induced modulations of saccade amplitudes reflect attentional modulations.
Molecular characterization of the centrosome-associated protein CP91 in Dictyostelium discoideum
(2016)
The Dictyostelium centrosome is a model for acentriolar centrosomes. It consists of a three-layered core structure and is surrounded by a corona containing nucleation complexes for microtubules. Duplication of the core structure is initiated once per cell cycle at the G2/M transition. Through a proteomic analysis of isolated centrosomes, CP91 was identified, a 91 kDa coiled-coil protein that localizes to the centrosomal core structure. GFP-CP91 showed almost no mobility in FRAP experiments during interphase, indicating that CP91 is a structural component of the centrosome. In mitosis, by contrast, both GFP-CP91 and endogenous CP91 dissociate and are absent from the spindle poles from late prophase until anaphase. This behavior correlates with the disappearance of the central layer of the core structure at the onset of centrosome duplication. CP91 is therefore very likely a component of this layer. CP91 fragments of the N-terminal and C-terminal domains (GFP-CP91 N-terminus, GFP-CP91 C-terminus), expressed as GFP fusion proteins, also localize to the centrosome but do not show the same mitotic distribution as the full-length protein. The CP91 fragment comprising the central coiled-coil domain (GFP-CP91cc), expressed as a GFP fusion protein, localizes as a diffuse cytosolic cluster near the centrosome and shows a mitotic distribution partially similar to that of the full-length protein. This suggests a regulatory domain within the coiled-coil domain. Expression of the GFP fusion proteins suppresses the expression of endogenous CP91 and gives rise to supernumerary centrosomes, which was also a striking feature after depletion of CP91 by RNAi.
In addition, CP91-RNAi cells exhibited a strongly increased ploidy caused by severe defects in chromosome segregation, combined with an increased cell size and defects in the abscission process during cytokinesis. Depletion of CP91 by RNAi also had a direct effect on the amounts of the centrosomal proteins CP39, CP55, and CEP192 and of the centromere protein Cenp68 in interphase. The results indicate that CP91 is a central centrosomal core component required for the cohesion of the two outer layers of the core structure. Moreover, CP91 plays an important role in proper centrosome biogenesis and, independently of this, in the abscission of the daughter cells during cytokinesis.
Background: Aggression is a severe behavioral problem that interferes with many developmental challenges individuals face in middle childhood and adolescence. Particularly in the peer and in the academic domain, aggression inhibits the individual from making important learning experiences that are predictive of a healthy transition into adulthood. Furthermore, the resulting developmental deficits have the propensity to feed back and to promote aggression at later developmental stages. The aim of the present PhD thesis was to investigate pathways and processes involved in the etiology of aggression by examining the interrelation between multiple developmental problems in the peer and in the academic domain. More specifically, the relevance of affiliation with deviant peers as a driving mechanism for the development of aggression, factors promoting the affiliation with deviant peers (social rejection; academic failure), and mechanisms by which affiliation with deviant peers leads to aggression (external locus of control) were investigated.
Method: The research questions were addressed by three studies. Three data waves were available for the first study; the second and third studies were based on two data waves. The first study specified pathways to antisocial behavior by investigating the temporal interrelation between social rejection, academic failure, and affiliation with deviant peers in a sample of 1,657 male and female children and adolescents aged between 6 and 15 years. The second study examined the role of external control beliefs as a potential mediator in the link between affiliation with deviant peers and aggression in a sample of 1,466 children and adolescents aged 9 to 19 years, employing a half-longitudinal design. The third study aimed to expand the findings of Study 1 and Study 2 by examining the differential predictivity of combinations of developmental risks for different functions of aggression, using a sample of 1,479 participants aged between 9 and 19 years. First, profiles of social rejection, academic failure, and affiliation with deviant peers were identified, using latent profile analysis. Second, prospective pathways between risk profiles and reactive and proactive aggression were investigated, using latent path analysis.
Results: The first study revealed that antisocial behavior at T1 was associated with social rejection and academic failure at T2. Both mechanisms promoted affiliation with deviant peers at the same data wave, which predicted deviancy at T3. Furthermore, both an indirect pathway via social rejection and affiliation with deviant peers and an indirect pathway via academic failure and affiliation with deviant peers significantly mediated the link between antisocial behavior at the first and the third data wave. Additionally, the proposed pathways generalized across genders and different age groups. The second study showed that external control beliefs significantly mediated the link between affiliation with deviant peers and aggression, with affiliation with deviant peers at T1 predicting external control beliefs at T2 and external control beliefs at T1 predicting aggressive behavior at T2. Again, the analyses provided no evidence for gender- and age-specific variations in the proposed pathways. In the third study, three distinct risk groups were identified, made up of a large non-risk group, with low scores on all risk measures, a group characterized by high scores on social rejection (SR group), and a group with the highest scores on measures of affiliation with deviant peers and academic failure (APAF group). Importantly, risk group membership was differentially associated with reactive and proactive aggression. Only membership in the SR group at T1 was associated with the development of reactive aggression at T2, and only membership in the APAF group at T1 predicted proactive aggression at T2. Additionally, proactive aggression at T1 predicted membership in the APAF group at T2, indicating a reciprocal relationship between both constructs.
Conclusion: The results demonstrated that aggression causes severe behavioral deficits in social and academic domains which promote future aggression by increasing individuals’ tendency to affiliate with deviant peers. The stimulation of external control beliefs provides an explanation for deviant peers’ effect on the progression and intensification of aggression. Finally, multiple developmental risks were shown to co-occur within individuals and to be differentially predictive of reactive and proactive aggression. The findings of this doctoral dissertation have possible implications for the conceptualization of prevention and intervention programs aimed to reduce aggression in middle childhood and adolescence.
A majority of studies have documented reduced ankle muscle activity, particularly of the peroneus longus muscle (PL), in patients with functional ankle instability (FI). It is well established that foot orthoses as well as sensorimotor training have a positive effect on ankle muscle activity in healthy individuals and in those with lower limb overuse injuries or flat arched feet (reduced reaction time through sensorimotor exercises; increased ankle muscle amplitude through orthosis use). However, the acute and long-term influence of foot orthoses on ankle muscle activity in individuals with FI is unknown.
AIMS: The present thesis addressed (1a) acute and (1b) long-term effects of foot orthoses compared to sensorimotor training on ankle muscle activity in patients with FI. (2) Further, it was investigated whether the orthosis intervention group demonstrates higher ankle muscle activity with additional short-term use of a measurement in-shoe orthosis (compared to short-term use of the "shoe only") after the intervention. (3) As a prerequisite, it was evaluated whether ankle muscle activity can be tested reliably and (4) whether this differs between healthy individuals and those with FI.
METHODS: Three intervention groups (orthosis group [OG], sensorimotor training group [SMTG], control group [CG]), each consisting of both healthy individuals and those with FI, underwent one longitudinal investigation (randomised controlled trial). Throughout 6 weeks of intervention, OG wore an in-shoe orthosis with a specific "PL stimulation module", whereas SMTG conducted home-based exercises. CG served to measure test-retest reliability of ankle muscle activity (PL, M. tibialis anterior [TA] and M. gastrocnemius medialis [GM]). Pre- and post-intervention, ankle muscle activity (EMG amplitude) was recorded during "normal" unperturbed (NW) and perturbed walking (PW) on a split-belt treadmill (stimulus 200 ms after initial heel contact [IC]) as well as during side cutting (SC), each while wearing "shoes only" and additional measurement in-shoe orthoses (randomised order). Normalized RMS values (100% MVC, mean±SD) were calculated pre-IC (100–50 ms before IC) and post-IC (200–400 ms after IC).
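As an illustration of the amplitude normalization described above, the windowed RMS of an EMG signal expressed as a percentage of a maximum voluntary contraction (MVC) reference can be sketched as follows. The signal arrays, sampling rate and window bounds here are hypothetical placeholders, not the thesis's actual data or processing pipeline:

```python
import numpy as np

def rms(signal, fs, t_start, t_end):
    """RMS of a signal over a time window given in seconds from record start."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    window = signal[i0:i1]
    return np.sqrt(np.mean(window ** 2))

def normalized_rms(emg, mvc_emg, fs, t_start, t_end):
    """Windowed RMS expressed as a percentage of the MVC reference RMS."""
    mvc_ref = rms(mvc_emg, fs, 0.0, len(mvc_emg) / fs)
    return 100.0 * rms(emg, fs, t_start, t_end) / mvc_ref

# Hypothetical example: constant-amplitude signals to illustrate the scaling.
fs = 1000                    # sampling rate in Hz
emg = np.full(1000, 0.5)     # 1 s of trial EMG (arbitrary units)
mvc = np.full(1000, 1.0)     # 1 s of MVC reference recording
print(normalized_rms(emg, mvc, fs, 0.1, 0.4))  # → 50.0
```

In practice the EMG would be band-pass filtered and rectified before this step, and the window bounds would be set relative to the detected heel-contact event.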
RESULTS: (3) Test-retest reliability showed a wide range of values in healthy individuals and in those with FI. (4) Compared to healthy individuals, patients with FI demonstrated lower PL pre-activity during SC, but higher PL pre-activity during NW and PW. (1a) Acute orthosis use did not influence ankle muscle activity. (1b) For most conditions, sensorimotor training was more effective in individuals with FI than long-term orthotic intervention (increased: PL and GM pre-activity and TA reflex activity for NW; PL pre-activity and TA, PL and GM reflex activity for SC; PL reflex activity for PW). However, prolonged orthosis use was more beneficial in terms of an increase in GM pre-activity during SC. For some conditions, long-term orthotic intervention was as effective as sensorimotor training for individuals with FI (increased: PL pre-activity for PW; TA pre-activity for SC; PL and GM reflex activity for NW). Prolonged orthosis use was also advantageous in healthy individuals (increased: PL and GM pre-activity for NW and PW; PL pre-activity for SC; TA and PL reflex activity for NW; PL and GM reflex activity for PW). (2) The orthosis intervention group did not present higher ankle muscle activity with the additional short-term use of a measurement in-shoe orthosis at re-test after the intervention.
CONCLUSION: The high variation in reproducibility reflects physiological variability in muscle activity during gait and is therefore deemed acceptable. The main findings confirm the presence of sensorimotor long-term effects of specific foot orthoses in healthy individuals (primary preventive effect) and in those with FI (therapeutic effect). Neuromuscular compensatory feedback as well as anticipatory feedforward adaptation mechanisms in response to prolonged orthosis use, specifically of the PL muscle, underpin the key role of the PL in providing essential dynamic ankle joint stability. Owing to its advantages over sensorimotor training (positive subjective feedback in terms of comfort, time- and cost-effectiveness), long-term foot orthosis use can be recommended as a viable therapeutic alternative in the treatment of FI. The long-term effects of foot orthoses in a population with FI must be validated in larger samples with longer follow-up periods to substantiate the generalizability of the present outcomes.
Contents
- Reinhard Andress: Ein unveröffentlichter Brief Alexander von Humboldts an den Buchhändler Jean-Georges Treuttel
- Julian Drews: Écriture (auto)biographique dans l‘Examen critique d‘Alexandre de Humboldt
- Alberto Gómez Gutiérrez: Alexander von Humboldt y la cooperación transcontinental en la Geografía de las plantas: una nueva apreciación de la obra fitogeográfica de Francisco José de Caldas
- Karin Reich and Elena Roussanova: Der Briefwechsel zwischen Karl Kreil und Alexander von Humboldt, ein wichtiger Beitrag zur Geschichte des Erdmagnetismus
- Alexander Stöger: Experiment und Wissensvermittlung. Alexander von Humboldts Darstellungsmethoden in seinen Versuchen über die gereizte Muskel- und Nervenfaser
- Friedrich Herneck and Ingo Schwarz: Friedrich Herneck: Hegel und Alexander von Humboldt
As a young man, Alexander von Humboldt occupied himself with galvanic experiments and published the results in a comprehensive two-volume work. In doing so he showed not only that he, as an experimenter and member of the scientific community, was capable of engaging with such a new and complex phenomenon. It is also apparent that already in this early work he sought to make this extensive body of knowledge accessible to the reader.
The article examines Humboldt's work on galvanism, Versuche über die gereizte Muskel- und Nervenfaser (1797–1798), and analyzes some of the elements, such as appendices and writing style, that Humboldt used to organize the extensive information and thus to provide the reader not only with the "big data" of his findings but also with suitable search functions that make targeted use possible in the first place.
The correspondence between Alexander von Humboldt and Karl Kreil was extensive and concerned geomagnetism, yet today only a single letter is known in the original. This letter, which Kreil sent to Alexander von Humboldt on 3 September 1836, agrees in content and in part word for word with the letter that Kreil sent only one day later, on 4 September 1836, to Carl Friedrich Gauß. Four letters from Kreil to Humboldt were published in the "Annalen der Physik und Chemie"; a modest number of further letters to Humboldt are mentioned in the biographical literature on Kreil and in Kreil's letters to Koller and Gauß. But it is not only the fragmentary, incompletely documented correspondence between Humboldt and Kreil, which extends to 1851, that sheds light on their relationship; of particular significance is also the collection of Kreiliana in Humboldt's library. It comprises nine works by Kreil, the last from 1856. Demonstrable contacts between Kreil and Humboldt thus certainly continued at least until that year.
Alexander von Humboldt's Ensayo sobre la geografía de las plantas has come down to us as one of his principal scientific contributions, the foundation of what is known today as "biogeography". The origin of this concept remains diffuse up to the simultaneous publication of the work in Paris and Tübingen in 1807. The present article proposes to contrast the first manuscript version of this essay, drafted in 1803 in Guayaquil and then read in 1805 at the Institut National de Paris, with the contemporaneous work of the New Granadan Francisco José de Caldas, with whom Humboldt lived in Quito during the first half of 1802.
The reference to Columbus is a commonplace of Humboldt biography. Humboldt himself emphasizes it particularly in his Examen critique, adding an autobiographical dimension to it. This contribution analyzes, from a philological perspective, the material and the forms of staging through which one life is represented by means of another.
Ein unveröffentlichter Brief Alexander von Humboldts an den Buchhändler Jean-Georges Treuttel
(2016)
The Lilly Library of Indiana University holds a short unpublished letter from Alexander von Humboldt to the bookseller and author Jean-Georges Treuttel. This article attempts to reconstruct the historical-bibliographical context of the letter and to situate it within Humboldt's scientific work.
The LEA (late embryogenesis abundant) proteins COR15A and COR15B from Arabidopsis thaliana are intrinsically disordered under fully hydrated conditions, but obtain α-helical structure during dehydration, which is reversible upon rehydration. To understand this unusual structural transition, both proteins were investigated by circular dichroism (CD) and molecular dynamics (MD) approaches. MD simulations showed unfolding of the proteins in water, in agreement with CD data obtained with both HIS-tagged and untagged recombinant proteins. Mainly intramolecular hydrogen bonds (H-bonds) formed by the protein backbone were replaced by H-bonds with water molecules. As COR15 proteins function in vivo as protectants in leaves partially dehydrated by freezing, unfolding was further assessed under crowded conditions. Glycerol reduced (40%) or prevented (100%) unfolding during MD simulations, in agreement with CD spectroscopy results. H-bonding analysis indicated that preferential exclusion of glycerol from the protein backbone increased stability of the folded state.
The interaction of water with α-alumina (i.e. α-Al2O3) surfaces is important in a variety of applications and a useful model for the interaction of water with environmentally abundant aluminosilicate phases. Despite its significance, studies of water interaction with α-Al2O3 surfaces other than the (0001) are extremely limited. Here we characterize the interaction of water (D2O) with a well-defined α-Al2O3(11̄02) surface in UHV both experimentally, using temperature programmed desorption and surface-specific vibrational spectroscopy, and theoretically, using periodic-slab density functional theory calculations. This combined approach makes it possible to demonstrate that water adsorption occurs only at a single well-defined surface site (the so-called 1–4 configuration) and that at this site the barrier between the molecularly and dissociatively adsorbed forms is very low: 0.06 eV. A subset of OD stretch vibrations are parallel to this dissociation coordinate and would thus be expected to be shifted to low frequencies relative to an uncoupled harmonic oscillator. To quantify this effect we solve the vibrational Schrödinger equation along the dissociation coordinate and find fundamental frequencies red-shifted by more than 1500 cm−1. Within the context of this model, at moderate temperatures, we further find that some fraction of surface deuterons are likely delocalized: dissociatively and molecularly adsorbed states are no longer distinguishable.
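The one-dimensional vibrational Schrödinger equation solved along the dissociation coordinate q has the standard textbook form (a sketch of the general relation, not the authors' exact numerical implementation):

```latex
\left[-\frac{\hbar^2}{2\mu}\frac{\mathrm{d}^2}{\mathrm{d}q^2} + V(q)\right]\psi_n(q) = E_n\,\psi_n(q),
\qquad
\tilde{\nu}_{0\to 1} = \frac{E_1 - E_0}{hc}
```

where μ is the effective reduced mass along the coordinate and V(q) the computed potential; the fundamental frequency follows from the gap between the two lowest eigenvalues, which is how a strongly anharmonic, low-barrier potential produces the large red shift relative to a harmonic oscillator.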
In this work, three ligands produced from amino acids were synthesized and used to produce five bis- and PEPPSI-type palladium–NHC complexes via a novel synthesis route from sustainable starting materials. Three of these complexes were used as precatalysts in the aqueous-phase Suzuki–Miyaura coupling of various substrates, displaying high activity. TEM and mercury poisoning experiments provide evidence for the formation of Pd nanoparticles stabilized in water.
The synthesis and photophysical properties of two new FRET pairs based on coumarin as a donor and a DBD dye as an acceptor are described. The introduction of a bromine atom dramatically increases the two-photon excitation (2PE) cross section, providing a 2PE-FRET system that is also suitable for 2PE-FLIM.
The aim of this study was to develop a one-step synthesis of gold nanotriangles (NTs) in the presence of mixed phospholipid vesicles followed by a separation process to isolate purified NTs. Negatively charged vesicles containing AOT and phospholipids, in the absence and presence of additional reducing agents (polyampholytes, polyanions or low molecular weight compounds), were used as a template phase to form anisotropic gold nanoparticles. Upon addition of the gold chloride solution, the nucleation process is initiated and both types of particles, i.e., isotropic spherical and anisotropic gold nanotriangles, are formed simultaneously. As it was not possible to produce monodisperse nanotriangles with such a one-step procedure, the anisotropic nanoparticles needed to be separated from the spherical ones. Therefore, a new type of separation procedure using combined polyelectrolyte/micelle depletion flocculation was successfully applied. As a result of the different purification steps, a green colored aqueous dispersion was obtained containing highly purified, well-defined negatively charged flat nanocrystals with a platelet thickness of 10 nm and an edge length of about 175 nm. The NTs produce promising results in surface-enhanced Raman scattering.
Optical biosensors based on porous silicon were fabricated by metal assisted chemical etching. Thereby double layered porous silicon structures were obtained consisting of porous pillars with large pores on top of a porous silicon layer with smaller pores. These structures showed a similar sensing performance in comparison to electrochemically produced porous silicon interferometric sensors.
In near-edge X-ray absorption fine structure (NEXAFS) spectroscopy, X-ray photons are used to excite tightly bound core electrons into low-lying unoccupied orbitals of the system. This technique offers insight into the electronic structure of the system as well as useful structural information. In this work, we apply NEXAFS to two kinds of imidazolium-based ionic liquids ([CnC1im]+[NTf2]- and [C4C1im]+[I]-). A combination of measurements and quantum chemical calculations of C K and N K NEXAFS resonances is presented. The simulations, based on the transition-potential density functional theory method (TP-DFT), reproduce all characteristic features observed in the experiment. Furthermore, a detailed assignment of resonance features to excitation centers (carbon or nitrogen atoms) leads to a consistent interpretation of the spectra.
Herein we present an efficient synthesis of a biomimetic probe with modular construction that can be specifically bound by the mannose-binding FimH protein, a surface adhesion protein of E. coli bacteria. The synthesis combines the new and interesting DBD dye with the carbohydrate ligand mannose via a click reaction. We demonstrate the binding to E. coli bacteria over a large concentration range and also present some special characteristics of these molecules that are of particular interest for their application as a biosensor. In particular, the mix-and-measure capability and the very good photostability should be highlighted here.
This dissertation presents scientific results obtained between December 2012 and August 2016. Its central subject is the simulation of X-ray absorption processes of various systems in the condensed phase; more precisely, near-edge X-ray absorption fine structure (NEXAFS) spectra and X-ray photoelectron spectra (XPS) are calculated. In both cases an X-ray photon is absorbed by a molecular system and, because of the high photon energy, excites a tightly bound core electron. In XPS this electron is ejected into continuum states with a measurable kinetic energy; from the incident photon energy and the kinetic energy of the outgoing electron, the binding energy, the central quantity of XPS, can be calculated. In NEXAFS spectroscopy the core electron is excited into unoccupied bound states, and the central quantity is the absorption as a function of the incident photon energy. The first chapter of my thesis discusses the experimental methods and the characteristic quantities obtained from them in detail.
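The binding-energy determination in XPS follows the standard photoelectric energy balance (for solid samples the spectrometer work function enters as an additional term):

```latex
E_\mathrm{B} = h\nu - E_\mathrm{kin} - \phi
```

Here hν is the incident photon energy, E_kin the measured kinetic energy of the outgoing electron, and φ the work function, which vanishes for gas-phase measurements referenced to the vacuum level.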
The experimental spectra often show many resonances whose interpretation is difficult owing to a lack of reference materials. In such cases it is useful to simulate the spectra with quantum chemical methods. The mathematical and physical methodology required for this is discussed in the second chapter of the thesis.
The first system I investigated is graphene. In experimental work the surface was modified with a bromine plasma, and the NEXAFS spectra measured afterwards differ substantially from those of the untreated surface. Using periodic DFT calculations, various lattice defects as well as brominated systems were examined and their NEXAFS spectra simulated. The simulations make it possible to analyze the contributions of different excitation centers, and the calculations support the conclusion that lattice defects are chiefly responsible for the observed changes.
Polyvinyl alcohol (PVA) was treated as the second system. Here the aim was to determine how strongly molecular motion broadens the peaks in the XP spectrum, and how strongly intermolecular interactions affect peak positions and peak broadening. For this system a combination of molecular dynamics and quantum chemical methods was used: oligomer models were propagated under an ab initio potential, snapshots of the geometries were extracted along the resulting trajectory, and these snapshots were used to calculate the XP spectra. The spectra are reproduced very well even with classical molecular dynamics, although the resulting peak widths are too small compared with experiment. The main cause of peak broadening is molecular motion; intermolecular interactions shift the peak positions by 0.6 eV toward lower excitation energies.
The third part of the thesis focuses on the NEXAFS spectra of ionic liquids (ILs), whose experimental spectra show a complex structure with many resonances. Two ILs were investigated, using cluster models extracted from experimental crystal structures as geometries. The calculated spectra make it possible to assign the resonances to excitation centers; in addition, a double resonance measured for the first time can be simulated and explained. Overall, the simulations significantly extend the interpretation of the spectra.
For all systems, a method based on density functional theory (the so-called transition-potential method) was used to calculate the NEXAFS spectrum. Common wavefunction-based methods, such as configuration interaction with single excitations (CIS), show a strong blue shift when a Hartree-Fock Slater determinant is used as the reference. We show that using core-excited determinants markedly improves both the resulting spectrum and the excitation energies. References from density functional calculations are also tested, as are references with fractional occupation numbers for the core electrons, and the results obtained with the different references are compared. It turns out that references with fractional occupation numbers do not further improve the spectrum, and the influence of the underlying electronic structure method is rather small.
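The transition-potential approach rests on Slater's transition-state idea: with the core orbital given a fractional occupation of one half, the excitation energy is approximated by an orbital-energy difference evaluated in that state. As a textbook sketch (not the thesis's exact working equations):

```latex
\Delta E_{c \to f} \;\approx\; \varepsilon_f\!\left(n_c = \tfrac{1}{2}\right) \;-\; \varepsilon_c\!\left(n_c = \tfrac{1}{2}\right)
```

where ε_c and ε_f are the Kohn-Sham energies of the core and final orbitals computed self-consistently with half a core hole, so that initial- and final-state relaxation effects are captured in a single calculation for all resonances.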
Surface-enhanced Raman scattering (SERS) is a promising tool to obtain rich chemical information about analytes at trace levels. However, in order to perform selective experiments on individual molecules, two fundamental requirements have to be fulfilled. On the one hand, areas with high local field enhancement, so-called "hot spots", have to be created by positioning the supporting metal surfaces in close proximity to each other. In most cases hot spots are formed in the gap between adjacent metal nanoparticles (NPs). On the other hand, the analyte has to be positioned directly in the hot spot in order to profit from the highest signal amplification. The use of DNA origami substrates provides both the arrangement of AuNPs with nm precision and the ability to bind analyte molecules at predefined positions. Consequently, the present cumulative doctoral thesis aims at the development of a novel SERS substrate based on a DNA origami template. To this end, two DNA-functionalized gold nanoparticles (AuNPs) are attached to one DNA origami substrate, resulting in the formation of a AuNP dimer and thus in a hot spot within the corresponding gap. The obtained structures are characterized by correlated atomic force microscopy (AFM) and SERS imaging, which allows for the combination of structural and chemical information.
Initially, the proof of principle is presented, which demonstrates the potential of the novel approach. It is shown that the Raman signal of 15 nm AuNPs coated with dye-modified DNA (dye: carboxytetramethylrhodamine (TAMRA)) is significantly higher for AuNP dimers arranged on a DNA origami platform in comparison to single AuNPs. Furthermore, by attaching single TAMRA molecules in the hot spot between two 5 nm AuNPs and optimizing the size of the AuNPs by electroless gold deposition, SERS experiments at the few-molecule level are presented. The initially used DNA origami-AuNP design is further optimized in several respects. On the one hand, larger AuNPs up to a diameter of 60 nm are used, which are additionally treated with a silver enhancement solution to obtain Au-Ag core-shell NPs. On the other hand, the arrangement of the two AuNPs is altered to improve the position of the dye molecule within the hot spot as well as to decrease the gap size between the two particles. With the optimized design, the detection of single dye molecules (TAMRA and cyanine 3 (Cy3)) by means of SERS is demonstrated. Quantitatively, enhancement factors of up to 10^10 are estimated, which is sufficiently high to detect single dye molecules.
In the second part, the influence of graphene as an additional component of the SERS substrate is investigated. Graphene is a two-dimensional material with an outstanding combination of electronic, mechanical and optical properties. Here, it is demonstrated that single-layer graphene (SLG) replicates the shape of underlying non-modified DNA origami substrates very well, which enables the monitoring of structural alterations by AFM imaging. In this way, it is shown that graphene encapsulation significantly increases the structural stability of bare DNA origami substrates towards mechanical force and prolonged exposure to deionized water.
Furthermore, SLG is used to cover DNA origami substrates which are functionalized with a 40 nm AuNP dimer. In this way, a novel kind of hybrid material is created which exhibits several advantages compared to the analogous non-covered SERS substrates. First, the fluorescence background of dye molecules that are located between the AuNP surface and the SLG is efficiently reduced. Second, the photobleaching rate of the incorporated dye molecules is decreased by up to one order of magnitude. Third, owing to the increased photostability of the investigated dye molecules, polarization-dependent series measurements on individual structures become possible. This in turn reveals extensive information about the dye molecules in the hot spot as well as about the strain induced within the graphene lattice. Although SLG can significantly influence the SERS substrate in the aforementioned ways, all these effects are strongly related to the extent of contact with the underlying AuNP dimer.
Physical hydrogels are currently attracting growing interest as cell substrates, since viscoelasticity, or stress relaxation, is an important but hitherto neglected parameter in mechanotransduction. In this work, multifunctional polyurethanes were designed that form physical hydrogels through a novel gelation mechanism. In water, the anionic polyurethanes spontaneously form aggregates that are kept in solution by electrostatic repulsion. Rapid gelation can then be triggered by charge screening, which allows aggregation to proceed and a network to form. This can be achieved by adding various acids or salts, so that both acidic (pH 4–5) and pH-neutral hydrogels can be obtained. Whereas conventional polyurethane-based hydrogels are usually prepared via toxic isocyanate-containing prepolymers, the physical gelation mechanism described here is suitable for in situ applications in sensitive environments. Both the stiffness and the stress relaxation of the hydrogels can be tuned independently over a wide range. Moreover, the hydrogels exhibit excellent stress recovery.
Luhmann in da Contact Zone
(2016)
Our aim in this contribution is to productively engage with the abstractions and complexities of Luhmann’s conceptions of society from a postcolonial perspective, with a particular focus on the explanatory powers of his sociological systems theory when it leaves the realms of Europe and ventures to describe regions of the global South. In view of its more recent global reception beyond Europe, our aim is to thus – following the lead of Dipesh Chakrabarty – provincialize Luhmann’s systems theory especially with regard to its underlying assumptions about a global “world society”. For these purposes, we intend to revisit Luhmann in the post/colonial contact zone: We wish to reread Luhmann in the context of spaces of transcultural encounter where “global designs and local histories” (Mignolo), where inclusion into and exclusion from “world society” (Luhmann) clash and interact in intricate ways. The title of our contribution, ‘Luhmann in da Contact Zone’, is deliberately ambiguous: On the one hand, we of course use ‘Luhmann’ metonymically, as representative of a highly complex theoretical design. We shall cursorily outline this design with a special focus on the notion of a singular, modern “world society”, only to confront it with the epistemic challenges of the contact zone. On the other hand, this critique will also involve the close observation of Niklas Luhmann as a human observer (a category which within the logic of systems theory actually does not exist) who increasingly transpires in his late writings on exclusion in the global South. By following this dual strategy, we wish to trace an increasing fracture between one Luhmann and the other, between abstract theoretical design and personalized testimony. It is by exploring and measuring this fracture that we hope to eventually be able to map out the potential of a possibly more productive encounter between systems theory and specific strands of postcolonial theory for a pluritopic reading of global modernity.
Recollecting Bones
(2016)
In the same “guarded, roundabout and reticent way” which Lindsay Barrett invokes for Australian conversations about imperial injustice, Germans, too, must begin to more systematically explore, in Paul Gilroy’s words, “the connections and the differences between anti-semitism and anti-black and other racisms and asses[s] the issues that arise when it can no longer be denied that they interacted over a long time in what might be seen as Fascism’s intellectual, ethical and scientific pre-history” (Gilroy 1996: 26). In the meantime, we need to care for the dead. We need to return them, first, from the status of scientific objects to the status of ancestral human beings, and then progressively, and proactively, as close as possible to the care of those communities from whom they were stolen.
Kleine Kosmopolitismen
(2016)
Postcolonial Justice
(2016)
Postcolonial Piracy
(2016)
Media piracy is a contested term in the academic as much as the public debate. It is used by the corporate industries as a synonym for the theft of protected media content with disastrous economic consequences. It is celebrated by technophile elites as an expression of freedom that ensures creativity as much as free market competition. Marxist critics and activists promote piracy as a subversive practice that undermines the capitalist world system and its structural injustices. Artists and entrepreneurs across the globe curse it as a threat to their existence, even as many rely fundamentally on pirate infrastructures and networks for the production and dissemination of their art. For large sections of the population across the global South, piracy is simply the only means of accessing the medial flows of a progressively globalising planet.
Reflections of Lusáni Cissé
(2016)
Sexual Aggression Victimization and Perpetration among Male and Female College Students in Chile
(2016)
Evidence on the prevalence of sexual aggression among college students is primarily based on studies from Western countries. In Chile, a South American country strongly influenced by the Catholic Church, little research on sexual aggression among college students is available. Therefore, the purpose of the present study was to examine the prevalence of sexual aggression victimization and perpetration since the age of 14 (the legal age of consent) in a sample of male and female students aged between 18 and 29 years from five Chilean universities (N = 1135), to consider possible gender differences, and to study the extent to which alcohol was involved in the reported incidents of perpetration and victimization. Sexual aggression victimization and perpetration were measured with a Chilean Spanish version of the Sexual Aggression and Victimization Scale (SAV-S), which includes three coercive strategies (use or threat of physical force, exploitation of an incapacitated state, and verbal pressure), three victim-perpetrator constellations (current or former partners, friends/acquaintances, and strangers), and four sexual acts (sexual touch, attempted sexual intercourse, completed sexual intercourse, and other sexual acts, such as oral sex). Overall, 51.9% of women and 48.0% of men reported at least one incident of sexual victimization, and 26.8% of men and 16.5% of women reported at least one incident of sexual aggression perpetration since the age of 14. For victimization, only a few gender differences were found, but significantly more men than women reported sexual aggression perpetration. A large proportion of perpetrators also reported victimization experiences. Regarding the victim-perpetrator relationship, sexual aggression victimization and perpetration were more common between persons who knew each other than between strangers.
Alcohol use by the perpetrator, victim, or both was involved in many incidents of sexual aggression victimization and perpetration, particularly among strangers. The present data are the first to provide a systematic and detailed picture of sexual aggression among college students in Chile, including victimization and perpetration reports by both men and women and confirming the critical role of alcohol established in past research from Western countries.
Rapidly uplifting coastlines are frequently associated with convergent tectonic boundaries, such as subduction zones, which repeatedly rupture in giant megathrust earthquakes. The coastal relief along tectonically active margins is shaped by the effects of sea-level variations and by heterogeneous patterns of permanent tectonic deformation accumulated over several megathrust earthquake cycles. However, the correlation between earthquake deformation patterns and the sustained long-term segmentation of forearcs, particularly in Chile, remains poorly understood. Furthermore, the methods used to estimate permanent deformation from geomorphic markers, such as marine terraces, have remained qualitative and based on non-repeatable procedures. This contrasts with the increasing resolution of digital elevation models, such as Light Detection and Ranging (LiDAR) and high-resolution bathymetric surveys.
Throughout this thesis I study permanent deformation in a holistic manner: from the methods used to assess deformation rates to the processes involved in its accumulation. My research focuses on two aspects in particular: developing methodologies to assess permanent deformation using marine terraces, and comparing permanent deformation with seismic-cycle deformation patterns at different spatial scales along the rupture zone of the M8.8 Maule earthquake (2010). Two methods are developed to determine deformation rates from wave-built and wave-cut terraces, respectively. I selected an archetypal example of a wave-built terrace at Santa Maria Island, studying its stratigraphy and recognizing sequences of reoccupation events constrained by eleven radiocarbon (14C) ages. I developed a method to link the patterns of reoccupation with sea-level proxies by iterating relative sea-level curves for a range of uplift rates. I find the best fit between relative sea level and the stratigraphic patterns for an uplift rate of 1.5 ± 0.3 m/ka.
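The grid search over candidate uplift rates described above can be sketched numerically. All numbers below (sea-level history, horizon ages, elevations) are invented for illustration; the thesis uses measured stratigraphy and a published sea-level curve, so only the fitting logic is the point here:

```python
import numpy as np

# Hypothetical reoccupation horizons: ages (ka) and eustatic sea level (m)
ages = np.array([1.0, 3.0, 5.0, 7.0])          # ka, ages of reoccupation horizons
eustatic = np.array([-1.0, -2.0, -4.0, -8.0])  # m, sea level relative to present

# Synthetic present-day elevations, generated with a "true" rate of 1.5 m/ka
observed = eustatic + 1.5 * ages + np.array([0.1, -0.2, 0.15, -0.1])

def misfit(rate):
    """RMS misfit between observed elevations and a candidate uplift rate."""
    predicted = eustatic + rate * ages
    return np.sqrt(np.mean((observed - predicted) ** 2))

rates = np.arange(0.0, 3.01, 0.1)              # candidate uplift rates, m/ka
best = min(rates, key=misfit)
print(f"best-fit uplift rate: {best:.1f} m/ka")
```

The same iterate-and-compare structure carries over when the predicted quantity is a full relative sea-level curve rather than a handful of horizon elevations.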
A graphical user interface named TerraceM® was developed in Matlab®. This novel software tool determines shoreline angles of wave-cut terraces under different geomorphic scenarios. To validate the methods, I selected test sites with available high-resolution LiDAR topography along the Maule earthquake rupture zone and in California, USA. The software determines the 3D location of the shoreline angle, which serves as a proxy for estimating permanent deformation rates. The method defines the paleo-platform and paleo-cliff by linear interpolations on swath profiles; the shoreline angle is then located at the intersection of these interpolations. The accuracy and precision of TerraceM® were tested by comparing its results with previous assessments, and through an experiment with students in a computer lab setting at the University of Potsdam.
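The core geometric step, locating the shoreline angle at the intersection of two least-squares lines fitted to the paleo-platform and paleo-cliff segments of a swath profile, can be sketched with a toy profile (all coordinates below are invented; TerraceM works on real swath topography):

```python
import numpy as np

# Toy swath profile: distance along profile (m) vs elevation (m), manually
# split into a paleo-platform segment and a paleo-cliff segment
platform_x = np.array([0.0, 20.0, 40.0, 60.0])
platform_z = np.array([10.0, 10.5, 11.0, 11.5])   # gentle seaward slope
cliff_x = np.array([80.0, 90.0, 100.0])
cliff_z = np.array([20.0, 25.0, 30.0])            # steep cliff face

# Least-squares lines z = m*x + b for each segment
m1, b1 = np.polyfit(platform_x, platform_z, 1)
m2, b2 = np.polyfit(cliff_x, cliff_z, 1)

# Shoreline angle = intersection of the two fitted lines
x_sa = (b2 - b1) / (m1 - m2)
z_sa = m1 * x_sa + b1
print(f"shoreline angle at x = {x_sa:.1f} m, elevation = {z_sa:.2f} m")
```

In practice the uncertainty of the two fits propagates into an error estimate for the shoreline-angle elevation, which is what makes the approach repeatable rather than a visual pick.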
I combined the methods developed for wave-built and wave-cut terraces to assess regional patterns of permanent deformation along the 2010 Maule earthquake rupture. Wave-built terraces are dated using 12 infrared stimulated luminescence (IRSL) ages, and shoreline angles in wave-cut terraces are estimated from 170 aligned swath profiles. The comparison of coseismic slip, interseismic coupling, and permanent deformation reveals three areas of high permanent uplift, terrace warping, and sharp fault offsets. These three areas correlate with regions of high slip and low coupling, as well as with the spatial limits of at least eight historical megathrust ruptures (M8-9.5). I propose that the zones of upwarping at Arauco and Topocalma reflect changes in the frictional properties of the megathrust, which result in discrete boundaries for the propagation of megathrust earthquakes.
To explore the application of geomorphic markers and quantitative morphology in offshore areas, I performed a local study of permanent deformation patterns inferred from hitherto unrecognized drowned shorelines in Arauco Bay, at the southern end of the 2010 Maule earthquake rupture zone. A multidisciplinary approach, including morphometry, sedimentology, paleontology, 3D morphoscopy, and a landscape evolution model, is used to recognize, map, and assess local rates and patterns of permanent deformation in submarine environments. These deformation patterns are then reproduced using elastic models to assess the deformation rates of an active submarine splay fault, defined here as the Santa Maria Fault System (SMFS). The best fit suggests a reverse structure with a slip rate of 3.7 m/ka over the last 30 ka. The record of land-level changes during the earthquake cycle at Santa Maria Island suggests that most of this deformation may be accrued through splay-fault reactivation during megathrust earthquakes, like the 2010 Maule event. Considering a recurrence time of 150 to 200 years, as determined from historical and geological observations, slip of 0.3 to 0.7 m per event would be required to account for the 3.7 m/ka millennial slip rate. However, if the SMFS slips only every ~1000 years, corresponding to a few megathrust earthquakes, a slip of ~3.5 m per event would be required to account for the long-term rate. Such an event would be equivalent to a magnitude ~6.7 earthquake capable of generating a local tsunami.
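The per-event slip figures above follow from multiplying the millennial slip rate by the assumed recurrence interval; a quick arithmetic check (the quoted per-event ranges presumably also absorb uncertainty in the slip rate itself):

```python
# Millennial slip rate of the Santa Maria Fault System, from the elastic models
slip_rate = 3.7 / 1000.0          # m/yr (= 3.7 m/ka)

# If the fault slips in every megathrust earthquake (150-200 yr recurrence):
slip_150 = slip_rate * 150        # ~0.56 m per event
slip_200 = slip_rate * 200        # ~0.74 m per event

# If the fault slips only every ~1000 years:
slip_1000 = slip_rate * 1000      # 3.7 m per event
print(slip_150, slip_200, slip_1000)
```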
The results of this thesis provide novel and fundamental information on the amount of permanent deformation accrued in the crust, and on the mechanisms responsible for its accumulation at millennial timescales along the rupture zone of the M8.8 Maule earthquake (2010). Furthermore, they highlight how quantitative geomorphology and repeatable methods can be used to determine permanent deformation, improve the accuracy of marine terrace assessments, and refine estimates of vertical deformation rates in tectonically active coastal areas. This information is vital for adequate coastal-hazard assessment and for anticipating realistic earthquake and tsunami scenarios.
Since the introduction of antibiotics into the medical treatment of bacterial infectious diseases, there has been a race between the evolution of bacterial resistance and the development of effective antibiotics. While research into new antibiotics intensified up to the 1980s, multi-resistant pathogens are now increasingly gaining the upper hand. Successfully detecting and combating individual pathogens requires fundamental knowledge of the infectious agent. Bacterial proteins that are preferentially processed and presented by the immune system during an infection could be useful for the development of vaccines or targeted therapeutics. These immunodominant proteins would also be of interest for diagnostics. However, there is a lack of knowledge about specific antigens of many pathogenic bacteria that would allow the unambiguous diagnosis of an individual pathogen.
Therefore, in this work four different human pathogens were investigated by phage display: Neisseria gonorrhoeae, Neisseria meningitidis, Borrelia burgdorferi and Clostridium difficile. For this purpose, libraries were constructed from the genomic DNA of the four pathogens, and immunogenic proteins were isolated by repeated selection and amplification, a process known as panning. For all pathogens except C. difficile, immunogenic proteins were isolated from the respective libraries. The identified proteins of N. meningitidis and B. burgdorferi were largely known, but could be verified by phage display in this work. For N. gonorrhoeae, 21 potentially immunogenic oligopeptides were isolated, six of which were identified as new, previously undescribed proteins with immunogenic character. Epitope mappings of the phage-displayed oligopeptides of the 21 immunogenic proteins were performed with different polyclonal antibodies in order to identify and characterize immunogenic regions in more detail. For ten proteins, linear epitopes were unambiguously identified with three polyclonal antibodies; for five further proteins, epitopes were detectable with at least one antibody. For further characterization of the identified epitopes, alanine scans were performed, which provide detailed information about the amino acids critical for antibody binding to the epitope.
Starting from the newly identified protein with immunogenic character, NGO1634, 26 further proteins were selected on the basis of their functional similarity and analyzed by bioinformatic methods for their suitability for the development of a diagnostic application. After most proteins were excluded because of their localization, membrane topology or unspecific protein sequence, scFv antibodies against eight proteins were generated by phage display and subsequently produced and characterized as scFv-Fc fusion antibodies.
The proteins and linear epitopes identified here could provide a starting point for the development of diagnostic or therapeutic applications. Linear epitope sequences are frequently used in vaccine development, so the epitopes of membrane proteins determined in this work are particularly interesting candidates for further investigations in this direction. Further studies might also reveal previously unknown virulence factors whose inhibition could have a decisive influence on infections.
The research underlying this work aimed to develop new melt-processable acrylonitrile copolymers. These were subsequently to be formed into synthetic fibers by a melt-spinning process and, in a final step, converted into carbon fibers. To this end, exploratory investigations were first carried out on different acrylonitrile copolymers obtained by solution polymerization. These investigations showed that electrostatic interactions are better suited than steric shielding to achieve meltability below the decomposition temperature of polyacrylonitrile. Of the numerous copolymers investigated, those with methoxyethyl acrylate (MEA) proved most effective. For these copolymers, the copolymerization parameters were determined and the basic kinetics of the solution polymerization were investigated. The copolymers with MEA were formed into fibers by melt spinning, and the resulting fibers were characterized. The influence of various parameters, such as molar mass, on fiber properties and fiber production was also investigated. Finally, a heterophase polymerization process for the production of AN/MEA copolymers was developed, which further improved the material properties. A suitable process was developed to suppress the thermoplastic properties of the fibers, and the conversion to carbon fibers was then carried out.
Intermontane valley fills
(2016)
Sedimentary valley fills are a widespread characteristic of mountain belts around the world. They transiently store material over time spans ranging from thousands to millions of years and therefore play an important role in modulating the sediment flux from the orogen to the foreland and to oceanic depocenters. In most cases, their formation can be attributed to specific fluvial conditions, which are closely related to climatic and tectonic processes. Hence, valley-fill deposits constitute valuable archives that offer fundamental insight into landscape evolution, and their study may help to assess the impact of future climate change on sediment dynamics.
In this thesis I analyzed intermontane valley-fill deposits to constrain different aspects of the climatic and tectonic history of mountain belts over multiple timescales. First, I developed a method to estimate the thickness distribution of valley fills using artificial neural networks (ANNs). Based on the assumption of geometrical similarity between exposed and buried parts of the landscape, this novel and highly automated technique allows reconstructing fill thickness and bedrock topography on the scale of catchments to entire mountain belts.
Second, I used the new method to estimate the spatial distribution of post-glacial sediments stored in the entire European Alps. A comparison with data from exploratory drillings and geophysical surveys revealed that the model reproduces the measurements with a root mean squared error (RMSE) of 70 m and a coefficient of determination (R²) of 0.81. I used the derived sediment thickness estimates in combination with a model of the Last Glacial Maximum (LGM) ice cap to infer the lithospheric response to deglaciation, erosion and deposition, and to deduce their relative contributions to the present-day rock-uplift rate. For a range of lithospheric and upper-mantle material properties, the results suggest that the long-wavelength uplift signal can be explained by glacial isostatic adjustment with a small erosional contribution and a substantial but localized tectonic component exceeding 50% in parts of the Eastern Alps and in the Swiss Rhône Valley. Furthermore, this study reveals the particular importance of deconvolving the potential components of rock uplift when interpreting recent movements along active orogens, and shows how this can be used to constrain physical properties of the Earth’s interior.
In a third study, I used the ANN approach to estimate the sediment thickness of alluviated reaches of the Yarlung Tsangpo River, upstream of the rapidly uplifting Namche Barwa massif. This allowed my colleagues and me to reconstruct the ancient river profile of the Yarlung Tsangpo, and to show that in the past the river had already been deeply incised into the eastern margin of the Tibetan Plateau. Dating of basal sediments from drill cores that reached the paleo-river bed yields ages of 2–2.5 Ma, consistent with mineral cooling ages from the Namche Barwa massif, which indicate initiation of rapid uplift at ~4 Ma. Hence, the formation of the Tsangpo gorge and the aggradation of the voluminous valley fill were most probably a consequence of rapid uplift of the Namche Barwa massif and thus of tectonic activity.
The fourth and last study focuses on the interaction of fluvial and glacial processes at the southeastern edge of the Karakoram. Paleo-ice-extent indicators and remnants of a more than 400-m-thick fluvio-lacustrine valley fill point to blockage of the Shyok River, a main tributary of the upper Indus, by the Siachen Glacier, the largest glacier in the Karakoram Range. Field observations and 10Be exposure dating attest to a period of recurring lake formation and outburst flooding during the penultimate glaciation prior to ~110 ka. The interaction of rivers and glaciers along the Karakoram is considered a key factor in landscape evolution and presumably promoted headward erosion of the Indus-Shyok drainage system into the western margin of the Tibetan Plateau.
The results of this thesis highlight the strong influence of glaciation and tectonics on valley-fill formation and how this has affected the evolution of different mountain belts. In the Alps, valley-fill deposition has influenced the magnitude and pattern of rock uplift since ice retreat approximately 17,000 years ago. Conversely, the analyzed valley fills in the Himalaya are much older and reflect environmental conditions that prevailed at ~110 ka and ~2.5 Ma, respectively. Thus, the newly developed method has proven useful for inferring the role of sedimentary valley-fill deposits in landscape evolution on timescales ranging from 1,000 to 10,000,000 years.
Introduction: Adequate cognitive function in patients is a prerequisite for successful implementation of patient education and lifestyle coping in comprehensive cardiac rehabilitation (CR) programs. Although the association between cardiovascular diseases and cognitive impairments (CIs) is well known, the prevalence particularly of mild CI in CR and the characteristics of affected patients have been insufficiently investigated so far.
Methods: In this prospective observational study, 496 patients (54.5 ± 6.2 years, 79.8% men) with coronary artery disease following an acute coronary event (ACE) were analyzed. Patients were enrolled within 14 days of discharge from the hospital in a 3-week inpatient CR program. Patients were tested for CI using the Montreal Cognitive Assessment (MoCA) upon admission to and discharge from CR. Additionally, sociodemographic, clinical, and physiological variables were documented. The data were analyzed descriptively and in a multivariate stepwise backward elimination regression model with respect to CI.
Results: At admission to CR, CI (MoCA score < 26) was determined in 182 patients (36.7%). Significant differences between the CI and no-CI groups were identified: the CI group showed a higher prevalence of smoking (65.9 vs 56.7%, P = 0.046), heavy (physically demanding) workloads (26.4 vs 17.8%, P < 0.001), and sick leave longer than 1 month prior to CR (28.6 vs 18.5%, P = 0.026), as well as reduced exercise capacity (102.5 vs 118.8 W, P = 0.006) and a shorter 6-min walking distance (401.7 vs 421.3 m, P = 0.021) compared to the no-CI group. The age- and education-adjusted model showed positive associations with CI only for sick leave of more than 1 month prior to the ACE (odds ratio [OR] 1.673, 95% confidence interval 1.07–2.79; P = 0.03) and heavy workloads (OR 2.18, 95% confidence interval 1.42–3.36; P < 0.01).
Conclusion: The prevalence of CI in CR was considerable, affecting more than one-third of cardiac patients. Besides age and education level, CI was associated with heavy workloads and longer sick leave before the ACE.
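For orientation, an unadjusted odds ratio can be recovered from the reported group sizes and exposure percentages. The counts below are back-calculated and rounded from the abstract's figures (182 CI patients of 496, 26.4% vs 17.8% with heavy workloads); the adjusted OR of 2.18 reported above comes from a regression model and is not reproducible from a raw 2x2 table:

```python
import math

# Hypothetical 2x2 table: exposure = heavy workload, outcome = CI
a, b = 48, 134   # CI group (n=182): exposed (~26.4%), unexposed
c, d = 56, 258   # no-CI group (n=314): exposed (~17.8%), unexposed

or_ = (a / b) / (c / d)                      # crude odds ratio
# Wald 95% confidence interval on the log-odds scale
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_lo = math.exp(math.log(or_) - 1.96 * se)
ci_hi = math.exp(math.log(or_) + 1.96 * se)
print(f"crude OR = {or_:.2f}, 95% CI [{ci_lo:.2f}, {ci_hi:.2f}]")
```

The gap between this crude estimate and the adjusted OR illustrates why the abstract reports model-based associations rather than raw percentages.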
The empirical record of the early 21st century shows more authoritarian regimes than was assumed at the end of the 20th century. Current research on authoritarianism tries to explain the persistence of this regime type with reference to political institutions, leaving aside political actors who do not belong to the center of power.
The present project examines the role and function of political opposition in authoritarian regimes. It assumes that a significant characteristic of authoritarian regimes manifests itself in the opposition. The actor-centered project belongs to qualitatively oriented political science; it links Juan Linz's concept of authoritarianism with classical approaches of opposition research and makes these theories usable for current research on authoritarianism.
The elite-oriented typology of opposition developed here is applied to the example of Kenya in the period 1990-2005. The opposition groups are located within the institutional framework of authoritarian regimes, and their political actions are analyzed along the dimensions of status, convictions and strategy. Taking into account historically evolved regional and cultural specifics, it is assumed that general, cross-regional statements about opposition in authoritarian regimes can be made: no single type of opposition can bring about a change of rule on its own. Change or persistence of rule depends on the dominance of certain types of opposition within the web of opposition, together with the simultaneous weakness of other types.
Through the conceptual engagement with opposition and its empirical exploration, this study aims to make a substantial contribution to the necessary debate on authoritarian regimes in the 21st century.
The Earth’s shallow subsurface with sedimentary cover acts as a waveguide to any incoming wavefield. Within the framework of my thesis, I focused on the characterization of this shallow subsurface with tens to a few hundred meters of sediment cover. I imaged the 1D seismic shear wave velocity (and, where possible, the 1D compressional wave velocity). This information is required not only for seismic risk assessment, geotechnical engineering and microzonation, but also for exploration and global seismology, where site effects are often neglected in seismic waveform modeling.
First, the conventional frequency-wavenumber (f - k) technique is used to derive the dispersion characteristics of the propagating surface waves recorded with distinct arrays of seismometers in 1D and 2D configurations. Further, the cross-correlation technique is applied to seismic array data to estimate the Green’s function between pairs of receivers, treating one as the source and the other as the receiver. Assuming a 1D medium, the estimated cross-correlation Green’s functions are sorted by interstation distance into a virtual 1D active seismic experiment. The f - k technique is then used to estimate the dispersion curves. This integrated analysis broadens the usable bandwidth of the phase-velocity dispersion curves and therefore improves the resolution of the estimated 1D Vs profile.
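The cross-correlation step can be illustrated with a synthetic two-receiver example: a random noise wavefield recorded at one station arrives at a second station delayed by the interstation travel time, and the peak of the cross-correlation recovers that delay (the medium velocity and geometry below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                      # sampling interval, s
n = 5000                       # number of samples
v = 500.0                      # assumed medium velocity, m/s
dist = 100.0                   # interstation distance, m
lag_true = dist / v            # 0.2 s travel time between stations

src = rng.standard_normal(n)   # long random noise source
shift = int(round(lag_true / dt))
rec1 = src
rec2 = np.roll(src, shift)     # same wavefield arriving 0.2 s later

# Cross-correlate: the peak lag approximates the interstation travel time,
# i.e. the empirical Green's function of a virtual source at rec1
xcorr = np.correlate(rec2, rec1, mode="full")
lags = np.arange(-n + 1, n) * dt
lag_est = lags[np.argmax(xcorr)]
print(f"estimated travel time: {lag_est:.2f} s")
```

Repeating this for many station pairs and sorting the correlations by interstation distance yields the virtual shot gather on which the f - k analysis is then performed.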
Second, the new theoretical approach based on the Diffuse Field Assumption (DFA) is used for the interpretation of the observed microtremors H/V spectral ratio. The theory is further extended in this research work to include not only the interpretation of the H/V measured at the surface, but also the H/V measured at depths and in marine environments. A modeling and inversion of synthetic H/V spectral ratio curves on simple predefined geological structures shows an almost perfect recovery of the model parameters (mainly Vs and to a lesser extent Vp). These results are obtained after information from a receiver at depth has been considered in the inversion.
Finally, the Rayleigh wave phase velocity information estimated from array data and the H/V(z, f) spectral ratio estimated from single-station data are combined and jointly inverted for the velocity profile. The results indicate an improved depth resolution compared to inversions of the phase-velocity dispersion curves alone. The overall estimated sediment thickness is comparable to estimates obtained by inverting the full microtremor H/V spectral ratio.
We study the interplay between analysis on manifolds with singularities and complex analysis and develop new structures of operators based on the Mellin transform and tools for iterating the calculus for higher singularities. We refer to the idea of interpreting boundary value problems (BVPs) in terms of pseudo-differential operators with a principal symbolic hierarchy, taking into account that BVPs are a source of cone and edge operator algebras. The respective cone and edge pseudo-differential algebras in turn are the starting point of higher corner theories. In addition there are deep relationships between corner operators and complex analysis. This will be illustrated by the Mellin symbolic calculus.
Transmorphic
(2016)
Defining Graphical User Interfaces (GUIs) through functional abstractions can reduce the complexity that arises from mutable abstractions. Recent examples, such as Facebook's React GUI framework, have shown how modelling the view as a functional projection from the application state to a visual representation can reduce the number of interacting objects and thus help to improve the reliability of the system. This, however, comes at the price of a more rigid, functional framework in which programmers are forced to express visual entities with functional abstractions, detached from the way one intuitively thinks about the physical world.
In contrast to that, the GUI framework Morphic allows interactions in the graphical domain, such as grabbing, dragging or resizing of elements, to evolve an application at runtime, providing liveness and directness in the development workflow. Modelling each visual entity through mutable abstractions, however, makes it difficult to ensure correctness when GUIs start to grow more complex. Furthermore, by evolving morphs at runtime through direct manipulation, we diverge more and more from the symbolic description that corresponds to the morph. Given that both of these approaches have their merits and problems, is there a way to combine them in a meaningful way that preserves their respective benefits?
As a solution for this problem, we propose to lift Morphic's concept of direct manipulation from the mutation of state to the transformation of source code. In particular, we will explore the design, implementation and integration of a bidirectional mapping between the graphical representation and a functional, declarative symbolic description of a graphical user interface within a self-hosted development environment. We will present Transmorphic, a functional take on the Morphic GUI framework, where the visual and structural properties of morphs are defined in a purely functional, declarative fashion. In Transmorphic, the developer is able to assemble different morphs at runtime through direct manipulation, which is automatically translated into changes in the code of the application. In this way, the comprehensibility and predictability of direct manipulation can be used in the context of a purely functional GUI, while the effects of the manipulation are reflected in a medium that is always in reach for the programmer and can even be used to incorporate the source transformations into the source files of the application.
The visceral protein transthyretin (TTR) is frequently affected by oxidative post-translational protein modifications (PTPMs) in various diseases. Thus, better insight into structure-function relationships arising from oxidative PTPMs of TTR should contribute to the understanding of pathophysiologic mechanisms. While the in vivo analysis of TTR in mammalian models is complex, time- and resource-consuming, transgenic Caenorhabditis elegans expressing hTTR provide an optimal model for the in vivo identification and characterization of drug-mediated oxidative PTPMs of hTTR by means of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). Herein, we demonstrated that hTTR is expressed in all developmental stages of Caenorhabditis elegans, enabling the analysis of hTTR metabolism during the whole life cycle. The suitability of the applied model was verified by exposing worms to D-penicillamine and menadione. Both drugs induced substantial changes in the oxidative PTPM pattern of hTTR. Additionally, for the first time a covalent binding of both drugs to hTTR was identified and verified by molecular modelling.
Variations in the distribution of mass within an orogen may lead to transient sediment storage, which in turn might affect the state of stress and the level of fault activity. Distinguishing between the different forcing mechanisms that cause variations in sediment flux and tectonic activity is therefore one of the most challenging tasks in understanding the spatiotemporal evolution of active mountain belts.
The Himalayan mountain belt is one of the most significant Cenozoic collisional mountain belts, formed by the collision between the northward-moving Indian Plate and the Eurasian Plate during the last 55-50 Ma. Ongoing convergence of these two tectonic plates is accommodated by faulting and folding within the Himalayan arc-shaped orogen and the continued lateral and vertical growth of the Tibetan Plateau, of the mountain belts adjacent to the plateau, and of regions farther north. Growth of the Himalayan orogen is manifested by the development of successive south-vergent thrust systems. These thrust systems divide the orogen into different morphotectonic domains. From north to south these thrusts are the Main Central Thrust (MCT), the Main Boundary Thrust (MBT) and the Main Frontal Thrust (MFT). The growing topography interacts with moisture-bearing monsoonal winds, which results in pronounced gradients in rainfall, weathering, erosion and sediment transport toward the foreland and beyond. However, a fraction of this sediment is trapped and transiently stored within the intermontane valleys or ‘duns’ in the lower-elevation foothills of the range. Improved understanding of the spatiotemporal evolution of these sediment archives could provide a unique opportunity to decipher the triggers of variations in sediment production, delivery and storage in an actively deforming mountain belt, and support efforts to test linkages between sediment volumes in intermontane basins and changes in the shallow crustal stress field. As sediment redistribution in mountain belts on timescales of 10²-10⁴ years can affect cultural characteristics and infrastructure in the intermontane valleys, and may even impact the seismotectonics of a mountain belt, there is a heightened interest in understanding sediment-routing processes and causal relationships between tectonism, climate and topography.
It is at this intersection between tectonic processes and superposed climatic and sedimentary processes in the Himalayan orogenic wedge that my investigation is focused. The study area is the intermontane Kangra Basin in the northwestern Sub-Himalaya, chosen because the characteristics of the different Himalayan morphotectonic provinces are well developed there, the area is part of a region strongly influenced by monsoonal forcing, and the numerous fluvial terraces provide excellent strain markers for assessing deformation processes within the Himalayan orogenic wedge. In addition, being located in front of the Dhauladhar Range, the region is characterized by pronounced gradients in past and present-day erosion and sediment processes associated with repeatedly changing climatic conditions. In light of these conditions, I analysed climate-driven late Pleistocene-Holocene sediment cycles in this tectonically active region, which may be responsible for triggering the tectonic re-organization within the Himalayan orogenic wedge that has led to out-of-sequence thrusting since at least the early Holocene.
The Kangra Basin is bounded by the MBT in the north and the Sub-Himalayan Jwalamukhi Thrust (JMT) in the south, and transiently stores sediments derived from the Dhauladhar Range. The basin contains ~200-m-thick conglomerates reflecting two distinct aggradation phases; following aggradation, several fluvial terraces were sculpted into these fan deposits. ¹⁰Be cosmogenic radionuclide (CRN) surface-exposure dating of these terrace levels provides an age of 53.4±3.2 ka for the highest-preserved terrace (AF1); subsequently, this surface was incised until ~15 ka, when the second fan (AF2) began to form. AF2 fan aggradation was superseded by episodic Holocene incision, creating at least four terrace levels. We find a correlation between variations in sediment transport and δ¹⁸O records from regions affected by the Indian Summer Monsoon (ISM). During periods of strengthened ISM and post-LGM glacial retreat, aggradation occurred in the Kangra Basin, likely due to high sediment flux, whereas periods of a weakened ISM coupled with lower sediment supply coincided with renewed re-incision.
However, the evolution of fluvial terraces along Sub-Himalayan streams in the Kangra sector is also forced by tectonic processes. Back-tilted, folded terraces clearly document tectonic activity of the JMT. Offset of one of the terrace levels indicates a shortening rate of 5.6±0.8 to 7.5±1.0 mm a⁻¹ over the last ~10 ka. Importantly, my study reveals that late Pleistocene/Holocene out-of-sequence thrusting accommodates 40-60% of the total 14±2 mm a⁻¹ of shortening partitioned throughout the Sub-Himalaya. Moreover, the fact that the JMT records shortening at a lower rate over longer timescales hints at out-of-sequence activity within the Sub-Himalaya. Re-activation of the JMT could be related to changes in the tectonic stress field caused by large-scale sediment removal from the basin. I speculate that deformation in the Sub-Himalaya behaves according to the predictions of the critical-wedge model and assume the following: while >200 m of sediment aggradation would trigger foreland-ward propagation of the deformation front, re-incision and removal of most of the stored sediments (nearly 80-85% of the optimum basin fill) would again create a sub-critical condition of the wedge taper and trigger retreat of the deformation front.
While tectonism is responsible for the longer-term processes of erosion associated with steepening hillslopes, sediment cycles in this environment are mainly the result of climatic forcing. My new ¹⁰Be cosmogenic-nuclide exposure dates and a synopsis of previous studies show that the late Pleistocene to Holocene alluvial fills and fluvial terraces studied here record periodic fluctuations of sediment supply and transport capacity on timescales of 10³-10⁵ years. To further evaluate the potential influence of climate change on these fluctuations, I compared the timing of aggradation and incision phases recorded in remnant alluvial fans and terraces with continental climate archives, such as speleothems, in neighboring regions affected by monsoonal precipitation. Together with previously published OSL ages constraining the timing of aggradation, I find a correlation between variations in sediment transport and oxygen-isotope records from regions affected by the Indian Summer Monsoon (ISM). Accordingly, during periods of increased monsoon intensity (the transitions from dry and cold to wet and warm periods, MIS4 to MIS3 and MIS2 to MIS1, where MIS = marine isotope stage) and post-Last Glacial Maximum glacial retreat, aggradation occurred in the Kangra Basin, likely due to high sediment flux. Conversely, periods of weakened monsoon intensity or lower sediment supply coincide with re-incision of the existing basin fill.
Finally, my study includes a low-temperature thermochronology component to assess the youngest exhumation history of the Dhauladhar Range. Zircon (U-Th)/He (ZHe) ages and existing low-temperature data sets (ZHe, apatite fission track (AFT)) across this range, together with 3D thermokinematic modeling (PECUBE), provide constraints on exhumation and on the activity of the range-bounding Main Boundary Thrust (MBT) since at least mid-Miocene time. The modeling results indicate mean slip rates on the MBT fault ramp of ~2-3 mm a⁻¹ since its activation. This has led to the growth of the >5-km-high frontal Dhauladhar Range and continuous deep-seated exhumation and erosion. The results also provide interesting constraints on deformation patterns and their variation along strike. They point towards the absence of the time-transient 'mid-crustal ramp' in the basal décollement and of duplexing of the Lesser Himalayan sequence, unlike in nearby regions or even the central Nepal domain. A fraction of the convergence (~10-15%) is accommodated along the deep-seated MBT ramp, most likely merging into the MHT. This finding is crucial for a rigorous assessment of the overall level of tectonic activity in the Himalayan morphotectonic provinces, as it contradicts recently published geodetic shortening estimates, in which it has been proposed that the total Himalayan shortening in the NW Himalaya is accommodated within the Sub-Himalaya and no tectonic activity is assigned to the MBT.
Interplay of coupling and common noise at the transition to synchrony in oscillator populations
(2016)
There are two ways to synchronize oscillators: by coupling and by common forcing, which can be pure noise. By virtue of the Ott-Antonsen ansatz for sine-coupled phase oscillators, we obtain analytically tractable equations for the case where both coupling and common noise are present. While noise always tends to synchronize the phase oscillators, repulsive coupling can act against synchrony, and we focus on this nontrivial situation. For identical oscillators, the fully synchronous state remains stable for small repulsive coupling; moreover, it is an absorbing state which always wins over the asynchronous regime. For oscillators with a distribution of natural frequencies, we report a counter-intuitive effect of dispersion (instead of the usual convergence) of the oscillators' frequencies at synchrony; the latter effect disappears if the noise vanishes.
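The competition described above can be explored numerically. The sketch below is a minimal Euler-Maruyama simulation of sine-coupled phase oscillators driven by a common noise term; the phase-dependent form of the noise forcing (σ sin θ dW) and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def order_parameter(theta):
    """Magnitude of the Kuramoto mean field, between 0 (asynchrony) and 1 (full synchrony)."""
    return float(abs(np.exp(1j * theta).mean()))

def simulate(N=200, K=-0.1, sigma=0.5, omega_spread=0.0,
             dt=0.001, steps=20000, seed=1):
    """Euler-Maruyama integration of N phase oscillators with sine coupling
    of strength K (K < 0 means repulsive coupling) and a common noise
    increment dW shared by all oscillators (intensity sigma)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    omega = rng.normal(0.0, omega_spread, N)      # natural frequencies
    for _ in range(steps):
        z = np.exp(1j * theta).mean()             # complex mean field
        coupling = K * np.abs(z) * np.sin(np.angle(z) - theta)
        dW = rng.normal(0.0, np.sqrt(dt))         # SAME increment for every oscillator
        theta = theta + (omega + coupling) * dt + sigma * np.sin(theta) * dW
    return order_parameter(theta)
```

Varying `K` toward negative values while keeping `sigma` fixed lets one probe the tug-of-war between noise-induced synchrony and repulsive coupling.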
Editorial (Dr. Roswitha Lohwaßer) ; Learning from successful schools already during teacher training (Jannis Andresen, Jakob Erichsen) ; How a spark jumps across (Laura Zrenner) ; Learning effects under the magnifying glass (Ariane Faulian) ; The Lernreise as ultima ratio? (Leroy Großmann) ; The Lernreise as a school perspective (Laura Zrenner) ; Team spirit required (Cornelia Brückner) ; Dream job: teacher (Robin Miska) ; Lived integration (Cornelia Brückner) ; Leading confidently in the classroom (Dr. Helga Breuninger, Marina Rottig, Prof. Dr. Wilfried Schley) ; What do student teachers learn from video-based classroom-management training? (Dr. Janine Neuhaus, Mirko Wendland)
This thesis is focused on the study and the exact simulation of two classes of real-valued Brownian diffusions: multi-skew Brownian motions with constant drift and Brownian diffusions whose drift admits a finite number of jumps.
The skew Brownian motion was introduced in the sixties by Itô and McKean, who constructed it from the reflected Brownian motion, flipping its excursions from the origin with a given probability. Such a process behaves like the original one except at the point 0, which plays the role of a semipermeable barrier. More generally, a skew diffusion with several semipermeable barriers, called a multi-skew diffusion, behaves as a diffusion everywhere except when it reaches one of the barriers, where it is partially reflected with a probability depending on that particular barrier. Clearly, a multi-skew diffusion can be characterized either as the solution of a stochastic differential equation involving weighted local times (these terms providing the semi-permeability) or by its infinitesimal generator as a Markov process.
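The barrier behaviour described above can be illustrated with the classical random-walk approximation of a skew Brownian motion (the Harrison-Shepp construction, not the method of this thesis): an ordinary ±1 walk everywhere, except that from 0 the step is +1 with probability p. Rescaled, the walk converges to a skew Brownian motion with skewness parameter p:

```python
import numpy as np

def skew_bm_path(p=0.7, n_steps=10000, seed=0):
    """Random-walk approximation of a skew Brownian motion.
    Away from 0 the walk is symmetric; at the barrier 0 the next
    step is +1 with probability p. The rescaled path x_k / sqrt(n)
    approximates skew BM on [0, 1]."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps + 1, dtype=int)
    for k in range(n_steps):
        if x[k] == 0:
            step = 1 if rng.random() < p else -1   # semipermeable barrier
        else:
            step = 1 if rng.random() < 0.5 else -1  # ordinary symmetric step
        x[k + 1] = x[k] + step
    return x / np.sqrt(n_steps)
```

Setting p = 1/2 recovers standard Brownian motion, while p = 1 yields a path that never goes below the barrier, i.e. reflected Brownian motion.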
In this thesis we first obtain a contour-integral representation for the transition semigroup of the multi-skew Brownian motion with constant drift, based on a fine analysis of its complex properties. Thanks to this representation, we write the transition densities of the two-skew Brownian motion with constant drift explicitly as an infinite series involving, in particular, Gaussian functions and their tails.
Then we propose a new, useful application of a generalization of the well-known rejection sampling method. Recall that this basic algorithm allows one to sample from a density as soon as one finds an instrumental density that is easy to sample from and for which the ratio between the target and instrumental densities is a bounded function. The generalized rejection sampling method allows one to sample exactly from densities for which only an approximation is known. The originality of the algorithm lies in the fact that one ultimately samples directly from the law without any approximation error beyond machine precision.
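For reference, the basic rejection sampler that the thesis generalizes can be sketched in a few lines; the Beta(2,2) target and uniform instrumental density below are an invented toy example, not the densities treated in the thesis:

```python
import random

def rejection_sample(target_pdf, instrumental_sample, instrumental_pdf, bound, rng=random):
    """Basic rejection sampling: draw X from the instrumental law and
    accept it with probability target_pdf(X) / (bound * instrumental_pdf(X)).
    Requires target_pdf(x) <= bound * instrumental_pdf(x) for all x."""
    while True:
        x = instrumental_sample()
        if rng.random() * bound * instrumental_pdf(x) <= target_pdf(x):
            return x

# Toy example: target Beta(2,2) with density 6x(1-x) on [0,1],
# instrumental density Uniform(0,1); the ratio is bounded by 1.5.
beta22 = lambda x: 6.0 * x * (1.0 - x)
sample = rejection_sample(beta22, random.random, lambda x: 1.0, 1.5)
```

The generalized method of the thesis replaces the exact evaluation of `target_pdf` by converging approximations while still returning an exact draw.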
As an application, we sample from the transition density of the two-skew Brownian motion with or without constant drift. The instrumental density is the transition density of the Brownian motion with constant drift, and we provide a useful uniform bound for the ratio of the densities. We also present numerical simulations to study the efficiency of the algorithm.
The second aim of this thesis is to develop an exact simulation algorithm for a Brownian diffusion whose drift admits several jumps. In the literature, so far only the case of a continuous drift (resp. of a drift with one finite jump) has been treated. The theoretical method we give can handle any finite number of discontinuities. We then focus on the case of two jumps, using the transition densities of the two-skew Brownian motion obtained before. Various examples are presented and the efficiency of our approach is discussed.
Understanding the role of natural climate variability under the pressure of human-induced changes of climate and landscapes is crucial to improving future projections and adaptation strategies. This doctoral thesis aims to reconstruct Holocene climate and environmental changes in NE Germany based on annually laminated lake sediments. The work contributes to the ICLEA project (Integrated CLimate and Landscape Evolution Analyses). ICLEA compares multiple high-resolution proxy records with independent chronologies from the N central European lowlands in order to disentangle the impact of climate change and human land use on landscape development during the Lateglacial and Holocene. In this respect, two study sites in NE Germany are investigated in this doctoral project: Lake Tiefer See and palaeolake Wukenfurche. While both sediment records are studied with a combination of high-resolution sediment microfacies and geochemical analyses (e.g. µ-XRF, carbon geochemistry and stable isotopes), detailed proxy understanding mainly focused on the continuous 7.7-m-long sediment core from Lake Tiefer See covering the last ~6000 years. Three main objectives are pursued at Lake Tiefer See: (1) to establish a reliable and independent chronology, (2) to establish microfacies and geochemical proxies as indicators of climate and environmental changes, and (3) to trace the effects of climate variability and human activity on sediment deposition.
Addressing the first aim, a reliable chronology of Lake Tiefer See is compiled using a multiple-dating concept. Varve counting and tephra findings form the chronological framework for the last ~6000 years. The good agreement with independent radiocarbon dates of terrestrial plant remains verifies the robustness of the age model. The resulting reliable and independent chronology of Lake Tiefer See and, additionally, the identification of nine tephras provide a valuable basis for detailed comparison and synchronization of the Lake Tiefer See data set with other climate records. The sediment profile of Lake Tiefer See exhibits striking alternations between well-varved and non-varved intervals. The combination of microfacies, geochemical and microfossil (i.e. cladoceran and diatom) analyses indicates that these changes in varve preservation are caused by variations of lake circulation in Lake Tiefer See. An exception is the well-varved sediment deposited since AD 1924, which is mainly influenced by human-induced lake eutrophication. Well-varved intervals before the 20th century are considered to reflect phases of reduced lake circulation and, consequently, stronger anoxic conditions. Non-varved intervals, in contrast, indicate increased lake circulation in Lake Tiefer See, leading to more oxygenated conditions at the lake bottom. Furthermore, lake circulation influences not only sediment deposition but also geochemical processes in the lake. For example, the proxy meaning of δ13C_OM varies in time in response to changes of the oxygen regime in the lake hypolimnion: during reduced lake circulation and stronger anoxic conditions δ13C_OM is influenced by microbial carbon cycling, whereas organic-matter degradation controls δ13C_OM during phases of intensified lake circulation and more oxygenated conditions. The varve preservation indicates an increasing trend of lake circulation at Lake Tiefer See after ~4000 cal a BP.
This trend is superimposed by decadal- to centennial-scale variability of lake-circulation intensity. Comparison with other records in Central Europe suggests that the long-term trend is probably related to gradual changes in Northern Hemisphere orbital forcing, which induced colder and windier conditions in Central Europe and therefore reinforced lake circulation. Decadal- to centennial-scale periods of increased lake circulation coincide with settlement phases at Lake Tiefer See, as inferred from pollen data of the same sediment record. Deforestation reduced the wind shelter of the lake, which probably increased the sensitivity of lake circulation to wind stress. However, the results of this thesis also suggest that several of these phases of increased lake circulation were additionally reinforced by climate changes. A first indication is provided by the comparison with the Baltic Sea record, which shows a striking correspondence between major non-varved intervals at Lake Tiefer See and bioturbated sediments in the Baltic Sea. Furthermore, a preliminary comparison with the ICLEA study site Lake Czechowskie (N central Poland) shows a coincidence of at least three phases of increased lake circulation in both lakes, which concur with periods of known climate changes (the 2.8 ka event, the 'Migration Period' and the 'Little Ice Age'). These results suggest an additional supra-regional climate forcing also on short-term increases of lake circulation in Lake Tiefer See.
In summary, the results of this thesis suggest that lake circulation at Lake Tiefer See is driven by a combination of long-term and short-term climate changes as well as by anthropogenic deforestation phases. Furthermore, lake circulation drives geochemical cycles in the lake, affecting the meaning of proxy data. The work presented here therefore expands the knowledge of climate and environmental variability in NE Germany. Moreover, the integration of the Lake Tiefer See multi-proxy record in a regional comparison with another ICLEA site, Lake Czechowskie, made it possible to better decipher climate changes and human impact on the lake system. These first results suggest a huge potential for further detailed regional comparisons to better understand palaeoclimate dynamics in N central Europe.
Services that operate over the Internet are under constant threat of being exposed to fraudulent use. Maintaining good user experience for legitimate users often requires the classification of entities as malicious or legitimate in order to initiate countermeasures. As an example, inbound email spam filters classify each message as spam or non-spam. They can base their decision on both the content of each email and on features that summarize prior emails received from the sending server. In general, discriminative classification methods learn to distinguish positive from negative entities. Each decision for a label may be based on features of the entity and related entities. When labels of related entities have strong interdependencies---as can be assumed, e.g., for emails delivered by the same user---classification decisions should not be made independently, and dependencies should be modeled in the decision function. This thesis addresses the formulation of discriminative classification problems that are tailored for the specific demands of the following three Internet security applications. Theoretical and algorithmic solutions are devised to protect an email service against flooding of user inboxes, to mitigate abusive usage of outbound email servers, and to protect web servers against distributed denial of service attacks.
In the application of filtering an inbound email stream for unsolicited emails, utilizing features that go beyond each individual email's content can be valuable. Information about each sending mail server can be aggregated over time and may help in identifying unwanted emails. However, while this information will be available to the deployed email filter, some parts of the training data that are compiled by third party providers may not contain this information. The missing features have to be estimated at training time in order to learn a classification model. In this thesis an algorithm is derived that learns a decision function that integrates over a distribution of values for each missing entry. The distribution of missing values is a free parameter that is optimized to learn an optimal decision function.
The outbound stream of emails of an email service provider can be separated by the customer IDs that ask for delivery. All emails that are sent by the same ID in the same period of time are related, both in content and in label. Hijacked customer accounts may send batches of unsolicited emails to other email providers, which in turn might blacklist the sender's email servers after detection of incoming spam emails. The risk of being blocked from further delivery depends on the rate of outgoing unwanted emails and the duration of high spam sending rates. An optimization problem is developed that minimizes the expected cost for the email provider by learning a decision function that assigns a limit on the sending rate to customers based on each customer's email stream.
Identifying attacking IPs during HTTP-level DDoS attacks makes it possible to block those IPs from further accessing the web servers. DDoS attacks are usually carried out by infected clients that are members of the same botnet and show similar traffic patterns. HTTP-level attacks aim at exhausting one or more resources of the web server infrastructure, such as CPU time. If the joint set of attackers cannot increase resource usage close to the maximum capacity, no effect will be experienced by legitimate users of hosted web sites. However, if the additional load raises the computational burden towards the critical range, user experience will degrade until service may be unavailable altogether. As the loss incurred by missing one attacker depends on the blocking decisions for other attackers (if most other attackers are detected, failing to block one client will likely not be harmful), a structured output model has to be learned. In this thesis an algorithm is developed that learns a structured prediction decoder that searches the space of label assignments, guided by a policy.
Each model is evaluated on real-world data and is compared to reference methods. The results show that modeling each classification problem according to the specific demands of the task improves performance over solutions that do not consider the constraints inherent to an application.
Climate Change
(2016)
What is justice? What might just rules look like for the catastrophes and suffering that climate change is causing or will cause? These are frequently unjust because they often hit hardest those who have contributed least to climate change.
But what exactly do we mean by the term 'climate change'? And can it really affect human beings directly? A brief scientific overview clarifies the most important questions here.
Since this is a philosophical work, it must first be clarified whether humans can be the cause of something like global warming at all. Robert Spaemann's thesis is that humans, through their free will, can change the course of the world with their individual actions. Hans Jonas adds that this capacity makes us responsible for the intended and unintended consequences of our actions.
This establishes, from a scientific perspective (part 1 of the thesis) and from a philosophical perspective (beginning of part 2), that humans are most probably the cause of climate change and that this causation has moral consequences for them.
A philosophical concept of justice is developed from Kant's legal and moral philosophy, because it is the only one that can grant human beings a right to have rights at all. This right springs from the human capacity for transcendental freedom, which is why the right to have rights belongs to everyone absolutely and at all times. At the same time, Kant's philosophy in turn culminates in the idea of freedom: justice exists only if all human beings can be equally free.
What does this mean concretely? How could justice really be realized in practice? The realization takes two basic directions. John Rawls and Stefan Gosepath, among others, deal extensively with procedural justice, which means finding just procedures that regulate social coexistence. The guiding principle here is above all a right of co-determination for all, so that in principle all citizens give themselves their own laws and thus act freely.
With regard to climate change, the second direction takes center stage: distributive justice. Material goods must be distributed in such a way that, despite empirical differences, all human beings are recognized as moral subjects and can be free.
But are these philosophical conclusions not far too abstract to be applied to a problem as elusive and global as climate change? What, then, could climate justice be?
There are many principles of justice that claim to offer a just basis for climate problems, such as the polluter-pays principle, the ability-to-pay principle, or the grandfathering principle, under which the main polluters may continue to emit the most (this principle has guided international negotiations so far).
The aim of this thesis is to find out how climate problems can be solved in such a way that universal human rights are established and secured for all people under all circumstances and that all people can act freely and morally.
The conclusion of this thesis is that Kant's concept of justice could be implemented through a combination of the right to subsistence emissions, the Greenhouse Development Rights principle (GDR principle) and international statehood.
Under the right to subsistence emissions, every human being has the right to consume enough energy, and to produce the associated emissions, to lead a life in dignity. The GDR principle calculates each country's, or even each world citizen's, share of the total global responsibility for climate protection by adding the historical emissions (climate debt) to the current financial capacity of the country or individual (capacity to take responsibility). The implementation of international bodies is defended because climate change is a global, border-crossing problem whose effects and responsibilities have global dimensions.
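The burden-sharing arithmetic sketched above can be illustrated with a toy computation. The equal weighting, the country names and all numbers below are invented for illustration; the actual GDR framework defines its Responsibility Capacity Index with its own thresholds and weights:

```python
def gdr_share(historical_emissions, capacity, countries):
    """Toy Greenhouse-Development-Rights allocation: a country's share of
    the global mitigation burden is the average of (a) its share of
    cumulative historical emissions ('climate debt') and (b) its share of
    financial capacity above a subsistence threshold."""
    total_e = sum(historical_emissions[c] for c in countries)
    total_c = sum(capacity[c] for c in countries)
    return {c: 0.5 * historical_emissions[c] / total_e
               + 0.5 * capacity[c] / total_c
            for c in countries}

# Invented numbers for three hypothetical countries:
emissions = {"A": 300.0, "B": 150.0, "C": 50.0}   # cumulative emissions, Gt CO2
capacity  = {"A": 20.0,  "B": 8.0,   "C": 2.0}    # capacity above threshold, trillion USD
shares = gdr_share(emissions, capacity, ["A", "B", "C"])
```

By construction the shares sum to one, and a country with both high historical emissions and high capacity (here "A") carries the largest burden.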
A compelling argument for almost all climate protection measures is that they show synergies with other areas of society, such as health and poverty reduction, in which the enforcement of human rights is still being fought for.
Is this proposed solution not completely utopian?
This proposal poses a great challenge to the international community, but it would be the only just solution to our climate problems. Furthermore, the thesis holds to the Kantian principle of action that the perpetual striving toward ideal goals is the best way for human, fallible beings to realize them.
Protective effect of 6-shogaol, ellagic acid and myrrh on the intestinal epithelial barrier
(2016)
Many bioactive plant compounds and plant metabolites have anti-inflammatory properties. These hold great potential for use in phytotherapy and in the prevention of inflammatory bowel disease (IBD). Intestinal barrier dysfunction is a typical characteristic of IBD patients, who consequently suffer from acute diarrhea.
In this work, the plant components 6-shogaol, ellagic acid and myrrh are examined in the intestinal colonic epithelial cell models HT-29/B6 and Caco-2 for their potential to strengthen the intestinal barrier or to prevent barrier dysfunction. The analyses focus on paracellular barrier function and on the regulation of the claudins, the tight-junction (TJ) protein family decisive for this function.
Barrier function is determined by measuring the transepithelial resistance (TER) and by flux measurements in the Ussing chamber. For this purpose, HT-29/B6 and Caco-2 monolayers are treated with the plant components (6-shogaol, ellagic acid, myrrh), the pro-inflammatory cytokine TNF-α, or a combination of both substances for 24 or 48 h. For further characterization, the expression and localization of the claudins relevant to the paracellular barrier, the TJ ultrastructure and various signaling pathways were also analyzed.
In Caco-2 monolayers, ellagic acid and myrrh alone, but not 6-shogaol, led to an increase in TER caused by reduced permeability for sodium ions. Myrrh decreased the expression of the cation-channel-forming TJ protein claudin-2 via inhibition of the PI3K/Akt signaling pathway, whereas ellagic acid reduced the expression of the TJ proteins claudin-4 and -7. All plant components protected Caco-2 cells against TNF-α-induced barrier dysfunction.
In HT-29/B6 monolayers, none of the plant components alone altered barrier function. HT-29/B6 cells responded to TNF-α with a marked decrease in TER and an increased fluorescein permeability. The TER decrease was characterized by a PI3K/Akt-mediated increase in claudin-2 expression and an NFκB-mediated redistribution of the sealing TJ protein claudin-1. 6-Shogaol partially inhibited the TER decrease and prevented the PI3K/Akt-induced claudin-2 expression and the NFκB-dependent claudin-1 redistribution. Likewise, myrrh, but not ellagic acid, inhibited the TNF-α-induced TER decrease. Myrrh prevented the increase in claudin-2 expression and the claudin-1 redistribution, but inhibited neither NFκB nor PI3K/Akt activation. This work shows that STAT6 is also involved in the TNF-α-induced increase in claudin-2 expression in HT-29/B6 cells: myrrh inhibited the TNF-α-induced phosphorylation of STAT6 and the increased claudin-2 expression.
The results suggest that the plant components 6-shogaol, ellagic acid and myrrh strengthen the barrier by different mechanisms. For the treatment of intestinal diseases involving barrier dysfunction, combination preparations from different plants could therefore be more effective than monopreparations.