Protective effect of 6-shogaol, ellagic acid and myrrh on the intestinal epithelial barrier
(2016)
Many bioactive plant constituents and plant metabolites possess anti-inflammatory properties. These hold great potential for use in phytotherapy and in the prevention of inflammatory bowel disease (IBD). Intestinal barrier dysfunction is a typical characteristic of IBD patients, who consequently suffer from acute diarrhea.
In this work, the plant components 6-shogaol, ellagic acid and myrrh are investigated in the intestinal colonic epithelial cell models HT-29/B6 and Caco-2 for their potential to strengthen the intestinal barrier or to prevent barrier dysfunction. The analyses focus on paracellular barrier function and on the regulation of the claudins, the tight junction (TJ) protein family that is decisive for this function.
Barrier function is determined by measuring the transepithelial resistance (TER) and by flux measurements in the Ussing chamber. For this purpose, HT-29/B6 and Caco-2 monolayers are treated for 24 or 48 h with the plant components (6-shogaol, ellagic acid, myrrh), the pro-inflammatory cytokine TNF-α, or a combination of both substances. In addition, the expression and localization of the claudins relevant for the paracellular barrier, the TJ ultrastructure and various signaling pathways were analyzed for further characterization.
In Caco-2 monolayers, ellagic acid and myrrh alone, but not 6-shogaol, led to a TER increase caused by reduced permeability for sodium ions. Myrrh decreased the expression of the cation-channel-forming TJ protein claudin-2 via inhibition of the PI3K/Akt signaling pathway, whereas ellagic acid reduced the expression of the TJ proteins claudin-4 and -7. All plant components protected Caco-2 cells against TNF-α-induced barrier dysfunction.
In HT-29/B6 monolayers, none of the plant components alone altered barrier function. HT-29/B6 cells responded to TNF-α with a marked decrease in TER and an increased fluorescein permeability. The TER decrease was characterized by a PI3K/Akt-mediated increase in claudin-2 expression and an NFκB-mediated redistribution of the sealing TJ protein claudin-1. 6-Shogaol partially inhibited the TER drop and prevented both the PI3K/Akt-induced claudin-2 expression and the NFκB-dependent claudin-1 redistribution. Likewise, myrrh, but not ellagic acid, inhibited the TNF-α-induced TER drop. Myrrh prevented the rise in claudin-2 expression and the claudin-1 redistribution, but inhibited neither NFκB nor PI3K/Akt activation. This work shows that STAT6 is also involved in the TNF-α-induced increase in claudin-2 expression in HT-29/B6 cells: myrrh inhibited the TNF-α-induced phosphorylation of STAT6 and the elevated claudin-2 expression.
The results indicate that the plant components 6-shogaol, ellagic acid and myrrh strengthen the intestinal barrier through different mechanisms. For the treatment of intestinal diseases with barrier dysfunction, combination preparations from different plants could therefore be more effective than single-plant preparations.
Services that operate over the Internet are under constant threat of being exposed to fraudulent use. Maintaining good user experience for legitimate users often requires the classification of entities as malicious or legitimate in order to initiate countermeasures. As an example, inbound email spam filters decide for spam or non-spam. They can base their decision on both the content of each email as well as on features that summarize prior emails received from the sending server. In general, discriminative classification methods learn to distinguish positive from negative entities. Each decision for a label may be based on features of the entity and related entities. When labels of related entities have strong interdependencies---as can be assumed e.g. for emails being delivered by the same user---classification decisions should not be made independently and dependencies should be modeled in the decision function. This thesis addresses the formulation of discriminative classification problems that are tailored for the specific demands of the following three Internet security applications. Theoretical and algorithmic solutions are devised to protect an email service against flooding of user inboxes, to mitigate abusive usage of outbound email servers, and to protect web servers against distributed denial of service attacks.
In the application of filtering an inbound email stream for unsolicited emails, utilizing features that go beyond each individual email's content can be valuable. Information about each sending mail server can be aggregated over time and may help in identifying unwanted emails. However, while this information will be available to the deployed email filter, some parts of the training data that are compiled by third party providers may not contain this information. The missing features have to be estimated at training time in order to learn a classification model. In this thesis an algorithm is derived that learns a decision function that integrates over a distribution of values for each missing entry. The distribution of missing values is a free parameter that is optimized to learn an optimal decision function.
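The idea of integrating over a distribution of missing values can be illustrated with a small sketch; the Monte Carlo averaging, the Gaussian imputation model and all names below are illustrative assumptions, not the algorithm derived in the thesis:
```python
# Illustrative only: average a linear classifier's score over sampled values
# for the missing features of an email's sending-server profile.
import numpy as np

rng = np.random.default_rng(0)

def expected_score(x, w, b, missing_mask, mu, sigma, n_samples=100):
    """Average the decision score over draws for the missing entries."""
    scores = []
    for _ in range(n_samples):
        x_filled = x.copy()
        # sample each missing feature from its (assumed) Gaussian distribution
        x_filled[missing_mask] = rng.normal(mu[missing_mask], sigma[missing_mask])
        scores.append(x_filled @ w + b)
    return np.mean(scores)

# toy example: 4 features, the last two are missing at prediction time
x = np.array([1.2, -0.5, 0.0, 0.0])
missing = np.array([False, False, True, True])
w, b = np.array([0.8, -1.1, 0.4, 0.3]), -0.2
mu, sigma = np.zeros(4), np.ones(4)
print("spam" if expected_score(x, w, b, missing, mu, sigma) > 0 else "non-spam")
```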
The outbound stream of emails of an email service provider can be separated by the customer IDs that ask for delivery. All emails that are sent by the same ID in the same period of time are related, both in content and in label. Hijacked customer accounts may send batches of unsolicited emails to other email providers, which in turn might blacklist the sender's email servers after detection of incoming spam emails. The risk of being blocked from further delivery depends on the rate of outgoing unwanted emails and the duration of high spam sending rates. An optimization problem is developed that minimizes the expected cost for the email provider by learning a decision function that assigns a limit on the sending rate to customers based on each customer's email stream.
Identifying attacking IPs during HTTP-level DDoS attacks allows those IPs to be blocked from further accessing the web servers. DDoS attacks are usually carried out by infected clients that are members of the same botnet and show similar traffic patterns. HTTP-level attacks aim at exhausting one or more resources of the web server infrastructure, such as CPU time. If the joint set of attackers cannot increase resource usage close to the maximum capacity, legitimate users of the hosted web sites will experience no effect. However, if the additional load raises the computational burden towards the critical range, user experience will degrade until the service may become unavailable altogether. As the loss of missing one attacker depends on the block decisions for other attackers (if most other attackers are detected, not blocking one client will likely not be harmful), a structured output model has to be learned. In this thesis an algorithm is developed that learns a structured prediction decoder that searches the space of label assignments, guided by a policy.
Each model is evaluated on real-world data and is compared to reference methods. The results show that modeling each classification problem according to the specific demands of the task improves performance over solutions that do not consider the constraints inherent to an application.
Rapidly uplifting coastlines are frequently associated with convergent tectonic boundaries, like subduction zones, which are repeatedly ruptured by giant megathrust earthquakes. The coastal relief along tectonically active realms is shaped by the effects of sea-level variations and heterogeneous patterns of permanent tectonic deformation, which accumulate over several cycles of megathrust earthquakes. However, the correlation between earthquake deformation patterns and the sustained long-term segmentation of forearcs, particularly in Chile, remains poorly understood. Furthermore, the methods used to estimate permanent deformation from geomorphic markers, like marine terraces, have remained qualitative and are not readily repeatable. This contrasts with the increasing resolution of digital elevation models, such as those derived from Light Detection and Ranging (LiDAR) and high-resolution bathymetric surveys.
Throughout this thesis I study permanent deformation in a holistic manner: from the methods used to assess deformation rates to the processes involved in its accumulation. My research focuses on two aspects in particular: developing methodologies to assess permanent deformation using marine terraces, and comparing permanent deformation with seismic-cycle deformation patterns at different spatial scales along the rupture zone of the M8.8 Maule earthquake (2010). Two methods are developed to determine deformation rates from wave-built and wave-cut terraces, respectively. I selected an archetypal example of a wave-built terrace at Santa Maria Island, studying its stratigraphy and recognizing sequences of reoccupation events tied to eleven radiocarbon (14C) ages. I developed a method to link patterns of reoccupation with sea-level proxies by iterating relative sea-level curves for a range of uplift rates. I find the best fit between relative sea level and the stratigraphic patterns for an uplift rate of 1.5 ± 0.3 m/ka.
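A minimal sketch of this curve-fitting idea, with made-up marker ages, elevations and a toy eustatic sea-level curve (not the thesis's actual implementation):
```python
# For a range of candidate uplift rates, predict the present elevation of dated
# shoreline markers from a eustatic sea-level curve and keep the rate with the
# smallest misfit. All numbers are hypothetical.
import numpy as np

ages = np.array([2.0, 4.5, 6.0])          # marker ages, ka
elevations = np.array([2.5, 5.8, 8.1])    # present elevations, m
eustatic = np.array([-0.5, -1.0, -1.0])   # sea level at time of formation, m

best_rate, best_rms = None, np.inf
for rate in np.arange(0.0, 3.01, 0.05):   # candidate uplift rates, m/ka
    predicted = eustatic + rate * ages     # predicted present elevation of each marker
    rms = np.sqrt(np.mean((predicted - elevations) ** 2))
    if rms < best_rms:
        best_rate, best_rms = rate, rms

print(f"best-fit uplift rate: {best_rate:.2f} m/ka (RMS misfit {best_rms:.2f} m)")
```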
A graphical user interface named TerraceM® was developed in Matlab®. This novel software tool determines shoreline angles in wave-cut terraces under different geomorphic scenarios. To validate the methods, I selected test sites in areas with available high-resolution LiDAR topography along the Maule earthquake rupture zone and in California, USA. The software determines the 3D location of the shoreline angle, which is a proxy for the estimation of permanent deformation rates. The method is based on linear interpolations that define the paleo-platform and the paleo-cliff on swath profiles; the shoreline angle is then located by intersecting these interpolations. The accuracy and precision of TerraceM® were tested by comparing its results with previous assessments, and through an experiment with students in a computer-lab setting at the University of Potsdam.
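The intersection construction itself can be sketched in a few lines; the swath-profile picks below are hypothetical, and this Python snippet only illustrates the idea, not TerraceM® itself (which is a Matlab® tool):
```python
# Fit a line to points on the paleo-platform and one to points on the paleo-cliff
# along a swath profile, then intersect the two lines to locate the shoreline angle.
import numpy as np

# hypothetical picks: distance along profile (m), elevation (m)
platform = np.array([[0, 10.2], [20, 10.6], [40, 11.1], [60, 11.5]], float)
cliff    = np.array([[70, 14.0], [75, 18.5], [80, 23.2]], float)

# least-squares lines z = m*x + c for each segment
m1, c1 = np.polyfit(platform[:, 0], platform[:, 1], 1)
m2, c2 = np.polyfit(cliff[:, 0], cliff[:, 1], 1)

# intersection of the two lines gives the shoreline angle (distance, elevation)
x_sa = (c2 - c1) / (m1 - m2)
z_sa = m1 * x_sa + c1
print(f"shoreline angle at {x_sa:.1f} m along profile, {z_sa:.1f} m elevation")
```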
I combined the methods developed to analyze wave-built and wave-cut terraces to assess regional patterns of permanent deformation along the (2010) Maule earthquake rupture. Wave-built terraces are tied using 12 infrared stimulated luminescence (IRSL) ages, and shoreline angles in wave-cut terraces are estimated from 170 aligned swath profiles. The comparison of coseismic slip, interseismic coupling, and permanent deformation reveals three areas of high permanent uplift, terrace warping, and sharp fault offsets. These three areas correlate with regions of high slip and low coupling, as well as with the spatial limits of at least eight historical megathrust ruptures (M8-9.5). I propose that the zones of upwarping at Arauco and Topocalma reflect changes in the frictional properties of the megathrust, which result in discrete boundaries for the propagation of megathrust earthquakes.
To explore the application of geomorphic markers and quantitative morphology in offshore areas, I performed a local study of patterns of permanent deformation inferred from hitherto unrecognized drowned shorelines in the Arauco Bay, at the southern part of the (2010) Maule earthquake rupture zone. A multidisciplinary approach, including morphometry, sedimentology, paleontology, 3D morphoscopy, and a landscape evolution model, is used to recognize, map, and assess local rates and patterns of permanent deformation in submarine environments. Permanent deformation patterns are then reproduced using elastic models to assess deformation rates of an active submarine splay fault, defined as the Santa Maria Fault System (SMFS). The best fit suggests a reverse structure with a slip rate of 3.7 m/ka for the last 30 ka. The record of land-level changes during the earthquake cycle at Santa Maria Island suggests that most of the deformation may be accrued through splay-fault reactivation during megathrust earthquakes, like the (2010) Maule event. Considering a recurrence time of 150 to 200 years, as determined from historical and geological observations, slip between 0.3 and 0.7 m per event would be required to account for the 3.7 m/ka millennial slip rate. However, if the SMFS slips only every ~1000 years, representing a few megathrust earthquakes, then a slip of ~3.5 m per event would be required to account for the long-term rate. Such an event would be equivalent to a magnitude ~6.7 earthquake capable of generating a local tsunami.
The results of this thesis provide novel and fundamental information regarding the amount of permanent deformation accrued in the crust, and the mechanisms responsible for this accumulation at millennial time scales along the M8.8 Maule earthquake (2010) rupture zone. Furthermore, the results highlight the value of quantitative geomorphology and of repeatable methods to determine permanent deformation, improve the accuracy of marine terrace assessments, and estimate vertical deformation rates in tectonically active coastal areas. This is vital information for adequate coastal-hazard assessments and for anticipating realistic earthquake and tsunami scenarios.
Significance of dependent scattering for the optical properties of highly concentrated dispersions
(2016)
This thesis is focused on the study and the exact simulation of two classes of real-valued Brownian diffusions: multi-skew Brownian motions with constant drift and Brownian diffusions whose drift admits a finite number of jumps.
The skew Brownian motion was introduced in the sixties by Itô and McKean, who constructed it from the reflected Brownian motion, flipping its excursions from the origin with a given probability. Such a process behaves like the original one except at the point 0, which plays the role of a semipermeable barrier. More generally, a skew diffusion with several semipermeable barriers, called a multi-skew diffusion, behaves like the underlying diffusion everywhere except when it reaches one of the barriers, where it is partially reflected with a probability depending on that particular barrier. Clearly, a multi-skew diffusion can be characterized either as the solution of a stochastic differential equation involving weighted local times (these terms providing the semi-permeability) or by its infinitesimal generator as a Markov process.
In this thesis we first obtain a contour integral representation for the transition semigroup of the multi-skew Brownian motion with constant drift, based on a fine analysis of its complex properties. Thanks to this representation, we write explicitly the transition densities of the two-skew Brownian motion with constant drift as an infinite series involving, in particular, Gaussian functions and their tails.
Then we propose a new and useful application of a generalization of the well-known rejection sampling method. Recall that this basic algorithm allows one to sample from a density as soon as one finds an easy-to-sample instrumental density such that the ratio between the target and the instrumental densities is a bounded function. The generalized rejection sampling method allows one to sample exactly from densities for which only an approximation is known. The originality of the algorithm lies in the fact that one finally samples directly from the law without any approximation, except the machine's precision.
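For reference, the basic rejection sampling scheme that the generalized method extends looks as follows; the target and instrumental densities here are toy choices, and the bound M is specific to them:
```python
# Standard rejection sampling: propose from an instrumental density, accept with
# probability target(x) / (M * proposal(x)). Densities below are toy examples.
import numpy as np

rng = np.random.default_rng(1)

def target_density(x):
    # unnormalized target: a two-component Gaussian mixture
    return 0.6 * np.exp(-0.5 * (x - 1) ** 2) + 0.4 * np.exp(-0.5 * (x + 2) ** 2)

def proposal_sample():
    return rng.normal(0.0, 3.0)            # instrumental density: N(0, 3^2)

def proposal_density(x):
    return np.exp(-0.5 * (x / 3.0) ** 2) / (3.0 * np.sqrt(2 * np.pi))

M = 8.0  # chosen so that target_density(x) <= M * proposal_density(x) for all x

def rejection_sample():
    while True:
        x = proposal_sample()
        if rng.uniform() * M * proposal_density(x) <= target_density(x):
            return x

samples = np.array([rejection_sample() for _ in range(1000)])
print(samples.mean(), samples.std())
```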
As an application, we sample from the transition density of the two-skew Brownian motion with or without constant drift. The instrumental density is the transition density of the Brownian motion with constant drift, and we provide a useful uniform bound for the ratio of the densities. We also present numerical simulations to study the efficiency of the algorithm.
The second aim of this thesis is to develop an exact simulation algorithm for a Brownian diffusion whose drift admits several jumps. In the literature, so far only the case of a continuous drift (resp. of a drift with one finite jump) has been treated. The theoretical method we give allows us to deal with any finite number of discontinuities. We then focus on the case of two jumps, using the transition densities of the two-skew Brownian motion obtained before. Various examples are presented and the efficiency of our approach is discussed.
Climate change
(2016)
What is justice? What might just regulations look like for the catastrophes and suffering that climate change triggers or will trigger? These are often unjust because they frequently hit hardest those who have contributed least to climate change.
But what exactly do we understand by the term 'climate change'? And can it really affect human beings directly? A short scientific outline clarifies the most important questions here.
Since this is a philosophical work, it must first be clarified whether the human being can be the cause of something like global warming at all. Robert Spaemann's thesis is that, by virtue of free will, human beings can change the course of the world through their individual actions. Hans Jonas adds that, because of this capacity, we are responsible for the intended and unintended consequences of our actions.
This establishes, from a scientific perspective (Part 1 of the work) and from a philosophical perspective (beginning of Part 2), that human beings are very probably the cause of climate change and that this causation has moral consequences for them.
A philosophical concept of justice is developed from Kant's legal and moral philosophy, because it is the only one that can grant human beings a right to have rights at all. This right springs from the human being's transcendental capacity for freedom, which is why the right to have rights belongs to everyone absolutely and at all times. At the same time, Kant's philosophy in turn culminates in the idea of freedom, in that justice exists only if all human beings can be equally free.
What does this mean concretely? How could justice actually be realized in practice? Its realization takes two basic directions. John Rawls and Stefan Gosepath, among others, deal extensively with procedural justice, which means finding just procedures that regulate social coexistence. The guiding principle here is above all a right of co-determination for all, so that in principle all citizens give themselves their own laws and thereby act freely.
With regard to climate change, the second direction takes center stage: distributive justice. Material goods must be distributed in such a way that, despite empirical differences, all human beings are recognized as moral subjects and can be free.
But are these philosophical conclusions not far too abstract to be applied to a problem as elusive and global as climate change? What, then, could climate justice be?
There are many principles of justice that claim to offer a just basis for dealing with climate problems, such as the polluter-pays principle, the ability-to-pay principle, or the grandfathering principle, under which the main emitters may continue to emit the most (this principle has guided the international negotiations so far).
The aim of this work is to find out how climate problems can be solved in such a way that the universal human rights are established and secured for all people under all circumstances, and that all can act freely and morally.
The conclusion of this work is that Kant's concept of justice could be implemented through a combination of the right to subsistence emissions, the Greenhouse Development Rights principle (GDR principle), and an international statehood.
Under the right to subsistence emissions, every person has the right to consume as much energy, and to produce the emissions associated with it, as is needed to lead a life of human dignity. The GDR principle calculates each country's, or even each world citizen's, share of the total global responsibility for climate protection by adding the historical emissions (climate debt) to the current financial capacity of the country or individual (capacity to take responsibility). The implementation of international bodies is defended because climate change is a global, cross-border problem whose effects and whose responsibility are of global scale.
A compelling argument for almost all climate protection measures is that they show synergies with other areas of society, such as health and the fight against poverty, in which the enforcement of our human rights is also still being fought for.
Is this approach to a solution not completely utopian?
This proposal poses a great challenge to the international community, but it would be the only just solution to our climate problems. Furthermore, the Kantian principle of action is upheld according to which the perpetual striving toward ideal goals is the best way for human, fallible beings to realize them.
What was written and researched, with scholarly intent, about antisemitism in Germany during the first third of the twentieth century, and by whom? What approaches did early antisemitism research offer? These are the guiding questions of this study, which at the same time offers a piece of cultural and scholarly history. Beyond this, the book extends its account to the history of the defensive struggle against antisemitism up to 1933. The conclusion: already in the Weimar Republic there existed a deeper knowledge about antisemitism, which, however, could offer little perspective for the struggle against antisemitic and völkisch movements.
Quantitative thermodynamic and geochemical modeling is today applied in a variety of geological environments, from the petrogenesis of igneous rocks to the oceanic realm. Thermodynamic calculations are used, for example, to gain better insight into lithosphere dynamics, to constrain melting processes in crust and mantle, and to study fluid-rock interaction. The development of thermodynamic databases and computer programs to calculate equilibrium phase diagrams has greatly advanced our ability to model geodynamic processes from subduction to orogenesis. However, a well-known problem is that, despite its broad application, the use and interpretation of thermodynamic models applied to natural rocks is far from straightforward. For example, chemical disequilibrium and/or unknown rock properties, such as fluid activities, complicate the application of equilibrium thermodynamics.
One major aspect of the publications presented in this Habilitationsschrift is the development of new approaches to unravel the dynamic and chemical histories of rocks, including applications to chemically open system behaviour. This approach is especially important in rocks that are affected by element fractionation due to fractional crystallisation and fluid loss during dehydration reactions. Furthermore, chemically open system behaviour also has to be considered when studying fluid-rock interaction processes and when extracting information from compositionally zoned metamorphic minerals. In this Habilitationsschrift several publications are presented in which I incorporate such open system behaviour into the forward models by incrementing the calculations and considering the changing reacting rock composition during metamorphism. I apply thermodynamic forward modelling incorporating the effects of element fractionation in a variety of geodynamic and geochemical applications in order to better understand lithosphere dynamics and mass transfer in solid rocks.
In three of the presented publications I combine thermodynamic forward models with trace element calculations in order to enlarge the application of geochemical numerical forward modeling. In these publications a combination of thermodynamic and trace element forward modeling is used to study and quantify processes in metamorphic petrology at spatial scales from µm to km. In the thermodynamic forward models I utilize Gibbs energy minimization to quantify mineralogical changes along a reaction path of a chemically open fluid/rock system. These results are combined with mass balanced trace element calculations to determine the trace element distribution between rock and melt/fluid during the metamorphic evolution. Thus, effects of mineral reactions, fluid-rock interaction and element transport in metamorphic rocks on the trace element and isotopic composition of minerals, rocks and percolating fluids or melts can be predicted.
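A generic batch-style mass balance conveys the kind of bookkeeping involved in each calculation increment; the exact formulation used in the publications may differ:
```latex
% Generic batch mass balance for one calculation increment (illustrative only):
\begin{align}
  D_{\mathrm{bulk}} &= \sum_i X_i \, D_i^{\mathrm{mineral/fluid}}
      && \text{bulk partition coefficient of the solid assemblage} \\
  C_{\mathrm{fluid}} &= \frac{C_{\mathrm{bulk}}}{F + (1 - F)\, D_{\mathrm{bulk}}}
      && \text{trace element concentration in the fluid or melt} \\
  C_i^{\mathrm{mineral}} &= D_i^{\mathrm{mineral/fluid}} \, C_{\mathrm{fluid}}
      && \text{concentration in each stable mineral } i
\end{align}
```
Here the X_i are the mass fractions of the stable minerals (normalized within the solid), F is the mass fraction of fluid or melt, and C_bulk is the trace element content of the reacting bulk composition at that increment.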
One of the included publications shows that trace element growth zonations in metamorphic garnet porphyroblasts can be used to get crucial information about the reaction path of the investigated sample. In order to interpret the major and trace element distribution and zoning patterns in terms of the reaction history of the samples, we combined thermodynamic forward models with mass-balance rare earth element calculations. Such combined thermodynamic and mass-balance calculations of the rare earth element distribution among the modelled stable phases yielded characteristic zonation patterns in garnet that closely resemble those in the natural samples. We can show in that paper that garnet growth and trace element incorporation occurred in near thermodynamic equilibrium with matrix phases during subduction and that the rare earth element patterns in garnet exhibit distinct enrichment zones that fingerprint the minerals involved in the garnet-forming reactions.
In two of the presented publications I illustrate the capacities of combined thermodynamic-geochemical modeling based on examples relevant to mass transfer in subduction zones. The first example focuses on fluid-rock interaction in and around a blueschist-facies shear zone in felsic gneisses, where fluid-induced mineral reactions and their effects on boron (B) concentrations and isotopic compositions in white mica are modeled. In the second example, fluid release from a subducted slab and associated transport of B and variations in B concentrations and isotopic compositions in liberated fluids and residual rocks are modeled. I show that, combined with experimental data on elemental partitioning and isotopic fractionation, thermodynamic forward modeling unfolds enormous capacities that are far from exhausted.
In my publications presented in this Habilitationsschrift I compare the modeled results to geochemical data of natural minerals and rocks and demonstrate that the combination of thermodynamic and geochemical models enables quantification of metamorphic processes and insights into element cycling that would have been unattainable so far.
Thus, the contributions to the science community presented in this Habilitationsschrift concern the fields of petrology, geochemistry and geochronology, but also ore geology, all of which use thermodynamic and geochemical models to solve various problems related to geo-materials.
Meter and syntax have overlapping elements in the music and speech domains, and individual differences have been documented in both meter perception and syntactic comprehension paradigms. Previous evidence hinted at, but never fully explored, the relationship that metrical structure has to syntactic comprehension, the comparability of these processes across music and language domains, and the respective role of individual differences. This dissertation aimed to investigate neurocognitive entrainment to meter in music and language, the impact that neurocognitive entrainment had on syntactic comprehension, and whether individual differences in musical expertise, temporal perception and working memory played a role during these processes.
A theoretical framework was developed, which linked neural entrainment, cognitive entrainment, and syntactic comprehension while detailing previously documented effects of individual differences on meter perception and syntactic comprehension. The framework was developed in both music and language domains and was tested using behavioral and EEG methods across three studies (seven experiments). In order to satisfy empirical evaluation of neurocognitive entrainment and syntactic aspects of the framework, original melodies and sentences were composed. Each item had four permutations: regular and irregular metricality, based on the hierarchical organization of strong and weak notes and syllables, and preferred and non-preferred syntax, based on structurally alternate endings. The framework predicted — for both music and language domains — greater neurocognitive entrainment in regular compared to irregular metricality conditions, and accordingly, better syntactic integration in regular compared to irregular metricality conditions. Individual differences among participants were expected for both entrainment and syntactic processes.
Altogether, the dissertation was able to support a holistic account of neurocognitive entrainment to musical meter and its subsequent influence on syntactic integration of melodies, with musician participants. The theoretical predictions were not upheld in the language domain with musician participants, but initial behavioral evidence in combination with previous EEG evidence suggests that non-musician language EEG data might support the framework's predictions. Musicians' deviation from the hypothesized results in the language domain was suspected to reflect heightened perception of acoustic features stemming from musical training, which caused the current 'overly' regular stimuli to distract the cognitive system. The individual-differences approach was vindicated by the surfacing of two factor scores, Verbal Working Memory and Time and Pitch Discrimination, which in turn correlated with multiple experimental data across the three studies.
Ecosystems' exposure to climate change - Modeling as support for nature conservation management
(2016)
The population structure of the highly mobile marine mammal, the harbor porpoise (Phocoena phocoena), in the Atlantic shelf waters follows a pattern of significant isolation-by-distance. The population structure of harbor porpoises from the Baltic Sea, which is connected with the North Sea through a series of basins separated by shallow underwater ridges, however, is more complex. Here, we investigated the population differentiation of harbor porpoises in European Seas with a special focus on the Baltic Sea and adjacent waters, using a population genomics approach. We used 2872 single nucleotide polymorphisms (SNPs), derived from double digest restriction-site associated DNA sequencing (ddRAD-seq), as well as 13 microsatellite loci and mitochondrial haplotypes for the same set of individuals. Spatial principal components analysis (sPCA), and Bayesian clustering on a subset of SNPs suggest three main groupings at the level of all studied regions: the Black Sea, the North Atlantic, and the Baltic Sea. Furthermore, we observed a distinct separation of the North Sea harbor porpoises from the Baltic Sea populations, and identified splits between porpoise populations within the Baltic Sea. We observed a notable distinction between the Belt Sea and the Inner Baltic Sea sub-regions. Improved delineation of harbor porpoise population assignments for the Baltic based on genomic evidence is important for conservation management of this endangered cetacean in threatened habitats, particularly in the Baltic Sea proper. In addition, we show that SNPs outperform microsatellite markers and demonstrate the utility of RAD-tags from a relatively small, opportunistically sampled cetacean sample set for population diversity and divergence analysis.
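As a rough illustration of how such genotype matrices are reduced to a few axes of variation, here is a plain PCA sketch on synthetic 0/1/2-coded SNP data; it is not the spatial PCA (sPCA) or Bayesian clustering actually used in the study, and all dimensions are made up:
```python
# Plain PCA on an individuals x SNPs genotype matrix via SVD (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
genotypes = rng.integers(0, 3, size=(40, 500)).astype(float)  # 40 individuals, 500 SNPs

# center each SNP, then take the leading principal components
centered = genotypes - genotypes.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pcs = u[:, :2] * s[:2]          # coordinates of each individual on PC1 and PC2

explained = (s ** 2) / np.sum(s ** 2)
print("variance explained by PC1, PC2:", explained[:2])
```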
The title compounds, [(1R,3R,4R,5R,6S)-4,5-bis(acetyloxy)-7-oxo-2-oxabicyclo[4.2.0]octan-3-yl]methyl acetate, C14H18O8, (I), [(1S,4R,5S,6R)-5-acetyloxy-7-hydroxyimino-2-oxobicyclo[4.2.0]octan-4-yl acetate, C11H15NO6, (II), and [(3aR,5R,6R,7R,7aS)-6,7-bis(acetyloxy)-2-oxooctahydropyrano[3,2-b]pyrrol-5-yl]methyl acetate, C14H19NO8, (III), are stable bicyclic carbohydrate derivatives. They can easily be synthesized in a few steps from commercially available glycals. As a result of the ring strain from the four-membered rings in (I) and (II), the conformations of the carbohydrates deviate strongly from the ideal chair form. Compound (II) occurs in the boat form. In the five-membered lactam (III), on the other hand, the carbohydrate adopts an almost ideal chair conformation. As a result of the distortion of the sugar rings, the configurations of the three bicyclic carbohydrate derivatives could not be determined from their NMR coupling constants. From our three crystal structure determinations, we were able to establish for the first time the absolute configurations of all new stereocenters of the carbohydrate rings.
In experiments investigating sentence processing, eye movement measures such as fixation durations and regression proportions while reading are commonly used to draw conclusions about processing difficulties. However, these measures are the result of an interaction of multiple cognitive levels and processing strategies and thus are only indirect indicators of processing difficulty. In order to properly interpret an eye movement response, one has to understand the underlying principles of adaptive processing such as trade-off mechanisms between reading speed and depth of comprehension that interact with task demands and individual differences. Therefore, it is necessary to establish explicit models of the respective mechanisms as well as their causal relationship with observable behavior. There are models of lexical processing and eye movement control on the one side and models on sentence parsing and memory processes on the other. However, no model so far combines both sides with explicitly defined linking assumptions.
In this thesis, a model is developed that integrates oculomotor control with a parsing mechanism and a theory of cue-based memory retrieval. On the basis of previous empirical findings and independently motivated principles, adaptive, resource-preserving mechanisms of underspecification are proposed both on the level of memory access and on the level of syntactic parsing. The thesis first investigates the model of cue-based retrieval in sentence comprehension of Lewis & Vasishth (2005) with a comprehensive literature review and computational modeling of retrieval interference in dependency processing. The results reveal a great variability in the data that is not explained by the theory. Therefore, two principles, 'distractor prominence' and 'cue confusion', are proposed as an extension to the theory, thus providing a more adequate description of systematic variance in empirical results as a consequence of experimental design, linguistic environment, and individual differences. In the remainder of the thesis, four interfaces between parsing and eye movement control are defined: Time Out, Reanalysis, Underspecification, and Subvocalization. By comparing computationally derived predictions with experimental results from the literature, it is investigated to what extent these four interfaces constitute an appropriate elementary set of assumptions for explaining specific eye movement patterns during sentence processing. Through simulations, it is shown how this system of in itself simple assumptions results in predictions of complex, adaptive behavior.
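For orientation, the core activation equations of the ACT-R retrieval machinery on which Lewis & Vasishth (2005) build are reproduced below in simplified notation; the specific parameterization used in this thesis may differ:
```latex
% ACT-R style cue-based retrieval (simplified notation):
\begin{align}
  A_i &= B_i + \sum_j W_j \, S_{ji} + \varepsilon
      && \text{total activation of memory chunk } i \\
  B_i &= \ln \sum_k t_k^{-d}
      && \text{base-level activation from past retrievals at lags } t_k \\
  T_i &= F e^{-A_i}
      && \text{predicted retrieval latency}
\end{align}
```
Here W_j weights each retrieval cue, S_{ji} is the associative strength between cue j and chunk i, d is the decay rate, and F scales the latency.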
In conclusion, it is argued that, on all levels, the sentence comprehension mechanism seeks a balance between necessary processing effort and reading speed on the basis of experience, task demands, and resource limitations. Theories of linguistic processing therefore need to be explicitly defined and implemented, in particular with respect to linking assumptions between observable behavior and underlying cognitive processes. The comprehensive model developed here integrates multiple levels of sentence processing that hitherto have only been studied in isolation. The model is made publicly available as an expandable framework for future studies of the interactions between parsing, memory access, and eye movement control.
Personal fabrication tools, such as 3D printers, are on the way to enabling a future in which non-technical users will be able to create custom objects. However, while the hardware is there, the current interaction model behind existing design tools is not suitable for non-technical users. Today, 3D printers are operated by fabricating the object in one go, which tends to require an overnight run due to the slow 3D printing technology. Consequently, the current interaction model requires users to think carefully before printing, as every mistake may imply another overnight print. Planning every step ahead, however, is not feasible for non-technical users as they lack the experience to reason about the consequences of their design decisions.
In this dissertation, we propose changing the interaction model around personal fabrication tools to better serve this user group. We draw inspiration from personal computing and argue that the evolution of personal fabrication may resemble the evolution of personal computing: Computing started with machines that executed a program in one go before returning the result to the user. By decreasing the interaction unit to single requests, turn-taking systems such as the command line evolved, which provided users with feedback after every input. Finally, with the introduction of direct-manipulation interfaces, users continuously interacted with a program receiving feedback about every action in real-time. In this dissertation, we explore whether these interaction concepts can be applied to personal fabrication as well.
We start with fabricating an object in one go and investigate how to tighten the feedback cycle on the object level: We contribute a method called low-fidelity fabrication, which saves up to 90% of fabrication time by creating objects as fast low-fidelity previews, which are sufficient to evaluate key design aspects. Depending on what is currently being tested, we propose different conversions that enable users to focus on different parts: faBrickator allows for a modular design in the early stages of prototyping; when users move on, WirePrint allows quickly testing an object's shape, while Platener allows testing an object's technical function. We present an interactive editor for each technique and explain the underlying conversion algorithms.
By interacting on smaller units, such as a single element of an object, we explore what it means to transition from systems that fabricate objects in one go to turn-taking systems. We start with a 2D system called constructable: Users draw with a laser pointer onto the workpiece inside a laser cutter. The drawing is captured with an overhead camera. As soon as the user finishes drawing an element, such as a line, the constructable system beautifies the path and cuts it, resulting in physical output after every editing step. We extend constructable towards 3D editing by developing a novel laser-cutting technique for 3D objects called LaserOrigami that works by heating up the workpiece with the defocused laser until the material becomes compliant and bends down under gravity. While constructable and LaserOrigami allow for fast physical feedback, the interaction is still best described as turn-taking since it consists of two discrete steps: users first create an input and afterwards the system provides physical output.
By decreasing the interaction unit even further to a single feature, we can achieve real-time physical feedback: Input by the user and output by the fabrication device are so tightly coupled that no visible lag exists. This allows us to explore what it means to transition from turn-taking interfaces, which only allow exploring one option at a time, to direct manipulation interfaces with real-time physical feedback, which allow users to explore the entire space of options continuously with a single interaction. We present a system called FormFab, which allows for such direct control. FormFab is based on the same principle as LaserOrigami: It uses a workpiece that when warmed up becomes compliant and can be reshaped. However, FormFab achieves the reshaping not based on gravity, but through a pneumatic system that users can control interactively. As users interact, they see the shape change in real-time.
We conclude this dissertation by extrapolating the current evolution into a future in which large numbers of people use the new technology to create objects. We see two additional challenges on the horizon: sustainability and intellectual property. We investigate sustainability by demonstrating how to print less and instead patch physical objects. We explore questions around intellectual property with a system called Scotty that transfers objects without creating duplicates, thereby preserving the designer's copyright.
Since 1998, elite athletes' sport injuries have been monitored at single-sport events, which led to the development of the first comprehensive injury surveillance system for the multi-sport Olympic Games in 2008. However, injuries and illnesses occurring during training phases have not been systematically studied because of their multi-faceted, potentially interacting risk factors. The present thesis addresses the feasibility of establishing a validated measure of injury/illness, training environment and psychosocial risk factors by creating an evaluation tool, the Risk of Injury Questionnaire (Risk-IQ), for elite athletes, based on the preparticipation evaluation (PPE) and periodic health examination (PHE) content recommended in the 2009 IOC consensus statement.
A total of 335 top-level athletes and 88 medical care providers (MCPs) from Germany and Taiwan participated in two "cross-sectional plus longitudinal" surveys, the Risk-IQ and the MCPQ, respectively. The Risk-IQ asked athletes four categories of injury/illness-related risk factor questions, while the MCPQ asked the MCP cohorts about injury risk and related psychological issues. Answers were quantified scale-wise and subscale-wise before being analyzed together with other factors/scales. In addition, adapted variables such as sport format were introduced for different analysis tasks.
Validated by two-way translation and test-retest reliability, the Risk-IQ proved to be of good standard, which was further confirmed by the results of the official surveys in both Germany and Taiwan. The Risk-IQ results revealed that elite athletes' accumulated total injuries were, in general, multi-factor dependent; influencing factors included, but were not limited to, background experience, medical history, PHE and PPE medical resources, and stress from life events. Injuries to different body parts were specific to sport format and location. Additionally, medical support for PPE and PHE differed significantly between Germany and Taiwan.
The results of the present thesis confirm that it is feasible to construct a comprehensive evaluation instrument for analyzing risk factors for injury/illness occurring during non-competition periods in heterogeneous cohorts of elite athletes. On average, and with many moderators involved, German elite athletes had superior medical care support yet suffered more severe injuries than their Taiwanese counterparts. Opinions on injury-related psychological issues differed across the various MCP groups irrespective of nationality. In general, influencing factors and interactions among relevant factors existed in both studies, which implies that further investigation with multiple regression analysis is needed for better understanding.
Knowledge of the local structure of rare earth elements (REE) in silicate and aluminosilicate melts is of fundamental interest for the geochemistry of magmatic processes, especially for a comprehensive understanding of REE partitioning processes in magmatic systems. It is generally accepted that REE partitioning is controlled by temperature, pressure, oxygen fugacity (in the case of polyvalent cations) and crystal chemistry. However, little is known about the influence of the melt composition itself. The aim of this work is to establish a relationship between the variation of REE partitioning with melt composition and the coordination chemistry of these REE in the melt.
For this purpose, melt compositions from Prowatke and Klemme (2005), which show a pronounced change in the partition coefficients between titanite and melt solely as a function of melt composition, as well as haplogranitic and haplobasaltic melt compositions as representatives of magmatic systems, were doped with La, Gd, Yb and Y and synthesized as glasses. The melts varied systematically in the aluminum saturation index (ASI), which covers a range of 0.115 to 0.768 for the Prowatke and Klemme (2005) compositions, 0.935 to 1.785 for the haplogranitic compositions and 0.368 to 1.010 for the haplobasaltic compositions. In addition, the haplogranitic compositions were synthesized with 4% H2O in order to study the influence of water on the local environment of the REE. X-ray absorption spectroscopy was used to obtain information on the local structure of Gd, Yb and Y. Analysis of the fine structure by EXAFS spectroscopy (extended X-ray absorption fine structure) provides quantitative information on the local environment, while RIXS (resonant inelastic X-ray scattering), together with the high-resolution near-edge structure (XANES, X-ray absorption near edge structure) extracted from it, provides qualitative information on possible coordination changes of La, Gd and Yb in the glasses. To investigate possible differences in the local structure above the glass transition temperature (TG) compared to room temperature, high-temperature Y-EXAFS measurements were carried out on selected samples.
For the evaluation of the EXAFS measurements, a newly introduced histogram fit was used, which can also describe non-symmetric, non-Gaussian pair distribution functions, as they may occur at a high degree of polymerization or at high temperatures. With increasing ASI, the Y-EXAFS spectra for the Prowatke and Klemme (2005) compositions show an increase in the asymmetry and width of the Y-O pair distribution function, which manifests itself in a change of the coordination number from 6 to 8 and an increase of the Y-O distance by 0.13 Å. A similar trend can also be observed in the Gd and Yb EXAFS spectra. The high-resolution XANES spectra for La, Gd and Yb show that the structural differences can be determined at least semi-quantitatively, in particular changes in the mean distance to the oxygen atoms. In contrast to EXAFS spectroscopy, however, XANES provides no information on the shape and width of the pair distribution functions. The high-temperature EXAFS measurements of Y indicate changes in the local structure above the glass transition temperature, which can primarily be attributed to a thermally induced increase of the mean Y-O distance. However, a comparison of the Y-O distances for compositions with an ASI of 0.115 and 0.755, determined at room temperature and at TG, shows that the structural difference observed in the glasses along the compositional series may be even more pronounced in the melt than previously assumed for the glasses.
The direct correlation of the partitioning data of Prowatke and Klemme (2005) with the structural changes of the melts reveals a linear correlation for Y, whereas Yb and Gd show a non-linear relationship. Owing to its ionic radius and its charge, the six-fold coordinated REE is preferentially coordinated by non-bridging oxygen atoms in the less polymerized melts in order to form stable configurations. In the more highly polymerized melts with ASI values close to 1, six-fold coordination is not possible, since almost only bridging oxygen atoms are available. The overbonding of bridging oxygen atoms around the REE is compensated by an increase of the coordination number and of the mean REE-O distance. This implies an energetically more favorable configuration in the more strongly depolymerized compositions, from which the observed variation of the partition coefficient results; this variation, however, differs strongly from element to element. For the haplogranitic and haplobasaltic compositions, an increase of the coordination number and of the average bond distance, accompanied by an increase in the skewness and asymmetry of the pair distribution function, was also observed with increasing polymerization. This implies that the respective REE also becomes more incompatible in these compositions with increasing polymerization. Furthermore, the addition of water depolymerizes the melts, resulting in a more symmetric pair distribution function, whereby the compatibility increases again.
In summary, the changes in melt composition result in a change in the polymerization of the melts, which then has a significant influence on the local environment of the REE. The structural changes can be correlated directly with partitioning data, but the trends differ strongly between light, middle and heavy REE. This study was able to show the order of magnitude the changes must reach in order to have a significant influence on the partition coefficient. Furthermore, it is shown that the influence of the melt composition on trace element partitioning increases with increasing polymerization and therefore must not be neglected.
In order to evade detection by network-traffic analysis, a growing proportion of malware uses the encrypted HTTPS protocol. We explore the problem of detecting malware on client computers based on HTTPS traffic analysis. In this setting, malware has to be detected based on the host IP address, ports, timestamp, and data volume information of the TCP/IP packets that are sent and received by all the applications on the client. We develop a scalable protocol that allows us to collect network flows of known malicious and benign applications as training data, and we derive a malware-detection method based on neural networks and sequence classification. We study the method's ability to detect known and new, unknown malware in a large-scale empirical study.
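A toy sketch of the general approach (flow summaries per client fed to a small neural network); the synthetic data, feature choices and classifier below are illustrative assumptions, not the method developed in the thesis:
```python
# Summarize each client's flows into a fixed-length feature vector and train a
# small neural network to separate malicious from benign clients (synthetic data).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def flow_features(flows):
    """flows: array of (duration, bytes_sent, bytes_received) per TCP flow."""
    f = np.asarray(flows, float)
    return np.concatenate([f.mean(axis=0), f.std(axis=0), [len(f)]])

# synthetic training data: benign clients send fewer, smaller flows
benign  = [rng.normal([1.0, 2e3, 5e3], [0.5, 500, 1000], size=(20, 3)) for _ in range(50)]
malware = [rng.normal([0.2, 8e3, 1e3], [0.1, 2000, 300], size=(60, 3)) for _ in range(50)]
X = np.array([flow_features(c) for c in benign + malware])
y = np.array([0] * 50 + [1] * 50)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```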
The ever-increasing fat content of the Western diet, combined with decreased levels of physical activity, greatly increases the incidence of metabolism-related diseases. Cancer cachexia (CC) and the metabolic syndrome (MetS) are both multifactorial, highly complex metabolism-related syndromes whose etiology is not fully understood, as the mechanisms underlying their development have not been completely unveiled. Nevertheless, despite being considered "opposite sides", MetS and CC share several common features such as insulin resistance and low-grade inflammation. In these scenarios, tissue macrophages act as key players because of their capacity to produce and release inflammatory mediators. One of the main features of MetS is hyperinsulinemia, which is generally associated with an attempt of the β-cell to compensate for diminished insulin sensitivity (insulin resistance). There is growing evidence that hyperinsulinemia per se may contribute to the development of insulin resistance through the establishment of low-grade inflammation in insulin-responsive tissues, especially in the liver (as insulin is secreted by the pancreas into the portal circulation). The hypothesis of the present study was that insulin may itself provoke an inflammatory response culminating in diminished hepatic insulin sensitivity. To address this premise, macrophages differentiated from the human cell line U937 were first exposed to insulin, LPS and PGE2. In these cells, insulin significantly augmented the gene expression of the pro-inflammatory mediators IL-1β, IL-8, CCL2, oncostatin M (OSM) and microsomal prostaglandin E2 synthase (mPGES1), and of the anti-inflammatory mediator IL-10. Moreover, the synergism between insulin and LPS enhanced the LPS-provoked induction of the IL-1β, IL-8, IL-6, CCL2 and TNF-α genes. When combined with PGE2, insulin enhanced the PGE2-provoked induction of IL-1β, mPGES1 and COX2, and attenuated the PGE2-induced inhibition of CCL2 and TNF-α gene expression, contributing to an enhanced inflammatory response by both mechanisms. Supernatants of insulin-treated U937 macrophages reduced the insulin-dependent induction of glucokinase in hepatocytes by 50%. Cytokines contained in the supernatant of insulin-treated U937 macrophages also activated ERK1/2 in hepatocytes, resulting in inhibitory serine phosphorylation of the insulin receptor substrate. Additionally, the transcription factor STAT3 was activated by phosphorylation, resulting in the induction of SOCS3, which is capable of interrupting the insulin receptor signal chain. MicroRNAs, non-coding RNAs involved in the regulation of protein expression and nowadays recognized as active players in the generation of several inflammatory disorders such as cancer and type II diabetes, are also of interest. Considering that cancer cachexia patients are highly affected by insulin resistance and inflammation, control, non-cachectic and cachectic cancer patients were selected, and their circulating levels of pro-inflammatory mediators and of microRNA-21-5p, a post-transcriptional regulator of STAT3 expression, were assessed and correlated. Circulating IL-6 and IL-8 levels of cachectic patients were significantly higher than those of non-cachectic patients and controls, and their expression of microRNA-21-5p was significantly lower. Additionally, reduced microRNA-21-5p expression correlated negatively with IL-6 plasma levels. These results indicate that hyperinsulinemia per se might contribute to the low-grade inflammation prevailing in MetS patients and thereby promote the development of insulin resistance, particularly in the liver. Diminished microRNA-21-5p expression may enhance inflammation and STAT3 expression in cachectic patients, contributing to the development of insulin resistance.
Understanding the role of natural climate variability under the pressure of human-induced changes of climate and landscapes is crucial to improve future projections and adaptation strategies. This doctoral thesis aims to reconstruct Holocene climate and environmental changes in NE Germany based on annually laminated lake sediments. The work contributes to the ICLEA project (Integrated CLimate and Landscape Evolution Analyses). ICLEA intends to compare multiple high-resolution proxy records with independent chronologies from the N central European lowlands, in order to disentangle the impact of climate change and human land use on landscape development during the Lateglacial and Holocene. In this respect, two study sites in NE Germany are investigated in this doctoral project: Lake Tiefer See and palaeolake Wukenfurche. While both sediment records are studied with a combination of high-resolution sediment microfacies and geochemical analyses (e.g. µ-XRF, carbon geochemistry and stable isotopes), detailed proxy understanding mainly focuses on the continuous 7.7-m-long sediment core from Lake Tiefer See covering the last ~6000 years. Three main objectives are pursued at Lake Tiefer See: (1) to establish a reliable and independent chronology, (2) to establish microfacies and geochemical proxies as indicators for climate and environmental changes, and (3) to trace the effects of climate variability and human activity on sediment deposition.
Addressing the first aim, a reliable chronology of Lake Tiefer See is compiled by using a multiple-dating concept. Varve counting and tephra findings form the chronological framework for the last ~6000 years. The good agreement with independent radiocarbon dates of terrestrial plant remains verifies the robustness of the age model. The resulting reliable and independent chronology of Lake Tiefer See and, additionally, the identification of nine tephras provide a valuable base for detailed comparison and synchronization of the Lake Tiefer See data set with other climate records. The sediment profile of Lake Tiefer See exhibits striking alternations between well-varved and non-varved sediment intervals. The combination of microfacies, geochemical and microfossil (i.e. Cladocera and diatom) analyses indicates that these changes of varve preservation are caused by variations of lake circulation in Lake Tiefer See. An exception is the well-varved sediment deposited since AD 1924, which is mainly influenced by human-induced lake eutrophication. Well-varved intervals before the 20th century are considered to reflect phases of reduced lake circulation and, consequently, stronger anoxic conditions. Non-varved intervals, in turn, indicate increased lake circulation in Lake Tiefer See, leading to more oxygenated conditions at the lake bottom. Furthermore, lake circulation influences not only sediment deposition, but also geochemical processes in the lake. For example, the proxy meaning of δ13COM varies over time in response to changes of the oxygen regime in the lake hypolimnion. During reduced lake circulation and stronger anoxic conditions, δ13COM is influenced by microbial carbon cycling. In contrast, organic matter degradation controls δ13COM during phases of intensified lake circulation and more oxygenated conditions. The varve preservation indicates an increasing trend of lake circulation at Lake Tiefer See after ~4000 cal a BP. This trend is superimposed by decadal- to centennial-scale variability of lake circulation intensity. Comparison to other records in Central Europe suggests that the long-term trend is probably related to gradual changes in Northern Hemisphere orbital forcing, which induced colder and windier conditions in Central Europe and, therefore, reinforced lake circulation. Decadal- to centennial-scale periods of increased lake circulation coincide with settlement phases at Lake Tiefer See, as inferred from pollen data of the same sediment record. Deforestation reduced the wind shelter of the lake, which probably increased the sensitivity of lake circulation to wind stress. However, results of this thesis also suggest that several of these phases of increased lake circulation were additionally reinforced by climate changes. A first indication is provided by the comparison to the Baltic Sea record, which shows striking correspondence between major non-varved intervals at Lake Tiefer See and bioturbated sediments in the Baltic Sea. Furthermore, a preliminary comparison to the ICLEA study site Lake Czechowskie (N central Poland) shows a coincidence of at least three phases of increased lake circulation in both lakes, which concur with periods of known climate changes (2.8 ka event, ’Migration Period’ and ’Little Ice Age’). These results suggest that supra-regional climate forcing additionally contributed to short-term increases of lake circulation in Lake Tiefer See.
In summary, the results of this thesis suggest that lake circulation at Lake Tiefer See is driven by a combination of long-term and short-term climate changes as well as by anthropogenic deforestation phases. Furthermore, lake circulation drives geochemical cycles in the lake, affecting the meaning of proxy data. The work presented here therefore expands the knowledge of climate and environmental variability in NE Germany. Moreover, the integration of the Lake Tiefer See multi-proxy record in a regional comparison with another ICLEA site, Lake Czechowskie, made it possible to better decipher climate changes and human impact on the lake system. These first results suggest considerable potential for further detailed regional comparisons to better understand palaeoclimate dynamics in N central Europe.
Editorial (Dr. Roswitha Lohwaßer) ; Schon im Studium von gelingenden Schulen lernen (Jannis Andresen, Jakob Erichsen) ; Wie ein Funke überspringt (Laura Zrenner) ; Lerneffekte unter der Lupe (Ariane Faulian) ; Die Lernreise als Ultima Ratio? (Leroy Großmann) ; Die Lernreise als Schulperspektive (Laura Zrenner) ; Teamgeist gefragt (Cornelia Brückner) ; Wunschberuf Lehrerin (Robin Miska) ; Gelebte Integration (Cornelia Brückner) ; Souverän Führen im Unterricht (Dr. Helga Breuninger, Marina Rottig, Prof. Dr. Wilfried Schley) ; Was lernen Lehramtsstudierende durch ein videogestütztes Klassenmanagement-Training? (Dr. Janine Neuhaus, Mirko Wendland)
In the debate on how to govern sustainable development, a central question concerns the interaction between knowledge about sustainability and policy developments. Discourses on what constitutes sustainable development conflict on some of the most basic issues, including the proper definitions, instruments and indicators of what should be ‘developed’ or ‘sustained’. Whereas earlier research on the role of (scientific) knowledge in policy adopted a rationalist-positivist view of knowledge as the basis for ‘evidence-based policy making’, recent literature on knowledge creation and transfer processes has instead pointed towards aspects of knowledge-policy ‘co-production’ (Jasanoff 2004). It is highlighted that knowledge utilisation is not just a matter of the quality of the knowledge as such, but a question of which knowledge fits with the institutional context and dominant power structures. Just as knowledge supports and justifies certain policy, policy can produce and stabilise certain knowledge. Moreover, rather than viewing knowledge-policy interaction as a linear and uni-directional model, this conceptualization is based on an assumption of the policy process as being more anarchic and unpredictable, something Cohen, March and Olsen (1972) have famously termed the ‘garbage-can model’.
The present dissertation focuses on the interplay between knowledge and policy in sustainability governance. It takes stock of the practice of ‘Management by Objectives and Results’ (MBOR; Lundqvist 2004), whereby policy actors define sustainable development goals (based on certain knowledge) and are expected to let these definitions guide policy developments as well as to evaluate whether sustainability improves or not. As such a knowledge-policy instrument, Sustainability Indicators (SI:s) help both (subjectively) construct ‘social meaning’ about sustainability and (objectively) influence policy and measure its success. The different articles in this cumulative dissertation analyse the development, implementation and policy support (personal and institutional) of Sustainability Indicators as an instrument for MBOR in a variety of settings. More specifically, the articles centre on the question of how sustainability definitions and measurement tools on the one hand (knowledge), and policy instruments and political power structures on the other, are co-produced.
A first article examines the normative foundations of popular international SI:s and country rankings. Combining theoretical (constructivist) analysis with factor analysis, it analyses how the input variable structure of SI:s is related to different sustainability paradigms, producing different outputs in terms of which countries (developed versus developing) are most highly ranked. Such a theoretical input-output analysis points towards a potential problem of SI:s becoming ‘circular argumentation constructs’. The article thus highlights on a quantitative basis what others have noted qualitatively – that different definitions and interpretations of sustainability influence indicator output to the point of contradiction. The normative aspects of SI:s thereby concern not merely the question of which indicators to use for what purposes, but also the more fundamental question of how normative and political bias is intrinsically part of the measurement instrument as such. The study argues that, although no indicator can be expected to tell the sustainability ‘truth-out-there’, a theoretical localization of indicators – and of the input variable structure – may help facilitate interpretation of SI output and the choice of which indicators to use for what (policy or academic) purpose.
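A hedged sketch of the kind of input-variable analysis alluded to above: extracting a few latent factors from a matrix of countries by indicator input variables and inspecting their loadings. The data, the number of factors and the variable set below are purely hypothetical.

import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 12))             # 150 countries, 12 indicator input variables (placeholder data)
X_std = StandardScaler().fit_transform(X)  # standardize variables before factor extraction

fa = FactorAnalysis(n_components=3, random_state=0).fit(X_std)
loadings = fa.components_                  # (3 factors x 12 variables): which variables load on which factor
scores = fa.transform(X_std)               # per-country factor scores, e.g. usable for ranking
print(loadings.shape, scores.shape)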
A second article examines the co-production of knowledge and policy in German sustainability governance. It focuses on the German sustainability strategy ‘Perspektiven für Deutschland’ (2002), a strategy that stands out both in an international comparison of national sustainability strategies and among German government policy strategies because of its relative stability over five consecutive government constellations, its rather high status and its increasingly coercive nature. The study analyses what impact the sustainability strategy has had on the policy process between 2002 and 2015, in terms of defining problems and shaping policy processes. Contrasting rationalist and constructivist perspectives on the role of knowledge in policy, two factors, namely the level of (scientific and political) consensus about policy goals and the ‘contextual fit’ of problem definitions, are found to explain how different aspects of the strategy are used. Moreover, the study argues that SI:s are part of a continuous process of ‘structuring’ in which indicator, user and context factors together help structure the sustainability challenge in such a way that it becomes more manageable for government policy.
A third article examines how 31 European countries have built supportive institutions of MBOR between 1992 and 2012. In particular during the 1990s and early 2000s, much hope was put into the institutionalisation of Environmental Policy Integration (EPI) as a way to overcome sectoral thinking in sustainability policy making and to integrate issues of environmental sustainability into all government policy. However, despite high political backing (UN, EU, OECD), implementation of EPI seems to differ widely among countries. The study is a quantitative longitudinal cross-country comparison of how countries’ ‘EPI architectures’ have developed over time. Moreover, it asks which ‘EPI architectures’ seem to be more effective in producing more ‘stringent’ sustainability policy.
Intermontane valley fills
(2016)
Sedimentary valley fills are a widespread characteristic of mountain belts around the world. They transiently store material over time spans ranging from thousands to millions of years and therefore play an important role in modulating the sediment flux from the orogen to the foreland and to oceanic depocenters. In most cases, their formation can be attributed to specific fluvial conditions, which are closely related to climatic and tectonic processes. Hence, valley-fill deposits constitute valuable archives that offer fundamental insight into landscape evolution, and their study may help to assess the impact of future climate change on sediment dynamics.
In this thesis I analyzed intermontane valley-fill deposits to constrain different aspects of the climatic and tectonic history of mountain belts over multiple timescales. First, I developed a method to estimate the thickness distribution of valley fills using artificial neural networks (ANNs). Based on the assumption of geometrical similarity between exposed and buried parts of the landscape, this novel and highly automated technique allows reconstructing fill thickness and bedrock topography on the scale of catchments to entire mountain belts.
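A minimal sketch of the general idea follows, assuming hypothetical topographic predictors (distance to the nearest bedrock outcrop, local slope, elevation) and a small regression network; it is not the thesis's actual implementation, feature set or training scheme.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# training cells: topographic attributes where depth to bedrock is known (synthetic placeholder data)
X_train = rng.normal(size=(5000, 3))                       # [distance_to_outcrop, slope, elevation]
y_train = np.abs(X_train[:, 0]) * 40 + rng.normal(scale=5, size=5000)  # synthetic fill thickness (m)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1))
model.fit(X_train, y_train)

X_valley = rng.normal(size=(10, 3))                        # cells located on the valley fill
print(model.predict(X_valley))                             # estimated fill thickness per cell (m)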
Second, I used the new method to estimate the spatial distribution of post-glacial sediments stored across the entire European Alps. A comparison with data from exploratory drillings and from geophysical surveys revealed that the model reproduces the measurements with a root mean squared error (RMSE) of 70 m and a coefficient of determination (R2) of 0.81. I used the derived sediment thickness estimates in combination with a model of the Last Glacial Maximum (LGM) ice cap to infer the lithospheric response to deglaciation, erosion and deposition, and to deduce their relative contributions to the present-day rock-uplift rate. For a range of different lithospheric and upper-mantle material properties, the results suggest that the long-wavelength uplift signal can be explained by glacial isostatic adjustment with a small erosional contribution and a substantial but localized tectonic component exceeding 50% in parts of the Eastern Alps and in the Swiss Rhône Valley. Furthermore, this study reveals the particular importance of deconvolving the potential components of rock uplift when interpreting recent movements along active orogens, and how this can be used to constrain physical properties of the Earth’s interior.
In a third study, I used the ANN approach to estimate the sediment thickness of alluviated reaches of the Yarlung Tsangpo River, upstream of the rapidly uplifting Namche Barwa massif. This allowed my colleagues and me to reconstruct the ancient river profile of the Yarlung Tsangpo and to show that, in the past, the river had already been deeply incised into the eastern margin of the Tibetan Plateau. Dating of basal sediments from drill cores that reached the paleo-river bed to 2–2.5 Ma is consistent with mineral cooling ages from the Namche Barwa massif, which indicate initiation of rapid uplift at ~4 Ma. Hence, formation of the Tsangpo gorge and aggradation of the voluminous valley fill were most probably a consequence of rapid uplift of the Namche Barwa massif and thus of tectonic activity.
The fourth and last study focuses on the interaction of fluvial and glacial processes at the southeastern edge of the Karakoram. Paleo-ice-extent indicators and remnants of a more than 400-m-thick fluvio-lacustrine valley fill point to blockage of the Shyok River, a main tributary of the upper Indus, by the Siachen Glacier, which is the largest glacier in the Karakoram Range. Field observations and 10Be exposure dating attest to a period of recurring lake formation and outburst flooding during the penultimate glaciation prior to ~110 ka. The interaction of rivers and glaciers all along the Karakoram is considered a key factor in landscape evolution and presumably promoted headward erosion of the Indus-Shyok drainage system into the western margin of the Tibetan Plateau.
The results of this thesis highlight the strong influence of glaciation and tectonics on valley-fill formation and how this has affected the evolution of different mountain belts. In the Alps, valley-fill deposition has influenced the magnitude and pattern of rock uplift since ice retreat approximately 17,000 years ago. Conversely, the analyzed valley fills in the Himalaya are much older and reflect environmental conditions that prevailed at ~110 ka and ~2.5 Ma, respectively. Thus, the newly developed method has proven useful for inferring the role of sedimentary valley-fill deposits in landscape evolution on timescales ranging from 1,000 to 10,000,000 years.
Computer security deals with the detection and mitigation of threats to computer networks, data, and computing hardware. This thesis addresses the following two computer security problems: email spam campaign detection and malware detection.
Email spam campaigns can easily be generated using popular dissemination tools by specifying simple grammars that serve as message templates. A grammar is disseminated to the nodes of a botnet; the nodes create messages by instantiating the grammar at random. Email spam campaigns can encompass huge data volumes and therefore pose a threat to the stability of the infrastructure of email service providers that have to store them. Malware, software that serves a malicious purpose, affects web servers, client computers via active content, and client computers through executable files. Without the help of malware detection systems it would be easy for malware creators to collect sensitive information or to infiltrate computers.
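A toy illustration of the template-grammar instantiation mentioned above; the grammar, tokens and URLs are invented for demonstration and are not taken from any real campaign.

import random

grammar = {
    "<msg>":    ["<greet> <offer> <link>"],
    "<greet>":  ["Dear customer,", "Hello friend,", "Hi,"],
    "<offer>":  ["you won <amount>!", "claim your <amount> reward now:"],
    "<amount>": ["$500", "$1000", "a prize"],
    "<link>":   ["http://example.test/a", "http://example.test/b"],
}

def instantiate(symbol="<msg>"):
    """Recursively expand non-terminals by picking random productions."""
    if symbol not in grammar:
        return symbol
    production = random.choice(grammar[symbol])
    return " ".join(instantiate(token) for token in production.split())

for _ in range(3):
    print(instantiate())   # three syntactically varied messages from one campaign template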
The detection of threats such as email-spam messages, phishing messages, or malware is an adversarial and therefore intrinsically difficult problem. Threats vary greatly and evolve over time. The detection of threats based on manually designed rules is therefore difficult and requires a constant engineering effort. Machine learning is a research area that revolves around the analysis of data and the discovery of patterns that describe aspects of the data. Discriminative learning methods extract prediction models from data that are optimized to predict a target attribute as accurately as possible. Machine-learning methods hold the promise of automatically identifying patterns that robustly and accurately detect threats. This thesis focuses on the design and analysis of discriminative learning methods for the two computer-security problems under investigation: email-campaign and malware detection.
The first part of this thesis addresses email-campaign detection. We focus on regular expressions as a syntactic framework, because regular expressions are intuitively comprehensible to security engineers and administrators, and they can be applied as a detection mechanism in an extremely efficient manner. In this setting, a prediction model is provided with exemplary messages from an email-spam campaign. The prediction model has to generate a regular expression that reveals the syntactic pattern underlying the entire campaign, and that a security engineer finds comprehensible and feels confident enough to use for blacklisting further messages at the email server. We model this problem as a two-stage learning problem with structured input and output spaces which can be solved using standard cutting-plane methods. To this end, we develop an appropriate loss function and derive a decoder for the resulting optimization problem.
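The following is a deliberately naive sketch of the task only, not the two-stage structured learning method developed in the thesis: it aligns example campaign messages token by token and keeps literal tokens only where all examples agree, inserting a wildcard elsewhere.

import re

def infer_regex(messages):
    """Generalize example messages into a crude campaign pattern."""
    token_lists = [m.split() for m in messages]
    length = min(len(t) for t in token_lists)
    parts = []
    for i in range(length):
        tokens = {t[i] for t in token_lists}
        # keep the literal token if all examples agree, otherwise use a wildcard
        parts.append(re.escape(tokens.pop()) if len(tokens) == 1 else r"\S+")
    return r"\s+".join(parts)

examples = [
    "Dear customer, you won $500! http://example.test/a",
    "Dear customer, you won $1000! http://example.test/b",
]
pattern = infer_regex(examples)
print(pattern)
print(bool(re.match(pattern, "Dear customer, you won a prize! http://example.test/c")))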
The second part of this thesis deals with the problem of predicting whether a given JavaScript or PHP file is malicious or benign. Recent malware analysis techniques use static features, dynamic features, or both. In fully dynamic analysis, the software or script is executed and observed for malicious behavior in a sandbox environment. By contrast, static analysis is based on features that can be extracted directly from the program file. In order to bypass static detection mechanisms, code obfuscation techniques are used to spread a malicious program file in many different syntactic variants. Deobfuscating the code before applying a static classifier can overcome the problem of obfuscated malicious code, but it increases the computational costs of malware detection by an order of magnitude. In this thesis we present a cascaded architecture in which a classifier first performs a static analysis of the original code and, based on the outcome of this first classification step, the code may be deobfuscated and classified again. We explore several types of features, including token n-grams, orthogonal sparse bigrams, subroutine hashings, and syntax-tree features, and study the robustness of detection methods and feature types against the evolution of malware over time. The developed tool scans very large file collections quickly and accurately.
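A hedged sketch of the cascade idea under assumed n-gram features, thresholds and a placeholder deobfuscation step; the feature sets, classifiers and decision rules of the thesis differ.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

static_clf = make_pipeline(CountVectorizer(analyzer="word", ngram_range=(1, 3)),
                           LogisticRegression(max_iter=1000))

# tiny toy training set so the sketch runs end to end; real training data is not shown
toy_scripts = ["eval unescape payload document write",
               "function add return var result console log"]
toy_labels = [1, 0]
static_clf.fit(toy_scripts, toy_labels)

def deobfuscate(code):
    """Placeholder for the expensive deobfuscation pass; returns the code unchanged here."""
    return code

def classify(code, low=0.2, high=0.8):
    p = static_clf.predict_proba([code])[0, 1]    # P(malicious) from static features of the raw code
    if p < low or p > high:                       # confident either way: stop after the first stage
        return p
    return static_clf.predict_proba([deobfuscate(code)])[0, 1]   # second, costly stage on deobfuscated code

print(classify("eval unescape payload"))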
Each model is evaluated on real-world data and compared to reference methods. Our approach of inferring regular expressions to filter emails belonging to an email spam campaign leads to models with a high true-positive rate at a very low false-positive rate that is an order of magnitude lower than that of a commercial content-based filter. The presented system, REx-SVMshort, is being used by a commercial email service provider and complements content-based and IP-address-based filtering.
Our cascaded malware detection system is evaluated on a high-quality data set of almost 400,000 conspicuous PHP files and a collection of more than 100,000 JavaScript files. From this case study we conclude that our system can process large data collections quickly and accurately at a low false-positive rate.
Design and implementation of service-oriented architectures pose a large number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as for the integration of enterprise applications.
Commonly used technologies, such as J2EE and .NET, form de facto standards for the realization of complex distributed systems. The evolution of component systems has led to web services and service-based architectures. This has been manifested in a multitude of industry standards and initiatives such as XML, WSDL, UDDI, SOAP, etc. All these achievements lead to a new and promising paradigm in IT systems engineering which proposes to design complex software solutions as the collaboration of contractually defined software services.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. Retreat of the Research School provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
Referential Choice
(2016)
We report a study of referential choice in discourse production, understood as the choice between various types of referential devices, such as pronouns and full noun phrases. Our goal is to predict referential choice, and to explore to what extent such prediction is possible. Our approach to referential choice includes a cognitively informed theoretical component, corpus analysis, machine learning methods and experimentation with human participants. Machine learning algorithms make use of 25 factors, including referent’s properties (such as animacy and protagonism), the distance between a referential expression and its antecedent, the antecedent’s syntactic role, and so on. Having found the predictions of our algorithm to coincide with the original almost 90% of the time, we hypothesized that fully accurate prediction is not possible because, in many situations, more than one referential option is available. This hypothesis was supported by an experimental study, in which participants answered questions about either the original text in the corpus, or about a text modified in accordance with the algorithm’s prediction. Proportions of correct answers to these questions, as well as participants’ rating of the questions’ difficulty, suggested that divergences between the algorithm’s prediction and the original referential device in the corpus occur overwhelmingly in situations where the referential choice is not categorical.
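A hedged sketch of the prediction task described above, using only a handful of the kinds of factors named (distance to the antecedent, animacy, protagonism, antecedent's syntactic role); the feature encoding, model choice and toy data are assumptions, not the study's actual 25-factor setup.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# columns: distance_to_antecedent (clauses), animate (0/1), protagonist (0/1), antecedent_is_subject (0/1)
X = np.array([[1, 1, 1, 1], [7, 0, 0, 0], [2, 1, 1, 1], [10, 1, 0, 0], [1, 1, 0, 1], [6, 0, 0, 1]])
y = np.array(["pronoun", "full_NP", "pronoun", "full_NP", "pronoun", "full_NP"])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[1, 1, 1, 1], [9, 0, 0, 0]]))   # predicted referential device per context
print(clf.predict_proba([[3, 1, 0, 1]]))           # soft scores can reflect non-categorical choices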
Ionothermal carbon materials
(2016)
Alternative concepts for energy storage and conversion have to be developed, optimized and employed to fulfill the dream of a fossil-independent energy economy. Porous carbon materials play a major role in many energy-related devices. Among different characteristics, distinct porosity features, e.g., the specific surface area (SSA), total pore volume (TPV), and pore size distribution (PSD), are important to maximize the performance in the final device. In order to approach the aim of synthesizing carbon materials with tailor-made porosity in a sustainable fashion, the present thesis focuses on biomass-derived precursors, employing and further developing the ionothermal carbonization.
During ionothermal carbonization, a salt melt simultaneously serves as solvent and porogen. Typically, eutectic mixtures containing zinc chloride are employed as the salt phase. The first topic of the present thesis addressed the possibility of precisely tailoring the porosity of ionothermal carbon materials by an experimentally simple variation of the molar composition of the binary salt mixture. The developed pore-tuning tool allowed the synthesis of glucose-derived carbon materials with predictable SSAs in the range of ~900 to ~2100 m2 g-1. Moreover, the nucleobase adenine was employed as a precursor, introducing nitrogen functionalities into the final material. Thereby, the chemical properties of the carbon materials are varied, opening up new fields of application. Nitrogen-doped carbons (NDCs) are able to catalyze the oxygen reduction reaction (ORR), which takes place on the cathode side of a fuel cell. The porosity tailoring developed herein allowed the synthesis of adenine-derived NDCs with outstanding SSAs of up to 2900 m2 g-1 and a very large TPV of 5.19 cm3 g-1. Furthermore, the influence of the porosity on the ORR could be directly investigated, enabling the precise optimization of the porosity characteristics of NDCs for this application. The second topic addressed the development of a new method to investigate the not yet unraveled mechanism of the oxygen reduction reaction using a rotating disc electrode setup. The focus was put on noble-metal-free catalysts. The results showed that the reaction pathway of the investigated catalysts is pH-dependent, indicating different active species at different pH values. The third topic addressed the expansion of the salts used for the ionothermal approach towards hydrated calcium and magnesium chloride. It was shown that hydrated salt phases allowed the introduction of a secondary templating effect, which was connected to the coexistence of liquid and solid salt phases. The method enabled the synthesis of fibrous NDCs with SSAs of up to 2780 m2 g-1 and a very large TPV of 3.86 cm3 g-1. Moreover, the concept of active-site implementation by a facile low-temperature metalation employing the obtained NDCs as solid ligands could be shown for the first time in the context of the ORR.
Overall, this thesis may pave the way towards highly porous carbon materials with tailor-made porosity, prepared via an inexpensive and sustainable pathway, which can be applied in energy-related fields, thereby supporting the needed expansion of the renewable energy sector.
Widespread landscape changes are presently observed in the Arctic and are most likely to accelerate in the future, in particular in permafrost regions which are sensitive to climate warming. To assess current and future developments, it is crucial to understand past environmental dynamics in these landscapes. Causes and interactions of environmental variability can hardly be resolved by instrumental records covering modern time scales. However, long-term environmental variability is recorded in paleoenvironmental archives. Lake sediments are important archives that allow reconstruction of local limnogeological processes as well as past environmental changes driven directly or indirectly by climate dynamics. This study aims at reconstructing Late Quaternary permafrost and thermokarst dynamics in central-eastern Beringia, the terrestrial land mass connecting Eurasia and North America during glacial sea-level low stands. In order to investigate the development, processes and influence of thermokarst dynamics, several sediment cores from extant lakes and drained lake basins were analyzed to answer the following research questions:
1. When did permafrost degradation and thermokarst lake development take place, and what were the enhancing and inhibiting environmental factors?
2. What are the dominant processes during thermokarst lake development and how are they reflected in proxy records?
3. How did, and still do, thermokarst dynamics contribute to the inventory and properties of organic matter in sediments and the carbon cycle?
Methods applied in this study are based upon a multi-proxy approach combining sedimentological, geochemical, geochronological, and micropaleontological analyses, as well as analyses of stable isotopes and hydrochemistry of pore-water and ice. Modern field observations of water quality and basin morphometrics complete the environmental investigations.
The investigated sediment cores reveal permafrost degradation and thermokarst dynamics on different time scales. The analysis of a sediment core from GG basin on the northern Seward Peninsula (Alaska) shows prevalent terrestrial accumulation of yedoma throughout the Early to Mid Wisconsin, with intermediate wet conditions at around 44.5 to 41.5 ka BP. This first wetland development was terminated by the accumulation of a 1-meter-thick airfall tephra most likely originating from the South Killeak Maar eruption at 42 ka BP. A depositional hiatus between 22.5 and 0.23 ka BP may indicate thermokarst lake formation in the surroundings of the site, which forms a yedoma upland until today. The thermokarst lake forming GG basin initiated 230 ± 30 cal a BP and drained in spring 2005 AD. Four years after drainage the lake talik was still unfrozen below 268 cm depth.
A permafrost core from Mama Rhonda basin on the northern Seward Peninsula preserved a full lacustrine record including several lake phases. The first lake generation developed at 11.8 cal ka BP during the Lateglacial-Early Holocene transition; its old basin (Grandma Rhonda) is still partially preserved at the southern margin of the study basin. Around 9.0 cal ka BP a shallow and more dynamic thermokarst lake developed, with actively eroding shorelines and potentially intermediate shallow-water or wetland phases (Mama Rhonda). Mama Rhonda lake drainage at 1.1 cal ka BP was followed by gradual accumulation of terrestrial peat and top-down refreezing of the lake talik. A significantly lower organic carbon content was measured in Grandma Rhonda deposits (mean TOC of 2.5 wt%) than in Mama Rhonda deposits (mean TOC of 7.9 wt%), highlighting the impact of thermokarst dynamics on biogeochemical cycling in different lake generations by thawing and mobilization of organic carbon into the lake system.
Proximal and distal sediment cores from Peatball Lake on the Arctic Coastal Plain of Alaska revealed young thermokarst dynamics over the last about 1,400 years along a depositional gradient, based on reconstructions from shoreline expansion rates and absolute dating results. After its initiation as a remnant pond of a previously drained lake basin, a rapidly deepening lake with increasing oxygenation of the water column is evident from laminated sediments and higher Fe/Ti and Fe/S ratios in the sediment. The sediment record archived characteristic shifts in depositional regimes and sediment sources, from upland deposits and re-deposited sediments from drained thaw-lake basins, depending on the gradually changing shoreline configuration. These changes are evident from alternating organic inputs into the lake system, which highlights the potential of thermokarst lakes to recycle old carbon from degrading permafrost deposits in their catchments.
The lake sediment record from Herschel Island in the Yukon (Canada) covers the full Holocene period. After its initiation as a thermokarst lake at 11.7 cal ka BP and intense thermokarst activity until 10.0 cal ka BP, the steady sedimentation was interrupted by a depositional hiatus at 1.6 cal ka BP, which likely resulted from lake drainage or allochthonous slumping due to collapsing shorelines. The specific setting of the lake on a push moraine composed of marine deposits is reflected in the sedimentary record. Freshening of the maturing lake is indicated by decreasing electrical conductivity of the pore-water. The alternation of marine and freshwater ostracods and foraminifera confirms decreasing salinity as well, but also reflects episodic re-deposition of allochthonous marine sediments.
Based on permafrost and lacustrine sediment records, this thesis shows examples of the Late Quaternary evolution of typical Arctic permafrost landscapes in central-eastern Beringia and the complex interaction of local disturbance processes, regional environmental dynamics and global climate patterns. This study confirms that thermokarst lakes are important agents of organic matter recycling in complex and continuously changing landscapes.
Complementing the well-established zwitterionic monomers 3-((2-(methacryloyloxy)ethyl)dimethylammonio)propane-1-sulfonate (“SPE”) and 3-((3-methacrylamidopropyl)dimethylammonio)propane-1-sulfonate (“SPP”), closely related sulfobetaine monomers were synthesized and polymerized by reversible addition-fragmentation chain transfer (RAFT) polymerization, using a fluorophore-labeled RAFT agent. The polyzwitterions of systematically varied molar mass were characterized with respect to their solubility in water, deuterated water, and aqueous salt solutions. These poly(sulfobetaine)s show thermoresponsive behavior in water, exhibiting upper critical solution temperatures (UCST). Phase transition temperatures depend notably on the molar mass and polymer concentration, and are much higher in D2O than in H2O. Also, the phase transition temperatures are effectively modulated by the addition of salts. The individual effects can in part be correlated with the Hofmeister series for the anions studied. Still, they depend in a complex way on the concentration and the nature of the added electrolytes, on the one hand, and on the detailed structure of the zwitterionic side chain, on the other hand. For polymers with the same zwitterionic side chain, it is found that methacrylamide-based poly(sulfobetaine)s exhibit higher UCST-type transition temperatures than their methacrylate analogs. Extending the distance between the polymerizable unit and the zwitterionic group from 2 to 3 methylene units decreases the UCST-type transition temperatures. Poly(sulfobetaine)s derived from aliphatic esters show higher UCST-type transition temperatures than their analogs featuring cyclic ammonium cations. The UCST-type transition temperatures increase markedly when the spacer separating the cationic and anionic moieties is extended from 3 to 4 methylene units. Thus, apparently small variations of the chemical structure strongly affect the phase behavior of the polyzwitterions in specific aqueous environments.
Water-soluble block copolymers were prepared from the zwitterionic monomers and the non-ionic monomer N-isopropylmethacrylamide (“NIPMAM”) by RAFT polymerization. Such block copolymers with two hydrophilic blocks exhibit twofold thermoresponsive behavior in water. The poly(sulfobetaine) block shows an UCST, whereas the poly(NIPMAM) block exhibits a lower critical solution temperature (LCST). This constellation induces a structure inversion of the solvophobic aggregates, a behavior referred to as “schizophrenic micelles”. Depending on the relative positions of the two phase transitions, the block copolymer passes through a molecularly dissolved or an insoluble intermediate regime, which can be modulated by the polymer concentration or by the addition of salt. Whereas, at low temperature, the poly(sulfobetaine) block forms polar aggregates that are kept in solution by the poly(NIPMAM) block, at high temperature, the poly(NIPMAM) block forms hydrophobic aggregates that are kept in solution by the poly(sulfobetaine) block. Thus, aggregates can be prepared in water which reversibly switch their “inside” to the “outside”, and vice versa.
The German newspaper market is characterized by a broad range of national daily newspapers, to which the public attributes different political orientations. The “Frankfurter Allgemeine Zeitung” (F.A.Z.) is considered a conservative paper, whereas the “taz.die tageszeitung” (taz) is characterized by a left-alternative orientation. Starting from this difference, this thesis examines the linguistic design of the headlines, subheadings and crossheads of the F.A.Z. and the taz on the topics “Alternative für Deutschland” (AfD), “Nationalsozialistischer Untergrund” (NSU) and “Front National” (FN). Besides lexical and syntactic factors, the qualitative-quantitative corpus study focuses on stylistic factors that allow a judgement on the research thesis that the newspapers' political orientation and underlying ideological stance become apparent in the linguistic formulation of the main headlines, subheadings and crossheads. The analyses are based on a constructivist approach grounded in systems-theoretical assumptions. This makes it possible to show, on the one hand, how the results of the linguistic analyses can be linked to the newspapers' different underlying constructions of reality; on the other hand, it becomes clear that the formulation of the headlines also affects the recipients' individual construction of reality. The comparative evaluations provide indications, of varying weight, of the communicators' attitudes and confirm that the respective perspectives on reality and the underlying ideological stances of the F.A.Z. and the taz already become apparent in the linguistic design of their headline complexes.
This dissertation deals with the organization of humanitarian air transports in international disasters. These flights take place whenever the capacity of disaster-affected regions to help themselves is exceeded and assistance from abroad is requested. In each of the ensuing relief operations, aid organizations and other actors involved in disaster relief once again face the challenge of setting up a logistics chain within a very short time, so that the goods arrive at the right time, in the right quantity and at the right place.
Humanitarian air transports are usually organized as charter flights and operate over long distances to destinations that often lie off the highly frequented flows of goods. The supply of such transport services on the market is not reliably available, and aid organizations may have to wait until capacity with suitable aircraft becomes available. The quality requirements that aid organizations place on relief-goods transports are also higher than in regular scheduled transport.
Within the dissertation, an alternative organizational model for the procurement, operation and financing of humanitarian air transports is developed. It considers the guaranteed availability of particularly flexibly deployable aircraft, with which the quality and, in particular, the predictability of relief operations could be improved.
An ideal-type model is developed here by coupling collective goods theory, which belongs to public finance, with contract theory as part of New Institutional Economics.
Empirical contributions to contract theory criticize that the procurement of transaction-specific capital goods, such as aircraft with special characteristics, leads to inefficient solutions between contracting parties due to risks and environmental uncertainties. This dissertation shows how risks and environmental uncertainties can be reduced ex ante, i.e. before a contract is concluded, by building a common information base. This is achieved through a temporal extension of an empirical model from regulatory economics for determining the organizational form for transaction-specific capital goods.
The thesis furthermore contributes to increasing efficiency in humanitarian logistics through a case-specific examination of horizontal cooperation and the professionalization of relief operations in the field of humanitarian aviation.
Behavioural Models
(2016)
This textbook introduces the basics of modelling and analysing discrete dynamic systems, such as computer programs, software and hardware systems, and business processes. The underlying concepts are introduced and concrete modelling techniques are described, such as finite automata, state machines, and Petri nets. The concepts are related to concrete application scenarios, among which business processes play a prominent role.
The book consists of three parts, the first of which addresses the foundations of behavioural modelling. After a general introduction to modelling, it introduces transition systems as a basic formalism for representing the behaviour of discrete dynamic systems. This section also discusses causality, a fundamental concept for modelling and reasoning about behaviour. In turn, Part II forms the heart of the book and is devoted to models of behaviour. It details both sequential and concurrent systems and introduces finite automata, state machines and several different types of Petri nets. One chapter is especially devoted to business process models, workflow patterns and BPMN, the industry standard for modelling business processes. Lastly, Part III investigates how the behaviour of systems can be analysed. To this end, it introduces readers to the concept of state spaces. Further chapters cover the comparison of behaviour and the formal analysis and verification of behavioural models.
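As a minimal illustration of one of the formalisms covered, the following sketch implements a small deterministic finite automaton, i.e. a labelled transition system with accepting states; the concrete states and alphabet are invented for demonstration.

from dataclasses import dataclass

@dataclass
class DFA:
    transitions: dict      # (state, symbol) -> next state
    start: str
    accepting: set

    def accepts(self, word):
        """Run the automaton on a word and report whether it ends in an accepting state."""
        state = self.start
        for symbol in word:
            key = (state, symbol)
            if key not in self.transitions:
                return False
            state = self.transitions[key]
        return state in self.accepting

# accepts binary strings containing an even number of 1s
even_ones = DFA(
    transitions={("even", "0"): "even", ("even", "1"): "odd",
                 ("odd", "0"): "odd",  ("odd", "1"): "even"},
    start="even",
    accepting={"even"},
)
print(even_ones.accepts("1011"))   # False (three 1s)
print(even_ones.accepts("1001"))   # True  (two 1s)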
The book was written for students of computer science and software engineering, as well as for programmers and system analysts interested in the behaviour of the systems they work on. It takes readers on a journey from the fundamentals of behavioural modelling to advanced techniques for modelling and analysing sequential and concurrent systems, and thus provides them with a deep understanding of the concepts and techniques introduced and of how they can be applied to concrete application scenarios.
The population structure of the highly mobile marine mammal, the harbor porpoise (Phocoena phocoena), in the Atlantic shelf waters follows a pattern of significant isolation-by-distance. The population structure of harbor porpoises from the Baltic Sea, which is connected with the North Sea through a series of basins separated by shallow underwater ridges, however, is more complex. Here, we investigated the population differentiation of harbor porpoises in European Seas with a special focus on the Baltic Sea and adjacent waters, using a population genomics approach. We used 2872 single nucleotide polymorphisms (SNPs), derived from double digest restriction-site associated DNA sequencing (ddRAD-seq), as well as 13 microsatellite loci and mitochondrial haplotypes for the same set of individuals. Spatial principal components analysis (sPCA), and Bayesian clustering on a subset of SNPs suggest three main groupings at the level of all studied regions: the Black Sea, the North Atlantic, and the Baltic Sea. Furthermore, we observed a distinct separation of the North Sea harbor porpoises from the Baltic Sea populations, and identified splits between porpoise populations within the Baltic Sea. We observed a notable distinction between the Belt Sea and the Inner Baltic Sea sub-regions. Improved delineation of harbor porpoise population assignments for the Baltic based on genomic evidence is important for conservation management of this endangered cetacean in threatened habitats, particularly in the Baltic Sea proper. In addition, we show that SNPs outperform microsatellite markers and demonstrate the utility of RAD-tags from a relatively small, opportunistically sampled cetacean sample set for population diversity and divergence analysis.
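A hedged sketch of one building block of such an analysis, namely an ordinary principal component analysis of a genotype matrix with individuals as rows and SNPs coded 0/1/2 as columns; the spatial PCA and Bayesian clustering used in the study are more involved, and the data below are random placeholders.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
genotypes = rng.integers(0, 3, size=(120, 2872)).astype(float)  # 120 individuals, 2872 SNPs (placeholder)
genotypes -= genotypes.mean(axis=0)                             # center each SNP column

pcs = PCA(n_components=2).fit_transform(genotypes)
print(pcs.shape)   # (120, 2): coordinates along the first two axes of genetic variation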
Water scarcity, adaption on climate change, and risk assessment of droughts and floods are critical topics for science and society these days. Monitoring and modeling of the hydrological cycle are a prerequisite to understand and predict the consequences for weather and agriculture. As soil water storage plays a key role for partitioning of water fluxes between the atmosphere, biosphere, and lithosphere, measurement techniques are required to estimate soil moisture states from small to large scales.
The method of cosmic-ray neutron sensing (CRNS) promises to close the gap between point-scale and remote-sensing observations, as its footprint was reported to be 30 ha. However, the methodology is rather young and requires highly interdisciplinary research to understand and interpret the response of neutrons to soil moisture. In this work, the signal of nine detectors has been systematically compared, and correction approaches have been revised to account for meteorological and geomagnetic variations. Neutron transport simulations have been consulted to precisely characterize the sensitive footprint area, which turned out to be 6–18 ha, highly local, and temporally dynamic. These results have been experimentally confirmed by the significant influence of water bodies and dry roads. Furthermore, mobile measurements on agricultural fields and across different land use types were able to accurately capture the various soil moisture states. It has been further demonstrated that the corresponding spatial and temporal neutron data can be beneficial for mesoscale hydrological modeling. Finally, first tests with a gyrocopter have proven the concept of airborne neutron sensing, where increased footprints are able to overcome local effects.
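A hedged sketch of the commonly used multiplicative corrections of raw neutron counts for air pressure, absolute air humidity and incoming cosmic-ray intensity; the revised correction approaches developed in this work may differ, and the reference values and attenuation length below are assumptions for illustration.

import math

def correct_neutrons(n_raw, pressure, humidity, incoming,
                     p_ref=1013.25, h_ref=0.0, i_ref=150.0, beta=130.0):
    f_p = math.exp((pressure - p_ref) / beta)   # barometric correction, attenuation length beta in hPa
    f_h = 1.0 + 0.0054 * (humidity - h_ref)     # absolute humidity correction, humidity in g/m^3
    f_i = i_ref / incoming                      # incoming cosmic-ray intensity from a neutron monitor
    return n_raw * f_p * f_h * f_i              # corrected count rate

print(correct_neutrons(n_raw=2300, pressure=990.0, humidity=8.5, incoming=145.0))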
This dissertation not only bridges the gap between scales of soil moisture measurements. It also establishes a close connection between the two worlds of observers and modelers, and further aims to combine the disciplines of particle physics, geophysics, and soil hydrology to thoroughly explore the potential and limits of the CRNS method.
Eye movements serve as a window into ongoing visual-cognitive processes and can thus be used to investigate how people perceive real-world scenes. A key issue for understanding eye-movement control during scene viewing is the role of central and peripheral vision, which process information differently and are therefore specialized for different tasks (object identification and peripheral target selection, respectively). Yet, rather little is known about the contributions of central and peripheral processing to gaze control and how they are coordinated within a fixation during scene viewing. Additionally, the factors determining fixation durations have long been neglected, as scene perception research has mainly focused on the factors determining fixation locations. The present thesis aimed at increasing the knowledge of how central and peripheral vision contribute to spatial and, in particular, to temporal aspects of eye-movement control during scene viewing. In a series of five experiments, we varied processing difficulty in the central or the peripheral visual field by attenuating selective parts of the spatial-frequency spectrum within these regions. Furthermore, we developed a computational model of how foveal and peripheral processing might be coordinated for the control of fixation duration. The thesis provides three main findings. First, the experiments indicate that increasing processing demands in central or peripheral vision do not necessarily prolong fixation durations; instead, stimulus-independent timing is adapted when processing becomes too difficult. Second, peripheral vision seems to play a prominent role in the control of fixation durations, a notion also implemented in the computational model. The model assumes that foveal and peripheral processing proceed largely in parallel and independently during fixation, but can interact to modulate fixation duration. Thus, we propose that the variation in fixation durations can in part be accounted for by the interaction between central and peripheral processing. Third, the experiments indicate that saccadic behavior largely adapts to processing demands, with a bias towards avoiding spatial-frequency filtered scene regions as saccade targets. We demonstrate that the observed saccade amplitude patterns reflect corresponding modulations of visual attention. The present work highlights the individual contributions and the interplay of central and peripheral vision in gaze control during scene viewing, particularly for the control of fixation duration. Our results entail new implications for computational models and for experimental research on scene perception.
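A toy simulation of the parallel foveal/peripheral idea sketched above: a fixation ends only once a stimulus-independent random timer has elapsed and both processing streams have made sufficient progress. The parameters and the stopping rule are invented for illustration and do not reproduce the thesis's computational model.

import random

def simulate_fixation(foveal_difficulty=1.0, peripheral_difficulty=1.0, dt=1.0):
    timer = random.gauss(250, 50)                   # stimulus-independent random timer (ms)
    foveal, peripheral, t = 0.0, 0.0, 0.0
    while True:
        t += dt
        foveal += dt / foveal_difficulty            # harder foveal input means slower accumulation
        peripheral += dt / peripheral_difficulty    # peripheral stream accumulates in parallel
        if t >= timer and foveal >= 150 and peripheral >= 150:
            return t                                # fixation duration (ms)

durations = [simulate_fixation(peripheral_difficulty=1.5) for _ in range(1000)]
print(sum(durations) / len(durations))              # mean duration rises when peripheral processing is harder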
Möglichkeiten der Mittelstandsförderung durch Vergaberechtsgestaltung und Vergaberechtspraxis
(2016)
The worthiness and eligibility of small and medium-sized enterprises (SMEs) for support is a pan-European economic policy concern. This is evidenced, on the one hand, by numerous provisions in primary, secondary, constitutional and ordinary statutory law and, on the other hand, by the importance of SMEs in the economic, societal and social fabric. Within the European Union, not only does the motto “Vorfahrt für KMU” (priority for SMEs) prevail; the procurement directives adopted in spring 2014 also paid particular attention to improving SMEs' access to the public procurement market. Measured against the steering and guiding potential of public procurement, its influence on the innovation activity of the economy and its effects on economic and competitive activity on the one hand, and the overall economic importance of SMEs on the other, SMEs remain underrepresented in procurement procedures despite numerous European and national initiatives. In addition to the opaque regulatory structure of German procurement law, SMEs face particular difficulties from the beginning to the end of the procurement procedure. This initial finding was taken as an occasion to re-examine the possibilities of promoting SMEs through the design and practice of procurement law.
Infants' lexical processing is modulated by featural manipulations made to words, suggesting that early lexical representations are sufficiently specified to establish a match with the corresponding label. However, the precise degree of detail in early words requires further investigation due to equivocal findings. We studied this question by assessing children’s sensitivity to the degree of featural manipulation (Chapters 2 and 3), and sensitivity to the featural makeup of homorganic and heterorganic consonant clusters (Chapter 4). Gradient sensitivity on the one hand and sensitivity to homorganicity on the other hand would suggest that lexical processing makes use of sub-phonemic information, which in turn would indicate that early words contain sub-phonemic detail. The studies presented in this thesis assess children’s sensitivity to sub-phonemic detail using minimally demanding online paradigms suitable for infants: single-picture pupillometry and intermodal preferential looking. Such paradigms have the potential to uncover lexical knowledge that may be masked otherwise due to cognitive limitations. The study reported in Chapter 2 obtained a differential response in pupil dilation to the degree of featural manipulation, a result consistent with gradient sensitivity. The study reported in Chapter 3 obtained a differential response in proportion of looking time and pupil dilation to the degree of featural manipulation, a result again consistent with gradient sensitivity. The study reported in Chapter 4 obtained a differential response to the manipulation of homorganic and heterorganic consonant clusters, a result consistent with sensitivity to homorganicity. These results suggest that infants' lexical representations are not only specific, but also detailed to the extent that they contain sub-phonemic information.
Complex networks are ubiquitous in nature and society. They appear in vastly different domains, for instance as social networks, biological interactions or communication networks. Yet in spite of their different origins, these networks share many structural characteristics. For instance, their degree distribution typically follows a power law. This means that the fraction of vertices of degree k is proportional to k^(−β) for some constant β, making these networks highly inhomogeneous. Furthermore, they also typically have high clustering, meaning that links between two nodes are more likely to appear if they have a neighbor in common.
To mathematically study the behavior of such networks, they are often modeled as random graphs. Many of the popular models like inhomogeneous random graphs or Preferential Attachment excel at producing a power law degree distribution. Clustering, on the other hand, is in these models either not present or artificially enforced.
Hyperbolic random graphs bridge this gap by assuming an underlying geometry to the graph: Each vertex is assigned coordinates in the hyperbolic plane, and two vertices are connected if they are nearby. Clustering then emerges as a natural consequence: Two nodes joined by an edge are close by and therefore have many neighbors in common. On the other hand, the exponential expansion of space in the hyperbolic plane naturally produces a power law degree sequence. Due to the hyperbolic geometry, however, rigorous treatment of this model can quickly become mathematically challenging.
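A compact sketch of this generative model as commonly formulated: place n vertices in a hyperbolic disk of radius R, with radii biased outward and angles uniform, and connect two vertices whenever their hyperbolic distance is at most R. The parameter choices below are illustrative; alpha controls the power-law exponent (roughly β = 2α + 1).

import math
import random

def hyperbolic_random_graph(n=500, alpha=0.75, R=None, seed=0):
    random.seed(seed)
    R = R if R is not None else 2 * math.log(n)          # common choice of disk radius
    # sample radii with density proportional to sinh(alpha * r) via inverse-transform sampling
    radii = [math.acosh(1 + (math.cosh(alpha * R) - 1) * random.random()) / alpha
             for _ in range(n)]
    angles = [random.uniform(0, 2 * math.pi) for _ in range(n)]

    def dist(i, j):
        dphi = math.pi - abs(math.pi - abs(angles[i] - angles[j]))   # angular difference in [0, pi]
        x = (math.cosh(radii[i]) * math.cosh(radii[j])
             - math.sinh(radii[i]) * math.sinh(radii[j]) * math.cos(dphi))
        return math.acosh(max(1.0, x))                               # hyperbolic distance

    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if dist(i, j) <= R]
    return edges

edges = hyperbolic_random_graph()
print(len(edges), "edges among 500 vertices")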
In this thesis, we improve upon the understanding of hyperbolic random graphs by studying their structural and algorithmic properties. Our main contribution is threefold. First, we analyze the emergence of cliques in this model. We find that whenever the power-law exponent satisfies 2 < β < 3, there exists a clique of size polynomial in n. For β ≥ 3, on the other hand, the size of the largest clique is logarithmic, which contrasts sharply with previous models, where the largest clique has constant size in this regime. We also provide efficient algorithms for finding cliques if the hyperbolic node coordinates are known. Second, we analyze the diameter, i.e., the longest shortest path in the graph. We find that it is of order O(polylog(n)) if 2 < β < 3 and O(log n) if β > 3. To complement these findings, we also show that the diameter is of order at least Ω(log n). Third, we provide an algorithm for embedding a real-world graph into the hyperbolic plane using only its graph structure. To ensure good quality of the embedding, we perform extensive computational experiments on generated hyperbolic random graphs. Further, as a proof of concept, we embed the Amazon product recommendation network and observe that products from the same category are mapped close together.
Intracontinental deformation usually is a result of tectonic forces associated with distant plate collisions. In general, the evolution of mountain ranges and basins in this environment is strongly controlled by the distribution and geometries of preexisting structures. Thus, predictive models usually fail in forecasting the deformation evolution in these kinds of settings. Detailed information on each range and basin-fill is vital to comprehend the evolution of intracontinental mountain belts and basins. In this dissertation, I have investigated the complex Cenozoic tectonic evolution of the western Tien Shan in Central Asia, which is one of the most active intracontinental ranges in the world. The work presented here combines a broad array of datasets, including thermo- and geochronology, paleoenvironmental interpretations, sediment provenance and subsurface interpretations in order to track changes in tectonic deformation. Most of the identified changes are connected and can be related to regional-scale processes that governed the evolution of the western Tien Shan.
The NW-SE trending Talas-Fergana fault (TFF) separates the western from the central Tien Shan and constitutes a world-class example of the influence of preexisting anisotropies on the subsequent structural development of a contractional orogen. While to the east most of the ranges and basins have a sub-parallel E-W trend, the triangular Fergana basin forms a substantial feature in the western Tien Shan morphology, with ranges on all three sides. In this thesis, I present 55 new thermochronologic ages (apatite fission-track and zircon (U-Th)/He) used to constrain the exhumation histories of several mountain ranges in the western Tien Shan. At the same time, I analyzed the Fergana basin-fill, looking for progressive changes in sedimentary paleoenvironments, source areas and stratal geometrical configurations in the subsurface and in outcrops.
The data presented in this thesis suggests that low cooling rates (<1°C Myr-1), calm depositional environments, and low depositional rates (<10 m Myr-1) were widely distributed across the western Tien Shan, describing a quiescent tectonic period throughout the Paleogene. Increased cooling rates in the late Cenozoic occurred diachronously and with variable magnitudes in different ranges. This rapid cooling stage is interpreted to represent increased erosion caused by active deformation and constrains the onset of Cenozoic deformation in the western Tien Shan. Time-temperature histories derived from the northwestern Tien Shan samples show an increase in cooling rates by ~25 Ma. This event is correlated with a synchronous pulse
in the South Tien Shan. I suggest that strike-slip motion along the TFF commenced at the Oligo-Miocene boundary, facilitating CCW rotation of the Fergana basin and enabling exhumation of the linked horsetail splays. Higher depositional rates (~150 m Myr-1) in the Oligo-Miocene section (Massaget Fm.) of the Fergana basin suggest synchronous deformation in the surrounding ranges. The central Alai Range also experienced rapid cooling around this time, suggesting that the onset of intramontane basin fragmentation and isolation is coeval. These results point to deformation starting simultaneously in the late Oligocene – early Miocene in geographically distant mountain ranges. I suggest that these early uplifts are controlled by reactivated structures (like the TFF), which are probably the frictionally weakest and most-suitably oriented for accommodating and transferring N-S horizontal shortening along the western Tien Shan.
Afterwards, in the late Miocene (~10 Ma), a period of renewed rapid cooling affected the Tien Shan, and most mountain ranges and inherited structures started to actively deform. This episode is widely distributed, and an increase in exhumation is inferred in most of the sampled ranges. Moreover, the Pliocene section in the basin subsurface shows the highest depositional rates (>180 m Myr-1) and higher-energy facies. The increase in deformation and exhumation further contributed to intramontane basin partitioning. Overall, the interpretation is that the Tien Shan and much of Central Asia underwent a widespread increase in the rate of horizontal crustal shortening. Previously, stress transfer along the rigid Tarim block or Pamir indentation has been proposed to account for Himalayan hinterland deformation. However, the extent of the episode requires a different and broader geodynamic driver.
Prevalence of Achilles tendinopathy increases with age, leading to a weaker tendon with a predisposition to rupture. Previous studies investigating Achilles tendon (AT) properties are restricted to standardized isometric conditions. Knowledge regarding the influence of age and pathology on the AT response under functional tasks remains limited. Therefore, the aim of this thesis was to investigate the influence of age and pathology on AT properties during a single-leg vertical jump.
Healthy children, asymptomatic adults and patients with Achilles tendinopathy participated. Ultrasonography was used to assess AT length, AT cross-sectional area and AT elongation. The reliability of the methodology was evaluated both intra- and inter-rater, at rest and at maximal isometric plantar-flexion contraction, and the methodology was then applied to investigate tendon properties during a functional task. During the functional task, a single-leg vertical jump on a force plate was performed while AT elongation and vertical ground reaction forces were recorded simultaneously. AT compliance [mm/N] (elongation/force) and AT strain [%] (elongation/length) were calculated. Differences between groups were evaluated with respect to age (children vs. adults) and pathology (asymptomatic adults vs. patients).
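As a minimal illustration of the two quantities defined above (the numerical values are entirely made up for demonstration and are not data from the study):

```python
# AT compliance = elongation / force, AT strain = elongation / resting length.
elongation_mm = 12.0       # hypothetical tendon elongation during the jump [mm]
peak_force_n = 2400.0      # hypothetical peak tendon force [N]
resting_length_mm = 190.0  # hypothetical resting AT length [mm]

compliance_mm_per_n = elongation_mm / peak_force_n          # [mm/N]
strain_percent = 100.0 * elongation_mm / resting_length_mm  # [%]
print(f"compliance = {compliance_mm_per_n:.4f} mm/N, strain = {strain_percent:.1f} %")
```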
Good to excellent reliability with low levels of variability was achieved in the assessment of AT properties. During the jumps, AT elongation was statistically significantly higher in children. However, no statistically significant difference in force was found among the groups. AT compliance and strain were statistically significantly higher only in children. No significant differences were found between asymptomatic adults and patients with tendinopathy.
The methodology used to assess AT properties is reliable, allowing its use in further investigations. The higher AT compliance in children might be considered a protective factor against load-related injuries. During a functional task, when higher forces act on the AT, tendinopathy does not result in a weaker tendon.
Ephraim Carlebach
(2016)
Physical hydrogels are currently attracting increasing interest as cell substrates, since viscoelasticity, or stress relaxation, is an important parameter in mechanotransduction that has so far been neglected. In this work, multi-functional polyurethanes were designed that form physical hydrogels via a novel gelation mechanism. In water, the anionic polyurethanes spontaneously form aggregates, which are kept in solution by electrostatic repulsion. Rapid gelation can then be achieved by charge screening, whereby aggregation proceeds and a network is formed. This can be done by adding various acids or salts, so that both acidic (pH 4 - 5) and pH-neutral hydrogels can be obtained. Whereas conventional polyurethane-based hydrogels are usually prepared from toxic isocyanate-containing prepolymers, the physical gelation mechanism described here is suitable for in situ applications in sensitive environments. Both the stiffness and the stress relaxation of the hydrogels can be tuned independently over a broad range. In addition, the hydrogels exhibit excellent stress recovery.
The empirical record of the early 21st century shows more authoritarian regimes than was assumed at the end of the 20th century. Current research on authoritarianism attempts to explain the persistence of this regime type with reference to political institutions; political actors outside the centre of power are thereby left out of the picture.
This project examines the role and function of political opposition in authoritarian regimes. It proceeds from the assumption that a significant characteristic of authoritarian regimes manifests itself in the opposition. The actor-centred project belongs to qualitatively oriented political science; it links Juan Linz's concept of authoritarianism with classical approaches from research on opposition and makes these theories usable for current research on authoritarianism.
The elite-oriented typology of opposition developed here is applied to the example of Kenya in the period 1990-2005. The opposition groups are located within the institutional structure of authoritarian regimes, and their political action is analysed along the dimensions of status of action, convictions guiding action, and strategies of action. Taking into account historically grown regional and cultural specificities, it is assumed that general, cross-regional statements about opposition in authoritarian regimes can be made: no single type of opposition can bring about a change of rule on its own. Whether rule changes or persists depends on the dominance of particular opposition types within the web of opposition as well as on the simultaneous weakness of other opposition types.
Through the conceptual engagement with opposition and its empirical exploration, this study aims to make a substantial contribution to the necessary debate on authoritarian regimes in the 21st century.
The research underlying this thesis aimed to develop new melt-processable acrylonitrile copolymers. These were subsequently to be formed into man-made fibres by a melt-spinning process and, in a final step, converted into carbon fibres. For this purpose, exploratory investigations were first carried out on various acrylonitrile copolymers obtained by solution polymerization. These investigations showed that electrostatic interactions are better suited than steric shielding to achieve meltability below the decomposition temperature of polyacrylonitrile. Of the many copolymers investigated, those with methoxyethyl acrylate (MEA) proved to be the most effective. For these copolymers, the copolymerization parameters were determined and the basic kinetics of the solution polymerization were investigated. The copolymers with MEA were formed into fibres by melt spinning and these fibres were then characterized. The influence of various parameters, such as the molar mass, on the fibre properties and fibre production was also examined. Finally, a heterophase polymerization process for producing AN/MEA copolymers was developed, by which the material properties could be further improved. A suitable process was developed to suppress the thermoplastic properties of the fibres, and the conversion to carbon fibres was then carried out.
Surface-enhanced Raman scattering (SERS) is a promising tool to obtain rich chemical information about analytes at trace levels. However, in order to perform selective experiments on individual molecules, two fundamental requirements have to be fulfilled. On the one hand, areas with high local field enhancement, so-called "hot spots", have to be created by positioning the supporting metal surfaces in close proximity to each other. In most cases hot spots are formed in the gap between adjacent metal nanoparticles (NPs). On the other hand, the analyte has to be positioned directly in the hot spot in order to profit from the highest signal amplification. The use of DNA origami substrates provides both the arrangement of AuNPs with nanometre precision and the ability to bind analyte molecules at predefined positions. Consequently, the present cumulative doctoral thesis aims at the development of a novel SERS substrate based on a DNA origami template. To this end, two DNA-functionalized gold nanoparticles (AuNPs) are attached to one DNA origami substrate, resulting in the formation of an AuNP dimer and thus of a hot spot within the corresponding gap. The obtained structures are characterized by correlated atomic force microscopy (AFM) and SERS imaging, which allows for the combination of structural and chemical information.
Initially, a proof of principle is presented that demonstrates the potential of the novel approach. It is shown that the Raman signal of 15 nm AuNPs coated with dye-modified DNA (dye: carboxytetramethylrhodamine (TAMRA)) is significantly higher for AuNP dimers arranged on a DNA origami platform than for single AuNPs. Furthermore, by attaching single TAMRA molecules in the hot spot between two 5 nm AuNPs and optimizing the size of the AuNPs by electroless gold deposition, SERS experiments at the few-molecule level are presented. The initially used DNA origami-AuNP design is further optimized in several respects. On the one hand, larger AuNPs with diameters of up to 60 nm are used, which are additionally treated with a silver enhancement solution to obtain Au-Ag core-shell NPs. On the other hand, the arrangement of the two AuNPs is altered to improve the position of the dye molecule within the hot spot as well as to decrease the gap size between the two particles. With the optimized design, the detection of single dye molecules (TAMRA and cyanine 3 (Cy3)) by means of SERS is demonstrated. Quantitatively, enhancement factors of up to 10^10 are estimated, which is sufficiently high to detect single dye molecules.
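For context, enhancement factors of this kind are commonly estimated by comparing the per-molecule SERS signal with the per-molecule signal of a non-enhanced reference measurement; this standard relation is not spelled out in the abstract and is given here only for orientation:

\[ \mathrm{EF} = \frac{I_{\mathrm{SERS}}/N_{\mathrm{SERS}}}{I_{\mathrm{ref}}/N_{\mathrm{ref}}} \]

where I denotes the measured Raman intensity and N the number of molecules contributing to it; with a single molecule in the hot spot, N_SERS is of order one.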
In the second part, the influence of graphene as an additional component of the SERS substrate is investigated. Graphene is a two-dimensional material with an outstanding combination of electronic, mechanical and optical properties. Here, it is demonstrated that single-layer graphene (SLG) replicates the shape of underlying non-modified DNA origami substrates very well, which enables the monitoring of structural alterations by AFM imaging. In this way, it is shown that graphene encapsulation significantly increases the structural stability of bare DNA origami substrates towards mechanical force and prolonged exposure to deionized water.
Furthermore, SLG is used to cover DNA origami substrates which are functionalized with a 40 nm AuNP dimer. In this way, a novel kind of hybrid material is created which exhibits several advantages compared to the analogous non-covered SERS substrates. First, the fluorescence background of dye molecules located between the AuNP surface and the SLG is efficiently reduced. Second, the photobleaching rate of the incorporated dye molecules is decreased by up to one order of magnitude. Third, due to the increased photostability of the investigated dye molecules, polarization-dependent series measurements on individual structures become possible. This in turn reveals extensive information about the dye molecules in the hot spot as well as about the strain induced within the graphene lattice.
Although SLG can significantly influence the SERS substrate in the aforementioned ways, all those effects are strongly related to the extent of contact with the underlying AuNP dimer.
Transmorphic
(2016)
Defining Graphical User Interfaces (GUIs) through functional abstractions can reduce the complexity that arises from mutable abstractions. Recent examples, such as Facebook's React GUI framework, have shown how modelling the view as a functional projection from the application state to a visual representation can reduce the number of interacting objects and thus help to improve the reliability of the system. This, however, comes at the price of a more rigid, functional framework in which programmers are forced to express visual entities with functional abstractions, detached from the way one intuitively thinks about the physical world.
In contrast to that, the GUI framework Morphic allows interactions in the graphical domain, such as grabbing, dragging or resizing of elements, to evolve an application at runtime, providing liveness and directness in the development workflow. Modelling each visual entity through mutable abstractions, however, makes it difficult to ensure correctness when GUIs start to grow more complex. Furthermore, by evolving morphs at runtime through direct manipulation, we diverge more and more from the symbolic description that corresponds to the morph. Given that both of these approaches have their merits and problems, is there a way to combine them in a meaningful way that preserves their respective benefits?
As a solution for this problem, we propose to lift Morphic's concept of direct manipulation from the mutation of state to the transformation of source code. In particular, we explore the design, implementation and integration of a bidirectional mapping between the graphical representation and a functional, declarative symbolic description of a graphical user interface within a self-hosted development environment. We present Transmorphic, a functional take on the Morphic GUI framework, in which the visual and structural properties of morphs are defined in a purely functional, declarative fashion. In Transmorphic, the developer is able to assemble different morphs at runtime through direct manipulation, which is automatically translated into changes in the code of the application. In this way, the comprehensiveness and predictability of direct manipulation can be used in the context of a purely functional GUI, while the effects of the manipulation are reflected in a medium that is always within reach for the programmer and can even be used to incorporate the source transformations into the source files of the application.
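As a rough illustration of the underlying idea (a hypothetical sketch, not Transmorphic's actual API): the view is a pure function from application state to a declarative scene description, and a direct manipulation is translated back into an edit of the symbolic description instead of mutating a live graphical object.

```python
# Hypothetical sketch of a "functional Morphic" style: the scene is derived from
# state by a pure function, and a direct manipulation (dragging the right edge of
# a rectangle) is recorded as a textual change to the symbolic description.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x: int
    y: int
    width: int
    height: int
    color: str

def view(state):
    """Pure projection: application state -> declarative scene description."""
    return [Rect(x=10, y=10, width=state["width"], height=40, color=state["color"])]

def apply_resize(state, new_width):
    """Translate a direct manipulation into new state plus a source-level edit."""
    new_state = {**state, "width": new_width}
    source_edit = f'state["width"] = {new_width}'  # change to persist in the code
    return new_state, source_edit

state = {"width": 120, "color": "steelblue"}
state, edit = apply_resize(state, 200)
print(view(state))
print("recorded edit:", edit)
```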
The lakes in the Kenyan Rift Valley offer the unique opportunity to study a wide range of hydrochemical environmental conditions, ranging from freshwater to highly saline and alkaline lakes. Because little is known about the hydro- and biogeochemical conditions in the underlying lake sediments, the aim of this study was to extend the already existing data sets with data from porewater and biomarker analyses. Additionally, reduced sulphur compounds and sulphate reduction rates in the sediment were determined. The new data were used to examine the anthropogenic and microbial influence on the lake sediments as well as the influence of the water chemistry on the degradation and preservation of organic matter in the sediment column. The lakes discussed in this study are: Logipi, Eight (a small crater lake in the region of Kangirinyang), Baringo, Bogoria, Naivasha, Oloiden, and Sonachi.
The biomarker compositions were similar in all studied lake sediments; nevertheless, there were some differences between the saline and freshwater lakes. One of those differences is the occurrence of a molecule related to β-carotene, which was only found in the saline lakes. This molecule most likely originates from cyanobacteria, single-celled organisms which are commonly found in saline lakes. In the two freshwater lakes, stigmasterol, a sterol characteristic for freshwater algae, was found. In this study, it was shown that Lakes Bogoria and Sonachi can be used for environmental reconstructions with biomarkers, because the absence of oxygen at the lake bottoms slowed the degradation process. Other lakes, like for example Lake Naivasha, cannot be used for such reconstructions, because of the large anthropogenic influence. But the biomarkers proved to be a useful tool to study those anthropogenic influences. Additionally, it was observed that horizons with a high concentration of elemental sulphur can be used as temporal markers. Those horizons were deposited during times when the lake levels were very low. The sulphur was deposited by microorganisms which are capable of anoxygenic photosynthesis or sulphide oxidation.
The aim of this doctoral thesis was the development and evaluation of a skills-based primary prevention programme (Mainzer Schultraining zur Essstörungsprävention, MaiStep) for partial and full-syndrome eating disorders. Its effectiveness was assessed via a primary outcome (reduction of existing eating disorder symptoms) and a secondary outcome (associated psychopathology) 3 and 12 months after delivery of the training. The randomized controlled trial comprised two intervention groups and one active control group. 1,654 adolescents (female/male: 781/873; mean age: 13.1±0.7; BMI: 20.0±3.5) were recruited for the study at randomly selected schools in Rhineland-Palatinate. The development of the prevention programme was based on a systematic literature review of 63 scientific studies on the prevention of eating disorders in childhood and adolescence. One intervention group was led by psychologists and a second by teachers. The addiction and stress prevention programme delivered in the active control group was led by teachers. MaiStep showed no significant effects compared with the active control group at the 3-month follow-up. However, after 12 months multiple significant effects emerged between the intervention groups and the active control group. For the primary outcome, significantly fewer adolescents with partial anorexia nervosa (χ²(2) = 8.74, p = .01**) and/or partial bulimia nervosa (χ²(2) = 7.25, p = .02*) were found in the intervention groups. For the secondary outcomes, significant changes between the intervention groups and the active control group were observed in the Eating Disorder Inventory (EDI-2) subscales drive for thinness (F(2, 355) = 3.94, p = .02*) and perfectionism (F(2, 355) = 4.19, p = .01**) as well as in the Body Image Avoidance Questionnaire (BIAQ) (F(2, 525) = 18.79, p = .01**). MaiStep can thus be regarded as a successful programme for reducing partial eating disorders in the 13- to 15-year-old age group. Despite different mechanisms of action, teachers proved just as successful as psychologists in delivering the programme.
Trial registration: MaiStep is registered at the German Clinical Trials Register (DRKS00005050).
The energy sector is both affected by climate change and a key sector for climate protection measures. Energy security is the backbone of our modern society and guarantees the functioning of most critical infrastructure. Thus, decision makers and energy suppliers of different countries should be familiar with the factors that increase or decrease the susceptibility of their electricity sector to climate change. Susceptibility here means the socioeconomic and structural characteristics of the electricity sector that affect the demand for and supply of electricity under climate change. Moreover, the relevant stakeholders are supposed to know whether the given national energy and climate targets are feasible and what needs to be done in order to meet them. In this regard, a focus should be on the residential building sector, as it is one of the largest energy consumers and therefore one of the largest emitters of anthropogenic CO2 worldwide.
This dissertation addresses the first aspect, namely the susceptibility of the electricity sector, by developing a ranked index which allows for a quantitative comparison of the electricity sector susceptibility of 21 European countries based on 14 influencing factors. Such a ranking has not been completed to date. We applied a sensitivity analysis to test the relative effect of each influencing factor on the susceptibility index ranking. We also discuss reasons for the ranking position and thus the susceptibility of selected countries. The second objective, namely the impact of climate change on the energy demand of buildings, is tackled by means of a new model with which the heating and cooling energy demand of residential buildings can be estimated. We applied the model to Germany and the Netherlands as examples. It considers projections of future changes in population, climate and the insulation standards of buildings, whereas most existing studies take into account fewer than three different factors that influence the future energy demand of buildings. Furthermore, we developed a comprehensive retrofitting algorithm with which the total residential building stock can be modeled for the first time for each year in the past and future.
The study confirms that there is no correlation between the geographical location of a country and its position in the electricity sector susceptibility ranking. Moreover, we found no pronounced pattern of susceptibility influencing factors between countries that ranked higher or lower in the index. We illustrate that Luxembourg, Greece, Slovakia and Italy are the countries with the highest electricity sector susceptibility. The electricity sectors of Norway, the Czech Republic, Portugal and Denmark were found to be least susceptible to climate change. Knowledge about the most important factors for the poor and good ranking positions of these countries is crucial for finding adequate adaptation measures to reduce the susceptibility of the electricity sector. Therefore, these factors are described within this study.
We show that the heating energy demand of residential buildings will strongly decrease in both Germany and the Netherlands in the future. The analysis for the Netherlands focused on the regional level and a finer temporal resolution, which revealed strong variations in the future heating energy demand changes by province and by month. In the German study, we additionally investigated the future cooling energy demand and showed that it will increase only slightly up to the middle of this century. Thus, increases in the cooling energy demand are not expected to offset reductions in heating energy demand. The main factor for substantial heating energy demand reductions is the retrofitting of buildings. We are the first to show that the given German and Dutch energy and climate targets in the building sector can only be met if the annual retrofitting rates are substantially increased. The current rate of only about 1 % of the total building stock per year is insufficient for reaching a nearly zero-energy demand of all residential buildings by the middle of this century. To reach this target, it would need to be at least tripled. To sum up, this thesis emphasizes that country-specific characteristics are decisive for the electricity sector susceptibility of European countries. It also shows, for different scenarios, how much energy will be needed in the future to heat and cool residential buildings. With this information, existing climate mitigation and adaptation measures can be justified or new actions encouraged.
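A back-of-envelope check of the "at least tripled" statement, under the simplifying assumption of a constant annual retrofitting rate applied to the 2016 building stock until 2050 (the numbers are illustrative only, not taken from the thesis):

```python
# Fraction of the residential stock retrofitted by 2050 for two constant annual rates.
years = 2050 - 2016
for rate in (0.01, 0.03):
    share = min(1.0, rate * years)  # fraction of the stock retrofitted by 2050
    print(f"{rate:.0%} per year -> about {share:.0%} of the stock retrofitted by 2050")
# ~1 %/yr covers only about a third of the stock; ~3 %/yr is needed to cover nearly all of it.
```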
This paper is focused on the temperature-dependent synthesis of gold nanotriangles in a vesicular template phase, containing phosphatidylcholine and AOT, by adding the strongly alternating polyampholyte PalPhBisCarb.
UV-vis absorption spectra in combination with TEM micrographs show that flat gold nanoplatelets are formed predominantly in the presence of the polyampholyte at 45 °C. The formation of triangular and hexagonal nanoplatelets can be directly influenced by the kinetic approach, i.e., by varying the polyampholyte dosage rate at 45 °C. Corresponding zeta potential measurements indicate that a temperature-dependent adsorption of the polyampholyte on the {111} faces induces the symmetry-breaking effect, which is responsible for the kinetically controlled, hindered vertical and preferred lateral growth of the nanoplatelets.
Sound matters
(2016)
This essay proposes a reorientation in postcolonial studies that takes account of the transcultural realities of the viral twenty-first century. This reorientation entails close attention to actual performances, their specific medial embeddedness, and their entanglement in concrete formal or informal material conditions. It suggests that rather than a focus on print and writing favoured by theories in the wake of the linguistic turn, performed lyrics and sounds may be better suited to guide the conceptual work. Accordingly, the essay chooses a classic of early twenty-first-century digital music – M.I.A.’s 2003/2005 single “Galang” – as its guiding example. It ultimately leads up to a reflection on what Ravi Sundaram coined as “pirate modernity,” which challenges us to rethink notions of artistic authorship and authority, hegemony and subversion, culture and theory in the postcolonial world of today.
The age at which members of a semantic category are learned (age of acquisition), the typicality they demonstrate within their corresponding category, and the semantic domain to which they belong (living, non-living) are known to influence the speed and accuracy of lexical/semantic processing. So far, only a few studies have looked at the origin of age of acquisition and its interdependence with typicality and semantic domain within the same experimental design. Twenty adult participants performed an animacy decision task in which nouns were classified according to their semantic domain as being living or non-living. Response times were influenced by the independent main effects of each parameter: typicality, age of acquisition, semantic domain, and frequency. However, there were no interactions. The results are discussed with respect to recent models concerning the origin of age of acquisition effects.
In this thesis, a route to temperature-, pH-, solvent-, 1,2-diol-, and protein-responsive sensors made of biocompatible and low-fouling materials is established. These sensor devices are based on the sensitive modulation of the visual band gap of a photonic crystal (PhC), which is induced by the selective binding of analytes triggering a volume phase transition.
The PhCs introduced in this work show a high sensitivity not only to small biomolecules, but also to large analytes such as glycopolymers or proteins. This enables the PhC to act as a sensor that detects analytes without the need for complex equipment.
Due to their periodic dielectric structure, PhCs prevent the propagation of specific wavelengths. A change of the periodicity parameters is thus indicated by a change in the reflected wavelengths. In the case explored here, the PhC sensors are implemented as periodically structured responsive hydrogels in the form of an inverse opal.
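For orientation, the readout principle sketched above is commonly described by the Bragg–Snell relation for opal-type photonic crystals (standard optics, not quoted in the abstract):

\[ \lambda_{\max} = 2\, d_{111} \sqrt{n_{\mathrm{eff}}^{2} - \sin^{2}\theta} \]

where d_111 is the spacing of the (111) lattice planes, n_eff the effective refractive index of the hydrogel/pore composite and θ the angle of incidence. Swelling or shrinking of the hydrogel changes d_111 and thereby shifts the reflected wavelength, which is what makes a binding event visible.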
The stimuli-sensitive inverse opal hydrogels (IOHs) were prepared using a sacrificial opal template of monodisperse silica particles. First, monodisperse silica particles were assembled into a hexagonally packed structure via vertical deposition onto glass slides. The obtained silica crystals, also named colloidal crystals (CCs), exhibit structural color. Subsequently, the CC templates were embedded in a polymer matrix with low-fouling properties. The polymer matrices were composed of oligo(ethylene glycol) methacrylate derivatives (OEGMAs) that render the hydrogels thermoresponsive. Finally, the silica particles were etched away to produce highly porous hydrogel replicas of the CCs. Importantly, the inner structure, and thus the ability of the formed IOHs to diffract light, was maintained.
The IOH membrane was shown to have interconnected pores, with both the pore diameters and the interconnections between the pores measuring several hundred nanometres. This enables not only the detection of small analytes, but also the detection of large analytes that can diffuse into the nanostructured IOH membrane. Various recognition unit – analyte model systems, such as benzoboroxole – 1,2-diols, biotin – avidin and mannose – concanavalin A, were studied by incorporating functional comonomers of benzoboroxole, biotin and mannose into the copolymers. The incorporated recognition units specifically bind to certain low and high molar mass biomolecules, namely to certain saccharides, catechols, glycopolymers or proteins.
Their specific binding strongly changes the overall hydrophilicity, thus modulating the swelling of the IOH matrices and, in consequence, drastically changing their internal periodicity. This swelling is amplified by the thermoresponsive properties of the polymer matrix. The shift of the interference band gap due to the specific molecular recognition is easily visible to the naked eye (shifts of up to 150 nm). Moreover, preliminary trials were undertaken to detect even larger entities. To this end, antibodies were immobilized on hydrogel platforms via polymer-analogous esterification. These platforms incorporate comonomers made of tri(ethylene glycol) methacrylate end-functionalized with a carboxylic acid. In these model systems, the bacteria analytes are too big to penetrate into the IOH membranes and can only interact with their surfaces. The selected model bacteria, such as Escherichia coli, show a specific affinity to antibody-functionalized hydrogels. Surprisingly, in the case of the functionalized IOHs, this study produced only weak color shifts, possibly opening a path to the direct detection of living organisms, which will require further investigation.
The main objective of this dissertation is to analyse the prerequisites, expectations, apprehensions, and attitudes of students studying computer science who are working towards a bachelor's degree. The research also investigates the students' learning styles according to the Felder-Silverman model. These investigations are part of an attempt to help reduce the dropout/shrinkage rate among students and to suggest a better learning environment.
The first investigation starts with a survey conducted at the computer science department of the University of Baghdad to investigate the attitudes of computer science students in an environment dominated by women, showing the differences in attitudes between male and female students in different study years. Students are admitted to university studies via a centrally controlled admission procedure that depends mainly on their final score at school. This leads to a high percentage of students studying subjects they do not want. Our analysis shows that 75% of the female students do not regret studying computer science although it was not their first choice. According to statistics from previous years, women manage to succeed in their studies and often graduate at the top of their class. We finish with a comparison of attitudes between freshman students of two different cultures and two different university enrolment procedures (the University of Baghdad in Iraq and the University of Potsdam in Germany), which have opposite gender majorities.
The second investigation took place at the Department of Computer Science at the University of Potsdam in Germany and analyzes the learning styles of students in the three major fields of study offered by the department (computer science, business informatics, and computer science teaching). Since students from these fields usually take some courses together, investigating the differences in their learning styles is important in order to know which changes in teaching methods are needed to address these different students. It was a two-stage study using two questionnaires: the main one is based on the Index of Learning Styles questionnaire of B. A. Solomon and R. M. Felder, and the second questionnaire investigated the students' attitudes towards the findings of their own first questionnaire. Our analysis shows differences in learning style preferences between male and female students of the different study fields, as well as differences between students of the different specialties (computer science, business informatics, and computer science teaching).
The third investigation looks closely at the difficulties, issues, apprehensions and expectations of freshman students studying computer science. The study took place at the computer science department of the University of Potsdam with a volunteer sample of students. The goal is to determine and discuss the difficulties and issues they face in their studies that may lead them to consider dropping out, changing their field of study, or changing university. The research continued with the same sample of students (with business informatics students being the majority) through more than three semesters. Difficulties and issues during the study were documented, as well as the students' attitudes, apprehensions, and expectations. Some of the professors' and lecturers' opinions on, and solutions to, some of the students' problems were also documented. Many participants had apprehensions and difficulties, especially with informatics subjects. Some business informatics participants began to think about changing university, in particular when they reached their third semester; others thought about changing their field of study. By the end of this research, most of the participants had continued their studies (either the programme they started with or the new one they changed to) without leaving the higher education system.
Seventeenth-century theories of word order have found considerable resonance in current research on information structure. However, these allusions tend to be unconscious. How should historiographers evaluate such similarities, well beyond establishing their continuity? Can conclusions perhaps be drawn on this complex topic, which is relevant to today's debate, by taking into account the diverse opposing positions and the intense discourse of the eighteenth century?
We study the interplay between analysis on manifolds with singularities and complex analysis and develop new structures of operators based on the Mellin transform and tools for iterating the calculus for higher singularities. We refer to the idea of interpreting boundary value problems (BVPs) in terms of pseudo-differential operators with a principal symbolic hierarchy, taking into account that BVPs are a source of cone and edge operator algebras. The respective cone and edge pseudo-differential algebras in turn are the starting point of higher corner theories. In addition there are deep relationships between corner operators and complex analysis. This will be illustrated by the Mellin symbolic calculus.
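For orientation, the Mellin transform referred to here can be taken in its standard form on the half-axis (the weighted variants used in the cone and edge calculi are refinements of this):

\[ (\mathcal{M}u)(z) = \int_0^{\infty} r^{z-1}\, u(r)\, \mathrm{d}r, \qquad z \in \mathbb{C}, \]

so that operator-valued Mellin symbols act in the complex covariable z, which is where the interplay with complex analysis mentioned above enters.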
The visceral protein transthyretin (TTR) is frequently affected by oxidative post-translational protein modifications (PTPMs) in various diseases. Thus, better insight into structure-function relationships due to oxidative PTPMs of TTR should contribute to the understanding of pathophysiologic mechanisms. While the in vivo analysis of TTR in mammalian models is complex, time- and resource-consuming, transgenic Caenorhabditis elegans expressing hTTR provide an optimal model for the in vivo identification and characterization of drug-mediated oxidative PTPMs of hTTR by means of matrix assisted laser desorption/ionization – time of flight – mass spectrometry (MALDI-TOF-MS). Herein, we demonstrated that hTTR is expressed in all developmental stages of Caenorhabditis elegans, enabling the analysis of hTTR metabolism during the whole life-cycle. The suitability of the applied model was verified by exposing worms to D-penicillamine and menadione. Both drugs induced substantial changes in the oxidative PTPM pattern of hTTR. Additionally, for the first time a covalent binding of both drugs with hTTR was identified and verified by molecular modelling.
This dissertation presents scientific results obtained between December 2012 and August 2016. Its central topic is the simulation of X-ray absorption processes in various condensed-phase systems. More precisely, near-edge X-ray absorption fine structure (NEXAFS) spectra and X-ray photoelectron spectra (XPS) are calculated. In both cases an X-ray photon is absorbed by a molecular system. Owing to the high photon energy, a strongly bound core electron is excited. In XPS, this electron reaches continuum states with a measurable kinetic energy. From the incident photon energy and the kinetic energy of the emitted electron, the binding energy can be calculated, which is the central quantity of XPS. In NEXAFS spectroscopy, the core electron is excited into unoccupied bound states; the central quantity is the absorption as a function of the incident photon energy. The first chapter of this thesis discusses the experimental methods and the characteristic quantities derived from them in detail.
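For orientation, the binding energy mentioned above follows the standard photoelectric relation (the work-function term φ depends on the spectrometer calibration and is not discussed in this summary):

\[ E_{\mathrm{B}} = h\nu - E_{\mathrm{kin}} - \phi \]

with hν the incident photon energy and E_kin the measured kinetic energy of the emitted core electron.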
Experimental spectra often show many resonances whose interpretation is difficult because reference materials are lacking. In such cases it is useful to simulate the spectra with quantum chemical methods. The mathematical and physical methodology required for this is discussed in the second chapter.
The first system investigated is graphene. In experimental work, the surface was modified with a bromine plasma. The NEXAFS spectra measured afterwards differ substantially from those of the untreated surface. Using periodic DFT calculations, various lattice defects and brominated systems were investigated and their NEXAFS spectra simulated. The simulations make it possible to analyse the contributions of different excitation centres. The calculations lead to the conclusion that lattice defects are largely responsible for the observed changes.
Poly(vinyl alcohol) (PVA) was treated as the second system. Here the aim was to determine how strongly molecular motion broadens the peaks in the XP spectrum, and how strongly intermolecular interactions affect the peak positions and peak broadening. For this system, a combination of molecular dynamics and quantum chemical methods was used. Oligomer models propagated under an (ab initio) potential served as structures. Along the resulting trajectory, snapshots of the geometries were extracted and used to calculate the XP spectra. The spectra are already reproduced very well with classical molecular dynamics, although the obtained peak widths are too small compared with experiment. The main cause of peak broadening is molecular motion; intermolecular interactions shift the peak positions by 0.6 eV towards lower excitation energies.
The third part of the thesis focuses on the NEXAFS spectra of ionic liquids (ILs). The experimentally observed spectra show a complex structure with many resonances. Two ILs were investigated. As geometries, cluster models extracted from experimental crystal structures were used. The calculated spectra allow the resonances to be assigned to excitation centres. In addition, a double resonance measured for the first time could be simulated and explained. Overall, the simulations significantly extend the interpretation of the spectra.
In all systems, a density functional theory based method (the so-called transition-potential method) was used to calculate the NEXAFS spectrum. Common wave-function-based methods, such as configuration interaction with single excitations (CIS), show a strong blue shift when a Hartree-Fock Slater determinant is used as the reference. We show that using core-excited determinants markedly improves both the resulting spectrum and the excitation energies. Furthermore, references from density functional calculations are also tested, as well as references with fractional occupation numbers for core electrons. The results obtained with the different references are compared; it turns out that references with fractional occupation numbers do not further improve the spectrum, and the influence of the electronic structure method used is rather small.
Subcultures creating culture
(2016)
The purpose of this work is to apply the methods of textual semiotics to subcultures, in particular to the little-known glam subculture. Subcultures have been the main research field of the Birmingham Centre for Contemporary Cultural Studies, known for its interdisciplinary approach and for its focus on the creative aspects of subculture. Hebdige, in particular, introduced many semiotic elements into his work, such as aberrant decoding after Eco and cultural creativity via bricolage after Lévi-Strauss. His definition of subculture as symbolic resistance has been criticized by subsequent post-subcultural researchers for its abstractness and lack of cohesion.
Semiotics was eventually expelled from the set of tools used in sociology for the analysis of subcultures. Nowadays, studies of subcultures have a strong ethnographic focus. Due to terminological proliferation and a descriptive approach, it is difficult to compare them on a common basis.
Textual semiotics, through the concept of the semiosphere developed by Lotman, makes it possible to go back to the intuitions of Hebdige, organizing the semiotic elements already present in his work into a wider system of interpretation. The semiosphere offers a coherent theoretical horizon as a basis for further analysis, and a new methodological perspective focusing on the cultural. In this thesis, the work of Lotman is applied to the study of a subculture for the first time.
In this contribution, we use first principles to study the co-adsorption and catalytic behavior of CO and O2 on a single gold atom deposited at defective magnesium oxide surfaces. Using cluster models and point-charge embedding within a density functional theory framework, we simulate the CO oxidation reaction for Au1 on differently charged oxygen vacancies of MgO(001) to rationalize its experimentally observed lack of catalytic activity. Our results show that: (1) co-adsorption is weakly supported at F0 and F2+ defects but not at F1+ sites, (2) electron redistribution from the F0 vacancy via the Au1 cluster to the adsorbed molecular oxygen weakens the O2 bond, as required for a sustainable catalytic cycle, (3) a metastable carbonate intermediate can form on defects of the F0 type, (4) only a small activation barrier exists for the highly favorable dissociation of CO2 from F0, and (5) the moderate adsorption energy of the gold atom on the F0 defect cannot prevent insertion of molecular oxygen into the defect. Due to the lack of protection of the color centers, the surface becomes invariably repaired by the surrounding oxygen and the catalytic cycle is irreversibly broken in the first oxidation step.
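For reference, adsorption energies of this kind are conventionally computed from total energies of the combined and separated systems (the sign convention and the exact fragments chosen are assumptions here, not stated in the abstract):

\[ E_{\mathrm{ads}} = E(\mathrm{surface} + \mathrm{X}) - E(\mathrm{surface}) - E(\mathrm{X}) \]

where X is the adsorbed species (CO, O2, or the Au atom itself) and the surface is the defective MgO cluster model; a more negative value indicates stronger binding.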
We present new experimental data of the low-temperature metastable region of liquid water derived from high-density synthetic fluid inclusions (996–916 kg m−3) in quartz. Microthermometric measurements include: (i) prograde (upon heating) and retrograde (upon cooling) liquid–vapour homogenisation. We used single ultrashort laser pulses to stimulate vapour bubble nucleation in initially monophase liquid inclusions. Water densities were calculated based on prograde homogenisation temperatures using the IAPWS-95 formulation. We found retrograde liquid–vapour homogenisation temperatures in excellent agreement with IAPWS-95. (ii) Retrograde ice nucleation. Raman spectroscopy was used to determine the nucleation of ice in the absence of the vapour bubble. Our ice nucleation data in the doubly metastable region are inconsistent with the low-temperature trend of the spinodal predicted by IAPWS-95, as liquid water with a density of 921 kg m−3 remains in a homogeneous state during cooling down to a temperature of −30.5 °C, where it is transformed into ice whose density corresponds to zero pressure. (iii) Ice melting. Ice melting temperatures of up to 6.8 °C were measured in the absence of the vapour bubble, i.e. in the negative pressure region. (iv) Spontaneous retrograde and, for the first time, prograde vapour bubble nucleation. Prograde bubble nucleation occurred upon heating at temperatures above ice melting. The occurrence of prograde and retrograde vapour bubble nucleation in the same inclusions indicates a maximum of the bubble nucleation curve in the ϱ–T plane at around 40 °C. The new experimental data represent valuable benchmarks to evaluate and further improve theoretical models describing the p–V–T properties of metastable water in the low-temperature region.
Savannas cover a broad geographical range across continents and are a biome best described by a mix of herbaceous and woody plants. The former create a more or less continuous layer, while the latter should be sparse enough to leave an open canopy. What has long intrigued ecologists is how these two competing plant life forms coexist.
Initially attributed to resource competition, coexistence was considered the stable outcome of a root niche differentiation between trees and grasses. The importance of environmental factors became evident later, when data from moister environments demonstrated that tree cover was often lower than what the rainfall conditions would allow for. Our current understanding relies on the interaction of competition and disturbances in space and time. Hence, the influence of grazing and fire and the corresponding feedbacks they generate have been keenly investigated. Grazing removes grass cover, initiating a self-reinforcing process propagating tree cover expansion. This is known as the encroachment phenomenon. Fire, on the other hand, imposes a bottleneck on the tree population by halting the recruitment of young trees into adulthood. Since grasses fuel fires, a feedback linking grazing, grass cover, fire, and tree cover is created. In African savannas, which are the focus of this dissertation, these feedbacks play a major role in the dynamics.
The importance of these feedbacks came into sharp focus when the notion of alternative states began to be applied to savannas. Alternative states in ecology arise when different states of an ecosystem can occur under the same conditions. According to this an open savanna and a tree-dominated savanna can be classified as alternative states, since they can both occur under the same climatic conditions. The aforementioned feedbacks are critical in the creation of alternative states. The grass-fire feedback can preserve an open canopy as long as fire intensity and frequency remain above a certain threshold. Conversely, crossing a grazing threshold can force an open savanna to shift to a tree-dominated state. Critically, transitions between such alternative states can produce hysteresis, where a return to pre-transition conditions will not suffice to restore the ecosystem to its original state.
In the chapters that follow, I will cover aspects relating to the coexistence mechanisms and the role of feedbacks in tree-grass interactions. Coming back to the coexistence question, due to the overwhelming focus on competition and disturbance another important ecological process was neglected: facilitation. Therefore, in the first study within this dissertation I examine how facilitation can expand the tree-grass coexistence range into drier conditions. For the second study I focus on another aspect of savanna dynamics which remains underrepresented in the literature: the impacts of inter-annual rainfall variability upon savanna trees and the resilience of the savanna state. In the third and final study within this dissertation I approach the well-researched encroachment phenomenon from a new perspective: I search for an early warning indicator of the process to be used as a prevention tool for savanna conservation. In order to perform all this work I developed a mathematical ecohydrological model of Ordinary Differential Equations (ODEs) with three variables: soil moisture content, grass cover and tree cover.
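To illustrate the structure of such a model (the functional forms and parameter values below are purely hypothetical placeholders, not the equations of the dissertation), a three-variable soil-moisture/grass/tree ODE system might be integrated as follows:

```python
# Illustrative sketch only: soil moisture M, grass cover G, tree cover T.
from scipy.integrate import solve_ivp

def savanna_odes(t, y, rain=0.8, grazing=0.2, fire=0.3):
    M, G, T = y
    dM = rain * (0.5 + 0.5 * G) - 0.6 * M - 0.8 * M * (G + T)  # grass aids infiltration
    dG = 1.2 * M * G * (1.0 - G - T) - grazing * G              # grass growth minus grazing
    dT = 0.4 * M * T * (1.0 - T) - fire * G * T                 # tree growth minus grass-fuelled fire
    return [dM, dG, dT]

sol = solve_ivp(savanna_odes, (0.0, 200.0), [0.3, 0.4, 0.1])
print(sol.y[:, -1])  # approximate long-term (M, G, T) state for these settings
```

The sketch encodes the two feedbacks emphasized above: grass improves infiltration (facilitation) while simultaneously fuelling the fire term that acts on trees.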
Facilitation: Results showed that the removal of grass cover through grazing was detrimental to trees under arid conditions, contrary to expectation based on resource competition. The reason was that grasses preserved moisture in the soil through infiltration and shading, thus ameliorating the harsh conditions for trees in accordance with the Stress Gradient Hypothesis. The exclusion of grasses from the model further demonstrated this: tree cover was lower in the absence of grasses, indicating that the benefits of grass facilitation outweighed the costs of grass competition for trees. Thus, facilitation expanded the climatic range where savannas persisted into drier conditions.
Rainfall variability: By adjusting the model to current rainfall patterns in East Africa, I simulated conditions of increasing inter-annual rainfall variability for two distinct mean rainfall scenarios: semi-arid and mesic. Alternative states of tree-less grassland and tree-dominated savanna emerged in both cases. Increasing variability reduced semi-arid savanna tree cover to the point that at high variability the savanna state was eliminated, because variability intensified resource competition and strengthened the fire disturbance during high rainfall years. Mesic savannas, on the other hand, became more resilient along the variability gradient: increasing rainfall variability created more opportunities for the rapid growth of trees to overcome the fire disturbance, boosting the chances of savannas persisting and thus increasing mesic savanna resilience.
Preventing encroachment: The breakdown in the grass-fire feedback caused by heavy grazing promoted the expansion of woody cover. This could be irreversible due to the presence of alternative states of encroached and open savanna, which I found along a simulated grazing gradient. When I simulated different short term heavy grazing treatments followed by a reduction to the original grazing conditions, certain cases converged to the encroached state. Utilising woody cover changes only during the heavy grazing treatment, I developed an early warning indicator which identified these cases with a high risk of such hysteresis and successfully distinguished them from those with a low risk. Furthermore, after validating the indicator on encroachment data, I demonstrated that it appeared early enough for encroachment to be prevented through realistic grazing-reduction treatments.
Though this dissertation is rooted in the theory of savanna dynamics, its results can have significant applications in savanna conservation. Facilitation has only recently become a topic of interest within the savanna literature. Given the threat of increasing droughts and a general anticipation of drier conditions in parts of Africa, insights stemming from this research may provide clues for preserving arid savannas. The impacts of rainfall variability on savannas have not yet been thoroughly studied either; conflicting results appear as a result of the lack of a robust theoretical understanding of plant interactions under variable conditions. My work and other recent studies argue that such conditions may increase the importance of fast resource acquisition, creating a ‘temporal niche’. Woody encroachment has been extensively studied as a phenomenon, though not from the perspective of its early identification and prevention. The development of an encroachment forecasting tool, like the one presented in this work, could protect both the savanna biome and the societies that depend on it for (economic) survival. All the studies presented here are bound by the attempt to broaden the horizons of savanna-related research in order to deal with extreme conditions and phenomena, be it through the enhancement of the coexistence debate, the study of an imminent external threat, or the development of a management-oriented tool for the conservation of savannas.
Das Widerspenstige bändigen
(2016)
In school practice as well as in the scholarly literature, teachers' actions are credited with a substantial influence on the quality of classroom instruction. Yet although extensive normative ideas about good teaching exist, little is known about the reasons teachers have for their pedagogical actions. Teachers' actions can only be adequately grasped if education is understood both as the transmission of culture to the next generation and as a process of self- and world-understanding originating from the learning subject. The resulting demands on teachers necessarily stand in contradiction to one another; this holds especially for a society with great cultural and social heterogeneity. Studies searching for relationships between personality, pedagogical knowledge or competencies and teaching behaviour frequently assume that this behaviour is determined by such factors and reduce it to cognitive aspects and characteristics oriented toward external norms. More fruitful for answering the question of teachers' reasons are studies that describe professionalism as a way of relating to a particular structural framework, one shaped by contradictions and requiring decisions within the fields of tension of pedagogical relationships. Subject-scientific learning theory offers a basis for understanding learning in institutional contexts as proceeding from the students' learning interests. Building on this, teaching can be understood as supporting processes of self- and world-understanding through appreciation, understanding, and the offering of alternative horizons of meaning. Teachers' actions can then be understood as a meaning-giving engagement with the resulting demands, as well as with institutional demands, by means of societal structures of meaning. The acting subject makes sense of itself and of the world with the help of meanings, which can be understood as reinterpretations of societal structures of meaning owed to the particularity of one's biography, social position, and life situation. In the empirical procedure, the transition from sequential to comparative analyses makes it possible to reconstruct positionings as thematically specific meaning-reasoning relations that reach beyond the concrete situation of action. From this, situation-independent structural moments of the object 'teaching at vocational schools' as well as complex, situation-related subjective meaning-reasoning patterns are derived. As essential structural characteristics, the key categories 'Deutungsmacht' (interpretive power) and 'instrumental pedagogical relationship' can be developed from the empirical material with the aid of further theoretical lenses. Since interpretive power depends on acceptance, and since in instrumental relationships a cooperative engagement with the object of teaching and learning occurs at best sporadically, these categories make it possible to understand asymmetric, metastable arrangements between a teacher and students. Empirically, interpretive power appears in the variants 'absolute claim', 'acceptance of fragility', and 'acceptance of the legitimacy of being questioned'.
For the second key category, the variants 'structural shaping', 'unspecific general-human character', and 'external shaping' of the instrumental pedagogical relationship occur. The meaning-reasoning patterns partly show inconsistencies and transitions in the positionings with respect to the variants described. Only for some of the patterns can efforts toward appreciating and understanding the students be plausibly inferred; the same holds for openness to revising the patterns. The patterns, such as 'assertive-enduring readjustment', 'directive-personalizing practice', or 'regulating-flexible managing', are to be understood as modes of coping with the contingent pedagogical (conflict) situations to which the case descriptions refer. The respective teacher used the pattern in the case described, which, however, permits no statement about which patterns the teacher would draw on in other cases. The results of the present study can serve as a heuristic or theoretical lens that supports teachers in making sense of their own pedagogical actions, for instance in further training designed as case consultation. Connections to other theoretical approaches to teachers' actions are possible, as is a revised classification of those approaches. The options for capturing such action through scientific approaches are thereby expanded.
Precision horticulture encompasses site- or tree-specific management in fruit plantations. Spatially resolved data, i.e. data for each individual tree at the production site, are of decisive importance, since they may enable customized and therefore resource-efficient production measures.
The present thesis involves an examination of the apparent electrical conductivity of the soil (ECa), the plant water status spatially measured by means of the crop water stress index (CWSI), and the fruit quality (e.g. fruit size) for Prunus domestica L. (plums) and Citrus x aurantium, Syn. Citrus paradisi (grapefruit). The goals of the present work were i) characterization of the 3D distribution of the apparent electrical conductivity of the soil and variability of the plant’s water status; ii) investigation of the interaction between ECa, CWSI, and fruit quality; and iii) an approach for delineating management zones with respect to managing trees individually.
To that end, the main investigations took place in the plum orchard. The plantation has a slope of 3° on Pleistocene and post-Pleistocene substrates in a semi-humid climate (Potsdam, Germany) and covers an area of 0.37 ha with 156 trees of the cultivar ˈTophit Plusˈ on a Wavit rootstock. The plantation was established in 2009 with one- and two-year-old trees spaced 4 m apart along the irrigation line and 5 m between the rows. The trees were watered three times a week by a drip irrigation system positioned 50 cm above ground level, providing 1.6 l per tree per irrigation event. Using geoelectric measurements, the apparent electrical conductivity of the upper soil (0.25 m) was measured for each tree with an electrode spacing of 0.5 m (4-point light hp). In this manner, the plantation was spatially mapped with respect to soil ECa. Additionally, tomography measurements were performed for 3D mapping of the soil ECa, together with spot checks of drilled cores to a depth of up to 1 m. Vegetative, generative, and fruit quality data were collected for each tree. The instantaneous plant water status was determined in spot checks with the established Scholander method of water potential analysis (Scholander pressure bomb) as well as by thermal imaging. An infrared camera (ThermaCam SC 500) was used for the thermal imaging, mounted on a tractor 3.3 m above ground level. The thermal images (320 x 240 px) of the canopy surface were taken with an aperture of 45° and a geometric resolution of 8.54 x 6.41 mm. From the canopy temperature readings in the thermal images, cross-checked against manual temperature measurements of a dry and a wet reference leaf, the crop water stress index (CWSI) was calculated. Adjustments to the CWSI for measurements in a semi-humid climate were developed, whereby the reference temperatures were obtained automatically from the thermal images.
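The CWSI described above is conventionally computed from the canopy temperature together with the wet and dry reference leaf temperatures, in the empirical form CWSI = (Tcanopy − Twet) / (Tdry − Twet). The following Python sketch illustrates that standard calculation only; the function and the example values are hypothetical and do not reproduce the thesis's own processing routine or its climate-specific adjustments.

```python
import numpy as np

def crop_water_stress_index(t_canopy, t_wet, t_dry):
    """Empirical CWSI from canopy and reference leaf temperatures.

    CWSI = (Tcanopy - Twet) / (Tdry - Twet); 0 indicates a fully
    transpiring (unstressed) canopy, 1 a non-transpiring (stressed) one.
    Values are clipped to [0, 1] to absorb measurement noise.
    """
    t_canopy = np.asarray(t_canopy, dtype=float)
    cwsi = (t_canopy - t_wet) / (t_dry - t_wet)
    return np.clip(cwsi, 0.0, 1.0)

# Hypothetical canopy temperatures [deg C] extracted from one thermal image,
# with wet and dry reference leaf temperatures of 22.5 and 31.0 deg C.
canopy_pixels = [26.1, 27.4, 25.8, 28.9]
print(crop_water_stress_index(canopy_pixels, t_wet=22.5, t_dry=31.0))
```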
The rating (Bonitur) data were transformed to an approximately normal distribution by means of a variance-stabilizing transformation. The statistical analyses as well as the automatic evaluation routine were implemented in several MATLAB® scripts (R2010b and R2016a) and a free program (spatialtoolbox). The hot-spot analysis served to check whether an observed spatial pattern is statistically significant; the method was evaluated against an established k-means analysis. For a comparative test of the hot-spot analysis, data from a grapefruit plantation (Adana, Turkey) were collected, including soil ECa, trunk circumference, and yield. That plantation had 179 trees on a Xerofluvent soil with clay and clay-loam texture. The interaction between the critical values from the soil and plant water status information and the vegetative and generative plant growth variables was examined using ANOVA.
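Hot-spot analyses of tree-level data of this kind are commonly implemented with the local Getis-Ord Gi* statistic, which flags clusters of unusually high or low values relative to the orchard-wide mean. The sketch below shows that common form with a simple binary distance-band weight matrix; it is an assumption-laden illustration in Python, not the MATLAB/spatialtoolbox routine used in the thesis, and all names and the weighting scheme are hypothetical.

```python
import numpy as np

def getis_ord_gi_star(values, coords, band):
    """Local Getis-Ord Gi* statistic with a binary distance-band weight matrix.

    values : 1-D array of tree-level observations (e.g. trunk circumference)
    coords : (n, 2) array of tree positions in metres
    band   : neighbourhood radius in metres; trees within this distance,
             including the tree itself, receive weight 1 (band must be
             smaller than the orchard extent, otherwise the variance term
             collapses to zero)

    Returns z-scores: large positive values indicate hot spots, large
    negative values cold spots, values near zero spatial randomness.
    """
    x = np.asarray(values, dtype=float)
    coords = np.asarray(coords, dtype=float)
    n = x.size

    # Pairwise distances and binary spatial weights (w_ii = 1).
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = (dist <= band).astype(float)

    x_bar = x.mean()
    s = np.sqrt((x ** 2).mean() - x_bar ** 2)   # global standard deviation
    w_sum = w.sum(axis=1)

    numerator = w @ x - x_bar * w_sum
    denominator = s * np.sqrt((n * (w ** 2).sum(axis=1) - w_sum ** 2) / (n - 1))
    return numerator / denominator
```

Zones could then be delineated by thresholding the z-scores, for example |z| ≥ 1.96 for hot/cold spots and the remainder as the random zone.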
The study indicates that the variability of soil and plant information in fruit production is high, even in small orchards. The spatial patterns found in the soil ECa remained stable across years (r = 0.88 in 2011-2012 and r = 0.71 in 2012-2013). It was also demonstrated that CWSI determination may be feasible in a semi-humid climate: a correlation of r = -0.65 (p < 0.0001) with the established method of leaf water potential analysis was found. Relating the ECa from various depths to the plant variables yielded the most significant association for the topsoil, the layer into which the irrigation water was delivered; a correlation between yield and topsoil ECa of r = 0.52 was determined. By means of the hot-spot analysis, extreme values in the spatial data could be identified, and these extremes served to delineate zones (cold-spot, random, hot-spot). The random zone showed the highest correlation with the plant variables.
In summary, it may be said that the cumulative water use efficiency (WUEc) was enhanced at high crop load. While the CWSI had no effect on fruit quality, the interaction of CWSI and WUEc even outweighed the impact of soil ECa on fruit quality in the irrigated production system. In the plum orchard, irrigation was relevant for obtaining high-quality produce even in the semi-humid climate.
This article first outlines different ways in which psycholinguists have dealt with linguistic diversity and illustrates these approaches with three familiar cases from research on language processing, language acquisition, and language disorders. The second part focuses on the role of morphology and of morphological variability across languages for psycholinguistic research. The specific phenomena examined concern stem-formation morphology and inflectional classes; they illustrate how experimental research informed by linguistic typology can lead to new insights.
Kritische Anthropologie?
(2016)
This article compares Max Horkheimer’s and Theodor W. Adorno’s foundation of the Frankfurt Critical Theory with Helmuth Plessner’s foundation of Philosophical Anthropology. While Horkheimer’s and Plessner’s paradigms are mutually incompatible, Adorno’s „negative dialectics“ and Plessner’s „negative anthropology“ (G. Gamm) can be seen as complementing one another. Jürgen Habermas at one point sketched a complementary relationship between his own publicly communicative theory of modern society and Plessner’s philosophy of nature and human expressivity; though he subsequently came to doubt this, he later reaffirmed it. Faced with the „life power“ in „high capitalism“ (Plessner), the ambitions for a public democracy in a pluralistic society have to be broadened from an argumentative focus (Habermas) to include the human condition and the expressive modes of our experience as essentially embodied persons. The article discusses some possible aspects of this complementarity under the title of a „critical anthropology“ (H. Schnädelbach).
This article is a response to calls in prior research for more longitudinal analyses to better understand the foundations of PSM and related prosocial values. There is wide agreement that sorting out whether PSM-related values are stable or developable is crucial for theory-building, but also for tailoring hiring practices and human resource development programs. The article summarizes existing theoretical expectations, which turn out to be partially conflicting, and tests them against multiple waves of data from the German Socio-Economic Panel Study covering a time period of sixteen years. It finds that PSM-related values of public employees are stable rather than dynamic but tend to increase with age and decrease with organizational membership. The article also examines cohort effects, which have been neglected in prior work, and finds moderate evidence of differences between those born during the Second World War and later generations.
This article explores a recent performance of excerpts from T.S. Eliot’s Four Quartets (1935/36–1942) entitled Engaging Eliot: Four Quartets in Word, Color, and Sound as an example of live poetry. In this context, Eliot’s poem can be analysed as an auditory artefact that interacts strongly with other oral performances (welcome addresses and artists’ conversations), as well as with the musical performance of Christopher Theofanidis’s quintet “At the Still Point” at the end of the opening of Engaging Eliot. The event served as an introduction to a 13-day art exhibition and engaged in a re-evaluation of Eliot’s poem after 9/11: while its first part emphasises the connection between Eliot’s poem and Christian doctrine, its second part – especially the combination of poetry reading and musical performance – highlights the philosophical and spiritual dimensions of Four Quartets.
Kommunikative Vernunft
(2016)
Jürgen Habermas explicates the concept of communicative reason. He explains the key assumptions of the philosophy of language and the social theory associated with this concept. Also discussed are the category of the life-world, the role of the body-mind difference for the consciousness of exclusivity in our access to subjective experience, and the role of emotions and perceptions in the context of a theory of communicative action. The question of how the various validity claims associated with the performance of speech acts are redeemed is related to processes of social learning and to the role of negative experiences. Finally, the interview deals with the relationship between religion and reason and the importance of religion in modern, post-secular societies. Questions about the philosophical culture of our present times are discussed at the end of the conversation.
The present study approaches the Spanish postposed constructions creo Ø and creo yo ‘[p], [I] think’ from a cognitive-constructionist perspective. It is argued that the two constructions are to be distinguished from one another because creo Ø has a subjective function, whereas in creo yo the intersubjective dimension is particularly prominent. The investigation takes both a qualitative and a quantitative perspective. With regard to the latter, the problem of quantitative representativity is addressed. The discussion poses the question of how empirical research can feed back into theory, more precisely into the framework of Cognitive Construction Grammar. The data analyzed here are retrieved from the corpora Corpus de Referencia del Español Actual and Corpus del Español.
The establishment of the European Stability Mechanism (ESM) by an international treaty outside the EU treaties entails several disadvantages. The fragmentation of legal sources weakens the European institutions and their legitimacy, and the ESM cannot readily draw on the structures and staff of the European Commission. It therefore makes sense to integrate the ESM into Community law, which requires a legal basis. Taking into account the German and French positions as well as the relevant case law of the European Court of Justice, the thesis concludes that the existing EU treaties contain no legal basis for integrating the ESM into Community law. Against this background, an amendment of the Treaty on the Functioning of the European Union (TFEU) under the ordinary treaty revision procedure is unavoidable; it should create an explicit legal basis for a stability mechanism under Community law. The thesis develops a concrete drafting proposal for such a legal basis, on which the Council could establish a Community-law stability mechanism by regulation. In addition, the essential structural principles for such a mechanism are developed with regard to institutional ownership, governance, financing, democratic control, and the available financial assistance instruments.
This thesis is situated in the field of quality management (QM) in public organizations. Specifically, it asks which factors influence an effective implementation of the QM system Common Assessment Framework (CAF) in German federal authorities. Hypotheses on possible influencing factors were derived from sociological neo-institutionalism. Based on a systematic case selection, the following organizations were examined: the Bundeskartellamt (Federal Cartel Office), the Bundeszentralamt für Steuern (Federal Central Tax Office), and the Staatsbibliothek zu Berlin (Berlin State Library). For the empirical part of the thesis, semi-structured guided interviews were conducted with experts from the selected organizations. These interviews were evaluated by means of a qualitative content analysis and then analysed in a theory-guided manner using a cross-case synthesis following Yin (2014).
Ultimately, three decisive conditions for an effective CAF implementation in federal authorities can be derived. First, the formal support of the respective top management, which takes an active role within the CAF project and should also purposefully involve all middle management, for example by taking over the QM project lead. Second, for all organizational members to act in a goal-coherent manner, the various steering instruments need to be interlinked within a medium-term overall strategy and thus formally institutionalized. Third, the formal institutionalization of a QM unit located close to top management and outside the line departments is recommended. The case studies showed that such units have greater potential to develop into QM and CAF competence centres and to shield staff from unnecessary work that would diminish their commitment to CAF.
With these results the thesis makes two decisive contributions. The research map of QM and CAF research in public organizations, especially at the federal level, previously showed numerous blank spots, some of which this thesis has been able to fill. Moreover, on the basis of this research it is now possible to provide practitioners in public administration with concrete recommendations for action, whether they want to implement CAF in their organization for the first time or to readjust an already completed introduction of the QM instrument.
Background: Low back pain (LBP) is one of the leading causes of limited activity and disability worldwide. Impaired motor control has been found to be one of the possible factors related to the development or persistence of LBP. In particular, motor control strategies seemed to be altered in situations requiring reactive trunk responses that counteract sudden external forces. However, muscular responses have mostly been assessed in (quasi-)static testing situations under simplified laboratory conditions, and comprehensive investigations of motor control strategies during dynamic everyday situations are lacking. The present research project aimed to investigate muscular compensation strategies following unexpected gait perturbations in people with and without LBP. A novel treadmill stumbling protocol was tested for its validity and reliability in provoking muscular reflex responses at the trunk and the lower extremities (study 1). Thereafter, motor control strategies in response to sudden perturbations were compared between people with LBP and asymptomatic controls (CTRL) (study 2). In accordance with more recent concepts of motor adaptation to pain, it was hypothesized that pain may have profound consequences for motor control strategies in LBP. Therefore, it was investigated whether differences in compensation strategies consisted of changes local to the painful area at the trunk or were also present in remote areas such as the lower extremities.
Methods: All investigations were performed on a custom-built split-belt treadmill simulating trip-like events by unexpected rapid deceleration impulses (amplitude: 2 m/s; duration: 100 ms; onset 200 ms after heel contact) at a baseline velocity of 1 m/s. A total of 5 (study 1) and 15 (study 2) right-sided perturbations were applied during the walking trials. Muscular activities were assessed by surface electromyography (EMG), recorded at 12 trunk muscles and 10 (study 1) or 5 (study 2) leg muscles, respectively. EMG onset latencies [ms] were retrieved by a semi-automatic detection method. EMG amplitudes (root mean square, RMS) were assessed within 200 ms post perturbation and normalized to full strides prior to any perturbation [RMS%]. Latency and amplitude analyses were performed for each muscle individually as well as for pooled data of muscles grouped by location. Characteristic pain intensity scores (CPIS; 0-100 points, von Korff), based on the mean intensity ratings reported for current, worst, and average pain over the last three months, were used to allocate participants to LBP (≥30 points) or CTRL (≤10 points). Test-retest reproducibility between measurements was determined by a compilation of reliability measures. Differences in muscular activities between LBP and CTRL were analysed descriptively for individual muscles; differences based on grouped muscles were tested statistically using a multivariate analysis of variance (MANOVA, α = 0.05).
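To make the amplitude outcome concrete, the following Python sketch computes the RMS of a 200 ms post-perturbation window and normalizes it to the mean RMS of unperturbed baseline strides (RMS%). It is a minimal illustration under assumed inputs; the function and parameter names are hypothetical and the thesis's actual processing pipeline is not reproduced here.

```python
import numpy as np

def rms(segment):
    """Root mean square of an EMG segment."""
    segment = np.asarray(segment, dtype=float)
    return np.sqrt(np.mean(segment ** 2))

def normalized_rms(emg, fs, perturb_onset, baseline_strides):
    """RMS% of the 200 ms window after a perturbation, normalized to the
    mean RMS of unperturbed baseline strides.

    emg              : 1-D EMG trace of one muscle
    fs               : sampling rate [Hz]
    perturb_onset    : sample index of the perturbation onset
    baseline_strides : list of (start, stop) sample indices of full strides
                       recorded before any perturbation
    """
    window = emg[perturb_onset : perturb_onset + int(0.2 * fs)]
    baseline = np.mean([rms(emg[a:b]) for a, b in baseline_strides])
    return 100.0 * rms(window) / baseline
```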
Results: Thirteen individuals were included in the analysis of study 1. EMG latencies revealed reflex muscle activities following the perturbation (mean: 89 ms). The corresponding EMG amplitudes were on average five-fold those assessed in unperturbed strides, though characterized by high inter-subject variability. Test-retest reliability of muscle latencies showed high reproducibility, both for trunk and leg muscles. In contrast, reproducibility of amplitudes was only weak to moderate for individual muscles but increased when amplitudes were assessed as a location-specific summary of grouped muscles. Seventy-six individuals were eligible for data analysis in study 2. Group allocation according to CPIS resulted in n=25 for LBP and n=29 for CTRL. Descriptive analysis of activity onsets revealed longer delays for all muscles in LBP compared to CTRL (trunk muscles: mean 10 ms; leg muscles: mean 3 ms). Onset latencies of grouped muscles differed significantly between LBP and CTRL for the right (p=0.009) and left (p=0.007) abdominal muscle groups. EMG amplitude analysis showed high variability in activation levels between individuals, independent of group assignment or location. Statistical testing of grouped muscles indicated no significant difference in amplitudes between LBP and CTRL.
Discussion: The present research project showed that perturbed treadmill walking is suitable for provoking comprehensive reflex responses at the trunk and lower extremities, in terms of both onset latencies and amplitudes of reflex activity. Moreover, it demonstrated that sudden loading under dynamic conditions provokes altered reflex timing of the muscles surrounding the trunk in people with LBP compared to CTRL. In line with previous investigations, compensation strategies seemed to be deployed in a task-specific manner, with differences between LBP and CTRL being evident predominantly on the ventral side. No muscular alterations beyond the trunk were found when assessed during the automated task of locomotion. While rehabilitation programs tailored to LBP are still under debate, it is tempting to urge the implementation of dynamic sudden trunk loading to enhance motor control and thereby improve spinal protection. Moreover, with respect to the consistently observed task specificity of muscular compensation strategies, such a rehabilitation program should be rich in variety.