Business Process Management has become an integral part of modern organizations in the private and public sectors for improving their operations. In the course of Business Process Management efforts, companies and organizations assemble large process model repositories with hundreds or thousands of business process models carrying a large amount of information. With the advent of such large collections, new challenges arise, such as structuring and managing the models, their maintenance, and their quality assurance.
These challenges are addressed by business process architectures, which have been introduced for organizing and structuring business process model collections. A variety of business process architecture approaches have been proposed that align business processes along aspects of interest, e.g., goals, functions, or objects. They provide a high-level categorization of single processes while ignoring their interdependencies, thus hiding valuable information. The production of goods or the delivery of services is often realized by a complex system of interdependent business processes. Hence, taking a holistic view of business process interdependencies becomes a major necessity for organizing and analyzing processes and for assessing the impact of their (re-)design. Visualizing business process interdependencies reveals hidden and implicit information in a process model collection.
In this thesis, we present a novel Business Process Architecture approach for representing and analyzing business process interdependencies on an abstract level. We propose a formal definition of our Business Process Architecture approach, design correctness criteria, and develop analysis techniques for assessing their quality. We describe a methodology for applying our Business Process Architecture approach top-down and bottom-up. This includes techniques for extracting a Business Process Architecture from, and decomposing it to, process models while considering consistency between the business process architecture and process model levels. Using our extraction algorithm, we present a novel technique to identify and visualize data interdependencies in Business Process Data Architectures. Our Business Process Architecture approach provides business process experts, managers, and other users of a process model collection with an overview that allows them to reason about a large set of process models and to understand and analyze their interdependencies in a facilitated way. In this regard, we evaluated our Business Process Architecture approach in an experiment and provide implementations of selected techniques.
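The core idea of reasoning over process interdependencies can be sketched as a directed graph whose nodes are whole processes. The following is a hypothetical illustration only; the class, the edge semantics, and the process names are invented for this sketch and are not the thesis' formal definition.

```python
from collections import defaultdict

class ProcessArchitecture:
    """Toy business process architecture: processes as nodes,
    interdependencies as directed edges."""

    def __init__(self):
        self.edges = defaultdict(set)  # process -> processes depending on it

    def add_dependency(self, source, target):
        self.edges[source].add(target)

    def impact_set(self, process):
        """All processes transitively affected by redesigning `process`."""
        seen, stack = set(), [process]
        while stack:
            for nxt in self.edges[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

pa = ProcessArchitecture()
pa.add_dependency("Order Intake", "Production")
pa.add_dependency("Production", "Shipping")
pa.add_dependency("Production", "Invoicing")
impacted = pa.impact_set("Order Intake")
```

A plain reachability query like `impact_set` already surfaces the kind of hidden information a flat, category-only process catalog cannot show.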
The dissertation "Theoriebasierte Betreuung vom Schulpraktikum im Lehramtsstudium Englisch" (theory-based supervision of the school internship in the English teacher-training program) contributes to the scholarly discourse by linking theoretical strands of professionalization research and applied linguistics with investigations into the university-level didactic support and supervision of the first teaching internship of the teacher-training program, the subject-didactic day internship, at the University of Potsdam. An interaction-analytic approach was used to further develop the university-didactic setting of a cross-disciplinary, subject-based supervision of internships in the complex context of the school. The implementation of corresponding formats into the regular course of study was evaluated at regular intervals in an iterative study spanning three years.
Well-developed writing skills are considered a central prerequisite for success in school. Although written text production is undisputedly an integral part of German language instruction, it is frequently lamented that students' actual writing skills are insufficient. A look at the research in subject didactics shows that writing competence remains a phenomenon that is difficult to define, and it is contested within writing-didactics research how writing competence, particularly after the acquisition of basic writing skills, can best be developed. Moreover, for the subject German, particularly the task area of composing texts ("Texte verfassen"), an empirical foundation of the subject didactics has hardly been established so far.
Against this background, the present thesis developed a program for fostering the written narrative skills of fifth graders, which was subsequently deployed in regular classroom practice and evaluated alongside its use. Methodologically, the conception, implementation, and evaluation of the intervention program follow the "standards of (didactic) development research" postulated by Einsiedler.
The first step, the conception of the intervention program, aimed to develop a specific didactic concept, namely a combination of language-structure-related and (learning-)process-related elements, grounded in linguistics and justified in pedagogical-didactic terms. The integration of theoretical approaches from different disciplines required probing existing approaches for internal points of connection in order to create a comprehensive, mutually complementary frame of reference. Drawing on models and findings from research on writing development, this achieved the integration, frequently demanded within writing research but hitherto missing, of structural approaches from linguistic writing research with the processual approaches favored in cognitive psychology.
On this basis, a task-based program with a total of eight intervention modules was developed. A task-based design was chosen because it enables not only teacher-guided yet learner-centered instruction but also adaptive instruction, and thus meets the specific demands of teaching in heterogeneous learning groups, which seems sensible given the increasing (linguistic-cultural) heterogeneity of regular school classes.
In a second step, the feasibility of the intervention program in the school context was tested in a pilot study. Controlling for its practical feasibility (feasibility hypothesis), the program's effectiveness was then examined with respect to gains in product-based writing measures (vocabulary variance, sentence complexity, lexical density, degrees of cohesion, text length) and the stability of the intervention effects. This was done using a quasi-experimental design, specifically a two-group pretest-posttest-follow-up design with control variables.
The basis for this was a deliberately heterogeneous sample of almost 200 students, since, given the aforementioned increasing heterogeneity of regular school classes, the program's suitability for teaching heterogeneous learning groups had to be examined alongside its effectiveness and sustainability. The results of this examination indicate that the development of such an intervention program was successful.
Despite methodological difficulties, which the thesis discusses in detail with regard to their causes and effects, the results, particularly those on vocabulary variance and sentence complexity, can, considering the exploratory character of the study, likewise be taken as evidence of the program's effectiveness.
Synchronization of large ensembles of oscillators is an omnipresent phenomenon observed in different fields of science such as physics, engineering, and the life sciences. The simplest setup is that of globally coupled phase oscillators, where all oscillators contribute to a global field which in turn acts on all of them. This formulation of the problem was pioneered by Winfree and Kuramoto. Such a setup makes it possible to analyze these systems in terms of global variables. In this work we describe nontrivial collective dynamics in oscillator populations coupled via mean fields in terms of global variables, considering problems which cannot be directly reduced to the standard Kuramoto and Winfree models.
In the first part of the thesis we adopt a method introduced by Watanabe and Strogatz. The main idea is that a system of identical oscillators of a particular type can be described by a low-dimensional system of global equations. This approach enables us to perform a complete analytical analysis for a special but vast set of initial conditions. Furthermore, we show how the approach can be extended to certain nonidentical systems. We apply the Watanabe-Strogatz approach to arrays of Josephson junctions and to systems of identical phase oscillators with leader-type coupling.
In the following parts of the thesis we consider the self-consistent mean-field theory method, which can be applied to general nonidentical globally coupled systems of oscillators, both with and without noise. For the systems considered, the regime in which the global field rotates uniformly is the most important one. With this approach, such solutions of the self-consistency equation can be found analytically in parametric form for an arbitrary distribution of frequencies and coupling parameters, in both the noise-free and the noisy case.
We apply this method to a deterministic Kuramoto-type model with generic coupling and to an ensemble of spatially distributed oscillators with leader-type coupling. Furthermore, with the proposed self-consistent approach we fully characterize rotating-wave solutions of a noisy Kuramoto-type model with generic coupling and of an ensemble of noisy oscillators with bi-harmonic coupling.
Whenever possible, a complete analysis of global dynamics is performed and compared with direct numerical simulations of large populations.
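The global-variable description can be illustrated with a minimal simulation of the classical Kuramoto model (a sketch with assumed parameters, not code or a model from the thesis): every oscillator interacts with the others only through the complex mean field.

```python
import numpy as np

def order_parameter(theta):
    """Complex mean field Z = R * exp(i*Psi) of the phase array theta."""
    return np.mean(np.exp(1j * theta))

def simulate_kuramoto(n=1000, coupling=1.5, t_max=50.0, dt=0.01, seed=0):
    """Euler integration of dtheta_i/dt = omega_i + K*R*sin(Psi - theta_i),
    i.e. each oscillator feels only the global field (R, Psi)."""
    rng = np.random.default_rng(seed)
    omega = 0.5 * rng.standard_cauchy(n)   # Lorentzian frequencies, width 0.5
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(int(t_max / dt)):
        z = order_parameter(theta)
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
    return np.abs(order_parameter(theta))

# For Lorentzian width gamma = 0.5 the classical critical coupling is
# K_c = 2*gamma = 1; with K = 1.5 > K_c the population partially
# synchronizes and the mean-field amplitude R settles at a nonzero value.
r_final = simulate_kuramoto()
```

The mean-field structure is what makes low-dimensional descriptions such as the Watanabe-Strogatz reduction or the self-consistency equation possible in the first place: the full N-dimensional dynamics enters only through `(r, psi)`.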
Many clinical rapid-test systems require pre-prepared or purified analytes together with freshly prepared solutions. Away from standardized laboratory conditions, for example in developing countries or crisis regions, such prerequisites can often be met only at great expense.
In addition, the required sensitivity poses major challenges for the development of easy-to-handle test systems.
Autocatalytic reactions, which can be triggered by very low initiator concentrations, offer a promising route to signal amplification here.
For this reason, the first part of the present thesis investigates the behavior of the autocatalytic arsenite-iodate reaction in a microfluidic channel, considering in particular the diffusive and convective influences on the reaction kinetics in comparison with macroscopic bulk volumes.
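Why a tiny initiator concentration suffices for signal amplification can be seen from a generic autocatalytic rate law (a logistic toy model; the actual arsenite-iodate kinetics are more complex and are not reproduced here):

```python
import numpy as np

def autocatalytic_switch(c0=1e-6, k=1.0, t_max=40.0, dt=0.01):
    """Euler integration of the generic autocatalytic rate law
    dc/dt = k * c * (1 - c): the product catalyses its own formation.
    Even a tiny initiator concentration c0 drives the system to full
    conversion after an induction period of roughly ln(1/c0)/k."""
    steps = int(t_max / dt)
    c = np.empty(steps)
    c[0] = c0
    for i in range(1, steps):
        c[i] = c[i - 1] + dt * k * c[i - 1] * (1.0 - c[i - 1])
    return c

c = autocatalytic_switch()
# c starts at 1e-6 and switches to near-complete conversion,
# which is the amplification behaviour exploited for readout.
```

The sharp switch from "almost nothing" to "fully converted" is what turns a minute amount of analyte-generated initiator into a macroscopically detectable signal.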
In the second part, thermoresponsive hydrogels are combined with a channel-structured paper network into a novel, capillary-driven, externally controllable microfluidic system. The concept presented here, controlling a paper-based lab-on-a-chip (LOC) system by means of hydrogels, opens the way to more complex, controllable point-of-care test (POCT) systems. A thermal stimulus, for example, changes the swelling behavior of a hydrogel such that the stored liquid is released and transported into the system by the capillary force of the paper channel. The properties of the gel network can be tuned so that liquid release would even be possible at body temperature, making an application entirely without further equipment conceivable. Chemicals or enzymes required for a given application can conveniently be pre-stored in dried form in the paper substrate and brought into solution when needed.
In the final, third part of the thesis, a hydrogel-driven, antibody-based rapid microorganism test for Escherichia coli is presented. In addition, a simple method for functionalizing a hydrogel with biomolecules via EDC/NHS coupling is introduced.
Intuitively, it is clear that neural processes and eye movements in reading are closely connected, but only a few studies have investigated both signals simultaneously. Instead, the usual approach is to record them in separate experiments and to subsequently consolidate the results. The studies that have coregistered eye movements and EEG in natural reading, however, have shown that this is feasible and have contributed greatly to the understanding of oculomotor processes in reading. The present thesis builds upon that work, assessing to what extent coregistration can be helpful for sentence processing research.
In the first study, we explore how well coregistration is suited to study subtle effects common to psycholinguistic experiments by investigating the effect of distance on dependency resolution. The results demonstrate that researchers must improve the signal-to-noise ratio to uncover more subdued effects in coregistration. In the second study, we compare oscillatory responses in different presentation modes. Using robust effects from world knowledge violations, we show that the generation and retrieval of memory traces may differ between natural reading and word-by-word presentation. In the third study, we bridge the gap between our knowledge of behavioral and neural responses to integration difficulties in reading by analyzing the EEG in the context of regressive saccades. We find the P600, a neural indicator of recovery processes, when readers make a regressive saccade in response to integration difficulties.
The results in the present thesis demonstrate that coregistration can be a useful tool for the study of sentence processing. However, they also show that it may not be suitable for some questions, especially if they involve subtle effects.
The Tien-Shan and the neighboring Pamir region are two of the largest mountain belts in the world. Their deformation is dominated by intermontane basins bounded by active thrust and reverse faults. The Tien-Shan mountain belt is characterized by a very high rate of seismicity along its margins as well as within its interior. The study area of the thesis presented here, the western part of the Tien-Shan region, is currently seismically active with small and moderate-sized earthquakes. However, at the end of the 19th and beginning of the 20th century, this region was struck by a remarkable series of large-magnitude (M>7) earthquakes, two of which reached magnitude 8.
Those large earthquakes occurred prior to the installation of the global digital seismic network and were therefore recorded only by analog seismic instruments. Processing analog data entails several difficulties; for example, the true parameters of the recording system are not always known. Another complicated task is the digitization of those records, a very time-consuming and delicate step. Therefore, a special set of techniques was developed and modern methods were adapted for the analysis of the digitized instrumental data.
The main goal of the presented thesis is to evaluate the impact of the large-magnitude (M≥7.0) earthquakes that occurred at the turn of the 19th to the 20th century in the Tien-Shan region on the overall regional tectonics. A further objective is to assess the accuracy of the previously estimated source parameters of those earthquakes, which were mainly based on macroseismic observations, and to re-estimate them from the instrumental data. An additional aim of this study is to develop tools and methods for a faster and more productive use of analog seismic data in modern seismology.
In this thesis, the ten strongest and most interesting historical earthquakes in the Tien-Shan region are analyzed. The methods and tools for digitizing and processing the analog seismic data are presented. The source parameters of the two major M≥8.0 earthquakes in the Northern Tien-Shan are re-estimated in individual case studies, which have been published as peer-reviewed articles in reputed journals. Additionally, the Sarez-Pamir earthquake and its connection with the Usoy landslide, one of the largest landslides in the world, is investigated by seismic modeling; these results are also published as a research paper.
With the developed techniques, the source parameters of seven further major earthquakes in the region are determined and their impact on the regional tectonics is investigated. The large magnitudes of those earthquakes are confirmed by instrumental data. The focal mechanisms of these earthquakes were determined, providing evidence for the responsible faults or fault systems.
Because of their potentially health-promoting effects, the polyphenolic isoflavones are of great interest for human nutrition. Numerous experimental and epidemiological studies show a preventive effect of the soy isoflavones daidzein and genistein with respect to hormone-dependent and age-related diseases such as breast and prostate cancer, osteoporosis, cardiovascular diseases, and the menopausal syndrome. The metabolization and bioactivation of these secondary plant compounds by the human intestinal microbiota differs between individuals. Only in a small fraction of the Western population is the daidzein metabolite equol formed by specific gut bacteria. One isolated equol-producing bacterium of the human intestinal tract is Slackia isoflavoniconvertens. Using this species, the hitherto unknown enzymes involved in the conversion of daidzein and genistein were to be identified and characterized.
Fermentation experiments with S. isoflavoniconvertens showed that the genes of the daidzein- and genistein-converting enzymes are not expressed constitutively but must be induced. Using two-dimensional difference gel electrophoresis, six proteins were detected that were induced in an S. isoflavoniconvertens culture in the presence of daidzein. Based on individual peptide sequences, a gene cluster was sequenced that contains, arranged in the same orientation, the genes of the daidzein-induced proteins. Sequence comparisons also identified gene products equivalent to the S. isoflavoniconvertens proteins in other equol-producing bacteria. After heterologous expression in Escherichia coli, three of these genes were identified by enzymatic activity assays as daidzein reductase (DZNR), dihydrodaidzein reductase (DHDR), and tetrahydrodaidzein reductase (THDR). Combining the E. coli cell extracts led to the complete conversion of daidzein via dihydrodaidzein to equol. In addition to daidzein, the DZNR also converted genistein to dihydrogenistein, at a higher conversion rate than the reduction of daidzein to dihydrodaidzein. Enzymatic activity assays with the cell extract of S. isoflavoniconvertens likewise showed a faster conversion of genistein. Combining the recombinant DHDR and THDR led to the conversion of dihydrodaidzein to equol. The corresponding metabolite 5-hydroxyequol could not be detected as the end product of genistein metabolism. To purify the three identified reductases, they were genetically fused to a Strep-tag and purified by affinity chromatography. The remaining daidzein-induced proteins IfcA, IfcBC, and IfcE were likewise expressed in E. coli and purified as Strep fusion proteins.
Comparative activity assays identified the induced protein IfcA as a dihydrodaidzein racemase, which catalyzed the conversion of the (R)- and (S)-enantiomers of dihydrodaidzein and dihydrogenistein to the corresponding racemate. Besides the electron-transfer flavoprotein IfcBC, the THDR, DZNR, and IfcE were also identified as FAD-containing flavoproteins. IfcE moreover proved to be an iron-sulfur protein. After induction of the genes encoding the daidzein conversion, several mRNA transcripts of different lengths were formed, showing that transcription of the daidzein-induced gene cluster in S. isoflavoniconvertens does not proceed as a single operon system.
Based on the identified daidzein-converting enzymes, the mechanism of the bacterial conversion of isoflavones by S. isoflavoniconvertens can now be studied in depth. The determined gene sequences of the daidzein-induced proteins, together with the corresponding genes of other equol-producing bacteria, also open up the possibility of microbial metagenome analysis in the human intestinal tract.
This dissertation investigates the working memory mechanism subserving human sentence processing and its relative contribution to processing difficulty as compared to syntactic prediction. Within the last decades, evidence for a content-addressable memory system underlying human cognition in general has accumulated (e.g., Anderson et al., 2004). In sentence processing research, it has been proposed that this general content-addressable architecture is also used for language processing (e.g., McElree, 2000).
Although there is a growing body of evidence from various kinds of linguistic dependencies that is consistent with a general content-addressable memory subserving sentence processing (e.g., McElree et al., 2003; Van Dyke, 2006), the case of reflexive-antecedent dependencies has challenged this view. It has been proposed that the processing of reflexive-antecedent dependencies relies on syntactic-structure-based memory access rather than on cue-based retrieval within a content-addressable framework (e.g., Sturt, 2003).
Two eye-tracking experiments on Chinese reflexives were designed to tease apart accounts assuming a syntactic-structure-based memory access mechanism from cue-based retrieval (implemented in ACT-R as proposed by Lewis and Vasishth, 2005).
In both experiments, interference effects were observed from noun phrases which syntactically do not qualify as the reflexive's antecedent but match the animacy requirement the reflexive imposes on its antecedent. These results are interpreted as evidence against a purely syntactic-structure based memory access. However, the exact pattern of effects observed in the data is only partially compatible with the Lewis and Vasishth cue-based parsing model.
Therefore, an extension of the Lewis and Vasishth model is proposed. Two principles are added to the original model, namely 'cue confusion' and 'distractor prominence'.
Although interference effects are generally interpreted in favor of a content-addressable memory architecture, an alternative explanation for interference effects in reflexive processing has been proposed which, crucially, might reconcile interference effects with a structure-based account.
It has been argued that interference effects do not necessarily reflect cue-based retrieval interference in a content-addressable memory but might equally well be accounted for by interference effects which have already occurred at the moment of encoding the antecedent in memory (Dillon, 2011).
Three experiments (eye-tracking and self-paced reading) on German reflexives and Swedish possessives were designed to tease apart cue-based retrieval interference from encoding interference. The results of all three experiments suggest that there is no evidence that encoding interference affects the retrieval of a reflexive's antecedent.
Taken together, these findings suggest that the processing of reflexives can be explained with the same cue-based retrieval mechanism that has been invoked to explain syntactic dependency resolution in a range of other structures. This supports the view that the language processing system is located within a general cognitive architecture, with a general-purpose content-addressable working memory system operating on linguistic expressions.
Finally, two experiments (self-paced reading and eye-tracking) using Chinese relative clauses were conducted to determine the relative contribution to sentence processing difficulty of working-memory processes as compared to syntactic prediction during incremental parsing.
Chinese has the cross-linguistically rare property of being a language with subject-verb-object word order and pre-nominal relative clauses. This property leads to opposing predictions of expectation-based
accounts and memory-based accounts with respect to the relative processing difficulty of subject vs. object relatives.
Previous studies have shown contradictory results, which has been attributed to different kinds of local ambiguities confounding the materials (Lin and Bever, 2011). The two experiments presented here are the first to compare Chinese relative clauses in syntactically unambiguous contexts.
The results of both experiments were consistent with the predictions of the expectation-based account of sentence processing but not with the memory-based account. From these findings, I conclude that any theory of human sentence processing needs to take into account the power of predictive processes unfolding in the human mind.
This dissertation addresses the question of how linguistic structures can be represented in working memory. We propose a memory-based computational model that derives offline and online complexity profiles in terms of a top-down parser for minimalist grammars (Stabler, 2011). The complexity metric reflects the amount of time an item is stored in memory. The presented architecture links grammatical representations stored in memory directly to the cognitive behavior by deriving predictions about sentence processing difficulty.
Results from five different sentence comprehension experiments were used to evaluate the model's assumptions about memory limitations. The predictions of the complexity metric were compared to the locality (integration and storage) cost metric of Dependency Locality Theory (Gibson, 2000). Both metrics make comparable offline and online predictions for four of the five phenomena. The key difference between the two metrics is that the proposed complexity metric accounts for the structural complexity of intervening material. In contrast, DLT's integration cost metric considers the number of discourse referents, not the syntactic structural complexity.
We conclude that the syntactic analysis plays a significant role in memory requirements of parsing. An incremental top-down parser based on a grammar formalism easily computes offline and online complexity profiles, which can be used to derive predictions about sentence processing difficulty.
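The notion of a complexity metric that "reflects the amount of time an item is stored in memory" can be made concrete with a toy top-down parse (a deliberately simplified sketch over an invented context-free grammar, not Stabler's minimalist-grammar parser or the thesis' metric): every predicted node is time-stamped when pushed onto the prediction stack, and its tenure is the number of parse steps until it is popped.

```python
# Hypothetical toy grammar and lexicon, for illustration only.
GRAMMAR = {
    "S":  ["NP", "VP"],
    "NP": ["Det", "N"],
    "VP": ["V", "NP"],
}
LEXICON = {"the": "Det", "editor": "N", "draft": "N", "rejected": "V"}

def tenure_profile(words):
    """Top-down parse; each stack item remembers the step it was pushed.
    Returns the list of tenures (pop step minus push step)."""
    stack = [("S", 0)]
    step, i, tenures = 0, 0, []
    while stack:
        step += 1
        sym, born = stack.pop()
        tenures.append(step - born)
        if sym in GRAMMAR:
            # push children in reverse so the leftmost is expanded first
            for child in reversed(GRAMMAR[sym]):
                stack.append((child, step))
        else:
            assert LEXICON[words[i]] == sym  # scan a terminal
            i += 1
    return tenures

t = tenure_profile(["the", "editor", "rejected", "the", "draft"])
# The VP predicted at "S" waits longest on the stack (tenure 4 here);
# summed or maximal tenure then serves as a memory-cost profile.
```

Material intervening between a prediction and its fulfillment increases tenure, which is exactly how such a metric can be made sensitive to the structural complexity of interveners, unlike a referent-counting metric such as DLT's integration cost.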
The dissertation proposes that the spread of photography and popular cinema in 19th- and 20th-century India has shaped an aesthetic and affective code integral to the reading and interpretation of Indian English novels, particularly when they address photography and/or cinema film, as in the case of the four corpus texts. In analyzing the nexus between 'real' and 'reel', the dissertation shows how the texts address the reader as media consumer and virtual image projector. Furthermore, the study discusses the Indian English novel against the backdrop of the cultural and medial transformations of the 20th century to elaborate how these influenced the novel's aesthetics. Drawing upon reception aesthetics, the author devises the concept of the 'implied spectator' to analyze the aesthetic impact of the novels' images as visual textures.
No God in Sight (2005) by Altaf Tyrewala comprises a string of 41 interior monologues, loosely connected through their narrators' random encounters in Mumbai in the year 2000. Although marked by continuous perspective shifts, the text creates a sensation of acute immediacy. Here, the reader is addressed as implied spectator and is sutured into the narrated world like a film spectator, an effect created through the use of continuity editing as a narrative technique.
Similarly, Ruchir Joshi’s The Last Jet Engine Laugh (2002) coll(oc)ates disparate narrative perspectives and explores photography as an artistic practice, historiographic recorder and epistemological tool. The narrative appears guided by the random viewing of old photographs by the protagonist and primary narrator, the photographer Paresh Bhatt. However, it is the photographic negative and the practice of superimposition that render this string of episodes and different perspectives narratively consequential and cosmologically meaningful. Photography thus marks the perfect symbiosis of autobiography and historiography.
Tabish Khair’s Filming. A Love Story (2007) immerses readers in the cine-aesthetic of 1930s and 40s Bombay film, the era in which the embedded plot is set. Plotline, central scenes and characters evoke the key films of Indian cinema history such as Satyajit Ray’s “Pather Panchali” or Raj Kapoor’s “Awara”. Ultimately, the text written as film dissolves the boundary between fiction and (narrated) reality, reel and real, thereby showing that the images of individual memory are inextricably intertwined with and shaped by collective memory. Ultimately, the reconstruction of the past as and through film(s) conquers trauma and endows the Partition of India as a historic experience of brutal contingency with meaning.
The Bioscope Man (Indrajit Hazra, 2008) is a picaresque narrative set in Calcutta - India’s cultural capital and birthplace of Indian cinema at the beginning of the 20th century. The autodiegetic narrator Abani Chatterjee relates his rise and fall as silent film star, alternating between the modes of tell and show. He is both autodiegetic narrator and spectator or perceiving consciousness, seeing himself in his manifold screen roles. Beyond his film roles however, the narrator remains a void. The marked psychoanalytical symbolism of the text is accentuated by repeated invocations of dark caves and the laterna magica. Here too, ‘reel life’ mirrors and foreshadows real life as Indian and Bengali history again interlace with private history. Abani Chatterjee thus emerges as a quintessentially modern man of no qualities who assumes definitive shape only in the lost reels of the films he starred in.
The final chapter argues that the static images and visual frames foregrounded in the texts serve an integral psychological function: premised upon linear perspective, they imply a singular, static subjectivity appealing to the postmodern subject. In the corpus texts, the rise of digital technology in the 1990s thus appears not so much to have displaced older image repertoires, practices, and media techniques as to have lent them greater visibility and appeal. Moreover, bricolage and pastiche emerge as cultural techniques that have marked modernity from its inception. What the novels thus perpetuate is a media archeology not entirely subservient to the poetics of the real. The permeable subject and the notion of the gaze as an active exchange, as encapsulated in the concept of darshan, ideas informing all four texts, bespeak the resilience of a mythical universe continually re-instantiated in new technologies and uses. Eventually, the novels convey a sense of subalternity to a substantially Hindu nationalist history and historiography, the centrifugal force of which developed in the twentieth century and continues into the present.
The sea level rise induced intensification of coastal floods is a serious threat to many regions in proximity to the ocean. Although severe flood events are rare, they can entail enormous damage costs, especially when built-up areas are inundated. Fortunately, the mean sea level advances slowly, leaving society enough time to adapt to the changing environment. Most commonly, this is achieved by the construction or reinforcement of flood defence measures such as dykes or sea walls, but land use planning and disaster management are also widely discussed options. Overall, although the projection of sea level rise impacts and the elaboration of adequate response strategies are amongst the most prominent topics in climate impact research, global damage estimates remain vague and mostly rely on the same assessment models. The thesis at hand contributes to this issue by presenting a distinctive approach which facilitates large-scale assessments as well as the comparability of results across regions. Moreover, we aim to improve the general understanding of the interplay between mean sea level rise, adaptation, and coastal flood damage.
Our undertaking is based on two basic building blocks. Firstly, we make use of macroscopic flood-damage functions, i.e. damage functions that provide the total monetary damage within a delineated region (e.g. a city) caused by a flood of a certain magnitude. After introducing a systematic methodology for the automatised derivation of such functions, we apply it to a total of 140 European cities and obtain a large set of damage curves utilisable for individual as well as comparative damage assessments. By scrutinising the resulting curves, we are further able to characterise the slope of the damage functions by means of a functional model. The proposed function has a sigmoidal shape in general but exhibits a power-law increase over the relevant range of flood levels, and we detect an average exponent of 3.4 for the considered cities. This finding represents an essential input for the subsequent elaborations on the general interrelations of the involved quantities.
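The slope characterisation described above can be illustrated with a minimal sketch. A generic sigmoid is assumed here; the half-damage level `h_half` and all numerical values are hypothetical placeholders, and only the exponent 3.4 is taken from the text:

```python
def damage_fraction(h, h_half=2.0, alpha=3.4):
    """Illustrative macroscopic damage function: fraction of the maximum
    damage caused by a flood of level h. For h well below h_half the
    curve grows like a power law with exponent alpha (the average value
    reported for the 140 European cities); towards large h it saturates
    sigmoidally at 1."""
    return h**alpha / (h**alpha + h_half**alpha)

# In the power-law regime, doubling the flood level multiplies the
# damage by roughly 2**3.4, i.e. about a factor of 10.6.
```

The sigmoidal form ensures a finite damage ceiling for a bounded building stock, while the power-law regime reproduces the steep growth observed at moderate flood levels.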
The second basic element of this work is extreme value theory, which is employed to characterise the occurrence of flood events and, in conjunction with a damage function, provides the probability distribution of the annual damage in the area under study. The resulting approach is highly flexible, as it allows for non-stationarity in all relevant parameters and can easily be applied to arbitrary regions, sea level, and adaptation scenarios. For instance, we find a doubling of the expected flood damage in the city of Copenhagen for a rise in mean sea level of only 11 cm. Following more general considerations, we succeed in deducing surprisingly simple functional expressions that describe the damage behaviour in a given region for varying mean sea levels, changing storm intensities, and supposed protection levels. We are thus able to project future flood damage by means of a reduced set of parameters, namely the aforementioned damage function exponent and the extreme value parameters. Similar examinations are carried out to quantify the aleatory uncertainty involved in these projections. In this regard, a decrease of (relative) uncertainty with rising mean sea levels is detected. Beyond that, we demonstrate how potential adaptation measures can be assessed in terms of a cost-benefit analysis. This is exemplified by the Danish case study of Kalundborg, where amortisation times for a planned investment are estimated for several sea level scenarios and discount rates.
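How extreme value theory and a damage function combine into an expected annual damage can be sketched with a hedged toy model: Gumbel-distributed annual maximum water levels, an arbitrary protection height below which no damage occurs, and the power-law damage regime from above. All parameter values here are hypothetical, not the thesis's estimates:

```python
import math
import random

def expected_annual_damage(n_years=100000, slr=0.0, protection=1.5,
                           mu=1.0, beta=0.25, seed=0):
    """Monte Carlo sketch: draw the annual-maximum water level from a
    Gumbel distribution (location mu, scale beta) via inverse-CDF
    sampling, shift it by the mean sea level rise `slr`, and apply an
    illustrative power-law damage above the protection height."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_years):
        level = mu - beta * math.log(-math.log(rng.random())) + slr
        if level > protection:
            total += (level - protection) ** 3.4
    return total / n_years  # expected annual damage (arbitrary units)
```

With a fixed seed the comparison between sea level scenarios is deterministic, so the monotone increase of expected damage with rising mean sea level can be checked directly.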
Anthropogenic activities have transformed the Earth's environment, not only on the local level but on the planetary scale, causing global change. Besides industrialization, agriculture is a major driver of global change. This change in turn impairs the agricultural sector, reducing crop yields through soil degradation, water scarcity, and climate change. However, the issue is more complex than it appears. Crop yields can be increased by the use of agrochemicals and fertilizers, which are mainly produced with fossil energy. This is important to meet the increasing food demand driven by global demographic change, which is further accelerated by changes in regional lifestyles. In this dissertation, we attempt to address this complex problem by exploring agricultural potential globally but on a local scale. For this, we considered the influence of lifestyle changes (dietary patterns) as well as technological progress and their effects on climate change, mainly greenhouse gas (GHG) emissions. Furthermore, we examined options for optimizing crop yields on currently cultivated land with the current cropping patterns by closing yield gaps. On this basis, we investigated at a five-minute resolution the extent to which food demand can be met locally and/or by regional and/or global trade. Globally, food consumption habits are shifting towards calorie-rich diets. Due to these dietary shifts combined with population growth, the global food demand is expected to increase by 60-110% between 2005 and 2050. Hence, one of the challenges to global sustainability is to meet the growing food demand while, at the same time, reducing agricultural inputs and environmental consequences. To address this problem, we used several freely available datasets and applied multiple interconnected analytical approaches, including artificial neural networks, scenario analysis, data aggregation and harmonization, a downscaling algorithm, and cross-scale analysis.
Globally, we identified sixteen dietary patterns between 1961 and 2007, with food intakes ranging from 1,870 to 3,400 kcal/cap/day. These dietary patterns also reflect the worldwide shift towards meat-rich diets. Due to the large share of animal products, the very high calorie diets that are common in the developed world exhibit high total per capita emissions of 3.7-6.1 kg CO2eq./day. This is higher than the total per capita emissions of 1.4-4.5 kg CO2eq./day associated with the low and moderate calorie diets common in developing countries. Currently, 40% of the global crop calories are fed to livestock, and the feed calorie use is four times the produced animal calories. However, these values vary from less than 1 kcal to greater than 10 kcal around the world. On the local and national scale, we found that local and national food production could meet the demand of 1.9 and 4.4 billion people in 2000, respectively. However, 1 billion people in Asia and Africa require intercontinental agricultural trade to meet their food demand. Nevertheless, these regions can become food self-sufficient by closing yield gaps, which requires location-specific inputs and agricultural management strategies. Such strategies include: fertilizers, pesticides, soil and land improvement, management targeted at mitigating climate-induced yield variability, and improved market accessibility. Closing yield gaps in particular requires global N-fertilizer application to increase by 45-73%, P2O5 by 22-46%, and K2O by 2-3 times compared to 2010. Considering population growth alone, we found that global agricultural GHG emissions will approach 7 Gt CO2eq./yr by 2050, while the global livestock feed demand will remain similar to that of 2000. This changes tremendously when diet shifts are also taken into account, resulting in GHG emissions of 20 Gt CO2eq./yr and an increase of 1.3 times in the crop-based feed demand between 2000 and 2050.
However, when population growth, diet shifts, and technological progress by 2050 are all considered, GHG emissions can be reduced to 14 Gt CO2eq./yr and the feed demand to nearly 1.8 times that of 2000. Additionally, our findings show that, depending on the progress made in closing yield gaps, the number of people depending on international trade can vary between 1.5 and 6 billion by 2050. In the medium term, this requires additional fossil energy. Furthermore, climate change, by affecting crop yields, will increase the need for international agricultural trade by 4% to 16%.
In summary, three general conclusions are drawn from this dissertation. First, changing dietary patterns will significantly increase crop demand, agricultural GHG emissions, and international food trade in the future compared to population growth alone. Second, these increases can be reduced by technology transfer and technological progress, which will enhance crop yields, decrease agricultural emission intensities, and increase livestock feed conversion efficiencies. Moreover, dependency on international trade can be lowered by consuming local and regional food products, by producing diverse types of food, and by closing yield gaps. Third, location-specific inputs and management options are required to close yield gaps. The sustainability of such inputs and management largely depends on which options are chosen and how they are implemented. However, while not every cultivated area needs to attain its potential yields to enable food security, closing yield gaps alone may not be enough to achieve food self-sufficiency in some regions. Hence, a combination of sustainably implemented agricultural intensification, expansion, and trade, as well as a shift in dietary habits towards a lower share of animal products, is required to feed the growing population.
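The feed figures quoted in this abstract imply a simple calorie bookkeeping, sketched here for illustration. The 40% feed share and the 4:1 feed-to-animal-calorie ratio are taken from the text; treating them as uniform global averages is a simplification:

```python
def calories_reaching_humans(crop_kcal, feed_share=0.40, conversion=0.25):
    """Crop calories available to humans: the share eaten directly plus
    the share routed through livestock, of which only `conversion`
    (1/4, since feed use is four times the animal calories produced)
    returns as animal-product calories."""
    direct = crop_kcal * (1.0 - feed_share)
    via_livestock = crop_kcal * feed_share * conversion
    return direct + via_livestock

# Under these assumptions, of 100 crop calories only 70 reach humans,
# which is why a lower share of animal products eases crop demand.
```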
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during process execution. Aiming at a better process understanding and improvement, this event data can be used to analyze processes using process mining techniques. Process models can be automatically discovered and the execution can be checked for conformance to specified behavior. Moreover, existing process models can be enhanced and annotated with valuable information, for example for performance analysis. While the maturity of process mining algorithms is increasing and more tools are entering the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Mapping the recorded events to activities of a given process model is essential for conformance checking, annotation and understanding of process discovery results. Current approaches try to abstract from events in an automated way that does not capture the required domain knowledge to fit business activities. Such techniques can be a good way to quickly reduce complexity in process discovery. Yet, they fail to enable techniques like conformance checking or model annotation, and potentially create misleading process discovery results by not using the known business terminology.
In this thesis, we develop approaches that abstract an event log to the same level that is needed by the business. Typically, this abstraction level is defined by a given process model. Thus, the goal of this thesis is to match events from an event log to activities in a given process model. To accomplish this goal, behavioral and linguistic aspects of process models and event logs as well as domain knowledge captured in existing process documentation are taken into account to build semiautomatic matching approaches. The approaches establish a pre-processing step for every available process mining technique that produces or annotates a process model, thereby reducing the manual effort for process analysts. While each of the presented approaches can be used in isolation, we also introduce a general framework for the integration of different matching approaches.
The approaches have been evaluated in case studies with industry partners, using a large industrial process model collection and simulated event logs. The evaluation demonstrates the effectiveness and efficiency of the approaches and their robustness towards non-conforming execution logs.
Methicillin-resistant Staphylococcus aureus (MRSA) is one of the most important antibiotic-resistant pathogens in hospitals and the community. Recently, a new generation of MRSA, the so-called livestock-associated (LA-)MRSA, has emerged, occupying food-producing animals as a new niche. LA-MRSA can be regularly isolated from economically important livestock species as well as the corresponding meats. The present thesis takes a methodological approach to confirm the hypothesis that LA-MRSA are transmitted along the pork, poultry and beef production chains from animals at the farm to meat on the consumer's table. To this end, two new concepts were developed, each adapted to a different data set.
A mathematical model of the pig slaughter process was developed which simulates the change in MRSA carcass prevalence during slaughter, with special emphasis on identifying critical process steps for MRSA transmission. Based on prevalences as the sole input variables, the model framework is able to estimate the average value range of both the MRSA elimination and contamination rate of each of the slaughter steps. These rates are then used to set up a Monte Carlo simulation of the slaughter process chain. The model concludes that, regardless of the initial extent of MRSA contamination, low outcome prevalences ranging between 0.15% and 1.15% can be achieved among carcasses at the end of slaughter. Thus, the model demonstrates that the standard procedure of pig slaughtering in principle includes process steps with the capacity to limit MRSA cross-contamination. Scalding and singeing were identified as the critical process steps for a significant reduction of superficial MRSA contamination.
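A minimal Monte Carlo sketch of such a slaughter chain might look as follows. The per-step elimination and contamination rates below are hypothetical placeholders, not the thesis's estimated values:

```python
import random

def simulate_chain(p0, steps, n_runs=10000, seed=1):
    """Monte Carlo sketch of carcass prevalence along a slaughter chain.
    `steps` is a list of (elimination_rate, contamination_rate) pairs;
    at each step a positive carcass can be cleared (e.g. by scalding or
    singeing) and a negative one can be cross-contaminated."""
    rng = random.Random(seed)
    positive = 0
    for _ in range(n_runs):
        carrier = rng.random() < p0  # initial MRSA status of the carcass
        for elim, contam in steps:
            if carrier:
                carrier = rng.random() >= elim   # chance of elimination
            else:
                carrier = rng.random() < contam  # chance of contamination
        positive += carrier
    return positive / n_runs

# Hypothetical rates: a strong early elimination step (scalding-like),
# followed by weaker steps with small cross-contamination chances.
steps = [(0.9, 0.01), (0.8, 0.02), (0.5, 0.01)]
```

As in the thesis's finding, the outcome prevalence converges to a low equilibrium largely independent of the initial contamination level.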
In the course of the German national monitoring program for zoonotic agents, MRSA prevalence and typing data are regularly collected covering the key steps of different food production chains. A new statistical approach is proposed for analyzing this cross-sectional set of MRSA data with regard to potential farm-to-fork transmission. For this purpose, chi-squared statistics were combined with the calculation of the Czekanowski similarity index to compare the distributions of strain-specific characteristics between the samples from the farm, carcasses after slaughter and meat at retail. The method was applied to the turkey and veal production chains, and the consistently high degrees of similarity revealed between all sample pairs indicate MRSA transmission along the chain.
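The Czekanowski similarity index has a compact form when computed on relative frequencies; a sketch follows, in which the strain-count vectors are invented for illustration:

```python
def czekanowski(counts_a, counts_b):
    """Czekanowski (proportional similarity) index between two samples
    of strain-specific characteristics (e.g. type frequencies): the
    overlap of the two relative frequency distributions, 1.0 for
    identical and 0.0 for completely disjoint distributions."""
    pa = [x / sum(counts_a) for x in counts_a]
    pb = [x / sum(counts_b) for x in counts_b]
    return sum(min(x, y) for x, y in zip(pa, pb))

# Two samples with identical type proportions score 1.0 even if the
# absolute sample sizes differ; disjoint type sets score 0.0.
```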
As the proposed methods are not specific to process chains or pathogens they offer a broad field of application and extend the spectrum of methods for bacterial transmission assessment.
The continuously increasing demand for rare earth elements in technical components of modern technologies brings the detection of new deposits into the focus of global exploration. One promising method to map important deposits globally might be remote sensing, since it has been used for a wide range of mineral mapping tasks in the past. This doctoral thesis investigates the capacity of hyperspectral remote sensing for the detection of rare earth element deposits. The definition and realization of a fundamental database on the spectral characteristics of rare earth oxides, rare earth metals and rare earth element bearing materials formed the basis of this thesis. To investigate these characteristics in the field, hyperspectral images of four outcrops in the Fen Complex, Norway, were collected in the near-field. A new methodology (named REEMAP) was developed to delineate rare earth element enriched zones. The main steps of REEMAP are: 1) multitemporal weighted averaging of multiple images covering the sample area; 2) sharpening the rare earth related signals using a Gaussian high-pass deconvolution technique calibrated on the standard deviation of a Gaussian bell-shaped curve represented by the full width at half maximum of the target absorption band; 3) mathematical modeling of the target absorption band and highlighting of rare earth elements. REEMAP was further adapted to different hyperspectral sensors (EO-1 Hyperion and EnMAP) and a new test site (Lofdal, Namibia). Additionally, the hyperspectral signatures of associated minerals were investigated to serve as proxies for the host rocks. Finally, the capacity and limitations of spectroscopic rare earth element detection approaches in general, and of the REEMAP approach specifically, were investigated and discussed. One result of this doctoral thesis is that eight rare earth oxides show robust absorption bands and can therefore be used for hyperspectral detection methods.
Additionally, the spectral signatures of iron oxides, iron-bearing sulfates, calcite and kaolinite can be used to detect metasomatic alteration zones and highlight the ore zone. One of the key results of this doctoral work is the developed REEMAP approach, which can be applied from the near-field to space. The REEMAP approach enables rare earth element mapping even for noisy images. Limiting factors are a low signal-to-noise ratio, a reduced spectral resolution, overlying materials, atmospheric absorption residuals and non-optimal illumination conditions. Another key result of this doctoral thesis is the finding that the future hyperspectral EnMAP satellite (with its specifications as published in June 2015) will theoretically be capable of detecting absorption bands of erbium, dysprosium, holmium, neodymium, europium, thulium and samarium. This thesis thus presents a new methodology, REEMAP, that enables a spatially extensive and rapid hyperspectral detection of rare earth elements in order to meet the demand for fast, extensive and efficient rare earth exploration (from the near-field to space).
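Step 2 of REEMAP calibrates the deconvolution kernel on a Gaussian whose full width at half maximum (FWHM) matches the target absorption band. The standard FWHM-to-sigma conversion it relies on can be sketched as follows (the numerical band widths used in the test are placeholders):

```python
import math

def gaussian_sigma_from_fwhm(fwhm):
    """Standard deviation of a Gaussian bell curve with the given full
    width at half maximum, e.g. of a rare earth absorption band:
    sigma = FWHM / (2 * sqrt(2 * ln 2)) ~= FWHM / 2.3548."""
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
```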
Analysis and modeling of transient earthquake patterns and their dependence on local stress regimes
(2015)
Investigations in the field of earthquake triggering and associated interactions, which include aftershock triggering as well as induced seismicity, are important for seismic hazard assessment due to the destructive power of earthquakes. One approach to studying earthquake triggering and interactions is the use of statistical earthquake models, which are based on knowledge of the basic seismicity properties, in particular the magnitude distribution and the spatiotemporal properties of triggered events.
In my PhD thesis I focus on some specific aspects of aftershock properties, namely the relative seismic moment release of aftershocks with respect to their mainshocks; the spatial correlation between aftershock occurrence and fault deformation; and the influence of aseismic transients on aftershock parameter estimation. For the analysis of aftershock sequences I choose a statistical approach, in particular the well-known Epidemic Type Aftershock Sequence (ETAS) model, which accounts for the input of background and triggered seismicity. For my specific purposes, I develop two ETAS model modifications in collaboration with Sebastian Hainzl. By means of this approach, I estimate the statistical aftershock parameters and perform simulations of aftershock sequences as well.
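The ETAS model referred to throughout superposes a constant background rate and Omori-Utsu aftershock contributions from all past events. A minimal sketch of its conditional intensity is given below; all parameter values are illustrative defaults, not the thesis's estimates:

```python
import math

def etas_rate(t, events, mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.1,
              m_min=2.0):
    """Conditional intensity of the ETAS model at time t: a constant
    background rate mu plus, for every past event (t_i, m_i), an
    Omori-Utsu term whose productivity grows exponentially with the
    event magnitude above the reference magnitude m_min."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += (K * math.exp(alpha * (m_i - m_min))
                     / (t - t_i + c) ** p)
    return rate
```

The modifications developed in the thesis (RETAS, m-ETAS, time-dependent background) alter individual terms of this intensity while keeping the same overall branching structure.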
In the case of the seismic moment release of aftershocks, I focus on the ratio of the cumulative seismic moment release to that of the mainshocks. Specifically, I investigate this ratio with respect to the focal mechanism of the mainshock and estimate an effective magnitude representing the cumulative aftershock energy (similar to Båth's law, which defines the average difference between the mainshock and the largest aftershock magnitude). Furthermore, I compare the observed seismic moment ratios with the results of the ETAS simulations. In particular, I test a restricted ETAS (RETAS) model, which is based on the results of a clock-advance model and static stress triggering.
To analyze spatial variations of triggering parameters I focus in my second approach on the aftershock occurrence triggered by large mainshocks and the study of the aftershock parameter distribution and their spatial correlation with the coseismic/postseismic slip and interseismic locking. To invert the aftershock parameters I improve the modified ETAS (m-ETAS) model, which is able to take the extension of the mainshock rupture into account. I compare the results obtained by the classical approach with the output of the m-ETAS model.
My third approach is concerned with the temporal clustering of seismicity, which might not only be related to earthquake-earthquake interactions but also to a time-dependent background rate, potentially biasing the parameter estimation. Thus, my coauthors and I apply a modification of the ETAS model which is able to take time-dependent background activity into account. It is applicable in two different cases: when an aftershock catalog is temporally incomplete, or when the background seismicity rate changes with time due to the presence of aseismic forcing.
An essential part of any research is the testing of the developed models using observational data sets, which are appropriate for the particular study case. Therefore, in the case of seismic moment release I use the global seismicity catalog. For the spatial distribution of triggering parameters I exploit two aftershock sequences of the Mw8.8 2010 Maule (Chile) and Mw 9.0 2011 Tohoku (Japan) mainshocks. In addition, I use published geodetic slip models of different authors. To test our ability to detect aseismic transients my coauthors and I use the data sets from Western Bohemia (Central Europe) and California.
Our results indicate that:
(1) the seismic moment of aftershocks with respect to mainshocks depends on the static stress changes and is maximal for the normal, intermediate for thrust and minimal for strike-slip stress regimes, where the RETAS model shows a good correspondence with the results;
(2) The spatial distribution of aftershock parameters, obtained by the m-ETAS model, shows anomalous values in areas of reactivated crustal fault systems. In addition, the aftershock density is found to be correlated with coseismic slip gradient, afterslip, interseismic coupling and b-values. Aftershock seismic moment is positively correlated with the areas of maximum coseismic slip and interseismically locked areas. These correlations might be related to the stress level or to material properties variations in space;
(3) Ignoring aseismic transient forcing or temporal catalog incompleteness can lead to the significant under- or overestimation of the underlying trigger parameters. In the case when a catalog is complete, this method helps to identify aseismic sources.
By perturbing the differential of a (cochain-)complex by "small" operators, one obtains what is referred to as quasicomplexes, i.e. a sequence whose curvature is not equal to zero in general. In this situation the cohomology is no longer defined. Note that it depends on the structure of the underlying spaces whether or not an operator is "small." This leads to a magical mix of perturbation and regularisation theory. In the general setting of Hilbert spaces, compact operators are "small." In order to develop this theory, many elements of diverse mathematical disciplines, such as functional analysis, differential geometry, partial differential equations, homological algebra and topology, have to be combined. All essential basics are summarised in the first chapter of this thesis. This contains classical elements of index theory, such as Fredholm operators, elliptic pseudodifferential operators and characteristic classes. Moreover, we study the de Rham complex and introduce Sobolev spaces of arbitrary order as well as the concept of operator ideals. In the second chapter, the abstract theory of (Fredholm) quasicomplexes of Hilbert spaces is developed. From the very beginning we consider quasicomplexes with curvature in an ideal class. We introduce the Euler characteristic, the cone of a quasiendomorphism and the Lefschetz number. In particular, we generalise Euler's identity, which allows us to develop the Lefschetz theory on nonseparable Hilbert spaces. Finally, in the third chapter the abstract theory is applied to elliptic quasicomplexes with pseudodifferential operators of arbitrary order. We show that the Atiyah-Singer index formula holds true for those objects and, as an example, we compute the Euler characteristic of the connection quasicomplex. In addition, we introduce geometric quasiendomorphisms and prove a generalisation of the Lefschetz fixed point theorem of Atiyah and Bott.
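The central notion can be stated compactly; the following is a hedged sketch in standard notation (the symbol choices are mine, not the thesis's):

```latex
% A quasicomplex of Hilbert spaces: the differentials need not compose
% to zero; instead the curvature is required to be "small", here compact.
\[
  \cdots \longrightarrow H_{i-1} \xrightarrow{\,d_{i-1}\,} H_i
         \xrightarrow{\,d_i\,} H_{i+1} \longrightarrow \cdots,
  \qquad
  d_i \circ d_{i-1} \in \mathcal{K}(H_{i-1}, H_{i+1}).
\]
% Modulo the compact operators the sequence becomes a genuine complex,
% which is what makes Fredholm and index-theoretic notions available.
```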
A main limitation in the field of flood hydrology is the short time period covered by instrumental flood time series, which rarely exceed 50 to 100 years. However, climate variability acts on short to millennial time scales, and identifying causal linkages to extreme hydrological events requires longer datasets. To extend instrumental flood time series back in time, natural geoarchives are increasingly explored as flood recorders. Among these, annually laminated (varved) lake sediments appear to be the most suitable archives, since (i) lake basins act as natural sediment traps in the landscape, continuously recording land surface processes including floods, and (ii) individual flood events are preserved as detrital layers intercalated in the varved sediment sequence and can be dated with seasonal precision by varve counting.
The main goal of this thesis is to improve the understanding of the hydrological and sedimentological processes leading to the formation of detrital flood layers and thereby to contribute to an improved interpretation of lake sediments as natural flood archives. This goal was pursued in two ways: first, by comparing detrital layers in the sediments of two dissimilar peri-Alpine lakes, Lago Maggiore in Northern Italy and Mondsee in Upper Austria, with local instrumental flood data and, second, by tracking detrital layer formation during floods with a combined hydro-sedimentary monitoring network at Lake Mondsee, spanning from the rainfall to the deposition of detrital sediment at the lake floor.
Successions of sub-millimetre to 17 mm thick detrital layers were detected in sub-recent lake sediments of the Pallanza Basin in the western part of Lago Maggiore (23 detrital layers) and of Lake Mondsee (23 detrital layers) by combining microfacies analysis and high-resolution micro X-ray fluorescence (µ-XRF) scanning techniques. The detrital layer records were dated by detailed intra-basin correlation to a previously dated core sequence in Lago Maggiore and by varve counting in Mondsee. The intra-basin correlation of detrital layers between five sediment cores in Lago Maggiore and 13 sediment cores in Mondsee allowed river runoff events to be distinguished from local erosion. Moreover, characteristic spatial distribution patterns of detrital flood layers revealed different depositional processes in the two dissimilar lakes: underflows in Lago Maggiore as well as under- and interflows in Mondsee. Comparisons with runoff data of the main tributary streams, the Toce River at Lago Maggiore and the Griesler Ache at Mondsee, revealed empirical runoff thresholds above which the deposition of a detrital layer becomes likely. Whereas this threshold is the same for the whole Pallanza Basin in Lago Maggiore (600 m3s-1 daily runoff), it varies within Lake Mondsee. At proximal locations close to the river inflow, detrital layer deposition requires floods exceeding a daily runoff of 40 m3s-1, whereas at a location 2 km more distal, an hourly runoff of 80 m3s-1 and at least 2 days with runoff above 40 m3s-1 are necessary. A relation between the thickness of individual deposits and the runoff amplitude of the triggering events is apparent for both lakes but is evidently further influenced by variable influx and lake-internal distribution of detrital sediment.
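The empirical Mondsee thresholds described above amount to a simple decision rule, encoded here for illustration. The threshold values are the ones quoted in the text; the rule is only a statement of likelihood derived from the comparison with runoff data, not a deterministic predictor:

```python
def proximal_layer_likely(daily_runoff):
    """Proximal site (0.9 km from the inflow): a detrital layer becomes
    likely once the daily runoff (m3/s) exceeds 40 on any day."""
    return any(q > 40.0 for q in daily_runoff)

def distal_layer_likely(hourly_peak, daily_runoff):
    """Distal site (2 km further): an hourly peak runoff of 80 m3/s and
    at least two days above 40 m3/s daily runoff are both necessary."""
    days_above = sum(q > 40.0 for q in daily_runoff)
    return hourly_peak >= 80.0 and days_above >= 2
```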
To investigate the processes of flood layer formation in lake sediments, the hydro-sedimentary dynamics of Lake Mondsee and its main tributary stream, the Griesler Ache, were monitored from January 2011 to December 2013. Precipitation, discharge and turbidity were recorded continuously at the river's outlet to the lake and compared to sediment fluxes trapped close to the lake bottom every three to twelve days, and on a monthly basis in three different water depths at two locations in the lake basin, at distances of 0.9 km (proximal) and 2.8 km (distal) from the Griesler Ache inflow. Within the three-year observation period, 26 river floods of different amplitudes (10-110 m3s-1) were recorded, resulting in variable sediment fluxes to the lake (4-760 g m-2d-1). Vertical and lateral variations in flood-related sedimentation during the largest floods indicate that interflows are the main process of lake-internal sediment transport in Lake Mondsee. The comparison of hydrological and sedimentological data revealed (i) a rapid sedimentation within three days after the peak runoff in the proximal and within six to ten days in the distal lake basin, (ii) empirical runoff thresholds for triggering sediment flux at the lake floor, increasing from the proximal (20 m3s-1) to the distal lake basin (30 m3s-1), and (iii) the factors controlling the amount of detrital sediment deposited at a certain location in the lake basin. The total influx of detrital sediment is mainly driven by the runoff amplitude, catchment sediment availability and episodic sediment input from local sediment sources. A further role is played by the lake-internal sediment distribution, which is not the same for each event but is influenced by the flood duration and the existence of a thermocline and, therewith, the season in which a flood occurs.
In summary, the studies reveal a high sensitivity of lake sediments to flood events of different intensities. Certain runoff amplitudes are required to supply enough detrital material to form a visible detrital layer at the lake floor, plausibly due to positive feedback mechanisms between rainfall, runoff, erosion, fluvial sediment transport capacity and lake-internal sediment distribution. Consequently, runoff thresholds for detrital layer formation are site-specific owing to different lake-catchment characteristics. However, the studies also reveal that the flood amplitude is not the only control on the amount of sediment deposited at a certain location in the lake basin, even for the strongest flood events. Sediment deposition is rather influenced by a complex interaction of catchment and in-lake processes. This means that the coring location within a lake basin strongly determines the significance of a flood layer record. Moreover, the results show that while lake sediments provide ideal archives for reconstructing flood frequencies, the reconstruction of flood amplitudes is a more complex issue and requires detailed knowledge of the relevant catchment and in-lake sediment transport and depositional processes.
The non-linear behaviour of the atmospheric dynamics is not well understood and makes the evaluation and usage of regional climate models (RCMs) difficult. These non-linearities induce chaos and internal variability (IV) within the RCMs, leading to a sensitivity of RCMs to their initial conditions (IC). The IV is the ability of RCMs to realise different solutions in simulations that differ in their IC but have the same lower and lateral boundary conditions (LBC); it can hence be defined as the across-member spread between the ensemble members.
For the investigation of the IV and of the dynamical and diabatic contributions generating it, four ensembles of RCM simulations are performed with the atmospheric regional model HIRHAM5. The integration area is the Arctic and each ensemble consists of 20 members. The ensembles cover the time period from July to September of the years 2006, 2007, 2009 and 2012. The ensemble members have the same LBC and differ in their IC only. The different IC are realised by shifting the initialisation time successively by six hours. Within each ensemble, the first simulation starts on 1st July at 00 UTC and the last simulation starts on 5th July at 18 UTC, and each simulation runs until 30th September. The analysed time period ranges from 6th July to 30th September, the period covered by all ensemble members. The model runs without any nudging to allow a free development of each simulation and thus to capture the full internal variability within the HIRHAM5.
As measures of the model-generated IV, the across-member standard deviation and the across-member variance are used. The dynamical and diabatic processes influencing the IV are estimated by applying a diagnostic budget study for the IV tendency of potential temperature developed by Nikiema and Laprise [2010] and Nikiema and Laprise [2011]. The budget study is based on the first law of thermodynamics for potential temperature and the mass-continuity equation; the resulting budget equation reveals seven contributions to the potential temperature IV tendency.
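The across-member spread used here as the IV measure can be sketched in a few lines. The array shape (20 members, daily fields on a small grid) and all variable names below are illustrative assumptions, not the actual HIRHAM5 output layout:

```python
import numpy as np

# Hypothetical ensemble of potential temperature fields:
# 20 members x 87 time steps x 10 vertical levels x 50x50 grid points.
rng = np.random.default_rng(0)
theta = 280.0 + rng.normal(scale=2.0, size=(20, 87, 10, 50, 50))

# Internal variability as across-member spread: the variance and
# standard deviation over the member axis at each time, level, and point.
iv_var = theta.var(axis=0, ddof=1)   # across-member variance
iv_std = theta.std(axis=0, ddof=1)   # across-member standard deviation

# Domain-averaged IV as a function of time and vertical level,
# the kind of quantity tracked in the budget study.
iv_profile = iv_var.mean(axis=(-2, -1))
print(iv_profile.shape)  # (87, 10)
```

With real model output, `theta` would be read from the ensemble files; the reduction over the member axis is the only step specific to the IV definition.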
As a first study, this work analyses the IV within HIRHAM5. To this end, atmospheric circulation parameters and the potential temperature are investigated for all four ensemble years. As in previous studies, the IV fluctuates strongly in time. Furthermore, because all ensemble members are forced with the same LBC, the IV depends on the vertical level within the troposphere, with high values in the lower troposphere and at 500 hPa and low values in the upper troposphere and at the surface. For the same reason, the spatial distribution shows low IV values at the boundaries of the model domain.
The diagnostic budget study for the IV tendency of potential temperature reveals that the seven contributions fluctuate in time, as the IV does, but the individual terms reach different absolute magnitudes. The budget study identifies the horizontal and vertical 'baroclinic' terms as the main contributors to the IV tendency, with the horizontal 'baroclinic' term producing and the vertical 'baroclinic' term reducing the IV. The other terms fluctuate around zero, either because they are small in general or because they cancel in the domain average.
The comparison of the results obtained for the four ensembles (summers 2006, 2007, 2009 and 2012) reveals that, on average, the findings for each ensemble are quite similar in the magnitude and general pattern of the IV and its contributions. Near the surface, however, a weaker IV is produced with decreasing sea ice extent. This is caused by a smaller impact of the horizontal 'baroclinic' term over some regions and by changing diabatic processes, particularly a more intense IV-reducing tendency due to condensative heating. It has to be emphasised, however, that the behaviour of the IV and its dynamical and diabatic contributions is influenced mainly by complex atmospheric feedbacks and large-scale processes, not by the sea ice distribution.
Additionally, a comparison with a second RCM covering the Arctic and using the same LBC and IC is performed. Both models yield very similar results for the IV and its dynamical and diabatic contributions. This investigation therefore leads to the conclusion that the IV is a natural phenomenon and is independent of the applied RCM.
This thesis addresses the financing models of public-private partnership (PPP) projects and their refinancing by capital providers.
Two central questions are addressed. First: Do PPP projects increase public debt, and should they accordingly be included in the calculation of the convergence criteria and the debt and new-borrowing ratios? The working hypothesis to be tested assumes that PPP projects do lead to public debt. Second: PPP is assumed to play a significant role in infrastructure financing; with a view to efficiency gains, the fit and consistency of budgetary-law regulations with the regulatory requirements for the capital providers of PPP projects are analyzed. This interface, together with the state guarantees needed to obtain favorable ("municipal-like") financing conditions for PPP, practically calls for a regulatory-policy comparison of approaches and projects in the PPP field and in cash-flow terms.
With a certain macroeconomic focus on PPP, the thesis leads deep into the analysis of the capital market and banking regulation. It compares covered refinancing instruments for PPP that are secured by receivables (asset-backed securities) with those secured, for example, by claims against the public sector (covered bonds); the latter can also be secured by mortgage liens. This is where the author later develops his sketch of an "Infrastructure Covered Bond" for financing necessary infrastructure measures, not only in Germany, with the security issued exclusively to finance infrastructure and backed by a correspondingly newly created (cover) register.
In view of the environmental pollution caused by the use of fossil fuels, a stable and environmentally friendly long-term energy supply must be ensured. One way to meet energy demand in a CO2-neutral manner is the use of biogas. The use of biogenic residues, which are characterized by a high proportion of carbohydrates, fats, and proteins and therefore have a high biogas potential, plays an important role here. A prerequisite for the efficiency and profitability of such plants is, among other things, a stable gas formation process. Since not all aspects of biogas formation are yet fully understood, plants are often not run at full capacity in order to avoid process disturbances such as acidification.
Various measures can be taken to remedy process disturbances that occur nonetheless. Besides reducing the organic loading rate, the pH can be raised by adding sodium hydroxide or calcium oxide.
In the present work, both process disturbances and process recoveries were investigated at a full-scale biogas plant and in laboratory experiments. In addition to the physical and chemical parameters, the microbial biocenosis was characterized by genetic fingerprinting in order to detect changes.
During process recovery, changes in the digestate were observed after the addition of CaO. Pellets formed, which were examined microscopically and with molecular biological methods with regard to their function for process recovery and process stability. The role of the microorganisms in pellet formation was also investigated.
The pellets, consisting mainly of calcium and fatty acids, served as growth surfaces for various microorganisms. The formation of biofilms, as demonstrated on and within the pellets, protected microorganisms from adverse environmental influences such as high propionic acid concentrations. Under these favorable conditions, biogas formation was possible even at high hydrogen partial pressures, which inhibited the degradation of propionic acid. In the laboratory experiment, an organism related to Methanoculleus receptaculi was identified as an indicator of improved living conditions. This methanogenic archaeon was detected in the pellets, whereas in the digestate it was detected only after process recovery. The detection of a higher proportion of archaea in the pellet core than in the surrounding digestate, along with biofilms/EPS, various phosphate salts, and poorly soluble calcium salts, showed that precipitation, adsorption, and degradation of LCFA all contribute to lowering their concentration in the liquid digestate. As a result, the inhibition of the biocenosis decreases and the biogas formation rate rises. The degradation of fatty acids is thus possible even at low pH and high hydrogen partial pressures, and the biogas formation process remains stable in the long term. Pellet formation supports process stability, provided the pellets do not grow too large, in which case they would impede mixing and clog the outlet.
After successful process stabilization, no pellets were observed in the digestate. The degradation of the organic material was indicated both by the rising calcium concentration and by the increasing gas production.
This thesis examines the extent to which the integrity of phonological language processing is relevant for successful literacy acquisition in German-speaking children, with phonological awareness (PA) skills as the focus. Successful literacy acquisition is important not only for educational attainment and the associated professional and socio-economic prospects, but also for active participation in the social and cultural life of our society.
This publication-based dissertation comprises a monograph (Schnitzler, 2008), a contribution to an edited volume (Schnitzler, 2013), and two journal articles (Schnitzler, 2014, 2015). The first two publications deal with the development of PA and with the relationships between PA and literacy skills. The two journal articles address the risk of reading and spelling disorders (LRS) in German-speaking children who received speech-language therapy for phonological speech sound disorders at preschool age. Schnitzler (2015) presents the results of a study conducted by the author, in which possible influences of additional non-phonological symptoms and of the type of phonological speech sound disorder were controlled.
The results indicate that at school entry and during the first school years it should be closely monitored whether children have age-appropriate PA skills and whether they consciously activate this segmental-phonological knowledge and use it efficiently in reading and writing. This applies in particular to children at elevated risk of LRS. If children's phonological representations are insufficiently specified at this time, early intervention is indicated as a means of preventing LRS.
The promotion of self-employment as part of active labor market policy (ALMP) is considered one of the most important unemployment support schemes in Germany. Against this background, the main part of this thesis contributes to the evaluation of start-up support schemes within ALMP. Chapters 2 and 4 focus on the evaluation of the New Start-up Subsidy (NSUS, Gründungszuschuss) in its first version (2006 to the end of 2011). These chapters advance the evaluation of start-up subsidies in Germany and are based on a novel data set of administrative records from the Federal Employment Agency enriched with information from a telephone survey. Chapter 2 provides a thorough descriptive analysis of the NSUS in two parts. First, the participant structure of the program is compared with that of two predecessor programs. Second, the study conducts an in-depth characterization of NSUS participants, focusing on founding motives, the levels of start-up capital and equity used, and the sectoral distribution of the new businesses. Furthermore, business survival, the income situation of founders, and job creation by the new businesses are analyzed over a period of 19 months after start-up. The contribution of Chapter 4 is to introduce a new explorative data set that allows subsidized start-ups out of unemployment to be compared with non-subsidized start-ups founded by individuals who were not unemployed at the time of founding. Because previous evaluation studies commonly used eligible non-participants among the unemployed as the control group to assess the labor market effects of start-up subsidies, their results spoke to the effectiveness of the ALMP measure but could not address whether the subsidy leads to businesses as successful and innovative as non-subsidized ones.
An assessment of this economic/growth aspect is also important, since the subsidy might induce negative effects that outweigh the positive effects from an ALMP perspective. The main results of Chapter 4 indicate that subsidized founders show no shortages in formal education, but exhibit less employment and industry-specific experience and are less likely to benefit from the intergenerational transmission of start-ups. Moreover, the study finds evidence that necessity start-ups are over-represented among subsidized business founders, which suggests disadvantages in business preparation due to possible time restrictions right before start-up. Finally, the study also detects more capital constraints among the unemployed, both in the availability of personal equity and in access to loans. With respect to potential differences between the two groups in business development over time, the results indicate that subsidized start-ups out of unemployment have higher business survival rates 19 months after start-up; however, they lag behind regular business founders in income, business growth, and innovation. The arduous process of collecting data on the start-up activities of non-subsidized founders for Chapter 4 made apparent that Germany lacks a central reporting system for business formations. Additionally, the existing start-up reporting systems exhibit substantial discrepancies in their data processing procedures, and therefore also in their absolute numbers of overall start-up activity. Chapter 3 therefore precedes Chapter 4 and aims to provide a comprehensive review of the most important German start-up reporting systems.
The second part of the thesis consists of Chapter 5, which contributes to the literature on the determinants of job search behavior by analyzing the effectiveness of internet search with regard to the search behavior of unemployed individuals and subsequent job quality. The third and final part of the thesis outlines why the German labor market reacted very mildly to the Great Recession of 2008/09, especially compared to other countries. Chapter 6 describes current economic trends of the labor market in light of general trends in the European Union and highlights some of the main associated challenges. Thereafter, recent reforms of the main institutional settings of the labor market that influence labor supply are analyzed. Finally, based on the status quo of these institutional settings, the chapter gives a brief overview of strategies to adequately address the challenges in labor supply and to ensure economic growth in the future.
Investigations into the spatial analysis and visualization of rental price data for real-estate portals
(2015)
From a geoinformatics perspective, this thesis aims to provide a conceptual basis for the spatial optimization of real-estate portals. It proceeds from two hypotheses:
1. Methods of spatial statistics and machine learning for rent price estimation are superior to the hedonic regression methods used so far and are suitable for the spatial optimization of real-estate portals.
2. The web-based rent price maps published by real-estate portals do not reflect the actual spatial conditions of housing markets. Alternative web-based forms of representation, such as grid maps, are superior to the status quo of real-estate price maps and visualize the actual spatial distribution of real-estate prices more appropriately.
Both hypotheses can be substantiated.
First, the research needs are comprehensively assessed through literature studies and technological research. To answer the research questions, a quantitative data basis of 74,098 rental listings (January 2007 to September 2013) is acquired from a real-estate portal. As this does not fully suffice to answer the questions, the author conducts expert interviews to obtain a qualitative data basis. Combined with the literature study and the technological research, their analysis yields a comprehensive picture, not previously available in this form, of the status quo of the spatial perspective and of the spatial-analytical and geovisual deficits of real-estate portals.
To remedy the spatial-analytical and geovisual deficits, research-based solution approaches are developed and partially implemented. Machine learning methods and spatial estimation procedures are examined as alternatives to the "non-spatial" price modeling methods used by real-estate portals so far. Based on a validation framework designed for this purpose, these methods are adapted for use in the context of real-estate portals, and a prototypical partial implementation demonstrates the technical realization of the concept. A comprehensive analysis of suitable secondary variable sets for rent price estimation yields the methodological result that interpolators requiring secondary variables (kriging with external drift, ordinary cokriging) barely achieve more valid rent price estimates than ordinary kriging, which requires none. The Random Forest method from machine learning and geographically weighted regression, by contrast, hold great potential for spatial rent price estimation in the context of real-estate portals. The research results on spatial price modeling are then transferred to the spatial visualization of rent prices.
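As a rough, self-contained sketch of the machine-learning alternative singled out above, a Random Forest can learn a spatial rent surface directly from listing coordinates and attributes. The synthetic data, coordinate ranges, and feature set below are assumptions chosen for illustration; they do not reproduce the thesis's 74,098-listing data set or its validation framework:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for rental listings: coordinates plus one attribute.
rng = np.random.default_rng(42)
n = 2000
lon = rng.uniform(13.0, 13.8, n)   # hypothetical longitude range
lat = rng.uniform(52.3, 52.7, n)   # hypothetical latitude range
area = rng.uniform(30, 140, n)     # living area in m^2

# Rent per m^2 with a smooth spatial price peak plus noise (illustrative).
rent = (8.0
        + 3.0 * np.exp(-((lon - 13.4) ** 2 + (lat - 52.5) ** 2) / 0.02)
        + 0.01 * area
        + rng.normal(0, 0.5, n))

X = np.column_stack([lon, lat, area])
X_train, X_test, y_train, y_test = train_test_split(X, rent, random_state=0)

# A Random Forest learns the spatial price surface from the coordinate
# features themselves; no variogram model is specified, unlike kriging.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
r2 = model.score(X_test, y_test)
print(f"test R^2: {r2:.2f}")
```

In a portal setting, the held-out score would be computed within a validation framework such as the one the thesis designs; the point of the sketch is only that spatial structure can be captured without secondary variables or an explicit covariance model.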
For web-based rent price display, a set of alternative representation methods is developed, from which rent price map prototypes are derived. One methodological outcome of this prototype development is an approach for detaching the price reference from reference geometries borrowed from other domains; the author coins the term "zoneless price map" for this. These maps are created with gridmapping methods, and optimal raster resolutions for displaying the interpolated grid cells are determined. Zoneless price maps based on gridmapping, combined with an optional building-level display at larger scales, emerge from the research as the best possible spatial representation of rent prices oriented toward real conditions. The resulting prototypes approximate the true spatial distribution of rents and are considerably sharper than representations based on hedonic regression, so that the true "topography" of the rent price landscape can be depicted. Use of the maps by groups such as brokers, investors, or municipalities to analyze urban rental markets is conceivable. All prototypes are implemented using map APIs. One finding here is that map APIs still suffer from various "teething problems", and rent price maps implemented this way still have a long way to go before reaching the level of thematic maps by Imhof or Arnberger.
The conceptual considerations and partial implementations culminate in three process chains that represent implementation options for the spatial optimization of real-estate portals: two scenarios for spatially optimized rent price estimation and one scenario for spatially optimized rent price display.
This thesis deals with forms of scientific knowledge production in application-oriented research projects and their effects on processes of technization. I examine these using the example of a publicly funded research project developing an automated video surveillance system. As an application-oriented research project, the development of the video surveillance system is subject to particular framework conditions: the research group's work is expected, first, to respond to macro-social crime problems; second, to fulfil political hopes for a successful technology transfer; and third, to serve intra-disciplinary scientific progress. This creates practical everyday problems for the research group, which must mediate between heterogeneous and possibly contradictory expectations. These mediation strategies, in turn, influence the decisions about how, and to what extent, surveillance processes are technized.
The doctoral project asks how the research group manages to integrate these various expectations, and what effects this particular form of application-oriented research has on the development of the surveillance technology. Based on an ethnographic case study, I answer this question by showing that the research group's preferred solutions are oriented more toward disciplinary questions than toward practical usability. This becomes particularly visible in the way the original problem definitions are redefined in the course of the work process in line with the instruments actually available. The research group copes with the resulting conflicts with societal expectations by learning to carefully stage the application orientation vis-à-vis the funding institution.
The dissertation "Demography and Political Reductionism: A Discourse Analysis of Demographic Policy in Germany" ties in with the debate on "demographization", which addresses the tendency to view and describe societal developments too strongly from a demographic perspective. It analyzes the partly still young discourses conducted by actors in academia, politics, and journalism on the federal government's demographic policy strategies and activities.
The analysis covers subareas of society, in particular social security, intergenerational relations, economic development, and spatial aspects of demography. Selected relationships between societal developments and demographic change that are attributed a causal connection are examined, and it is shown where, with reference to demography, mere interpretive offers are created and causal claims asserted.
Of particular interest is demography as an argument used to legitimize political, economic, and social action and to create a societal climate of acceptance. Where does demography prove to be an object of speculation, and where is it a proven, verifiable causality? And where should the line to the instrumentalization of demography be drawn? It is shown that an opportunistic demography is practiced for three main reasons: it gives organized interests a hearing, it offers orientation in complex societies, and it serves as a yardstick for evaluating societal developments.
The current boom in discourses on demographic change ensures that the opportunities to argue with demography are continually reproduced. As a consequence, not only are societal developments attributed too strongly to demographic components; thinking and action in family, social, migration, and economic policy are also frequently reduced to supposed demographic laws beyond the measure of actual cause-and-effect relationships (reductionism).
The discourse analysis of demographic policy in Germany does not seek to question the significance of demographic change for society. Rather, it aims to encourage a more critical approach to demography, which includes showing that demography is one factor among many.
Within a colony, the honeybee Apis mellifera shows an age-linked division of labor. Young honeybees care for the brood (nurse bees), while older honeybees (foragers) collect pollen and nectar outside the hive. The biogenic amines octopamine and tyramine play a major role in regulating this division of labor. They interact with target cells by binding to G protein-coupled receptors. A. mellifera possesses five characterized octopamine receptors (AmOctαR1, AmOctβR1-4), one characterized tyramine receptor (AmTyr1), and one additional putative tyramine receptor.
In the present work, this putative amine receptor was identified, localized, and pharmacologically characterized as a second tyramine receptor (AmTyr2).
The amino acid sequence deduced from the cDNA exhibits structural features and conserved motifs of G protein-coupled receptors. Phylogenetically, the AmTyr2 receptor groups with the tyramine 2 receptors of other insects. The functional and pharmacological characterization of the putative tyramine receptor was carried out in modified HEK293 cells transfected with the receptor cDNA. Application of tyramine activates adenylyl cyclases in these cells, resulting in an increase in the intracellular cAMP level. The AmTyr2 receptor is half-maximally activated by tyramine at nanomolar concentrations. While octopamine is a potent agonist of the receptor, mianserin and yohimbine are effective antagonists. A polyclonal antibody was generated to localize the receptor protein. AmTyr2-like immunoreactivity is found in the brain in the optic lobes, the antennal lobes, the central complex, and the Kenyon cells of the mushroom bodies.
Furthermore, the role of the octopamine and tyramine receptors in regulating the age-dependent division of labor was analyzed.
Gene expression of AmOctαR1 in different brain regions correlates with social role independently of age, whereas gene expression of AmOctβR3/4 and of the tyramine receptors AmTyr1 and AmTyr2 changes mainly with age but not with social role. Foragers show a higher octopamine content in the whole brain than nurse bees; for tyramine, no differences are found. While tyramine apparently plays no direct role, octopamine-controlled processes of the age-dependent division of labor in the honeybee are presumably mediated via AmOctαR1.
The results of this work demonstrate the important role of biogenic amines, particularly octopamine, in the social organization of insect societies.
This study focuses on the patients of the psychiatric and neurological clinic of the Charité (East Berlin, GDR) during the 1960s. While taking into account the interpretation offered by medical discourse, it seeks to reconstruct the experiences and trajectories of these individuals, situating them within the context of socialist society. On the basis of patient files, the main archival material of this study, it aims to capture some of the tensions running through East German society in relation to the political and ideological context. As these sources attest, within the therapeutic exchange patients could speak according to rules that differed from those usually in force in socialist society. Because they may contain traces of speech ordinarily silenced (through censorship or self-censorship, or because it was unspeakable, unavowable, or delusional), patient files appear as a precious source for the historian. From marital tensions provoked by ideological dissent to the inner conflicts of a "fervent Marxist", from the pain caused by expulsion from the party to that caused by the construction of the Wall, from "reunification delusions" to delusions casting the West as a source of threats, the individual and singular experiences of patients make it possible to reconstruct, through a micro-historical approach, certain tensions inherent in the functioning of socialist society.
The overarching goal of this dissertation is to provide a better understanding of the role of wind and water in shaping Earth's Cenozoic orogenic plateaus: prominent high-elevation, low-relief sectors in the interior of Cenozoic mountain belts. In particular, the feedbacks between surface uplift, the build-up of topography, and ensuing changes in precipitation, erosion, and vegetation patterns are addressed in light of past and future climate change. Regionally, the study focuses on the world's two largest plateaus, the Altiplano-Puna Plateau of the Andes and the Tibetan Plateau, both characterized by average elevations of >4 km. Both plateaus feature high, deeply incised flanks with pronounced gradients in rainfall, vegetation, hydrology, and surface processes. These characteristics are rooted in the plateaus' role as efficient orographic barriers to rainfall and in the changes they force in atmospheric flow.
The thesis examines the complex topics of tectonic and climatic forcing of the surface-process regime on three different spatial and temporal scales: (1) bedrock wind-erosion rates are quantified in the arid Qaidam Basin of NW Tibet over millennial timescales using cosmogenic radionuclide dating; (2) the present-day stable isotope composition of rainfall is examined across the south-central Andes in three transects between 22° S and 28° S; these data are modeled and assessed with remotely sensed rainfall data from the Tropical Rainfall Measuring Mission and the Moderate Resolution Imaging Spectroradiometer; (3) finally, a 2.5-km-long Mio-Pliocene sedimentary record of the intermontane Angastaco Basin (25°45' S, 66°00' W) is presented in the context of hydrogen and carbon compositions of molecular lipid biomarkers and of oxygen and carbon isotopes obtained from pedogenic carbonates; these records are compared with other environmental proxies, including hydrated volcanic glass shards from ashes intercalated in the sedimentary strata.
There are few quantitative estimates of eolian bedrock-removal rates from arid, low-relief landscapes. Wind-erosion rates from the western Qaidam Basin based on cosmogenic 10Be measurements document erosion rates between 0.05 and 0.4 mm/yr. This finding indicates that in arid environments with strong winds, hyperaridity, exposure of friable strata, and ongoing rock deformation and uplift, wind erosion can outpace fluvial erosion. The large sediment volumes eroded from the Qaidam Basin and the coeval dust deposition on the Chinese Loess Plateau exemplify the importance of dust production in arid plateau environments for marine and terrestrial depositional processes, but also for health issues and the fertilization of soils.
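The conversion from a measured cosmogenic nuclide concentration to a steady-state erosion rate can be illustrated with the standard single-nuclide relation N = P / (λ + ρε/Λ), solved for ε. The numerical values below (production rate, concentration, density) are generic assumptions for illustration and are not taken from the Qaidam Basin study:

```python
# Steady-state erosion rate from a 10Be concentration, using the
# standard single-nuclide relation N = P / (lambda + rho*eps/Lambda).
# All input values are illustrative assumptions, not thesis data.
P = 30.0        # surface production rate, atoms g^-1 yr^-1 (assumed)
N = 2.0e5       # measured 10Be concentration, atoms g^-1 (assumed)
lam = 4.99e-7   # 10Be decay constant, yr^-1
Lambda = 160.0  # spallation attenuation length, g cm^-2
rho = 2.7       # rock density, g cm^-3 (assumed)

# Solve for the erosion rate (cm/yr), then convert to mm/yr.
eps_cm = (Lambda / rho) * (P / N - lam)
eps_mm = eps_cm * 10.0
print(f"erosion rate: {eps_mm:.3f} mm/yr")  # 0.089 mm/yr
```

With these assumed inputs the result falls within the 0.05–0.4 mm/yr range reported above; in practice, shielding, muogenic production, and depth-dependent production profiles would also have to be accounted for.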
In the south-central Andes, the analysis of 234 stream-water samples for oxygen and hydrogen isotopes reveals that areas experiencing deep convective storms do not show the commonly observed patterns of isotopic fractionation and the expected co-variation of oxygen and hydrogen isotopes with increasing elevation. These convective storms form over semi-arid intermontane basins in the transition between the broken foreland of the Sierras Pampeanas, the Eastern Cordillera, and the Puna Plateau in the interior of the orogen. Here, convective rainfall dominates the precipitation budget and no systematic stable isotope-elevation relationship exists. In regions to the north, in the transition between the broken foreland and the Subandean foreland fold-and-thrust belt, the impact of convection is subdued, with lower degrees of storminess and a stronger isotope-elevation relationship, as expected. This characterization of present-day fractionation trends in meteoric water is of great importance for paleoenvironmental studies that use stable isotope relationships to reconstruct paleoelevations.
The third part of the thesis focuses on the paleohydrological characteristics of the Mio-Pliocene (10-2 Ma) Angastaco Basin sedimentary record, which reveals far-reaching environmental changes during Andean uplift and orographic barrier formation. A precipitation-evapotranspiration record identifies the onset of a precipitation regime related to the South American Low Level Jet at this latitude after 9 Ma. Humid foreland conditions existed until 7 Ma, followed by orographic barrier uplift to the east of the present-day Angastaco Basin. This was superseded by rapid (~0.5 Myr) aridification of the intermontane basin, highlighting the effects of eastward-directed deformation. A transition in vegetation cover from a humid C3 forest ecosystem to semi-arid C4-dominated vegetation was coeval with continued basin uplift to modern elevations.
Continental rifts are excellent regions where the interplay between extension, the build-up of topography, erosion and sedimentation can be evaluated in the context of landscape evolution. Rift basins also constitute important archives that potentially record the evolution and migration of species and the change of sedimentary conditions as a result of climatic change. Finally, rifts have increasingly become targets of resource exploration, such as hydrocarbons or geothermal systems. The study of extensional processes and the factors that further modify the mainly climate-driven surface process regime helps to identify changes in past and present tectonic and geomorphic processes that are ultimately recorded in rift landscapes.
The Cenozoic East African Rift System (EARS) is an exemplary continental rift system and an ideal natural laboratory in which to observe such interactions. The eastern and western branches of the EARS constitute first-order tectonic and topographic features in East Africa, which exert a profound influence on the evolution of topography, the distribution and amount of rainfall, and thus the efficiency of surface processes. The Kenya Rift is an integral part of the eastern branch of the EARS and is characterized by high-relief rift escarpments bounded by normal faults, gently tilted rift shoulders, and volcanic centers along the rift axis.
Considering the Cenozoic tectonic processes in the Kenya Rift, studying the tectonically controlled cooling history of the rift shoulders, the subsidence history of the rift basins, and the sedimentation along and across the rift may help to elucidate the morphotectonic evolution of this extensional province. While tectonic forcing of surface processes may play a minor role in this low-strain rift on centennial to millennial timescales, it may be hypothesized that erosion and sedimentation processes impacted by climate shifts, associated with pronounced changes in the availability of moisture, have left important imprints in the landscape.
In this thesis I combined thermochronological data, geomorphic field observations, and morphometric analyses of digital elevation models to reconstruct exhumation processes and erosion rates, as well as the effects of climate on erosion processes in different sectors of the rift. I present three sets of results: (1) new thermochronological data from the northern and central parts of the rift that quantitatively constrain the Tertiary exhumation and thermal evolution of the Kenya Rift; (2) 10Be-derived catchment-wide mean denudation rates from the northern, central, and southern rift that characterize erosional processes on millennial to present-day timescales; and (3) paleo-denudation rates in the northern rift that constrain climatically controlled shifts in paleoenvironmental conditions during the early Holocene (African Humid Period).
Taken together, my studies show that time-temperature histories derived from apatite fission track (AFT) analysis, zircon (U-Th)/He dating, and thermal modeling bracket two episodes of rifting in the Kenya Rift: between 65 and 50 Ma, and from about 15 Ma to the present. These two episodes are marked by rapid exhumation and uplift of the rift shoulders. Between 45 and 15 Ma the margins of the rift experienced very slow erosion/exhumation, with the accommodation of sediments in the rift basin.
In addition, I determined that present-day denudation rates in sparsely vegetated parts of the Kenya Rift amount to 0.13 mm/yr, whereas denudation rates in humid and more densely vegetated sectors of the rift flanks reach a maximum of only 0.08 mm/yr, despite steeper hillslopes. I inferred that hillslope gradient and vegetation cover control most of the variation in denudation rates across the Kenya Rift today. Importantly, my results support the notion that vegetation cover plays a fundamental role in determining the pace of hillslope erosion through its stabilizing effect on the land surface.
Finally, in a pilot study I highlighted how paleo-denudation rates in climatic threshold areas changed significantly during times of transient hydrologic conditions, with a sixfold increase in erosion rates during periods of increased humidity. This assessment is based on cosmogenic nuclide (10Be) dating of quartzitic deltaic sands that were deposited in the northern Kenya Rift during a highstand of Lake Suguta, which was associated with the Holocene African Humid Period. Taken together, my new results document the role of climate variability in erosion processes affecting climatic threshold environments, which may provide a template for potential future impacts of climate-driven changes in surface processes in the course of Global Change.
Two of the most controversial issues concerning the late Cenozoic evolution of the Andean orogen are the timing of uplift of the intraorogenic Puna plateau and its eastern border, the Eastern Cordillera, and the ensuing changes in climatic and surface-process conditions in the intermontane basins of the NW-Argentine Andes. The Eastern Cordillera separates the internally drained, arid Puna from semi-arid intermontane basins and the humid sectors of the Andean broken foreland and the Subandean fold-and-thrust belt to the east. With elevations between 4,000 and 6,000 m, the eastern flanks of the Andes form an efficient orographic barrier, with westward-increasing elevation and an asymmetric distribution of rainfall with respect to easterly moisture-bearing winds. This is mirrored by pronounced gradients in the efficiency of the surface processes that erode and re-distribute sediment from the uplifting ranges. Although the overall pattern of deformation and uplift in this sector of the southern central Andes shows an eastward migration of deformation, a well-developed deformation front does not exist, and uplift and the associated erosion and sedimentary processes are highly disparate in space and time. In addition, periodic deformation within the intermontane basins and continued diachronous foreland uplift associated with the reactivation of inherited basement structures make a rigorous assessment of the spatiotemporal uplift patterns difficult.
This thesis focuses on the tectonic evolution of the Eastern Cordillera of NW Argentina, the depositional history of its intermontane sedimentary basins, and the regional topographic evolution of the eastern flank of the Puna Plateau. The intermontane basins of the Eastern Cordillera and the adjacent morphotectonic provinces of the Sierras Pampeanas and the Santa Bárbara System are akin to the reverse-fault-bounded, filled, and partly coalesced sedimentary basins of the Puna Plateau. In contrast to the Puna basins, however, which still form intact morphologic entities, repeated deformation, erosion, and re-filling have impacted the basins of the Eastern Cordillera. This has resulted in a rich stratigraphy of repeated basin fills, yet many of these basins have retained vestiges of their early depositional history that may reach back to a time when these areas were still part of a contiguous, undeformed foreland basin. Fortunately, these strata also contain abundant volcanic ashes, which are not only important horizons for deciphering tectono-sedimentary events through U-Pb geochronology and geochemical correlation, but also terrestrial recorders of the hydrogen-isotope composition of ancient meteoric waters that can be compared to the isotopic composition of modern meteoric water. The ash horizons are thus unique recorders of past environmental conditions and lend themselves to tracking the development of rainfall barriers and tectonically forced climate and environmental change through time.
U-Pb zircon geochronology and paleocurrent reconstructions of conglomerate sequences in the Humahuaca Basin of the Eastern Cordillera at 23.5° S suggest that the basin was an integral part of a largely unrestricted depositional system until 4.2 Ma, after which it became progressively decoupled from the foreland by range uplifts to the east that forced easterly moisture-bearing winds to precipitate at increasingly eastward locations. Multiple cycles of severed hydrological conditions and drainage re-capture, associated with basin filling and sediment evacuation, respectively, are identified alongside these processes. Moreover, systematic relationships among faults, regional unconformities, and deformed landforms reveal a general pattern of basin-internal deformation during or subsequent to episodes of large-scale sediment removal. Some of these observations are supported by variations in the hydrogen stable isotope composition of volcanic glass from the Neogene to Quaternary sedimentary record, which can be related to spatiotemporal changes in topography and the associated orographic effects. δDg values in the basin strata reveal two main trends, associated with surface uplift in the catchment area between 6.0 and 3.5 Ma and with the onset of semiarid conditions in the basin once threshold elevations for effective orographic barriers to the east were attained after 3.5 Ma. The disruption of sediment supply from western sources after 4.2 Ma and the subsequent hinterland aridification, moreover, raise the possibility that these processes were related to lateral orogenic growth of the adjacent Puna Plateau. As a result of the hinterland aridification, the regions in the orogen interior have been characterized by an inefficient fluvial system, which in turn has helped to maintain internal drainage conditions, sediment storage, and relief reduction within high-elevation basins.
The diachronous nature of basin formation and its impacts on the fluvial system in the adjacent broken foreland is underscored by the results of detailed sediment provenance and paleocurrent analyses, as well as U-Pb zircon geochronology, in the Lerma and Metán basins at ca. 25° S. This is particularly demonstrated by the isolated uplift of the Metán range at ~10 Ma, more than 50 km away from the presently active orogenic front along the eastern Puna margin and the Eastern Cordillera to the west. At about 5 Ma, Puna-sourced sediments disappear from the foreland record, documenting further range uplifts in the Eastern Cordillera and the hydrological isolation of the neighboring Angastaco Basin from the foreland. Finally, during the late Pliocene and Quaternary, deformation has been accommodated across the entire foreland and is still active. To elucidate the interactions between tectonically controlled changes in elevation and their impact on atmospheric circulation in this region, this thesis provides additional, temporally well-constrained hydrogen stable isotope results from volcanic glass samples from the broken foreland, the Angastaco Basin, and other intermontane basins farther south. The results suggest similar elevations of the intermontane basins and the foreland sectors prior to ca. 7 Ma; in the case of the Angastaco Basin, the region was subsequently affected by km-scale surface uplift. A comparison with coeval isotope data collected from sedimentary sequences on the Puna plateau explains rapid shifts in the intermontane δDg record and supports the notion of recurring phases of enhanced deep convection during the Pliocene, and thus of climatic conditions during the middle to late Pliocene similar to those of the present day.
Combined, the field-based and isotope-geochemical methods used in this study of the NW-Argentine Andes have helped to gain insight into the systematics, rate changes, interactions, and temporal characteristics of tectonically controlled deformation patterns, the build-up of topography impacting atmospheric processes, the distribution of rainfall, and the resulting surface processes in a tectonically active mountain belt. Ultimately, this information is essential for a better understanding of the style and the rates at which non-collisional mountain belts evolve, including the development of orogenic plateaus and their bordering flanks. The results presented in this study emphasize the importance of stable isotope records for paleoaltimetric and paleoenvironmental studies in mountain belts and furnish important data for a rigorous interpretation of such records.
The present study addresses the question of how German vowels are perceived and produced by Polish learners of German as a Foreign Language. It comprises three main experiments: a discrimination experiment, a production experiment, and an identification experiment. With the exception of the discrimination task, the experiments further investigated the influence of orthographic marking on the perception and production of German vowel length. It was assumed that explicit markings such as the Dehnungs-h ("lengthening h") could help Polish GFL learners in perceiving and producing German words more correctly.
The discrimination experiment with manipulated nonce words showed that Polish GFL learners detect pure length differences in German vowels less accurately than German native speakers, while this was not the case for pure quality differences. The results of the identification experiment contrast with the results of the discrimination task in that Polish GFL learners were better at judging incorrect vowel length than incorrect vowel quality in manipulated real words. However, orthographic marking did not turn out to be the driving factor, and it is suggested that metalinguistic awareness can explain the asymmetry between the two perception experiments. The production experiment supported the results of the identification task in that the lengthening h did not help Polish learners produce German vowel length more correctly. Yet, as far as vowel quality productions are concerned, it is argued that orthography does influence L2 sound production, because Polish learners seem to be negatively influenced by their native grapheme-to-phoneme correspondences.
It is concluded that it is important to differentiate between the influences of the L1 and the L2 orthographic systems. On the one hand, the investigation of orthographic vowel-length markers in German suggests that Polish GFL learners do not make use of the length information provided by the L2 orthographic system. On the other hand, the vowel quality data suggest that the L1 orthographic system plays a crucial role in the acquisition of a foreign language. It is therefore proposed that orthography influences the acquisition of foreign sounds, but not in the way originally assumed.
Injection of nanoscale zero-valent iron (nZVI) is an innovative technology for the in situ installation of a permeable reactive barrier in the subsurface. Zero-valent iron (ZVI) is highly reactive with chlorinated hydrocarbons (CHCs) and transforms them into less harmful substances. Application of nZVI instead of granular ZVI can increase dechlorination rates of CHCs by orders of magnitude, owing to its higher surface area. This approach is still difficult to apply because of the fast agglomeration and sedimentation of colloidal nZVI suspensions, which lead to very short transport distances. To overcome this limited mobility, polyanionic stabilisers are added to increase the surface charge and stability of the suspensions. In field experiments, maximum transport distances of a few metres have been achieved. A new approach, investigated in this thesis, is to enhance the mobility of nZVI by means of a more mobile carrier colloid. The investigated composite material consists of activated carbon loaded with nZVI.
In this cumulative thesis, the transport characteristics of carbon-colloid supported nZVI (c-nZVI) are investigated. The investigations started with experiments in 40-cm columns filled with various porous media to examine physicochemical influences on transport. The setup was then enlarged to a transport experiment in a 1.2-m two-dimensional aquifer tank filled with granular porous media. Furthermore, a field experiment was performed in a natural aquifer system with a targeted transport distance of 5.3 m. In parallel, alternative methods for observing transport were explored using noninvasive tomographic techniques: experiments with synchrotron radiation and magnetic resonance imaging (MRI) were performed to investigate in situ transport characteristics non-destructively.
Results from the column experiments show potentially high mobility under environmentally relevant conditions. Addition of mono- and divalent salts, e.g. more than 0.5 mM CaCl2, can decrease mobility, and a drop in pH to values below 6 can inhibit mobility entirely. Measurements of colloid size show changes in mean particle size by a factor of ten, and zeta-potential measurements revealed a shift from –62 mV to –82 mV. Results from the 2D-aquifer test system suggest strong particle deposition in the first centimetres, only weak straining along the further travel path, and no gravitational influence on particle transport. Straining at the beginning of the travel path in the porous medium was also observed in the tomographic investigations of transport. MRI experiments yielded results similar to the previous experiments, and observations using synchrotron radiation suggest straining of colloids at pore throats. The potential for large transport distances suggested by the laboratory experiments was confirmed in the field experiment, where the transport distance of 5.3 m was reached by at least 10% of the injected nZVI. Altogether, the transport distances of the investigated carbon-colloid supported nZVI exceed published results for traditional nZVI.
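The field observation that at least 10% of the injected c-nZVI travelled 5.3 m can be framed with classical colloid filtration theory, in which the mobile-particle concentration decays exponentially with travel distance. The sketch below derives an illustrative (upper-bound) filter coefficient from that observation; the thesis itself does not report such a coefficient, so this is an assumption-laden back-of-the-envelope calculation:

```python
import math

# Colloid filtration theory describes the attenuation of mobile
# particle concentration with travel distance x as
#   C/C0 = exp(-lambda_f * x).
# Using the field observation that >= 10% of injected particles
# reached 5.3 m gives an upper bound on the filter coefficient
# lambda_f (illustrative inference, not a value from the thesis).
distance = 5.3      # m, field transport distance
recovery = 0.10     # fraction of particles still mobile at that distance

lambda_f = -math.log(recovery) / distance   # filter coefficient [1/m]

def breakthrough(x_m):
    """Fraction of injected colloids still mobile after x metres."""
    return math.exp(-lambda_f * x_m)
```

With these numbers the inferred filter coefficient is roughly 0.43 per metre; a smaller coefficient (i.e., higher mobility) would be consistent with the "at least 10%" recovery.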
Stream water and groundwater are important freshwater resources, but their quality is degraded by harmful solutes introduced by human activities. The interface between stream water and subsurface water is an important zone for the retention, transformation, and attenuation of these solutes. Streambed structures enhance these processes through increased water and solute exchange across this interface, denoted as hyporheic exchange.
This thesis investigates the influence of hydrological and morphological factors on hyporheic water and solute exchange, as well as redox reactions in fluvial streambed structures on the intermediate scale (10–30 m). For this purpose, a three-dimensional numerical modeling approach coupling stream water flow with porous-media flow is used. Multiple steady-state stream water flow scenarios over different generic pool-riffle morphologies and a natural in-stream gravel bar are simulated by a computational fluid dynamics code that provides the hydraulic head distribution at the streambed. These heads are subsequently used as the top boundary condition of a reactive transport groundwater model of the subsurface beneath the streambed. Ambient groundwater that naturally interacts with the stream water is considered in scenarios of different magnitudes of downwelling stream water (losing case) and upwelling groundwater (gaining case), as well as the neutral case, in which stream stage and groundwater levels are balanced. Transport of oxygen, nitrate, and dissolved organic carbon and their reactions by aerobic respiration and denitrification are modeled.
The results show that stream stage and discharge primarily induce hyporheic exchange flux and solute transport, with implications for the residence times and reactions at both fully and partially submerged structures. Gaining and losing conditions significantly diminish the extent of the hyporheic zone and the water exchange flux, and shorten residence times, for both fully and partially submerged structures. With increasing magnitude of gaining or losing conditions, these metrics decrease exponentially.
Stream water solutes are transported into the hyporheic zone mainly advectively, and hence their influx corresponds directly to the infiltrating water flux. Aerobic respiration takes place in the shallow streambed sediments, coinciding largely with the extent of the hyporheic exchange flow. Denitrification occurs mainly in a “reactive fringe” surrounding the aerobic zone, where the oxygen concentration is low but a sufficient amount of stream-water carbon source is still available. The solute consumption rates and the efficiency of the aerobic and anaerobic reactions depend primarily on the available reactive areas and the residence times, both of which are controlled by the interplay between the hydraulic head distribution at the streambed and the gradients between stream stage and ambient groundwater. The highest solute consumption rates can be expected under neutral conditions, where the highest solute flux, longest residence times, and largest extent of hyporheic exchange occur. The results of this thesis show that streambed structures on the intermediate scale have a significant potential to contribute to a net solute turnover that can support a healthy status of the aquatic ecosystem.
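The link between residence time and reaction progress described above can be illustrated with a simple first-order consumption model, a strong simplification of the reactive transport model used in the thesis. The rate constant and residence times below are invented for illustration:

```python
import math

# First-order consumption of dissolved oxygen along a hyporheic
# flow path: C(t) = C0 * exp(-k * t). The rate constant and the
# residence times below are illustrative, not values from the model.
C0 = 10.0   # inflowing O2 concentration [mg/L]
k = 0.1     # first-order consumption rate constant [1/h]

def concentration(t_hours):
    """Remaining O2 concentration [mg/L] after t hours of travel."""
    return C0 * math.exp(-k * t_hours)

# Short vs long residence time: longer contact with the sediment
# allows more complete aerobic respiration before exfiltration.
short_path = concentration(2.0)    # most oxygen still present
long_path = concentration(48.0)    # nearly anoxic: denitrification possible
```

The contrast between the two paths mirrors the "reactive fringe" idea: only where residence times are long enough for oxygen to be depleted can anaerobic reactions such as denitrification take over.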
The Barberton Greenstone Belt (BGB) in the northeastern part of South Africa belongs to the few well-preserved remnants of Archean crust. For more than a century, the BGB has been intensively studied at the surface, with detailed mapping of its surficial geological units and tectonic features. Nevertheless, the deeper structure of the BGB remains poorly understood. Various tectonic evolution models have been developed based on geochronological and structural data. These theories are highly controversial and centre on the question of whether plate tectonics - as geoscientists understand them today - was already operating on the early Earth, or whether vertical mass movements, driven by the higher temperature of the Earth in Archean times, governed continent development.
To get a step closer to answering the questions regarding the internal structure and formation of the BGB, magnetotelluric (MT) field experiments were conducted as part of the German-South African research initiative Inkaba yeAfrica. Five-component MT data (three magnetic and two electric channels) were collected at ~200 sites aligned along six profiles crossing the southern part of the BGB. Tectonic features like (fossil) faults and shear zones are often mineralized and can therefore have high electrical conductivities. Hence, obtaining an image of the subsurface conductivity distribution from MT measurements can provide useful information on tectonic processes.
Unfortunately, the BGB MT data set is heavily affected by man-made electromagnetic noise caused, e.g., by power lines and electric fences. Aperiodic spikes in the magnetic field components, and corresponding offsets in the electric field components, impair the data quality, particularly at periods >1 s, which are required to image deep electrical structures. Application of common methods for noise reduction, like delay filtering and remote-reference processing, only worked well for periods <1 s. Within the framework of this thesis, two new filtering approaches were developed to handle the severe noise in long-period data and obtain reliable processing results. The first algorithm is based on the Wiener filter in combination with a spike-detection algorithm. Comparing the data variances of a local site with those of a reference site allows the identification of disturbed time-series windows for each recorded channel at the local site. Using the data of the reference site, a Wiener filter algorithm is applied to predict physically meaningful data to replace the disturbed windows. While spikes in the magnetic channels are easily recognized and replaced, steps in the electric channels are more difficult to detect, depending on their offset. Therefore, I implemented a novel approach based on time-series differentiation, noise removal, and subsequent integration to overcome this obstacle. A second filtering approach, in which spikes and steps in the time series are identified by comparing the short- and long-time averages of the data, was also implemented as part of my thesis. In this approach, the noise in the form of spikes and offsets is treated by interpolating over the affected data samples. The new developments resulted in a substantial data improvement and allowed us to gain one to two decades of usable period range (up to 10 or 100 s).
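The second filtering approach, comparing short- and long-time averages, is essentially an STA/LTA detector followed by interpolation over the flagged samples. A minimal sketch of such a despiking filter might look like the following; the window lengths and threshold are illustrative choices, not the values used in the thesis:

```python
import numpy as np

def sta_lta_despike(x, n_sta=5, n_lta=200, threshold=5.0):
    """Flag samples where the short-time average (STA) of the
    detrended signal exceeds `threshold` times the long-time
    average (LTA), then replace them by linear interpolation."""
    ax = np.abs(x - np.median(x))  # deviation from the median level
    sta = np.convolve(ax, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(ax, np.ones(n_lta) / n_lta, mode="same")
    bad = sta / np.maximum(lta, 1e-12) > threshold
    idx = np.arange(len(x))
    clean = x.astype(float).copy()
    clean[bad] = np.interp(idx[bad], idx[~bad], x[~bad])
    return clean, bad

# Synthetic magnetic channel: a smooth signal with one large spike
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 100.0)
signal[1000] += 50.0                      # the spike
clean, bad = sta_lta_despike(signal)
```

Real MT processing would additionally handle steps (offsets) rather than only spikes, e.g. by running such a detector on the differentiated series, as described above for the electric channels.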
The re-processed MT data were used to image the electrical conductivity distribution of the BGB by 2D and 3D inversion. The inversion models are in good agreement with the surface geology, delineating the highly resistive rocks of the BGB from the surrounding, more conductive geological units. Fault zones appear as conductive structures and can be traced to depths of 5 to 10 km. The 2D models suggest a continuation of the faults farther south, across the boundary of the BGB. Given the shallow tectonic structures (fault system) within the BGB compared to the deeply rooted resistive batholiths in the area, tectonic models including both vertical mass transport and, in part, present-day-style plate tectonics seem most likely for the evolution of the BGB.
The origin of cosmic rays has been the subject of study for over a century. The investigations carried out within this dissertation are one small step towards shedding more light on this mystery.
Locating the sources of cosmic rays is not trivial because of the interstellar magnetic field. However, the Hillas criterion allows us to conclude that supernova remnants are the main suspect for the origin of galactic cosmic rays. The mechanism by which they accelerate particles is found within the field of shock physics: diffusive shock acceleration. To allow particles to enter this process, also known as Fermi acceleration, pre-acceleration processes like shock surfing acceleration and shock drift acceleration are necessary. The processes happening in the plasma shocks of supernova remnants can be investigated with a simplified model that is simulated on a computer using Particle-in-Cell simulations.
We developed a new and clean setup to simulate the formation of a double shock, i.e., one consisting of a forward shock, a reverse shock, and a contact discontinuity, by the collision of two counter-streaming plasmas into which a magnetic field can be woven. In a previous work, we investigated the processes at unmagnetised and at magnetised parallel shocks, whereas in the current work we move our investigation on to magnetised perpendicular shocks.
Due to the much stronger confinement of the particles to the collision region, the perpendicular shock develops much faster than the parallel shock; on the other hand, this leads to much weaker turbulence. We find indications of shock surfing acceleration and shock drift acceleration at the two shocks, leading to populations of pre-accelerated particles that are suitable as a seed population to be injected into further diffusive shock acceleration and accelerated to even higher energies. We observe the development of filamentary structures in the shock ramp of the forward shock, but not at the reverse shock. This leads to the conclusion that the development of such structures in the shock ramp of quasi-perpendicular collisionless shocks might not be determined by the existence of a critical sonic Mach number but by a critical shock speed.
The results of the investigations done within this dissertation might be useful for further studies of oblique shocks and for studies using hybrid or magnetohydrodynamic simulations. Together with more sophisticated observational methods, these studies will help to bring us closer to an answer as to how particles can be accelerated in supernova remnants and eventually become cosmic rays that can be detected on Earth.
During the last two decades, instability training devices have become a popular means in athletic training and rehabilitation of mimicking unstable surfaces during movements like vertical jumps. Of note, under unstable conditions trunk muscles seem to have a stabilizing function during exercise, facilitating the transfer of torques and angular momentum between the lower and upper extremities. The present thesis addresses the acute effects of surface instability on performance during jump-landing tasks. Additionally, the long-term (i.e., training) effects of surface instability were examined, with a focus on the role of the trunk in athletic performance/physical fitness.
Healthy adolescent and young adult subjects participated in three cross-sectional studies and one longitudinal study. Performance in jump-landing tasks on stable and unstable surfaces was assessed by means of a ground reaction force plate. Trunk muscle strength (TMS) was determined using an isokinetic device or the Bourban TMS test. Physical fitness was quantified by the standing long jump, sprint, stand-and-reach, jumping sideways, Emery balance, and Y balance tests on stable surfaces. In addition, the activity of selected trunk and leg muscles and lower-limb kinematics were recorded during the jump-landing tasks.
When performing jump-landing tasks on unstable compared to stable surfaces, jump performance and leg muscle activity were significantly lower. Moreover, significantly smaller knee flexion angles and larger knee valgus angles were observed when jumping and landing under unstable compared to stable conditions, and in women compared to men. Significant but small associations were found between the behavioral and neuromuscular data, irrespective of surface condition. Core strength training on stable as well as unstable surfaces significantly improved TMS, balance, and coordination.
The findings of the present thesis imply that stable rather than unstable surfaces provide sufficient training stimuli during jump exercises (i.e., plyometrics). Additionally, the knee motion strategy during plyometrics appears to be modified by surface instability and sex. Of note, irrespective of surface condition, trunk muscles play only a minor role in leg muscle performance/activity during jump exercises. Moreover, when implemented in strength training programs (i.e., core strength training), there is no advantage in using instability training devices over stable surfaces in terms of enhancing athletic performance.
Physical fitness is an important marker of health that enables people to carry out activities of daily living with vigour and alertness, without undue fatigue, and with sufficient reserve to enjoy active leisure pursuits and to meet unforeseen emergencies. In particular, the scientific findings that civilization diseases (e.g., obesity, cardiovascular disease) have their onset in childhood and that physical fitness tracks (at least) into young adulthood have made the regular monitoring and promotion of physical fitness in children a public health issue. For the evaluation of a child's physical fitness over time (i.e., development), longitudinally based percentile values are of particular interest because they capture true within-subject physical fitness development (i.e., individual changes in the timing and tempo of growth and maturation). Besides its genetic determination (e.g., sex, body height), physical fitness is influenced by factors relating to children's environment and behaviour. For instance, disparities in physical fitness according to children's living area are frequently reported, with living in rural areas appearing more favourable for children's physical fitness than living in urban areas. In addition, cross-sectional studies have found higher fitness values in children participating in sports clubs than in non-participants. To date, however, the long-term effects underlying the observed associations of both factors (i.e., living area and sports club participation) with children's physical fitness remain unresolved. In addition, social inequality, as determined by socioeconomic status (SES), extends through many areas of children's lives.
While evidence indicates that the SES is inversely related to various indices of children's daily life and behaviour, such as educational success, nutritional habits, and sedentary and physical activity behaviour, a potential relationship between children's physical fitness and the SES has hardly been investigated, and the available studies have yielded inconsistent results.
The present thesis addressed three objectives: (1) to generate physical fitness percentiles for 9- to 12-year-old boys and girls using a longitudinal approach and to analyse the age- and sex-specific development of physical fitness, (2) to investigate the long-term effect of living area and sports club participation on physical fitness in third- to sixth-grade primary school students, and (3) to examine associations between the SES and physical fitness in a large and representative (i.e., for a German federal state) sample of third-grade primary school students.
Methods
(i/ii) Healthy third graders were followed over four consecutive years (up to grade 6), including annual assessments of physical fitness and a parental questionnaire (i.e., status of sports club participation and living area). Six tests were conducted to estimate various components of physical fitness: speed (50-m sprint test), upper body muscular power (1-kg ball push test), lower body muscular power (triple hop test), flexibility (stand-and-reach test), agility (star agility run test), and cardiorespiratory fitness (CRF) (9-min run test). (iii) Within a cross-sectional study (i.e., third objective), the physical fitness of third graders was assessed by six physical fitness tests: speed (20-m sprint test), upper body muscular power (1-kg ball push test), lower body muscular power (standing long jump [SLJ] test), flexibility (stand-and-reach test), agility (star agility run test), and CRF (6-min run test). By means of a questionnaire, students reported their status of organized sports participation (OSP).
Results
(i) With respect to percentiles of physical fitness development, test performances increased in boys and girls from age 9 to 12, except for males' flexibility (i.e., stable performance over time). Girls revealed significantly better performance in flexibility, whereas boys scored significantly higher in the remaining physical fitness tests. In girls as compared to boys, physical fitness development was slightly faster for upper body muscular power but substantially faster for flexibility. The generated physical fitness percentile curves indicated a timed and capacity-specific (curvilinear) physical fitness development for upper body muscular power, agility, and CRF. (ii) Concerning the effect of living area and sports club participation on physical fitness development, children living in urban areas showed a significantly faster performance development in the physical fitness components of upper and lower body muscular power as compared to peers from rural areas. The same direction was noted as a trend for CRF. Additionally, children that regularly participated in a sports club demonstrated a significantly faster performance development in lower body muscular power compared to those that did not continuously participate. A trend of faster performance development in sports club participants also emerged for CRF. (iii) Regarding the association of SES with physical fitness, the percentage of third graders that achieved a high physical fitness level in lower body muscular power and CRF was significantly higher in students attending schools in communities with high SES as compared to middle and low SES, irrespective of sex. Similarly, students from the high-SES group performed significantly better in lower body muscular power and CRF than students from the middle and/or the low-SES group.
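The construction of such age-specific percentile curves can be sketched as follows. This is a minimal sliding-window illustration with synthetic data; both the data and the windowing approach are assumptions for illustration, not the longitudinal modelling actually used in the thesis.

```python
import numpy as np

def fitness_percentiles(ages, scores, age_grid, pcts=(10, 50, 90), window=0.5):
    """Sliding-window percentile curves: for each grid age, take all
    observations within +/- window years and compute the percentiles."""
    ages, scores = np.asarray(ages, float), np.asarray(scores, float)
    curves = {p: [] for p in pcts}
    for a in age_grid:
        sel = scores[np.abs(ages - a) <= window]
        for p in pcts:
            curves[p].append(np.percentile(sel, p))
    return {p: np.array(v) for p, v in curves.items()}

# hypothetical 9- to 12-year-olds: endurance-run distance (m) rising with age
rng = np.random.default_rng(0)
ages = rng.uniform(9, 12, 2000)
scores = 800 + 60 * (ages - 9) + rng.normal(0, 50, ages.size)
grid = np.arange(9.0, 12.01, 0.5)
curves = fitness_percentiles(ages, scores, grid)
```

An individual child's repeated test results can then be placed against the 10th/50th/90th percentile curves to judge timing and tempo of development.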
Conclusion
(i) The generated percentile values provide an objective tool to estimate children's physical fitness within the frame of physical education (e.g., age- and sex-specific grading of motor performance) and, further, to detect children with specific fitness characteristics (low fit or high fit) that may indicate the necessity of preventive health promotion or long-term athlete development. (ii) It is essential to consider variables of different domains (e.g., environment and behaviour) in order to improve knowledge of potential factors that influence physical fitness during childhood. In this regard, the present thesis provides a first input to clarify the causal influence of living area and sports club participation on physical fitness development in school-aged children. Living in urban areas as well as regular participation in sports clubs positively affected children's physical fitness development (i.e., muscular power and CRF). Herein, sports club participation seems to be a key factor within the relationship between living area and physical fitness. (iii) The findings of the present thesis imply that attending schools in communities with high SES is related to better performance in specific physical fitness test items (i.e., muscular power, CRF) in third graders. Extra-curricular physical education classes may represent an important equalizing factor for physical activity opportunities in children of different SES backgrounds. Given the strong evidence of a positive relationship between physical fitness, in particular muscular fitness and CRF, and health, more emphasis should be laid on establishing sports clubs and extra-curricular physical education classes as an easy and attractive means to promote fitness- and hence health-enhancing daily physical activity for all children (i.e., a public health approach).
In recent decades, a trend toward corporatization has been observable in many German municipalities. A large share of public service providers is now run as companies under private law in a competitive environment. While many researchers study hive-offs in the form of subordinate agencies at the federal level and describe this wave of reform as a de facto process of autonomization, only a few studies explicitly address autonomization tendencies at the municipal level. Empirical findings on the governance of municipal corporate holdings are therefore lacking.
This thesis examines the governance arrangements of large German cities for the first time from the perspective of the governed. The aim of the study is to identify tendencies toward greater flexibility in majority municipally owned companies and to identify explanatory factors for them. The research question is: Which instrumental and relational factors influence management autonomy in majority-owned municipal companies?
Of particular interest is the influence municipalities exert on the various areas of activity of their corporatized entities. Almost no empirical evidence on these company-specific issues exists for Germany, and only very little internationally. To answer the research question, the author developed an analytical framework based on transaction cost theory and social exchange theory. The hypotheses were tested empirically with a large-scale survey of 243 companies in the 39 largest German cities.
The results yield several empirical findings. First, factor analysis identified four independent dimensions of management autonomy in municipal companies: personnel autonomy, general management, pricing autonomy, and strategic issues. While municipalities grant their holdings a high degree of personnel autonomy, strategic investment decisions in particular, such as financial stakes in subsidiaries, large projects, diversification decisions, or borrowing, are subject to strong political influence.
Second, a change of legal form combined with placement in a competitive environment (also known as corporatization) primarily increases the flexibility of personnel and pricing policy, but has little effect on the other dimensions of management autonomy, general management and strategic decisions. Municipalities thus retain their ability to influence important corporate matters of their holdings even in the case of formal privatization.
Finally, transaction-cost-based and relational factors can be drawn on in a complementary way to explain the autonomy dimensions. Among the transaction-specific factors, the perceived competition in the sector, the measurability of performance, industry variables, the number of politicians on the supervisory board, and the governance mechanisms employed are most influential. Among the relational factors, mutual trust, the effectiveness of the supervisory boards, information exchange, role conflicts, role ambiguity, and the managing director's experience in the sector prevail.
Water resources from Central Asia’s mountain regions have a high relevance for the water supply of the water scarce lowlands. A good understanding of the water cycle in these mountain regions is therefore needed to develop water management strategies. Hydrological modeling helps to improve our knowledge of the regional water cycle, and it can be used to gain a better understanding of past changes or estimate future hydrologic changes in view of projected changes in climate. However, due to the scarcity of hydrometeorological data, hydrological modeling for mountain regions in Central Asia involves large uncertainties.
Addressing this problem, the first aim of this thesis was to develop hydrological modeling approaches that can increase the credibility of hydrological models in data sparse mountain regions. This was achieved by using additional data from remote sensing and atmospheric modeling. It was investigated whether spatial patterns from downscaled reanalysis data can be used for the interpolation of station-based precipitation data. This approach was compared to other precipitation estimates using a hydrologic evaluation based on hydrological modeling and a comparison of simulated and observed discharge, which demonstrated a generally good performance of this method. The study further investigated the value of satellite-derived snow cover data for model calibration. Trade-offs of good model performance in terms of discharge and snow cover were explicitly evaluated using a multiobjective optimization algorithm, and the results were contrasted with single-objective calibration and Monte Carlo simulations. The study clearly shows that the additional use of snow cover data improved the internal consistency of the hydrological model. In this context, it was further investigated for the first time how many snow cover scenes were required for hydrological model calibration.
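The trade-off between discharge and snow-cover performance described above can be illustrated with a simple two-objective score. All data and functions here are toy assumptions, and the fixed-weight scalarization shown is only an illustration; the thesis used a genuine multiobjective optimization algorithm rather than a weighted sum.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, values below 0
    indicate performance worse than the mean of the observations."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def snow_overlap(sim_mask, obs_mask):
    """Fraction of days on which simulated and satellite-observed
    snow cover (1 = snow, 0 = no snow) agree."""
    return np.mean(np.asarray(sim_mask) == np.asarray(obs_mask))

def combined_objective(sim_q, obs_q, sim_snow, obs_snow, w=0.5):
    """Scalarized two-objective calibration score (higher is better)."""
    return w * nse(sim_q, obs_q) + (1 - w) * snow_overlap(sim_snow, obs_snow)

# toy discharge series (m^3/s) and daily snow-cover masks
obs_q = np.array([1.0, 2.0, 4.0, 3.0, 2.0])
sim_q = obs_q + 0.1                     # near-perfect simulation
obs_snow = np.array([1, 1, 0, 0, 0])
score = combined_objective(sim_q, obs_q, obs_snow, obs_snow)
```

A Pareto-based optimizer would instead keep all parameter sets for which neither objective can be improved without degrading the other, exposing the trade-off explicitly.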
The second aim of this thesis was the application of the hydrological model in order to investigate the causes of observed streamflow increases in two headwater catchments of the Tarim River over the recent decades. This simulation-based approach for trend attribution was complemented by a data-based approach. The hydrological model was calibrated to discharge and glacier mass balance data and considered changes in glacier geometry over time. The results show that in the catchment with a lower glacierization, increasing precipitation and temperature both contributed to the streamflow increases, while in the catchment with a stronger glacierization, increasing temperatures were identified as the dominant driver.
The assumed comparable environmental conditions of early Mars and early Earth around 3.7 Ga ago, a time from which the first fossil records of life on Earth are found, suggest the possibility of life emerging on both planets in parallel. As conditions changed, hypothetical life on Mars either became extinct or was able to adapt and might still exist in biological niches. The controversially discussed detection of methane on Mars led to the assumption that it must have a recent origin, either abiotic, through active volcanism or chemical processes, or biogenic. Spatial and seasonal variations in the detected methane concentrations, and correlations between the presence of water vapor and geological features such as subsurface hydrogen that occur together with locally increased methane concentrations, fuelled the hypothesis of a possible biological source of methane on Mars.
Therefore, phylogenetically old methanogenic archaea, which evolved under early Earth conditions, are often used as model organisms in astrobiological studies to investigate the potential of life to exist in possible extraterrestrial habitats on our neighboring planet. In this thesis, methanogenic archaea originating from two extreme environments on Earth were investigated to test their ability to be active under simulated Mars analog conditions. These extreme environments, the Siberian permafrost-affected soil and the chemoautotrophically based terrestrial ecosystem of Movile Cave, Romania, are regarded as analogs for possible Martian (subsurface) habitats. Two novel species of methanogenic archaea isolated from these environments were described within the frame of this thesis.
It could be shown that concentrations of up to 1 wt% of Mars regolith analogs added to the growth media had a positive influence on the methane production rates of the tested methanogenic archaea, whereas higher concentrations resulted in decreasing rates. Nevertheless, the organisms were able to metabolize when incubated on water-saturated soil matrices made of Mars regolith analogs without any additional nutrients. Long-term desiccation resistance of more than 400 days was proven by reincubation and by indirect counting of viable cells through a combined treatment with propidium monoazide (to inactivate DNA of destroyed cells) and quantitative PCR. Phyllosilicate-rich regolith analogs seem to be the best soil mixtures for the tested methanogenic archaea to be active under Mars analog conditions. Furthermore, in a simulation chamber experiment, the activity of the permafrost methanogen strain Methanosarcina soligelidi SMA-21 under Mars subsurface analog conditions could be proven. Real-time wavelength modulation spectroscopy measurements detected the increase in methane concentration at temperatures down to -5 °C.
The results presented in this thesis contribute to the understanding of the activity potential of methanogenic archaea under Mars analog conditions and therefore provide insights into the possible habitability of present-day Mars (near-)subsurface environments. They thus also contribute to the data interpretation of future life detection missions on that planet, for example the ExoMars mission of the European Space Agency (ESA) and Roscosmos, which is planned for launch in 2018 and aims to drill into the Martian subsurface.
What do school principals actually do, and which factors influence the activities they carry out?
(2015)
While the theoretical job description and role model of school principals have frequently been taken up in research, there are, as in the entire field of public management, only a few empirical studies that examine from a business-management perspective what school principals actually do, i.e., which tasks and activities these persons pursue and which differences can be observed. The topic gains particular relevance through the changing task profile of the school principal, driven especially by the additional autonomy of the individual school, but also by the focus on the performance and effectiveness of the individual school and, connected with this, its dependence on the principal's work. Here, an understanding of the tasks and activities forms an important foundation that is, however, insufficiently researched. With the help of an exploratory observation of 15 school principals (an empirical study covering a total of 7591 minutes of work and 774 activities), combined with extensive qualitative, semi-structured interviews, this thesis enables a detailed view of actual school leadership management practice. It becomes apparent that the tasks and activities of school principals differ in central areas and that a typology along role descriptions and leadership behavior falls short. It could be shown for the first time in this level of detail within the German school system that school principals are communication managers.
In addition, the research documented here develops hypotheses on the factors that influence these tasks and activities and explicitly describes the implications of these findings for the work of school principals, for further research, and also for the political framework and, connected with it, the further development of the school system.
In the last decade, the number and dimensions of catastrophic flooding events in the Niger River Basin (NRB) have markedly increased. Despite the devastating impact of the floods on the population and the mainly agriculturally based economy of the riverine nations, awareness of the hazards in policy and science is still low. The urgency of this topic and the existing research deficits are the motivation for the present dissertation.
The thesis is an initial detailed assessment of the increasing flood risk in the NRB. The research strategy is based on four questions regarding (1) features of the change in flood risk, (2) reasons for the change in the flood regime, (3) expected changes of the flood regime given climate and land use changes, and (4) recommendations from previous analysis for reducing the flood risk in the NRB.
The question examining the features of change in the flood regime is answered by means of statistical analysis. Trend, correlation, changepoint, and variance analyses show that, in addition to the factors exposure and vulnerability, the hazard itself has also increased significantly in the NRB, in accordance with the decadal climate pattern of West Africa. The northern arid and semi-arid parts of the NRB are those most affected by the changes.
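One of the statistical tools mentioned above, a trend test, can be sketched with the standard Mann-Kendall S statistic and its normal approximation (shown here without tie correction); the flood-peak values are illustrative, not data from the NRB.

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and the
    normal-approximation Z (positive Z indicates an upward trend)."""
    n = len(x)
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance of S, no ties
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# hypothetical annual flood peaks (m^3/s) with an upward tendency
peaks = [850, 900, 870, 950, 1000, 980, 1100, 1150, 1120, 1250]
s, z = mann_kendall(peaks)   # z > 1.96 => significant at the 5% level
```

For serially correlated hydrological series, a pre-whitening step or a modified variance estimate is usually applied before interpreting Z.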
As potential reasons for the increase in flood magnitudes, climate and land use changes are attributed by means of a hypothesis-testing framework. Two different approaches, based on either data analysis or simulation, lead to similar results, showing that the influence of climatic changes is generally larger compared to that of land use changes. Only in the dry areas of the NRB is the influence of land use changes comparable to that of climatic alterations.
Future changes of the flood regime are evaluated using modelling results. First, ensembles of statistically and dynamically downscaled climate models based on different emission scenarios are analyzed. The models agree on a distinct increase in temperature; the precipitation signal, however, is not coherent. The climate scenarios are used to drive an eco-hydrological model. The influence of climatic changes on the flood regime is uncertain due to the unclear precipitation signal; still, in general, higher flood peaks are expected. In a next step, effects of land use changes are integrated into the model. Different scenarios show that regreening might help to reduce flood peaks, whereas an expansion of agriculture might enhance the flood peaks in the NRB. As in the analysis of observed changes in the flood regime, the impacts of climate and land use changes in the future scenarios are most severe in the dry areas of the NRB.
In order to answer the final research question, the results of the above analysis are integrated into a range of recommendations for science and policy on how to reduce flood risk in the NRB. The main recommendations include a stronger consideration of the enormous natural climate variability in the NRB, a focus on so-called "no-regret" adaptation strategies that account for high uncertainty, and a stronger consideration of regional differences. Regarding the prevention and mitigation of catastrophic flooding, the most vulnerable and sensitive areas in the basin, the arid and semi-arid Sahelian and Sudano-Sahelian regions, should be prioritized. Eventually, an active, science-based and science-guided flood policy is recommended. The enormous population growth in the NRB, in connection with the expected deterioration of environmental and climatic conditions, is likely to increase the region's vulnerability to flooding. A smart and sustainable flood policy can help mitigate these negative impacts of flooding on the development of riverine societies in West Africa.
Spectral fingerprinting
(2015)
Current research on runoff and erosion processes, as well as an increasing demand for sustainable watershed management, emphasizes the need for an improved understanding of sediment dynamics. This involves the accurate assessment of erosion rates and of sediment transfer, yield, and origin. A variety of methods exist to capture these processes at the catchment scale. Among these, sediment fingerprinting, a technique to trace back the origin of sediment, has attracted increasing attention from the scientific community in recent years. It is a two-step procedure, based on the fundamental assumptions that potential sources of sediment can be reliably discriminated based on a set of characteristic ‘fingerprint’ properties, and that a comparison of source and sediment fingerprints allows the relative contribution of each source to be quantified.
This thesis aims at further assessing the potential of spectroscopy to assist and improve the sediment fingerprinting technique. Specifically, this work focuses on (1) whether potential sediment sources can be reliably identified based on spectral features (‘fingerprints’), whether (2) these spectral fingerprints permit the quantification of relative source contribution, and whether (3) in situ derived source information is sufficient for this purpose. Furthermore, sediment fingerprinting using spectral information is applied in a study catchment to (4) identify major sources and observe how relative source contributions change between and within individual flood events. And finally, (5) spectral fingerprinting results are compared and combined with simultaneous sediment flux measurements to study sediment origin, transport and storage behaviour.
For the sediment fingerprinting approach, soil samples were collected from potential sediment sources within the Isábena catchment, a meso-scale basin in the central Spanish Pyrenees. Undisturbed samples of the upper soil layer were measured in situ using an ASD spectroradiometer and subsequently sampled for measurements in the laboratory. Suspended sediment was sampled automatically by means of ISCO samplers at the catchment outlet as well as at the five major subcatchment outlets during flood events, and stored fine sediment from the channel bed was collected from 14 cross-sections along the main river. Artificial mixtures of known contributions were produced from source soil samples. Then, all source, sediment, and mixture samples were dried and spectrally measured in the laboratory. Subsequently, colour coefficients and physically based features related to organic carbon, iron oxides, clay content, and carbonate were calculated from all in situ and laboratory spectra. Spectral parameters passing a number of prerequisite tests were submitted to principal component analyses to study the natural clustering of samples, discriminant function analyses to assess source differentiation accuracy, and a mixing model for source contribution assessment. In addition, annual as well as flood-event-based suspended sediment fluxes from the catchment and its subcatchments were calculated from rainfall, water discharge, and suspended sediment concentration measurements using rating curves and Quantile Regression Forests. Results of the sediment flux monitoring were interpreted individually with respect to storage behaviour, compared to fingerprinting source ascriptions, and combined with fingerprinting to assess their joint explanatory potential.
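The rating-curve step mentioned above can be sketched as a power-law fit in log-log space. The discharge and concentration values below are synthetic, and the actual flux calculations in the thesis additionally used Quantile Regression Forests.

```python
import numpy as np

def fit_rating_curve(q, c):
    """Fit the power-law rating curve c = a * q**b by linear least
    squares in log-log space; returns (a, b)."""
    # np.polyfit returns coefficients highest degree first: [slope, intercept]
    b, log_a = np.polyfit(np.log(q), np.log(c), 1)
    return np.exp(log_a), b

# hypothetical discharge (m^3/s) and suspended sediment concentration (g/l)
q = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
c = 0.3 * q ** 1.5          # exact synthetic relation, no noise
a, b = fit_rating_curve(q, c)
```

With the fitted (a, b), a continuous discharge record can be converted to a sediment concentration series and integrated over time to obtain event or annual sediment yields.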
In response to the key questions of this work, (1) three source types (land use) and five spatial sources (subcatchments) could be reliably discriminated based on spectral fingerprints. The artificial mixture experiment revealed that while (2) laboratory parameters permitted source contribution assessment, (3) the use of in situ derived information was insufficient. Apparently, high discrimination accuracy does not necessarily imply good quantification results. When applied to suspended sediment samples of the catchment outlet, the spectral fingerprinting approach was able to (4) quantify the major sediment sources: badlands and the Villacarli subcatchment, respectively, were identified as main contributors, which is consistent with field observations and previous studies. Thereby, source contribution was found to vary both within and between individual flood events. Sediment flux was also found to vary considerably, annually as well as seasonally and on a flood-event basis. Storage was confirmed to play an important role in the sediment dynamics of the studied catchment, with floods of lower total sediment yield tending to deposit material and floods of higher yield tending to remove material from the channel bed. Finally, a comparison of flux measurements with fingerprinting results highlighted the fact that (5) immediate transport from sources to the catchment outlet cannot be assumed. A combination of the two methods revealed different aspects of sediment dynamics that neither of the techniques could have uncovered individually.
In summary, spectral properties provide a fast, non-destructive, and cost-efficient means to discriminate and quantify sediment sources, although, unfortunately, straightforward in situ collected source information is insufficient for the approach. Mixture modelling using artificial mixtures permits valuable insights into the capabilities and limitations of the method, and similar experiments are strongly recommended for future work. Furthermore, a combination of techniques such as (spectral) sediment fingerprinting and sediment flux monitoring can provide a comprehensive understanding of sediment dynamics.
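The second step of the fingerprinting procedure, comparing source and sediment signatures to estimate relative contributions, can be sketched as a constrained least-squares unmixing. The tracer values below are hypothetical, and the soft sum-to-one constraint shown is one of several possible formulations, not necessarily the mixing model used in the thesis.

```python
import numpy as np

def unmix(source_tracers, mixture, weight=1e3):
    """Estimate relative source contributions p (p >= 0, sum(p) = 1)
    such that source_tracers @ p approximates the mixture signature.
    The sum-to-one constraint is enforced softly via an augmented row."""
    A = np.vstack([source_tracers, weight * np.ones(source_tracers.shape[1])])
    b = np.append(mixture, weight)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    p = np.clip(p, 0.0, None)        # enforce non-negativity
    return p / p.sum()               # renormalize to proportions

# hypothetical tracer signatures (rows: 3 tracers, columns: 3 sources)
T = np.array([[2.0, 5.0, 9.0],
              [1.0, 7.0, 3.0],
              [8.0, 2.0, 4.0]])
p_true = np.array([0.5, 0.3, 0.2])   # "artificial mixture" proportions
mix = T @ p_true
p_est = unmix(T, mix)
```

Testing the solver on artificial mixtures of known composition, as done in the thesis, is exactly what reveals whether a given tracer set quantifies as well as it discriminates.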
This thesis examines how innovations were taken up in a public sector organization and what changes this led to. The focus is not on the innovation itself but rather on the adaptation mechanisms within the organization. The following research questions were chosen:
1. How was the instrument of management by objectives and target agreements introduced in the public sector and integrated into management routines?
2. Which factors lead to an integration of management by objectives into management routines?
3. Which recommendations for practice can be derived from this?
For this purpose, a state-owned enterprise (Landesbetrieb) in Brandenburg was examined in detail, and 31 interviews were conducted with managers at the second and third management levels. In this organization, the instrument of management by objectives and target agreements was introduced as part of the Germany-wide reform movement in public administration and linked to very concrete expectations. The construct of management routines, understood as collective patterns of action that deliberately exclude individual behavior, was used as the unit of analysis for possible adaptations and changes.
The thesis confirmed a number of earlier findings and, moreover, demonstrated that, contrary to a common prejudice, innovations from the private sector can indeed lead to positive changes in public sector organizations. Here, however, no new routines were developed; rather, existing routines were adapted. On this basis it was found that a stepwise introduction process first led to success at the level of the managers' changed conceptions of objectives. Only after adaptation at this "ostensive" level did a change at the level of concrete actions follow, with some delay. With regard to the factors influencing the innovation, many aspects of goal-setting theory proved to remain relevant, and unstable political framework conditions can substantially restrict the innovation's room for development. For many influencing factors, however, both positive and negative effects were identified.
Organic bulk heterojunction (BHJ) solar cells based on polymer:fullerene blends are a promising alternative for a low-cost solar energy conversion. Despite significant improvements of the power conversion efficiency in recent years, the fundamental working principles of these devices are yet not fully understood. In general, the current output of organic solar cells is determined by the generation of free charge carriers upon light absorption and their transport to the electrodes in competition to the loss of charge carriers due to recombination.
The objective of this thesis is to provide a comprehensive understanding of the dynamic processes and physical parameters determining the performance. A new approach for analyzing the characteristic current-voltage output was developed, comprising the experimental determination of the efficiencies of charge carrier generation, recombination, and transport, combined with numerical device simulations.
Central issues at the beginning of this work were the influence of an electric field on the free carrier generation process and the contribution of generation, recombination, and transport to the current-voltage characteristics. An elegant way to directly measure the field dependence of free carrier generation is the Time Delayed Collection Field (TDCF) method. In TDCF, charge carriers are generated by a short laser pulse and subsequently extracted by a defined rectangular voltage pulse. A new setup was established with an improved time resolution compared to former reports in the literature. It was found that charge generation is in general independent of the electric field, contrary to the prevailing view in the literature and to the expectations of the Braun-Onsager model commonly used to describe the charge generation process. Even in cases where charge generation was found to be field-dependent, numerical modelling showed that this field dependence is in general not capable of accounting for the voltage dependence of the photocurrent. This highlights the importance of efficient charge extraction in competition with non-geminate recombination, which is the second objective of the thesis.
Therefore, two different techniques were combined to characterize the dynamics and efficiency of non-geminate recombination under device-relevant conditions. One new approach is to perform TDCF measurements with increasing delay between generation and extraction of charges. Thus, TDCF was used for the first time to measure charge carrier generation, recombination, and transport with the same experimental setup. This excludes experimental errors due to different measurement and preparation conditions and demonstrates the strength of this technique. An analytic model for the description of TDCF transients was developed and revealed the experimental conditions under which reliable results can be obtained. In particular, it turned out that the RC time of the setup, which is mainly determined by the sample geometry, has a significant influence on the shape of the transients and has to be considered for correct data analysis.
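In the simplest picture, the carrier loss probed by such delay-dependent measurements follows the bimolecular rate equation dn/dt = -k n^2, with the analytic solution n(t) = n0 / (1 + k n0 t). The sketch below checks this solution against a direct numerical integration; the values of n0 and k are hypothetical order-of-magnitude choices, not parameters from the thesis.

```python
import numpy as np

def carrier_decay(n0, k, t):
    """Analytic solution of dn/dt = -k * n**2 (pure bimolecular
    recombination): n(t) = n0 / (1 + k * n0 * t)."""
    return n0 / (1.0 + k * n0 * t)

def carrier_decay_euler(n0, k, t_end, steps=100000):
    """Explicit-Euler integration of the same rate equation, as a
    cross-check of the analytic expression."""
    dt = t_end / steps
    n = n0
    for _ in range(steps):
        n -= k * n * n * dt
    return n

# hypothetical values: n0 in cm^-3, k in cm^3/s, t in s
n0, k = 1e16, 1e-11
t = 1e-5
analytic = carrier_decay(n0, k, t)      # k*n0*t = 1, so n drops to n0/2
numeric = carrier_decay_euler(n0, k, t)
```

The hyperbolic (rather than exponential) decay is what makes the extracted charge depend so strongly on the initial carrier density, and hence on laser fluence, in delay-dependent experiments.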
Secondly, a complementary method was applied to characterize charge carrier recombination under steady state bias and illumination, i.e. under realistic operating conditions. This approach relies on the precise determination of the steady state carrier densities established in the active layer. It turned out that current techniques were not sufficient to measure carrier densities with the necessary accuracy. Therefore, a new technique, Bias Assisted Charge Extraction (BACE), was developed. Here, the charge carriers photogenerated under steady state illumination are extracted by applying a high reverse bias. The accelerated extraction compared to conventional charge extraction minimizes losses through non-geminate recombination and trapping during extraction. By performing numerical device simulations under steady state, conditions were established under which quantitative information on the dynamics can be retrieved from BACE measurements.
The applied experimental techniques allowed a sensitive analysis and quantification of geminate and non-geminate recombination losses along with charge transport in organic solar cells. A full analysis was demonstrated for two prominent polymer-fullerene blends.
The model system P3HT:PCBM spin-cast from chloroform (as prepared) exhibits poor power conversion efficiencies (PCE) on the order of 0.5%, mainly caused by low fill factors (FF) and currents. It could be shown that the performance of these devices is limited by the hole transport and large bimolecular recombination (BMR) losses, while geminate recombination losses are insignificant. The low polymer crystallinity and poor interconnection between the polymer and fullerene domains lead to a hole mobility on the order of 10^-7 cm^2/Vs, several orders of magnitude lower than the electron mobility in these devices. The concomitant build-up of space charge hinders extraction of both electrons and holes and promotes bimolecular recombination losses.
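The severity of such a hole-transport limitation can be put into rough numbers with the textbook Mott-Gurney law for space-charge-limited current, J = (9/8) eps0 eps_r mu V^2 / d^3 (a standard estimate, not the thesis' device model; permittivity, voltage, and thickness are assumed values):

```python
def mott_gurney(mu_cm2, V, d, eps_r=3.5):
    """Space-charge-limited current density (A/m^2) of a single-carrier
    device: J = (9/8) * eps0 * eps_r * mu * V^2 / d^3."""
    eps0 = 8.8541878128e-12  # F/m
    mu = mu_cm2 * 1e-4       # cm^2/Vs -> m^2/Vs
    return 9.0 / 8.0 * eps0 * eps_r * mu * V**2 / d**3

# low hole mobility (as-prepared blend) vs. a healthy mobility,
# assuming 0.5 V across a 100 nm layer; 1 A/m^2 = 0.1 mA/cm^2
for mu in (1e-7, 1e-4):
    J = mott_gurney(mu, 0.5, 100e-9)
    print(f"mu = {mu:.0e} cm^2/Vs -> J_SCL = {J * 0.1:.4f} mA/cm^2")
```

At 10^-7 cm^2/Vs the space-charge limit falls orders of magnitude below a typical ~10 mA/cm^2 photocurrent, which is exactly the situation in which extracted holes pile up and drive bimolecular losses.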
Thermal annealing of P3HT:PCBM blends directly after spin coating improves the crystallinity and interconnection of the polymer and fullerene phases and results in comparatively high electron and hole mobilities on the order of 10^-3 cm^2/Vs and 10^-4 cm^2/Vs, respectively. In addition, a coarsening of the domain sizes reduces the BMR by one order of magnitude. High charge carrier mobilities and low recombination losses result in a comparatively high FF (>65%) and short-circuit current (J_SC ≈ 10 mA/cm^2). The overall device performance (PCE ≈ 4%) is only limited by the rather low spectral overlap of absorption and solar emission and a small V_OC, given by the energetics of P3HT.
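A common reference point for measured BMR coefficients is the Langevin coefficient computed from the mobilities, gamma_L = e (mu_e + mu_h) / (eps0 eps_r). A sketch using the mobility values quoted above (the relative permittivity is an assumed value):

```python
def langevin_coefficient(mu_e_cm2, mu_h_cm2, eps_r=3.5):
    """Langevin recombination coefficient gamma = e*(mu_e + mu_h)/(eps0*eps_r),
    returned in cm^3/s."""
    e = 1.602176634e-19      # C
    eps0 = 8.8541878128e-12  # F/m
    mu_e = mu_e_cm2 * 1e-4   # cm^2/Vs -> m^2/Vs
    mu_h = mu_h_cm2 * 1e-4
    gamma_si = e * (mu_e + mu_h) / (eps0 * eps_r)  # m^3/s
    return gamma_si * 1e6                          # -> cm^3/s

gamma = langevin_coefficient(1e-3, 1e-4)  # annealed-blend mobilities from above
print(f"gamma_L = {gamma:.2e} cm^3/s")
```

Measured coefficients in well-phase-separated blends such as annealed P3HT:PCBM are known to lie below this Langevin estimate, consistent with the order-of-magnitude BMR reduction described above.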
From this point of view, the combination of the low-bandgap polymer PTB7 with PCBM is a promising approach. In BHJ solar cells, this polymer leads to a higher V_OC due to optimized energetics with PCBM. However, the J_SC in these (unoptimized) devices is similar to that of the optimized blend with P3HT, and the FF is rather low (≈ 50%). It turned out that the unoptimized PTB7:PCBM blends suffer from high BMR, a low electron mobility on the order of 10^-5 cm^2/Vs, and geminate recombination losses due to field-dependent charge carrier generation.
The use of the solvent additive DIO optimizes the blend morphology, mainly by suppressing the formation of very large fullerene domains and by forming a more uniform structure of well-interconnected donor and acceptor domains on the order of a few nanometers. Our analysis shows that this results in an increase of the electron mobility by about one order of magnitude (3 x 10^-4 cm^2/Vs), while BMR and geminate recombination losses are significantly reduced. In total, these effects improve the J_SC (≈ 17 mA/cm^2) and the FF (> 70%). In 2012, this polymer/fullerene combination yielded a record PCE for a single-junction OSC of 9.2%.
Remarkably, the numerical device simulations revealed that the specific shape of the J-V characteristics depends very sensitively on the variation of not just one, but all dynamic parameters. On the one hand, this proves that the experimentally determined parameters, if they lead to a good match between simulated and measured J-V curves, are realistic and reliable. On the other hand, it also emphasizes the importance of considering all involved dynamic quantities, namely charge carrier generation, geminate and non-geminate recombination, as well as electron and hole mobilities. Measuring or investigating only a subset of these parameters, as frequently found in the literature, will lead to an incomplete picture and possibly to misleading conclusions.
Importantly, the comparison of numerical device simulations employing the measured parameters with the experimental $J-V$ characteristics allows the identification of loss channels and limitations of OSC. For example, it turned out that inefficient extraction of charge carriers is a critical limiting factor that is often overlooked. However, efficient and fast transport of charges becomes more and more important with the development of new low-bandgap materials with very high internal quantum efficiencies. Likewise, due to moderate charge carrier mobilities, the active layer thicknesses of current high-performance devices are usually limited to around 100 nm. However, larger layer thicknesses would be more favourable with respect to higher current output and robustness of production. Newly designed donor materials should therefore ideally show a high tendency to form crystalline structures, as observed in P3HT, combined with the optimized energetics and quantum efficiency of, for example, PTB7.
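The competition between extraction and non-geminate recombination behind the thickness limitation is often condensed into the transit time t_tr = d^2 / (mu * V), which must stay well below the carrier lifetime. A rough sketch with assumed mobility and internal voltage:

```python
def transit_time(d, mu_cm2, V):
    """Carrier transit time t = d^2 / (mu * V) in seconds."""
    mu = mu_cm2 * 1e-4  # cm^2/Vs -> m^2/Vs
    return d**2 / (mu * V)

# doubling the layer thickness quadruples the transit time at fixed voltage,
# which is why moderate mobilities cap practical thicknesses near 100 nm
for d in (100e-9, 200e-9):
    t = transit_time(d, 1e-4, 0.5)  # assumed mobility and internal voltage
    print(f"d = {d * 1e9:.0f} nm -> t_tr = {t * 1e6:.2f} us")
```

The quadratic scaling with thickness makes the trade-off explicit: thicker layers absorb more light but leave carriers exposed to recombination for disproportionately longer.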
This thesis deals with the Rhenish progressive construction (Rheinische Verlaufsform, RV) in the Rhine Franconian dialect. According to the DUDEN, the RV is a construction consisting of the copula verb "sein" and a PP formed with "am" and a nominalized infinitive, and it serves to express progressive aspect.
The existing literature on the RV essentially deals either with the form of the construction in Standard German (e.g. Reimann (1999), Krause (2002), Rödel (2003), Rödel (2004a), Rödel (2004b), van Pottelberge (2004)) or in Ripuarian (e.g. Andersson (1989), Bhatt & Schmidt (1993)) and arrives at differing conclusions regarding the possible uses and the structure of the construction, in particular the status of the infinitive in the progressive form.
The main goal of this thesis is to show that the grammaticalization of the progressive form, from the construction described in the DUDEN towards an analytic verb form, proceeds along a fixed grammaticalization path, and to work out the individual steps of this development. On this basis, the thesis shows how the degree of grammaticalization of the progressive form in a dialect area or a diatopic register can be determined concretely by means of a suitable set of indicators.
This thesis investigates the application of polyelectrolyte multilayers in plasmonics and picosecond acoustics. The samples studied were fabricated by the spin-assisted layer-by-layer deposition technique, which allows precise tuning of the layer thickness in the range of a few nanometers.
The first field of interest is the interaction of light-induced localized surface plasmons (LSP) of rod-shaped gold nanoparticles with the particles' environment. The environment consists of an air phase and a polyelectrolyte phase, whose ratio affects the spectral position of the LSP resonance.
Measured UV-VIS spectra showed the shift of the LSP absorption peak as a function of the thickness of the layer covering the particles. The data are modeled using an average dielectric function instead of the separate dielectric functions of air and the polyelectrolytes. In addition, using a measured dielectric function of the gold nanoparticles, the position of the LSP absorption peak could be simulated in good agreement with the data.
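The averaged-dielectric-function idea can be sketched with a simple Drude model of gold and the dipolar resonance condition for an elongated particle. All material parameters below (polyelectrolyte permittivity, depolarization factor, Drude constants) are illustrative assumptions, not the thesis' measured values:

```python
import math

def resonance_wavelength(f_cover, eps_pe=2.5, eps_air=1.0, L=0.1,
                         eps_inf=9.8, lam_p=138e-9):
    """Dipolar LSP resonance of a gold nanorod (longitudinal mode) embedded
    in an averaged environment:
      eps_env = f*eps_pe + (1-f)*eps_air        (averaged dielectric function)
      Re(eps_Au) = eps_inf - (lam/lam_p)^2      (Drude model, wavelength form)
      resonance: Re(eps_Au) = -(1-L)/L * eps_env
    where L is the depolarization factor of the rod's long axis."""
    eps_env = f_cover * eps_pe + (1 - f_cover) * eps_air
    eps_res = -(1 - L) / L * eps_env            # required Re(eps_Au)
    return lam_p * math.sqrt(eps_inf - eps_res)

# growing polyelectrolyte coverage red-shifts the longitudinal LSP peak
for f in (0.0, 0.5, 1.0):
    print(f"f = {f:.1f} -> lambda_res = {resonance_wavelength(f) * 1e9:.0f} nm")
```

The monotonic red shift with coverage fraction f reproduces the qualitative trend seen in the UV-VIS data: the more polyelectrolyte replaces air around the particle, the larger the average permittivity and the longer the resonance wavelength.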
The analytic model helps to understand the optical properties of metal nanoparticles in an inhomogeneous environment.
The second part of this work discusses the applicability of PAzo/PAH and dye-doped PSS/PAH polyelectrolyte multilayers as transducers to generate hypersound pulses. The generated strain pulses were detected by time-domain Brillouin scattering (TDBS) using a pump-probe laser setup. Transducer layers made of polyelectrolytes were compared qualitatively to common aluminum transducers in terms of measured TDBS signal amplitude, degradation due to laser excitation, and sample preparation.
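In TDBS, the detected oscillation frequency is set by the standard backscattering Brillouin condition f = 2 n v / lambda (normal incidence). A quick sketch with assumed material values (refractive index, sound velocity, and probe wavelength are illustrative, not the thesis' samples):

```python
def brillouin_frequency(n, v_sound, lam_probe):
    """Brillouin oscillation frequency for backscattering at normal
    incidence: f = 2 * n * v / lambda."""
    return 2.0 * n * v_sound / lam_probe

# assumed values: a glass-like medium (n = 1.5, v = 5900 m/s) probed at 800 nm
f = brillouin_frequency(1.5, 5900.0, 800e-9)
print(f"f = {f / 1e9:.1f} GHz")
```

Frequencies of a few tens of GHz, as in this example, are the typical signature oscillation in pump-probe TDBS traces; the signal amplitude at this frequency is the quantity used here to compare the transducers.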
The measurements proved that the quickly and easily prepared polyelectrolyte transducers provide stronger TDBS signals than the aluminum transducer. AFM topography measurements showed a degradation of the polyelectrolyte structures, especially for the PAzo/PAH sample.
To quantify the induced strain, optical barriers were introduced to separate the transducer material from the medium of hypersound propagation. Difficulties in the sample preparation prevented a reliable quantification, but the experiments showed that a coating of transparent polyelectrolytes increases the efficiency of aluminum transducers and modifies the excited phonon distribution.
The adoption of polyelectrolytes in the field of picosecond acoustics enables cheap and fast fabrication of transducer layers on most surfaces. In contrast to aluminum layers, the polyelectrolytes are transparent over a wide spectral range; thus, the strain modulation can be probed from both the surface and the back side.
Interactions between complex carbohydrates and proteins are ubiquitous. They play important roles in many physiological processes such as cell adhesion, signal transduction, and viral infections. The molecular basis of these interactions is not yet completely understood. One model system for carbohydrate-protein interactions consists of the adhesion proteins (tailspikes) of bacteriophages, which recognize complex carbohydrates on bacterial surfaces (O-antigen). The tailspike protein (TSP) studied in this work originates from the bacteriophage 9NA (9NATSP). 9NATSP shows high structural homology to the well-characterized TSP of phage P22 (P22TSP), despite low sequence similarity. The substrate specificities of the two tailspikes are similar, except for their tolerance towards glucosylated forms of the O-antigen. The structures of both tailspikes are known, making them a suitable system for comparative binding studies investigating the structural basis of the differences in specificity.
In this work, an ELISA-like tailspike adsorption assay (ELITA) was established to identify binding pairs of TSPs and O-antigen. 9NATSP and P22TSP were used as probes, and their binding to intact bacteria adsorbed to microtiter plates was tested. Screening a collection of 44 Salmonella strains identified strains that express a binding O-antigen. At the same time, differences in the binding of the two TSPs to Salmonella strains of the same O-serotype were observed. The ELITA results were qualitatively confirmed by a FACS-based binding assay. In addition, for strains producing partially modified O-antigens, the FACS measurement made it possible to quantify the fraction of cells with and without the modification.
Surface plasmon resonance (SPR)-based interaction measurements were used to quantify binding affinities for one TSP-O-antigen combination. Two methods of immobilizing the oligosaccharides on an SPR chip were tested. First, enzymatically produced O-antigen fragments were derivatized with a bifunctional oxamine adapter providing a primary amino group for immobilization. An attempt to immobilize these oligosaccharide fragments was, however, unsuccessful. In contrast, the underivatized polysaccharide, consisting of repetitive O-antigen and a conserved core saccharide, was successfully immobilized on an SPR chip. The immobilization was confirmed by interaction measurements with P22TSP. Immobilizing the polysaccharide thus enables quantitative SPR binding measurements with a polydisperse interaction partner.
A selection of Salmonella strains showing markedly different binding of 9NATSP and P22TSP in the ELITA assay was analyzed with respect to O-antigen composition by HPLC, capillary gel electrophoresis, and MALDI-MS. Non-stoichiometric modifications of the O-antigens, such as acetylation and glucosylation, were detected. The extent of glucosylation correlated negatively with the efficiency of binding and digestion by the two TSPs, the negative effect being less pronounced for 9NATSP than for P22TSP. This agrees with published infectivity studies of 9NA and P22 performed with strains carrying comparable O-antigen variants. The correlation between glucosylation and binding efficiency could be interpreted structurally.
Based on the O-antigen analyses and the results of the ELITA and FACS binding assays, the Salmonella strains Brancaster and Kalamu were identified as expressing almost quantitatively glucosylated O-antigen. These strains are therefore suitable for further studies of the relationship between the specificity and the organization of the binding sites of the two TSPs.
In this thesis, I study ultrafast dynamics in perovskite oxides using time-resolved broadband spectroscopy. I focus on the observation of coherent phonon propagation by time-resolved Brillouin scattering: following the excitation of metal transducer films with a femtosecond infrared pump pulse, coherent phonon dynamics in the GHz frequency range are triggered. Their propagation is monitored using a delayed white light probe pulse. The technique is illustrated on various thin films and multilayered samples. I apply the technique to investigate the linear and nonlinear acoustic response of bulk SrTiO_3, which displays a ferroelastic phase transition from a cubic to a tetragonal structural phase at T_a = 105 K. In the linear regime, I observe a coupling of the observed acoustic phonon mode to the softening optic modes describing the phase transition. In the nonlinear regime, I find a giant slowing down of the sound velocity in the low temperature phase that is only observable for a strain amplitude exceeding the tetragonality of the material. It is attributed to a coupling of the high frequency phonons to ferroelastic domain walls in the material. I propose a new mechanism for the coupling of strain waves to the domain walls that is only effective for high amplitude strain. A detailed study of the phonon attenuation across a wide temperature range shows that the phonon attenuation at low temperatures is influenced by the domain configuration, which is determined by interface strain. Preliminary measurements on magnetic-ferroelectric multilayers reveal that the excitation fluence needs to be carefully controlled when dynamics at phase transitions are studied.