The Earth’s shallow subsurface with its sedimentary cover acts as a waveguide to any incoming wavefield. Within the framework of my thesis, I focused on characterizing this shallow subsurface, within the tens to few hundreds of meters of sediment cover, by imaging the 1D seismic shear-wave velocity (and, where possible, the 1D compressional-wave velocity). This information is required not only for seismic risk assessment, geotechnical engineering and microzonation, but also for exploration and global seismology, where site effects are often neglected in seismic waveform modeling.
First, the conventional frequency-wavenumber (f-k) technique is used to derive the dispersion characteristics of surface waves recorded by distinct arrays of seismometers in 1D and 2D configurations. Further, the cross-correlation technique is applied to seismic array data to estimate the Green’s function between receiver pairs, treating one station as the source and the other as the receiver. Assuming a 1D medium, the estimated cross-correlation Green’s functions are sorted by interstation distance into a virtual 1D active seismic experiment, and the f-k technique is then used to estimate the dispersion curves. This integrated analysis is important for interpreting a large bandwidth of the phase-velocity dispersion curves and therefore improves the resolution of the estimated 1D Vs profile.
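The core of the cross-correlation step can be sketched in a few lines. The following is a minimal illustration under my own synthetic setup (function names, the 15-sample delay, and the noise levels are assumptions for demonstration, not values from the thesis): cross-correlating two noise records that share a common wavefield recovers the inter-station travel time, which is the essence of the virtual-source Green’s function estimate.

```python
import numpy as np

def noise_cross_correlation(rec_a, rec_b, max_lag):
    """Cross-correlate two ambient-noise records in the frequency domain;
    the (stacked) result approximates the inter-station Green's function."""
    n = len(rec_a)
    spec = np.fft.rfft(rec_a) * np.conj(np.fft.rfft(rec_b))
    cc = np.fft.irfft(spec, n)          # circular cross-correlation
    cc = np.roll(cc, max_lag)           # move negative lags to the front
    return cc[:2 * max_lag + 1]         # lags -max_lag .. +max_lag

# synthetic check: a common wavefield arriving 15 samples later at station B
rng = np.random.default_rng(0)
src = rng.standard_normal(10_000)
rec_a = src + 0.1 * rng.standard_normal(10_000)
rec_b = np.roll(src, 15) + 0.1 * rng.standard_normal(10_000)
cc = noise_cross_correlation(rec_a, rec_b, max_lag=50)
delay = np.argmax(cc) - 50   # peak lag magnitude recovers the 15-sample delay
```

In practice many day-long windows are correlated and stacked per station pair before the f-k analysis is applied to the distance-sorted section.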
Second, a new theoretical approach based on the Diffuse Field Assumption (DFA) is used to interpret the observed microtremor H/V spectral ratio. The theory is further extended in this work to cover not only the H/V measured at the surface, but also the H/V measured at depth and in marine environments. Modeling and inversion of synthetic H/V spectral-ratio curves on simple predefined geological structures shows an almost perfect recovery of the model parameters (mainly Vs and, to a lesser extent, Vp) once information from a receiver at depth is included in the inversion.
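As a rough sketch of the observable itself: under the diffuse-field assumption the H/V ratio is computed from segment-averaged power spectra of the three components. The snippet below is my own minimal illustration (segment length, taper, and function names are assumptions, not the thesis implementation):

```python
import numpy as np

def hv_spectral_ratio(z, north, east, fs, nseg=1024):
    """Microtremor H/V from three-component noise: under the diffuse-field
    assumption, H/V(f) = sqrt((P_NN + P_EE) / P_ZZ), with P the
    segment-averaged power spectral densities."""
    def psd(x):
        segs = x[: len(x) // nseg * nseg].reshape(-1, nseg) * np.hanning(nseg)
        return np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0)
    freqs = np.fft.rfftfreq(nseg, d=1.0 / fs)
    return freqs, np.sqrt((psd(north) + psd(east)) / psd(z))
```

As a sanity check, for white noise whose horizontal components have twice the vertical amplitude, the curve is flat near sqrt(8) ≈ 2.8; on real data the peaks and troughs of H/V(f) carry the medium response that the DFA inversion exploits.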
Finally, the Rayleigh-wave phase-velocity information estimated from array data and the H/V(z, f) spectral ratio estimated from single-station data are combined and inverted for the velocity profile. The results indicate an improved depth resolution compared with inversions using the phase-velocity dispersion curves alone. The overall estimated sediment thickness is comparable to estimates obtained by inverting the full microtremor H/V spectral ratio.
In the past, floods were mostly managed by flood control measures, with the focus set on reducing the flood hazard; the potential consequences were of minor interest. Nowadays, river flooding is increasingly seen from the risk perspective, including possible consequences. Moreover, the large-scale picture of flood risk has become increasingly important for disaster management planning, national risk assessments and the (re-)insurance industry. It is therefore widely accepted that risk-oriented flood management approaches at the basin scale are needed. However, large-scale flood risk assessment methods for areas of several tens of thousands of km² are still in their early stages. Traditional flood risk assessments are performed reach-wise, assuming constant probabilities for the entire reach or basin. This may be helpful locally, but where large-scale patterns matter this approach is of limited use: assuming a T-year flood (e.g. 100 years) for the entire river network is unrealistic and would overestimate flood risk at the large scale. In addition, due to the lack of damage data, the probability of peak discharge or rainfall is usually used as a proxy for damage probability when deriving flood risk. With a continuous, long-term simulation of the entire flood risk chain, the spatial variability of probabilities can be considered and flood risk can be derived directly from damage data in a consistent way.
The objective of this study is the development and application of a full flood risk chain, appropriate for the large scale and based on long-term, continuous simulation. The novel approach of ‘derived flood risk based on continuous simulations’ is introduced, in which synthetic discharge time series are used as input to flood impact models and flood risk is derived directly from the resulting synthetic damage time series.
The bottleneck at this scale is the hydrodynamic simulation. To find suitable hydrodynamic approaches for the large scale, a benchmark study with simplified 2D hydrodynamic models was performed. A raster-based approach with an inertia formulation and a relatively high resolution of 100 m, in combination with a fast 1D channel routing model, was chosen.
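The inertia (local inertia) formulation referred to here is the simplified shallow-water scheme of Bates et al. (2010). The 1D sketch below is my own illustration of that scheme, not the thesis code; depths, the Manning coefficient and the time step are arbitrary demonstration values.

```python
import numpy as np

def inertial_step(h, z, q, dx, dt, n_mann=0.05, g=9.81):
    """One explicit update of the simplified inertial formulation for a 1D
    row of cells. h: water depth per cell, z: bed elevation,
    q: unit discharge at the len(h)-1 cell interfaces."""
    eta = h + z                                            # water surface
    hf = np.maximum(np.maximum(eta[:-1], eta[1:])
                    - np.maximum(z[:-1], z[1:]), 0.0)      # interface flow depth
    slope = (eta[1:] - eta[:-1]) / dx                      # water-surface slope
    fric = 1.0 + g * dt * n_mann**2 * np.abs(q) / np.maximum(hf, 1e-6) ** (7.0 / 3.0)
    q_new = np.where(hf > 0.0, (q - g * hf * dt * slope) / fric, 0.0)
    flux = np.concatenate(([0.0], q_new, [0.0]))           # closed boundaries
    h_new = h - dt / dx * np.diff(flux)                    # mass conservation
    return h_new, q_new
```

A small dam-break test (a wet block next to dry cells) conserves volume while water spreads downslope; the semi-implicit friction term in `fric` is what keeps the explicit scheme stable in shallow cells.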
To investigate the suitability of the continuous simulation of a full flood risk chain for the large scale, all model parts were integrated into a new framework, the Regional Flood Model (RFM). RFM consists of the hydrological model SWIM, a 1D hydrodynamic river network model, a 2D raster-based inundation model and the flood loss model FLEMOps+r. Subsequently, the model chain was applied to the Elbe catchment, one of the largest catchments in Germany. For the proof of concept, a continuous simulation was performed for the period 1990-2003. Results were evaluated and validated as far as possible against available observed data for this period. Although each model part introduces its own uncertainties, results and runtime were generally found to be adequate for continuous simulation at the large catchment scale.
Finally, RFM was applied to a meso-scale catchment in eastern Germany to perform, for the first time, a flood risk assessment with the novel approach of ‘derived flood risk based on continuous simulations’. For this purpose, RFM was driven by long-term synthetic meteorological input generated by a weather generator: a virtual climate time series of 100 × 100 years was produced and served as input to RFM, yielding 100 × 100 years of spatially consistent river discharge series, inundation patterns and damage values. On this basis, flood risk curves and the expected annual damage could be derived directly from damage data, providing a large-scale picture of flood risk. In contrast to traditional flood risk analysis, where homogeneous return periods are assumed for the entire basin, the presented approach provides a coherent large-scale picture of flood risk: the spatial variability of occurrence probability is respected, and data and methods are consistent. Catchment and floodplain processes are represented in a holistic way. Antecedent catchment conditions are implicitly taken into account, as are physical processes like storage effects, flood attenuation and channel-floodplain interactions, together with their damage-influencing effects. Finally, the simulation of a virtual period of 100 × 100 years, and consequently a large data set of flood loss events, enabled the calculation of flood risk directly from damage distributions, bypassing the problems associated with transferring probabilities of rainfall or peak runoff to probabilities of damage, as is often done in traditional approaches.
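The final step of the chain — reading the risk curve and the expected annual damage (EAD) straight off the synthetic damage series — can be sketched as follows. This is a minimal illustration (function names, the Weibull plotting position and the synthetic damage distribution are my own assumptions, not taken from the thesis):

```python
import numpy as np

def derived_flood_risk(annual_damage):
    """Empirical flood risk curve and expected annual damage (EAD) taken
    directly from a (synthetic) series of annual damage values, instead of
    transferring rainfall/discharge probabilities to damage."""
    dmg = np.sort(np.asarray(annual_damage, dtype=float))[::-1]  # largest first
    n = len(dmg)
    aep = np.arange(1, n + 1) / (n + 1)  # annual exceedance probability (Weibull)
    ead = dmg.mean()                     # EAD = mean of the annual damage series
    return aep, dmg, ead

# e.g. 10 000 synthetic years, with damage occurring only in a minority of years
rng = np.random.default_rng(42)
damage = np.where(rng.random(10_000) < 0.05,
                  rng.lognormal(2.0, 1.0, 10_000), 0.0)
aep, dmg_sorted, ead = derived_flood_risk(damage)
```

Plotting `dmg_sorted` against `aep` gives the risk curve; because every simulated year carries a spatially consistent damage value, no return-period assumption for the whole basin is needed.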
RFM and the ‘derived flood risk approach based on continuous simulations’ have the potential to provide flood risk statements for national planning, reinsurance purposes and other questions where spatially consistent, large-scale assessments are required.
Age of acquisition (AOA) is a psycholinguistic variable that significantly influences behavioural measures (response times and accuracy rates) in tasks requiring lexical and semantic processing. Unlike the origin of semantic typicality (TYP), which is assumed to lie at the semantic level, the origin of AOA effects is controversial: different theories propose that they arise either at the semantic level or at the link between semantics and phonology (the lemma level).
The dissertation aims at investigating the influence of AOA, and its interdependence with the semantic variable TYP, on semantic processing in particular, in order to pinpoint the origin of AOA effects. To this end, three studies were conducted that considered the variables AOA and TYP in semantic processing tasks (category verification and animacy decisions), using behavioural and partly electrophysiological (ERP) data, in different populations (healthy young and elderly participants and semantically impaired individuals with aphasia (IWA)).
The behavioural and electrophysiological data of the three studies provide evidence for distinct processing levels of the variables AOA and TYP. The data further support previous assumptions of a semantic origin for TYP but question the same for AOA. Instead, the findings support an origin of AOA effects at the transition between the word form (phonology) and the semantic level, one that can be captured at the behavioural but not at the electrophysiological level.
In the interest of producing functional catalysts from sustainable building blocks, 1,3-dicarboxylate imidazolium salts derived from amino acids were successfully modified to serve as N-heterocyclic carbene (NHC) ligands in metal complexes. Complexes of Ag(I), Pd(II) and Ir(I) were produced by known procedures using ligands derived from glycine, alanine, β-alanine and phenylalanine. The complexes were characterized in the solid state by X-ray crystallography, which allowed the steric and electronic comparison of these ligands with well-known NHC ligands in analogous metal complexes.
The palladium complexes were tested as catalysts for aqueous-phase Suzuki-Miyaura cross-coupling. Water solubility could be induced via ester hydrolysis of the N-bound groups in the presence of base. The mono-NHC-Pd complexes proved highly active in the coupling of aryl bromides with phenylboronic acid; the active catalyst was determined to be mostly Pd(0) nanoparticles. Kinetic studies showed that the coupling of bromoacetophenone proceeds quickly, whether the catalyst was dissolved after pre-hydrolysis or hydrolyzed in situ. The catalyst could also be recycled for an additional run by simply reusing the aqueous layer.
The imidazolium salts were also used to produce organosilica hybrid materials. Two methods were attempted: post-grafting onto a commercial organosilica, and co-condensation of the corresponding organosilane. The co-condensation technique holds potential for the production of solid-supported catalysts.
Over the last decades, the world’s population has been growing at an ever-faster rate, resulting in increased urbanisation, especially in developing countries. More than half of the global population currently lives in urbanised areas, and this share continues to rise. The growth of cities entails a significant loss of vegetation cover, soil compaction and sealing of the soil surface, which in turn leads to high surface runoff during high-intensity storms and causes accelerated soil water erosion on streets and building grounds. Accelerated soil water erosion is a serious environmental problem in cities, as it contaminates aquatic bodies, reduces groundwater recharge and increases land degradation, and also damages urban infrastructure, including drainage systems, houses and roads. Understanding water erosion in urban settings is essential for the sustainable planning and management of cities prone to it. However, despite the vast scientific literature on water erosion in rural regions, a concrete understanding of the underlying dynamics of urban erosion remains inadequate for urban dryland environments.
This study aimed at assessing water erosion and the associated socio-environmental determinants in a typical dryland urban area, using the city of Windhoek, Namibia, as a case study. A multidisciplinary approach was applied: an in-depth literature review of current research approaches and challenges of urban erosion, a field survey to quantify the spatial extent of urban erosion in the dryland city of Windhoek, and face-to-face interviews with semi-structured questionnaires to analyse stakeholders’ perceptions of urban erosion.
The review revealed that around 64% of the reviewed studies were conducted in the developed world, and that very little research has been carried out in regions with extreme climates, including drylands. Furthermore, the applied methods for erosion quantification and monitoring neither include typical urban features nor are specific to urban areas, and the reviewed literature lacked work addressing climate change and policies regarding erosion in cities. In the field study, the spatial extent and severity of erosion in the dryland city of Windhoek were quantified; the results show that nearly 56% of the city is affected by water erosion, with signs of accelerated erosion in the form of rills and gullies occurring mainly in the underdeveloped, informal and semi-formal areas of the city. Factors influencing the extent of erosion in Windhoek included vegetation cover and type, socio-urban factors and, to a lesser extent, slope estimates. A comparison of an interpolated field-survey erosion map with a conventional erosion assessment tool (the Universal Soil Loss Equation) showed a large deviation in spatial patterns, which underlines the inappropriateness of traditional non-urban erosion tools for urban settings and emphasises the need to develop new erosion assessment and management methods for urban environments. It was concluded that measures for controlling water erosion in the city need to be site-specific, as the extent of erosion varies greatly across the city.
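For reference, the conventional tool used in that comparison, the Universal Soil Loss Equation, is a simple multiplicative model; the sketch below uses illustrative factor values of my own, not values from the Windhoek study.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P, where
    A  mean annual soil loss (t/ha/yr),
    R  rainfall erosivity factor,     K  soil erodibility factor,
    LS slope length/steepness factor, C  cover management factor,
    P  support practice factor."""
    return R * K * LS * C * P

# illustrative values only: yields roughly 18 t/ha/yr
loss = usle_soil_loss(R=100, K=0.3, LS=1.2, C=0.5, P=1.0)
```

The equation's factors encode rural field conditions (rainfall, soil, slope, cover, practice), which is precisely why its spatial patterns diverge from the field-survey map in a sealed, heterogeneous urban setting.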
The study also analysed stakeholders’ perceptions and understanding of urban water erosion in Windhoek by interviewing 41 stakeholders with semi-structured questionnaires. The analysis addressed their understanding of water erosion dynamics, their perceptions of the causes and seriousness of erosion damage, and their attitudes towards responsibility for urban erosion. The results indicated little awareness of the process as a phenomenon; awareness centred instead on erosion damage and the factors contributing to it. About 69% of the stakeholders rated erosion damage as moderate to very serious, though with notable disparities between the private-householder and public-authority groups. The study further found that stakeholders have no clear understanding of their responsibilities for managing control measures and paying for damage: private householders and local-authority sectors each pointed at the other as responsible for paying for erosion damage and for putting up prevention measures. This reluctance to take responsibility could create a predicament for affected areas, specifically in informal settlements, where land management is not carried out by the local authority and the land is not owned by its occupants.
The study concluded that, in order to combat urban erosion, it is crucial to understand the diverse dynamics through which urbanisation aggravates the process, at different scales. Accordingly, it suggests an urgent need for urban-specific approaches that aim at: (a) incorporating the diverse socio-economic and environmental aspects influencing erosion, (b) scientifically improving the natural cycles that govern water storage and plant nutrients in urbanised dryland areas in order to increase vegetation cover, (c) using high-resolution satellite images to improve the adopted methods for assessing urban erosion, (d) developing water erosion policies, and (e) continuously monitoring the impact of erosion and the influencing processes at local, national and international levels.
The main objective of this dissertation is to analyse the prerequisites, expectations, apprehensions and attitudes of students of computer science pursuing a bachelor’s degree. The research also investigates the students’ learning styles according to the Felder-Silverman model. These investigations are part of an attempt to reduce the dropout rate among students and to suggest a better learning environment.
The first investigation is a survey conducted at the computer science department of the University of Baghdad to investigate the attitudes of computer science students in an environment dominated by women, showing the differences in attitudes between male and female students in different study years. Students are admitted to university via a centrally controlled procedure that depends mainly on their final school score, which leads to a high percentage of students studying subjects they did not choose. Our analysis shows that 75% of the female students do not regret studying computer science although it was not their first choice, and according to statistics from previous years, women succeed in their studies and often graduate at the top of their class. We finish with a comparison of attitudes between freshman students of two different cultures and two different university enrolment procedures (the University of Baghdad in Iraq and the University of Potsdam in Germany), each with the opposite gender majority.
The second investigation took place at the department of computer science of the University of Potsdam in Germany and analyses the learning styles of students in the three major fields of study offered by the department (computer science, business informatics, and computer science teaching). Since students of these fields usually take some joint courses, knowing the differences in their learning styles indicates which changes in teaching methods are needed to address them. It was a two-stage study using two questionnaires: the main one based on the Index of Learning Styles questionnaire of B. A. Soloman and R. M. Felder, and a second one investigating the students’ attitudes towards the findings of their personal first questionnaire. Our analysis shows differences in learning-style preferences between male and female students as well as between students of the different specialties (computer science, business informatics, and computer science teaching).
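The Index of Learning Styles scores each of its four dimensions (active/reflective, sensing/intuitive, visual/verbal, sequential/global) from 11 forced-choice ‘a’/‘b’ items. A minimal scoring sketch follows; the function is my own, and the interpretation bands follow the standard ILS scoring sheet rather than anything specific to this dissertation:

```python
def ils_dimension_score(answers):
    """Score one ILS dimension (e.g. active/reflective) from 11 'a'/'b'
    answers. The result, #a - #b, is an odd value in -11..+11: the sign
    gives the preferred pole, the magnitude the strength of preference."""
    score = answers.count('a') - answers.count('b')
    if abs(score) <= 3:
        band = 'balanced'
    elif abs(score) <= 7:
        band = 'moderate preference'
    else:
        band = 'strong preference'
    return score, band
```

Comparing the per-dimension score distributions across the three study fields is then a matter of ordinary descriptive statistics on these -11..+11 values.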
The third investigation looks closely into the difficulties, issues, apprehensions and expectations of freshman students of computer science. The study took place at the computer science department of the University of Potsdam with a volunteer sample of students. The goal is to determine and discuss the difficulties and issues they face in their studies that may lead them to think about dropping out, changing the field of study, or changing the university. The research followed the same sample of students (with business informatics students in the majority) through more than three semesters, documenting difficulties and issues during the studies as well as students’ attitudes, apprehensions and expectations. Some professors’ and lecturers’ opinions and solutions to some of the students’ problems were also documented. Many participants had apprehensions and difficulties, especially towards informatics subjects. Some business informatics participants began to think of changing university, in particular when they reached their third semester; others thought about changing their field of study. By the end of this research, most of the participants had continued their studies (the ones they started with or the new ones they changed to) without leaving the higher education system.
Changing the perspective sometimes offers completely new insights into an already well-known phenomenon. Exercise behavior, defined as planned, structured and repeated bodily movements with the intention to maintain or increase physical fitness (Caspersen, Powell, & Christenson, 1985), is such a well-known phenomenon, one that has been in the scientific focus for many decades (Dishman & O’Connor, 2005). Within these decades, a perspective that assumes rational and controlled evaluations as the basis of decision making was predominantly used to understand why some people engage in physical activity and others do not (Ekkekakis & Zenko, 2015).
Dual-process theories (Ekkekakis & Zenko, 2015; Payne & Gawronski, 2010) provide another perspective, one not exclusively shaped by rational reasoning. These theories differentiate two processes that guide behavior “depending on whether they operate automatically or in a controlled fashion“ (Gawronski & Creighton, 2012, p. 282). Following this line of thought, exercise behavior is not solely influenced by thoughtful deliberations (e.g. concluding that exercising is healthy) but also by spontaneous affective reactions (e.g. disliking being sweaty while exercising). The theoretical frameworks of dual-process models are not new in psychology (Chaiken & Trope, 1999) and have already been used to explain numerous behaviors (e.g. Hofmann, Friese, & Wiers, 2008; Huijding, de Jong, Wiers, & Verkooijen, 2005). However, they have only rarely been applied to exercise behavior (e.g. Bluemke, Brand, Schweizer, & Kahlert, 2010; Conroy, Hyde, Doerksen, & Ribeiro, 2010; Hyde, Doerksen, Ribeiro, & Conroy, 2010). The assumption of two dissimilar behavior-influencing processes differs fundamentally from previous theories and thus from the research conducted in recent decades in exercise psychology, which mainly concentrated on predictors of the controlled processes and addressed those predictors in exercise interventions (Ekkekakis & Zenko, 2015; Hagger, Chatzisarantis, & Biddle, 2002).
Predictors arising from the automatic processes described above, such as automatic evaluations of exercising (AEE), have been neglected in exercise psychology for many years. Until now, only a few researchers have investigated the influence of AEE on exercise behavior (Bluemke et al., 2010; Brand & Schweizer, 2015; Markland, Hall, Duncan, & Simatovic, 2015); marginally more have focused on the impact of AEE on physical activity behavior (Calitri, Lowe, Eves, & Bennett, 2009; Conroy et al., 2010; Hyde et al., 2010; Hyde, Elavsky, Doerksen, & Conroy, 2012). The extant studies mainly examined the quality of AEE and the associated quantity of exercise (exercising much or little; Bluemke et al., 2010; Calitri et al., 2009; Conroy et al., 2010; Hyde et al., 2012). In sum, there is still a dramatic lack of empirical knowledge when applying dual-process theories to exercise behavior, even though these theories have proven successful in explaining behavior in many other health-relevant domains such as eating, drinking or smoking (e.g. Hofmann et al., 2008).
The main goal of the present dissertation was to collect empirical evidence for the influence of AEE on exercise behavior and to extend the so far exclusively correlational studies with experimentally controlled ones, thereby encouraging the ongoing debate on a paradigm shift from controlled and deliberative accounts of exercise behavior towards approaches that consider automatic and affective influences (Ekkekakis & Zenko, 2015). All three publications are embedded in dual-process theorizing (Gawronski & Bodenhausen, 2006, 2014; Strack & Deutsch, 2004). These theories offer a framework that integrates the established controlled variables of exercise behavior explanation and additionally considers automatic factors such as AEE.
Taken together, the collected empirical findings suggest that AEE play an important and diverse role in exercise behavior: they represent exercise-setting preferences, cause short-term exercise decisions and are decisive for long-term exercise adherence. Adding to the few existing studies in this field, the influence of (positive) AEE on exercise behavior was confirmed in all three publications. Even though the available set of studies needs to be extended in prospective studies, first steps towards a more complete picture have been taken. To close where the synopsis began: I think the time is right for a change of perspective, namely a careful extension of the present theories, in which controlled evaluations explain exercise behavior. Dual-process theories covering both controlled and automatic evaluations could provide such a basis for future research in exercise psychology.
Due to the increase in metabolic disorders and diseases in the world population, medicine and the life sciences are increasingly searching for prevention strategies and targets that promote health, help prevent disease and thereby also ease the overall burden on health care systems. Nutrition is seen as one such target, since the consumption of saturated fats in particular appears to affect health adversely. It is often overlooked, however, that many studies do not sufficiently separate high-fat diets from the effects of a hypercaloric energy intake, so that the evidence on the influence of (saturated) fats on metabolism at constant energy intake is still insufficient.
In the NUtriGenomic Analysis in Twins (NUGAT) study, 46 twin pairs (34 monozygotic, 12 dizygotic) were standardized in their dietary behavior over a period of six weeks with a carbohydrate-rich, low-fat diet following the guidelines of the German Nutrition Society, before switching to a low-carbohydrate, high-fat diet rich in particular in saturated fats for another six weeks. Both diets were adjusted to the individual energy requirements of the participants, so that changes in metabolism attributable to the increased intake of (saturated) fats could be observed both acutely after one week and in the longer term after six weeks.
The data sets generated through the detailed characterization of the participants on the clinical examination days were analysed with statistical and mathematical methods (e.g. linear mixed models) appropriate to the size of the data sets and thus to their information content.
It could be shown that the metabolically healthy and relatively young participants, who showed good compliance, were able to adapt their glucose metabolism: the acute response after one week in fasting insulin and in the insulin resistance index was compensated over the following five weeks.
Lipid metabolism, as reflected in classical markers such as total cholesterol, LDL and HDL, was affected more strongly and remained clearly elevated even after six weeks.
The latter supports the observation in the transcriptome of white subcutaneous adipose tissue, where an activation of subclinical inflammation mediated via Toll-like receptors and the inflammasome was observed.
The changes in concentration and composition of the plasma lipidome likewise showed only a partial counter-regulation limited to certain lipid species.
It can therefore be concluded that even the isocaloric intake of (saturated) fats leads to changes in metabolism, although the effects need to be examined more closely in further (long-term) studies and experiments. Of particular interest would be a longer period under isocaloric conditions and the study of participants with pre-existing metabolic conditions (e.g. insulin resistance).
Beyond this, NUGAT also showed that nutrigenetics and nutrigenomics are two factors that cannot be neglected: among other things, the concentrations of several lipid species showed strong heritability and diet dependence.
Moreover, the results suggest that current and planned prevention strategies and medical treatments need to involve the patient as an individual much more strongly, as the data analysis identified interindividual differences and provided evidence that some participants were better able to compensate for the adverse metabolic effects of a high-fat diet than others.
Bitter taste warns the organism against potentially spoiled or toxic food and is thus an important control mechanism. In the mouse, the initial detection of the numerous bitter compounds is carried out by 35 bitter taste receptors (Tas2rs) located in the tongue tissue. The taste information is then transmitted from the tongue via the peripheral nervous system (PNS) to the central nervous system (CNS), where it is processed. This processing of taste information has not yet been fully elucidated. Recent studies point to an expression of Tas2rs also in the PNS and CNS along the taste pathway. So far, little is known about the occurrence and functions of these receptors and receptor cells in the nervous system.
In this work, Tas2r expression was investigated in different mouse models, Tas2r-expressing cells were identified, and their functions in the transmission of taste information were analysed. Expression analyses by qRT-PCR demonstrated the expression of 25 of the 35 known bitter taste receptors in the mouse central nervous system. The expression patterns in the PNS and CNS furthermore suggest functions in different parts of the nervous system. Building on the expression analyses, strongly expressed Tas2rs could be visualized in different cell types by in situ hybridization. In addition, immunohistochemical staining using a genetically modified mouse model confirmed the results of the expression analyses: taking the Tas2r131 receptor as an example, it showed expression of Tas2rs in cholinergic, dopaminergic, GABAergic, noradrenergic and glycinergically innervated projection neurons as well as in interneurons. The results of this work thus demonstrate for the first time the presence of Tas2rs in different neuronal cell types across large parts of the CNS, suggesting that Tas2r-expressing cells potentially serve multiple functions. Behavioral experiments in genetically modified mice were used to investigate the possible function of Tas2r131-expressing neurons (Tas2r131 neurons) in taste perception. The results point to an involvement of Tas2r131 neurons in the signal transmission and processing of taste information for a subset of bitter compounds. The analyses further show that Tas2r131 neurons are not involved in the taste perception of other bitter compounds or of taste stimuli of other qualities (sweet, umami, sour, salty).
A specific “Tas2r131 bitter taste pathway”, whose signaling routes and processing areas partly overlap with and are partly independent of other potential “bitter pathways”, provides a possible cellular basis for discriminating between bitter substances. The hypothesis of a potential discrimination of bitter compounds that emerged from this work should therefore be tested in follow-up studies by establishing a behavioral assay in mice.
The Milky Way is only one out of billions of galaxies in the universe. However, it is a special galaxy because it allows us to explore the main mechanisms of its evolution and formation history by unpicking the system star by star. In particular, the chemical fingerprints of its stars provide clues to and evidence of past events in the Galaxy’s lifetime. This information helps not only to decipher the current structure and building blocks of the Milky Way, but also to learn more about the formation of galaxies in general.
In the past decade, a multitude of Galactic stellar spectroscopic surveys have scanned millions of stars far beyond the rim of the solar neighbourhood. The obtained spectroscopic information provides unprecedented insights into the chemo-dynamics of the Milky Way. In addition, analytic models and numerical simulations of the Milky Way provide the descriptions and predictions needed for comparison with observations in order to decode the physical properties that underlie the complex system of the Galaxy.
In this thesis, various approaches are taken to connect modern theoretical modelling of galaxy formation and evolution with observations from Galactic stellar surveys. Focusing on the chemo-kinematics of the Galactic disk, this work aims to determine new observational constraints on the formation of the Milky Way, providing proper comparisons with two different models. These are the population synthesis model TRILEGAL, based on analytical distribution functions, which aims to simulate the number and distribution of stars in the Milky Way and its different components, and a hybrid model (MCM) that combines an N-body simulation of a Milky Way-like galaxy in the cosmological framework with a semi-analytic chemical evolution model for the Milky Way. The major observational data sets in use come from two surveys, namely the “Radial Velocity Experiment” (RAVE) and the “Sloan Extension for Galactic Understanding and Exploration” (SEGUE).
In the first approach, the chemo-kinematic properties of the thin and thick disk of the Galaxy, as traced by a selection of about 20,000 SEGUE G-dwarf stars, are directly compared to the predictions of the MCM model. As a necessary precondition, SEGUE's selection function and its survey volume are evaluated in detail to correct the spectroscopic observations for their survey-specific selection biases. In addition, based on a Bayesian method, spectro-photometric distances with uncertainties below 15% are computed for the selection of SEGUE G-dwarfs, which are studied up to a distance of 3 kpc from the Sun.
For the second approach, two synthetic versions of the SEGUE survey are generated based on the above models. The resulting synthetic stellar catalogues are then used to create mock samples that best resemble the compiled sample of observed SEGUE G-dwarfs. Mock samples are not only ideal for comparing predictions from various models; they also allow the models' quality to be validated and improved, as this work achieved in particular for TRILEGAL. While TRILEGAL reproduces the statistical properties of the thin and thick disk as seen in the observations, the MCM model has proven more suitable for reproducing many of the chemo-kinematic correlations revealed by the SEGUE stars. However, evidence has been found that the MCM model may be missing a stellar component with the properties of the thick disk that the observations clearly show. While the SEGUE stars do indicate a thin-thick dichotomy of the stellar Galactic disk, in agreement with other spectroscopic stellar studies, no sign of a distinct metal-poor disk is seen in the MCM model.
Stellar spectroscopic surveys are usually limited to a certain volume around the Sun, covering different regions of the Galaxy’s disk. This often prevents a global view of the chemo-dynamics of the Galactic disk. Hence, a suitable combination of stellar samples from independent surveys is not only useful for verifying results but also helps to complete the picture of the Milky Way. The thesis therefore closes with a comparison of the SEGUE G-dwarfs and a sample of RAVE giants. The comparison reveals that the chemo-kinematic relations agree in those disk regions where the samples of both surveys contain a similar number of stars, while in those parts of the survey volumes where one of the surveys lacks statistics, they beautifully complement each other. This demonstrates that theoretical models on the one side, and combined observational data gathered by multiple surveys on the other, are key ingredients for understanding and disentangling the structure and formation history of the Milky Way.
The energy sector is both affected by climate change and a key sector for climate protection measures. Energy security is the backbone of our modern society and guarantees the functioning of most critical infrastructure. Decision makers and energy suppliers in different countries should therefore be familiar with the factors that increase or decrease the susceptibility of their electricity sector to climate change. Susceptibility here means the socioeconomic and structural characteristics of the electricity sector that affect the demand for and supply of electricity under climate change. Moreover, the relevant stakeholders need to know whether the given national energy and climate targets are feasible and what must be done to meet them. In this regard, a focus should be on the residential building sector, as it is one of the largest energy consumers, and therefore emitters of anthropogenic CO2, worldwide.
This dissertation addresses the first aspect, namely the susceptibility of the electricity sector, by developing a ranked index that allows a quantitative comparison of the electricity sector susceptibility of 21 European countries based on 14 influencing factors. Such a ranking had not been compiled to date. We applied a sensitivity analysis to test the relative effect of each influencing factor on the susceptibility ranking, and we discuss the reasons for the ranking positions, and thus the susceptibility, of selected countries. The second objective, namely the impact of climate change on the energy demand of buildings, is tackled by means of a new model with which the heating and cooling energy demand of residential buildings can be estimated. As examples, we applied the model to Germany and the Netherlands. It considers projections of future changes in population, climate and the insulation standards of buildings, whereas most existing studies take into account fewer than three factors that influence the future energy demand of buildings. Furthermore, we developed a comprehensive retrofitting algorithm with which the total residential building stock can be modeled, for the first time, for each year in the past and future.
The study confirms that there is no correlation between the geographical location of a country and its position in the electricity sector susceptibility ranking. Moreover, we found no pronounced pattern of susceptibility influencing factors between countries that ranked higher or lower in the index. We illustrate that Luxembourg, Greece, Slovakia and Italy are the countries with the highest electricity sector susceptibility. The electricity sectors of Norway, the Czech Republic, Portugal and Denmark were found to be least susceptible to climate change. Knowledge about the most important factors for the poor and good ranking positions of these countries is crucial for finding adequate adaptation measures to reduce the susceptibility of the electricity sector. Therefore, these factors are described within this study.
We show that the heating energy demand of residential buildings will strongly decrease in both Germany and the Netherlands in the future. The analysis for the Netherlands focused on the regional level and a finer temporal resolution, which revealed strong variations in future heating energy demand by province and by month. In the German study, we additionally investigated the future cooling energy demand and demonstrated that it will increase only slightly up to the middle of this century. Increases in cooling energy demand are thus not expected to offset reductions in heating energy demand. The main driver of substantial heating energy demand reductions is the retrofitting of buildings. We are the first to show that the given German and Dutch energy and climate targets in the building sector can only be met if the annual retrofitting rates are substantially increased. The current rate of only about 1 % of the total building stock per year is insufficient for reaching a nearly zero-energy demand of all residential buildings by the middle of this century; to reach this target, it would need to be at least tripled. To sum up, this thesis emphasizes that country-specific characteristics are decisive for the electricity sector susceptibility of European countries. It also shows, for different scenarios, how much energy will be needed in the future to heat and cool residential buildings. With this information, existing climate mitigation and adaptation measures can be justified or new actions encouraged.
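The arithmetic behind the retrofitting-rate claim can be sketched in a few lines (a hedged illustration: the ~1 % annual rate is quoted above, while the 2015-2050 horizon and the assumption that the rate refers to the total stock are mine):

```python
# Back-of-the-envelope check of the retrofitting-rate argument. The ~1 % per
# year rate is quoted in the text; the 2015-2050 horizon and the assumption
# that the rate is expressed relative to the total stock are illustrative.
def retrofitted_share(annual_rate: float, years: int) -> float:
    """Share of the total building stock retrofitted after `years` at a constant rate."""
    return min(annual_rate * years, 1.0)

years = 2050 - 2015  # 35 years to mid-century (assumed horizon)
print(f"at 1 %/yr: {retrofitted_share(0.01, years):.0%} of the stock")  # 35%
print(f"at 3 %/yr: {retrofitted_share(0.03, years):.0%} of the stock")  # 100%
```

Under these assumptions, the current rate covers only about a third of the stock by mid-century, while a tripled rate covers it entirely, consistent with the "at least tripled" conclusion.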
Extreme hydro-meteorological events, such as severe droughts or heavy rainstorms, are primary manifestations of climate variability and exert a critical impact on the natural environment and human society. This is particularly true for high-mountain areas such as the eastern flank of the southern Central Andes of NW Argentina, a region impacted by the deep convection processes that underlie extreme events, often resulting in floods, a variety of mass movements, and hillslope processes. The region is characterized by pronounced E-W gradients in topography, precipitation, and vegetation cover, spanning from low- to medium-elevation, humid and densely vegetated areas to high-elevation, arid and sparsely vegetated environments. This strong E-W gradient is mirrored by differences in the efficiency of surface processes, which mobilize and transport large amounts of sediment through the fluvial system, from the steep hillslopes to the intermontane basins and further to the foreland. In such a highly sensitive high-mountain environment, even small changes in the spatiotemporal distribution, magnitude and rates of extreme events may strongly impact environmental conditions, anthropogenic activity, and the well-being of mountain communities and beyond. However, although the NW Argentine Andes comprise the catchments of the La Plata river, which traverses one of the most populated and economically relevant areas of South America, there are only a few detailed investigations of climate variability and extreme hydro-meteorological events.
In this thesis, I focus on deciphering the spatiotemporal variability of rainfall and river discharge, with particular emphasis on extreme hydro-meteorological events in the subtropical southern Central Andes of NW Argentina during the past seven decades. I employ various methods to assess and quantify statistically significant trend patterns of rainfall and river discharge, integrating high-quality daily time series from gauging stations (40 rainfall and 8 river discharge stations) with gridded datasets (CPC-uni and TRMM 3B42 V7) for the period between 1940 and 2015. Both the rainfall and the river-discharge time-series analyses provide evidence for a general intensification of the hydrological cycle at intermediate elevations (~0.5 – 3 km asl) on the eastern flank of the southern Central Andes during this period. This intensification is associated with increases in the annual total rainfall and the mean annual discharge. However, the most pronounced trends are found at high percentiles, i.e. for extreme hydro-meteorological events, particularly during the wet season from December to February. An important outcome of my studies is the recognition of a rapid increase in river discharge during the period between 1971 and 1977, most likely linked to the 1976-77 global climate shift, which is associated with North Pacific Ocean sea surface temperature variability. Interestingly, after this rapid increase, both rainfall and river discharge decreased at low and intermediate elevations along the eastern flank of the Andes. In contrast, during the same time interval, extensive areas at high elevations on the arid Puna de Atacama plateau recorded increasing annual rainfall totals, associated with more intense extreme hydro-meteorological events from 1979 to 2014.
This part of the study reveals that the low-, intermediate-, and high-elevation sectors of the Andes of NW Argentina respond differently to changing climate conditions.
Possible forcing mechanisms of the pronounced hydro-meteorological variability observed in the study area are also investigated. For the period between 1940 and 2015, I analyzed modes of oscillation of river discharge from small to medium drainage basins (10² to 10⁴ km²) located on the eastern flank of the orogen. First, I decomposed the relevant monthly time series using the Hilbert-Huang Transform, which is particularly appropriate for non-stationary time series resulting from non-linear natural processes. In the study region, discharge variability can be described by five quasi-periodic oscillatory modes on timescales varying from 1 to ~20 years. Second, I tested the link between river-discharge variations and large-scale climate modes of variability using different climate indices, such as the BEST ENSO (Bivariate El Niño-Southern Oscillation Time-series) index. This analysis reveals that, although most of the variance on the annual timescale is associated with the South American Monsoon System, a relatively large part of river-discharge variability is linked to Pacific Ocean variability (PDO phases) at multi-decadal timescales (~20 years). To a lesser degree, river-discharge variability is also linked to the Tropical South Atlantic (TSA) sea surface temperature anomaly at multi-annual timescales (~2-5 years).
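The Hilbert-Huang workflow described above first decomposes a series into intrinsic mode functions (empirical mode decomposition) and then derives instantaneous frequencies via the Hilbert transform. A minimal sketch of the second step, applied to a synthetic quasi-periodic mode (all signal parameters are illustrative, not values from the discharge records):

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic monthly "mode": a quasi-periodic oscillation with a 2-year period
# (0.5 cycles/year). In the actual analysis, such modes come out of the
# empirical mode decomposition of the discharge series.
dt = 1.0 / 12.0                    # monthly sampling, in years
t = np.arange(0.0, 40.0, dt)      # 40 years of data
mode = np.cos(2 * np.pi * 0.5 * t)

# Hilbert transform -> analytic signal -> instantaneous phase and frequency
analytic = hilbert(mode)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi * dt)  # cycles per year

# Discard the edges, where the transform suffers from boundary effects
core = inst_freq[24:-24]
print(f"mean instantaneous frequency: {core.mean():.2f} cycles/yr")
```

The recovered instantaneous frequency fluctuates around the 0.5 cycles/yr of the input mode; for real discharge modes the same quantity varies in time, which is what makes the method suitable for non-stationary records.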
Taken together, these findings exemplify the high degree of sensitivity of high-mountain environments to climatic variability and change. This is particularly true for the topographic transitions between the humid low to moderate elevations and the semi-arid to arid highlands of the southern Central Andes. These areas of the mountain belt react to even subtle changes in the hydro-meteorological regime with major impacts on erosional hillslope processes, generating mass movements that fundamentally affect the transport capacity of mountain streams. Together with more severe storms in these areas, the fluvial system is characterized by pronounced variability of stream power on different timescales, leading to cycles of sediment aggradation, the loss of agricultural land and severe impacts on infrastructure.
Discourse production is crucial for communicative success and lies at the core of aphasia assessment and treatment. Coherence differentiates discourse from a mere series of utterances or sentences; it is its internal unity and connectedness and, as such, perhaps the most inherent property of discourse. It is unclear whether people with aphasia, who experience various language production difficulties, retain the ability to produce coherent discourse. The more general question of how coherence is established and represented linguistically has been addressed in the literature, yet remains unanswered. This dissertation presents an investigation of discourse production in aphasia and the linguistic mechanisms of establishing coherence.
This cumulative dissertation contains four self-contained articles related to EU regional policy and its structural funds as the overall research topic. In particular, the thesis addresses the question of whether EU regional policy interventions can be scientifically justified and legitimated on theoretical and empirical grounds from an economics point of view. The first two articles of the thesis (“The EU structural funds as a means to hamper migration” and “Internal migration and EU regional policy transfer payments: a panel data analysis for 28 EU member countries”) enter into one particular aspect of this debate. They analyse, theoretically and empirically, whether regional policy or the market force of the free flow of labour (migration) in the internal European market is the better instrument to improve and harmonise the living and working conditions of EU citizens. Based on neoclassical market failure theory, the first paper argues that the structural funds of the EU inhibit internal migration, which is one of the key mechanisms for achieving convergence among the nations in the single European market. It becomes clear that a European regional policy aiming at economic growth and cohesion among the member states cannot be justified and legitimated if the structural funds hamper instead of promote migration. The second paper, however, shows that the empirical evidence on the migration and regional policy nexus is ambiguous: different empirical investigations find that EU structural funds both hamper and promote EU internal migration. Hence, the question of the scientific justification and legitimisation of EU regional policy cannot be readily answered on empirical grounds. This finding is unsatisfying but in line with previous theoretical and empirical literature.
That is why I take a step back and reconsider the theoretical starting point of the thesis, which took for granted neoclassical market failure theory as the basis for the positive explanation as well as the normative justification and legitimisation of EU regional policy. The third article of the thesis (“EU regional policy: theoretical foundations and policy conclusions revisited”) deals with the theoretical explanation and legitimisation of EU regional policy as well as the policy recommendations given to EU regional policymakers deduced from neoclassical market failure theory. The article elucidates that neoclassical market failure is a normative concept that justifies and legitimates EU regional policy on the basis of a political, and thus subjective, goal or value judgement. It can therefore neither be used to give a scientifically positive explanation of the structural funds nor to derive objective and practically applicable policy instruments. Given this critique of neoclassical market failure theory, the third paper consequently calls into question the widely prevalent explanation and justification of EU regional policy given in static neoclassical equilibrium economics. It argues that an evolutionary non-equilibrium economics perspective on EU regional policy is much more appropriate for providing a realistic understanding of one of the largest policies conducted by the EU. However, this means neither that evolutionary economic theory can be unreservedly seen as a panacea for positively explaining EU regional policy nor that objective policy instruments for EU regional policymakers can be derived from it. This issue is discussed in the fourth article of the thesis (“Market failure vs. system failure as a rationale for economic policy? A critique from an evolutionary perspective”), which reconsiders the explanation of economic policy from an evolutionary economics perspective.
It contrasts the neoclassical equilibrium notions of market and government failure with the dominant evolutionary neo-Schumpeterian and Austrian-Hayekian perceptions. Based on this comparison, the paper criticises the fact that neoclassical failure reasoning still prevails in non-equilibrium evolutionary economics when economic policy issues are examined. This is surprising, since proponents of evolutionary economics usually view their approach as incompatible with its neoclassical counterpart. The paper therefore argues that, in order to prevent the otherwise fruitful and more realistic evolutionary approach from undermining its own criticism of neoclassical economics, and to create a consistent as well as objective evolutionary policy framework, it is necessary to eliminate the equilibrium spirit. Taken together, the main finding of this thesis is that European regional policy and its structural funds can be justified and legitimated neither theoretically nor empirically from an economics point of view. Moreover, the thesis finds that the prevalent positive and instrumental explanation of EU regional policy given in the literature needs to be reconsidered, because the underlying theories can neither scientifically explain the emergence and development of this policy nor yield objective and scientific policy instruments for EU regional policymakers.
Water scarcity, adaptation to climate change, and the risk assessment of droughts and floods are critical topics for science and society today. Monitoring and modeling of the hydrological cycle are prerequisites for understanding and predicting the consequences for weather and agriculture. As soil water storage plays a key role in the partitioning of water fluxes between the atmosphere, biosphere, and lithosphere, measurement techniques are required to estimate soil moisture states from small to large scales.
The method of cosmic-ray neutron sensing (CRNS) promises to close the gap between point-scale and remote-sensing observations, as its footprint had been reported to cover 30 ha. However, the methodology is rather young and requires highly interdisciplinary research to understand and interpret the response of neutrons to soil moisture. In this work, the signals of nine detectors were systematically compared, and correction approaches were revised to account for meteorological and geomagnetic variations. Neutron transport simulations were employed to precisely characterize the sensitive footprint area, which turned out to be 6–18 ha, highly local, and temporally dynamic. These results were experimentally confirmed by the significant influence of water bodies and dry roads. Furthermore, mobile measurements on agricultural fields and across different land-use types were able to accurately capture the various soil moisture states. It is further demonstrated that the corresponding spatial and temporal neutron data can be beneficial for mesoscale hydrological modeling. Finally, first tests with a gyrocopter have proven the concept of airborne neutron sensing, where the increased footprint is able to overcome local effects.
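A common way to turn corrected neutron counts into soil moisture is the shape-defining function of Desilets et al. (2010). A minimal sketch with the commonly cited coefficients; the calibration count rate N0 and the bulk density below are illustrative placeholders, not values from this work:

```python
# Sketch of the standard CRNS neutron-to-soil-moisture conversion
# (Desilets et al., 2010):  theta_grav(N) = a0 / (N / N0 - a1) - a2
# a0, a1, a2 are the commonly cited shape coefficients; N0 is the site-specific
# count rate over dry soil, obtained by calibration (value below is made up).
A0, A1, A2 = 0.0808, 0.372, 0.115

def volumetric_soil_moisture(n_counts: float, n0: float,
                             bulk_density: float = 1.4) -> float:
    """Volumetric soil moisture (m3/m3) from a corrected neutron count rate."""
    theta_grav = A0 / (n_counts / n0 - A1) - A2  # gravimetric water content
    return theta_grav * bulk_density             # convert via soil bulk density

n0 = 1000.0  # hypothetical calibration count rate over dry soil
# More neutrons reach the detector over drier soil: moisture falls as counts rise.
for n in (700.0, 850.0, 1000.0):
    print(n, round(volumetric_soil_moisture(n, n0), 3))
```

The inverse, non-linear shape of this curve is why the detector is most sensitive under dry conditions, and why the corrections for meteorological and geomagnetic variations mentioned above must be applied to N before the conversion.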
This dissertation not only bridges the gap between scales of soil moisture measurements. It also establishes a close connection between the two worlds of observers and modelers, and further aims to combine the disciplines of particle physics, geophysics, and soil hydrology to thoroughly explore the potential and limits of the CRNS method.
Das Widerspenstige bändigen
(2016)
In both school practice and the scientific literature, teachers' actions are regarded as a major influence on the quality of classroom teaching. Although extensive normative ideas about good teaching exist, little is known about the reasons teachers have for their pedagogical actions. Teachers' actions can only be adequately understood if education is conceived both as the transmission of culture to the next generation and as a process of self- and world-understanding originating in the learning subject. The demands these conceptions place on the teacher are necessarily contradictory; this is particularly true in a society marked by great cultural and social heterogeneity. Studies searching for relationships between personality, pedagogical knowledge or competencies and classroom action frequently presuppose that this action is determined, reducing it to cognitive aspects and characteristics oriented towards external norms. More fruitful for answering the question of reasons are works that describe professionalism as a way of relating to a particular structural framework, one shaped by contradictions and requiring decisions within the fields of tension of pedagogical relations. Subject-scientific learning theory offers a basis for understanding learning in institutional contexts that starts from the learning interests of the students. Teaching can accordingly be understood as supporting processes of self- and world-understanding through appreciation, understanding, and offers of alternative horizons of meaning. Teachers' actions can then be understood as a meaning-giving response to the resulting as well as institutional demands by means of societal structures of meaning.
The acting subject makes sense of itself and the world by means of meanings. These can be understood as reinterpretations of societal structures of meaning owed to the particularities of the individual's biography, social position, and life situation. In the empirical procedure, a transition from sequential to comparative analyses allows positionings to be reconstructed as thematically specific meaning-reason relations that reach beyond the concrete situation of action. From these, situation-independent structural moments of the object 'teaching at vocational schools' as well as complex, situation-related subjective meaning-reason patterns are derived. With the help of additional theoretical lenses, the key categories of 'interpretive power' and 'instrumental pedagogical relationship' can be developed from the empirical material as essential structural features. Since interpretive power depends on acceptance, and since in instrumental relationships a cooperative engagement with the object of teaching and learning occurs at best sporadically, these categories make it possible to understand asymmetric, metastable arrangements between a teacher and students. Empirically, interpretive power appears in the variants 'absolute claim', 'acceptance of fragility', and 'acceptance of the legitimacy of being called into question'. For the second key category, the variants 'structural shaping', 'unspecific generally human character', and 'external shaping' of the instrumental pedagogical relationship occur. The meaning-reason patterns partly exhibit inconsistencies and transitions in the positionings with respect to the variants described. Only for some of the patterns can efforts towards appreciation and understanding of the students be plausibly derived; the same holds for an openness to revising the patterns.
Patterns such as 'assertive-enduring readjustment', 'directive-personalizing practice' or 'regulating-flexible managing' are to be understood as modes of coping with the contingent pedagogical (conflict) situations to which the case descriptions refer. The respective teacher used this pattern in the case described, which, however, allows no conclusion as to which patterns the teacher would draw on in other cases. The results of the present work can serve as a heuristic or theoretical lens that supports teachers in making sense of their own pedagogical actions, for instance in further training designed as case consultation. Connections to other theoretical approaches to teachers' actions are possible, as is a revised classification of those approaches. The options for capturing this action through scientific approaches are thereby expanded.
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of the observations. Unraveling such transitions yields essential information for understanding the observed system’s intrinsic evolution and potential external influences. A precise detection of multiple changes is therefore of great importance for various research disciplines, such as the environmental sciences, bioinformatics and economics. The primary purpose of the detection approach introduced in this thesis is the investigation of transitions underlying direct or indirect climate observations. In order to develop a diagnostic approach capable of capturing such a variety of natural processes, the generic statistical features of central tendency and dispersion are employed in the light of Bayesian inversion. In contrast to established Bayesian approaches to multiple changes, the generic approach proposed in this thesis is not formulated in the framework of specialized partition models of high dimensionality requiring prior specification, but as a robust kernel-based approach of low dimensionality employing least informative prior distributions.
First, a local Bayesian inversion approach is developed to robustly infer the location and the generic patterns of a single transition. The analysis of synthetic time series comprising changes of different observational evidence, data loss and outliers validates the performance, consistency and sensitivity of the inference algorithm. To systematically investigate time series for multiple changes, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the weighted kernel inference results are composed into a proxy for the posterior distribution of multiple transitions. The detection approach is applied to environmental time series with documented changes from the Nile river at Aswan and the weather station Tuscaloosa, Alabama. The method’s performance confirms the approach as a powerful diagnostic tool for deciphering multiple changes underlying direct climate observations.
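The flavor of such Bayesian change-point inference can be conveyed with a deliberately minimal sketch: a posterior over the location of a single mean shift in a Gaussian series, with a flat location prior and maximum-likelihood plug-ins for the segment parameters. The thesis' kernel-based approach generalizes this to multiple transitions and robust generic features; all numbers below are synthetic:

```python
import numpy as np

def changepoint_posterior(x: np.ndarray) -> np.ndarray:
    """Posterior over single change-point locations k (split before index k),
    assuming Gaussian segments, a flat location prior, and ML plug-in
    estimates for each segment's mean and variance."""
    n = len(x)
    log_lik = np.full(n, -np.inf)
    for k in range(2, n - 1):              # require >= 2 points per segment
        left, right = x[:k], x[k:]
        # Profile log-likelihood of a two-segment Gaussian model (up to a constant)
        log_lik[k] = -0.5 * (len(left) * np.log(left.var())
                             + len(right) * np.log(right.var()))
    post = np.exp(log_lik - log_lik.max())  # stabilize before normalizing
    return post / post.sum()

rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(0.0, 1.0, 60),    # segment 1
                    rng.normal(4.0, 1.0, 60)])   # segment 2: mean shift at k = 60
post = changepoint_posterior(x)
print("most probable change point:", post.argmax())
```

With a clear shift, the posterior concentrates sharply around the true location; with weaker evidence it spreads out, which is exactly the uncertainty information a point estimate of the change time cannot provide.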
Finally, the kernel-based Bayesian inference approach is used to investigate a set of complex terrigenous dust records interpreted as climate indicators of the African region during the Plio-Pleistocene. A detailed inference unravels multiple transitions underlying these indirect climate observations, which are interpreted as conjoint changes. The identified conjoint changes coincide with established global climate events. In particular, the two-step transition associated with the establishment of the modern Walker circulation contributes to the current discussion about the influence of paleoclimate changes on the environmental conditions in tropical and subtropical Africa around two million years ago.
Knowledge of the local structure of rare earth elements (REE) in silicate and aluminosilicate melts is of fundamental interest for the geochemistry of magmatic processes, especially for a comprehensive understanding of REE partitioning processes in magmatic systems. It is generally accepted that REE partitioning is controlled by temperature, pressure, oxygen fugacity (in the case of polyvalent cations), and crystal chemistry. Little, however, is known about the influence of the melt composition itself. The aim of this work is to establish a relationship between the variation of REE partitioning with melt composition and the coordination chemistry of these REE in the melt.
To this end, melt compositions from Prowatke and Klemme (2005), which show a pronounced change in titanite/melt partition coefficients solely as a function of melt composition, as well as haplogranitic and haplobasaltic melt compositions representative of magmatic systems, were doped with La, Gd, Yb, and Y and synthesized as glasses. The melts varied systematically in the aluminum saturation index (ASI), covering a range of 0.115 to 0.768 for the Prowatke and Klemme (2005) compositions, 0.935 to 1.785 for the haplogranitic compositions, and 0.368 to 1.010 for the haplobasaltic compositions. In addition, the haplogranitic compositions were synthesized with 4 % H2O to study the influence of water on the local environment of the REE. X-ray absorption spectroscopy was applied to obtain information on the local structure of Gd, Yb, and Y. Analysis of the fine structure by EXAFS (extended X-ray absorption fine structure) spectroscopy provides quantitative information on the local environment, while RIXS (resonant inelastic X-ray scattering), together with the high-resolution near-edge structure (XANES, X-ray absorption near edge structure) extracted from it, provides qualitative information on possible coordination changes of La, Gd, and Yb in the glasses. To investigate possible differences in the local structure above the glass transition temperature (TG) relative to room temperature, exemplary high-temperature Y-EXAFS measurements were carried out.
For the evaluation of the EXAFS measurements, a newly introduced histogram fit was used that can also describe non-symmetric, i.e. non-Gaussian, pair distribution functions, as they may occur at a high degree of polymerization or at high temperatures. With increasing ASI, the Y-EXAFS spectra of the Prowatke and Klemme (2005) compositions show an increase in the asymmetry and width of the Y-O pair distribution function, which manifests itself in a change of the coordination number from 6 to 8 and an increase of the Y-O distance by 0.13 Å. A similar trend is observed in the Gd and Yb EXAFS spectra. The high-resolution XANES spectra for La, Gd, and Yb show that the structural differences can be determined at least semi-quantitatively; this applies in particular to changes in the mean distance to the oxygen atoms. In contrast to EXAFS spectroscopy, however, XANES provides no information on the shape and width of pair distribution functions. The high-temperature EXAFS measurements of Y indicate changes in the local structure above the glass transition temperature, which can primarily be attributed to a thermally induced increase of the mean Y-O distance. However, a comparison of the Y-O distances for compositions with an ASI of 0.115 and 0.755, determined at room temperature and at TG, shows that the structural difference observed in the glass along the compositional series can be even more pronounced in the melt than previously assumed for the glasses.
Direct correlation of the partitioning data of Prowatke and Klemme (2005) with the structural changes of the melts reveals a linear correlation for Y, whereas Yb and Gd show a non-linear relationship. Owing to its ionic radius and charge, the six-fold coordinated REE in the less polymerized melts is preferentially coordinated by non-bridging oxygen atoms, forming stable configurations. In the more highly polymerized melts with ASI values close to 1, six-fold coordination is not possible because almost only bridging oxygen atoms are available. The overbonding of bridging oxygen atoms around the REE is compensated by an increase in the coordination number and the mean REE-O distance. This implies an energetically more favorable configuration in the more strongly depolymerized compositions, from which the observed variation of the partition coefficient results, although it differs strongly from element to element. For the haplogranitic and haplobasaltic compositions, an increase in coordination number and mean bond distance with increasing polymerization was also observed, accompanied by an increase in the skewness and asymmetry of the pair distribution function. This implies that the respective REE becomes more incompatible in these compositions as polymerization increases. Furthermore, the addition of water depolymerizes the melts, resulting in a more symmetric pair distribution function, whereby the compatibility increases again.
In summary, changes in melt composition result in a change in the polymerization of the melts, which in turn has a significant influence on the local environment of the REE. The structural changes can be correlated directly with partitioning data, but the trends differ strongly between light, middle, and heavy REE. This study was nevertheless able to show the magnitude the changes must reach in order to exert a significant influence on the partition coefficient. Furthermore, the influence of melt composition on trace element partitioning increases with increasing polymerization and must therefore not be neglected.
The aim of the present work is to investigate reward-dependent (instrumental) learning and decision-making processes in healthy subjects at the behavioral and neural level, as a function of chronic stress experience (assessed with the Lifetime Stress Inventory, Holmes and Rahe 1962) and cognitive variables (divided into a fluid and a crystallized intelligence component). The first step is to establish the construct validity of the terms model-free ~ habitual and model-based ~ goal-directed, which have so far often been used synonymously. Building on this, the differential and interactional influence of chronic stress experience and cognitive variables on decision processes (instrumental learning) and their neural correlate in the ventral striatum (VS) is examined. Finally, the relevance of the investigated reward-dependent learning processes for the development and maintenance of alcohol dependence is discussed together with further influencing variables in a review paper.
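The model-free (habitual) side of this distinction can be illustrated with a standard Q-learning sketch: action values are updated from reward prediction errors alone, with no model of the task structure. All parameter values and the two-armed bandit task below are illustrative, not the paradigm or parameters used in the thesis.

```python
import math
import random

def q_learning_bandit(reward_probs, alpha=0.1, beta=3.0, n_trials=1000, seed=1):
    """Model-free value learning in a hypothetical two-armed bandit.

    alpha: learning rate; beta: softmax inverse temperature.
    Q-values are driven purely by prediction errors, which is what makes
    the learner 'model-free' as opposed to model-based/goal-directed.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]
    for _ in range(n_trials):
        # softmax choice between the two actions
        p0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
        a = 0 if rng.random() < p0 else 1
        r = 1.0 if rng.random() < reward_probs[a] else 0.0
        q[a] += alpha * (r - q[a])  # reward prediction-error update
    return q

# Arm 0 pays off 80 % of the time, arm 1 only 20 %
q = q_learning_bandit(reward_probs=[0.8, 0.2])
```

A model-based learner would instead plan over an explicit representation of the task's transition structure; contrasting the two accounts is what the construct-validity step in the thesis addresses.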
Der Klimawandel
(2016)
What is justice? What could just regulations look like for the catastrophes and suffering that climate change triggers or will trigger? These are often unjust because they tend to hit hardest precisely those who have contributed least to climate change.
But what exactly do we mean by the catchword 'climate change'? And can it really affect human beings directly? A brief scientific outline clarifies the most important questions here.
Since this is a philosophical work, it must first be clarified whether human beings can be the cause of something like global warming at all. Robert Spaemann's thesis is that, by virtue of free will, human beings can change the course of the world through their individual actions. Hans Jonas adds that this capacity makes us responsible for the intended and unintended consequences of our actions.
This establishes, from a scientific perspective (Part 1) and from a philosophical perspective (beginning of Part 2), that human beings are in all likelihood the cause of climate change and that this causation has moral consequences for them.
A philosophical concept of justice is developed from Kant's legal and moral philosophy, because it is the only one that can grant human beings a right to have rights at all. This right springs from the transcendental capacity for freedom, which is why the right to have rights belongs to everyone absolutely and at all times. At the same time, Kant's philosophy in turn culminates in the idea of freedom, in that justice exists only if all human beings can be equally free.
What does this mean in concrete terms? How could justice actually be implemented in reality? Its realization takes two basic directions. John Rawls and Stefan Gosepath, among others, deal in depth with procedural justice, which means finding just procedures that regulate social coexistence. The guiding principle here is above all a right of co-determination for all, so that in principle all citizens give themselves their own laws and thus act freely.
With regard to climate change, the second direction comes to the fore: distributive justice. Material goods must be distributed in such a way that, despite empirical differences, all human beings are recognized as moral subjects and can be free.
But are these philosophical conclusions not far too abstract to be applied to a problem as elusive and global as climate change? What, then, could climate justice be?
There are many principles of justice that claim to offer a just basis for the climate problems, such as the polluter-pays principle, the ability principle, or the grandfathering principle, under which the main emitters may continue to emit the most (this principle has guided the international negotiations so far).
The aim of this work is to find out how the climate problems can be solved in such a way that universal human rights are established and secured for all people under all circumstances, enabling them to act freely and morally.
The conclusion of this work is that Kant's concept of justice could be enforced through a combination of the right to subsistence emissions, the Greenhouse Development Rights principle (GDR principle), and an international statehood.
Under the right to subsistence emissions, every human being has the right to consume enough energy, and to produce the associated emissions, to lead a life in dignity. The GDR principle calculates each country's, or even each world citizen's, share of the global responsibility for climate protection by adding the historical emissions (climate debt) to the current financial capacity of the country or individual (capacity to take responsibility). The implementation of international bodies is defended because climate change is a global, cross-border problem whose effects and responsibility have global dimensions.
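The arithmetic behind such a responsibility share can be sketched as a weighted combination of an emissions share and a capacity share. The equal weighting, the function name, and the input numbers below are hypothetical illustrations, not the GDR framework's actual calibration or the thesis's figures.

```python
def responsibility_share(hist_emissions, capacity,
                         world_emissions, world_capacity,
                         w_emissions=0.5, w_capacity=0.5):
    """Illustrative responsibility index in the spirit of the GDR principle.

    Combines a country's share of historical emissions (climate debt)
    with its share of financial capacity; weights are a hypothetical
    50/50 split for the sake of the example.
    """
    return (w_emissions * hist_emissions / world_emissions
            + w_capacity * capacity / world_capacity)

# Hypothetical country: 20 % of historical emissions, 30 % of world capacity
share = responsibility_share(hist_emissions=20.0, capacity=30.0,
                             world_emissions=100.0, world_capacity=100.0)
```

Here the hypothetical country would carry a quarter of the global climate-protection burden, more than its emissions share alone, because its high capacity raises its responsibility.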
A compelling argument for almost all climate protection measures is that they show synergies with other societal areas, such as health and poverty reduction, in which the enforcement of our human rights is also still being fought for.
Is this approach not completely utopian?
This proposal poses a great challenge to the international community, but it would be the only just solution to our climate problems. Furthermore, the work holds on to the Kantian maxim that the perpetual striving toward ideal goals is the best way for human, fallible beings to realize them.