Bitter taste warns the organism of potentially spoiled or toxic food and is thus an important control mechanism. In the mouse, the initial detection of the numerous bitter compounds is carried out by 35 bitter taste receptors (Tas2rs) located in the tongue tissue. The taste information is then transmitted from the tongue via the peripheral nervous system (PNS) to the central nervous system (CNS), where it is processed. This processing of taste information has not yet been fully elucidated. Recent studies point to an expression of Tas2rs in the PNS and CNS along the gustatory pathway as well. Little is known so far about the occurrence and functions of these receptors and receptor cells in the nervous system.
In this work, Tas2r expression was examined in various mouse models, Tas2r-expressing cells were identified, and their functions in the transmission of taste information were analysed. Expression analyses by qRT-PCR demonstrated the expression of 25 of the 35 known bitter taste receptors in the central nervous system of the mouse. The expression patterns in the PNS and CNS moreover suggest functions in different regions of the nervous system. Based on the results of the expression analyses, it was possible to visualize strongly expressed Tas2rs in different cell types by in situ hybridization. Furthermore, immunohistochemical staining using a genetically modified mouse model confirmed the results of the expression analyses. Using the Tas2r131 receptor as an example, it revealed expression of Tas2rs in cholinergic, dopaminergic, GABAergic, noradrenergic and glycinergically innervated projection neurons as well as in interneurons. The results of this work therefore demonstrate for the first time the occurrence of Tas2rs in different neuronal cell types in large parts of the CNS, suggesting that Tas2r-expressing cells potentially serve multiple functions. Behavioural experiments in genetically modified mice were used to investigate the possible role of Tas2r131-expressing neurons (Tas2r131 neurons) in taste perception. The results indicate an involvement of Tas2r131 neurons in the transmission and processing of taste information for a subset of bitter substances. The analyses further show that Tas2r131 neurons are not involved in the perception of other bitter compounds or of taste stimuli of other qualities (sweet, umami, sour, salty). A specific “Tas2r131 bitter taste pathway”, with signalling routes and processing areas that are partly independent of and partly overlapping with other potential “bitter pathways”, provides a possible cellular basis for the discrimination of bitter substances. The hypothesis of a potential discrimination of bitter compounds arising from this work should therefore be tested in follow-up studies by establishing a behavioural assay in mice.
The aim of this paper is to bring together two areas which are of great importance for the study of overdetermined boundary value problems: homological algebra, which is the main tool in constructing the formal theory of overdetermined problems, and the global calculus of pseudodifferential operators, which allows one to develop explicit analysis.
Brief communication
(2016)
In March 2015, a new international blueprint for disaster risk reduction (DRR) was adopted in Sendai, Japan, at the end of the Third UN World Conference on Disaster Risk Reduction (WCDRR, 14-18 March 2015). We review and discuss the agreed commitments and targets, as well as the negotiations leading to the Sendai Framework for DRR (SF-DRR), and briefly discuss its implications for the subsequent UN-led negotiations on sustainable development goals and climate change.
The visceral protein transthyretin (TTR) is frequently affected by oxidative post-translational protein modifications (PTPMs) in various diseases. Better insight into structure-function relationships arising from oxidative PTPMs of TTR should therefore contribute to the understanding of pathophysiologic mechanisms. While the in vivo analysis of TTR in mammalian models is complex, time-consuming and resource-intensive, transgenic Caenorhabditis elegans expressing hTTR provide an optimal model for the in vivo identification and characterization of drug-mediated oxidative PTPMs of hTTR by means of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). Herein, we demonstrated that hTTR is expressed in all developmental stages of Caenorhabditis elegans, enabling the analysis of hTTR metabolism during the whole life cycle. The suitability of the applied model was verified by exposing worms to D-penicillamine and menadione. Both drugs induced substantial changes in the oxidative PTPM pattern of hTTR. Additionally, covalent binding of both drugs to hTTR was identified for the first time and verified by molecular modelling.
To understand past flood changes in the Rhine catchment, and in particular the role of anthropogenic climate change in extreme flows, an attribution study relying on a proper GCM (general circulation model) downscaling is needed. A downscaling based on conditioning a stochastic weather generator on weather patterns is a promising approach. This approach assumes a strong link between weather patterns and local climate, and sufficient GCM skill in reproducing weather pattern climatology. These presuppositions are evaluated here for the first time, using 111 years of daily climate data from 490 stations in the Rhine basin and comprehensively testing the number of classification parameters and GCM weather pattern characteristics. A classification based on a combination of mean sea level pressure, temperature, and humidity from the ERA20C reanalysis of atmospheric fields over central Europe, with 40 weather types, was found to be the most appropriate for stratifying six local climate variables. The corresponding skill is quite diverse, though, ranging from good for radiation to poor for precipitation. Especially for the latter, it was apparent that pressure fields alone cannot sufficiently stratify local variability. To test the skill of the latest generation of GCMs from the CMIP5 ensemble in reproducing the frequency, seasonality, and persistence of the derived weather patterns, output from 15 GCMs is evaluated. Most GCMs capture these characteristics well, but some models showed consistent deviations in all three evaluation criteria and should be excluded from further attribution analysis.
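To make the classification step concrete, here is a generic sketch of building a weather-type classification by clustering standardized reanalysis fields and scoring how well the types stratify a local variable. The clustering method (k-means), array names and sizes are our assumptions for illustration; the study's actual classification scheme is not specified in this abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical daily fields over central Europe, flattened to vectors
# (days x grid cells); replace the random data with reanalysis fields.
days, cells = 4000, 200
rng = np.random.default_rng(0)
mslp = rng.normal(size=(days, cells))  # mean sea level pressure anomalies
temp = rng.normal(size=(days, cells))  # temperature anomalies
hum = rng.normal(size=(days, cells))   # humidity anomalies

# Standardize each variable and concatenate so all three contribute.
X = np.hstack([(f - f.mean(0)) / f.std(0) for f in (mslp, temp, hum)])
patterns = KMeans(n_clusters=40, n_init=10, random_state=0).fit_predict(X)

# Stratification skill for one local variable y (e.g. station
# precipitation): share of variance explained by the type means.
y = rng.normal(size=days)
between = sum((y[patterns == k].mean() - y.mean())**2 * (patterns == k).mean()
              for k in range(40))
print("explained variance ratio:", between / y.var())
```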
Changes in free symptom attributions in hypochondriasis after cognitive therapy and exposure therapy
(2016)
Background: Cognitive-behavioural therapy can change dysfunctional symptom attributions in patients with hypochondriasis. Past research has used forced-choice answer formats, such as questionnaires, to assess these misattributions; however, with this approach, idiosyncratic attributions cannot be assessed. Free associations are an important complement to existing approaches that assess symptom attributions. Aims: With this study, we contribute to the current literature by using an open-response instrument to investigate changes in freely associated attributions after exposure therapy (ET) and cognitive therapy (CT) compared with a wait list (WL). Method: The current study is a re-examination of a formerly published randomized controlled trial (Weck, Neng, Richtberg, Jakob and Stangier, 2015) that investigated the effectiveness of CT and ET. Seventy-three patients with hypochondriasis were randomly assigned to CT, ET or a WL, and completed a 12-week treatment (or waiting period). Before and after the treatment or waiting period, patients completed an Attribution task in which they had to spontaneously attribute nine common bodily sensations to possible causes in an open-response format. Results: Compared with the WL, both CT and ET reduced the frequency of somatic attributions regarding severe diseases (CT: Hedges's g = 1.12; ET: Hedges's g = 1.03) and increased the frequency of normalizing attributions (CT: Hedges's g = 1.17; ET: Hedges's g = 1.24). Only CT changed the attributions regarding moderate diseases (Hedges's g = 0.69). Changes in somatic attributions regarding mild diseases and psychological attributions were not observed. Conclusions: Both CT and ET are effective for treating freely associated misattributions in patients with hypochondriasis. This study supplements research that used a forced-choice assessment.
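Hedges's g, the effect size reported above, is the standardized mean difference based on the pooled standard deviation, multiplied by a small-sample bias correction. A minimal sketch of the standard formula (our illustration, not code from the study):

```python
import numpy as np

def hedges_g(treatment, control):
    """Hedges's g: pooled-SD standardized mean difference with
    small-sample bias correction J."""
    n1, n2 = len(treatment), len(control)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(treatment, ddof=1) +
                         (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(treatment) - np.mean(control)) / pooled_sd
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)  # bias-correction factor
    return j * d
```

By the usual convention, g of about 0.8 or more counts as a large effect, so the values above 1.0 reported here indicate substantial attribution changes.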
The interaction of water with α-alumina (i.e. α-Al2O3) surfaces is important in a variety of applications and a useful model for the interaction of water with environmentally abundant aluminosilicate phases. Despite its significance, studies of water interaction with α-Al2O3 surfaces other than the (0001) are extremely limited. Here we characterize the interaction of water (D2O) with a well-defined α-Al2O3(11̄02) surface in UHV both experimentally, using temperature-programmed desorption and surface-specific vibrational spectroscopy, and theoretically, using periodic-slab density functional theory calculations. This combined approach makes it possible to demonstrate that water adsorption occurs only at a single well-defined surface site (the so-called 1–4 configuration) and that at this site the barrier between the molecularly and dissociatively adsorbed forms is very low: 0.06 eV. A subset of OD stretch vibrations is parallel to this dissociation coordinate, and thus would be expected to be shifted to low frequencies relative to an uncoupled harmonic oscillator. To quantify this effect we solve the vibrational Schrödinger equation along the dissociation coordinate and find fundamental frequencies red-shifted by more than 1500 cm⁻¹. Within the context of this model, at moderate temperatures, we further find that some fraction of surface deuterons are likely delocalized: dissociatively and molecularly adsorbed states are no longer distinguishable.
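The step of solving the vibrational Schrödinger equation along a one-dimensional dissociation coordinate can be illustrated with a standard finite-difference eigensolver. The potential, grid, and mass below are toy assumptions chosen to mimic the low (~0.06 eV) barrier quoted above, not the paper's computed surface:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Grid along a one-dimensional dissociation coordinate q (Bohr).
n = 2000
q = np.linspace(-1.5, 2.5, n)
dq = q[1] - q[0]

# Toy asymmetric double well (Hartree): two minima separated by a low
# barrier of ~0.0022 Hartree ~ 0.06 eV. A barrier this low pushes the
# 0 -> 1 spacing far below the harmonic estimate (a strong red shift).
v = 0.0022 * (q**2 - 1.0)**2 + 0.0005 * q

mu = 2.014 * 1822.89  # deuteron-like reduced mass in electron masses

# Finite-difference Hamiltonian H = -(1/2mu) d^2/dq^2 + V(q), tridiagonal.
diag = 1.0 / (mu * dq**2) + v
off = -0.5 / (mu * dq**2) * np.ones(n - 1)
E, _ = eigh_tridiagonal(diag, off, select='i', select_range=(0, 1))

HARTREE_TO_CM = 219474.63
print("fundamental transition: %.0f cm^-1" % ((E[1] - E[0]) * HARTREE_TO_CM))
```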
The Milky Way is only one of billions of galaxies in the universe. However, it is a special galaxy, because it allows us to explore the main mechanisms involved in its evolution and formation history by unpicking the system star by star. In particular, the chemical fingerprints of its stars provide clues and evidence of past events in the Galaxy’s lifetime. This information helps not only to decipher the current structure and building blocks of the Milky Way, but also to learn more about the formation process of galaxies in general.
In the past decade, a multitude of stellar spectroscopic Galactic surveys have scanned millions of stars far beyond the rim of the solar neighbourhood. The spectroscopic information obtained provides unprecedented insights into the chemo-dynamics of the Milky Way. In addition, analytic models and numerical simulations of the Milky Way provide the descriptions and predictions needed for comparison with observations, in order to decode the physical properties that underlie the complex system of the Galaxy.
In this thesis, various approaches are taken to connect modern theoretical modelling of galaxy formation and evolution with observations from Galactic stellar surveys. With its focus on the chemo-kinematics of the Galactic disk, this work aims to determine new observational constraints on the formation of the Milky Way, also providing proper comparisons with two different models. These are the population synthesis model TRILEGAL, based on analytical distribution functions, which aims to simulate the number and distribution of stars in the Milky Way and its different components, and a hybrid model (MCM) that combines an N-body simulation of a Milky Way-like galaxy in the cosmological framework with a semi-analytic chemical evolution model for the Milky Way. The major observational data sets in use come from two surveys, namely the “Radial Velocity Experiment” (RAVE) and the “Sloan Extension for Galactic Understanding and Exploration” (SEGUE).
In the first approach, the chemo-kinematic properties of the thin and thick disk of the Galaxy, as traced by a selection of about 20,000 SEGUE G-dwarf stars, are directly compared to the predictions of the MCM model. As a necessary condition for this, SEGUE’s selection function and its survey volume are evaluated in detail in order to correct the spectroscopic observations for their survey-specific selection biases. In addition, based on a Bayesian method, spectro-photometric distances with uncertainties below 15% are computed for the selected SEGUE G-dwarfs, which are studied out to a distance of 3 kpc from the Sun.
For the second approach, two synthetic versions of the SEGUE survey are generated based on the above models. The resulting synthetic stellar catalogues are then used to create mock samples that best resemble the compiled sample of observed SEGUE G-dwarfs. Mock samples are not only ideal for comparing predictions from various models; they also allow the models’ quality to be validated and improved, as was achieved in this work especially for TRILEGAL. While TRILEGAL reproduces the statistical properties of the thin and thick disk as seen in the observations, the MCM model has proven more suitable for reproducing many of the chemo-kinematic correlations revealed by the SEGUE stars. However, evidence has been found that the MCM model may be missing a stellar component with the properties of the thick disk that the observations clearly show. While the SEGUE stars do indicate a thin-thick dichotomy of the stellar Galactic disk, in agreement with other spectroscopic stellar studies, no sign of a distinct metal-poor disk is seen in the MCM model.
Stellar spectroscopic surveys are usually limited to a certain volume around the Sun, covering different regions of the Galaxy’s disk. This often prevents a global view of the chemo-dynamics of the Galactic disk. Hence, a suitable combination of stellar samples from independent surveys is not only useful for the verification of results, it also helps to complete the picture of the Milky Way. The thesis therefore closes with a comparison of the SEGUE G-dwarfs and a sample of RAVE giants. The comparison reveals that the chemo-kinematic relations agree in disk regions where the samples of both surveys contain a similar number of stars. In those parts of the survey volumes where one of the surveys lacks statistics, they beautifully complement each other. This demonstrates that the comparison of theoretical models on the one side, and the combined observational data gathered by multiple surveys on the other, are key ingredients to understanding and disentangling the structure and formation history of the Milky Way.
Exosomes are small membrane vesicles released by different cell types, including hepatocytes, that play important roles in intercellular communication. We have previously demonstrated that hepatocyte-derived exosomes contain the synthetic machinery to form sphingosine-1-phosphate (S1P) in target hepatocytes resulting in proliferation and liver regeneration after ischemia/reperfusion (I/R) injury. We also demonstrated that the chemokine receptors, CXCR1 and CXCR2, regulate liver recovery and regeneration after I/R injury. In the current study, we sought to determine if the regulatory effects of CXCR1 and CXCR2 on liver recovery and regeneration might occur via altered release of hepatocyte exosomes. We found that hepatocyte release of exosomes was dependent upon CXCR1 and CXCR2. CXCR1-deficient hepatocytes produced fewer exosomes, whereas CXCR2-deficient hepatocytes produced more exosomes compared to their wild-type controls. In CXCR2-deficient hepatocytes, there was increased activity of neutral sphingomyelinase (Nsm) and intracellular ceramide. CXCR1-deficient hepatocytes had no alterations in Nsm activity or ceramide production. Interestingly, exosomes from CXCR1-deficient hepatocytes had no effect on hepatocyte proliferation, due to a lack of neutral ceramidase and sphingosine kinase. The data demonstrate that CXCR1 and CXCR2 regulate hepatocyte exosome release. The mechanism utilized by CXCR1 remains elusive, but CXCR2 appears to modulate Nsm activity and resultant production of ceramide to control exosome release. CXCR1 is required for packaging of enzymes into exosomes that mediate their hepatocyte proliferative effect.
Children’s interpretations of sentences containing focus particles do not seem adult-like until school age. This study investigates how German 4-year-old children comprehend sentences with the focus particle ‘nur’ (only), using different tasks and controlling for the impact of general cognitive abilities on performance measures. Two sentence types with ‘only’ in either pre-subject or pre-object position were presented. Eye gaze data and verbal responses were collected via the visual world paradigm combined with a sentence-picture verification task. While the eye-tracking data revealed an adult-like pattern of focus particle processing, the sentence-picture verification replicated previous findings of poor comprehension, especially for ‘only’ in pre-subject position. A second study focused on the impact of general cognitive abilities on the outcomes of the verification task. Working memory was related to children’s performance for both sentence types, whereas inhibitory control was selectively related to the number of errors for sentences with ‘only’ in pre-subject position. These results suggest that children at the age of 4 years have the linguistic competence to correctly interpret sentences with focus particles, which, depending on specific task demands, may be masked by immature general cognitive abilities.
The energy sector is both affected by climate change and a key sector for climate protection measures. Energy security is the backbone of our modern society and guarantees the functioning of most critical infrastructure. Decision makers and energy suppliers of different countries should therefore be familiar with the factors that increase or decrease the susceptibility of their electricity sector to climate change. Susceptibility here refers to the socioeconomic and structural characteristics of the electricity sector that affect the demand for and supply of electricity under climate change. Moreover, the relevant stakeholders need to know whether the given national energy and climate targets are feasible and what needs to be done in order to meet them. In this regard, a focus should be on the residential building sector, as it is one of the largest energy consumers, and therefore one of the largest emitters of anthropogenic CO2, worldwide.
This dissertation addresses the first aspect, namely the susceptibility of the electricity sector, by developing a ranked index which allows a quantitative comparison of the electricity sector susceptibility of 21 European countries based on 14 influencing factors. Such a ranking has not been completed to date. We applied a sensitivity analysis to test the relative effect of each influencing factor on the susceptibility index ranking. We also discuss reasons for the ranking positions, and thus the susceptibility, of selected countries. The second objective, namely the impact of climate change on the energy demand of buildings, is tackled by means of a new model with which the heating and cooling energy demand of residential buildings can be estimated. As examples, we applied the model to Germany and the Netherlands. It considers projections of future changes in population, climate and the insulation standards of buildings, whereas most existing studies take into account fewer than three different factors that influence the future energy demand of buildings. Furthermore, we developed a comprehensive retrofitting algorithm with which the total residential building stock can be modeled, for the first time, for each year in the past and future.
The study confirms that there is no correlation between the geographical location of a country and its position in the electricity sector susceptibility ranking. Moreover, we found no pronounced pattern of susceptibility influencing factors between countries that ranked higher or lower in the index. We illustrate that Luxembourg, Greece, Slovakia and Italy are the countries with the highest electricity sector susceptibility. The electricity sectors of Norway, the Czech Republic, Portugal and Denmark were found to be least susceptible to climate change. Knowledge about the most important factors for the poor and good ranking positions of these countries is crucial for finding adequate adaptation measures to reduce the susceptibility of the electricity sector. Therefore, these factors are described within this study.
We show that the heating energy demand of residential buildings will strongly decrease in both Germany and the Netherlands in the future. The analysis for the Netherlands focused on the regional level and a finer temporal resolution, which revealed strong variations in future heating energy demand changes by province and by month. In the German study, we additionally investigated the future cooling energy demand and demonstrated that it will increase only slightly up to the middle of this century. Increases in the cooling energy demand are thus not expected to offset reductions in heating energy demand. The main driver of substantial heating energy demand reductions is the retrofitting of buildings. We are the first to show that the given German and Dutch energy and climate targets in the building sector can only be met if the annual retrofitting rates are substantially increased. The current rate of only about 1% of the total building stock per year is insufficient for reaching a nearly zero-energy demand of all residential buildings by the middle of this century. To reach this target, it would need to be at least tripled. To sum up, this thesis emphasizes that country-specific characteristics are decisive for the electricity sector susceptibility of European countries. It also shows, for different scenarios, how much energy will be needed in the future to heat and cool residential buildings. With this information, existing climate mitigation and adaptation measures can be justified or new actions encouraged.
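The “at least tripled” figure is consistent with a back-of-the-envelope check (our illustration, not a calculation from the thesis): with a constant annual retrofitting rate \(r\), the share of the stock retrofitted after \(T\) years is roughly \(rT\), so covering (nearly) the whole stock in the roughly 35 years to mid-century requires

\[ r\,T \gtrsim 1 \quad\Longrightarrow\quad r \gtrsim \frac{1}{35\ \mathrm{yr}} \approx 2.9\,\%\ \mathrm{yr}^{-1}, \]

about three times the current rate of roughly 1% per year.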
Climate change increases riverine carbon outgassing, while export to the ocean remains uncertain
(2016)
Any regular interaction of land and river during flooding affects carbon pools within the terrestrial system, riverine carbon, and carbon exported from the system. In the Amazon basin, carbon fluxes are considerably influenced by annual flooding, during which terrigenous organic material is imported to the river. The Amazon basin therefore represents an excellent example of a tightly coupled terrestrial-riverine system. The processes of generation, conversion and transport of organic carbon in such a coupled terrigenous-riverine system interact strongly and are climate-sensitive, yet their functioning is rarely considered in Earth system models and their response to climate change is still largely unknown. To quantify regional and global carbon budgets and climate change effects on carbon pools and carbon fluxes, it is important to account for the coupling between the land, the river, the ocean and the atmosphere. We developed the RIVerine Carbon Model (RivCM), which is directly coupled to the well-established dynamic vegetation and hydrology model LPJmL, in order to account for this large-scale coupling. We evaluate RivCM with observational data and show that some variables are reproduced quite well by the model, while others show large deviations, mainly caused by simplifying assumptions in the model. Our evaluation shows that it is possible to reproduce large-scale carbon transport across a river system, but that this involves large uncertainties. Acknowledging these uncertainties, we estimate the potential changes in riverine carbon by driving RivCM with climate forcing from five climate models and three CO2 emission scenarios (Special Report on Emissions Scenarios, SRES). We find that climate change causes a doubling of riverine organic carbon in the southern and western basin, while reducing it by 20% in the eastern and northern parts. In contrast, the amount of riverine inorganic carbon shows a 2- to 3-fold increase in the entire basin, independent of the SRES scenario. The export of carbon to the atmosphere also increases, by about 30% on average. In contrast, changes in the future export of organic carbon to the Atlantic Ocean depend on the SRES scenario and are projected to either decrease by about 8.9% (SRES A1B) or increase by about 9.1% (SRES A2). Such changes in the terrigenous-riverine system could have local and regional impacts on the carbon budget of the whole Amazon basin and parts of the Atlantic Ocean. Changes in riverine carbon could lead to a shift in the riverine nutrient supply and pH, while changes in the carbon exported to the ocean would alter the supply of organic material that acts as a food source in the Atlantic. On larger scales, the increased outgassing of CO2 could turn the Amazon basin from a carbon sink into a considerable source. Therefore, we propose that the coupling of terrestrial and riverine carbon budgets be included in subsequent analyses of the future regional carbon budget.
Extreme hydro-meteorological events, such as severe droughts or heavy rainstorms, constitute primary manifestations of climate variability and exert a critical impact on the natural environment and human society. This is particularly true for high-mountain areas, such as the eastern flank of the southern Central Andes of NW Argentina, a region impacted by deep convection processes that drive extreme events, often resulting in floods, a variety of mass movements, and hillslope processes. This region is characterized by pronounced E-W gradients in topography, precipitation, and vegetation cover, spanning low- to medium-elevation, humid and densely vegetated areas to high-elevation, arid and sparsely vegetated environments. This strong E-W gradient is mirrored by differences in the efficiency of surface processes, which mobilize and transport large amounts of sediment through the fluvial system, from the steep hillslopes to the intermontane basins and further to the foreland. In a highly sensitive high-mountain environment like this, even small changes in the spatiotemporal distribution, magnitude and rates of extreme events may strongly impact environmental conditions, anthropogenic activity, and the well-being of mountain communities and beyond. However, although the NW Argentine Andes comprise the catchments of the La Plata river, which traverses one of the most populated and economically relevant areas of South America, there are only a few detailed investigations of climate variability and extreme hydro-meteorological events.
In this thesis, I focus on deciphering the spatiotemporal variability of rainfall and river discharge, with particular emphasis on extreme hydro-meteorological events in the subtropical southern Central Andes of NW Argentina during the past seven decades. I employ various methods to assess and quantify statistically significant trend patterns of rainfall and river discharge, integrating high-quality daily time series from gauging stations (40 rainfall and 8 river discharge stations) with gridded datasets (CPC-uni and TRMM 3B42 V7) for the period between 1940 and 2015. Evidence for a general intensification of the hydrological cycle at intermediate elevations (~0.5-3 km asl) on the eastern flank of the southern Central Andes is found in both the rainfall and the river discharge time series for the period from 1940 to 2015. This intensification is associated with an increase in the annual total amount of rainfall and in the mean annual discharge. The most pronounced trends, however, are found at high percentiles, i.e. extreme hydro-meteorological events, particularly during the wet season from December to February. An important outcome of my studies is the recognition of a rapid increase in river discharge during the period between 1971 and 1977, most likely linked to the 1976-77 global climate shift, which is associated with North Pacific Ocean sea surface temperature variability. Interestingly, after this rapid increase, both rainfall and river discharge decreased at low and intermediate elevations along the eastern flank of the Andes. In contrast, during the same time interval, extensive areas of the arid, high-elevation Puna de Atacama plateau recorded increasing annual rainfall totals, associated with more intense extreme hydro-meteorological events from 1979 to 2014. This part of the study reveals that the low-, intermediate- and high-elevation sectors of the Andes of NW Argentina respond differently to changing climate conditions.
Possible forcing mechanisms of the pronounced hydro-meteorological variability observed in the study area are also investigated. For the period between 1940 and 2015, I analyzed the modes of oscillation of river discharge from small to medium drainage basins (10^2 to 10^4 km^2) located on the eastern flank of the orogen. First, I decomposed the relevant monthly time series using the Hilbert-Huang Transform, which is particularly appropriate for non-stationary time series resulting from non-linear natural processes. I observed that discharge variability in the study region can be described by five quasi-periodic oscillatory modes on timescales varying from 1 to ~20 years. Second, I tested the link between river discharge variations and large-scale climate modes of variability, using different climate indices, such as the BEST ENSO (Bivariate El Niño-Southern Oscillation Time-series) index. This analysis reveals that, although most of the variance on the annual timescale is associated with the South American Monsoon System, a relatively large part of river discharge variability is linked to Pacific Ocean variability (PDO phases) at multi-decadal timescales (~20 years). To a lesser degree, river discharge variability is also linked to the Tropical South Atlantic (TSA) sea surface temperature anomaly at multi-annual timescales (~2-5 years).
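The first step of the Hilbert-Huang Transform is empirical mode decomposition (EMD), which splits a series into quasi-periodic intrinsic mode functions (IMFs). A minimal sketch, assuming the PyEMD package and a synthetic discharge series in place of the gauge data:

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

# Hypothetical monthly discharge series; replace with gauge data.
t = np.arange(12 * 70) / 12.0  # 70 years, monthly resolution, in years
q = (np.sin(2 * np.pi * t)                # annual (monsoon) cycle
     + 0.4 * np.sin(2 * np.pi * t / 20.0) # multi-decadal mode (~PDO scale)
     + 0.3 * np.random.default_rng(0).normal(size=t.size))

imfs = EMD().emd(q)  # intrinsic mode functions, fastest oscillation first
for k, imf in enumerate(imfs):
    # Rough mean period of each IMF from its zero-crossing count.
    zc = np.count_nonzero(np.diff(np.sign(imf)) != 0)
    print(f"IMF {k}: mean period ~ {2 * t.size / max(zc, 1) / 12:.1f} yr")
```

The Hilbert spectral analysis step (instantaneous frequencies of each IMF) would follow; the zero-crossing estimate above is just a quick diagnostic.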
Taken together, these findings exemplify the high degree of sensitivity of high-mountain environments to climatic variability and change. This is particularly true for the topographic transitions between the humid, low to moderate elevations and the semi-arid to arid highlands of the southern Central Andes. These parts of the mountain belt react to even subtle changes in the hydro-meteorological regime with major impacts on erosional hillslope processes and with mass movements that fundamentally affect the transport capacity of mountain streams. With more severe storms in these areas, the fluvial system is characterized by pronounced variability of stream power on different timescales, leading to cycles of sediment aggradation, the loss of agriculturally used land, and severe impacts on infrastructure.
Discourse production is crucial for communicative success and lies at the core of aphasia assessment and treatment. Coherence differentiates discourse from a mere series of utterances or sentences; it is the internal unity and connectedness of discourse and, as such, perhaps its most inherent property. It is unclear whether people with aphasia, who experience various language production difficulties, preserve the ability to produce coherent discourse. The more general question of how coherence is established and represented linguistically has been addressed in the literature, yet remains unanswered. This dissertation presents an investigation of discourse production in aphasia and of the linguistic mechanisms of establishing coherence.
Caribbean States organised in CARICOM recently brought forward reparation claims against several European States for compensation for slavery and (native) genocides in the Caribbean, and even threatened to approach the International Court of Justice. The paper provides an analysis of the facts behind the CARICOM claim and asks whether the law of state responsibility is able to provide the demanded compensation. As the intertemporal principle generally prohibits the retroactive application of today’s international rules, the paper argues that the complete claim must be based on the law of state responsibility in force at the time of the respective conduct. An inquiry into the history of the primary rules (the prohibitions of slavery and genocide) as well as the secondary rules of state responsibility reveals that both sets of rules were underdeveloped or non-existent at the times of slavery and the alleged (native) genocides. The author therefore concludes that the CARICOM claim is legally flawed but nevertheless worth the attention, as it once again exposes the imperial and colonial injustices of the past and their legitimization by historical international law and international/natural lawyers.
Archaeology can be understood as a tool used in the process of identity formation, contributing to a sense of belonging and unity within a diverse set of communities. Research was conducted with the intention of analyzing the wide range of perceptions regarding archaeological sites in the mixed city of Lod, Israel. I explored the impact of urban cultural heritage on shaping the identity of local Jewish and Arab children, who were chosen as the youngest active members of society living in the city, and who participated in the 2013 archaeological excavation season at the Khan al-Hilu. Israel is an ideal location for such research, due to its nature as simultaneously being the focus of extensive archaeological excavations and the setting of an intractable conflict. Ancestral attachment to the land serves as a foundation for the collective identity of both Jews and Arabs. Yet each community and individual may relate differently to the surrounding archaeological sites, a relation further shaped by their sense of societal hierarchy and cultural heritage.
This article first outlines different ways in which psycholinguists have dealt with linguistic diversity and illustrates these approaches with three familiar cases from research on language processing, language acquisition, and language disorders. The second part focuses on the role of morphology and of morphological variability across languages for psycholinguistic research. The specific phenomena examined concern stem-formation morphology and inflectional classes; they illustrate how experimental research that is informed by linguistic typology can lead to new insights.
This cumulative dissertation contains four self-contained articles related to EU regional policy and its structural funds as the overall research topic. In particular, the thesis addresses the question of whether EU regional policy interventions can be scientifically justified and legitimated on theoretical and empirical grounds from an economics point of view. The first two articles of the thesis (“The EU structural funds as a means to hamper migration” and “Internal migration and EU regional policy transfer payments: a panel data analysis for 28 EU member countries”) address one particular aspect of the debate regarding the justification and legitimisation of EU regional policy. They theoretically and empirically analyse whether regional policy or the market force of the free flow of labour (migration) in the internal European market is the better instrument to improve and harmonise the living and working conditions of EU citizens. Based on neoclassical market failure theory, the first paper argues that the structural funds of the EU are inhibiting internal migration, which is one of the key measures in achieving convergence among the nations in the single European market. It becomes clear that European regional policy aiming at economic growth and cohesion among the member states cannot be justified and legitimated if the structural funds hamper instead of promote migration. The second paper, however, shows that the empirical evidence on the migration and regional policy nexus is ambiguous, i.e. different empirical investigations show that EU structural funds both hamper and promote EU internal migration. Hence, the question of the scientific justification and legitimisation of EU regional policy cannot be readily and unambiguously answered on empirical grounds. This finding is unsatisfying but in line with previous theoretical and empirical literature. That is why I take a step back and reconsider the theoretical beginnings of the thesis, which took neoclassical market failure theory for granted as the starting point for the positive explanation as well as the normative justification and legitimisation of EU regional policy. The third article of the thesis (“EU regional policy: theoretical foundations and policy conclusions revisited”) deals with the theoretical explanation and legitimisation of EU regional policy, as well as with the policy recommendations for EU regional policymakers deduced from neoclassical market failure theory. The article elucidates that neoclassical market failure is a normative concept which justifies and legitimates EU regional policy on the basis of a political, and thus subjective, goal or value judgement. It can therefore neither be used to give a scientifically positive explanation of the structural funds nor to obtain objective and practically applicable policy instruments. Given this critique of neoclassical market failure theory, the third paper consequently calls into question the widely prevalent explanation and justification of EU regional policy given in static neoclassical equilibrium economics. It argues that an evolutionary non-equilibrium economics perspective on EU regional policy is much more appropriate for providing a realistic understanding of one of the largest policies conducted by the EU. However, this means neither that evolutionary economic theory can be unreservedly seen as a panacea for positively explaining EU regional policy, nor that it can be used to derive objective policy instruments for EU regional policymakers.
This issue is discussed in the fourth article of the thesis (“Market failure vs. system failure as a rationale for economic policy? A critique from an evolutionary perspective”). This article reconsiders the explanation of economic policy from an evolutionary economics perspective. It contrasts the neoclassical equilibrium notions of market and government failure with the dominant evolutionary neo-Schumpeterian and Austrian-Hayekian perceptions. Based on this comparison, the paper criticises the fact that neoclassical failure reasoning still prevails in non-equilibrium evolutionary economics when economic policy issues are examined. This is surprising, since proponents of evolutionary economics usually view their approach as incompatible with its neoclassical counterpart. The paper therefore argues that in order to prevent the otherwise fruitful and more realistic evolutionary approach from undermining its own criticism of neoclassical economics and to create a consistent as well as objective evolutionary policy framework, it is necessary to eliminate the equilibrium spirit. Taken together, the main finding of this thesis is that European regional policy and its structural funds can neither theoretically nor empirically be justified and legitimated from an economics point of view. Moreover, the thesis finds that the prevalent positive and instrumental explanation of EU regional policy given in the literature needs to be reconsidered, because these theories can neither scientifically explain the emergence and development of this policy nor are they appropriate to derive objective and scientific policy instruments for EU regional policymakers.
We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called "fast convergence rates" depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the L^2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
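To make the algorithmic idea concrete, here is a minimal sketch of conjugate gradients applied to the kernel least squares system, with the iteration count playing the role of the regularization parameter. The kernel choice, synthetic data, and stopping index are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gram matrix of the Gaussian (RBF) kernel."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def kernel_cg(K, y, n_iter):
    """Conjugate gradient on K a = y, stopped after n_iter steps.
    Early stopping (small n_iter) regularizes against overfitting."""
    a = np.zeros_like(y)
    r = y.copy()   # residual
    p = r.copy()   # search direction
    rs = r @ r
    for _ in range(n_iter):
        Kp = K @ p
        alpha = rs / (p @ Kp)
        a += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return a

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)
K = gaussian_kernel(X, X, sigma=0.3)
a = kernel_cg(K, y, n_iter=8)  # in practice, choose the stop index by validation
f_hat = K @ a                  # fitted values at the training points
```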
Convoluted Brownian motion
(2016)
In this paper we analyse semimartingale properties of a class of Gaussian periodic processes, called convoluted Brownian motions, obtained by convolution between a deterministic function and a Brownian motion. A classical example in this class is the periodic Ornstein-Uhlenbeck process. We compute their characteristics and show that, in general, they are neither Markovian nor satisfy a time-Markov field property. Nevertheless, by enlargement of filtration and/or addition of a one-dimensional component, one can in some cases recover Markovianity. We treat exhaustively the case of the bidimensional trigonometric convoluted Brownian motion and the higher-dimensional monomial convoluted Brownian motion.
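For orientation, one standard way to write such a process (our formulation, consistent with the abstract but not quoted from the paper) is as the convolution of a deterministic kernel \(\phi\) with Brownian increments:

\[ X_t = \int_0^T \phi(t-s)\,\mathrm{d}B_s, \qquad t \in [0,T], \]

where \(\phi\) is extended \(T\)-periodically. A suitably periodized exponential kernel \(\phi(u) = e^{-\lambda u}\) recovers the periodic Ornstein-Uhlenbeck process, while trigonometric and monomial kernels \(\phi\) correspond to the examples treated in the paper.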
Water scarcity, adaptation to climate change, and the risk assessment of droughts and floods are critical topics for science and society these days. Monitoring and modeling of the hydrological cycle are a prerequisite to understand and predict their consequences for weather and agriculture. As soil water storage plays a key role in the partitioning of water fluxes between the atmosphere, biosphere, and lithosphere, measurement techniques are required to estimate soil moisture states from small to large scales.
The method of cosmic-ray neutron sensing (CRNS) promises to close the gap between point-scale and remote-sensing observations, as its footprint has been reported to be 30 ha. However, the methodology is rather young and requires highly interdisciplinary research to understand and interpret the response of neutrons to soil moisture. In this work, the signals of nine detectors have been systematically compared, and correction approaches have been revised to account for meteorological and geomagnetic variations. Neutron transport simulations were used to precisely characterize the sensitive footprint area, which turned out to be 6-18 ha, highly local, and temporally dynamic. These results have been experimentally confirmed by the significant influence of water bodies and dry roads. Furthermore, mobile measurements on agricultural fields and across different land use types were able to accurately capture the various soil moisture states. It has been further demonstrated that the corresponding spatial and temporal neutron data can be beneficial for mesoscale hydrological modeling. Finally, first tests with a gyrocopter have proven the concept of airborne neutron sensing, where increased footprints are able to overcome local effects.
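The meteorological and geomagnetic corrections mentioned above are commonly applied as multiplicative factors to the raw count rate. A sketch of the widely used functional forms (the coefficients and reference values below are site-dependent assumptions for illustration, not the revised values derived in this work):

```python
import numpy as np

def correct_neutrons(n_raw, p, h, inc, p_ref=1013.25, h_ref=0.0,
                     inc_ref=150.0, beta=0.0076, alpha=0.0054):
    """Standard CRNS intensity corrections.

    n_raw : raw neutron counts per interval
    p     : air pressure (hPa)        -> barometric correction f_p
    h     : absolute humidity (g/m^3) -> water-vapour correction f_h
    inc   : incoming flux from a reference neutron monitor -> f_i
    """
    f_p = np.exp(beta * (p - p_ref))  # more air mass attenuates neutrons
    f_h = 1.0 + alpha * (h - h_ref)   # vapour moderates neutrons
    f_i = inc_ref / inc               # geomagnetic/solar variability
    return n_raw * f_p * f_h * f_i
```

The corrected counts are then translated into soil moisture via a site calibration; only the correction step is sketched here.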
This dissertation not only bridges the gap between scales of soil moisture measurements. It also establishes a close connection between the two worlds of observers and modelers, and further aims to combine the disciplines of particle physics, geophysics, and soil hydrology to thoroughly explore the potential and limits of the CRNS method.
Coupling of attention and saccades when viewing scenes with central and peripheral degradation
(2016)
Degrading real-world scenes in the central or the peripheral visual field yields a characteristic pattern: Mean saccade amplitudes increase with central and decrease with peripheral degradation. Does this pattern reflect corresponding modulations of selective attention? If so, the observed saccade amplitude pattern should reflect more focused attention in the central region with peripheral degradation and an attentional bias toward the periphery with central degradation. To investigate this hypothesis, we measured the detectability of peripheral (Experiment 1) or central targets (Experiment 2) during scene viewing when low or high spatial frequencies were gaze-contingently filtered in the central or the peripheral visual field. Relative to an unfiltered control condition, peripheral filtering induced a decrease of the detection probability for peripheral but not for central targets (tunnel vision). Central filtering decreased the detectability of central but not of peripheral targets. Additional post hoc analyses are compatible with the interpretation that saccade amplitudes and direction are computed in partial independence. Our experimental results indicate that task-induced modulations of saccade amplitudes reflect attentional modulations.
We study the adsorption-desorption transition of polyelectrolyte chains onto planar, cylindrical and spherical surfaces with arbitrarily high surface charge densities by massive Monte Carlo computer simulations. We examine in detail how the well-known scaling relations for the threshold transition, demarcating the adsorbed and desorbed domains of a polyelectrolyte near weakly charged surfaces, are altered for highly charged interfaces. By virtue of high surface potentials and large surface charge densities, the Debye-Hückel approximation is often not feasible and the nonlinear Poisson-Boltzmann approach should be implemented. At low salt conditions, for instance, the electrostatic potential from the nonlinear Poisson-Boltzmann equation is smaller than the Debye-Hückel result, such that the required critical surface charge density for polyelectrolyte adsorption σc increases. The nonlinear relation between the surface charge density and the electrostatic potential leads to a sharply increasing critical surface charge density with growing ionic strength, imposing an additional limit: a critical salt concentration above which no polyelectrolyte adsorption occurs at all. We contrast our simulation findings with the known scaling results for weak critical polyelectrolyte adsorption onto oppositely charged surfaces for the three standard geometries. Finally, we discuss some applications of our results to physical-chemical and biophysical systems.
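For reference, the distinction between the Debye-Hückel and nonlinear Poisson-Boltzmann treatments invoked above can be written explicitly (standard mean-field electrostatics for a 1:1 salt, not equations from the paper). With the reduced potential \(\psi = e\phi/k_BT\) near a planar surface,

\[ \frac{\mathrm{d}^2\psi}{\mathrm{d}x^2} = \kappa^2 \sinh\psi \quad\text{(Poisson-Boltzmann)}, \qquad \frac{\mathrm{d}^2\psi}{\mathrm{d}x^2} = \kappa^2 \psi \quad\text{(Debye-Hückel, valid for } \psi \ll 1\text{)}, \]

and the Gouy-Chapman (Grahame) relation links the surface charge density and the surface potential nonlinearly,

\[ \sigma = \frac{2\,\varepsilon\varepsilon_0\,\kappa\,k_B T}{e}\,\sinh\!\left(\frac{\psi_0}{2}\right), \]

so that at high \(\sigma\) the surface potential grows only logarithmically, which is why the critical adsorption conditions shift relative to the linear theory.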
Volunteered geographical information (VGI) and citizen science have become important sources of data for much scientific research. In the domain of land cover, crowdsourcing can provide high-temporal-resolution data to support different analyses of landscape processes. However, scientists may have little control over what gets recorded by the crowd, providing a potential source of error and uncertainty. This study compared analyses of crowdsourced land cover data that were contributed by different groups, based on nationality (labelled Gondor and Non-Gondor) and on domain experience (labelled Expert and Non-Expert). The analyses used a geographically weighted model to generate maps of land cover and compared the maps generated by the different groups. The results highlight the differences between the maps and show how specific land cover classes were under- and over-estimated. As crowdsourced data and citizen science are increasingly used to replace data collected under designed experiments, this paper highlights the importance of considering between-group variations and their impacts on the results of analyses. Critically, differences in the way that landscape features are conceptualised by different groups of contributors need to be considered when using crowdsourced data in formal scientific analyses. The discussion considers the potential for variation in crowdsourced data and the relativist nature of land cover, and suggests a number of areas for future research. The key finding is that the veracity of citizen science data is not the critical issue per se. Rather, it is important to consider the impacts of differences in the semantics, affordances and functions associated with landscape features held by different groups of crowdsourced data contributors.
The title compounds, [(1R,3R,4R,5R,6S)-4,5-bis(acetyloxy)-7-oxo-2-oxabicyclo[4.2.0]octan-3-yl]methyl acetate, C14H18O8, (I), [(1S,4R,5S,6R)-5-acetyloxy-7-hydroxyimino-2-oxobicyclo[4.2.0]octan-4-yl acetate, C11H15NO6, (II), and [(3aR,5R,6R,7R,7aS)-6,7-bis(acetyloxy)-2-oxooctahydropyrano[3,2-b]pyrrol-5-yl]methyl acetate, C14H19NO8, (III), are stable bicyclic carbohydrate derivatives. They can easily be synthesized in a few steps from commercially available glycals. As a result of the ring strain from the four-membered rings in (I) and (II), the conformations of the carbohydrates deviate strongly from the ideal chair form. Compound (II) occurs in the boat form. In the five-membered lactam (III), on the other hand, the carbohydrate adopts an almost ideal chair conformation. As a result of the distortion of the sugar rings, the configurations of the three bicyclic carbohydrate derivatives could not be determined from their NMR coupling constants. From our three crystal structure determinations, we were able to establish for the first time the absolute configurations of all new stereocenters of the carbohydrate rings.
Ecosystem services have a significant impact on human wellbeing. While ecosystem services are frequently represented by monetary values, social values and the underlying social benefits remain underexplored. The purpose of this study is to assess whether and how social benefits have been explicitly addressed within socio-economic and socio-cultural ecosystem services research, ultimately allowing a better understanding of the link between ecosystem services and human well-being. In this paper, we reviewed 115 international primary valuation studies and tested four hypotheses associated with the identification of social benefits of ecosystem services using logistic regressions. The tested hypotheses were that (1) social benefits are mostly derived in studies that assess cultural ecosystem services as opposed to other ecosystem service types, (2) there is a pattern of social benefits and certain cultural ecosystem services being assessed simultaneously, (3) monetary valuation techniques go beyond expressing monetary values and convey social benefits, and (4) directly addressing stakeholders' views promotes the consideration of social benefits in ecosystem service assessments. Our analysis revealed that (1) a variety of social benefits are valued in studies that assess any of the four ecosystem service types, (2) certain social benefits are likely to co-occur in combination with certain cultural ecosystem services, (3) of the studies that employed monetary valuation techniques, simulated market approaches overlapped most frequently with the assessment of social benefits, and (4) studies that directly incorporate stakeholders' views were more likely to also assess social benefits.
Most climate change impacts manifest in the form of natural hazards. Damage assessment typically relies on damage functions that translate the magnitude of extreme events to a quantifiable damage. In practice, the availability of damage functions is limited due to a lack of data sources and a lack of understanding of damage processes. The study of the characteristics of damage functions for different hazards could strengthen the theoretical foundation of damage functions and support their development and validation. Accordingly, we investigate analogies of damage functions for coastal flooding and for wind storms and identify a unified approach. This approach has general applicability for granular portfolios and may also be applied, for example, to heat-related mortality. Moreover, the unification enables the transfer of methodology between hazards and a consistent treatment of uncertainty. This is demonstrated by a sensitivity analysis on the basis of two simple case studies (for coastal flood and storm damage). The analysis reveals the relevance of the various uncertainty sources at varying hazard magnitude and on both the microscale and the macroscale level. Main findings are the dominance of uncertainty from the hazard magnitude and the persistent behaviour of intrinsic uncertainties on both scale levels. Our results shed light on the general role of uncertainties and provide useful insight for the application of the unified approach.
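To illustrate the general idea of a macroscale damage function for a granular portfolio (a toy sketch with an assumed functional form and parameters, not the unified approach of the paper): microscale, per-asset damage functions are aggregated into portfolio damage as a function of hazard magnitude.

```python
import numpy as np

def portfolio_damage(magnitude, thresholds, values, scale=0.5):
    """Macroscale damage as the sum of per-asset (microscale) damage:
    each asset starts to take damage above its own hazard threshold
    and saturates at its total value. Form and parameters are
    illustrative assumptions."""
    rel = 1.0 - np.exp(-np.maximum(magnitude - thresholds, 0.0) / scale)
    return float(np.sum(values * rel))

rng = np.random.default_rng(1)
thresholds = rng.normal(1.0, 0.3, size=1000)  # e.g. flood depth (m) at which
values = rng.lognormal(0.0, 1.0, size=1000)   # damage begins; asset values
for m in (0.5, 1.0, 1.5, 2.0):
    print(f"magnitude {m}: damage {portfolio_damage(m, thresholds, values):.1f}")
```

In this picture, uncertainty in the hazard magnitude enters through `magnitude`, while intrinsic (asset-level) uncertainty enters through the spread of `thresholds`, mirroring the two uncertainty sources discussed above.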
Das gestorbene Ich
(2016)
Das Widerspenstige bändigen
(2016)
In school practice, as in the academic literature, teachers' actions are credited with a substantial influence on the quality of classroom teaching. Although extensive normative ideas about good teaching exist, little is known about the reasons teachers have for their pedagogical actions. Teachers' actions can only be adequately captured if education is understood, on the one hand, as the passing on of culture to the following generation and, on the other, as a process of self- and world-understanding that originates in the learning subject. The resulting demands on the teacher necessarily stand in contradiction to one another; this holds especially for a society with great cultural and social heterogeneity. In the search for relationships between personality, pedagogical knowledge or competencies and classroom action, this action is frequently assumed to be determined, and is reduced to cognitive aspects and to characteristics oriented towards external norms. More fruitful for answering the question of reasons are studies that describe professionalism as a way of relating to a particular structural framework that is shaped by contradictions and demands decisions about the fields of tension of pedagogical relationships. Subject-scientific learning theory offers a basis for understanding learning in institutional contexts as proceeding from the learning interests of the students. Teaching, in turn, can be understood as the support of processes of self- and world-understanding through appreciation, understanding and the offering of alternative horizons of meaning. Teachers' actions can then be understood as a meaning-giving engagement with the resulting demands, as well as with institutional demands, by means of societal structures of meaning. The acting subject makes sense of itself and of the world with the help of meanings, which can be understood as reinterpretations of societal structures of meaning owed to the particularities of biography, social position and life situation. In the empirical procedure, a transition from sequential to comparative analyses makes it possible to reconstruct positionings as meaning-reason complexes that are thematically specific and reach beyond the concrete situation of action. From this, situation-independent structural moments of the object 'teaching at vocational schools' are derived, as well as complex, situation-related subjective meaning-reason patterns. With the aid of further theoretical lenses, the key categories of 'interpretive power' ('Deutungsmacht') and 'instrumental pedagogical relationship' can be developed from the empirical material as essential structural characteristics. Since interpretive power depends on acceptance, and since in instrumental relationships a cooperative engagement with the object of teaching and learning occurs at best sporadically, these categories allow asymmetric, metastable arrangements between a teacher and students to be understood. Empirically, interpretive power appears in the variants 'absolute claim', 'acceptance of fragility' and 'acceptance of the legitimacy of being questioned'.
For the second key category, the variants 'structural shaping', 'unspecific, generally human character' and 'external shaping' of the instrumental pedagogical relationship occur. The meaning-reason patterns partly display inconsistencies and transitions in the positionings with respect to the variants described. Only for some of the patterns can efforts towards appreciation and understanding of the students be plausibly inferred; the same holds for openness to revising the patterns. Patterns such as 'assertive-enduring readjustment', 'directive-personalizing practice' or 'regulating-flexible managing' are to be understood as modes of coping with the contingent pedagogical (conflict) situations to which the case descriptions refer. The respective teacher used this pattern in the case described, which, however, permits no statement about which patterns the teacher would draw on in other cases. The results of the present work are suitable as a heuristic or theoretical lens that can support teachers in making sense of their own pedagogical actions, for instance in further training designed as case consultation. Connections to other theoretical approaches to teachers' actions are possible, as is a revised classification of those approaches. The options for capturing such action through scientific approaches are thereby expanded.
Das „Startprojekt“
(2016)
Graduates of our computer science bachelor's programmes need both subject-specific and generic competencies for competent professional practice. In introductory courses we often demand from first-semester students almost exclusively the development of subject competence, frequently neglecting self-competence, methodological competence and social competence. Yet precisely these three are indispensable for successful study and should be developed from the very beginning. We present our "Startprojekt" as one contribution towards fostering self-directed, generic competence development within a subject-specific context during the first semester.
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of the observations. Unraveling such transitions yields essential information for understanding the observed system's intrinsic evolution and potential external influences. Precise detection of multiple changes is therefore of great importance for various research disciplines, such as environmental sciences, bioinformatics and economics. The primary purpose of the detection approach introduced in this thesis is the investigation of transitions underlying direct or indirect climate observations. To develop a diagnostic approach capable of capturing such a variety of natural processes, generic statistical features in terms of central tendency and dispersion are employed in the light of Bayesian inversion. In contrast to established Bayesian approaches to multiple changes, the generic approach proposed in this thesis is not formulated in the framework of specialized, high-dimensional partition models requiring prior specification, but as a robust, low-dimensional kernel-based approach employing least informative prior distributions.
First, a local Bayesian inversion approach is developed to robustly infer the location and the generic patterns of a single transition. The analysis of synthetic time series comprising changes of different observational evidence, data loss and outliers validates the performance, consistency and sensitivity of the inference algorithm. To investigate time series systematically for multiple changes, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the weighted kernel inference results are composed into a proxy for the posterior distribution of multiple transitions. The detection approach is applied to environmental time series from the Nile River at Aswan and from the weather station in Tuscaloosa, Alabama, both comprising documented changes. The method's performance confirms the approach as a powerful diagnostic tool for deciphering multiple changes underlying direct climate observations.
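To make the idea concrete, the following toy Python sketch infers a single change in central tendency by scoring every admissible split with a two-segment likelihood under a flat prior; it is a simplified, profile-likelihood stand-in on synthetic data, not the kernel-based algorithm of the thesis.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Synthetic series with a shift in mean at index 100.
    x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])

    log_post = np.full(len(x), -np.inf)
    for k in range(10, len(x) - 10):   # enforce minimal segment lengths
        left, right = x[:k], x[k:]
        log_post[k] = (stats.norm.logpdf(left, left.mean(), left.std(ddof=1)).sum()
                       + stats.norm.logpdf(right, right.mean(), right.std(ddof=1)).sum())

    post = np.exp(log_post - log_post.max())
    post /= post.sum()                 # normalized proxy for the change-point posterior
    print("MAP change point:", int(np.argmax(post)))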
Finally, the kernel-based Bayesian inference approach is used to investigate a set of complex terrigenous dust records interpreted as climate indicators of the African region during the Plio-Pleistocene. A detailed inference unravels multiple transitions underlying the indirect climate observations, which are interpreted as conjoint changes. The identified conjoint changes coincide with established global climate events. In particular, the two-step transition associated with the establishment of the modern Walker circulation contributes to the current discussion about the influence of paleoclimate changes on the environmental conditions in tropical and subtropical Africa around two million years ago.
Fluxes of organic and inorganic carbon within the Amazon basin are considerably controlled by annual flooding, which triggers the export of terrigenous organic material to the river and ultimately to the Atlantic Ocean. The amount of carbon imported to the river and its further conversion, transport and export depend on temperature, atmospheric CO2, terrestrial productivity and carbon storage, as well as discharge. Both terrestrial productivity and discharge are influenced by climate and land use change. The coupled LPJmL and RivCM model system (Langerwisch et al., 2016) has been applied to assess the combined impacts of climate and land use change on Amazon riverine carbon dynamics. Vegetation dynamics (in LPJmL) as well as export and conversion of terrigenous carbon to and within the river (in RivCM) are included. The model system has been applied for the years 1901 to 2099 under two deforestation scenarios and with climate forcing of three SRES emission scenarios, each for five climate models. We find that high deforestation (business-as-usual scenario) will strongly decrease riverine particulate and dissolved organic carbon amounts (locally by up to 90%) until the end of the current century. At the same time, the increase in discharge leaves net carbon transport during the first decades of the century roughly unchanged, but only if a sufficient area remains forested. After 2050 the amount of transported carbon will decrease drastically. In contrast, increased temperature and atmospheric CO2 concentration determine the amount of riverine inorganic carbon stored in the Amazon basin. Higher atmospheric CO2 concentrations increase the riverine inorganic carbon amount by up to 20% (SRES A2). The changes in riverine carbon fluxes have direct effects on carbon export, either to the atmosphere via outgassing or to the Atlantic Ocean via discharge. The outgassed carbon will increase slightly across the Amazon basin but can be regionally reduced by up to 60% due to deforestation. The discharge of organic carbon to the ocean will be reduced by about 40% under the most severe deforestation and climate change scenario. These changes would have local and regional consequences for the carbon balance and habitat characteristics in the Amazon basin itself as well as in the adjacent Atlantic Ocean.
Even if greenhouse gas emissions were stopped today, sea level would continue to rise for centuries, with the long-term sea-level commitment of a 2 °C warmer world significantly exceeding 2 m. In view of the potential implications for coastal populations and ecosystems worldwide, we investigate, from an ice-dynamic perspective, the possibility of delaying sea-level rise by pumping ocean water onto the surface of the Antarctic ice sheet. We find that, due to wave propagation, ice is discharged back into the ocean much faster than would be expected from pure advection with surface velocities. The delay time depends strongly on the distance from the coastline at which the additional mass is placed and less strongly on the rate of sea-level rise that is mitigated. Millennium-scale storage of at least 80% of the additional ice requires placing it at least 700 km from the coastline. The pumping energy required to elevate the potential energy of ocean water sufficiently to mitigate the currently observed 3 mm per year of sea-level rise would exceed 7% of the current global primary energy supply. At the same time, the approach offers comprehensive protection for entire coastlines, particularly including regions that cannot be protected by dikes.
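The quoted energy figure can be sanity-checked with a back-of-envelope calculation; every input below is an assumption for illustration rather than a value taken from the study, yet the result lands in the same ballpark as the stated 7%.

    # Rough annual pumping energy for lifting the mitigated sea-level-rise
    # volume onto the ice sheet (all inputs assumed).
    ocean_area = 3.6e14      # m^2, global ocean surface
    slr_rate   = 3.0e-3      # m/yr, mitigated sea-level rise
    rho        = 1.0e3       # kg/m^3, water density (rounded)
    g          = 9.81        # m/s^2
    lift       = 2500.0      # m, assumed mean elevation gain on the ice sheet
    efficiency = 0.7         # assumed overall pump efficiency

    mass_per_year   = ocean_area * slr_rate * rho            # ~1.1e15 kg/yr
    energy_per_year = mass_per_year * g * lift / efficiency  # J/yr
    primary_supply  = 5.7e20                                 # J/yr, ~global primary energy
    print(f"{energy_per_year:.1e} J/yr = "
          f"{100.0 * energy_per_year / primary_supply:.1f}% of primary supply")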
This empirical study uses regression analyses to examine the influence of the form of government on the use and extent of economic sanctions. The results point to peaceful economic relations among democracies. Previous research has explained this democratic economic peace with institutional theory, which traces sanctioning behaviour back to a rationalist cost-benefit calculus. Constructivist theory, by contrast, holds that the more peaceful conflict resolution among democracies stems from the formation of a shared identity and from common values and norms.
Gravity dictates the structure of the whole Universe and, although it is triumphantly described by the theory of General Relativity, it is the force in nature that we understand least. One of the cardinal predictions of this theory is black holes. Massive, dark objects are found in the majority of galaxies; our own Galactic Center contains such an object with a mass of about four million solar masses. Are these objects supermassive black holes (SMBHs), or do we need alternatives? The answer lies in the event horizon, the characteristic that defines a black hole. The key to probing the horizon is to model the movement of stars around an SMBH, and the interactions between them, and to look for deviations from real observations. Nuclear star clusters harboring a massive, dark object with a mass of up to ~ ten million solar masses are good testbeds for probing the event horizon of the potential SMBH with stars. The channels for interaction between stars and the central SMBH are that (a) compact stars and stellar-mass black holes can gradually inspiral into the SMBH due to the emission of gravitational radiation, which is known as an "Extreme Mass Ratio Inspiral" (EMRI), and (b) stars can produce gas that will be accreted by the SMBH, through normal stellar evolution or through collisions and disruptions brought about by the strong central tidal field. Such processes can contribute significantly to the mass of the SMBH. These two processes involve different disciplines which, combined, will provide us with detailed information about the fabric of space and time. In this habilitation I present nine articles of my recent work directly related to these topics.
In this paper we report an experimental and computational study of liquid acetonitrile (H3C–C≡N) by resonant inelastic X-ray scattering (RIXS) at the N K-edge. The experimental spectra exhibit clear signatures of the electronic structure of the valence states at the N site, and a dependence on the incident-beam polarization is observed as well. Moreover, we find fine structure in the quasielastic line that is assigned to finite scattering duration and nuclear relaxation. We present a simple, computationally light model for the RIXS maps and analyze the experimental data using this model combined with ab initio molecular dynamics simulations. In addition to polarization-dependence and scattering-duration effects, we pinpoint the effects of different types of chemical bonding on the RIXS spectrum and conclude that the H2C–C=NH isomer, suggested in the literature, does not exist in detectable quantities. We study solution effects on the scattering spectra with simulations in liquid and in vacuum. The presented model for RIXS proved to be light enough to allow phase-space sampling and still accurate enough for the identification of transition lines in physical chemistry research by RIXS.
Dependency Resolution Difficulty Increases with Distance in Persian Separable Complex Predicates
(2016)
Delaying the appearance of a verb in a noun-verb dependency tends to increase processing difficulty at the verb; one explanation for this locality effect is decay and/or interference of the noun in working memory. Surprisal, an expectation-based account, predicts that delaying the appearance of a verb either renders it no more predictable or more predictable, leading respectively to a prediction of no effect of distance or a facilitation. Recently, Husain et al. (2014) suggested that when the exact identity of the upcoming verb is predictable (strong predictability), increasing argument-verb distance leads to facilitation effects, which is consistent with surprisal; but when the exact identity of the upcoming verb is not predictable (weak predictability), locality effects are seen. We investigated Husain et al.'s proposal using Persian complex predicates (CPs), which consist of a non-verbal element—a noun in the current study—and a verb. In CPs, once the noun has been read, the exact identity of the verb is highly predictable (strong predictability); this was confirmed using a sentence completion study. In two self-paced reading (SPR) and two eye-tracking (ET) experiments, we delayed the appearance of the verb by interposing a relative clause (Experiments 1 and 3) or a long PP (Experiments 2 and 4). We also included a simple Noun-Verb predicate configuration with the same distance manipulation; here, the exact identity of the verb was not predictable (weak predictability). Thus, the design crossed Predictability Strength and Distance. We found that, consistent with surprisal, the verb in the strong predictability conditions was read faster than in the weak predictability conditions. Furthermore, greater verb-argument distance led to slower reading times; strong predictability did not neutralize or attenuate the locality effects. As regards the effect of distance on dependency resolution difficulty, these four experiments present evidence in favor of working memory accounts of argument-verb dependency resolution, and against the surprisal-based expectation account of Levy (2008). However, another expectation-based measure, entropy, which was computed using the offline sentence completion data, predicts reading times in Experiment 1 but not in the other experiments. Because participants tend to produce more ungrammatical continuations in the long-distance condition in Experiment 1, we suggest that forgetting due to memory overload leads to greater entropy at the verb.
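The entropy measure referred to above can be computed directly from completion counts as Shannon entropy over the produced continuations; the counts and verbs in this short sketch are invented for illustration and are not the study's items.

    import math
    from collections import Counter

    # Invented sentence-completion data for one item.
    completions = ["zadan"] * 8 + ["kardan"] * 3 + ["gereftan"] * 1
    counts = Counter(completions)
    total = sum(counts.values())

    # H = -sum(p * log2(p)); a fully predictable verb yields H = 0 bits.
    H = -sum((c / total) * math.log2(c / total) for c in counts.values())
    print(f"entropy = {H:.2f} bits")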
Background: The goal of this study was to estimate the prevalence of and risk factors for diagnosed depression in heart failure (HF) patients in German primary care practices.
Methods: This study was a retrospective database analysis in Germany utilizing the Disease Analyzer® Database (IMS Health, Germany). The study population included 132,994 patients between 40 and 90 years of age from 1,072 primary care practices. The observation period was between 2004 and 2013. Follow-up lasted up to five years and ended in April 2015. A total of 66,497 HF patients were selected after applying exclusion criteria, and an equal number of 66,497 controls were matched (1:1) to the HF patients on the basis of age, sex, health insurance, prior depression diagnosis, and follow-up duration after the index date.
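The 1:1 matching described above can be pictured with a toy exact-matching routine; the column names and values below are invented for illustration and do not reflect the Disease Analyzer schema.

    import pandas as pd

    patients = pd.DataFrame({
        "id":       [0, 1, 2, 3, 4, 5, 6, 7],
        "hf":       [1, 1, 1, 1, 0, 0, 0, 0],
        "age_band": ["60s", "70s", "60s", "80s", "60s", "70s", "80s", "60s"],
        "sex":      ["f", "m", "m", "f", "f", "m", "f", "m"],
    })

    cases = patients[patients.hf == 1]
    pool = patients[patients.hf == 0].copy()
    matches = {}
    for _, case in cases.iterrows():
        hit = pool[(pool.age_band == case.age_band) & (pool.sex == case.sex)]
        if not hit.empty:                  # pair each case with one unused control
            matches[case["id"]] = hit.iloc[0]["id"]
            pool = pool.drop(hit.index[0])
    print(matches)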
Results: HF was a strong risk factor for diagnosed depression (p < 0.0001). A total of 10.5% of HF patients and 6.3% of matched controls developed depression after one year of follow-up (p < 0.001). Depression was documented in 28.9% of the HF group and 18.2% of the control group after the five-year follow-up (p < 0.001). Cancer, dementia, osteoporosis, stroke, and osteoarthritis were associated with a higher risk of developing depression. Male gender and private health insurance were associated with a lower risk of depression.
Conclusions: The risk of diagnosed depression is significantly increased in patients with HF compared to patients without HF in primary care practices in Germany.
The correspondence between Alexander von Humboldt and Karl Kreil was extensive and concerned geomagnetism, yet today only a single letter is known in the original. This letter, which Kreil sent to Alexander von Humboldt on 3 September 1836, agrees in content, and in part verbatim, with the letter Kreil sent to Carl Friedrich Gauß just one day later, on 4 September 1836. Four letters from Kreil to Humboldt were published in the "Annalen der Physik und Chemie"; a modest number of further letters to Humboldt are mentioned in the biographical literature on Kreil and in Kreil's letters to Koller and Gauß. It is not only the fragmentary surviving correspondence between Humboldt and Kreil, which extends to 1851, that sheds light on their relationship; of particular significance is also the holding of Kreiliana in Humboldt's library, comprising nine works by Kreil, the last from 1856. Verifiable contacts between Kreil and Humboldt thus certainly continued at least until that year.
Knowledge of the local structure of rare earth elements (REE) in silicate and aluminosilicate melts is of fundamental interest for the geochemistry of magmatic processes, especially for a comprehensive understanding of REE partitioning in magmatic systems. It is generally accepted that REE partitioning is controlled by temperature, pressure, oxygen fugacity (in the case of polyvalent cations) and crystal chemistry. Little is known, however, about the influence of the melt composition itself. The aim of this work is to establish a relationship between the variation of REE partitioning with melt composition and the coordination chemistry of the REE in the melt.
To this end, melt compositions from Prowatke and Klemme (2005), which show a marked change in titanite/melt partition coefficients solely as a function of melt composition, as well as haplogranitic and haplobasaltic melt compositions representing magmatic systems, were doped with La, Gd, Yb and Y and synthesized as glasses. The melts varied systematically in the aluminium saturation index (ASI), which covers a range of 0.115 to 0.768 for the Prowatke and Klemme (2005) compositions, 0.935 to 1.785 for the haplogranitic compositions and 0.368 to 1.010 for the haplobasaltic compositions. In addition, the haplogranitic compositions were synthesized with 4% H2O to study the influence of water on the local environment of the REE. X-ray absorption spectroscopy was used to obtain information on the local structure of Gd, Yb and Y. Analysis of the fine structure by EXAFS (extended X-ray absorption fine structure) spectroscopy provides quantitative information on the local environment, while RIXS (resonant inelastic X-ray scattering), and the high-resolution near-edge structure (XANES, X-ray absorption near edge structure) extracted from it, provides qualitative information on possible coordination changes of La, Gd and Yb in the glasses. To examine possible differences in the local structure above the glass transition temperature (TG) relative to room temperature, exemplary high-temperature Y-EXAFS measurements were carried out.
The EXAFS measurements were evaluated with a newly introduced histogram fit that can also describe non-symmetric, non-Gaussian pair distribution functions, as may occur at a high degree of polymerization or at high temperatures. With increasing ASI, the Y-EXAFS spectra for the Prowatke and Klemme (2005) compositions show an increase in the asymmetry and width of the Y-O pair distribution function, manifested in a change of the coordination number from 6 to 8 and an increase of the Y-O distance by 0.13 Å. A similar trend is observed for the Gd and Yb EXAFS spectra. The high-resolution XANES spectra for La, Gd and Yb show that the structural differences can be determined at least semi-quantitatively, in particular changes in the mean distance to the oxygen atoms. Compared to EXAFS spectroscopy, however, XANES provides no information on the shape and width of pair distribution functions. The high-temperature EXAFS measurements of Y indicate changes in the local structure above the glass transition temperature, which can primarily be attributed to a thermally induced increase of the mean Y-O distance. A comparison of the Y-O distances for compositions with an ASI of 0.115 and 0.755, determined at room temperature and at TG, shows, however, that the structural difference observed along the compositional series in the glass may be even more pronounced in the melt than previously assumed for the glasses.
Direct correlation of the partitioning data of Prowatke and Klemme (2005) with the structural changes of the melts reveals a linear correlation for Y, whereas Yb and Gd show a non-linear relationship. Owing to its ionic radius and charge, the six-fold-coordinated REE in the less polymerized melts is preferentially coordinated by non-bridging oxygen atoms, forming stable configurations. In the more highly polymerized melts with ASI values close to 1, six-fold coordination is not possible, since almost exclusively bridging oxygen atoms are available. The overbonding of bridging oxygen atoms around the REE is compensated by an increase of the coordination number and of the mean REE-O distance. This implies an energetically more favourable configuration in the more strongly depolymerized compositions, from which the observed variation of the partition coefficient results, although this variation differs strongly from element to element. For the haplogranitic and haplobasaltic compositions, an increase of the coordination number and of the average bond distance, accompanied by increasing skewness and asymmetry of the pair distribution function, was likewise observed with increasing polymerization. This implies that the respective REE also becomes more incompatible in these compositions as polymerization increases. Furthermore, the addition of water depolymerizes the melts, resulting in a more symmetric pair distribution function and thus in increasing compatibility.
In summary, changes in melt composition result in a change in the polymerization of the melts, which in turn has a significant influence on the local environment of the REE. The structural changes can be correlated directly with partitioning data, but the trends differ strongly between light, middle and heavy REE. This study has shown the magnitude such changes must reach to have a significant influence on the partition coefficient. It further shows that the influence of melt composition on trace element partitioning increases with increasing polymerization and therefore must not be neglected.
The aim of the present work is to investigate reward-dependent (instrumental) learning and decision-making processes in healthy participants, at the behavioural and neural level, as a function of chronic stress experience (assessed with the Lifetime Stress Inventory; Holmes and Rahe 1962) and cognitive variables (divided into a fluid and a crystalline intelligence component). The first step is to secure the construct validity between the terms model-free ~ habitual and model-based ~ goal-directed, which have so far often been used synonymously. Building on this, the differential and interactive influence of chronic stress experience and cognitive variables on decision processes (instrumental learning) and their neural correlate in the ventral striatum (VS) is examined. Finally, the relevance of the investigated reward-dependent learning processes for the development and maintenance of alcohol dependence is discussed, together with further influencing variables, in a review paper.
Der Fall der Rachel Dolezal
(2016)
Until 2015, the American Rachel Dolezal was known as an African American. As an activist of the National Association for the Advancement of Colored People she campaigned for the rights of the African American population, lived in a black environment and taught African American studies at a university. "I identify as black," she answered when an American television host asked whether she was African American. Her colleagues and close circle likewise identified her as such. Only when regional journalists took notice of her and her parents spoke out did it become clear that Dolezal is actually a white woman. Her parents confirmed this by publishing childhood photos of a light-skinned, blonde Rachel. Dolezal's conduct subsequently sparked a lively media debate about her person in the context of ethnicity and "race".
The author takes up Dolezal's case as an example to pursue the question of whether a "doing race" at will is possible. May Dolezal identify as black although she has no African ancestors? Which societal stocks of knowledge constrain this choice, and what consequences follow from it? The author pursues these questions through a discourse analysis of American newspaper articles, treating "race" and ethnicity as social constructions based on the concept of Stephen Cornell and Douglas Hartmann.
Der Florist
(2016)