Exosomes are small membrane vesicles released by different cell types, including hepatocytes, that play important roles in intercellular communication. We have previously demonstrated that hepatocyte-derived exosomes contain the synthetic machinery to form sphingosine-1-phosphate (S1P) in target hepatocytes resulting in proliferation and liver regeneration after ischemia/reperfusion (I/R) injury. We also demonstrated that the chemokine receptors, CXCR1 and CXCR2, regulate liver recovery and regeneration after I/R injury. In the current study, we sought to determine if the regulatory effects of CXCR1 and CXCR2 on liver recovery and regeneration might occur via altered release of hepatocyte exosomes. We found that hepatocyte release of exosomes was dependent upon CXCR1 and CXCR2. CXCR1-deficient hepatocytes produced fewer exosomes, whereas CXCR2-deficient hepatocytes produced more exosomes compared to their wild-type controls. In CXCR2-deficient hepatocytes, there was increased activity of neutral sphingomyelinase (Nsm) and intracellular ceramide. CXCR1-deficient hepatocytes had no alterations in Nsm activity or ceramide production. Interestingly, exosomes from CXCR1-deficient hepatocytes had no effect on hepatocyte proliferation, due to a lack of neutral ceramidase and sphingosine kinase. The data demonstrate that CXCR1 and CXCR2 regulate hepatocyte exosome release. The mechanism utilized by CXCR1 remains elusive, but CXCR2 appears to modulate Nsm activity and resultant production of ceramide to control exosome release. CXCR1 is required for packaging of enzymes into exosomes that mediate their hepatocyte proliferative effect.
Children’s interpretations of sentences containing focus particles do not seem adult-like until school age. This study investigates how German 4-year-old children comprehend sentences with the focus particle ‘nur’ (only), using different tasks and controlling for the impact of general cognitive abilities on performance measures. Two sentence types with ‘only’ in either pre-subject or pre-object position were presented. Eye gaze data and verbal responses were collected via the visual world paradigm combined with a sentence-picture verification task. While the eye tracking data revealed an adult-like pattern of focus particle processing, the sentence-picture verification replicated previous findings of poor comprehension, especially for ‘only’ in pre-subject position. A second study focused on the impact of general cognitive abilities on the outcomes of the verification task. Working memory was related to children’s performance in both sentence types, whereas inhibitory control was selectively related to the number of errors for sentences with ‘only’ in pre-subject position. These results suggest that children at the age of 4 years have the linguistic competence to correctly interpret sentences with focus particles, a competence which, depending on specific task demands, may be masked by immature general cognitive abilities.
The energy sector is both affected by climate change and a key sector for climate protection measures. Energy security is the backbone of our modern society and guarantees the functioning of most critical infrastructure. Thus, decision makers and energy suppliers of different countries should be familiar with the factors that increase or decrease the susceptibility of their electricity sector to climate change. Susceptibility here means the socioeconomic and structural characteristics of the electricity sector that affect the demand for and supply of electricity under climate change. Moreover, the relevant stakeholders need to know whether the given national energy and climate targets are feasible and what needs to be done in order to meet them. In this regard, a focus should be on the residential building sector, as it is one of the largest energy consumers and therefore one of the largest emitters of anthropogenic CO2 worldwide.
This dissertation addresses the first aspect, namely the susceptibility of the electricity sector, by developing a ranked index which allows a quantitative comparison of the electricity sector susceptibility of 21 European countries based on 14 influencing factors. Such a ranking has not been compiled to date. We applied a sensitivity analysis to test the relative effect of each influencing factor on the susceptibility index ranking. We also discuss reasons for the ranking positions, and thus the susceptibility, of selected countries. The second objective, namely the impact of climate change on the energy demand of buildings, is tackled by means of a new model with which the heating and cooling energy demand of residential buildings can be estimated. As examples, we applied the model to Germany and the Netherlands. The model considers projections of future changes in population, climate and the insulation standards of buildings, whereas most existing studies take into account fewer than three of the factors that influence the future energy demand of buildings. Furthermore, we developed a comprehensive retrofitting algorithm with which the total residential building stock can be modeled, for the first time, for each year in the past and future.
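As a hypothetical sketch of how such a ranked index can be constructed: the country names, factor values, min-max normalisation and equal weighting below are invented for illustration; the thesis combines 14 real influencing factors for 21 countries and may weight or aggregate them differently.

```python
import numpy as np

# Invented example data: rows are countries, columns are
# susceptibility-influencing factors (all values hypothetical).
countries = ["A", "B", "C"]
factors = np.array([
    [0.2, 0.9, 0.5],
    [0.8, 0.1, 0.4],
    [0.5, 0.5, 0.9],
])

# Min-max normalise each factor to [0, 1] so units are comparable,
# then average across factors to obtain a single index value.
norm = (factors - factors.min(axis=0)) / (factors.max(axis=0) - factors.min(axis=0))
index = norm.mean(axis=1)

# Higher index value = more susceptible electricity sector.
ranking = [countries[i] for i in np.argsort(-index)]
print(ranking)
```

A sensitivity analysis of the kind described above could then, for example, perturb or drop one factor column at a time and record how the ranking order changes.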
The study confirms that there is no correlation between the geographical location of a country and its position in the electricity sector susceptibility ranking. Moreover, we found no pronounced pattern in the susceptibility influencing factors between countries that ranked higher or lower in the index. We show that Luxembourg, Greece, Slovakia and Italy are the countries with the highest electricity sector susceptibility. The electricity sectors of Norway, the Czech Republic, Portugal and Denmark were found to be least susceptible to climate change. Knowledge of the most important factors behind the good and poor ranking positions of these countries is crucial for finding adequate adaptation measures to reduce the susceptibility of the electricity sector; these factors are therefore described within this study.
We show that the heating energy demand of residential buildings will strongly decrease in both Germany and the Netherlands in the future. The analysis for the Netherlands focused on the regional level and a finer temporal resolution, revealing strong variations in the future heating energy demand changes by province and by month. In the German study, we additionally investigated the future cooling energy demand and demonstrated that it will increase only slightly up to the middle of this century. Thus, increases in the cooling energy demand are not expected to offset reductions in heating energy demand. The main factor for substantial heating energy demand reductions is the retrofitting of buildings. We are the first to show that the given German and Dutch energy and climate targets in the building sector can only be met if the annual retrofitting rates are substantially increased. The current rate of only about 1% of the total building stock per year is insufficient for reaching a nearly zero-energy demand of all residential buildings by the middle of this century. To reach this target, the rate would need to be at least tripled. To sum up, this thesis emphasizes that country-specific characteristics are decisive for the electricity sector susceptibility of European countries. It also shows, for different scenarios, how much energy will be needed in the future to heat and cool residential buildings. With this information, existing climate mitigation and adaptation measures can be justified or new actions encouraged.
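A back-of-envelope check makes the tripling claim plausible; the 2016-2050 horizon and the assumption that a constant share of the total stock is retrofitted each year are simplifications made here for illustration, not the thesis's exact model.

```python
# Assumed horizon: roughly 34 years from 2016 to "mid-century".
years = 2050 - 2016

def share_retrofitted(annual_rate, years):
    """Cumulative share of the building stock retrofitted, assuming a
    constant share of the total stock is retrofitted each year."""
    return min(annual_rate * years, 1.0)

# Current rate of ~1% per year covers only about a third of the stock;
# a tripled rate of ~3% per year covers the entire stock by 2050.
print(share_retrofitted(0.01, years))
print(share_retrofitted(0.03, years))
```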
Climate change increases riverine carbon outgassing, while export to the ocean remains uncertain
(2016)
Any regular interaction of land and river during flooding affects carbon pools within the terrestrial system, riverine carbon and the carbon exported from the system. In the Amazon basin, carbon fluxes are considerably influenced by annual flooding, during which terrigenous organic material is imported into the river. The Amazon basin therefore represents an excellent example of a tightly coupled terrestrial-riverine system. The processes of generation, conversion and transport of organic carbon in such a coupled terrigenous-riverine system interact strongly and are climate-sensitive, yet their functioning is rarely considered in Earth system models and their response to climate change is still largely unknown. To quantify regional and global carbon budgets and the effects of climate change on carbon pools and fluxes, it is important to account for the coupling between the land, the river, the ocean and the atmosphere. We developed the RIVerine Carbon Model (RivCM), which is directly coupled to the well-established dynamic vegetation and hydrology model LPJmL, in order to account for this large-scale coupling. We evaluate RivCM with observational data and show that some variables are reproduced quite well by the model, while others show large deviations, mainly caused by the simplifying assumptions we made. Our evaluation shows that it is possible to reproduce large-scale carbon transport across a river system, albeit with large uncertainties. Acknowledging these uncertainties, we estimate the potential changes in riverine carbon by applying RivCM to climate forcing from five climate models and three CO2 emission scenarios (Special Report on Emissions Scenarios, SRES). We find that climate change causes a doubling of riverine organic carbon in the southern and western basin, while reducing it by 20% in the eastern and northern parts.
In contrast, the amount of riverine inorganic carbon shows a 2- to 3-fold increase in the entire basin, independent of the SRES scenario. The export of carbon to the atmosphere increases as well, on average by about 30%. In contrast, changes in the future export of organic carbon to the Atlantic Ocean depend on the SRES scenario and are projected to either decrease by about 8.9% (SRES A1B) or increase by about 9.1% (SRES A2). Such changes in the terrigenous-riverine system could have local and regional impacts on the carbon budget of the whole Amazon basin and parts of the Atlantic Ocean. Changes in riverine carbon could lead to a shift in riverine nutrient supply and pH, while changes in the carbon exported to the ocean alter the supply of organic material that acts as a food source in the Atlantic. On larger scales, the increased outgassing of CO2 could turn the Amazon basin from a sink of carbon into a considerable source. Therefore, we propose that the coupling of terrestrial and riverine carbon budgets be included in subsequent analyses of the future regional carbon budget.
Extreme hydro-meteorological events, such as severe droughts or heavy rainstorms, constitute primary manifestations of climate variability and exert a critical impact on the natural environment and human society. This is particularly true for high-mountain areas such as the eastern flank of the southern Central Andes of NW Argentina, a region impacted by the deep convection processes that underlie extreme events, often resulting in floods, a variety of mass movements, and hillslope processes. This region is characterized by pronounced E-W gradients in topography, precipitation, and vegetation cover, spanning low- to medium-elevation, humid and densely vegetated areas to high-elevation, arid and sparsely vegetated environments. This strong E-W gradient is mirrored by differences in the efficiency of surface processes, which mobilize and transport large amounts of sediment through the fluvial system, from the steep hillslopes to the intermontane basins and further to the foreland. In such a highly sensitive high-mountain environment, even small changes in the spatiotemporal distribution, magnitude and rates of extreme events may strongly impact environmental conditions, anthropogenic activity, and the well-being of mountain communities and beyond. However, although the NW Argentine Andes comprise the catchments of the La Plata river, which traverses one of the most populated and economically relevant areas of South America, there are only a few detailed investigations of climate variability and extreme hydro-meteorological events.
In this thesis, I focus on deciphering the spatiotemporal variability of rainfall and river discharge, with particular emphasis on extreme hydro-meteorological events in the subtropical southern Central Andes of NW Argentina during the past seven decades. I employ various methods to assess and quantify statistically significant trend patterns of rainfall and river discharge, integrating high-quality daily time series from gauging stations (40 rainfall and 8 river discharge stations) with gridded datasets (CPC-uni and TRMM 3B42 V7) for the period between 1940 and 2015. Evidence for a general intensification of the hydrological cycle at intermediate elevations (~0.5-3 km asl) on the eastern flank of the southern Central Andes is found in both the rainfall and the river-discharge time-series analyses. This intensification is associated with increases in the annual total amount of rainfall and in the mean annual discharge. However, the most pronounced trends are found at high percentiles, i.e. for extreme hydro-meteorological events, particularly during the wet season from December to February. An important outcome of my studies is the recognition of a rapid increase in the amount of river discharge during the period between 1971 and 1977, most likely linked to the 1976-77 global climate shift, which is associated with North Pacific Ocean sea surface temperature variability. Interestingly, after this rapid increase, both rainfall and river discharge decreased at low and intermediate elevations along the eastern flank of the Andes. In contrast, during the same time interval, extensive areas at high elevations on the arid Puna de Atacama plateau recorded increasing annual rainfall totals. This has been associated with more intense extreme hydro-meteorological events from 1979 to 2014.
This part of the study reveals that the low-, intermediate-, and high-elevation sectors of the Andes of NW Argentina respond differently to changing climate conditions.
Possible forcing mechanisms of the pronounced hydro-meteorological variability observed in the study area are also investigated. For the period between 1940 and 2015, I analyzed modes of oscillation of river discharge from small to medium drainage basins (10² to 10⁴ km²) located on the eastern flank of the orogen. First, I decomposed the relevant monthly time series using the Hilbert-Huang Transform, which is particularly appropriate for non-stationary time series that result from non-linear natural processes. I observed that discharge variability in the study region can be described by five quasi-periodic oscillatory modes on timescales varying from 1 to ~20 years. Second, I tested the link between river-discharge variations and large-scale climate modes of variability, using different climate indices, such as the BEST ENSO (Bivariate El Niño-Southern Oscillation Time-series) index. This analysis reveals that, although most of the variance on the annual timescale is associated with the South American Monsoon System, a relatively large part of river-discharge variability is linked to Pacific Ocean variability (PDO phases) at multi-decadal timescales (~20 years). To a lesser degree, river-discharge variability is also linked to the Tropical South Atlantic (TSA) sea surface temperature anomaly at multi-annual timescales (~2-5 years).
Taken together, these findings exemplify the high sensitivity of high-mountain environments to climatic variability and change. This is particularly true for the topographic transitions between the humid, low to moderate elevations and the semi-arid to arid highlands of the southern Central Andes. These areas of the mountain belt react to even subtle changes in the hydro-meteorological regime with major impacts on erosional hillslope processes and with mass movements that fundamentally alter the transport capacity of mountain streams. With more severe storms in these areas, the fluvial system exhibits pronounced variability in stream power on different timescales, leading to cycles of sediment aggradation, the loss of agriculturally used land and severe impacts on infrastructure.
Discourse production is crucial for communicative success and is at the core of aphasia assessment and treatment. Coherence differentiates discourse from a mere series of utterances or sentences; it is the internal unity and connectedness of discourse and, as such, perhaps its most inherent property. It is unclear whether people with aphasia, who experience various language production difficulties, preserve the ability to produce coherent discourse. The more general question of how coherence is established and represented linguistically has been addressed in the literature, yet remains unanswered. This dissertation presents an investigation of discourse production in aphasia and of the linguistic mechanisms of establishing coherence.
Caribbean States organised in CARICOM recently brought forward reparation claims against several European States for slavery and the (native) genocides in the Caribbean, and even threatened to approach the International Court of Justice. The paper provides an analysis of the facts behind the CARICOM claim and asks whether the law of state responsibility can provide the demanded compensation. As the intertemporal principle generally prohibits retroactive application of today’s international rules, the paper argues that the complete claim must be based on the law of state responsibility in force at the time of the respective conduct. An inquiry into the history of the primary rules (the prohibitions of slavery and genocide) as well as the secondary rules of state responsibility reveals that both sets of rules were underdeveloped or non-existent at the times of slavery and the alleged (native) genocides. Therefore, the author concludes that the CARICOM claim is legally flawed but nevertheless worth attention, as it once again exposes the imperial and colonial injustices of the past and their legitimisation by historical international law and international/natural lawyers.
Archaeology can be understood as a tool used in the process of identity formation, contributing to a sense of belonging and unity within a diverse set of communities. Research was conducted with the intention of analyzing the wide range of perceptions regarding archaeological sites in the mixed city of Lod, Israel. I explored the impact of urban cultural heritage on shaping the identity of local Jewish and Arab children, who were chosen as the youngest active members of society living in the city and who participated in the 2013 archaeological excavation season at the Khan al-Hilu. Israel is an ideal location for such research because it is simultaneously the focus of extensive archaeological excavations and the setting of an intractable conflict. Ancestral attachment to the land serves as a foundation for the collective identity of both Jews and Arabs. Yet each community and individual may relate differently to the surrounding archaeological sites, a relationship further shaped by their sense of societal hierarchy and cultural heritage.
This article first outlines different ways in which psycholinguists have dealt with linguistic diversity and illustrates these approaches with three familiar cases from research on language processing, language acquisition, and language disorders. The second part focuses on the role of morphology and morphological variability across languages in psycholinguistic research. The specific phenomena examined concern stem-formation morphology and inflectional classes; they illustrate how experimental research informed by linguistic typology can lead to new insights.
This cumulative dissertation contains four self-contained articles related to EU regional policy and its structural funds as the overall research topic. In particular, the thesis addresses the question of whether EU regional policy interventions can be scientifically justified and legitimated at all on theoretical and empirical grounds from an economics point of view. The first two articles of the thesis (“The EU structural funds as a means to hamper migration” and “Internal migration and EU regional policy transfer payments: a panel data analysis for 28 EU member countries”) enter into one particular aspect of the debate regarding the justification and legitimisation of EU regional policy. They analyse, theoretically and empirically, whether regional policy or the market force of the free flow of labour (migration) in the internal European market is the better instrument to improve and harmonise the living and working conditions of EU citizens. Based on neoclassical market failure theory, the first paper argues that the structural funds of the EU inhibit internal migration, which is one of the key mechanisms for achieving convergence among the nations in the single European market. It becomes clear that a European regional policy aiming at economic growth and cohesion among the member states cannot be justified and legitimated if the structural funds hamper rather than promote migration. The second paper, however, shows that the empirical evidence on the migration and regional policy nexus is ambiguous, i.e. different empirical investigations find that EU structural funds both hamper and promote EU internal migration. Hence, the question of the scientific justification and legitimisation of EU regional policy cannot be readily and unambiguously answered on empirical grounds. This finding is unsatisfying but in line with previous theoretical and empirical literature.
That is why I take a step back and reconsider the theoretical beginnings of the thesis, which took for granted neoclassical market failure theory as the starting point for the positive explanation as well as the normative justification and legitimisation of EU regional policy. The third article of the thesis (“EU regional policy: theoretical foundations and policy conclusions revisited”) deals with the theoretical explanation and legitimisation of EU regional policy, as well as with the policy recommendations deduced from neoclassical market failure theory for EU regional policymakers. The article elucidates that neoclassical market failure is a normative concept, which justifies and legitimates EU regional policy based on a political and thus subjective goal or value judgement. It can therefore neither be used to give a scientifically positive explanation of the structural funds nor to obtain objective and practically applicable policy instruments. Given this critique of neoclassical market failure theory, the third paper consequently calls into question the widely prevalent explanation and justification of EU regional policy given in static neoclassical equilibrium economics. It argues that an evolutionary non-equilibrium economics perspective on EU regional policy is much more appropriate for providing a realistic understanding of one of the largest policies conducted by the EU. However, this does not mean that evolutionary economic theory can be unreservedly seen as a panacea, either for positively explaining EU regional policy or for deriving objective policy instruments for EU regional policymakers. This issue is discussed in the fourth article of the thesis (“Market failure vs. system failure as a rationale for economic policy? A critique from an evolutionary perspective”). This article reconsiders the explanation of economic policy from an evolutionary economics perspective.
It contrasts the neoclassical equilibrium notions of market and government failure with the dominant evolutionary neo-Schumpeterian and Austrian-Hayekian perceptions. Based on this comparison, the paper criticises the fact that neoclassical failure reasoning still prevails in non-equilibrium evolutionary economics when economic policy issues are examined. This is surprising, since proponents of evolutionary economics usually view their approach as incompatible with its neoclassical counterpart. The paper therefore argues that in order to prevent the otherwise fruitful and more realistic evolutionary approach from undermining its own criticism of neoclassical economics and to create a consistent as well as objective evolutionary policy framework, it is necessary to eliminate the equilibrium spirit. Taken together, the main finding of this thesis is that European regional policy and its structural funds can neither theoretically nor empirically be justified and legitimated from an economics point of view. Moreover, the thesis finds that the prevalent positive and instrumental explanation of EU regional policy given in the literature needs to be reconsidered, because these theories can neither scientifically explain the emergence and development of this policy nor are they appropriate to derive objective and scientific policy instruments for EU regional policymakers.
We prove statistical rates of convergence for kernel-based least squares regression from i.i.d. data using a conjugate gradient algorithm, where regularization against overfitting is obtained by early stopping. This method is related to Kernel Partial Least Squares, a regression method that combines supervised dimensionality reduction with least squares projection. Following the setting introduced in earlier related literature, we study so-called "fast convergence rates" depending on the regularity of the target regression function (measured by a source condition in terms of the kernel integral operator) and on the effective dimensionality of the data mapped into the kernel space. We obtain upper bounds, essentially matching known minimax lower bounds, for the L^2 (prediction) norm as well as for the stronger Hilbert norm, if the true regression function belongs to the reproducing kernel Hilbert space. If the latter assumption is not fulfilled, we obtain similar convergence rates for appropriate norms, provided additional unlabeled data are available.
Convoluted Brownian motion
(2016)
In this paper we analyse semimartingale properties of a class of Gaussian periodic processes, called convoluted Brownian motions, obtained by convolution between a deterministic function and a Brownian motion. A classical example in this class is the periodic Ornstein-Uhlenbeck process. We compute their characteristics and show that, in general, they are neither Markovian nor satisfy a time-Markov field property. Nevertheless, by enlargement of filtration and/or addition of a one-dimensional component, one can in some cases recover Markovianity. We treat exhaustively the case of the bidimensional trigonometric convoluted Brownian motion and the higher-dimensional monomial convoluted Brownian motion.
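As a rough sketch of the object under study (the notation here is assumed for illustration and not taken from the paper), a convoluted Brownian motion can be written as the convolution of a deterministic kernel with Brownian increments:

```latex
% Sketch only: \varphi is a deterministic, T-periodic kernel and
% B a standard Brownian motion on [0, T].
X_t \;=\; (\varphi * \mathrm{d}B)_t \;=\; \int_{0}^{T} \varphi(t - s)\,\mathrm{d}B_s,
\qquad t \in [0, T].
% The periodic Ornstein--Uhlenbeck process corresponds to an
% exponential kernel periodised on [0, T].
```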
Water scarcity, adaptation to climate change, and the risk assessment of droughts and floods are critical topics for science and society today. Monitoring and modeling of the hydrological cycle are prerequisites to understanding and predicting the consequences for weather and agriculture. As soil water storage plays a key role in partitioning water fluxes between the atmosphere, biosphere, and lithosphere, measurement techniques are required to estimate soil moisture states from small to large scales.
The method of cosmic-ray neutron sensing (CRNS) promises to close the gap between point-scale and remote-sensing observations, as its footprint was reported to be 30 ha. However, the methodology is rather young and requires highly interdisciplinary research to understand and interpret the response of neutrons to soil moisture. In this work, the signal of nine detectors has been systematically compared, and correction approaches have been revised to account for meteorological and geomagnetic variations. Neutron transport simulations have been consulted to precisely characterize the sensitive footprint area, which turned out to be 6–18 ha, highly local, and temporally dynamic. These results have been experimentally confirmed by the significant influence of water bodies and dry roads. Furthermore, mobile measurements on agricultural fields and across different land use types were able to accurately capture the various soil moisture states. It has been further demonstrated that the corresponding spatial and temporal neutron data can be beneficial for mesoscale hydrological modeling. Finally, first tests with a gyrocopter have proven the concept of airborne neutron sensing, where increased footprints are able to overcome local effects.
This dissertation not only bridges the gap between scales of soil moisture measurements. It also establishes a close connection between the two worlds of observers and modelers, and further aims to combine the disciplines of particle physics, geophysics, and soil hydrology to thoroughly explore the potential and limits of the CRNS method.
Coupling of attention and saccades when viewing scenes with central and peripheral degradation
(2016)
Degrading real-world scenes in the central or the peripheral visual field yields a characteristic pattern: Mean saccade amplitudes increase with central and decrease with peripheral degradation. Does this pattern reflect corresponding modulations of selective attention? If so, the observed saccade amplitude pattern should reflect more focused attention in the central region with peripheral degradation and an attentional bias toward the periphery with central degradation. To investigate this hypothesis, we measured the detectability of peripheral (Experiment 1) or central targets (Experiment 2) during scene viewing when low or high spatial frequencies were gaze-contingently filtered in the central or the peripheral visual field. Relative to an unfiltered control condition, peripheral filtering induced a decrease of the detection probability for peripheral but not for central targets (tunnel vision). Central filtering decreased the detectability of central but not of peripheral targets. Additional post hoc analyses are compatible with the interpretation that saccade amplitudes and direction are computed in partial independence. Our experimental results indicate that task-induced modulations of saccade amplitudes reflect attentional modulations.
We study the adsorption–desorption transition of polyelectrolyte chains onto planar, cylindrical and spherical surfaces with arbitrarily high surface charge densities by massive Monte Carlo computer simulations. We examine in detail how the well-known scaling relations for the threshold transition, which demarcates the adsorbed and desorbed domains of a polyelectrolyte near weakly charged surfaces, are altered for highly charged interfaces. By virtue of high surface potentials and large surface charge densities, the Debye–Hückel approximation is often not feasible and the nonlinear Poisson–Boltzmann approach should be implemented. At low salt conditions, for instance, the electrostatic potential from the nonlinear Poisson–Boltzmann equation is smaller than the Debye–Hückel result, such that the required critical surface charge density for polyelectrolyte adsorption σc increases. The nonlinear relation between the surface charge density and the electrostatic potential leads to a sharply increasing critical surface charge density with growing ionic strength, imposing an additional limit: above a critical salt concentration, no polyelectrolyte adsorption occurs at all. We contrast our simulation findings with the known scaling results for weak critical polyelectrolyte adsorption onto oppositely charged surfaces for the three standard geometries. Finally, we discuss applications of our results to physicochemical and biophysical systems.
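The contrast drawn above between the nonlinear and linearised treatments can be summarised by the standard textbook forms of the two equations (stated here in conventional notation for orientation; the symbols are not taken from the paper):

```latex
% Nonlinear Poisson--Boltzmann equation for a symmetric 1:1 salt,
% with inverse Debye length \kappa:
\nabla^2 \psi \;=\; \kappa^2 \, \frac{k_B T}{e}\, \sinh\!\left(\frac{e\psi}{k_B T}\right),
% which, for small potentials e\psi \ll k_B T, linearises to the
% Debye--Hueckel equation:
\nabla^2 \psi \;=\; \kappa^2 \psi .
```

For surface potentials well beyond the thermal scale k_BT/e (about 25 mV at room temperature), the sinh term makes the nonlinear solution decay faster than the Debye–Hückel one, consistent with the larger critical surface charge density σc reported above.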
Volunteered geographical information (VGI) and citizen science have become important sources of data for much scientific research. In the domain of land cover, crowdsourcing can provide high temporal resolution data to support different analyses of landscape processes. However, scientists may have little control over what gets recorded by the crowd, providing a potential source of error and uncertainty. This study compared analyses of crowdsourced land cover data that were contributed by different groups, based on nationality (labelled Gondor and Non-Gondor) and on domain experience (labelled Expert and Non-Expert). The analyses used a geographically weighted model to generate maps of land cover and compared the maps generated by the different groups. The results highlight the differences between the maps and the ways in which specific land cover classes were under- and over-estimated. As crowdsourced data and citizen science are increasingly used to replace data collected under designed experiments, this paper highlights the importance of considering between-group variations and their impacts on the results of analyses. Critically, differences in the way that landscape features are conceptualised by different groups of contributors need to be considered when using crowdsourced data in formal scientific analyses. The discussion considers the potential for variation in crowdsourced data and the relativist nature of land cover, and suggests a number of areas for future research. The key finding is that the veracity of citizen science data is not the critical issue per se. Rather, it is important to consider the impacts of differences in the semantics, affordances and functions associated with landscape features held by different groups of crowdsourced data contributors.
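The core of a geographically weighted model is a kernel-weighted local average of contributed values. The sketch below shows that idea in its simplest form; the Gaussian kernel, the function name and the sample points are illustrative assumptions, not the paper's actual model or data.

```python
import math

def gw_estimate(points, x0, y0, bandwidth):
    """Geographically weighted estimate at location (x0, y0): a Gaussian
    kernel-weighted mean of contributed values (e.g. agreement of a
    crowdsourced label with a reference land cover class).
    A minimal sketch of the GW idea only."""
    num = den = 0.0
    for x, y, v in points:
        d2 = (x - x0) ** 2 + (y - y0) ** 2
        w = math.exp(-d2 / (2 * bandwidth ** 2))  # nearby contributions weigh more
        num += w * v
        den += w
    return num / den

# Hypothetical contributions as (x, y, value) triples
pts = [(0, 0, 1.0), (1, 0, 0.0), (10, 10, 0.0)]
print(gw_estimate(pts, 0, 0, 1.0))  # dominated by the two nearby points
```

Computing such local estimates separately for each contributor group, on a grid of locations, yields the group-specific maps whose differences the study compares.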
The title compounds, [(1R,3R,4R,5R,6S)-4,5-bis(acetyloxy)-7-oxo-2-oxabicyclo[4.2.0]octan-3-yl]methyl acetate, C14H18O8, (I), (1S,4R,5S,6R)-5-acetyloxy-7-hydroxyimino-2-oxobicyclo[4.2.0]octan-4-yl acetate, C11H15NO6, (II), and [(3aR,5R,6R,7R,7aS)-6,7-bis(acetyloxy)-2-oxooctahydropyrano[3,2-b]pyrrol-5-yl]methyl acetate, C14H19NO8, (III), are stable bicyclic carbohydrate derivatives. They can easily be synthesized in a few steps from commercially available glycals. As a result of the ring strain from the four-membered rings in (I) and (II), the conformations of the carbohydrates deviate strongly from the ideal chair form. Compound (II) occurs in the boat form. In the five-membered lactam (III), on the other hand, the carbohydrate adopts an almost ideal chair conformation. As a result of the distortion of the sugar rings, the configurations of the three bicyclic carbohydrate derivatives could not be determined from their NMR coupling constants. From our three crystal structure determinations, we were able to establish for the first time the absolute configurations of all new stereocenters of the carbohydrate rings.
Ecosystem services have a significant impact on human well-being. While ecosystem services are frequently represented by monetary values, social values and the underlying social benefits remain underexplored. The purpose of this study is to assess whether and how social benefits have been explicitly addressed within socio-economic and socio-cultural ecosystem services research, ultimately allowing a better understanding of the link between ecosystem services and human well-being. In this paper, we reviewed 115 international primary valuation studies and tested four hypotheses associated with the identification of social benefits of ecosystem services using logistic regressions. The tested hypotheses were that (1) social benefits are mostly derived in studies that assess cultural ecosystem services as opposed to other ecosystem service types, (2) there is a pattern of social benefits and certain cultural ecosystem services being assessed simultaneously, (3) monetary valuation techniques go beyond expressing monetary values and convey social benefits, and (4) directly addressing stakeholders' views promotes the consideration of social benefits in ecosystem service assessments. Our analysis revealed that (1) a variety of social benefits are valued in studies that assess any of the four ecosystem service types, (2) certain social benefits are likely to co-occur in combination with certain cultural ecosystem services, (3) of the studies that employed monetary valuation techniques, simulated market approaches overlapped most frequently with the assessment of social benefits, and (4) studies that directly incorporate stakeholders' views were more likely to also assess social benefits. (C) 2016 Elsevier B.V. All rights reserved.
Most climate change impacts manifest in the form of natural hazards. Damage assessment typically relies on damage functions that translate the magnitude of extreme events into quantifiable damage. In practice, the availability of damage functions is limited due to a lack of data sources and a lack of understanding of damage processes. The study of the characteristics of damage functions for different hazards could strengthen the theoretical foundation of damage functions and support their development and validation. Accordingly, we investigate analogies between damage functions for coastal flooding and for wind storms and identify a unified approach. This approach has general applicability to granular portfolios and may also be applied, for example, to heat-related mortality. Moreover, the unification enables the transfer of methodology between hazards and a consistent treatment of uncertainty. This is demonstrated by a sensitivity analysis on the basis of two simple case studies (for coastal flood and storm damage). The analysis reveals the relevance of the various uncertainty sources at varying hazard magnitude and at both the microscale and the macroscale level. The main findings are the dominance of uncertainty from the hazard magnitude and the persistent behaviour of intrinsic uncertainties at both scale levels. Our results shed light on the general role of uncertainties and provide useful insight for the application of the unified approach.
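The micro/macro distinction underlying such damage assessments can be sketched briefly: a microscale damage function translates hazard magnitude (here, water depth) into a loss fraction per asset, and macroscale damage aggregates over a granular portfolio. The saturating functional form, the parameter values and the random portfolio below are illustrative assumptions only, not the paper's damage functions.

```python
import random

def depth_damage_fraction(depth_m, scale=2.0):
    """Illustrative microscale damage function: fraction of asset value lost,
    rising linearly with water depth and saturating at total loss (1.0).
    'scale' is the assumed depth at which the asset is fully destroyed."""
    return min(1.0, max(0.0, depth_m / scale)) if depth_m > 0 else 0.0

def portfolio_damage(asset_values, depths):
    """Macroscale damage: microscale losses aggregated over a granular portfolio."""
    return sum(v * depth_damage_fraction(d) for v, d in zip(asset_values, depths))

random.seed(0)
values = [random.uniform(1e5, 5e5) for _ in range(1000)]  # asset values (EUR, assumed)
depths = [random.uniform(0.0, 3.0) for _ in range(1000)]  # local water depths (m, assumed)
print(portfolio_damage(values, depths))
```

Uncertainty analysis of the kind described then amounts to perturbing the hazard magnitudes (the dominant source, per the abstract) and the damage-function parameters, and comparing the spread of the aggregate.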
Das gestorbene Ich
(2016)
Das Widerspenstige bändigen
(2016)
In school practice, as in the research literature, teachers' actions are credited with a substantial influence on the quality of classroom instruction. Yet although extensive normative ideas about good teaching exist, little is known about the reasons teachers have for their pedagogical actions. Teachers' actions can only be adequately captured if education is understood both as the passing on of culture to the next generation and as a process of self- and world-understanding that originates with the learning subject. The demands this places on teachers necessarily contradict one another; this holds especially in a society of great cultural and social heterogeneity. Studies searching for relationships between personality, pedagogical knowledge or competencies and classroom action frequently assume that this action is determined, reducing it to cognitive aspects and to characteristics oriented toward external norms. More fruitful for answering the question of reasons are studies that describe professionalism as a way of relating to a particular structural frame, one shaped by contradictions and requiring decisions within the fields of tension of pedagogical relations. Subject-scientific learning theory offers a basis for understanding learning in institutional contexts that starts from the learning interests of the students. Teaching can accordingly be understood as supporting processes of self- and world-understanding through appreciation, understanding and the offering of alternative horizons of meaning. Teachers' actions can thus be understood as a meaning-giving engagement, by means of societal structures of meaning, with the demands resulting from this as well as with institutional demands.
The acting subject makes sense of itself and the world by means of meanings. These can be understood as reinterpretations of societal structures of meaning, shaped by the particularity of a biography, a societal position and a life situation. In the empirical procedure, positionings can be reconstructed, via a transition from sequential to comparative analyses, as thematically specific meaning-reason relations that reach beyond the concrete situation of action. From these, situation-independent structural moments of the object of teaching at vocational schools, as well as complex, situation-related subjective meaning-reason patterns, are derived. As essential structural characteristics, the key categories of 'interpretive power' (Deutungsmacht) and 'instrumental pedagogical relationship' can be developed from the empirical material with the help of further theoretical lenses. Since interpretive power depends on acceptance, and since in instrumental relationships a cooperative engagement with the object of teaching and learning occurs at best sporadically, these categories allow asymmetric, metastable arrangements between a teacher and students to be understood. Empirically, interpretive power appears in the variants 'absolute claim', 'acceptance of fragility' and 'acceptance of the legitimacy of being questioned'. For the second key category, the variants 'structural shaping', 'unspecific general-human character' and 'external shaping' of the instrumental pedagogical relationship occur. The meaning-reason patterns partly show inconsistencies and transitions in the positionings with respect to the variants presented. Only for some of the patterns can efforts at appreciating and understanding the students be plausibly derived; the same holds for openness to a revision of the patterns.
The patterns, such as 'assertive-enduring readjustment', 'directive-personalizing practice' or 'regulating-flexible managing', are to be understood as modes of coping with the contingent pedagogical (conflict) situations to which the case descriptions refer. The respective teacher used this pattern in the case described, which, however, allows no statement about which patterns the teacher would draw on in other cases. The results of the present work are suitable as a heuristic or theoretical lens that can support teachers in making sense of their own pedagogical actions, for instance in further training designed as case consultation. Connections to other theoretical approaches to teachers' actions are possible, as is a changed classification of those approaches. The options for capturing such action through scholarly approaches are thereby expanded.
Das „Startprojekt“
(2016)
Graduates of our computer science bachelor's programmes need both subject-specific and generic competencies for competent professional practice. In introductory courses we often demand of first-semester students almost exclusively the development of subject competence, frequently neglecting self-competence, methodological competence and social competence. Yet precisely these three are indispensable for a successful course of study and should be developed from the very beginning. We present our „Startprojekt" as a contribution to fostering self-directed, generic competence development in a subject context during the first semester.
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of the observations. Unraveling such transitions yields essential information for the understanding of the observed system’s intrinsic evolution and potential external influences. A precise detection of multiple changes is therefore of great importance for various research disciplines, such as environmental sciences, bioinformatics and economics. The primary purpose of the detection approach introduced in this thesis is the investigation of transitions underlying direct or indirect climate observations. In order to develop a diagnostic approach capable to capture such a variety of natural processes, the generic statistical features in terms of central tendency and dispersion are employed in the light of Bayesian inversion. In contrast to established Bayesian approaches to multiple changes, the generic approach proposed in this thesis is not formulated in the framework of specialized partition models of high dimensionality requiring prior specification, but as a robust kernel-based approach of low dimensionality employing least informative prior distributions.
First of all, a local Bayesian inversion approach is developed to robustly infer the location and the generic patterns of a single transition. The analysis of synthetic time series comprising changes of different observational evidence, data loss and outliers validates the performance, consistency and sensitivity of the inference algorithm. To systematically investigate time series for multiple changes, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the weighted kernel inference results are composed into a proxy probability for a posterior distribution of multiple transitions. The detection approach is applied to environmental time series from the Nile river in Aswan and from the weather station Tuscaloosa, Alabama, both comprising documented changes. The method's performance confirms the approach as a powerful diagnostic tool to decipher multiple changes underlying direct climate observations.
Finally, the kernel-based Bayesian inference approach is used to investigate a set of complex terrigenous dust records interpreted as climate indicators of the African region of the Plio-Pleistocene period. A detailed inference unravels multiple transitions underlying the indirect climate observations, that are interpreted as conjoint changes. The identified conjoint changes coincide with established global climate events. In particular, the two-step transition associated to the establishment of the modern Walker-Circulation contributes to the current discussion about the influence of paleoclimate changes on the environmental conditions in tropical and subtropical Africa at around two million years ago.
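The core idea of locating a transition by scoring candidate change points can be sketched in a few lines. The toy mean-shift scan below uses a profile Gaussian likelihood and normalizes the scores into a pseudo-posterior over locations; it stands in for, and greatly simplifies, the kernel-based Bayesian inference developed in the thesis.

```python
import math

def change_point_posterior(x):
    """Toy Bayesian-style scan for a single mean shift: score each split point
    by the profile log-likelihood of a two-segment Gaussian model, then
    normalize the scores into a pseudo-posterior over change locations.
    (A sketch only; the thesis's kernel-based approach is more general.)"""
    n = len(x)

    def seg_ll(seg):
        m = sum(seg) / len(seg)
        rss = sum((v - m) ** 2 for v in seg)
        var = rss / len(seg) + 1e-12  # guard against zero variance
        return -0.5 * len(seg) * (math.log(2 * math.pi * var) + 1.0)

    scores = [seg_ll(x[:k]) + seg_ll(x[k:]) for k in range(2, n - 2)]
    mx = max(scores)
    w = [math.exp(s - mx) for s in scores]  # stable softmax normalization
    z = sum(w)
    return {k: w[i] / z for i, k in enumerate(range(2, n - 2))}

# Synthetic series with a mean shift at index 50
data = [0.0] * 50 + [3.0] * 50
post = change_point_posterior(data)
print(max(post, key=post.get))  # most probable change location
```

Scanning kernels of varying width over the series, rather than assuming exactly one change, is what turns such a single-transition score into the multiple-change proxy posterior described above.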
Fluxes of organic and inorganic carbon within the Amazon basin are considerably controlled by annual flooding, which triggers the export of terrigenous organic material to the river and ultimately to the Atlantic Ocean. The amount of carbon imported to the river and the further conversion, transport and export of it depend on temperature, atmospheric CO2, terrestrial productivity and carbon storage, as well as discharge. Both terrestrial productivity and discharge are influenced by climate and land use change. The coupled LPJmL and RivCM model system (Langerwisch et al., 2016) has been applied to assess the combined impacts of climate and land use change on the Amazon riverine carbon dynamics. Vegetation dynamics (in LPJmL) as well as export and conversion of terrigenous carbon to and within the river (RivCM) are included. The model system has been applied for the years 1901 to 2099 under two deforestation scenarios and with climate forcing of three SRES emission scenarios, each for five climate models. We find that high deforestation (business-as-usual scenario) will strongly decrease (locally by up to 90 %) riverine particulate and dissolved organic carbon amount until the end of the current century. At the same time, increase in discharge leaves net carbon transport during the first decades of the century roughly unchanged only if a sufficient area is still forested. After 2050 the amount of transported carbon will decrease drastically. In contrast to that, increased temperature and atmospheric CO2 concentration determine the amount of riverine inorganic carbon stored in the Amazon basin. Higher atmospheric CO2 concentrations increase riverine inorganic carbon amount by up to 20% (SRES A2). The changes in riverine carbon fluxes have direct effects on carbon export, either to the atmosphere via outgassing or to the Atlantic Ocean via discharge. 
The outgassed carbon will increase slightly in the Amazon basin, but can be regionally reduced by up to 60% due to deforestation. The discharge of organic carbon to the ocean will be reduced by about 40% under the most severe deforestation and climate change scenario. These changes would have local and regional consequences on the carbon balance and habitat characteristics in the Amazon basin itself as well as in the adjacent Atlantic Ocean.
Even if greenhouse gas emissions were stopped today, sea level would continue to rise for centuries, with the long-term sea-level commitment of a 2 degrees C warmer world significantly exceeding 2 m. In view of the potential implications for coastal populations and ecosystems worldwide, we investigate, from an ice-dynamic perspective, the possibility of delaying sea-level rise by pumping ocean water onto the surface of the Antarctic ice sheet. We find that due to wave propagation ice is discharged much faster back into the ocean than would be expected from a pure advection with surface velocities. The delay time depends strongly on the distance from the coastline at which the additional mass is placed and less strongly on the rate of sea-level rise that is mitigated. A millennium-scale storage of at least 80% of the additional ice requires placing it at a distance of at least 700 km from the coastline. The pumping energy required to elevate the potential energy of ocean water to mitigate the currently observed 3 mm yr⁻¹ will exceed 7% of the current global primary energy supply. At the same time, the approach offers a comprehensive protection for entire coastlines particularly including regions that cannot be protected by dikes.
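The order of magnitude of the quoted pumping energy can be checked with a back-of-envelope potential-energy estimate. All constants below are rounded textbook values assumed for illustration, and the result is a lower bound (pure lifting work, no pumping or friction losses), which is why it falls somewhat below the >7% figure in the abstract.

```python
# Back-of-envelope estimate (assumed round numbers, not the paper's exact figures):
# energy per year to lift 3 mm of sea-level-equivalent ocean water onto the
# Antarctic ice sheet surface, compared with global primary energy supply.

OCEAN_AREA = 3.6e14             # m^2, global ocean surface (approx.)
SLR_RATE = 3e-3                 # m/yr, currently observed sea-level rise
RHO = 1.0e3                     # kg/m^3, water density, rounded
G = 9.81                        # m/s^2, gravitational acceleration
LIFT_HEIGHT = 2500.0            # m, assumed mean elevation of the storage site
GLOBAL_PRIMARY_ENERGY = 5.8e20  # J/yr, approx. current global supply

mass_per_year = OCEAN_AREA * SLR_RATE * RHO          # kg of water per year
potential_energy = mass_per_year * G * LIFT_HEIGHT   # J/yr, lower bound
share = potential_energy / GLOBAL_PRIMARY_ENERGY
print(f"{potential_energy:.2e} J/yr, {share:.1%} of global primary energy")
```

Even this frictionless lower bound lands at a few percent of global primary energy supply, consistent with the abstract's statement that the full requirement exceeds 7%.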
This empirical study investigates the influence of the form of government on the use and extent of economic sanctions, using regression analyses. The results point to a peaceful economic relationship among democracies. Previous research has explained this democratic economic peace with institutional theory, which traces sanctioning behaviour back to a rationalist cost-benefit calculus. By contrast, constructivist theory holds that the more peaceful conflict resolution among democracies is due to the formation of a shared identity as well as to shared values and norms.
Gravity dictates the structure of the whole Universe and, although it is triumphantly described by the theory of General Relativity, it is the force in nature that we understand least. Among the cardinal predictions of this theory are black holes. Massive, dark objects are found in the majority of galaxies, and our own Galactic Center contains such an object with a mass of about four million solar masses. Are these objects supermassive black holes (SMBHs), or do we need alternatives? The answer lies in the event horizon, the characteristic that defines a black hole. The key to probing the horizon is to model the movement of stars around an SMBH, and the interactions between them, and to look for deviations from real observations. Nuclear star clusters harboring a massive, dark object with a mass of up to ~ ten million solar masses are good testbeds to probe the event horizon of the potential SMBH with stars. The channels for interactions between stars and the central SMBH are that (a) compact stars and stellar-mass black holes can gradually inspiral into the SMBH due to the emission of gravitational radiation, which is known as an "Extreme Mass Ratio Inspiral" (EMRI), and (b) stars can produce gas that will be accreted by the SMBH through normal stellar evolution, or through collisions and disruptions brought about by the strong central tidal field. Such processes can contribute significantly to the mass of the SMBH. These two processes involve different disciplines, which combined will provide us with detailed information about the fabric of space and time. In this habilitation I present nine articles of my recent work directly related to these topics.
In this paper we report an experimental and computational study of liquid acetonitrile (H3C–C≡N) by resonant inelastic X-ray scattering (RIXS) at the N K-edge. The experimental spectra exhibit clear signatures of the electronic structure of the valence states at the N site, and a dependence on the incident-beam polarization is observed as well. Moreover, we find fine structure in the quasielastic line that is assigned to finite scattering duration and nuclear relaxation. We present a simple and computationally light model for the RIXS maps and analyze the experimental data using this model combined with ab initio molecular dynamics simulations. In addition to polarization-dependence and scattering-duration effects, we pinpoint the effects of different types of chemical bonding on the RIXS spectrum and conclude that the H2C=C=NH isomer, suggested in the literature, does not exist in detectable quantities. We study solution effects on the scattering spectra with simulations in the liquid and in vacuum. The presented model for RIXS proved to be light enough to allow phase-space sampling and still accurate enough for the identification of transition lines in physical chemistry research by RIXS.
Dependency Resolution Difficulty Increases with Distance in Persian Separable Complex Predicates
(2016)
Delaying the appearance of a verb in a noun-verb dependency tends to increase processing difficulty at the verb; one explanation for this locality effect is decay and/or interference of the noun in working memory. Surprisal, an expectation-based account, predicts that delaying the appearance of a verb either renders it no more predictable or more predictable, leading respectively to a prediction of no effect of distance or a facilitation. Recently, Husain et al. (2014) suggested that when the exact identity of the upcoming verb is predictable (strong predictability), increasing argument-verb distance leads to facilitation effects, which is consistent with surprisal; but when the exact identity of the upcoming verb is not predictable (weak predictability), locality effects are seen. We investigated Husain et al.'s proposal using Persian complex predicates (CPs), which consist of a non-verbal element—a noun in the current study—and a verb. In CPs, once the noun has been read, the exact identity of the verb is highly predictable (strong predictability); this was confirmed using a sentence completion study. In two self-paced reading (SPR) and two eye-tracking (ET) experiments, we delayed the appearance of the verb by interposing a relative clause (Experiments 1 and 3) or a long PP (Experiments 2 and 4). We also included a simple Noun-Verb predicate configuration with the same distance manipulation; here, the exact identity of the verb was not predictable (weak predictability). Thus, the design crossed Predictability Strength and Distance. We found that, consistent with surprisal, the verb in the strong predictability conditions was read faster than in the weak predictability conditions. Furthermore, greater verb-argument distance led to slower reading times; strong predictability did not neutralize or attenuate the locality effects. 
As regards the effect of distance on dependency resolution difficulty, these four experiments present evidence in favor of working memory accounts of argument-verb dependency resolution, and against the surprisal-based expectation account of Levy (2008). However, another expectation-based measure, entropy, which was computed using the offline sentence completion data, predicts reading times in Experiment 1 but not in the other experiments. Because participants tend to produce more ungrammatical continuations in the long-distance condition in Experiment 1, we suggest that forgetting due to memory overload leads to greater entropy at the verb.
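Entropy over offline completion data, as used above, can be computed directly from response counts. The Persian verb forms below are hypothetical stand-ins, not the study's materials; the point is only to show how a dominant completion yields low entropy (strong predictability) and a spread of completions yields high entropy.

```python
import math
from collections import Counter

def completion_entropy(completions):
    """Shannon entropy (in bits) of the distribution of verb completions
    produced in an offline sentence completion task. Higher entropy means
    the upcoming verb is less predictable."""
    counts = Counter(completions)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical completion data (illustrative only):
strong = ["kardan"] * 18 + ["dadan"] * 2                    # one verb dominates
weak = ["did"] * 6 + ["xarid"] * 5 + ["avard"] * 5 + ["bord"] * 4
print(completion_entropy(strong))  # low entropy: highly predictable verb
print(completion_entropy(weak))    # high entropy: weakly predictable verb
```

Computed this way per condition, entropy at the verb can then be entered as a predictor of reading times, which is how the entropy account was evaluated against the data of Experiments 1-4.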
Background: The goal of this study was to estimate the prevalence of and risk factors for diagnosed depression in heart failure (HF) patients in German primary care practices.
Methods: This study was a retrospective database analysis in Germany utilizing the Disease Analyzer (R) Database (IMS Health, Germany). The study population included 132,994 patients between 40 and 90 years of age from 1,072 primary care practices. The observation period was between 2004 and 2013. Follow-up lasted up to five years and ended in April 2015. A total of 66,497 HF patients were selected after applying exclusion criteria. An equal number of controls (66,497) was chosen and matched (1:1) to HF patients on the basis of age, sex, health insurance, depression diagnosis in the past, and follow-up duration after the index date.
Results: HF was a strong risk factor for diagnosed depression (p < 0.0001). A total of 10.5% of HF patients and 6.3% of matched controls developed depression after one year of follow-up (p < 0.001). Depression was documented in 28.9% of the HF group and 18.2% of the control group after the five-year follow-up (p < 0.001). Cancer, dementia, osteoporosis, stroke, and osteoarthritis were associated with a higher risk of developing depression. Male gender and private health insurance were associated with lower risk of depression.
Conclusions: The risk of diagnosed depression is significantly increased in patients with HF compared to patients without HF in primary care practices in Germany.
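The reported percentages can be recast as unadjusted cumulative risk ratios between the matched cohorts. This is a plain recomputation from the figures given in the abstract, not the study's statistical model (which additionally adjusted for comorbidities).

```python
def risk_ratio(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Cumulative incidence ratio between an exposed and an unexposed cohort."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

N = 66_497  # matched cohort size in each group, per the abstract

# One-year incidences from the abstract: 10.5% (HF) vs 6.3% (controls)
rr_1y = risk_ratio(0.105 * N, N, 0.063 * N, N)
# Five-year incidences: 28.9% (HF) vs 18.2% (controls)
rr_5y = risk_ratio(0.289 * N, N, 0.182 * N, N)
print(round(rr_1y, 2), round(rr_5y, 2))
```

Both unadjusted ratios exceed 1.5, consistent with the conclusion that diagnosed depression is substantially more frequent in HF patients than in matched controls.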
The correspondence between Alexander von Humboldt and Karl Kreil was extensive and concerned geomagnetism. Today, however, only a single letter is known in the original. This letter, which Kreil sent to Alexander von Humboldt on 3 September 1836, agrees in content, and in part verbatim, with the letter that Kreil sent only one day later, on 4 September 1836, to Carl Friedrich Gauß. Four letters from Kreil to Humboldt were published in the "Annalen der Physik und Chemie"; a modest number of further letters to Humboldt are mentioned in the biographical literature on Kreil and in Kreil's letters to Koller and Gauß. Yet it is not only the incomplete and fragmentarily known correspondence between Humboldt and Kreil, which extends to 1851, that sheds light on their relationship; of particular significance is also the holding of Kreiliana in Humboldt's library, comprising nine works by Kreil, the last from 1856. Demonstrable contacts between Kreil and Humboldt thus certainly took place at least until that year!
Das Wissen um die lokale Struktur von Seltenen Erden Elementen (SEE) in silikatischen und aluminosilikatischen Schmelzen ist von fundamentalem Interesse für die Geochemie der magmatischen Prozesse, speziell wenn es um ein umfassendes Verständnis der Verteilungsprozesse von SEE in magmatischen Systemen geht. Es ist allgemein akzeptiert, dass die SEE-Verteilungsprozesse von Temperatur, Druck, Sauerstofffugazität (im Fall von polyvalenten Kationen) und der Kristallchemie kontrolliert werden. Allerdings ist wenig über den Einfluss der Schmelzzusammensetzung selbst bekannt. Ziel dieser Arbeit ist, eine Beziehung zwischen der Variation der SEE-Verteilung mit der Schmelzzusammensetzung und der Koordinationschemie dieser SEE in der Schmelze zu schaffen.
Dazu wurden Schmelzzusammensetzungen von Prowatke und Klemme (2005), welche eine deutliche Änderung der Verteilungskoeffizienten zwischen Titanit und Schmelze ausschließlich als Funktion der Schmelzzusammensetzung zeigen, sowie haplogranitische bzw. haplobasaltische Schmelzzusammensetzungen als Vertreter magmatischer Systeme mit La, Gd, Yb und Y dotiert und als Glas synthetisiert. Die Schmelzen variierten systematisch im Aluminiumsättigungsindex (ASI), welcher bei den Prowatke und Klemme (2005) Zusammensetzungen einen Bereich von 0.115 bis 0.768, bei den haplogranitischen Zusammensetzungen einen Bereich von 0.935 bis 1.785 und bei den haplobasaltischen Zusammensetzungen einen Bereich von 0.368 bis 1.010 abdeckt. Zusätzlich wurden die haplogranitischen Zusammensetzungen mit 4 % H2O synthetisiert, um den Einfluss von Wasser auf die lokale Umgebung von SEE zu studieren. Um Informationen über die lokalen Struktur von Gd, Yb und Y zu erhalten wurde die Röntgenabsorptionsspektroskopie angewendet. Dabei liefert die Untersuchung der Feinstruktur mittels der EXAFS-Spektroskopie (engl. Extended X-Ray Absorption Fine Structure) quantitative Informationen über die lokale Umgebung, während RIXS (engl. resonant inelastic X-ray scattering), sowie die daraus extrahierte hoch aufgelöste Nahkantenstruktur, XANES (engl. X-ray absorption near edge structure) qualitative Informationen über mögliche Koordinationsänderungen von La, Gd und Yb in den Gläsern liefert. Um mögliche Unterschiede der lokalen Struktur oberhalb der Glastransformationstemperatur (TG) zur Raumtemperatur zu untersuchen, wurden exemplarisch Hochtemperatur Y-EXAFS Untersuchungen durchgeführt.
Für die Auswertung der EXAFS-Messungen wurde ein neu eingeführter Histogramm-Fit verwendet, der auch nicht-symmetrische bzw. nichtgaußförmige Paarverteilungsfunktionen beschreiben kann, wie sie bei einem hohen Grad der Polymerisierung bzw. bei hohen Temperaturen auftreten können. Die Y-EXAFS-Spektren für die Prowatke und Klemme (2005) Zusammensetzungen zeigen mit Zunahme des ASI, eine Zunahme der Asymmetrie und Breite der Y-O Paarverteilungsfunktion, welche sich in sich in der Änderung der Koordinationszahl von 6 nach 8 und einer Zunahme des Y-O Abstand um 0.13Å manifestiert. Ein ähnlicher Trend lässt sich auch für die Gd- und Yb-EXAFS-Spektren beobachten. Die hoch aufgelösten XANESSpektren für La, Gd und Yb zeigen, dass sich die strukturellen Unterschiede zumindest halb-quantitativ bestimmen lassen. Dies gilt insbesondere für Änderungen im mittleren Abstand zu den Sauerstoffatomen. Im Vergleich zur EXAFS-Spektroskopie liefert XANES jedoch keine Informationen über die Form und Breite von Paarverteilungsfunktionen. Die Hochtemperatur EXAFS-Untersuchungen von Y zeigen Änderungen der lokalen Struktur oberhalb der Glasübergangstemperatur an, welche sich vordergründig auf eine thermisch induzierte Erhöhung des mittleren Y-O Abstandes zurückführen lassen. Allerdings zeigt ein Vergleich der Y-O Abstände für Zusammensetzungen mit einem ASI von 0.115 bzw. 0.755, ermittelt bei Raumtemperatur und TG, dass der im Glas beobachtete strukturelle Unterschied entlang der Zusammensetzungsserie in der Schmelze noch stärker ausfallen kann, als bisher für die Gläser angenommen wurde.
Die direkte Korrelation der Verteilungsdaten von Prowatke und Klemme (2005) mit den strukturellen Änderungen der Schmelzen offenbart für Y eine lineare Korrelation, wohingegen Yb und Gd eine nicht lineare Beziehung zeigen. Aufgrund seines Ionenradius und seiner Ladung wird das 6-fach koordinierte SEE in den niedriger polymerisierten Schmelzen bevorzugt durch nicht-brückenbildende Sauerstoffatome koordiniert, um stabile Konfigurationen zu bilden. In den höher polymerisierten Schmelzen mit ASI-Werten in der Nähe von 1 ist 6-fache Koordination nicht möglich, da fast nur noch brückenbildende Sauerstoffatome zur Verfügung stehen. Die Überbindung von brückenbildenden Sauerstoffatomen um das SEE wird durch Erhöhung der Koordinationszahl und des mittleren SEE-O Abstandes ausgeglichen. Dies bedeutet eine energetisch günstigere Konfiguration in den stärker depolymerisierten Zusammensetzungen, aus welcher die beobachtete Variation des Verteilungskoeffizienten resultiert, welcher sich jedoch für jedes Element stark unterscheidet. Für die haplogranitischen und haplobasaltischen Zusammensetzungen wurde mit Zunahme der Polymerisierung auch eine Zunahme der Koordinationszahl und des durchschnittlichen Bindungsabstands, einhergehend mit der Zunahme der Schiefe und der Asymmetrie der Paarverteilungsfunktion, beobachtet. Dies impliziert, dass das jeweilige SEE mit Zunahme der Polymerisierung auch inkompatibler in diesen Zusammensetzungen wird. Weiterhin zeigt die Zugabe von Wasser, dass die Schmelzen depolymerisieren, was in einer symmetrischeren Paarverteilungsfunktion resultiert, wodurch die Kompatibilität wieder zunimmt.
In summary, changes in melt composition result in a change in the polymerization of the melts, which in turn has a significant influence on the local environment of the REE. The structural changes can be correlated directly with partitioning data, but the trends differ strongly between light, middle, and heavy REE. This study was nevertheless able to show the order of magnitude the changes must reach in order to exert a significant influence on the partition coefficient. Furthermore, the influence of melt composition on trace element partitioning increases with increasing polymerization and therefore must not be neglected.
The aim of the present work is to investigate reward-dependent (instrumental) learning and decision-making processes in healthy subjects, at the behavioral and neural level, as a function of chronic stress experience (assessed via the Lifetime Stress Inventory, Holmes and Rahe 1962) and cognitive variables (divided into a fluid and a crystallized intelligence component). The first step is to establish the construct validity between the terms model-free ~ habitual and model-based ~ goal-directed, which have so far often been used synonymously. Building on this, the differential and interactional influence of chronic stress experience and cognitive variables on decision-making processes (instrumental learning) and their neural correlate in the ventral striatum (VS) is examined and presented. Finally, the relevance of the investigated reward-dependent learning processes for the development and maintenance of alcohol dependence is discussed, together with further influencing variables, in a review paper.
Der Fall der Rachel Dolezal
(2016)
Until 2015, the American Rachel Dolezal was known as an African American. As an activist of the National Association for the Advancement of Colored People, she campaigned for the rights of the African American population, lived in a Black environment, and taught African American Studies at a university. "I identify as black," she answered when an American television host asked whether she was African American. Her colleagues and her closer circle likewise identified her as such. Only when regional journalists became aware of her and her parents spoke out did it become clear that Dolezal is actually a white woman. Dolezal's parents confirmed this by publishing childhood photos of a light-skinned, blonde Rachel. Dolezal's behavior subsequently sparked a lively media debate about her in the context of ethnicity and »race«.
The author takes up Dolezal's case as an example in order to pursue the question of whether doing race at will is possible. May Dolezal identify as Black even though she has no African ancestors? Which societal stocks of knowledge restrict this choice, and what consequences result from it? The author pursues these questions by means of a discourse analysis of American newspaper articles. Here, »race« and ethnicity are regarded as social constructions, following the concept of Stephen Cornell and Douglas Hartmann.
Der Florist
(2016)
Der Klimawandel
(2016)
What is justice? What could just regulations look like for the catastrophes and suffering that climate change triggers or will trigger? These are often unjust because they frequently hit hardest those who have contributed least to climate change.
But what exactly do we understand by the catchword 'climate change'? And can it really affect human beings directly? A brief scientific outline clarifies the most important questions here.
Since this is a philosophical work, it must first be clarified whether human beings can be the cause of something like global warming at all. Robert Spaemann's thesis on this is that, by virtue of their free will, humans can change the course of the world with their individual actions. Hans Jonas adds that, because of this capacity, we are responsible for the intended and unintended consequences of our actions.
This establishes, from a scientific perspective (part 1 of the work) and a philosophical perspective (beginning of part 2), that humans are most probably the cause of climate change and that this causation has moral consequences for them.
A philosophical concept of justice is developed from Kant's philosophy of right and morality, because it is the only one that can grant human beings a right to have rights at all. This right springs from the human capacity for transcendental freedom, which is why the right to have rights belongs to everyone absolutely and at all times. At the same time, Kant's philosophy in turn culminates in the idea of freedom, in that justice exists only if all human beings can be equally free.
What does this mean concretely? How could justice really be implemented in practice? Its realization takes two basic directions. John Rawls and Stefan Gosepath, among others, deal in depth with procedural justice, which means finding just procedures that regulate social coexistence. The guiding principle here is above all a right of co-determination for all, so that in principle all citizens give themselves their own laws and thus act freely.
With regard to climate change, the second direction comes to the fore: distributive justice. Material goods must be divided in such a way that, despite empirical differences, all human beings are recognized as moral subjects and can be free.
But are these philosophical conclusions not far too abstract to be applied to a problem as elusive and global as climate change? What, then, could climate justice be?
There are many principles of justice that claim to offer a just basis for the climate problems, such as the polluter-pays principle, the ability-to-pay principle, or the grandfathering principle, under which the main polluters may continue to emit the most (this principle has guided international negotiations so far).
The aim of this work is to find out how the climate problems can be solved in such a way that universal human rights are established and secured for all people under all circumstances, enabling them to act freely and morally.
The conclusion of this work is that Kant's concept of justice could be enforced through a combination of the right to subsistence emissions, the Greenhouse Development Rights principle (GDR principle), and an international statehood.
Under the right to subsistence emissions, every human being has the right to consume as much energy, and to produce the associated emissions, as is needed to lead a life in dignity. The GDR principle calculates each country's, or even each world citizen's, share of the total global responsibility for climate protection by adding the historical emissions (climate debt) to the current financial capacity of the country or individual (capacity for responsibility). The implementation of international bodies is defended because climate change is a global, transboundary problem whose effects and responsibility have global dimensions.
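The allocation rule behind the GDR principle can be illustrated with a minimal numerical sketch. This is not the thesis's (or the GDR framework's) actual formula; it merely assumes, for illustration, that a country's share of the global obligation is a weighted combination of its share of historical emissions and its share of global financial capacity, with hypothetical numbers throughout.

```python
# Toy sketch of a GDR-style burden allocation (hypothetical formula and numbers).
# responsibility = share of historical emissions ("climate debt")
# capability     = share of global financial capacity
# The weight w balances the two components; w = 0.5 treats them equally.

def gdr_share(hist_emissions: float, capacity: float,
              total_emissions: float, total_capacity: float,
              w: float = 0.5) -> float:
    """Return a country's fraction of the global climate-protection obligation."""
    responsibility = hist_emissions / total_emissions
    capability = capacity / total_capacity
    return w * responsibility + (1 - w) * capability

# Hypothetical example: a country with 20% of historical emissions and
# 30% of global capacity carries 25% of the burden at equal weights.
share = gdr_share(hist_emissions=200, capacity=300,
                  total_emissions=1000, total_capacity=1000)
print(share)  # 0.25
```

The same function applies unchanged at the level of individual world citizens, which is the per-capita reading the abstract mentions.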
A compelling argument for almost all climate protection measures is that they exhibit synergies with other societal areas, such as health and poverty reduction, in which the enforcement of our human rights is also still being fought for.
Is this approach not completely utopian?
This proposal poses a great challenge to the international community, but it would be the only just solution to our climate problems. Furthermore, the work holds on to the Kantian principle of action that the perpetual striving toward ideal goals is the best realization of them by human, fallible beings.
120 years ago, the grounds south of the then Neubabelsberg railway station were built upon for the first time. The depot for field hospital barracks erected in 1896 had developed by 1938 into the logistical headquarters of the German Red Cross, which from 1939 also moved its presidium to Babelsberg. After interim use by the Soviet occupation army from 1945, the Akademie für Staats- und Rechtswissenschaften, as a "cadre forge", was master of the site, by then located in the border zone with West Berlin, from 1952 until the fall of the Wall. Today the University of Potsdam and the Hasso Plattner Institute for Software Systems Engineering use the campus, whose history is comprehensively documented for the first time with this publication.
Der Wunderbaum
(2016)
Geospatial data has become a natural part of a growing number of information systems and services in the economy, society, and people's personal lives. In particular, virtual 3D city and landscape models constitute valuable information sources within a wide variety of applications such as urban planning, navigation, tourist information, and disaster management. Today, these models are often visualized in detail to provide realistic imagery. However, photorealistic rendering does not automatically yield effective information transfer, which requires important or prioritized information to be highlighted interactively and in a context-dependent manner.
Approaches in non-photorealistic rendering particularly consider a user's task and camera perspective in order to optimally express, recognize, and communicate important or prioritized information. However, the design and implementation of non-photorealistic rendering techniques for 3D geospatial data pose a number of challenges, especially when inherently complex geometry, appearance, and thematic data must be processed interactively. Here, the programmable and parallel computing architecture of graphics processing units establishes a promising technical foundation.
This thesis proposes non-photorealistic rendering techniques that enable both the computation and the selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines using shader technologies of graphics processing units for real-time image synthesis. Unlike photorealistic rendering, the techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics to synthesize illustrative renditions of geospatial feature type entities such as water surfaces, buildings, and infrastructure networks. In addition, this thesis contributes a generic system that enables the integration of different graphic styles, photorealistic and non-photorealistic, and provides seamless transitions between them according to user tasks, camera view, and image resolution.
Evaluations of the proposed techniques have demonstrated their significance to the field of geospatial information visualization, including topics such as spatial perception, cognition, and mapping. In addition, their applications in illustrative and focus+context visualization have shown their potential for optimizing information transfer with respect to factors such as cognitive load, integration of non-realistic information, visualization of uncertainty, and visualization on small displays.