On doubling unconditionals
(2019)
We show that the codifference is a useful tool in studying the ergodicity breaking and non-Gaussianity properties of stochastic time series. While the codifference is a measure of dependence that was previously studied mainly in the context of stable processes, we here extend its range of applicability to random-parameter and diffusing-diffusivity models which are important in contemporary physics, biology and financial engineering. We prove that the codifference detects forms of dependence and ergodicity breaking which are not visible from analysing the covariance and correlation functions. We also discuss a related measure of dispersion, which is a nonlinear analogue of the mean squared displacement.
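For a stationary series, the codifference can be estimated directly from data via empirical characteristic functions. Below is a minimal NumPy sketch (our own illustrative helper, not code from the paper), assuming a single real-valued stationary trajectory and the lag-based definition τ(k) = ln⟨e^{i(X_{t+k}−X_t)}⟩ − ln⟨e^{iX_{t+k}}⟩ − ln⟨e^{−iX_t}⟩:

```python
import numpy as np

def codifference(x, max_lag):
    """Empirical codifference of a stationary series x for lags 1..max_lag.

    Uses the characteristic-function definition
        tau(k) = ln E[exp(i(X_{t+k} - X_t))]
               - ln E[exp(i X_{t+k})] - ln E[exp(-i X_t)],
    estimated by sample averages; the real part is returned.
    """
    x = np.asarray(x, dtype=float)
    taus = np.empty(max_lag)
    for k in range(1, max_lag + 1):
        lead, lag = x[k:], x[:-k]
        cf_diff = np.mean(np.exp(1j * (lead - lag)))
        cf_lead = np.mean(np.exp(1j * lead))
        cf_lag = np.mean(np.exp(-1j * lag))
        taus[k - 1] = np.real(np.log(cf_diff) - np.log(cf_lead) - np.log(cf_lag))
    return taus

# Example: for i.i.d. Gaussian noise the codifference fluctuates around zero.
rng = np.random.default_rng(0)
print(codifference(rng.standard_normal(100_000), max_lag=5))
```

For independent data the three log-characteristic functions cancel, so the estimate fluctuates around zero; persistent dependence shows up as a slowly decaying τ(k) even when covariance and correlation are uninformative.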
Many studies on biological and soft matter systems report the joint presence of a linear mean-squared displacement and a non-Gaussian probability density exhibiting, for instance, exponential or stretched-Gaussian tails. This phenomenon is ascribed to the heterogeneity of the medium and is captured by random parameter models such as ‘superstatistics’ or ‘diffusing diffusivity’. Independently, scientists working in the area of time series analysis and statistics have studied a class of discrete-time processes with similar properties, namely, random coefficient autoregressive models. In this work we try to reconcile these two approaches and thus provide a bridge between physical stochastic processes and autoregressive models. We start from the basic Langevin equation of motion with time-varying damping or diffusion coefficients and establish the link to random coefficient autoregressive processes. By exploring that link we gain access to efficient statistical methods which can help to identify data exhibiting Brownian yet non-Gaussian diffusion.
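The "Brownian yet non-Gaussian" signature described here can be illustrated with a toy superstatistics simulation (illustrative parameters of our own choosing, not the paper's models): each trajectory draws a random diffusivity and then performs ordinary Gaussian steps, which yields a linear ensemble MSD together with a non-Gaussian displacement distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
n_traj, n_steps, dt = 5000, 1000, 1.0

# Superstatistics toy model: each trajectory gets a random diffusivity D
# (exponentially distributed), then performs ordinary Brownian steps.
D = rng.exponential(scale=1.0, size=(n_traj, 1))
steps = rng.standard_normal((n_traj, n_steps)) * np.sqrt(2.0 * D * dt)
x = np.cumsum(steps, axis=1)

t = dt * np.arange(1, n_steps + 1)
msd = np.mean(x**2, axis=0)            # linear in t despite non-Gaussianity
kurt = np.mean(x[:, -1]**4) / np.mean(x[:, -1]**2)**2

print("MSD slope ~", msd[-1] / t[-1])  # ~ 2*E[D] = 2
print("kurtosis ~", kurt)              # ~ 6 for exponential D (Gaussian: 3)
```

The excess kurtosis (6 instead of the Gaussian 3) is exactly the kind of feature that the statistical toolbox of random coefficient autoregressive models is built to detect.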
Background: Agility in general and change-of-direction speed (CoD) in particular represent important performance determinants in elite soccer.
Objectives: The objectives of this study were to determine the effects of a 6-week neuromuscular training program on agility performance, and to determine differences in movement times between the slower and faster turning directions in elite soccer players.
Materials and Methods: Twenty male elite soccer players from the Stade Rennais Football Club (Ligue 1, France) participated in this study. The players were randomly assigned to a neuromuscular training group (NTG, n = 10) or an active control group (CG, n = 10) according to their playing position. NTG participated in a 6-week, twice-per-week neuromuscular training program that included CoD, plyometric, and dynamic stability exercises. Neuromuscular training replaced the regular warm-up program. Each training session lasted 30 min. CG continued their regular training program. Training volume was similar between groups. Before and after the intervention, the two groups performed a reactive agility test that included 180° left and right body rotations followed by a 5-m linear sprint. The weak side was defined as the left/right turning direction that produced slower overall movement times (MT). Reaction time (RT) was assessed and defined as the time from the first appearance of a visual stimulus until the athlete’s first movement. MT corresponded to the time from the first movement until the athlete reached the arrival gate (5 m distance).
Results: No significant between-group baseline differences were observed for RT or MT. Significant group × time interactions were found for MT for the slower (p = 0.012, effect size = 0.332, small) and faster directions (p = 0.011, effect size = 0.627, moderate). Significant pre-to-post improvements in MT were observed for NTG but not CG (p = 0.011, effect size = 0.877, moderate). For NTG, post hoc analyses revealed significant MT improvements for the slower (p = 0.012, effect size = 0.897, moderate) and faster directions (p = 0.017, effect size = 0.968, moderate).
Conclusion: Our results illustrate that 6 weeks of neuromuscular training with two sessions per week included in the warm-up program significantly enhanced agility performance in elite soccer players. Moreover, improvements were found on both sides during body rotations. Thus, practitioners are advised to focus their training programs on both turning directions.
The innovative dual-purpose chicken approach aims at contributing to the transition towards sustainable poultry production by avoiding the culling of male chickens. To successfully integrate sustainability aspects into innovation, goal congruency among actors and clear communication of the added value within the actor network and to consumers are needed. The challenge of identifying common sustainability goals calls for decision support tools. The objectives of our research were to investigate whether such a tool could assist in improving communication and marketing with respect to sustainability and in optimizing the value chain organization. Three actor groups participated in the tool application, in which quantitative and qualitative data were collected. The results showed that there were manifold sustainability goals within the innovation network, but only some goals overlapped, and the perception of their implementation also diverged. While easily marketable goals such as ‘animal welfare’ were perceived as being largely implemented, economic goals were prioritized less often, and their implementation was perceived as being rather low. By visualizing congruencies and differences in the goals, the tool helped identify fields of action, such as improved information flows, and prompted thinking processes. We conclude that the tool is useful for managing complex decision processes with several actors involved.
In canoe sprint, the trunk muscles play an important role in stabilizing the body in an unstable environment (boat) and in generating forces that are transmitted through the shoulders and arms to the paddle for propulsion of the boat. Isokinetic training is well suited for sports in which propulsion is generated through water resistance due to similarities in the resistive mode. Thus, the purpose of this study was to determine the effects of isokinetic training in addition to regular sport-specific training on trunk muscular fitness and body composition in world-class canoeists and to evaluate associations between trunk muscular fitness and canoe-specific performance. Nine world-class canoeists (age: 25.6 ± 3.3 years; three females; four world champions; three Olympic gold medalists) participated in an 8-week progressive isokinetic training program with a 6-week “muscle hypertrophy” block and a 2-week “muscle power” block. Pre- and post-tests included the assessment of peak isokinetic torque at different velocities in concentric (30 and 140°s⁻¹) and eccentric (30 and 90°s⁻¹) mode, trunk muscle endurance, and body composition (e.g., body fat, segmental lean mass). Additionally, peak paddle force was assessed in the flume at a water current of 3.4 m/s. Significant pre-to-post increases were found for peak torque of the trunk rotators at 30°s⁻¹ (p = 0.047; d = 0.4) and 140°s⁻¹ (p = 0.014; d = 0.7) in concentric mode. No significant pre-to-post changes were detected for eccentric trunk rotator torque, trunk muscle endurance, and body composition (p > 0.148). Significant medium-to-large correlations were observed between concentric trunk rotator torque and peak paddle force, but not between trunk muscle endurance and peak paddle force, irrespective of the isokinetic movement velocity (all r ≥ 0.886; p ≤ 0.008). Isokinetic trunk rotator training is effective in improving concentric trunk rotator strength in world-class canoe sprinters. It is recommended to progressively increase the angular velocity from 30°s⁻¹ to 140°s⁻¹ over the course of the training period.
For a long time, there were things on this planet that only humans could do, but this time might be coming to an end. By using the universal tool that makes us unique – our intelligence – we have worked to eliminate our uniqueness, at least when it comes to solving cognitive tasks. Artificial intelligence is now able to play chess, understand language, and drive a car – and often better than we can.
How did we get here? The philosopher Aristotle formulated the first “laws of thought” in his syllogisms, and the mathematicians Blaise Pascal and Gottfried Wilhelm Leibniz built some of the earliest calculating machines. The mathematician George Boole was the first to introduce a formal language to represent logic. The mathematician Alan Turing designed the “Bombe,” a deciphering machine, and laid the theoretical groundwork for the programmable computer. Philosophers, mathematicians, psychologists, and linguists – for centuries, scientists have been developing formulas, machines, and theories that were supposed to enable us to reproduce and possibly even enhance our most valuable ability.
But what exactly is “artificial intelligence”? Even the name calls for comparison. Is artificial intelligence like human intelligence? Alan Turing came up with a test in 1950 to provide a satisfying operational definition of intelligence: According to him, a machine is intelligent if its thinking abilities equal those of humans. It has to reach human levels for any cognitive task. The machine has to prove this by convincing a human interrogator that it is human. Not an easy task: After all, it has to process natural language, store knowledge, draw conclusions, and learn something new. In fact, over the past ten years, a number of AI systems have emerged that have passed the test one way or another in chat conversations with automatically generated texts or images. Nowadays, the discussion usually centers on other questions: Does AI still need its creators? Will it not only outperform humans but someday replace them – be it in the world of work or even beyond? Will AI solve our problems in the age of all-encompassing digital networking – or will it become a part of the problem?
Artificial intelligence, its nature, its limitations, its potential, and its relationship to humans were being discussed even before it existed. Literature and film have created scenarios with very different endings. But what is the view of the scientists who are actually researching with or about artificial intelligence? For the current issue of our research magazine, a cognitive scientist, an education researcher, and a computer scientist shared their views. We also searched the University for projects whose professional environment reveals the numerous opportunities that AI offers for various disciplines. We cover the geosciences and computer science as well as economics, health, and literature studies.
At the same time, we have not lost sight of the broad research spectrum at the University: a legal expert introduces us to the not-so-distant sphere of space law while astrophysicists work on ensuring that state-of-the-art telescopes observe those regions in space where something “is happening” at the right time. A chemist explains why the battery of the future will come from a printer, and molecular biologists explain how they will breed stress-resistant plants. You will read about all this in this issue as well as about current studies on restless legs syndrome in children and the situation of Muslims in Brandenburg. Last but not least, we will introduce you to the sheep currently grazing in Sanssouci Park – all on behalf of science. Quite clever!
Enjoy your read!
THE EDITORS
Transcending the conventional debate around efficiency in sustainable consumption, anti-consumption patterns leading to decreased levels of material consumption have been gaining importance. Change agents are crucial for the promotion of such patterns, so there may be lessons for governance interventions that can be learnt from the everyday experiences of those who actively implement and promote sustainability in the field of anti-consumption. Eighteen social innovation pioneers, who engage in and diffuse practices of voluntary simplicity and collaborative consumption as sustainable options of anti-consumption, share their knowledge and personal insights in expert interviews for this research. Our qualitative content analysis reveals drivers, barriers, and governance strategies to strengthen anti-consumption patterns, which are negotiated between the market, the state, and civil society. Recommendations derived from the interviews concern entrepreneurship, municipal infrastructures in support of local grassroots projects, regulative policy measures, more positive communication to strengthen the visibility of initiatives and emphasize individual benefits, establishing a sense of community, anti-consumer activism, and education. We argue for complementary action between top-down strategies, bottom-up initiatives, corporate activities, and consumer behavior. The results are valuable to researchers, activists, marketers, and policymakers who seek to enhance their understanding of materially reduced consumption patterns based on the real-life experiences of active pioneers in the field.
The purpose of this study was to compare the effects of combined resistance and plyometric/sprint training with plyometric/sprint training or typical soccer training alone on muscle strength and power, speed, and change-of-direction ability in young soccer players. Thirty-one young (14.5 ± 0.52 years; Tanner stage 3–4) soccer players were randomly assigned to either a combined- (COMB, n = 14) or plyometric-training (PLYO, n = 9) group or an active control group (CONT, n = 8). Two training sessions were added to the regular soccer training, consisting of one session of light-load high-velocity resistance exercises combined with one session of plyometric/sprint training (COMB), two sessions of plyometric/sprint training (PLYO), or two soccer training sessions (CONT). Training volume was similar between the experimental groups. Before and after 7 weeks of training, peak torque, as well as absolute and relative (normalized to torque; RTDr) rate of torque development (RTD) during maximal voluntary isometric contraction of the knee extensors (KE), were monitored at time intervals from the onset of contraction to 200 ms. Jump height, sprinting speed at 5, 10, and 20 m, and change-of-direction performance were also assessed. There were no significant between-group baseline differences. Both COMB and PLYO significantly increased their jump height (Δ14.3%, ES = 0.94 and Δ12.1%, ES = 0.54, respectively) and RTD at mid to late phases, but with greater within-group effect sizes in COMB than in PLYO. However, significant increases in peak torque (Δ16.9%; p < 0.001; ES = 0.58), RTD (Δ44.3%; ES = 0.71), RTDr (Δ27.3%; ES = 0.62), and sprint performance at 5 m (Δ−4.7%; p < 0.001; ES = 0.73) were found in COMB without any significant pre-to-post change in the PLYO and CONT groups. Our results suggest that COMB is more effective than PLYO or CONT for enhancing strength, sprint, and jump performances.
We combine ultrafast X-ray diffraction (UXRD) and time-resolved Magneto-Optical Kerr Effect (MOKE) measurements to monitor the strain pulses in laser-excited TbFe2/Nb heterostructures. Spatial separation of the Nb detection layer from the laser excitation region allows for a background-free characterization of the laser-generated strain pulses. We clearly observe symmetric bipolar strain pulses if the excited TbFe2 surface terminates the sample and a decomposition of the strain wavepacket into an asymmetric bipolar and a unipolar pulse, if a SiO2 glass capping layer covers the excited TbFe2 layer. The inverse magnetostriction of the temporally separated unipolar strain pulses in this sample leads to a MOKE signal that linearly depends on the strain pulse amplitude measured through UXRD. Linear chain model simulations accurately predict the timing and shape of UXRD and MOKE signals that are caused by the strain reflections from multiple interfaces in the heterostructure.
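A linear chain model of the kind used here treats each unit cell as a mass coupled to its neighbors by harmonic springs, with the laser-induced thermal stress entering as a sudden change of the equilibrium bond lengths in the excited layer. A minimal sketch with illustrative dimensionless parameters (not the fitted TbFe2/Nb values) that launches a bipolar strain pulse from a free surface:

```python
import numpy as np

N, n_exc = 400, 60            # cells in the chain / cells in the excited layer
m, k, dt = 1.0, 1.0, 0.1      # mass, spring constant, time step (dimensionless)

# Photoinduced stress: bonds in the excited layer get a longer equilibrium length.
delta = np.zeros(N - 1)
delta[:n_exc] = 0.01

u = np.zeros(N)               # cell displacements
v = np.zeros(N)               # cell velocities

def accel(u):
    # Bond n connects cells n and n+1; tension follows Hooke's law around
    # the (suddenly shifted) equilibrium length. Chain ends are free surfaces.
    tension = k * (np.diff(u) - delta)
    a = np.zeros_like(u)
    a[:-1] += tension         # right-hand bond pulls the cell forward
    a[1:] -= tension          # left-hand bond pulls it backward
    return a / m

a = accel(u)
for _ in range(2000):         # velocity-Verlet integration
    u += v * dt + 0.5 * a * dt**2
    a_new = accel(u)
    v += 0.5 * (a + a_new) * dt
    a = a_new

strain = np.diff(u)           # a bipolar strain pulse travels into the bulk
print(f"strain extrema: {strain.min():.4f} / {strain.max():.4f}")
```

Adding a capping layer in such a model changes the acoustic boundary condition at the excited surface, which is how the decomposition into asymmetric bipolar and unipolar pulses described above can be reproduced.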
Interactions and feedbacks between tectonics, climate, and upper plate architecture control basin geometry, relief, and depositional systems. The Andes is part of a long-lived continental margin characterized by multiple tectonic cycles which have strongly modified the Andean upper plate architecture. In the Andean retroarc, spatiotemporal variations in the structure of the upper plate and tectonic regimes have resulted in marked along-strike variations in basin geometry, stratigraphy, deformational style, and mountain belt morphology. These along-strike variations include high-elevation plateaus (Altiplano and Puna) associated with a thin-skinned fold-and-thrust belt and thick-skinned deformation in broken foreland basins such as the Santa Barbara system and the Sierras Pampeanas. At the confluence of the Puna Plateau, the Santa Barbara system, and the Sierras Pampeanas, major along-strike changes in upper plate architecture, mountain belt morphology, basement exhumation, and deformation style can be recognized. I have used a source-to-sink approach to unravel the spatiotemporal tectonic evolution of the Andean retroarc between 26 and 28°S. I obtained a large low-temperature thermochronology data set from basement units which includes apatite fission track, apatite U-Th-Sm/He, and zircon U-Th/He (ZHe) cooling ages. Stratigraphic descriptions of Miocene units were temporally constrained by U-Pb LA-ICP-MS zircon ages from interbedded pyroclastic material.
Modeled ZHe ages suggest that the basement of the study area was exhumed during the Famatinian orogeny (550-450 Ma), followed by a period of relative tectonic quiescence during the Paleozoic and the Triassic. The basement experienced horst exhumation during the Cretaceous development of the Salta rift. After initial exhumation, deposition of thick Cretaceous syn-rift strata caused reheating of several basement blocks within the Santa Barbara system. During the Eocene-Oligocene, the Andean compressional setting was responsible for the exhumation of several disconnected basement blocks. These exhumed blocks were separated by areas of low relief, in which a humid climate and low erosion rates facilitated the development of etchplains on the crystalline basement. The exhumed basement blocks formed an Eocene to Oligocene broken foreland basin in the back-bulge depozone of the Andean foreland. During the Early Miocene, foreland basin strata filled up the preexisting Paleogene topography. The basement blocks in lower relief positions were reheated; associated geothermal gradients were higher than 25°C/km. Miocene volcanism was responsible for lateral variations in the amount of reheating along the Campo-Arenal basin. Around 12 Ma, a new deformational phase modified the drainage network and fragmented the lacustrine system. As deformation and rock uplift continued, the easily eroded sedimentary cover was efficiently removed and reworked by an ephemeral fluvial system, preventing the development of significant relief. After ~6 Ma, the low erodibility of the newly exposed basement blocks caused relief to increase, leading to the development of stable fluvial systems. Progressive relief development modified atmospheric circulation, creating a rainfall gradient. After 3 Ma, orographic rainfall and high relief led to the development of proximal fluvial-gravitational depositional systems in the surrounding basins.
Almost half of the political life of the Turkish Republic has been experienced under state of emergency and state of siege policies. In spite of such a striking number and continuity in the deployment of legal emergency powers, there are just a few legal and political studies examining the reasons for such permanency in governing practices. To fill this gap, this paper aims to discuss one of the most important sources of the ‘permanent’ political crisis in the country: the historical evolution of legal emergency power. In order to highlight how these policies have intensified the highly fragile citizenship regime by weakening the separation of powers, repressing the use of political rights, and increasing the discretionary power of both the executive and judiciary authorities, the paper sheds light on the emergence and production of a specific form of legality based on the idea of emergency and the principle of executive prerogative. In that context, it aims to provide a genealogical explanation of the evolution of the exceptional form of the nation-state, which is based on the way political society, representation, and legitimacy have been instituted and the accompanying failure of the ruling classes in building hegemony in the country.
Supercapacitors are electrochemical energy storage devices with rapid charge/discharge rates and long cycle life. Their biggest challenge is their inferior energy density compared to other electrochemical energy storage devices such as batteries. Being the most widely spread type of supercapacitors, electrochemical double-layer capacitors (EDLCs) store energy by electrosorption of electrolyte ions on the surface of charged electrodes. As a more recent development, Na-ion capacitors (NICs) are expected to be a more promising tactic to tackle the inferior energy density due to their higher-capacity electrodes and larger operating voltage. Charges are stored simultaneously by ion adsorption on the surface of the capacitive-type cathode and via a faradaic process in the battery-type anode. Porous carbon electrodes are of great importance in these devices, but the paramount problems are the lack of facile synthetic routes for high-performance carbons and the lack of fundamental understanding of the energy storage mechanisms. Therefore, the aim of the present dissertation is to develop novel synthetic methods for (nitrogen-doped) porous carbon materials with superior performance, and to reveal a deeper understanding of the energy storage mechanisms of EDLCs and NICs.
The first part introduces a novel synthetic method towards hierarchical ordered meso-microporous carbon electrode materials for EDLCs. The large amount of micropores and highly ordered mesopores endow abundant sites for charge storage and efficient electrolyte transport, respectively, giving rise to superior EDLC performance in different electrolytes. More importantly, the controversial energy storage mechanism of EDLCs employing ionic liquid (IL) electrolytes is investigated by employing a series of porous model carbons as electrodes. The results not only allow conclusions about the relations between porosity and ion transport dynamics, but also deliver deeper insights into the energy storage mechanism of IL-based EDLCs, which differs from the mechanism usually dominating in solvent-based electrolytes that leads to compression double-layers.
The other part focuses on anodes of NICs, where the novel synthesis of nitrogen-rich porous carbon electrodes and their sodium storage mechanism are investigated. Free-standing fibrous nitrogen-doped carbon materials are synthesized by electrospinning using the nitrogen-rich monomer (hexaazatriphenylene-hexacarbonitrile, C18N12) as the precursor followed by condensation at high temperature. These fibers provide superior capacity and desirable charge/discharge rates for sodium storage. This work also allows insights into the sodium storage mechanism in nitrogen-doped carbons. Based on this mechanism, further optimization is done by designing a composite material composed of nitrogen-rich carbon nanoparticles embedded in a conductive carbon matrix for a better charge/discharge rate. The energy density of the assembled NICs significantly exceeds that of common EDLCs while maintaining the high power density and long cycle life.
Cyber victimization research reveals various personal and contextual correlates and negative consequences associated with this experience. Despite increasing attention on cyber victimization, few studies have examined such experiences among ethnic minority adolescents. The purpose of the present study was to examine the moderating effect of ethnicity in the longitudinal associations among cyber victimization, school belongingness, and psychological consequences (i.e., depression, loneliness, anxiety). These associations were investigated among 416 Latinx and white adolescents (46% female; M age = 13.89, SD = 0.41) from one middle school in the United States. They answered questionnaires on cyber victimization, school belongingness, depression, loneliness, and anxiety in the 7th grade (Time 1). One year later, in the 8th grade (Time 2), they completed questionnaires on depression, loneliness, and anxiety. Low levels of school belongingness strengthened the positive relationships between cyber victimization and Time 2 depression and anxiety, especially among Latinx adolescents. The positive association between cyber victimization and Time 2 loneliness was strengthened at low levels of school belongingness for all adolescents. These findings may indicate that cyber victimization threatens adolescents’ school belongingness, which has implications for their emotional adjustment. Such findings underscore the importance of considering diverse populations when examining cyber victimization.
The goal of this three-year longitudinal study was to examine the buffering effect of parental mediation of adolescents’ technology use (i.e., restrictive, co-viewing, and instructive) on the relationships between cyber aggression involvement and substance use (i.e., alcohol use, marijuana use, cigarette smoking, and non-marijuana illicit drug use). Overall, 867 eighth-grade adolescents (M age = 13.67, age range 13–15 years; 51% female, 49% White) from the Midwestern United States participated in this study during the 6th grade (Wave 1), 7th grade (Wave 2), and 8th grade (Wave 3). Results revealed that the associations between Wave 1 cyber victimization and Wave 3 alcohol use and Wave 3 non-marijuana illicit drug use were weaker at higher levels of Wave 2 instructive mediation and stronger at lower levels. Likewise, at lower levels of Wave 2 instructive mediation, the association between Wave 1 cyber aggression perpetration and Wave 3 non-marijuana illicit drug use was stronger. Implications of these findings are discussed in the context of parents recognizing their role in helping to mitigate the negative consequences associated with adolescents’ cyber aggression involvement.
While the consequences of cyberbullying victimization have received some attention in the literature, to date, little is known about the multiple types of strains in adolescents’ lives, such as whether cyberbullying victimization and peer rejection increase their vulnerability to depression and anxiety. Even though some research has found that adolescents with disabilities are at higher risk for cyberbullying victimization, most research has focused on typically developing adolescents. Thus, the present study examined the moderating effect of peer rejection in the relationships between cyberbullying victimization, depression, and anxiety among adolescents with autism spectrum disorder. There were 128 participants (89% male; ages 11–16 years) with autism spectrum disorder in the sixth, seventh, or eighth grade at 16 middle schools in the United States. Participants completed questionnaires on cyberbullying victimization, peer rejection, depression, and anxiety. Results revealed that cyberbullying victimization was associated positively with peer rejection, anxiety, and depression among adolescents with autism spectrum disorder. Further, peer rejection was linked positively with depression and anxiety. Peer rejection moderated the positive relationship between cyberbullying victimization and depression, but not anxiety. Implications for prevention programs and future research are discussed.
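The moderation analyses reported in this and the two preceding abstracts boil down to testing an interaction term in a regression model. A minimal sketch with statsmodels on simulated data, using hypothetical column names (cyber_vict, peer_rejection, depression); the actual studies used questionnaire scales and longitudinal designs:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; real analyses would use the questionnaire scores.
rng = np.random.default_rng(42)
n = 128
df = pd.DataFrame({
    "cyber_vict": rng.normal(size=n),
    "peer_rejection": rng.normal(size=n),
})
# Build in a moderation effect: victimization hurts more under high rejection.
df["depression"] = (0.3 * df.cyber_vict + 0.2 * df.peer_rejection
                    + 0.4 * df.cyber_vict * df.peer_rejection
                    + rng.normal(size=n))

# "a * b" expands to both main effects plus the a:b interaction term,
# whose coefficient is the statistical test of moderation.
model = smf.ols("depression ~ cyber_vict * peer_rejection", data=df).fit()
print(model.summary().tables[1])
```

A significant interaction coefficient corresponds to the reported finding that the victimization-depression link depends on the level of peer rejection.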
Modern health care systems are characterized by pronounced prevention and cost-optimized treatments. This dissertation offers novel empirical evidence on how useful such measures can be. The first chapter analyzes how radiation, a main pollutant in health care, can negatively affect cognitive health. The second chapter focuses on the effect of Low Emission Zones on public health, as air quality is the major external source of health problems. Both chapters point out potentials for preventive measures. Finally, chapter three studies how changes in treatment prices affect the reallocation of hospital resources. In the following, I briefly summarize each chapter and discuss implications for health care systems as well as other policy areas. Based on the National Educational Panel Study that is linked to data on radiation, chapter one shows that radiation can have negative long-term effects on cognitive skills, even at subclinical doses. Exploiting arguably exogenous variation in soil contamination in Germany due to the Chernobyl disaster in 1986, the findings show that people exposed to higher radiation perform significantly worse in cognitive tests 25 years later. Identification is ensured by abnormal rainfall within a critical period of ten days. The results show that the effect is stronger among older cohorts than younger cohorts, which is consistent with radiation accelerating cognitive decline as people get older. On average, a one-standard-deviation increase in the initial level of CS137 (around 30 chest x-rays) is associated with a decrease in cognitive skills by 4.1 percent of a standard deviation (around 0.05 school years). Chapter one shows that subclinical levels of radiation can have negative consequences even after early childhood. This is of particular importance because most of the literature focuses on exposure very early in life, often during pregnancy. However, the population exposed after birth is over 100 times larger. These results point to substantial external human capital costs of radiation which can be reduced by choices of medical procedures. There is a large potential for reductions because about one-third of all CT scans are assumed to be not medically justified (Brenner and Hall, 2007). If people receive unnecessary CT scans because of economic incentives, this chapter points to additional external costs of health care policies. Furthermore, the results can inform the cost-benefit trade-off for medically indicated procedures. Chapter two provides evidence about the effectiveness of Low Emission Zones. Low Emission Zones are typically justified by improvements in population health. However, there is little evidence about the potential health benefits from policy interventions aiming at improving air quality in inner cities. The chapter asks how the coverage of Low Emission Zones affects air pollution and hospitalization, by exploiting variation in the roll-out of Low Emission Zones in Germany. It combines information on the geographic coverage of Low Emission Zones with rich panel data on the universe of German hospitals over the period from 2006 to 2016, with precise information on hospital locations and the annual frequency of detailed diagnoses.
In order to establish that our estimates of Low Emission Zones’ health impacts can indeed be attributed to improvements in local air quality, we use data from Germany’s official air pollution monitoring system, assign monitor locations to Low Emission Zones, and test whether measures of air pollution are affected by the coverage of a Low Emission Zone. Results in chapter two confirm former results showing that the introduction of Low Emission Zones improved air quality significantly by reducing NO2 and PM10 concentrations. Furthermore, the chapter shows that hospitals whose catchment areas are covered by a Low Emission Zone diagnose significantly fewer air-pollution-related diseases, in particular by reducing the incidence of chronic diseases of the circulatory and the respiratory system. The effect is stronger before 2012, which is consistent with a general improvement in the vehicle fleet’s emission standards. Depending on the disease, a one-standard-deviation increase in the share of a hospital’s catchment area covered by a Low Emission Zone reduces the yearly number of diagnoses by up to 5 percent. These findings have strong implications for policy makers. In 2015, overall costs for health care in Germany were around 340 billion euros, of which 46 billion euros were for diseases of the circulatory system, making it, with 2.9 million cases, the most expensive type of disease (Statistisches Bundesamt, 2017b). Hence, reductions in the incidence of diseases of the circulatory system may directly reduce society’s health care costs. Whereas chapters one and two study the demand side in health care markets and thus preventive potential, chapter three analyzes the supply side. Exploiting the same hospital panel data set as in chapter two, chapter three studies the effect of treatment price shocks on the reallocation of hospital resources in Germany. Starting in 2005, the implementation of the German DRG system led to general idiosyncratic treatment price shocks for individual hospitals. Thus far there is little evidence of the impact of general price shocks on the reallocation of hospital resources. Additionally, I add to the existing literature by showing that price shocks can have persistent effects on hospital resources even when these shocks vanish. However, simple OLS regressions would underestimate the true effect, due to endogenous treatment price shocks. I implement a novel instrumental variable strategy that exploits the exogenous variation in the number of days of snow in hospital catchment areas. A peculiarity of the reform allowed variation in days of snow to have a persistent impact on treatment prices. I find that treatment price increases lead to increases in input factors such as nursing staff, physicians, and the range of treatments offered, but to decreases in the treatment volume. This indicates supplier-induced demand. Furthermore, the probability of hospital mergers and privatization decreases. Structural differences in pre-treatment characteristics between hospitals enhance these effects. For instance, private and larger hospitals are more affected. IV estimates reveal that OLS results are biased towards zero in almost all dimensions because structural hospital differences are correlated with the reallocation of hospital resources. These results are important for several reasons. The G-DRG reform led to a persistent polarization of hospital resources, as some hospitals were exposed to treatment price increases, while others experienced reductions.
If hospitals increase the treatment volume in response to price reductions by offering unnecessary therapies, this has a negative impact on population wellbeing and public spending. However, the results show a decrease in the range of treatments if prices decrease. Hospitals might specialize more, thus attracting more patients. From a policy perspective it is important to evaluate whether such changes in the range of treatments jeopardize an adequate nationwide provision of treatments. Furthermore, the results show a decrease in the number of nurses and physicians if prices decrease. This could partly explain the nursing crisis in German hospitals. However, since hospitals specialize more, they might be able to realize efficiency gains which justify reductions in input factors without losses in quality. Further research is necessary to provide evidence for the impact of the G-DRG reform on health care quality. Another important aspect is the change in organizational structure. Many public hospitals have been privatized or merged. The findings show that this is at least partly driven by the G-DRG reform. This can again lead to a lack of services offered in some regions if merged hospitals specialize more or if hospitals are taken over by ecclesiastical organizations which do not provide all treatments due to moral conviction. Overall, this dissertation reveals large potential for preventive health care measures and helps to explain reallocation processes in the hospital sector if treatment prices change. Furthermore, its findings have potentially relevant implications for other areas of public policy. Chapter one identifies an effect of low-dose radiation on cognitive health. As mankind is searching for new energy sources, nuclear power is becoming popular again. However, the results of chapter one point to substantial costs of nuclear energy which have not been accounted for yet. Chapter two finds strong evidence that air quality improvements by Low Emission Zones translate into health improvements, even at relatively low levels of air pollution. These findings may, for instance, be of relevance for the design of further policies targeted at air pollution, such as diesel bans. As pointed out in chapter three, the implementation of DRG systems may have unintended side effects on the reallocation of hospital resources. This may also apply to other providers in the health care sector, such as resident doctors.
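The instrumental-variable strategy of chapter three can be illustrated with a manual two-stage least squares on simulated data. Variable names (days_of_snow, price_shock, n_nurses) and the data-generating process are hypothetical stand-ins, chosen only so that the OLS bias toward zero described above becomes visible:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# Simulated setting: unobserved hospital quality drives both prices and
# staffing, biasing OLS; days of snow shift prices but are otherwise
# unrelated to staffing (the exclusion restriction).
quality = rng.normal(size=n)
days_of_snow = rng.poisson(20, size=n).astype(float)
price_shock = 0.05 * days_of_snow + 0.8 * quality + rng.normal(size=n)
n_nurses = 1.0 * price_shock - 0.8 * quality + rng.normal(size=n)

def ols(y, x):
    """Slope and intercept of a simple regression of y on x."""
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: project the endogenous price shock on the instrument.
a, b = ols(price_shock, days_of_snow)
price_hat = a + b * days_of_snow

# Stage 2: regress the outcome on the fitted (exogenous) price variation.
print("OLS  estimate:", ols(n_nurses, price_shock)[1])  # biased toward zero
print("2SLS estimate:", ols(n_nurses, price_hat)[1])    # ~ 1.0 (true effect)
```

With the confounder built in, the naive OLS slope lands well below the true coefficient of 1.0 while the 2SLS estimate recovers it, mirroring the pattern the dissertation reports.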
Perovskite solar cells combine high carrier mobilities with long carrier lifetimes and high radiative efficiencies. Despite this, full devices suffer from significant nonradiative recombination losses, limiting their VOC to values well below the Shockley–Queisser limit. Here, recent advances in understanding nonradiative recombination in perovskite solar cells from picoseconds to steady state are presented, with an emphasis on the interfaces between the perovskite absorber and the charge transport layers. Quantification of the quasi‐Fermi level splitting in perovskite films with and without attached transport layers makes it possible to identify the origin of nonradiative recombination and to explain the VOC of operational devices. These measurements prove that in state‐of‐the‐art solar cells, nonradiative recombination at the interfaces between the perovskite and the transport layers is more important than processes in the bulk or at grain boundaries. Optical pump‐probe techniques give complementary access to the interfacial recombination pathways and provide quantitative information on transfer rates and recombination velocities. Promising optimization strategies are also highlighted, in particular in view of the role of energy level alignment and the importance of surface passivation. Recent record perovskite solar cells with low nonradiative losses are presented in which interfacial recombination is effectively overcome, paving the way to the thermodynamic efficiency limit.
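The connection between quasi-Fermi level splitting and open-circuit voltage used in this line of work can be written compactly; nonradiative losses enter through the external photoluminescence quantum yield (a standard relation quoted here for orientation, not reproduced from the paper itself):

```latex
% Delta E_F^rad is the splitting in the purely radiative limit; PLQY <= 1 is
% the external photoluminescence quantum yield, so the log term quantifies
% the nonradiative V_OC loss.
\[
  q V_{\mathrm{OC}} \le \Delta E_F
    = \Delta E_F^{\mathrm{rad}} + k_B T \,\ln(\mathrm{PLQY})
\]
```

Comparing the film's measured ΔE_F, with and without transport layers, against qVOC of the full device is what localizes the dominant nonradiative channel at the interfaces rather than in the bulk.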
Partial melting is a first-order process for the chemical differentiation of the crust (Vielzeuf et al., 1990). Redistribution of chemical elements during melt generation crucially influences the composition of the lower and upper crust and provides a mechanism to concentrate and transport chemical elements that may also be of economic interest. Understanding the diverse processes and their controlling factors is therefore not only of scientific interest but also of high economic importance for covering the demand for rare metals.
The redistribution of major and trace elements during partial melting represents a central step towards understanding how granite-bound mineralization develops (Hedenquist and Lowenstern, 1994). Partial melt generation and the mobilization of ore elements (e.g. Sn, W, Nb, Ta) into the melt depend on the composition of the sedimentary source and the melting conditions. Distinct source rocks have different compositions reflecting their deposition and alteration histories. This specific chemical “memory” results in different mineral assemblages and melting reactions for different protolith compositions during prograde metamorphism (Brown and Fyfe, 1970; Thompson, 1982; Vielzeuf and Holloway, 1988). These factors not only exert an important influence on the distribution of chemical elements during melt generation; they also influence the volume of melt that is produced, the extraction of the melt from its source, and its ascent through the crust (Le Breton and Thompson, 1988). On a larger scale, protolith distribution and chemical alteration (weathering), prograde metamorphism with partial melting, melt extraction, and granite emplacement ultimately depend on a (plate-)tectonic control (Romer and Kroner, 2016). Comprehension of the individual stages and their interaction is crucial for understanding how granite-related mineralization forms, thereby allowing estimation of the mineralization potential of certain areas. Partial melting also influences the isotope systematics of melt and restite. Radiogenic and stable isotopes of magmatic rocks are commonly used to trace back the source of intrusions or to quantify mixing of magmas from different sources with distinct isotopic signatures (DePaolo and Wasserburg, 1979; Lesher, 1990; Chappell, 1996). These applications are based on the fundamental requirement that the isotopic signature in the melt reflects that of the bulk source from which it is derived. Different minerals in a protolith may have isotopic compositions of radiogenic isotopes that deviate from their whole-rock signature (Ayres and Harris, 1997; Knesel and Davidson, 2002). In particular, old minerals with a distinct parent-to-daughter (P/D) ratio are expected to have a specific radiogenic isotope signature. As the partial melting reaction only involves selective phases in a protolith, the isotopic signature of the melt reflects that of the minerals involved in the melting reaction and, therefore, should be different from the bulk source signature. Similar considerations hold true for stable isotopes.
Ultrafast magnetisation dynamics have been investigated intensely for two decades. The recovery process after demagnetisation, however, has rarely been studied experimentally and discussed in detail. The focus of this work lies on the investigation of the magnetisation on long timescales after laser excitation. It combines two ultrafast time-resolved methods to study the relaxation of the magnetic and lattice systems after excitation with a high-fluence ultrashort laser pulse. The magnetic system is investigated by time-resolved measurements of the magneto-optical Kerr effect. The experimental setup has been implemented in the scope of this work. The lattice dynamics were obtained with ultrafast X-ray diffraction. The combination of both techniques leads to a better understanding of the mechanisms involved in magnetisation recovery from a non-equilibrium condition. Three different groups of samples are investigated in this work: thin nickel layers capped with nonmagnetic materials, a continuous sample of the ordered L10 phase of iron platinum, and a sample consisting of iron platinum nanoparticles embedded in a carbon matrix. The study of the remagnetisation reveals a general trend for all of the samples: the remagnetisation process can be described by two time dependences, a first exponential recovery that slows down with an increasing amount of energy absorbed in the system until an approximately linear time dependence is observed, followed by a second exponential recovery. In the case of low-fluence excitation, the first recovery is faster than the second. With increasing fluence the first recovery is slowed down and can be described as a linear function. If the pump-induced temperature increase in the sample is sufficiently high, a phase transition to a paramagnetic state is observed. In the remagnetisation process, the transition into the ferromagnetic state is characterised by a distinct transition between the linear and exponential recovery. From the combination of the transient lattice temperature Tp(t) obtained from ultrafast X-ray measurements and the magnetisation M(t) gained from magneto-optical measurements we construct the transient magnetisation-versus-temperature relations M(Tp). If the lattice temperature remains below the Curie temperature, the remagnetisation curve M(Tp) is linear and stays below the equilibrium M(T) curve in the continuous transition-metal layers. When the sample is heated above the phase transition, the remagnetisation converges towards the static temperature dependence. For the granular iron platinum sample the M(Tp) curves for different fluences coincide, i.e. the remagnetisation follows a similar path irrespective of the initial laser-induced temperature jump.
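The construction of M(Tp) amounts to eliminating the shared pump-probe delay axis between the two measurements. A minimal sketch with stand-in curves (real inputs would be the measured UXRD temperature trace and the MOKE magnetisation trace, each on its own delay grid):

```python
import numpy as np

# Hypothetical measured traces: lattice temperature from UXRD and
# magnetisation from MOKE, sampled on different pump-probe delay grids.
t_xrd = np.linspace(0, 5000, 200)                  # delays in ps
T_p = 600 - 250 * (1 - np.exp(-t_xrd / 800))       # stand-in cooling curve (K)
t_moke = np.linspace(0, 5000, 500)
M = 1 - np.exp(-t_moke / 1500)                     # stand-in recovery curve

# Resample the sparser temperature trace onto the MOKE delay grid, then
# read M against T_p with the delay acting as a hidden parameter.
T_p_on_moke = np.interp(t_moke, t_xrd, T_p)
M_vs_T = np.column_stack([T_p_on_moke, M])         # the transient M(T_p) relation

print(M_vs_T[::100])
```

Plotting the second column against the first yields the transient M(Tp) curve that can then be compared with the equilibrium M(T) dependence.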
Sphingolipids are a class of lipids that share a sphingoid base backbone. They exert various effects in eukaryotes, ranging from structural roles in plasma membranes to cellular signaling. De novo sphingolipid synthesis takes place in the endoplasmic reticulum (ER), where the condensation of the activated C₁₆ fatty acid palmitoyl-CoA and the amino acid L-serine is catalyzed by serine palmitoyltransferase (SPT). The product, 3-ketosphinganine, is then converted into more complex sphingolipids by additional ER-bound enzymes, resulting in the formation of ceramides. Since sphingolipid homeostasis is crucial to numerous cellular functions, improved assessment of sphingolipid metabolism will be key to better understanding several human diseases. To date, no assay exists capable of monitoring de novo sphingolipid synthesis in its entirety. Here, we have established a cell-free assay utilizing rat liver microsomes containing all the enzymes necessary for bottom-up synthesis of ceramides. Following lipid extraction, we were able to track the different intermediates of the sphingolipid metabolism pathway, namely 3-ketosphinganine, sphinganine, dihydroceramide, and ceramide. This was achieved by chromatographic separation of sphingolipid metabolites followed by detection of their accurate mass and characteristic fragmentations through high-resolution mass spectrometry and tandem mass spectrometry. We were able to distinguish, unequivocally, between de novo synthesized sphingolipids and intrinsic species, inevitably present in the microsome preparations, through the addition of stable isotope-labeled palmitate-d₃ and L-serine-d₃. To the best of our knowledge, this is the first demonstration of a method monitoring the entirety of ER-associated sphingolipid biosynthesis. Proof-of-concept data were provided by modulating the levels of supplied cofactors (e.g., NADPH) or the addition of specific enzyme inhibitors (e.g., fumonisin B₁). The presented microsomal assay may serve as a useful tool for monitoring alterations in sphingolipid de novo synthesis in cells or tissues. Additionally, our methodology may be used for metabolism studies of atypical substrates – naturally occurring or chemically tailored – as well as novel inhibitors of enzymes involved in sphingolipid de novo synthesis.
Alluvial and transport-limited bedrock rivers constitute the majority of fluvial systems on Earth. Their long profiles hold clues to their present state and past evolution. We currently possess first-principles-based governing equations for flow, sediment transport, and channel morphodynamics in these systems, which we lack for detachment-limited bedrock rivers. Here we formally couple these equations for transport-limited gravel-bed river long-profile evolution. The result is a new predictive relationship whose functional form and parameters are grounded in theory and defined through experimental data. From this, we produce a power-law analytical solution and a finite-difference numerical solution to long-profile evolution. Steady-state channel concavity and steepness are diagnostic of external drivers: concavity decreases with increasing uplift rate, and steepness increases with an increasing sediment-to-water supply ratio. Constraining free parameters explains common observations of river form: to match observed channel concavities, gravel-sized sediments must weather and fine – typically rapidly – and valleys typically should widen gradually. To match the empirical square-root width–discharge scaling in equilibrium-width gravel-bed rivers, downstream fining must occur. The ability to assign a cause to such observations is the direct result of a deductive approach to developing equations for landscape evolution.
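The finite-difference route to long-profile evolution can be sketched compactly. The snippet below integrates a generic transport-limited mass balance, dz/dt = −∂q_s/∂x + U with sediment flux q_s ∝ S^p; the exponent and coefficients are illustrative stand-ins, not the parameters derived in the paper:

```python
import numpy as np

nx, L = 200, 100e3                    # nodes, domain length (m)
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)
z = 500.0 * (1.0 - x / L)             # initial linear profile (m)

kappa, p = 5e4, 7.0 / 6.0             # illustrative transport coefficient, exponent
uplift = 1e-4                         # rock uplift rate (m/yr)
dt, n_steps = 2.0, 50_000             # time step (yr), number of steps

for _ in range(n_steps):
    S = -np.diff(z) / dx                          # downstream slope at cell faces
    qs = kappa * np.maximum(S, 0.0) ** p          # gravel flux per unit width
    z[0] += dt * (-qs[0] / dx + uplift)           # channel head: no incoming flux
    z[1:-1] += dt * (-np.diff(qs) / dx + uplift)  # Exner-style mass balance
    z[-1] = 0.0                                   # fixed base level at the outlet

# The profile evolves toward a concave-up form with slope ~ (uplift*x/kappa)**(1/p).
print(f"head slope {(z[0] - z[1]) / dx:.2e}, outlet slope {(z[-2] - z[-1]) / dx:.2e}")
```

Even this toy version reproduces the qualitative steady-state diagnostics described above: a higher uplift rate steepens the lower reach relative to the headwaters, lowering the concavity.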
In daily life, we automatically form impressions of other individuals on the basis of subtle facial features that convey trustworthiness. Because these face-based judgements influence current and future social interactions, we investigated how perceived trustworthiness of faces affects long-term memory using event-related potentials (ERPs). In the current study, participants incidentally viewed 60 neutral faces differing in trustworthiness, and one week later, performed a surprise recognition memory task, in which the same old faces were presented intermixed with novel ones. We found that after one week, untrustworthy faces were better recognized than trustworthy faces and that untrustworthy faces prompted early (350–550 ms) enhanced frontal ERP old/new differences (larger positivity for correctly remembered old faces, compared to novel ones) during recognition. Our findings point toward an enhanced long-lasting, likely familiarity-based, memory for untrustworthy faces. Even when trust judgments about a person do not necessarily need to be accurate, fast access to memories predicting potential harm may be important to guide social behaviour in daily life.
Trait-based approaches to investigate (short- and long-term) phytoplankton dynamics and community assembly have become increasingly popular in freshwater and marine science. Although the nature of the pelagic habitat and the main phytoplankton taxa and ecology are relatively similar in both marine and freshwater systems, the lines of research have evolved, at least in part, separately. We compare and contrast the approaches adopted in marine and freshwater ecosystems with respect to phytoplankton functional traits. We note differences in study goals relating to functional trait use that assess community assembly and those that relate to ecosystem processes and biogeochemical cycling that affect the type of characteristics assigned as traits to phytoplankton taxa. Specific phytoplankton traits relevant for ecological function are examined in relation to herbivory, amplitude of environmental change, and spatial and temporal scales of study. Major differences are identified, including the shorter time scale for regular environmental change in freshwater ecosystems compared to that in the open oceans, as well as the type of sampling done by researchers based on site accessibility. Overall, we encourage researchers to better motivate why they apply trait-based analyses to their studies and to make use of process-driven approaches, which are more common in marine studies. We further propose fully comparative trait studies conducted along the habitat gradient spanning freshwater to brackish to marine systems, or along geographic gradients. Such studies will benefit from the combined strength of both fields.
The development of new and better optimization and approximation methods for Job Shop Scheduling Problems (JSP) relies on simulations to compare their performance. The test data required for this has an uncertain influence on the simulation results, because the feasible search space can be changed drastically by small variations of the initial problem model. Methods could benefit from this to varying degrees. This speaks in favor of defining standardized and reusable test data for JSP problem classes, which in turn requires a systematic describability of the test data in order to be able to compile problem-adequate data sets. This article examines the test data used for comparing methods by means of a literature review. It also shows how and why the differences in test data have to be taken into account. From this, corresponding challenges are derived which the management of test data must face in the context of JSP research.
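A systematic description of JSP test data starts from an unambiguous, machine-readable instance format. A minimal sketch (our own illustrative format, not an established benchmark schema) together with a makespan evaluator for a semi-active schedule:

```python
from dataclasses import dataclass

@dataclass
class JspInstance:
    """A job shop instance: each job is an ordered list of (machine, duration)."""
    jobs: list[list[tuple[int, int]]]

def makespan(inst: JspInstance, order: list[int]) -> int:
    """Semi-active schedule: dispatch operations in the given repetition order.

    `order` lists job indices, each appearing once per operation of that job.
    """
    job_ready = [0] * len(inst.jobs)     # earliest start time per job
    mach_ready = {}                      # earliest start time per machine
    next_op = [0] * len(inst.jobs)       # next operation index per job
    for j in order:
        machine, duration = inst.jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0))
        job_ready[j] = mach_ready[machine] = start + duration
        next_op[j] += 1
    return max(job_ready)

# A tiny 2-job x 2-machine instance and one dispatch order.
inst = JspInstance(jobs=[[(0, 3), (1, 2)], [(1, 2), (0, 4)]])
print(makespan(inst, order=[0, 1, 0, 1]))   # -> 7
```

Once instances live in such a format, the "small variations of the initial problem model" mentioned above become explicit, diffable edits, which is a precondition for the standardized, reusable test sets the article argues for.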
In nature as well as in the context of infection and medical applications, bacteria often have to move in highly complex environments such as soil or tissues. Previous studies have shown that bacteria strongly interact with their surroundings and are often guided by confinements. Here, we investigate theoretically how the dispersal of swimming bacteria can be augmented by microfluidic environments and validate our theoretical predictions experimentally. We consider a system of bacteria performing the prototypical run-and-tumble motion inside a labyrinth with square lattice geometry. Narrow channels between the square obstacles limit the possibility of bacteria to reorient during tumbling events to areas where channels cross. Thus, by varying the geometry of the lattice it might be possible to control the dispersal of cells. We present a theoretical model quantifying diffusive spreading of a run-and-tumble random walker in a square lattice. Numerical simulations validate our theoretical predictions for the dependence of the diffusion coefficient on the lattice geometry. We show that bacteria moving in square labyrinths exhibit enhanced dispersal as compared to unconfined cells. Importantly, confinement significantly extends the duration of the phase with strongly non-Gaussian diffusion, when the geometry of channels is imprinted in the density profiles of spreading cells. Finally, in good agreement with our theoretical findings, we observe the predicted behaviors in experiments with E. coli bacteria swimming in a square lattice labyrinth created in a microfluidic device. Altogether, our comprehensive understanding of bacterial dispersal in a simple two-dimensional labyrinth is a first step toward the analysis of more complex geometries relevant for real-world applications.
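The confined run-and-tumble dynamics can be sketched as a minimal stochastic simulation (illustrative parameters of our own choosing, a simplification of the model above): the walker runs along a channel, a tumble only takes effect near a channel crossing, and the effective diffusion coefficient is read off the ensemble mean squared displacement:

```python
import numpy as np

rng = np.random.default_rng(3)
n_walkers, n_steps = 1000, 20_000
v, dt = 1.0, 0.01        # run speed and time step
alpha = 1.0              # tumbling rate (attempted reorientations per unit time)
pitch = 0.5              # lattice pitch: channel crossings sit at multiples of it

dirs = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
pos = np.zeros((n_walkers, 2))
d = dirs[rng.integers(0, 4, n_walkers)]

for _ in range(n_steps):
    pos += v * d * dt
    tumble = rng.random(n_walkers) < alpha * dt
    # Channel walls suppress reorientation: a tumble only takes effect if the
    # walker is close to a crossing (both coordinates near lattice points).
    at_crossing = np.all(
        np.abs(pos / pitch - np.round(pos / pitch)) < 0.05, axis=1)
    idx = tumble & at_crossing
    pos[idx] = np.round(pos[idx] / pitch) * pitch   # snap to the crossing
    d[idx] = dirs[rng.integers(0, 4, int(idx.sum()))]

# Long-time diffusion coefficient from the ensemble MSD (2D: MSD = 4*D*t).
msd = np.mean(np.sum(pos**2, axis=1))
print("effective D ~", msd / (4 * n_steps * dt))
```

Because reorientations are only possible in the small crossing regions, runs are effectively lengthened relative to an unconfined walker, which is the mechanism behind the enhanced dispersal reported above.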
Research on weight-loss interventions in emerging adulthood is warranted. Therefore, a cognitive-behavioral group treatment (CBT), including development-specific topics for adolescents and young adults with obesity (YOUTH), was developed. In a controlled study, we compared the efficacy of this age-specific CBT group intervention to an age-unspecific CBT group delivered across ages in an inpatient setting. The primary outcome was the body mass index standard deviation score (BMI-SDS) over the course of one year; secondary outcomes were health-related and disease-specific quality of life (QoL). A total of 266 participants aged 16 to 21 years (65% female) were randomized. Intention-to-treat (ITT) and per-protocol analyses (PPA) were performed. For both group interventions, we observed significant and clinically relevant improvements in BMI-SDS and QoL over the course of time with small to large effect sizes. Contrary to our hypothesis, the age-specific intervention was not superior to the age-unspecific CBT approach.
The growing energy demand of modern economies leads to the increased consumption of fossil fuels in the form of coal, oil, and natural gas as the main sources. The combustion of these carbon-based fossil fuels inevitably produces greenhouse gases, especially CO2. Approaches to tackle the CO2 problem are to capture it from combustion sources or directly from air, as well as to avoid CO2 production in energy-consuming sources (e.g., in the refrigeration sector). In the former, relatively low CO2 concentrations and competitive adsorption of other gases often lead to low CO2 capacities and selectivities. In both approaches, the interaction of gas molecules with porous materials plays a key role. Porous carbon materials possess unique properties including electric conductivity, tunable porosity, as well as thermal and chemical stability. Nevertheless, pristine carbon materials offer weak polarity and thus low CO2 affinity. This can be overcome by nitrogen doping, which enhances the affinity of carbon materials towards acidic or polar guest molecules (e.g., CO2, H2O, or NH3). In contrast to heteroatom-free materials, such carbon materials are in most cases “noble”, that is, they oxidize other matter rather than being oxidized, due to the very positive working potential of their electrons. The challenging task here is to achieve a homogeneous distribution of significant nitrogen content with similar bonding motifs throughout the carbon framework and a uniform pore size/distribution to maximize host-guest interactions. The aim of this thesis is the development of novel synthesis pathways towards nitrogen-doped nanoporous noble carbon materials with precise design on a molecular level and an understanding of their structure-related performance in energy and environmental applications, namely gas adsorption and electrochemical energy storage.
A template-free synthesis approach towards nitrogen-doped noble microporous carbon materials with high pyrazinic nitrogen content and C2N-type stoichiometry was established via thermal condensation of a hexaazatriphenylene derivative. The materials exhibited high uptake of guest molecules such as H2O and CO2 at low concentrations, as well as moderate CO2/N2 selectivities. In a following step, the CO2/N2 selectivity was enhanced towards molecular sieving of CO2 via kinetic size exclusion of N2. Precise control over the degree of condensation, and thus the atomic construction and porosity of the resulting materials, led to remarkable CO2/N2 selectivities, CO2 capacities, and heats of CO2 adsorption. The ultrahydrophilic nature of the pore walls and the narrow microporosity of these carbon materials served as an ideal basis for the investigation of interface effects with guest molecules more polar than CO2, namely H2O and NH3.
H2O vapor physisorption measurements, as well as NH3 temperature-programmed desorption and thermal response measurements, showed an exceptionally high affinity towards H2O vapor and NH3 gas. Another series of nitrogen-doped carbon materials was synthesized by direct condensation of a pyrazine-fused conjugated microporous polymer, and their structure-related performance in electrochemical energy storage, namely as anode materials for sodium-ion batteries, was investigated.
All in all, the findings in this thesis exemplify the value of molecularly designed nitrogen-doped carbon materials with remarkable heteroatom content implemented as well-defined structural motifs. The simultaneous adjustment of the porosity renders these materials suitable candidates for fundamental studies of the interactions between nitrogen-doped carbon materials and different guest species.
Online hate is a topic that has received considerable interest lately, as it represents a risk to self-determination and peaceful coexistence in societies around the globe. However, not much is known about why adolescents post or forward hateful online material, or about how adolescents cope with this newly emerging online risk. Thus, we sought to better understand the relationship between being a bystander to and a perpetrator of online hate, and the moderating effects of problem-focused coping strategies (e.g., assertive and technical coping) within this relationship. Self-report questionnaires on witnessing and committing online hate and on assertive and technical coping were completed by 6829 adolescents between 12 and 18 years of age from eight countries. The results showed that witnessing online hate was positively related to perpetrating online hate. Assertive and technical coping strategies were negatively related to perpetrating online hate. Bystanders of online hate reported fewer instances of perpetrating online hate when they reported higher levels of assertive and technical coping strategies, and more frequent instances when they reported lower levels of these strategies. In conclusion, our findings suggest that, to be effective, prevention and intervention programs that target online hate should consider educating young people about problem-focused coping strategies, self-assertiveness, and media skills. Implications for future research are discussed.
Hepcidin-25 (Hep-25) plays a crucial role in the control of iron homeostasis. Since dysfunction of the hepcidin pathway leads to multiple diseases as a result of iron imbalance, hepcidin represents a potential target for the diagnosis and treatment of disorders of iron metabolism. Despite intense research in the last decade aimed at developing a selective immunoassay for iron disorder diagnosis and treatment and at better understanding the ferroportin-hepcidin interaction, questions remain. The key to resolving these questions is exact knowledge of the 3D structure of native Hep-25. Since it was determined that the N-terminus, which is responsible for the bioactivity of Hep-25, contains a small Cu(II)-binding site known as the ATCUN motif, it was assumed that the Hep-25-Cu(II) complex is the native, bioactive form of hepcidin. This structure has thus far not been elucidated in detail. Owing to the lack of structural information on metal-bound Hep-25, little is known about its possible biological role in iron metabolism. Therefore, this work focuses on the structural characterization of metal-bound Hep-25 by NMR spectroscopy and molecular dynamics simulations. For the present work, a protocol was developed to prepare and purify properly folded Hep-25 in high quantities. In order to overcome the low solubility of Hep-25 at neutral pH, we introduced a C-terminal DEDEDE solubility tag. Metal binding was investigated through a series of NMR spectroscopic experiments to identify the most affected amino acids that mediate metal coordination. Based on the obtained NMR data, a structural calculation was performed in order to generate a model structure of the Hep-25-Ni(II) complex. The DEDEDE tag was excluded from the structural calculation due to a lack of NMR restraints. The dynamic nature and fast solvent exchange of some of the amide protons reduced the overall number of NMR restraints needed for a high-quality structure. The NMR data revealed that the 20 C-terminal Hep-25 amino acids experienced no significant conformational changes, compared to published results, upon a pH change from pH 3 to pH 7 and upon metal binding. A 3D model of the Hep-25-Ni(II) complex was constructed from NMR data recorded for the hexapeptide-Ni(II) complex and the Hep-25-DEDEDE-Ni(II) complex, in combination with the fixed conformation of the 19 C-terminal amino acids. The NMR data of the Hep-25-DEDEDE-Ni(II) complex indicate that the ATCUN motif moves independently from the rest of the structure. The 3D model structure of metal-bound Hep-25 allows future work to elucidate hepcidin’s interaction with its receptor ferroportin and should serve as a starting point for the development of antibodies with improved selectivity.
In light of the debate on the consequences of competitive contracting out of traditionally public services, this research compares two mechanisms used to allocate funds in development cooperation—direct awarding and competitive contracting out—aiming to identify their potential advantages and disadvantages.
Agency theory is applied within the framework of rational-choice institutionalism to study the institutional arrangements that surround the two money allocation mechanisms, to identify the incentives they create for the behavior of individual actors in the field, and to examine how these then translate into measurable differences in the managerial quality of development aid projects. In this work, project management quality is seen as an important determinant of overall project success.
For data-gathering purposes, the German development agency, the Gesellschaft für Internationale Zusammenarbeit (GIZ), is used, owing to its unique way of working: whereas the majority of projects receive funds via the direct-award mechanism, its commercial department, GIZ International Services (GIZ IS), has to compete for project funds.
The data concerning project management practices on GIZ and GIZ IS projects were gathered via a web-based, self-administered survey of project team leaders. Principal component analysis was applied to reduce the dimensionality of the independent variable to a total of five components of project management. Furthermore, multiple regression analysis identified the differences between the separate components across the two project types. Enriched by qualitative data gathered via interviews, this thesis offers insights into everyday managerial practices in development cooperation and identifies the advantages and disadvantages of the two allocation mechanisms.
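As an illustration of this two-step analysis, a minimal Python sketch of dimensionality reduction followed by regression on the allocation mechanism might look as follows; the data, sample sizes, and variable names are placeholders, not the thesis’ actual survey.

    import numpy as np
    import statsmodels.api as sm
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 20))            # placeholder survey items
    competitive = rng.integers(0, 2, 120)     # 1 = GIZ IS, 0 = direct award

    # Reduce the survey items to five project-management components.
    scores = PCA(n_components=5).fit_transform(
        StandardScaler().fit_transform(X))

    # Regress each component on the allocation mechanism.
    for k in range(5):
        fit = sm.OLS(scores[:, k], sm.add_constant(competitive)).fit()
        print(f"component {k}: b={fit.params[1]:.2f}, p={fit.pvalues[1]:.3f}")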
The thesis first reiterates the responsibility of donors and implementers for overall aid effectiveness. It shows that the mechanism of competitive contracting out leads to better oversight and control of implementers, fosters deeper cooperation between implementers and beneficiaries, and has the potential to strengthen the ownership of recipient countries. On the other hand, it shows that evaluation quality does not benefit substantially from the competitive allocation mechanism, and that the quality of the component knowledge management and learning is better when direct-award mechanisms are used. This raises questions about the limited opportunities for actors in the field to learn from past mistakes and to incorporate the findings into future interventions, which is one of the fundamental issues of aid effectiveness. Finally, the findings reveal immense deficiencies in the oversight and control of individual projects in German development cooperation.
Peer cultural socialisation
(2019)
This study investigated how peers can contribute to cultural minority students’ cultural identity, life satisfaction, and school values (school importance, utility, and intrinsic values) by talking about cultural values, beliefs, and behaviours associated with heritage and mainstream culture (peer cultural socialisation). We further distinguished between heritage and mainstream identity as two separate dimensions of cultural identity. Analyses were based on self-reports of 662 students of the first, second, and third migrant generation in Germany (Mean age = 14.75 years, 51% female). Path analyses revealed that talking about heritage culture with friends was positively related to heritage identity. Talking about mainstream culture with friends was negatively associated with heritage identity, but positively with mainstream identity as well as school values. Both dimensions of cultural identity related to higher life satisfaction and more positive school values. As expected, heritage and mainstream identity mediated the link between peer cultural socialisation and adjustment outcomes. Findings highlight the potential of peers as socialisation agents to help promote cultural belonging as well as positive adjustment of cultural minority youth in the school context.
The Role of Bargaining Power
(2019)
Neoclassical theory omits the role of bargaining power in the determination of wages. As a result, the importance of changes in bargaining position for the development of income shares over the last decades is underestimated. This paper presents a theoretical argument for why collective bargaining power is a main determinant of workers’ share of income and how its decline contributed to the severe changes in the distribution of income since the 1980s. To test this hypothesis, a panel data regression analysis is performed, which suggests that unions significantly influence the distribution of income in developed countries.
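A minimal sketch of such a panel regression, assuming a country-year dataset with hypothetical file and column names (the paper’s exact specification is not reproduced here):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical panel: one row per country and year.
    df = pd.read_csv("labour_share_panel.csv")   # assumed columns below

    # Two-way fixed effects via dummies, standard errors clustered
    # by country.
    fit = smf.ols(
        "labour_share ~ union_density + C(country) + C(year)", data=df
    ).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
    print(fit.params["union_density"], fit.pvalues["union_density"])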
The Himalayas are a region that depends heavily on meltwater resources, but is also frequently prone to hazards arising from their change. This mountain belt hosts the highest mountain peaks on Earth, has the largest reserve of ice outside the polar regions, and has been home to a rapidly growing population in recent decades. One source of hazard in particular has attracted scientific research in the past two decades: glacial lake outburst floods (GLOFs), which occur rarely, but mostly with fatal and catastrophic consequences for downstream communities and infrastructure. GLOFs can suddenly release several million cubic meters of water from naturally impounded meltwater lakes. Glacial lakes have grown in number and size owing to ongoing glacial mass losses in the Himalayas. Theory holds that enhanced meltwater production may increase GLOF frequency, but this notion has never been tested so far. The key challenge in testing it is the high altitude of >4000 m at which these lakes occur, making field work impractical. Moreover, flood waves can attenuate rapidly in mountain channels downstream, so that many GLOFs have likely gone unnoticed in past decades. Our knowledge of GLOFs is hence likely biased towards larger, destructive cases, which challenges a detailed quantification of their frequency and their response to atmospheric warming. Robustly quantifying the magnitude and frequency of GLOFs is essential for risk assessment and management along mountain rivers, not least to implement their return periods in building design codes.
Motivated by this limited knowledge of GLOF frequency and hazard, I developed an algorithm that efficiently detects GLOFs from satellite images. In essence, this algorithm classifies land cover in 30 years (~1988–2017) of continuously recorded Landsat images over the Himalayas, and calculates likelihoods of rapidly shrinking water bodies in the stack of land-cover images. I visually assessed the sites detected in this way for sediment fans in the river channel downstream, a second key diagnostic of GLOFs. Rigorous tests and validation with known cases from roughly 10% of the Himalayas suggested that this algorithm is robust against frequent image noise and hence capable of identifying previously unknown GLOFs. Extending the search to the entire Himalayan mountain range revealed some 22 newly detected GLOFs. I thus more than doubled the existing GLOF count from the 16 cases known since 1988, and found a dominant cluster of GLOFs in the Central and Eastern Himalayas (Bhutan and Eastern Nepal), compared to the more rarely affected ranges in the North. Yet, the total of 38 GLOFs showed no change in annual frequency, so that the activity of GLOFs per unit glacial lake area has decreased in the past 30 years. I discussed possible drivers of this finding, but left a further attribution to distinct GLOF-triggering mechanisms open to future research.
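The thesis’ classifier itself is not reproduced here, but the core idea (per-scene water mapping followed by a test for sudden lake shrinkage) can be sketched in Python; the NDWI threshold and the shrinkage criterion below are illustrative assumptions.

    import numpy as np

    def water_mask(green, nir, thresh=0.2):
        # Crude NDWI water classification for one Landsat scene;
        # the thesis uses a full land-cover classification instead.
        ndwi = (green - nir) / (green + nir + 1e-9)
        return ndwi > thresh

    def flag_sudden_shrinkage(areas, drop_frac=0.5):
        # Flag scenes where lake area fell by more than drop_frac
        # relative to the previous observation (a GLOF tell-tale).
        a = np.asarray(areas, float)
        rel = np.diff(a) / np.maximum(a[:-1], 1e-9)
        return np.where(rel < -drop_frac)[0] + 1

    areas_km2 = [0.42, 0.45, 0.44, 0.12, 0.10]   # toy per-scene lake areas
    print(flag_sudden_shrinkage(areas_km2))       # -> [3]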
This updated GLOF frequency was the key input for assessing GLOF hazard for the entire Himalayan mountain belt and several subregions. I used standard definitions from flood hydrology, describing hazard as the annual exceedance probability of a given flood peak discharge [m3 s-1] or larger at the breach location. I coupled the empirical frequency of GLOFs per region to simulations of physically plausible peak discharges from all ~5,000 existing lakes in the Himalayas. Using an extreme-value model, I could hence calculate flood return periods. I found that the contemporary 100-year GLOF discharge (the flood level that is reached or exceeded on average once in 100 years) is 20,600+2,200/–2,300 m3 s-1 for the entire Himalayas. Given the spatial and temporal distribution of historic GLOFs, contemporary GLOF hazard is highest in the Eastern Himalayas, and lower in regions where GLOFs are rarer. I also calculated GLOF hazard for some 9,500 overdeepenings, which could become exposed and fill with water if all Himalayan glaciers eventually melt. Assuming that the current GLOF rate remains unchanged, the 100-year GLOF discharge could double (41,700+5,500/–4,700 m3 s-1), while the regional GLOF hazard may increase most in the Karakoram.
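The extreme-value step can be illustrated with a generalized extreme value (GEV) fit to annual maximum discharges; the numbers below are toy values, not the thesis’ simulation output, and the thesis’ exact statistical model may differ.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(3)
    # Toy annual maxima of GLOF peak discharge [m3 s-1] for one region.
    annual_max_q = genextreme.rvs(c=-0.1, loc=5000, scale=3000,
                                  size=200, random_state=rng)

    c, loc, scale = genextreme.fit(annual_max_q)
    # 100-year return level: exceeded with probability 1/100 per year.
    q100 = genextreme.ppf(1 - 1 / 100, c, loc, scale)
    print(f"100-year discharge ~ {q100:.0f} m3 s-1")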
To conclude, these three stages (GLOF detection, frequency analysis, and regional hazard estimation) provide a framework for modern GLOF hazard assessment. Given the rapidly growing population, infrastructure, and hydropower projects in the Himalayas, this thesis helps to quantify the purely climate-driven contribution to hazard and risk from GLOFs.
A new micro/mesoporous hybrid clay nanocomposite prepared from kaolinite clay, Carica papaya seeds, and ZnCl2 via calcination in an inert atmosphere is presented. Regardless of the synthesis temperature, the specific surface area of the nanocomposite material is between ≈150 and 300 m2/g. The material contains both micro- and mesopores in roughly equal amounts. X-ray diffraction, infrared spectroscopy, and solid-state nuclear magnetic resonance spectroscopy suggest the formation of several new bonds in the materials upon reaction of the precursors, thus confirming the formation of a new hybrid material. Thermogravimetric analysis/differential thermal analysis and elemental analysis confirm the presence of carbonaceous matter. The new composite is stable up to 900 °C and is an efficient adsorbent for the removal of a water micropollutant, 4-nitrophenol, and a pathogen, E. coli, from an aqueous medium, suggesting applications in water remediation are feasible.
Pedagogy of integrity
(2019)
The master’s thesis “Pedagogy of Integrity: an Analysis of the Conceptualization and Implementation of the MA Program Anglophone Modernities in Literature and Culture” deals with colonial patterns in higher education practices. It provides a theoretical framework for the decolonization of academic teaching-learning practices on the micro- and meso-didactic levels and suggests concrete solutions for decolonized education practices, especially for degree programs whose content focuses on postcolonial issues. In addition, through an exemplary analysis of the conceptualization and implementation of the MA Program Anglophone Modernities in Literature and Culture, the work explores patterns of colonial heritage as well as the will to decolonise them. The main thesis claims that (higher) education should be liberated from colonial patterns, so that real participation of all students in collective knowledge production becomes possible.
In the theoretical elaborations, different concepts of critical and radical pedagogy (e.g. those of Paulo Freire and bell hooks), in combination with concepts of modalities of adult learning (e.g. transformative learning) and approaches that seek to combine learning and social justice (e.g. Social Justice Learning), are systematised and explored for their substance and potential to contribute to a criteria catalogue for decolonised educational practices. In addition, attention is paid to higher education research results which reveal that students who belong to underrepresented groups at university (non-traditional students) in their societies of origin face more difficulties and discrimination as international students at Western universities than ‘traditional’ international students do. Based on the theoretical elaborations, the work claims that:
(1) the homogeneity-preserving dynamics found in Western colleges are an inheritance of colonial times and mindsets, which continue to function in education and multiply social inequality in the context of internationalization, migration, and participation;
(2) all higher education programs, but especially those dealing explicitly with inequality phenomena, social and cultural diversity, power relations and issues of domination, as well as with postcolonial criticism, should establish premises of equity and provide de facto equal opportunities for participation through the embodiment of social justice, as a way to remain credible;
(3) decolonization of the educational space can be enabled through appropriate didactic action on both the meso-didactic (institution) and micro-didactic (teaching-learning arrangements) levels of agency, given sufficient will and willingness of the responsible professionals.
By examining representative documents published by the MA Program Anglophone Modernities in Literature and Culture using the ‘close reading’ methodology, as well as through an exemplary analysis of the concept of one of the program’s teaching-learning events and a student survey, the work seeks to examine to what extent the master’s degree program represents a space of decolonised higher education. The results of the analysis indicate the need for a stronger normative value-positioning of the study program, although many practices that show commitment to participation, social justice and diversity were identified.
In the last chapter, the results of the theoretical elaboration and the program analysis are synthesized into the concept of an integrity-based pedagogy, called Pedagogy of Integrity, and suggestions are formulated for teaching practice in the study program, which are meant to help overcome the discrepancy between will and practice on the way towards a decolonised educational space.
The individual’s mental lexicon comprises all known words as well as related information on semantics, orthography and phonology. Moreover, entries are connected through similarities in these language domains, forming a large network structure. Access to lexical information is crucial for the processing of words and sentences. Thus, a lack of information inhibits retrieval and can cause language processing difficulties. Hence, the composition of the mental lexicon is essential for language skills, and its assessment is a central topic of linguistic and educational research.
In early childhood, measurement of the mental lexicon is uncomplicated, for example through parental questionnaires or the analysis of speech samples. However, with growing content the measurement becomes more challenging: with more and more words in the mental lexicon, the inclusion of all possibly known words in a test or questionnaire becomes impossible. That is why there is a lack of methods to assess the mental lexicon of school children and adults. For the same reason, there are only few findings on the course of lexical development during the school years and on its specific effect on other language skills. This dissertation is intended to close this gap by pursuing two major goals: First, I wanted to develop a method to assess lexical features, namely lexicon size and lexical structure, for children of different age groups. Second, I aimed to describe the results of this method in terms of the development of lexicon size and structure. The findings were intended to help understand mechanisms of lexical acquisition and to inform theories of vocabulary growth.
The approach is based on the dictionary method, in which a sample of words from a dictionary is tested and the results are projected onto the whole dictionary to determine an individual’s lexicon size. In the present study, the childLex corpus, a written language corpus for children in German, served as the basis for lexicon size estimation. The corpus is assumed to comprise all words children attending primary school could know. Testing a sample of words from the corpus enables projection of the results onto the whole corpus. For this purpose, a vocabulary test based on the corpus was developed. Afterwards, the test performance of virtual participants was simulated by drawing lexicons of different sizes from the corpus and checking whether the test items were included in the lexicon or not. This allowed determination of the relation between test performance and total lexicon size, which could then be transferred to a sample of real participants. Besides lexicon size, lexical content could be approximated with this approach and analyzed in terms of lexical structure.
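A minimal Python sketch of this simulation idea, with a frequency-ordered toy corpus standing in for childLex and arbitrary sizes for the corpus and the test:

    import numpy as np

    rng = np.random.default_rng(4)
    corpus = np.arange(100_000)            # word IDs, sorted by frequency
    test_items = rng.choice(corpus, size=60, replace=False)

    def expected_score(lexicon_size):
        # A virtual participant knows the lexicon_size most frequent
        # words; the score is the share of test items they 'know'.
        return np.mean(test_items < lexicon_size)

    sizes = np.arange(1_000, 100_001, 1_000)
    scores = np.array([expected_score(s) for s in sizes])

    observed = 0.55                        # a real child's test score
    estimate = sizes[np.argmin(np.abs(scores - observed))]
    print("estimated lexicon size:", estimate)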
To pursue the presented aims and establish the sampling method, I conducted three consecutive studies. Study 1 comprises the development of a vocabulary test based on the childLex corpus. The test uses the yes/no format and includes three versions for different age groups. The validation, based on the Rasch model, shows that it is a valid instrument to measure vocabulary for primary school children in German. In Study 2, I established the method to estimate lexicon sizes and present results on lexical development during primary school. Plausible results demonstrate that lexical growth follows a quadratic function, starting with about 6,000 words at the beginning of school and reaching about 73,000 words on average for young adults. Moreover, the study revealed large interindividual differences. Study 3 focused on the analysis of network structures in the mental lexicon based on orthographic similarities and on their development. It demonstrates that the networks possess small-world characteristics and decrease in interconnectivity with age.
Taken together, this dissertation provides an innovative approach to the assessment and description of the development of the mental lexicon from primary school onwards. The studies deliver results on lexical acquisition in different age groups that were missing before. They impressively show the importance of this period and reveal extensive interindividual differences in lexical development. One central aim of future research is to address the causes and prevention of these differences. In addition, the application of the method in further research (e.g. its adaptation to other target groups) and for teaching purposes (e.g. the adaptation of texts for different target groups) appears promising.
The sensitivity of fluvial systems to tectonic and climatic boundary conditions allows us to use the geomorphic and stratigraphic records as quantitative archives of past climatic and tectonic conditions. Thus, fluvial terraces that form on alluvial fans and floodplains as well as the rate of sediment export to oceanic and continental basins are commonly used to reconstruct paleoenvironments. However, we currently lack a systematic and quantitative understanding of the transient evolution of fluvial systems and their associated sediment storage and release in response to changes in base level, water input, and sediment input. Such knowledge is necessary to quantify past environmental change from terrace records or sedimentary deposits and to disentangle the multiple possible causes for terrace formation and sediment deposition. Here, we use a set of seven physical experiments to explore terrace formation and sediment export from a single, braided channel that is perturbed by changes in upstream water discharge or sediment supply, or through downstream base-level fall. Each perturbation differently affects (1) the geometry of terraces and channels, (2) the timing of terrace cutting, and (3) the transient response of sediment export from the basin. In general, an increase in water discharge leads to near-instantaneous channel incision across the entire fluvial system and consequent local terrace cutting, thus preserving the initial channel slope on terrace surfaces, and it also produces a transient increase in sediment export from the system. In contrast, a decreased upstream sediment-supply rate may result in longer lag times before terrace cutting, leading to terrace slopes that differ from the initial channel slope, and also lagged responses in sediment export. Finally, downstream base-level fall triggers the upstream propagation of a diffuse knickzone, forming terraces with upstream-decreasing ages. The slope of terraces triggered by base-level fall mimics that of the newly adjusted active channel, whereas slopes of terraces triggered by a decrease in upstream sediment discharge or an increase in upstream water discharge are steeper compared to the new equilibrium channel. By combining fill-terrace records with constraints on sediment export, we can distinguish among environmental perturbations that would otherwise remain unresolved when using just one of these records.
Accusative Unaccusatives
(2019)
In this study, we analyze the forecast accuracy and profitability of buy recommendations published in five major German financial magazines for private households based on fundamental analysis. The results show a high average forecast accuracy but with a very high standard deviation, which indicates poor forecast accuracy with regard to individual stocks. The recommendation profitability slightly exceeds the performance of the MSCI World index. Considering the involved risk, which is represented by a high standard deviation, the excess returns appear to be insufficient.
The growing global demand for meat is confronted with shrinking agricultural areas and runs counter to efforts to mitigate methane emissions and to improve public health. Cultured meat could contribute to solving these problems, but will such meat be marketable, competitive, and accepted? Using the Delphi method, this study explored the potential development of cultured meat by 2027. Despite the acknowledged urgency of developing sustainable meat alternatives, participants doubt that the challenges regarding mass production, production costs, and consumer acceptance will be overcome by 2027. Considering the noticeable impacts of global warming, further research and development as well as a change in consumer perceptions are indispensable.
This paper challenges the solely rational view of the scenario technique as a strategy and foresight tool designed to cope with uncertainty by considering multiple possible future states. The paper employs an affordance-based view that allows for the identification and structuring of hidden, emergent attributes of the scenario technique beyond the intended ones. The suggested framework distinguishes between affordances (1) that are intended by the organization and relate to its goals, (2) that emergently generate organizational benefits, and (3) that do not relate to organizational but individual interests. Also, constraints in the use of scenarios are discussed. Affordance theory’s specific lens shows that the emergence of such attributes depends on the users’ specific intentions.
Additive manufacturing (AM) by laser powder-bed fusion (L-PBF) offers new prospects for the design of parts and thereby enables the production of lattice structures. These lattice structures are to be implemented in various industrial applications (e.g. gas turbines) to save material or to integrate cooling channels. However, internal defects, residual stress, and structural deviations from the nominal geometry are unavoidable.
In this work, the structural integrity of lattice structures manufactured by means of L-PBF was investigated non-destructively using a multiscale approach.
A workflow for quantitative 3D powder analysis in terms of particle size, particle shape, particle porosity, inter-particle distance and packing density was established. Synchrotron computed tomography (CT) was used to correlate the packing density with the particle size and particle shape. It was also observed that at least about 50% of the powder porosity was released during production of the struts.
Struts are the building blocks of lattice structures and were investigated by means of laboratory CT. The focus was on the influence of the build angle on part porosity and surface quality. The surface topography analysis was advanced by the quantitative characterisation of re-entrant surface features. This characterisation was compared with conventional surface parameters, showing their complementary information, but also the need for AM-specific surface parameters.
The mechanical behaviour of the lattice structure was investigated with in-situ CT under compression and subsequent digital volume correlation (DVC). The deformation was found to be knot-dominated; the lattice therefore folds layer by layer at the unit-cell level.
The residual stress in such lattice structures was determined experimentally for the first time. Neutron diffraction was used for the non-destructive 3D stress investigation. The principal stress directions and values were determined as a function of the number of measured directions. While a significantly uniaxial stress state was found in the strut, a more hydrostatic stress state was found in the knot. In both cases, strut and knot, at least seven directions were needed to find reliable principal stress directions.
Supermassive black holes reside in the hearts of almost all massive galaxies. Their evolutionary path seems to be strongly linked to the evolution of their host galaxies, as implied by several empirical relations between the black hole mass (M_BH) and different host galaxy properties. The physical driver of this co-evolution is, however, still not understood. More mass measurements over homogeneous samples and a detailed understanding of systematic uncertainties are required to fathom the origin of the scaling relations.
In this thesis, I present the mass estimations of supermassive black holes in the nuclei of one late-type and thirteen early-type galaxies. Our SMASHING sample extends from the intermediate to the massive galaxy mass regime and was selected to fill in gaps in the number of galaxies along the scaling relations. All galaxies were observed at high spatial resolution, making use of the adaptive-optics mode of integral field unit (IFU) instruments on state-of-the-art telescopes (SINFONI, NIFS, MUSE). I extracted the stellar kinematics from these observations and constructed dynamical Jeans and Schwarzschild models to robustly estimate the mass of the central black holes. My new mass estimates increase the number of early-type galaxies with measured black hole masses by 15%. The seven measured galaxies with nuclear light deficits (‘cores’) augment the sample of cored galaxies with measured black holes by 40%. Next to determining massive black hole masses, evaluating the accuracy of black hole masses is crucial for understanding the intrinsic scatter of the black hole-host galaxy scaling relations. I tested various sources of systematic uncertainty on my derived mass estimates.
The M_BH estimate for the single late-type galaxy of the sample yielded an upper limit, which I could constrain very robustly. I tested the effects of dust, mass-to-light ratio (M/L) variation, and dark matter on the measured M_BH. Based on these tests, the typically assumed constant M/L ratio can be an adequate assumption to account for the small amounts of dark matter in the center of that galaxy. I also tested the effect of a variable M/L on the M_BH measurement in a second galaxy. When stellar M/L variations were considered in the dynamical modeling, the measured M_BH decreased by 30%. In the future, this test should be performed on additional galaxies to learn how the assumption of a constant M/L biases the estimated black hole masses.
Based on our upper-limit mass measurement, I confirm previous suggestions that resolving the predicted BH sphere of influence is not a strict condition for measuring black hole masses. Instead, it is only a rough guide for the detection of the black hole if high-quality, high signal-to-noise IFU data are used for the measurement. About half of our sample consists of massive early-type galaxies which show nuclear surface brightness cores and signs of triaxiality. While these types of galaxies are typically modeled with axisymmetric methods, the effects on M_BH are not yet well studied. The massive galaxies of our sample are well suited to test the effect of different stellar dynamical models on the measured black hole mass in evidently triaxial galaxies. I have compared spherical Jeans and axisymmetric Schwarzschild models and will add triaxial Schwarzschild models to this comparison in the future. The constructed Jeans and Schwarzschild models mostly disagree with each other and cannot reproduce many of the triaxial features of the galaxies (e.g., nuclear sub-components, prolate rotation). The consequences of applying axisymmetric models to triaxial galaxies for the accuracy of M_BH, and the impact on the black hole-host galaxy relations, need to be carefully examined in the future.
In the sample of galaxies with published M_BH, we find measurements based on different dynamical tracers, requiring different observations, assumptions, and methods. Crucially, different tracers do not always give consistent results. I used two independent tracers (cold molecular gas and stars) to estimate M_BH in a regular galaxy of our sample. While the two estimates are consistent within their errors, the stellar-based measurement is twice as high as the gas-based one. Similar trends have also been found in the literature. Therefore, a rigorous test of the systematics associated with the different modeling methods is required in the future. I caution that the effects of different tracers (and methods) should be taken into account when discussing the scaling relations.
I conclude this thesis by comparing my galaxy sample with the compilation of galaxies with measured black holes from the literature, also adding six SMASHING galaxies that were published outside of this thesis. None of the SMASHING galaxies deviates significantly from the literature measurements. Their inclusion in the published early-type galaxies causes a change towards a shallower slope for the M_BH-effective velocity dispersion relation, which is mainly driven by the massive galaxies of our sample. More unbiased and homogeneous measurements are needed in the future to determine the shape of the relation and to understand its physical origin.
Predators can have numerical and behavioral effects on prey animals. While numerical effects are well explored, the impact of behavioral effects is unclear. Furthermore, behavioral effects are generally either analyzed with a focus on single individuals or with a focus on consequences for other trophic levels. Thereby, the impact of fear on the level of prey communities is overlooked, despite potential consequences for conservation and nature management. In order to improve our understanding of predator-prey interactions, an assessment of the consequences of fear in shaping prey community structures is crucial.
In this thesis, I evaluated how fear alters prey space use, community structure and composition, focusing on terrestrial mammals. By integrating landscapes of fear in an existing individual-based and spatially-explicit model, I simulated community assembly of prey animals via individual home range formation. The model comprises multiple hierarchical levels from individual home range behavior to patterns of prey community structure and composition. The mechanistic approach of the model allowed for the identification of underlying mechanism driving prey community responses under fear.
My results show that fear modified prey space use and community patterns. Under fear, prey animals shifted their home ranges towards safer areas of the landscape. Furthermore, fear decreased the total biomass and the diversity of the prey community and reinforced shifts in community composition towards smaller animals. These effects could be mediated by an increasing availability of refuges in the landscape. Under landscape changes, such as habitat loss and fragmentation, fear intensified negative effects on prey communities. Prey communities in risky environments were subject to a non-proportional diversity loss of up to 30% if fear was taken into account. Regarding habitat properties, I found that well-connected, large safe patches can reduce the negative consequences of habitat loss and fragmentation on prey communities. Including variation in risk perception among prey animals had consequences for prey space use. Animals with a high risk perception predominantly used safe areas of the landscape, while animals with a low risk perception preferred areas with a high food availability. On the community level, prey diversity was higher in heterogeneous landscapes of fear if individuals varied in their risk perception, compared to scenarios in which all individuals had the same risk perception.
Overall, my findings give a first, comprehensive assessment of the role of fear in shaping prey communities. The linkage between individual home range behavior and patterns at the community level allows for a mechanistic understanding of the underlying processes. My results underline the importance of the structure of the landscape of fear as a key driver of prey community responses, especially if the habitat is threatened by landscape changes. Furthermore, I show that individual landscapes of fear can improve our understanding of the consequences of trait variation on community structures. Regarding conservation and nature management, my results support calls for modern conservation approaches that go beyond single species and address the protection of biotic interactions.
Background
Postoperative delirium is a common disorder in older adults that is associated with higher morbidity and mortality, prolonged cognitive impairment, development of dementia, higher institutionalization rates, and rising healthcare costs. The probability of delirium after surgery increases with patients’ age, with pre-existing cognitive impairment, and with comorbidities, and its diagnosis and treatment depend on the medical staff’s knowledge of diagnostic criteria, risk factors, and treatment options. In this study, we will investigate whether a cross-sectoral and multimodal intervention for preventing delirium can reduce the prevalence of delirium and postoperative cognitive decline (POCD) in patients older than 70 years undergoing elective surgery. Additionally, we will analyze whether the intervention is cost-effective.
Methods
The study will be conducted at five medical centers (with two or three surgical departments each) in the southwest of Germany. The study employs a stepped-wedge design with cluster randomization of the medical centers. Measurements are performed at six consecutive time points: preadmission, preoperatively, and postoperatively with daily delirium screening up to day 7, plus POCD evaluations at 2, 6, and 12 months after surgery. The recruitment goal is to enroll 1500 patients older than 70 years undergoing elective operative procedures (cardiac, thoracic, vascular, proximal large joint and spine, genitourinary, gastrointestinal, and general elective surgery procedures).
Discussion
Results of the trial should form the basis of future standards for preventing delirium and POCD in surgical wards. Key aims are the improvement of patient safety and quality of life, as well as the reduction of the long-term risk of conversion to dementia. Furthermore, from an economic perspective, we expect benefits and decreased costs for hospitals, patients, and healthcare insurances.
Trial registration
German Clinical Trials Register, DRKS00013311. Registered on 10 November 2017.
The Collatz conjecture is a number-theoretical problem which has puzzled countless researchers using myriad approaches. Presently, there are scarcely any methodologies to describe and treat the problem from the perspective of the Algebraic Theory of Automata. Such an approach is promising with respect to facilitating the comprehension of the Collatz sequence’s "mechanics". The systematic technique of a state machine is simpler and can be fully described by algebraic means.
The current gap in research forms the motivation behind the present contribution. The present authors are convinced that exploring the Collatz conjecture in an algebraic manner, relying on findings and fundamentals of Graph Theory and Automata Theory, will simplify the problem as a whole.
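The automata-theoretic treatment itself goes beyond an abstract, but the underlying transition function can be written down directly; a minimal Python sketch of the Collatz map viewed as a state machine over the natural numbers:

    def collatz_step(n):
        # One transition of the Collatz 'machine': the successor state.
        return n // 2 if n % 2 == 0 else 3 * n + 1

    def trajectory(n):
        # States visited until the conjectured absorbing cycle at 1.
        states = [n]
        while n != 1:
            n = collatz_step(n)
            states.append(n)
        return states

    # A finite slice of the (infinite) state-transition graph.
    edges = {n: collatz_step(n) for n in range(1, 20)}
    print(trajectory(7))
    # [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]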
This thesis investigates whether multilingual speakers’ use of grammatical constraints in an additional language (La) is affected by the native (L1) and non-native grammars (L2) of their linguistic repertoire.
Previous studies have used untimed measures of grammatical performance to show that L1 and L2 grammars affect the initial stages of La acquisition. This thesis extends this work by examining whether speakers at intermediate levels of La proficiency, who demonstrate mature untimed/offline knowledge of the target La constraints, are differentially affected by their L1 and L2 knowledge when they comprehend sentences under processing pressure. For this purpose, several groups of La German speakers were tested on word order and agreement phenomena using online/timed measures of grammatical knowledge. Participants had mirror-image distributions of their prior languages: they were either L1 English/L2 Spanish speakers or L1 Spanish/L2 English speakers. Crucially, in half of the phenomena the target La constraint aligned with English but not with Spanish, while in the other half it aligned with Spanish but not with English. The results show that the L1 grammar plays a major role in the use of La constraints under processing pressure, as participants displayed increased sensitivity to La constraints when these aligned with their L1, and reduced sensitivity when they did not. Further, in specific phenomena in which the L2 and La constraints aligned, increased L2 proficiency resulted in enhanced sensitivity to the La constraint. These findings suggest that both native and non-native grammars affect how speakers use La grammatical constraints under processing pressure. However, L1 and L2 grammars differentially influence participants’ performance: while L1 constraints seem to be reliably recruited to cope with the processing demands of real-time La use, proficiency in an L2 can enhance sensitivity to La constraints only in specific circumstances, namely when the L2 and La constraints align.
PLATON
(2019)
Lesson planning is both an important and demanding task, especially as part of teacher training. This paper presents the requirements for a lesson planning system and evaluates existing systems against these requirements. One major drawback of existing software tools is that most are limited to a text- or form-based representation of lesson designs. In this article, a new approach with a graphical, time-based representation and (automatic) analysis methods is proposed, and the system architecture and domain model are described in detail. The approach is implemented in an interactive, web-based prototype called PLATON, which additionally supports the management of lessons in units as well as the modelling of teacher- and student-generated resources. The prototype was evaluated in a study with 61 prospective teachers (bachelor’s and master’s preservice teachers as well as teacher trainees in post-university teacher training) in Berlin, Germany, with a focus on usability. The results show that this approach proved usable for lesson planning and offers positive effects for the perception of time and for self-reflection.
Bienenfresserortungsversuch
(2019)
On a planetary scale, human populations need to adapt to both socio-economic and environmental problems amidst rapid global change. This holds true for coupled human-environment (socio-ecological) systems in rural and urban settings alike; two examples are drylands and urban coasts. Such socio-ecological systems have a global distribution. Therefore, advancing the knowledge base for identifying socio-ecological adaptation needs with local vulnerability assessments alone is infeasible: the systems cover vast areas, while funding, time, and human resources for local assessments are limited. They are lacking in low- and middle-income countries (LICs and MICs) in particular.
But places in a specific socio-ecological system are not only unique and complex; they also exhibit similarities. A global patchwork of local rural drylands vulnerability assessments of human populations to socio-ecological and environmental problems has already been reduced to a limited number of problem structures which typically cause vulnerability. However, the question arises whether this is also possible in urban socio-ecological systems. The question also arises whether these typologies provide added value in research beyond global change. Finally, the methodology employed for drylands needs refining and standardizing to increase its uptake in the scientific community. In this dissertation, I set out to fill these three gaps in research.
The geographical focus in my dissertation is on LICs and MICs, which generally have lower capacities to adapt, and greater adaptation needs, regarding rapid global change. Using a spatially explicit indicator-based methodology, I combine geospatial and clustering methods to identify typical configurations of key factors in case studies causing vulnerability to human populations in two specific socio-ecological systems. Then I use statistical and analytical methods to interpret and appraise both the typical configurations and the global typologies they constitute.
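For illustration, the clustering step of such an indicator-based typology could be sketched as follows; k-means on standardized indicators is a stand-in here, and the grid, the indicators, and the toy data are assumptions rather than the dissertation's exact setup (the choice of eight clusters mirrors the eight drylands profiles reported below).

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(5)
    # Rows = grid cells of the socio-ecological system,
    # columns = vulnerability indicators (toy data).
    X = rng.normal(size=(5_000, 7))

    km = KMeans(n_clusters=8, n_init=10, random_state=0)
    labels = km.fit_predict(StandardScaler().fit_transform(X))
    # Each label is one typical problem structure
    # ('vulnerability profile') assigned to a grid cell.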
First, I improve the indicator-based methodology and then reanalyze typical global problem structures of socio-ecological drylands vulnerability using seven indicator datasets. The reanalysis confirms the key tenets and produces a more realistic and nuanced typology of eight spatially explicit problem structures, or vulnerability profiles: two new profiles with typically high natural resource endowment emerge, in which overpopulation has led to medium or high soil erosion. Second, I determine whether the new drylands typology and its socio-ecological vulnerability concept advance a thematically linked scientific debate in human security studies: what drives violent conflict in drylands? The typology is a much better predictor of conflict distribution and incidence in drylands than the regression models typically used in peace research. Third, I analyze global problem structures typically causing vulnerability in an urban socio-ecological system, the rapidly urbanizing coastal fringe (RUCF), using eleven indicator datasets. The RUCF also shows a robust typology, and its seven profiles reveal huge asymmetries in vulnerability and adaptive capacity. The fastest population increase, lowest income, most ineffective governments, most prevalent poverty, and lowest adaptive capacity are all typically stacked in two profiles in LICs. This shows that, beyond local case studies, tropical cyclones and/or coastal flooding are stalling neither rapid population growth nor urban expansion in the RUCF. I propose entry points for scaling up successful vulnerability reduction strategies in coastal cities within the same vulnerability profile.
This dissertation shows that patchworks of local vulnerability assessments can be generalized to structure global socio-ecological vulnerabilities in both rural and urban socio-ecological systems according to typical problems. In terms of climate-related extreme events in the RUCF, conflicting problem structures and means to deal with them are threatening to widen the development gap between LICs and high-income countries unless successful vulnerability reduction measures are comprehensively scaled up. The explanatory power for human security in drylands warrants further applications of the methodology beyond global environmental change research in the future. Thus, analyzing spatially explicit global typologies of socio-ecological vulnerability is a useful complement to local assessments: The typologies provide entry points for where to consider which generic measures to reduce typical problem structures – including the countless places without local assessments. This can save limited time and financial resources for adaptation under rapid global change.
Address on the opening of the Alexander von Humboldt Season
in Quito, Ecuador, on 13 February 2019
(2019)
A standard approach to studying time-dependent stochastic processes is the power spectral density (PSD), an ensemble-averaged property defined as the Fourier transform of the autocorrelation function of the process in the asymptotic limit of long observation times, T → ∞. In many experimental situations one is able to garner only relatively few stochastic time series of finite T, such that practically neither an ensemble average nor the asymptotic limit T → ∞ can be achieved. To accommodate a meaningful analysis of such finite-length data, we here develop the framework of single-trajectory spectral analysis for one of the standard models of anomalous diffusion, scaled Brownian motion. We demonstrate that the frequency dependence of the single-trajectory PSD is exactly the same as for standard Brownian motion, which may lead one to the erroneous conclusion that the observed motion is normal-diffusive. However, a distinctive feature is shown to be provided by the explicit dependence on the measurement time T, and this ageing phenomenon can be used to deduce the anomalous diffusion exponent. We also compare our results to the single-trajectory PSD behaviour of another standard anomalous diffusion process, fractional Brownian motion, and work out the commonalities and differences. Our results represent an important step in establishing single-trajectory PSDs as an alternative (or complement) to analyses based on the time-averaged mean squared displacement.
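A Python sketch of the single-trajectory PSD for scaled Brownian motion, generated here from independent Gaussian increments whose variances reproduce the power-law mean squared displacement (a standard construction; the parameters are illustrative):

    import numpy as np

    def single_trajectory_psd(x, dt):
        # S(f, T) = |x~(f)|^2 / T for one finite trajectory of length T.
        T = len(x) * dt
        xf = np.fft.rfft(x) * dt              # discrete Fourier integral
        f = np.fft.rfftfreq(len(x), dt)
        return f[1:], np.abs(xf[1:]) ** 2 / T

    alpha, dt, N = 0.5, 1e-2, 2**16           # subdiffusive exponent
    t = np.arange(1, N + 1) * dt
    var = np.diff(t**alpha, prepend=0.0)      # increment variances
    x = np.cumsum(np.sqrt(var) *
                  np.random.default_rng(6).normal(size=N))

    f, S = single_trajectory_psd(x, dt)       # expect S ~ 1/f^2, as for BM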
We measure valence-to-core x-ray emission spectra of compressed crystalline GeO₂ up to 56 GPa and of amorphous GeO₂ up to 100 GPa. In a novel approach, we extract the Ge coordination number and mean Ge-O distances from the emission energy and the intensity of the Kβ'' emission line. The spectra of high-pressure polymorphs are calculated using the Bethe-Salpeter equation. Trends observed in the experimental and calculated spectra are found to match only when utilizing an octahedral model. The results reveal persistent octahedral Ge coordination with increasing distortion, similar to the compaction mechanism in the sequence of octahedrally coordinated crystalline GeO₂ high-pressure polymorphs.
Hantavirus assembly and budding are governed by the surface glycoproteins Gn and Gc. In this study, we investigated the glycoproteins of Puumala, the most abundant hantavirus species in Europe, using fluorescently labeled wild-type constructs and cytoplasmic tail (CT) mutants. We analyzed their intracellular distribution, co-localization and oligomerization, applying comprehensive live, single-cell fluorescence techniques, including confocal microscopy, imaging flow cytometry, anisotropy imaging and Number&Brightness analysis. We demonstrate that Gc is significantly enriched in the Golgi apparatus in the absence of other viral components, while Gn is mainly restricted to the endoplasmic reticulum (ER). Importantly, upon co-expression both glycoproteins were found in the Golgi apparatus. Furthermore, we show that an intact CT of Gc is necessary for efficient Golgi localization, while the CT of Gn influences protein stability. Finally, we found that Gn assembles into higher-order homo-oligomers, mainly dimers and tetramers, in the ER, while Gc was present as a mixture of monomers and dimers within the Golgi apparatus. Our findings suggest that PUUV Gc is the driving factor of the targeting of Gc and Gn to the Golgi region, while Gn possesses a significantly stronger self-association potential.
Cold-regulated (COR) 15A is an intrinsically disordered protein (IDP) from Arabidopsis thaliana important for freezing tolerance. During freezing-induced cellular dehydration, COR15A transitions from a disordered to a mostly alpha-helical structure. We tested whether mutations that increase the helicity of COR15A also increase its protective function. Conserved glycine residues were identified and mutated to alanine. Nuclear magnetic resonance (NMR) spectroscopy was used to identify residue-specific changes in helicity for wildtype (WT) COR15A and the mutants. Circular dichroism (CD) spectroscopy was used to monitor the coil-helix transition in response to increasing concentrations of trifluoroethanol (TFE) and ethylene glycol. The impact of the COR15A mutants on the stability of model membranes during a freeze-thaw cycle was investigated by fluorescence spectroscopy. The results of these experiments showed that the mutants had a higher content of alpha-helical structure and that the increased alpha-helicity improved membrane stabilization during freezing. Comparison of the TFE- and ethylene glycol-induced coil-helix transitions supports our conclusion that increasing the transient helicity of COR15A in aqueous solution increases its ability to stabilize membranes during freezing. Altogether, our results suggest that the conserved glycine residues are important for maintaining the disordered structure of COR15A but are also compatible with the formation of alpha-helical structure during freezing-induced dehydration.
This dissertation is concerned with the relation between qualitative phonological organization in the form of syllabic structure and continuous phonetics, that is, the spatial and temporal dimensions of vocal tract action that express syllabic structure. The main claim of the dissertation is twofold. First, we argue that syllabic organization exerts multiple effects on the spatio-temporal properties of the segments that partake in that organization. That is, there is no unique or privileged exponent of syllabic organization. Rather, syllabic organization is expressed in a pleiotropy of phonetic indices. Second, we claim that a better understanding of the relation between qualitative phonological organization and continuous phonetics is reached when one considers how the string of segments (over which the nature of the phonological organization is assessed) responds to perturbations (scaling of phonetic variables) of localized properties (such as durations) within that string. Specifically, variation in phonetic variables and more specifically prosodic variation is a crucial key to understanding the nature of the link between (phonological) syllabic organization and the phonetic spatio-temporal manifestation of that organization. The effects of prosodic variation on segmental properties and on the overlap between the segments, we argue, offer the right pathway to discover patterns related to syllabic organization. In our approach, to uncover evidence for global organization, the sequence of segments partaking in that organization as well as properties of these segments or their relations with one another must be somehow locally varied. The consequences of such variation on the rest of the sequence can then be used to unveil the span of organization. When local perturbations to segments or relations between adjacent segments have effects that ripple through the rest of the sequence, this is evidence that organization is global. If instead local perturbations stay local with no consequences for the rest of the whole, this indicates that organization is local.
In this work we investigated ultrafast demagnetization in a Heusler alloy. This material is a half-metal and exists in a ferromagnetic phase. A special feature of the investigated alloy is its electronic band structure, which leads to a characteristic density of states: majority electrons form a metallic structure, while minority electrons exhibit a gap near the Fermi level, as in a semiconductor. This particularity makes the material a good model system for proof-of-principle studies of demagnetization. Using pump-probe experiments, we carried out time-resolved measurements to determine demagnetization times. For pumping we used ultrashort laser pulses with a duration of around 100 fs, employing two excitation regimes with two different wavelengths, namely 400 nm and 1240 nm. By decreasing the photon energy to the size of the minority-electron gap, we explored the effect of the gap on the demagnetization dynamics. In this work we used, for the first time, an optical parametric amplifier (OPA) for the generation of long-wavelength laser irradiation and tested it at the FEMTOSPEX beamline of the BESSY II electron storage ring. With this new technique we measured wavelength-dependent demagnetization dynamics. We found that the demagnetization time correlates with the photon energy of the excitation pulse: higher photon energy leads to faster demagnetization in our material. We attribute this result to the existence of the energy gap for minority electrons and explain it in terms of Elliott-Yafet scattering events. Additionally, we applied a new probe method for the magnetization state and verified its effectiveness: the well-known XMCD (X-ray magnetic circular dichroism), which we adapted for measurements in reflection geometry. Static experiments confirmed that the purely electronic dynamics can be separated from the magnetic dynamics. We used photon energies fixed at the L3 edges of the corresponding elements with circular polarization; the appropriate incidence angle was determined from static measurements. Using this probe method in dynamic measurements, we explored the electronic and magnetic dynamics in this alloy.
Determining the optimal grid resolution for topographic analysis on an airborne lidar dataset
(2019)
Digital elevation models (DEMs) are a gridded representation of the surface of the Earth and typically contain uncertainties due to data collection and processing. Slope and aspect estimates on a DEM contain errors and uncertainties inherited from the representation of a continuous surface as a grid (referred to as truncation error; TE) and from any DEM uncertainty. We analyze in detail the impacts of TE and propagated elevation uncertainty (PEU) on slope and aspect.
Using synthetic data as a control, we define functions to quantify both TE and PEU for arbitrary grids. We then develop a quality metric which captures the combined impact of both TE and PEU on the calculation of topographic metrics. Our quality metric allows us to examine the spatial patterns of error and uncertainty in topographic metrics and to compare calculations on DEMs of different sizes and accuracies.
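To make the computation concrete, here is an illustrative sketch (ours, not the authors' code): slope and aspect from a gridded DEM via central differences, plus a simple combined quality value. The function names, the aspect convention, and the quadrature combination of TE and PEU are assumptions for illustration only.

```python
import numpy as np

def slope_aspect(dem, cell_size):
    """Slope (deg) and aspect (deg) from a gridded DEM via central differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)  # elevation change per metre
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    # one common aspect convention (degrees clockwise from north); GIS packages differ
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

def combined_error(te, peu):
    """Illustrative quality metric: treat truncation error and propagated
    elevation uncertainty as independent error sources (an assumption)."""
    return np.sqrt(te**2 + peu**2)

# toy example: a noisy tilted plane on a 1 m grid
x, y = np.meshgrid(np.arange(100), np.arange(100))
dem = 0.1 * x + 0.05 * y + np.random.normal(0, 0.05, x.shape)
slope, aspect = slope_aspect(dem, cell_size=1.0)
print(slope.mean(), aspect.mean())
```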
Using lidar data with a point density of ∼10 pts m−2 covering Santa Cruz Island (SCI) in southern California, we are able to generate DEMs and uncertainty estimates at several grid resolutions. Slope (aspect) errors on the 1 m dataset are on average 0.3∘ (0.9∘) from TE and 5.5∘ (14.5∘) from PEU. We calculate an optimal DEM resolution of 4 m for our SCI lidar dataset that minimizes the error bounds on topographic metric calculations due to the combined influence of TE and PEU for both slope and aspect calculations over the entire island. Average slope (aspect) errors from the 4 m DEM are 0.25∘ (0.75∘) from TE and 5∘ (12.5∘) from PEU. While the smallest grid resolution possible from the high-density SCI lidar is not necessarily optimal for calculating topographic metrics, high point-density data are essential for measuring DEM uncertainty across a range of resolutions.
Splits and Birds
(2019)
There is evidence both for mental number representations along a horizontal mental number line with larger numbers to the right of smaller numbers (for Western cultures) and for a physically grounded, vertical representation where “more is up.” Few studies have compared effects in the horizontal and vertical dimensions, and none so far have combined both dimensions within a single paradigm in which numerical magnitude was task-irrelevant and neither dimension was primed by a response dimension. Here, we investigated number representations over both dimensions, building on findings that mental representations of numbers and space co-activate each other. In a Go/No-go experiment, participants were auditorily primed with a relatively small or large number and then visually presented with quasi-randomly distributed distractor symbols and one Arabic target number (in Go trials only). Participants pressed a central button whenever they detected the target number and otherwise refrained from responding. Responses were not more efficient when small numbers were presented to the left and large numbers to the right. However, results indicated that large numbers were associated with upper space more strongly than small numbers. This suggests that in two-dimensional space, when no response dimension is given, numbers are conceptually associated with vertical, but not horizontal, space.
Hydrometeorological hazards caused losses of approximately 110 billion U.S. dollars in 2016 worldwide. Current damage estimations neither consider the uncertainties in a comprehensive way nor are they consistent between spatial scales. Aggregated land use data are used at larger spatial scales, although detailed exposure data at the object level, such as openstreetmap.org, are becoming increasingly available across the globe. We present a probabilistic approach for object-based damage estimation which represents uncertainties and is fully scalable in space. The approach is applied to, and validated on, company damage data from the flood of 2013 in Germany. Damage estimates are more accurate compared to damage models using land use data, and the estimation works reliably at all spatial scales. It can therefore also be used for pre-event analysis and risk assessments. This method takes hydrometeorological damage estimation and risk assessments to the next level, making damage estimates and their uncertainties fully scalable in space, from the object to the country level, and enabling the exploitation of new exposure data.
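The core idea of probabilistic, object-based estimation can be sketched as follows (our illustration with invented distributions and values, not the authors' model): each exposed object carries a damage distribution rather than a point estimate, and aggregation to any spatial scale is simply summation of the samples.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical exposure: asset value per object and local water depth (m)
values = np.array([2.5e5, 1.2e6, 8.0e4, 4.3e5])
depths = np.array([0.4, 1.1, 0.2, 0.8])

n_samples = 10_000
# uncertain damage ratio per object: depth-dependent mean, Beta-distributed
mean_ratio = np.clip(0.3 * depths, 0.01, 0.99)
a = mean_ratio * 10.0
b = (1.0 - mean_ratio) * 10.0
ratios = rng.beta(a[:, None], b[:, None], size=(len(values), n_samples))

# object-level damage samples; scaling up = summing over objects
damage = values[:, None] * ratios
total = damage.sum(axis=0)
print(f"median: {np.median(total):,.0f}, 90% interval: "
      f"{np.percentile(total, 5):,.0f} to {np.percentile(total, 95):,.0f}")
```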
Understanding and quantifying the total economic impacts of flood events is essential for flood risk management and adaptation planning. Yet, detailed estimations of joint direct and indirect flood-induced economic impacts are rare. In this study, an innovative modeling procedure for the joint assessment of short-term direct and indirect economic flood impacts is introduced. The procedure is applied to 19 economic sectors in eight federal states of Germany after the flood events of 2013. The assessment of the direct economic impacts is object-based and considers uncertainties associated with the hazard, the exposed objects and their vulnerability. The direct economic impacts are then coupled to a supply-side input-output model to estimate the indirect economic impacts. The procedure provides distributions of direct and indirect economic impacts which capture the associated uncertainties. The distributions of the direct economic impacts in the federal states are plausible when compared to reported values. The ratio between indirect and direct economic impacts shows that the sectors Manufacturing and Financial and Insurance Activities suffered the most from indirect economic impacts. These ratios also indicate that indirect economic impacts can be almost as high as direct economic impacts. They differ strongly between the economic sectors, indicating that the application of a single factor as a proxy for the indirect impacts of all economic sectors is not appropriate.
The immense popularity of online communication services in the last decade has not only upended our lives (with news spreading like wildfire on the Web, presidents announcing their decisions on Twitter, and the outcome of political elections being determined on Facebook) but also dramatically increased the amount of data exchanged on these platforms. Therefore, if we wish to understand the needs of modern society better and want to protect it from new threats, we urgently need more robust, higher-quality natural language processing (NLP) applications that can recognize such necessities and menaces automatically, by analyzing uncensored texts. Unfortunately, most NLP programs today have been created for standard language, as we know it from newspapers, or, in the best case, adapted to the specifics of English social media.
This thesis reduces the existing deficit by entering the new frontier of German online communication and addressing one of its most prolific forms—users’ conversations on Twitter. In particular, it explores the ways and means by which people express their opinions on this service, examines current approaches to automatic mining of these feelings, and proposes novel methods, which outperform state-of-the-art techniques. For this purpose, I introduce a new corpus of German tweets that have been manually annotated with sentiments, their targets and holders, as well as lexical polarity items and their contextual modifiers. Using these data, I explore four major areas of sentiment research: (i) generation of sentiment lexicons, (ii) fine-grained opinion mining, (iii) message-level polarity classification, and (iv) discourse-aware sentiment analysis. In the first task, I compare three popular groups of lexicon generation methods: dictionary-, corpus-, and word-embedding–based ones, finding that dictionary-based systems generally yield better polarity lists than the last two groups. Apart from this, I propose a linear projection algorithm whose results surpass many existing automatically generated lexicons. Afterwards, in the second task, I examine two common approaches to automatic prediction of sentiment spans, their sources, and targets: conditional random fields (CRFs) and recurrent neural networks, obtaining higher scores with the former model and improving these results even further by redefining the structure of CRF graphs. When dealing with message-level polarity classification, I juxtapose three major sentiment paradigms: lexicon-, machine-learning–, and deep-learning–based systems, and try to unite the first and last of these method groups by introducing a bidirectional neural network with lexicon-based attention. Finally, in order to make the new classifier aware of microblogs’ discourse structure, I let it separately analyze the elementary discourse units (EDUs) of each tweet and infer the overall polarity of a message from the scores of its EDUs with the help of two new approaches: latent-marginalized CRFs and a Recursive Dirichlet Process.
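As a toy illustration of the lexicon-based paradigm, the simplest of the compared method groups, consider the following sketch (ours; the word lists and negation window are invented, and the thesis' actual systems are far more sophisticated):

```python
# toy lexicon-based polarity classifier with a simple negation window
POLARITY = {"gut": 1.0, "super": 1.5, "schlecht": -1.0, "furchtbar": -1.5}
NEGATORS = {"nicht", "kein", "niemals"}

def message_polarity(tokens, window=3):
    score = 0.0
    for i, tok in enumerate(tokens):
        if tok in POLARITY:
            # flip polarity if a negator occurs shortly before the item
            negated = any(t in NEGATORS for t in tokens[max(0, i - window):i])
            score += -POLARITY[tok] if negated else POLARITY[tok]
    if score > 0:
        return "positive"
    return "negative" if score < 0 else "neutral"

print(message_polarity("das war nicht schlecht".split()))  # -> positive
```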
The in-depth understanding of charge carrier photogeneration and recombination mechanisms in organic solar cells is still an ongoing effort. In donor:acceptor (bulk) heterojunction organic solar cells, charge photogeneration and recombination are inter-related via the kinetics of charge transfer states, be they singlet or triplet states. Although high charge-photogeneration quantum yields are achieved in many donor:acceptor systems, only very few low-mobility systems show bimolecular recombination that is significantly reduced relative to the rate of free carrier encounters. This is a serious limitation for the industrialization of organic solar cells, in particular when aiming at thick active layers. Herein, a meta-analysis of the device performance of numerous bulk heterojunction organic solar cells is presented, for which field-dependent photogeneration, charge carrier mobility, and fill factor are determined. A “spin-related factor” is introduced that depends on the ratio of the back electron transfer rate of triplet charge transfer (CT) states to the decay rate of singlet CT states. It is shown that this factor links the recombination reduction factor to the charge-generation efficiency. As a consequence, it is only in systems with very efficient charge generation and very fast CT dissociation that free carrier recombination is strongly suppressed, regardless of the spin-related factor.
Undisclosed desires
(2019)
Following decades of quality management featuring in higher education settings, questions regarding its implementation, impact and outcomes remain. Indeed, leaving aside anecdotal case studies and value-laden documentaries of best practice, current research still knows very little about the implementation of quality management in teaching and learning within higher education institutions. Referring to data collected from German higher education institutions in which a quality management department or functional equivalent was present, this article theorises and provides evidence for the supposition that the implementation of quality management follows two implicit logics. Specifically, it tends either towards the logic of appropriateness or, contrastingly, towards the logic of consequentialism. This study’s results also suggest that quality managers’ socialisation is related to these logics and that it influences their views on quality management in teaching and learning.
The German Sonderweg thesis has been discarded in most research fields. Yet with regard to the military, things differ: all conflicts before the Second World War are interpreted as a prelude to the war of extermination of 1939–1945. This article looks specifically at the Franco-Prussian War of 1870–71 and German behaviour vis-à-vis regular combatants, civilians and irregular guerrilla fighters, the so-called francs-tireurs. The author argues that the counter-measures were not exceptional for nineteenth-century warfare and also shows how selective reading of the existing secondary literature has distorted our view of the war.
The fabrication of 1D nanostrands composed of stimuli-responsive microgels was demonstrated in this work. Microgels are well-known materials able to respond to various stimuli from the outer environment. Since these microgels respond to an external stimulus via a volume change, a targeted mechanical response can be achieved. By carefully choosing the right composition of the polymer matrix, microgels can be designed to react precisely to the targeted stimuli (e.g. drug delivery via pH and temperature changes, or selective contractions through changes in electrical current).
In this work, the aim was to create flexible nano-filaments capable of fast anisotropic contractions similar to muscle filaments. For the fabrication of such filaments or strands, nanostructured templates (PDMS wrinkles) were chosen due to their facile, low-cost fabrication and the versatile tunability of their dimensions. Additionally, wrinkling is a well-known lithography-free method which enables the fabrication of nanostructures in a reproducible manner and with a high long-range periodicity.
In Chapter 2.1, it was shown for the first time that microgels, as soft matter particles, can be aligned into densely packed microgel arrays of various lateral dimensions. The alignment of microgels with different compositions (e.g. VCL/AAEM, NIPAAm, NIPAAm/VCL and charged microgels) was demonstrated using different assembly techniques (e.g. spin-coating, template-confined molding). One experimental parameter was kept constant: the SiOx surface composition of the templates and substrates (e.g. oxidized PDMS wrinkles, Si-wafers and glass slides). It was shown that the fabrication of nanoarrays was feasible with all tested microgel types. Although the microgels exhibited different deformability when aligned on a flat surface, they retained their thermo-responsivity and swelling behavior.
Towards the fabrication of 1D microgel strands, interparticle connectivity was sought. This was achieved via different cross-linking methods (i.e. cross-linking via UV irradiation and host-guest complexation), discussed in Chapter 2.2. The microgel arrays created by the different assembly methods and microgel types were tested for their cross-linking suitability. It was observed that NIPAAm-based microgels cannot be cross-linked with UV light. Furthermore, it was found that these microgels exhibit a strong surface-particle interaction and therefore could not be detached from the given substrates. In contrast, VCL/AAEM-based microgels could both be UV cross-linked, based on the keto-enol tautomerism of the AAEM comonomer, and be detached from the substrate due to their lower adhesion energy towards SiOx surfaces. With VCL/AAEM microgels, long one-dimensional microgel strands could be re-dispersed in water for further analysis. It was also shown that at least one lateral dimension of the freely dispersed 1D microgel strands is easily controllable by adjusting the wavelength of the wrinkled template. For further work, only VCL/AAEM-based microgels were used, to focus on the main aim of this work, i.e. the fabrication of 1D microgel nanostrands.
As an alternative to the unspecific and harsh UV cross-linking, host-guest complexation via diazobenzene cross-linkers and cyclodextrin hosts was explored. The idea behind this approach was to enable a future construction-kit-like approach through the incorporation of cyclodextrin comonomers into a broad variety of particle systems (e.g. microgels, nanoparticles). For this purpose, VCL/AAEM microgels were copolymerized with different amounts of mono-acrylate-functionalized β-cyclodextrin (CD). After the cross-linking capability had been successfully tested in solution, cross-linking of aligned VCL/AAEM/CD microgels was attempted. Although the cross-linking worked well, once the single arrays came into contact with each other, they agglomerated. Residual amounts of mono-complexed diazobenzene linkers were suspected as the reason for this behavior. Thus, end-capping strategies were tried (e.g. excess amounts of β-cyclodextrin and coverage with azobenzene-functionalized AuNPs) but were unsuccessful. On closer consideration, entropy effects were identified that favor the release of complexed diazobenzene linkers, leading to agglomeration. To circumvent this entropy-driven effect, a multifunctional polymer with 50% azobenzene groups (Harada polymer) was used. First experiments with this polymer showed promising results in the form of less pronounced agglomeration (Figure 77); this approach could therefore be pursued in the future. In this chapter, it was found that, in contrast to pearl-necklace and ribbon-like formations, particle alignment in a zigzag formation provided the best compromise between stability in dispersion (see Figure 44a and Figure 51) and sufficient flexibility.
For this reason, microgel strands in zigzag formation were used for the motion analysis described in Chapter 2.3. The aim was to observe the properties of unrestrained microgel strands in solution (e.g. diffusion behavior, rotational properties and, ideally, anisotropic contraction after a temperature increase). Initially, 1D microgel strands were manipulated via AFM in a liquid cell setup. It could be observed that the strands required a higher load force than single microgels to be detached from the surface. However, with the AFM it was not possible to detach the strands in a controllable manner; attempts resulted in the complete removal of single microgel particles or in tearing the strands off the surface. For this reason, confocal microscopy was used to observe the motion behavior of unrestrained microgel strands in solution. Furthermore, to hinder adsorption of the strands, it was found beneficial to coat the surface of the substrates with a repulsive polymer film. Confocal and wide-field microscopy videos showed that the microgel strands exhibit translational and rotational diffusive motion in solution without perceptible bending. Unfortunately, with these methods the detection of the anisotropic stimuli-responsive contraction of the freely moving microgel strands was not possible. To summarize, the flexibility of the microgel strands is more comparable to the mechanical behavior of a semi-flexible cable than to that of a yarn. The strands studied here consist of dozens or even hundreds of discrete submicron units strung together by cross-linking, having few parallels in nanotechnology.
With the insights gained in this work on microgel-surface interactions, a targeted functionalization of the template and substrate surfaces can in the future be conducted to actively prevent unwanted microgel adsorption for a given microgel system (e.g. PVCL and polystyrene coatings). This measure would make the discussed alignment methods more versatile. As shown herein, the assembly methods enable versatile microgel alignment (e.g. microgel meshes, double and triple strands). Going further, one could use more complex templates (e.g. ceramic rhombs and star-shaped wrinkles, Figure 14) to expand the possibilities of microgel alignment and to precisely control aspect ratios (e.g. microgel rods with homogeneous size distributions).
Surface modification by polyzwitterions of the sulfabetaine-type, and their resistance to biofouling
(2019)
Films of zwitterionic polymers are increasingly explored for conferring fouling resistance to materials. Yet the structural diversity of polyzwitterions explored so far is rather limited, and clear structure-property relationships are missing. Therefore, we synthesized a series of new polyzwitterions combining ammonium and sulfate groups in their betaine moieties, so-called poly(sulfabetaine)s. Their chemical structures were varied systematically, the monomers carrying methacrylate, methacrylamide, or styrene moieties as polymerizable groups. High molar mass homopolymers were obtained by free radical polymerization. Although their solubilities in most solvents were very low, brine and lower fluorinated alcohols were effective solvents in most cases. A set of sulfabetaine copolymers containing about 1 mol % (based on the repeat units) of reactive benzophenone methacrylate was prepared, spin-coated onto solid substrates, and photo-cured. The resistance of these films against nonspecific adsorption of two model proteins (bovine serum albumin (BSA) and fibrinogen) was explored and directly compared with a set of references. The various polyzwitterions reduced protein adsorption strongly compared to films of poly(n-butyl methacrylate) that were used as a negative control. The poly(sulfabetaine)s generally showed even somewhat higher anti-fouling activity than their poly(sulfobetaine) analogues, though the detailed efficacies depended on the individual polymer-protein pairs. The best samples approach the excellent performance of a poly(oligo(ethylene oxide) methacrylate) reference.
The impact of the orientation of zwitterionic groups, with respect to the polymer backbone, on the antifouling performance of thin hydrogel films made of polyzwitterions is explored. In an extension of the recent discussion about differences in the behavior of polymeric phosphatidylcholines and choline phosphates, a quasi-isomeric set of three poly(sulfobetaine methacrylate)s is designed for this purpose. The design is based on the established monomer 3-[N-2-(methacryloyloxy)ethyl-N,N-dimethyl]ammonio-propane-1-sulfonate and two novel sulfobetaine methacrylates, in which the positions of the cationic and anionic groups relative to the polymerizable group, and thus also to the polymer backbone, are altered. The effect of the varied segmental dipole orientation on water solubility, wetting behavior by water, and fouling resistance is compared. As model systems, the adsorption of the model proteins bovine serum albumin (BSA), fibrinogen, and lysozyme onto films of the various polyzwitterion surfaces is studied, as well as the settlement of a diatom (Navicula perminuta) and barnacle cyprids (Balanus improvisus) as representatives of typical marine fouling communities. The results demonstrate the important role of the zwitterionic group's orientation in the polymer behavior and fouling resistance.
In natural heterogeneous environments, the fitness of animals is strongly influenced by the availability and composition of food. Food quantity and biochemical quality constraints may affect individual traits of consumers differently, mediating fitness response variation within and among species. Using a multifactorial experimental approach, we assessed population growth rate, fecundity, and survival of six strains of the two closely related freshwater rotifer species Brachionus calyciflorus sensu stricto and Brachionus fernandoi. To this end, rotifers were fed low and high concentrations of three algal species differing in their biochemical food quality. Additionally, we explored the potential of a single limiting biochemical nutrient to mediate variation in the population growth response; here, rotifers were fed a sterol-free alga, which we supplemented with cholesterol-containing liposomes. Co-limitation by food quantity and biochemical food quality resulted in differences in population growth rates among strains, but not between species, although effects on fecundity and survival differed between species. The effect of cholesterol supplementation on population growth was strain-specific but not species-specific. We show that fitness response variation within and among species can be mediated by biochemical food quality. Dietary constraints may thus act as evolutionary drivers on physiological traits of consumers, which may have strong implications for various ecological interactions.
This is a publication-based dissertation comprising three original research studies (one published, one submitted and one ready for submission; status March 2019). The dissertation introduces a generic computer model as a tool to investigate the behaviour and population dynamics of animals in cyclic environments. The model is further employed for analysing how migratory birds respond to various scenarios of altered food supply under global change. Here, ecological and evolutionary time scales are considered, as well as the biological constraints and trade-offs the individual faces, which ultimately shape response dynamics at the population level. Further, the effect of fine-scale temporal patterns in resource supply is studied, which is challenging to achieve experimentally. My findings predict population declines, altered behavioural timing and negative carry-over effects arising in migratory birds under global change. They thus stress the need for intensified research on how ecological mechanisms are affected by global change and for effective conservation measures for migratory birds. The open-source modelling software created for this dissertation can now be used for other taxa and related research questions. Overall, this thesis improves our mechanistic understanding of the impacts of global change on migratory birds as one prerequisite to comprehend ongoing global biodiversity loss. The research results are discussed in a broader ecological and scientific context in a concluding synthesis chapter.
Culturally diverse schools may constitute natural arenas for training crucial intercultural skills. We hypothesized that a classroom cultural diversity climate fostering contact and cooperation and multiculturalism, but not a climate fostering color‐evasion, would be positively related to adolescents’ intercultural competence. Adolescents in North Rhine‐Westphalia (N = 631, Mage = 13.69 years, 49% of immigrant background) and Berlin (N = 1,335, Mage = 14.69 years, 52% of immigrant background) in Germany reported their perceptions of the classroom cultural diversity climate and completed quantitative and qualitative measures assessing their intercultural competence. Multilevel structural equation models indicate that contact and cooperation, multiculturalism, and, surprisingly, also color‐evasion (as in emphasizing a common humanity), were positively related to the intercultural competence of immigrant and non‐immigrant background students. We conclude that all three aspects of the classroom climate are uniquely related to aspects of adolescents’ intercultural competence and that none of them may be sufficient on their own.
Over the years, we developed highly selective fluorescent probes for K+ in water, which show K+-induced fluorescence intensity enhancements, lifetime changes, or a ratiometric behavior at two emission wavelengths (cf. Scheme 1, K1-K4). In this paper, we introduce selective fluorescent probes for Na+ in water, which also show Na+-induced signal changes, analyzed by diverse fluorescence techniques. Initially, we synthesized the fluorescent probes 2, 4, 5, 6 and 10 for a fluorescence analysis based on intensity enhancements at one wavelength, varying the Na+-responsive ionophore unit and the fluorophore moiety to adjust different Kd values for intra- or extracellular Na+ analysis. Thus, we found that 2, 4 and 5 are Na+-selective fluorescent tools which are able to measure physiologically important Na+ levels at wavelengths above 500 nm. Secondly, we developed the fluorescent probes 7 and 8 to analyze precise Na+ levels by fluorescence lifetime changes. Here, only 8 (Kd = 106 mM) is a capable fluorescent tool to measure Na+ levels in blood samples by lifetime changes. Finally, the fluorescent probe 9 was designed to show a Na+-induced ratiometric fluorescence behavior at two emission wavelengths. As desired, 9 (Kd = 78 mM) showed a ratiometric fluorescence response towards Na+ ions and is a suitable tool to measure physiologically relevant Na+ levels from the intensity change at the two emission wavelengths of 404 nm and 492 nm.
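For an intensity-based probe with 1:1 binding stoichiometry, the free Na+ concentration can in principle be back-calculated from a calibrated fluorescence signal. A minimal sketch under that standard binding assumption (the calibration values are invented; only the Kd of 106 mM is quoted from the abstract above):

```python
def na_from_intensity(f, f_min, f_max, kd_mm=106.0):
    """[Na+] in mM from fluorescence intensity, assuming 1:1 probe binding:
    [Na+] = Kd * (F - Fmin) / (Fmax - F)."""
    if not (f_min < f < f_max):
        raise ValueError("intensity must lie between the calibration endpoints")
    return kd_mm * (f - f_min) / (f_max - f)

# hypothetical calibration: 100 a.u. in Na+-free buffer, 400 a.u. at saturation
print(f"{na_from_intensity(f=280.0, f_min=100.0, f_max=400.0):.0f} mM")
```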
Trace elements, like Cu, Zn, Fe, or Se, are important for the proper functioning of antioxidant enzymes. However, in excessive amounts, they can also act as pro-oxidants. Accordingly, trace elements influence redox-modulated signaling pathways, such as the Nrf2 pathway. Conversely, several Nrf2 target genes encode transport and metal-binding proteins. In order to investigate whether Nrf2 directly regulates the systemic trace element status, we used mice to study the effect of a constitutive, whole-body Nrf2 knockout on the systemic status of Cu, Zn, Fe, and Se. As the loss of selenoproteins under Se-deprived conditions has been described to further enhance Nrf2 activity, we additionally analyzed the combination of the Nrf2 knockout with diets providing either suboptimal, adequate, or supplemented amounts of Se. The experiments revealed that the Nrf2 knockout partially affected the trace element concentrations of Cu, Zn, Fe, or Se in the intestine, liver, and/or plasma. However, aside from Fe, the other three trace elements were only marginally modulated in an Nrf2-dependent manner. Selenium deficiency mainly resulted in increased plasma Zn levels. One putative mediator could be the metal regulatory transcription factor 1, which was up-regulated with increasing Se supply and down-regulated in Se-supplemented Nrf2 knockout mice.
Scholars of modern Jewish thought explore the hermeneutics of “translation” to describe the transference of concepts between discourses. I suggest a more radical approach – translation as transvaluation – is required. Eschewing modern tests of truth such as “the author would have accepted it” and “the author should have accepted it,” this radical form of translation is intentionally unfaithful to original meanings. However, it is not a reductionist reading or a liberating text. Instead, it is a persistent squabble depending on both source and translation for sustenance. Exploring this paradigm entails a review of three expositions of the Korah biblical narrative; three readings dedicated to keeping an eye on current events: (1) Tsene-rene (Prague, 1622), biblical prose; (2) Yaldei Yisrael Kodesh, (Tel Aviv, 1973), a secular Zionist reworking of Tsene-rene; and (3) The Jews are Coming (Israel, 2014–2017) a satirical television show.
Achromatium oxaliferum is a large sulfur bacterium easily recognized by large intracellular calcium carbonate bodies. Although these bodies often fill major parts of the cells' volume, their role and specific intracellular location are unclear. In this study, we used various microscopy and staining techniques to identify the cell compartment harboring the calcium carbonate bodies. We observed that Achromatium cells often lost their calcium carbonate bodies, either naturally or induced by treatments with diluted acids, ethanol, sodium bicarbonate and UV radiation which did not visibly affect the overall shape and motility of the cells (except for UV radiation). The water-soluble fluorescent dye fluorescein easily diffused into empty cavities remaining after calcium carbonate loss. Membranes (stained with Nile Red) formed a network stretching throughout the cell and surrounding empty or filled calcium carbonate cavities. The cytoplasm (stained with FITC and SYBR Green for nucleic acids) appeared highly condensed and showed spots of dissolved Ca2+ (stained with Fura-2). From our observations, we conclude that the calcium carbonate bodies are located in the periplasm, in extra-cytoplasmic pockets of the cytoplasmic membrane and are thus kept separate from the cell's cytoplasm. This periplasmic localization of the carbonate bodies might explain their dynamic formation and release upon environmental changes.
Graph repair, restoring consistency of a graph, plays a prominent role in several areas of computer science and beyond: For example, in model-driven engineering, the abstract syntax of models is usually encoded using graphs. Flexible edit operations temporarily create inconsistent graphs not representing a valid model, thus requiring graph repair. Similarly, in graph databases—managing the storage and manipulation of graph data—updates may cause a given database to violate some integrity constraints, again requiring graph repair. We present a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing repairs. In our context, we formalize consistency by so-called graph conditions, which are equivalent to first-order logic on graphs. We present two kinds of repair algorithms: state-based repair restores consistency independent of the graph update history, whereas delta-based (or incremental) repair takes this history explicitly into account. Technically, our algorithms rely on an existing model generation algorithm for graph conditions implemented in AutoGraph. Moreover, the delta-based approach uses the new concept of satisfaction trees (STs) for encoding whether and how a graph satisfies a graph condition. We then demonstrate how to manipulate these STs incrementally with respect to a graph update.
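As a toy illustration of the state-based flavor only (ours; the condition and repair rule are invented and far simpler than the graph conditions AutoGraph handles): enforce the condition "every node labelled 'task' has an outgoing 'assignedTo' edge" by adding one least-changing edge per violation.

```python
import networkx as nx

def repair_assigned(g, fallback="unassigned"):
    """State-based repair: inspect the whole graph, ignore its update history,
    and add the smallest change (one edge) wherever the condition is violated."""
    if fallback not in g:
        g.add_node(fallback, label="person")
    for n, data in list(g.nodes(data=True)):
        if data.get("label") == "task":
            has_edge = any(d.get("label") == "assignedTo"
                           for _, _, d in g.out_edges(n, data=True))
            if not has_edge:
                g.add_edge(n, fallback, label="assignedTo")

g = nx.MultiDiGraph()
g.add_node("t1", label="task")
g.add_node("p1", label="person")
g.add_node("t2", label="task")
g.add_edge("t2", "p1", label="assignedTo")
repair_assigned(g)          # adds t1 -> unassigned, leaves t2 untouched
print(list(g.edges(data=True)))
```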
When dealing with issues that are of high societal relevance, Earth sciences still face a lack of acceptance, which is partly rooted in insufficient communication strategies on the individual and local community level. To increase the efficiency of communication routines, science has to transform its outreach concepts to become more aware of individual needs and demands. The “encoding/decoding” concept as well as critical intercultural communication studies can offer pivotal approaches for this transformation.
Emotions are a central element of human experience. They occur with high frequency in everyday life and play an important role in decision making. However, currently there is no consensus among researchers on what constitutes an emotion and on how emotions should be investigated. This dissertation identifies three problems of current emotion research: the problem of ground truth, the problem of incomplete constructs and the problem of optimal representation. I argue for a focus on the detailed measurement of emotion manifestations with computer-aided methods to solve these problems. This approach is demonstrated in three research projects, which describe the development of methods specific to these problems as well as their application to concrete research questions.
The problem of ground truth describes the practice of presupposing a certain structure of emotions as the a priori ground truth. This determines the range of emotion descriptions and sets a standard for the correct assignment of these descriptions. The first project illustrates how this problem can be circumvented with a multidimensional emotion perception paradigm, which stands in contrast to the emotion recognition paradigm typically employed in emotion research. This paradigm makes it possible to calculate an objective difficulty measure and to collect subjective difficulty ratings for the perception of emotional stimuli. Moreover, it enables the use of an arbitrary number of emotion stimulus categories, as compared to the commonly used six basic emotion categories. Accordingly, we collected data from 441 participants using dynamic facial expression stimuli from 40 emotion categories. Our findings suggest an increase in emotion perception difficulty with increasing actor age and provide evidence to suggest that young adults, the elderly and men underestimate their emotion perception difficulty. While these effects were predicted from the literature, we also found unexpected and novel results. In particular, the increased difficulty on the objective difficulty measure for female actors and observers stood in contrast to reported findings. Exploratory analyses revealed low relevance of person-specific variables for the prediction of emotion perception difficulty, but highlighted the importance of a general pleasure dimension for the ease of emotion perception.
The second project targets the problem of incomplete constructs, which relates to vaguely defined psychological constructs on emotion with insufficient ties to tangible manifestations. The project exemplifies how a modern data collection method such as face tracking can be used to sharpen these constructs, using the example of arousal, a long-standing but fuzzy construct in emotion research. It describes how measures of distance, speed and magnitude of acceleration can be computed from face tracking data and investigates their intercorrelations. We find moderate to strong correlations among all measures of static information on the one hand and all measures of dynamic information on the other. The project then investigates how self-rated arousal is tied to these measures in 401 neurotypical individuals and 19 individuals with autism. Distance to the neutral face was predictive of arousal ratings in both groups. Lower mean arousal ratings were found for the autistic group, but no difference in the correlation of the measures and arousal ratings could be found between groups. Results were replicated in a high-autistic-traits group consisting of 41 participants. The findings suggest a qualitatively similar perception of arousal for individuals with and without autism. No correlations between valence ratings and any of the measures could be found, which emphasizes the specificity of our tested measures for the construct of arousal.
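The three kinematic measures described here can be sketched from a landmark time series as follows (our sketch, not the project's code; the array layout and sampling rate are assumptions):

```python
import numpy as np

def face_kinematics(landmarks, neutral, fps=30.0):
    """landmarks: (T, K, 2) landmark positions over T frames;
    neutral: (K, 2) landmark positions of the neutral face."""
    # static information: per-frame mean distance to the neutral face
    distance = np.linalg.norm(landmarks - neutral, axis=2).mean(axis=1)
    # dynamic information: speed and magnitude of acceleration
    velocity = np.diff(landmarks, axis=0) * fps      # (T-1, K, 2), units/s
    speed = np.linalg.norm(velocity, axis=2).mean(axis=1)
    accel = np.diff(velocity, axis=0) * fps          # (T-2, K, 2), units/s^2
    accel_mag = np.linalg.norm(accel, axis=2).mean(axis=1)
    return distance, speed, accel_mag

# toy data: 68 landmarks drifting randomly around a neutral configuration
T, K = 120, 68
rng = np.random.default_rng(0)
neutral = rng.uniform(0, 1, (K, 2))
landmarks = neutral + 0.01 * rng.standard_normal((T, K, 2)).cumsum(axis=0)
d, s, a = face_kinematics(landmarks, neutral)
print(d.mean(), s.mean(), a.mean())
```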
The problem of optimal representation refers to the search for the best representation of emotions and the assumption that there is a one-fits-all solution. In the third project we introduce partial least squares analysis as a general method to find an optimal representation to relate two high-dimensional data sets to each other. The project demonstrates its applicability to emotion research on the question of emotion perception differences between men and women. The method was used with emotion rating data from 441 participants and face tracking data computed on 306 videos. We found quantitative as well as qualitative differences in the perception of emotional facial expressions between these groups. We showed that women’s emotional perception systematically captured more of the variance in facial expressions. Additionally, we could show that significant differences exist in the way that women and men perceive some facial expressions which could be visualized as concrete facial expression sequences. These expressions suggest differing perceptions of masked and ambiguous facial expressions between the sexes. In order to facilitate use of the developed method by the research community, a package for the statistical environment R was written. Furthermore, to call attention to the method and its usefulness for emotion research, a website was designed that allows users to explore a model of emotion ratings and facial expression data in an interactive fashion.
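Partial least squares as a way of relating two high-dimensional data sets can be sketched with scikit-learn (a minimal sketch with random stand-in data; the thesis' actual R package and data are not reproduced here):

```python
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

rng = np.random.default_rng(1)
n = 300                            # stand-in sample size
X = rng.standard_normal((n, 40))   # e.g., emotion ratings per category
# Y shares latent structure with X plus noise, like ratings vs. face tracking
Y = X @ rng.standard_normal((40, 60)) * 0.1 + rng.standard_normal((n, 60))

pls = PLSCanonical(n_components=2)
pls.fit(X, Y)
x_scores, y_scores = pls.transform(X, Y)
# correlation of the paired latent components quantifies shared variance
for k in range(2):
    r = np.corrcoef(x_scores[:, k], y_scores[:, k])[0, 1]
    print(f"component {k + 1}: r = {r:.2f}")
```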
The politics of zoom
(2019)
Following the mandate in the Paris Agreement for signatories to provide “climate services” to their constituents, “downscaled” climate visualizations are proliferating. But the process of downscaling climate visualizations does not neutralize the political problems with their synoptic global sources—namely, their failure to empower communities to take action and their replication of neoliberal paradigms of globalization. In this study we examine these problems as they apply to interactive climate‐visualization platforms, which allow their users to localize global climate information to support local political action. By scrutinizing the political implications of the “zoom” tool from the perspective of media studies and rhetoric, we add to perspectives of cultural cartography on the issue of scaling from our fields. Namely, we break down the cinematic trope of “zooming” to reveal how it imports the political problems of synopticism to the level of individual communities. As a potential antidote to the politics of zoom, we recommend a downscaling strategy of connectivity, which associates rather than reduces situated views of climate to global ones.
For millennia, humans have affected landscapes all over the world. Due to its horizontal expansion, agriculture plays a major role in the process of fragmentation. This process is caused by the substitution of natural habitats by agricultural land, leading to agricultural landscapes. These landscapes are characterized by an alternation of agriculture and other land uses such as forest. In addition, there are landscape elements of natural origin like small water bodies. Areas of different land use lie beside each other like patches, or fragments. They are physically distinguishable, which makes them look like a patchwork from an aerial perspective. Each of these fragments is an ecosystem of its own, with conditions and properties that differ from those of adjacent fragments. As open systems, they exchange information, matter and energy across their boundaries. These boundary areas are called transition zones. Here, the habitat properties and environmental conditions are altered compared to the interior of the fragments. This changes the abundance and composition of species in the transition zones, which in turn has a feedback effect on the environmental conditions.
The literature mainly offers information and insights on species abundance and composition in forested transition zones. Abiotic effects, the gradual changes in energy and matter, have received less attention. In addition, little is known about non-forested transition zones. For example, the effects of an altered microclimate, matter dynamics or different light regimes on agricultural yield in transition zones are hardly researched or understood. The processes in transition zones are closely connected with altered provisioning and regulating ecosystem services. To disentangle the mechanisms and to upscale the effects, models can be used.
My thesis provides insights into these topics: the literature was reviewed, and a conceptual framework for the quantitative description of gradients of matter and energy in transition zones was introduced. The results of measurements of environmental gradients like microclimate, aboveground biomass and soil carbon and nitrogen content are presented that span from within the forest into the arable land. Neither the measurements nor the literature review could validate a transition zone of 100 m for abiotic effects. Although this value is often reported and used in the literature, the transition zone is likely to be smaller.
Further, the measurements suggest that trees in transition zones are smaller than those in the interior of the fragments, while on the arable side less biomass was measured in the transition zone. These results support the hypothesis that less carbon is stored in the aboveground biomass of transition zones. The soil at the edge (zero line) between adjacent forest and arable land contains more nitrogen and carbon than the interior of the fragments. One-year measurements in the transition zone also provided evidence that the microclimate differs from that in the fragments' interior.
To predict the yield decreases that transition zones might cause, a modelling approach was developed. Using a small virtual landscape, I modelled the shading that a forest fragment casts on the adjacent arable land and its effects on yield using the MONICA crop growth model. In the transition zone, yield was lower than in the interior due to shading. The results of the simulations were upscaled to the landscape level and, as an example, calculated for the arable land of an entire region in Brandenburg, Germany.
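To illustrate only the upscaling arithmetic (toy numbers of our own; MONICA is a full process-based crop model and is not reproduced here):

```python
# toy upscaling of edge-shading yield losses (invented numbers, not MONICA output)
transition_width_m = 50          # assumed effective width of the shaded zone
relative_yield_in_zone = 0.85    # assumed mean yield relative to interior
edge_length_km = 12_000          # assumed forest-field edge length in a region
baseline_yield_t_per_ha = 7.0    # assumed interior yield

zone_area_ha = edge_length_km * 1_000 * transition_width_m / 10_000
loss_t = zone_area_ha * baseline_yield_t_per_ha * (1 - relative_yield_in_zone)
print(f"shaded zone: {zone_area_ha:,.0f} ha, regional loss: {loss_t:,.0f} t")
```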
The major findings of my thesis are: (1) transition zones are likely to be much smaller than assumed in the scientific literature; (2) transition zones are not solely a phenomenon of forested ecosystems, but significantly extend into arable land as well; (3) empirical and modelling results show that transition zones encompass biotic and abiotic changes that are likely to be important to a variety of agricultural landscape ecosystem services.
The current thesis examined how second language (L2) speakers of German predict upcoming input during language processing. Early research has shown that the predictive abilities of L2 speakers relative to L1 speakers are limited, resulting in the proposal of the Reduced Ability to Generate Expectations (RAGE) hypothesis. Considering that prediction is assumed to facilitate language processing in L1 speakers and probably plays a role in language learning, the assumption that L1/L2 differences can be explained in terms of different processing mechanisms is a particularly interesting approach. However, results from more recent studies on the predictive processing abilities of L2 speakers have indicated that the claim of the RAGE hypothesis is too broad and that prediction in L2 speakers could be selectively limited. In the current thesis, the RAGE hypothesis was systematically put to the test.
In this thesis, German L1 and highly proficient late L2 learners of German with Russian as L1 were tested on their predictive use of one or more information sources that exist as cues to sentence interpretation in both languages, to test for selective limits. The results showed that, in line with previous findings, L2 speakers can use the lexical-semantics of verbs to predict the upcoming noun. Here the level of prediction was more systematically controlled for than in previous studies by using verbs that restrict the selection of upcoming nouns to the semantic category animate or inanimate. Hence, prediction in L2 processing is possible. At the same time, this experiment showed that the L2 group was slower/less certain than the L1 group. Unlike previous studies, the experiment on case marking demonstrated that L2 speakers can use this morphosyntactic cue for prediction. Here, the use of case marking was tested by manipulating the word order (Dat > Acc vs. Acc > Dat) in double object constructions after a ditransitive verb. Both the L1 and the L2 group showed a difference between the two word order conditions that emerged within the critical time window for an anticipatory effect, indicating their sensitivity towards case. However, the results for the post-critical time window pointed to a higher uncertainty in the L2 group, who needed more time to integrate incoming information and were more affected by the word order variation than the L1 group, indicating that they relied more on surface-level information. A different cue weighting was also found in the experiment testing whether participants predict upcoming reference based on implicit causality information. Here, an additional child L1 group was tested, who had a lower memory capacity than the adult L2 group, as confirmed by a digit span task conducted with both learner groups. Whereas the children were only slightly delayed compared to the adult L1 group and showed the same effect of condition, the L2 speakers showed an over-reliance on surface-level information (first-mention/subjecthood). Hence, the pattern observed resulted more likely from L1/L2 differences than from resource deficits.
The reviewed studies and the experiments conducted show that L2 prediction is affected by a range of factors. While some of the factors can be attributed to more individual differences (e.g., language similarity, slower processing) and can be interpreted by L2 processing accounts assuming that L1 and L2 processing are basically the same, certain limits are better explained by accounts that assume more substantial L1/L2 differences. Crucially, the experimental results demonstrate that the RAGE hypothesis should be refined: Although prediction as a fast-operating mechanism is likely to be affected in L2 speakers, there is no indication that prediction is the dominant source of L1/L2 differences. The results rather demonstrate that L2 speakers show a different weighting of cues and rely more on semantic and surface-level information to predict as well as to integrate incoming information.
Peroxisome biogenesis disorders (PBDs) are nontreatable hereditary diseases with a broad range of severity. Approximately 65% of patients are affected by mutations in the peroxins Pex1 and Pex6. The proteins form the heteromeric Pex1/Pex6 complex, which is important for protein import into peroxisomes. To date, no structural data are available for this AAA+ ATPase complex. However, a wealth of information can be transferred from low-resolution structures of the yeast scPex1/scPex6 complex and homologous, well-characterized AAA+ ATPases. We review the abundant records of missense mutations described in PBD patients with the aim to classify and rationalize them by mapping them onto a homology model of the human Pex1/Pex6 complex. Several mutations concern functionally conserved residues that are implicated in ATP hydrolysis and substrate processing. In contrast to fold-destabilizing mutations, patients suffering from function-impairing mutations may not benefit from stabilizing agents, which have been reported as potential therapeutics for PBD patients.
Plasmonic metal nanostructures can be tuned to efficiently interact with light, converting the photons into energetic charge carriers and heat. Plasmonic nanoparticles such as gold and silver nanoparticles therefore act as nano-reactors, in which molecules attached to their surfaces benefit from the enhanced electromagnetic field along with the generated energetic charge carriers and heat for possible chemical transformations. Hence, plasmonic chemistry presents metal nanoparticles as a unique playground for chemical reactions on the nanoscale, remotely controlled by light. However, defining the elementary concepts behind these reactions remains the main challenge for understanding their mechanism in the context of plasmonically assisted chemistry.
Surface-enhanced Raman scattering (SERS) is a powerful technique employing the plasmon-enhanced electromagnetic field, which can be used for probing the vibrational modes of molecules adsorbed on plasmonic nanoparticles. In this cumulative dissertation, I use SERS to probe the dimerization reaction of 4-nitrothiophenol (4-NTP) as a model example of plasmonic chemistry. I first demonstrate that plasmonic nanostructures such as gold nanotriangles and nanoflowers have a high SERS efficiency, as evidenced by probing the vibrations of the rhodamine dye R6G and of 4-NTP. The high signal enhancement enabled the measurement of SERS spectra with a short acquisition time, which allows monitoring the kinetics of chemical reactions in real time.
To gain insight into the reaction mechanism, several time-dependent SERS measurements of 4-NTP were performed under different laser and temperature conditions. Analysis of the results within a mechanistic framework showed that plasmonic heating significantly enhances the reaction rate, while the reaction is probably initiated by the energetic electrons. The reaction was shown to be intensity-dependent, with a certain light intensity required to drive the reaction. Finally, first attempts to scale up the plasmonic catalysis were performed, showing the necessity of reaching the reaction threshold intensity while letting the induced heat dissipate quickly from the reaction substrate, since otherwise the reactants and the reaction platform melt. This study might open the way for further work seeking ways to quickly dissipate the plasmonic heat generated during the reaction and thereby scale up plasmonic catalysis.
Modern rule of law and post-war constitutionalism are both anchored in rights-based limitations on state authority. Rule-of-law norms and principles, at both domestic and international levels, are designed to protect the freedom and dignity of the person. Given this “thick” conception of the rule of law, authoritarian practices that remove constraints on domestic political leaders and weaken mechanisms for holding them accountable necessarily erode both domestic and international rule of law. Drawing on political science research on authoritarian politics, this study identifies three core elements of authoritarian political strategies: subordination of the judiciary, suppression of independent news media and freedom of expression, and restrictions on the ability of civil society groups to organize and participate in public life. According to available data, each of these three practices has become increasingly common in recent years. This study offers a composite measure of the core authoritarian practices and uses it to identify the countries that have shown the most marked increases in authoritarianism. The spread and deepening of these authoritarian practices in diverse regimes around the world diminishes international rule of law. The conclusion argues that resurgent authoritarianism degrades international rule of law even if this is defined as the specifically post-Cold War international legal order.
International courts regularly cite each other, in part as a means of building legitimacy. Such international, cross-court use of precedent (or “judicial dialogue”) among the regional human rights courts and the Human Rights Committee has an additional purpose and effect: the construction of a rights-based global constitutionalism. Judicial dialogue among the human rights courts is purposeful in that the courts see themselves as embedded in, and contributing to, a global human rights legal system. Cross-citation among the human rights courts advances the construction of rights-based global constitutionalism in that it provides a basic degree of coordination among the regional courts. The jurisprudence of the U.N. Human Rights Committee (HRC), as an authoritative interpreter of core international human rights norms, plays the role of a central focal point for the decentralized coordination of jurisprudence. The network of regional courts and the HRC is building an emergent institutional structure for global rights-based constitutionalism.
Objective: We aimed to characterize patients after an acute cardiac event regarding their negative expectations around returning to work and the impact on work capacity upon discharge from cardiac rehabilitation (CR).
Methods: We analyzed routine data of 884 patients (52±7 years, 76% men) who attended 3 weeks of inpatient CR after an acute coronary syndrome (ACS) or cardiac surgery between October 2013 and March 2015. The primary outcome was their status determining their capacity to work (fit vs unfit) at discharge from CR. Further, sociodemographic data (eg, age, sex, and education level), diagnoses, functional data (eg, exercise stress test and 6-min walking test [6MWT]), the Hospital Anxiety and Depression Scale (HADS) and self-assessment of the occupational prognosis (negative expectations and/or unemployment, Würzburger screening) at admission to CR were considered.
Results: A negative occupational prognosis was detected in 384 patients (43%). Of these, 368 (96%) expected not to return to work after CR and 113 (29%) had been unemployed before CR. Affected patients showed reduced exercise capacity (bicycle stress test: 100 W vs 118 W, P<0.01; 6MWT: 380 m vs 421 m, P<0.01), were more likely to receive a depression diagnosis (12% vs 3%, P<0.01), and scored higher on the HADS. At discharge from CR, 21% of this group (n=81) were fit for work (vs 35% of patients with a normal occupational prognosis, n=175, P<0.01). Sick leave before the cardiac event (OR 0.4, 95% CI 0.2–0.6, P<0.01), negative occupational expectations (OR 0.4, 95% CI 0.3–0.7, P<0.01) and depression (OR 0.3, 95% CI 0.1–0.8, P=0.01) reduced the likelihood of achieving work capacity by discharge, whereas higher exercise capacity was positively associated with it.
Conclusion: Patients with a negative occupational prognosis often showed reduced physical performance and suffered from a high psychosocial burden. In addition, patients' occupational expectations predicted work capacity at discharge from CR. Affected patients should be identified at admission to allow for targeted psychosocial care.
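For readers unfamiliar with how odds ratios like those reported above are obtained, the following is a minimal sketch of a logistic regression with 95% confidence intervals. The input file and column names are hypothetical placeholders, not the study's actual routine data or analysis code.

```python
# Minimal sketch: odds ratios with 95% CIs from a logistic regression,
# as reported above (e.g., OR 0.4, 95% CI 0.2-0.6). Column names and the
# input file are hypothetical placeholders, not the study's actual data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cr_patients.csv")  # hypothetical routine-data export

# Outcome: fit for work at discharge (1) vs unfit (0).
y = df["fit_for_work"]
X = sm.add_constant(df[["sick_leave_before_event",
                        "negative_expectations",
                        "depression",
                        "exercise_capacity_watt"]])

model = sm.Logit(y, X).fit(disp=False)

# Exponentiate coefficients and CI bounds to obtain ORs with 95% CIs.
ors = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.DataFrame({"OR": ors, "2.5%": ci[0], "97.5%": ci[1]}))
```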
Objectives
The aims of this study were to investigate the effects of a six-week in-season period of soccer training and games (congested period) on plasma volume variations (PV), hematological parameters, and physical fitness in elite players. In addition, we analyzed relationships between training load, hematological parameters and players’ physical fitness.
Methods
Eighteen elite players were evaluated before (T1) and after (T2) a six-week in-season period interspersed with 10 soccer matches. At T1 and T2, players performed the Yo-Yo intermittent recovery test level 1 (YYIR1), the repeated shuttle sprint ability test (RSSA), the countermovement jump test (CMJ), and the squat jump test (SJ). In addition, PV and hematological parameters (erythrocytes [M/mm3], hematocrit [%], hemoglobin [g/dl], mean corpuscular volume [fl], mean corpuscular hemoglobin content [pg], and mean hemoglobin concentration [%]) were assessed. Daily ratings of perceived exertion (RPE) were monitored in order to quantify the internal training load.
Results
From T1 to T2, significant performance declines were found for the YYIR1 (p<0.001, effect size [ES] = 0.5), RSSA (p<0.01, ES = 0.6) and SJ tests (p<0.046, ES = 0.7). However, no significant changes were found for the CMJ (p = 0.86, ES = 0.1). Post-exercise RSSA blood lactate (p<0.012, ES = 0.2) and PV (p<0.01, ES = 0.7) increased significantly from T1 to T2. A significant decrease was found from T1 to T2 for the erythrocyte count (p<0.002, ES = 0.5) and the hemoglobin concentration (p<0.018, ES = 0.8). The hematocrit percentage was also significantly lower (p<0.001, ES = 0.6) at T2. The mean corpuscular volume, mean corpuscular hemoglobin content and mean hemoglobin concentration values were not statistically different between T1 and T2. No significant relationships were detected between training load parameters and percentage changes of hematological parameters. However, a significant relationship was observed between training load and changes in RSSA performance (r = -0.60; p<0.003).
Conclusions
An intensive period of “congested match play” over 6 weeks significantly compromised players’ physical fitness. These changes were not related to hematological parameters, even though significant alterations were detected for selected measures.
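The effect sizes (ES) quoted above can be computed in several ways; the abstract does not specify which variant was used. Assuming the common Cohen's d with a pooled standard deviation, a minimal sketch might look as follows (sample values are invented for illustration):

```python
# Minimal sketch of one common effect-size statistic (Cohen's d).
# The sample values are invented for illustration, not the study's data.
import numpy as np

def cohens_d(pre, post):
    """d = mean difference / pooled standard deviation."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2.0)
    return (post.mean() - pre.mean()) / pooled_sd

# Example: YYIR1 distances (m) at T1 and T2 for a hypothetical squad.
t1 = [2040, 1880, 2120, 1960, 2200, 2000]
t2 = [1840, 1760, 1960, 1800, 2080, 1880]
print(f"Cohen's d = {cohens_d(t1, t2):.2f}")  # negative: performance decline
```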
Wheat is one of the most consumed foods in the world, but it can cause allergic reactions with important health effects. The α-amylase/trypsin inhibitors (ATIs) have been identified as potentially allergenic components of wheat. Given the lack of data on the optimization of ATI extraction, a new wheat ATI extraction approach combining solvent extraction and selective precipitation is proposed in this work. Two wheat cultivars (Triticum aestivum L.), Julius and Ponticus, were used, and parameters such as solvent type, extraction time, temperature, stirring speed, salt type, salt concentration, buffer pH and centrifugation speed were analyzed using a Plackett-Burman design. Salt concentration, extraction time and pH had significant effects on the recovery of ATIs (p < 0.01). In both cultivars, ammonium sulfate substantially reduced protein concentration and inhibition of amylase activity (IAA) compared to sodium chloride. The optimal conditions according to the Doehlert design, with desirability levels of 0.94 and 0.91, were salt concentrations of 1.67 and 1.22 M, extraction times of 53 and 118 min, and pH values of 7.1 and 7.9 for Julius and Ponticus, respectively. The corresponding responses were protein concentrations of 0.31 and 0.35 mg and IAAs of 91.6 and 83.3%. Electrophoresis and MALDI-TOF/MS analysis showed that the extracted ATI masses were between 10 and 20 kDa. Based on the initial LC-MS/MS analysis, up to 10 individual ATIs were identified in the proteins extracted under the optimal conditions. A practical implication of the present study is that it allows a quick assessment of ATI content in different wheat varieties, which is particularly relevant in view of their allergenic potential.
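To illustrate the screening step, the following is a minimal sketch of a standard 8-run Plackett-Burman design and main-effect estimation. The factor names and responses are illustrative placeholders, not the study's data or code.

```python
# Hedged sketch of an 8-run Plackett-Burman screening design, the kind of
# design used above to screen extraction factors (salt concentration, time,
# pH, ...). Factor names and responses here are illustrative only.
import numpy as np

def plackett_burman_8():
    """Standard 8-run PB design for up to 7 two-level factors."""
    gen = np.array([+1, +1, +1, -1, +1, -1, -1])
    rows = [np.roll(gen, i) for i in range(7)]   # cyclic shifts of generator
    rows.append(-np.ones(7, dtype=int))          # final all-minus run
    return np.array(rows)

design = plackett_burman_8()
factors = ["salt_conc", "time", "pH", "temp", "stirring", "salt_type", "rpm"]

# Hypothetical measured responses (e.g., protein yield) for the 8 runs.
y = np.array([0.31, 0.22, 0.28, 0.18, 0.30, 0.17, 0.20, 0.15])

# Main effect of each factor: mean response at +1 minus mean at -1.
for name, col in zip(factors, design.T):
    effect = y[col == +1].mean() - y[col == -1].mean()
    print(f"{name:10s} effect = {effect:+.3f}")
```

Factors with the largest absolute effects (here, hypothetically, salt concentration, time and pH) would then be carried into the response-surface optimization stage, such as the Doehlert design mentioned above.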
Binaries play an important role in observational and theoretical astrophysics. Since mass and chemical composition are key ingredients for stellar evolution, high-resolution spectroscopy is an important and necessary tool for deriving those parameters with high confidence in binaries. This involves carefully measuring the orbital motion through the determination of radial velocity (RV) shifts, as well as sophisticated techniques to derive the abundances of elements within the stellar atmosphere.
A technique superior to conventional cross-correlation methods for determining RV shifts is known as spectral disentangling. A major task of this thesis was therefore the design of a sophisticated software package for this approach. Secondary effects, such as flux and line-profile variations, imprint changes on the spectrum; understanding how spectral disentangling behaves under such variability is key to interpreting the derived values, improving them, and extracting information about the variability itself. The spectral disentangling code presented in this thesis, which is available to the community, therefore combines multiple advantages: separation of the spectra for detailed chemical analysis, derivation of orbital elements, derivation of individual RVs in order to investigate distorted systems (whether by third-body interaction or relativistic effects), suppression of telluric contamination, derivation of variability, and applicability to eclipsing binaries (important for the orbital inclination) or, more generally, to systems that undergo flux variations.
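To make the principle concrete, here is a toy least-squares formulation of spectral disentangling, in the spirit of the classical approach of Simon & Sturm (1994), using integer-pixel shifts on a common grid. It is a simplified illustration with synthetic data, not the thesis software package.

```python
# Toy sketch of least-squares spectral disentangling: recover two component
# spectra A and B from composite spectra observed at known integer-pixel RV
# shifts on a common (log-wavelength) grid. Illustration only, not the
# thesis code; real implementations handle sub-pixel shifts, weights, and
# the well-known normalization degeneracy.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

npix = 200                      # pixels per spectrum
shifts_a = [0, 3, -2, 5]        # per-observation shifts of component A
shifts_b = [0, -6, 4, -10]      # per-observation shifts of component B
nobs = len(shifts_a)

# Design matrix: each observed pixel j is A[j - sA] + B[j - sB].
M = lil_matrix((nobs * npix, 2 * npix))
for i, (sa, sb) in enumerate(zip(shifts_a, shifts_b)):
    for j in range(npix):
        ja, jb = j - sa, j - sb
        if 0 <= ja < npix:
            M[i * npix + j, ja] = 1.0
        if 0 <= jb < npix:
            M[i * npix + j, npix + jb] = 1.0

# Simulated composites from two fake component spectra (for demonstration).
rng = np.random.default_rng(1)
pix = np.arange(npix)
A_true = 1.0 - 0.4 * np.exp(-0.5 * ((pix - 80) / 3.0) ** 2)
B_true = 1.0 - 0.2 * np.exp(-0.5 * ((pix - 120) / 5.0) ** 2)
obs = M @ np.concatenate([A_true, B_true]) + rng.normal(0, 0.01, nobs * npix)

# Solve the sparse least-squares problem for the stacked vector [A; B].
x = lsqr(M.tocsr(), obs)[0]
A_rec, B_rec = x[:npix], x[npix:]
```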
This code, in combination with the spectral synthesis codes MOOG and SME, was used to derive the carbon 12C/13C isotope ratio (CIR) of the benchmark binary Capella. The observational result is set into the context of theoretical evolution using MESA models and resolves the discrepancy between theory and observation that had existed since the first measurement of Capella's CIR in 1976.
The spectral disentangling code has been made available to the community, and its applicability to systems with completely different behavior, Wolf-Rayet stars, has also been investigated, resulting in a published article.
Additionally, since this technique relies strongly on data quality, the continued development of scientific instruments to obtain the best possible observational data is of great importance in observational astrophysics. For this reason, part of the work on this thesis was also devoted to astronomical instrumentation.
The Arctic-Boreal regions experience strong changes in air temperature and precipitation regimes, which affect the thermal state of the permafrost. This results in widespread permafrost-thaw disturbances, some unfolding slowly and over long periods, others occurring rapidly and abruptly. Although optical remote sensing offers a variety of techniques to assess and monitor landscape changes, persistent cloud cover considerably reduces the number of usable images. Combining data from multiple platforms, however, promises to increase the number of images drastically. We therefore assess the comparability of Landsat-8 and Sentinel-2 imagery and the possibility of using Landsat and Sentinel-2 images together in time series analyses to achieve temporally dense data coverage in Arctic-Boreal regions. We determined overlapping same-day acquisitions of Landsat-8 and Sentinel-2 images for three representative study sites in Eastern Siberia. We then compared the Landsat-8 and Sentinel-2 pixel pairs, downscaled to 60 m, of corresponding bands and derived the ordinary least squares regression for every band combination. The resulting coefficients were used for spectral bandpass adjustment between the two sensors. The spectral band comparisons already showed an overall good fit between Landsat-8 and Sentinel-2 images. The ordinary least squares regression analyses underline this generally good spectral fit, with intercept values between 0.0031 and 0.056 and slope values between 0.531 and 0.877. A spectral comparison after bandpass adjustment of Sentinel-2 values to Landsat-8 showed a nearly perfect alignment between the same-day images. The spectral band adjustment thus succeeds very well in adjusting Sentinel-2 spectral values to Landsat-8 in Eastern Siberian Arctic-Boreal landscapes. After spectral adjustment, Landsat and Sentinel-2 data can be used to create temporally dense time series and to assess permafrost landscape changes in Eastern Siberia. Remaining differences between the sensors can be attributed to several factors, including heterogeneous terrain, imperfect cloud and cloud shadow masking, and mixed pixels.
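As an illustration of the per-band adjustment described above, the following minimal sketch fits an ordinary least squares regression for one band of co-located same-day pixel pairs and applies the coefficients to Sentinel-2 values. The arrays are synthetic placeholders, not the actual scene data.

```python
# Minimal sketch of per-band OLS bandpass adjustment: fit Landsat-8 surface
# reflectance against same-day Sentinel-2 values for co-located 60 m pixel
# pairs, then transform Sentinel-2 onto the Landsat-8 scale. Synthetic data
# stand in for resampled same-day scenes.
import numpy as np

def fit_bandpass(l8_band, s2_band):
    """OLS fit l8 = slope * s2 + intercept for one band's pixel pairs."""
    slope, intercept = np.polyfit(s2_band, l8_band, deg=1)
    return slope, intercept

def adjust(s2_band, slope, intercept):
    """Transform Sentinel-2 reflectance onto the Landsat-8 scale."""
    return slope * s2_band + intercept

# Hypothetical reflectance pairs for one band (e.g., NIR), 60 m pixels.
rng = np.random.default_rng(0)
s2 = rng.uniform(0.05, 0.45, 10_000)
l8 = 0.88 * s2 + 0.01 + rng.normal(0, 0.005, s2.size)

slope, intercept = fit_bandpass(l8, s2)
s2_adjusted = adjust(s2, slope, intercept)
print(f"slope={slope:.3f}, intercept={intercept:.4f}")
```

In practice one such slope/intercept pair would be derived for every corresponding band combination, which matches the per-band coefficient ranges reported above.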
Background
Organisms are expected to respond to changing environmental conditions through local adaptation, range shifts or local extinction. Local adaptation can occur through genetic change or phenotypic plasticity, and becomes especially relevant when dispersal abilities or opportunities are constrained. For genetic change, mutations are the ultimate source of variation, and the mutation rate, in terms of a mutator locus, can itself be subject to evolutionary change. Recent findings suggest that the evolution of the mutation rate in a sexual species can increase invasion speed and promote adaptation to novel environmental conditions. Following this idea, this work uses an individual-based modeling approach to investigate whether the mutation rate can also evolve in a sexual species experiencing different regimes of directional climate change, under different scenarios of colored stochastic environmental noise, probability of recombination, and probability of beneficial mutations. The different noise colors mimic different habitats, allowing the evolutionary dynamics of the mutation rate to be investigated across habitat types.
Results
The results suggest that the mutation rate of a sexual species experiencing directional climate change can evolve and reach relatively high values, but mainly under complete linkage of the mutator locus and the adaptation locus. In contrast, when the two loci are unlinked, the mutation rate can increase slightly only in scenarios where at least 50% of arising mutations are beneficial and the rate of environmental change is relatively fast. This result is robust across different scenarios of stochastic environmental noise, which is consistent with the observation that there is no systematic variation in mutation rate among organisms from different habitats.
Conclusions
Given that 50% beneficial mutations may be an unrealistic assumption, and that recombination is ubiquitous in sexual species, the evolution of an elevated mutation rate in a sexual species experiencing directional climate change appears rather unlikely. Furthermore, when the percentage of beneficial mutations and the population size are small, sexual species (especially multicellular ones) producing few offspring may be expected to respond to changing environments not through adaptive genetic change but mainly through plasticity. Without the capacity for a plastic response, such species may become extinct, at least locally.
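To make the setup tangible, the following is a heavily simplified, hypothetical sketch of such an individual-based model: a trait locus under a moving phenotypic optimum and a mutator locus controlling the trait's mutation probability. All parameter values are illustrative, and recombination between the loci is omitted; this is not the published model.

```python
# Hedged toy sketch of an individual-based model with a mutator locus:
# each individual carries an adaptation locus (trait z) and a mutator locus
# (per-generation mutation probability u); the phenotypic optimum moves at a
# constant rate to mimic directional climate change. Illustrative values.
import numpy as np

rng = np.random.default_rng(42)
N, T = 500, 300           # population size, generations
k = 0.02                  # rate of environmental (optimum) change
sigma_s = 1.0             # width of Gaussian stabilizing selection

z = rng.normal(0.0, 0.1, N)        # adaptation locus (trait value)
u = np.full(N, 0.001)              # mutator locus (mutation probability)

for t in range(T):
    optimum = k * t
    fitness = np.exp(-0.5 * ((z - optimum) / sigma_s) ** 2)

    # Sample parents proportional to fitness (asexual core loop; the full
    # model adds recombination between the two loci).
    parents = rng.choice(N, size=N, p=fitness / fitness.sum())
    z, u = z[parents].copy(), u[parents].copy()

    # Mutate the trait with per-individual probability u.
    hit = rng.random(N) < u
    z[hit] += rng.normal(0.0, 0.3, hit.sum())

    # The mutator locus itself can mutate (multiplicative steps).
    mhit = rng.random(N) < 0.01
    u[mhit] = np.clip(u[mhit] * np.exp(rng.normal(0, 0.5, mhit.sum())),
                      1e-5, 0.5)

print(f"mean trait = {z.mean():.2f}, optimum = {k * (T - 1):.2f}, "
      f"mean mutation rate = {u.mean():.4f}")
```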
A contemporary challenge in ecology and evolutionary biology is to anticipate the fate of populations of organisms in the context of a changing world. Climate change and landscape changes due to anthropogenic activities have been major concerns in contemporary history. Organisms facing these threats are expected to respond through local adaptation (i.e., genetic change or phenotypic plasticity) or by shifting their distributional range (migration). However, there are limits to their responses. For example, isolated populations will have more difficulty developing adaptive innovations through genetic change than interconnected metapopulations. Similarly, the topography of the environment can limit dispersal opportunities for crawling organisms compared to those that rely on wind. Thus, populations of species with different life-history strategies may differ in their ability to cope with changing environmental conditions. Depending on the taxon, however, empirical studies of organisms' responses to environmental change may become too complex, lengthy and expensive, with additional complications when dealing with endangered species. Eco-evolutionary modeling therefore offers an opportunity to overcome these limitations, complement empirical studies, understand the action and limitations of the underlying mechanisms, and project into possible future scenarios. In this work I take a modeling approach and investigate the effect and relative importance of evolutionary mechanisms (including phenotypic plasticity) on the capacity for local adaptation of populations with different life strategies experiencing climate change scenarios. To this end, I first reviewed the state of the art of eco-evolutionary individual-based models (IBMs) and identified gaps for future research. I then used the results of the review to develop an eco-evolutionary individual-based modeling tool for studying the role of genetic and plastic mechanisms in promoting local adaptation of populations of organisms with different life strategies under scenarios of climate change and environmental stochasticity. The environment was simulated through a climate variable (e.g., temperature) defining a phenotypic optimum moving at a given rate of change, which was varied to simulate different climate change scenarios (no change, slow, medium and rapid change). Several scenarios of stochastic noise color, resembling different climatic conditions, were also explored. The results show that populations of sexual species rely mainly on standing genetic variation and phenotypic plasticity for local adaptation. Populations of species with relatively slow growth rates (e.g., large mammals), especially small populations, are the most vulnerable, particularly if their plasticity is limited (i.e., specialist species). Whenever organisms from these populations are capable of adaptive plasticity, they can buffer fitness losses under reddish climatic conditions; and whenever they can adjust their plastic response (e.g., a bet-hedging strategy), they can cope with bluish environmental conditions as well. In contrast, life strategies with high fecundity can rely on non-adaptive plasticity for local adaptation to novel environmental conditions, unless the rate of change is too rapid.
A recommended management measure is to ensure the interconnection of isolated populations into metapopulations, so that the supply of useful genetic variation is increased while populations are simultaneously provided with opportunities to move and track their preferred niche when local adaptation becomes problematic. This is particularly important under bluish and reddish climatic conditions when the rate of change is slow, and under any climatic condition when the level of stress (rate of change) is relatively high.
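The colored environmental noise invoked above (reddish vs. bluish conditions) can be generated in several ways; assuming a standard spectral-synthesis method for 1/f^beta noise, a minimal sketch with illustrative parameters follows. It is not the thesis implementation.

```python
# Hedged sketch of generating colored environmental noise (reddish vs.
# bluish) superimposed on a directionally moving phenotypic optimum, as in
# the scenarios above. Spectral synthesis and parameters are assumptions.
import numpy as np

def colored_noise(n, beta, rng):
    """1/f^beta noise: beta > 0 reddish (autocorrelated), beta < 0 bluish."""
    freqs = np.fft.rfftfreq(n)
    amps = np.zeros_like(freqs)
    amps[1:] = freqs[1:] ** (-beta / 2.0)          # shape the power spectrum
    phases = rng.uniform(0, 2 * np.pi, freqs.size)
    spectrum = amps * np.exp(1j * phases)
    noise = np.fft.irfft(spectrum, n)
    return (noise - noise.mean()) / noise.std()    # standardize

rng = np.random.default_rng(7)
T, k = 500, 0.02                                   # generations, trend rate
trend = k * np.arange(T)                           # directional change
red_env = trend + 0.5 * colored_noise(T, beta=1.0, rng=rng)
blue_env = trend + 0.5 * colored_noise(T, beta=-1.0, rng=rng)
```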
Salinity is a significant factor for structuring microbial communities, but little is known for aquatic fungi, particularly in the pelagic zone of brackish ecosystems. In this study, we explored the diversity and composition of fungal communities along a progressive salinity decline (from 34 to 3 PSU) along three transects of ca. 2000 km in the Baltic Sea, the world's largest estuary. Based on 18S rRNA gene sequence analysis, we detected clear changes in fungal community composition along the salinity gradient and found significant differences in the composition of fungal communities established above and below a critical value of 8 PSU. At salinities below this threshold, fungal communities resembled those from freshwater environments, with a greater abundance of Chytridiomycota, particularly of the orders Rhizophydiales, Lobulomycetales, and Gromochytriales. At salinities above 8 PSU, communities were more similar to those from marine environments and, depending on the season, were dominated by a strain of the LKM11 group (Cryptomycota) or by members of Ascomycota and Basidiomycota. Our results highlight salinity as an important environmental driver also for pelagic fungi, which should thus be taken into account to better understand fungal diversity and ecological function in the aquatic realm.