The prevalence of diseases associated with misfolded proteins increases with age. When cellular defense mechanisms become limited, misfolded proteins form aggregates and may also develop more stable cross-β structures, ultimately forming amyloid aggregates. Amyloid aggregates are associated with neurodegenerative diseases such as Alzheimer’s disease and Huntington’s disease. The formation of amyloid deposits, their toxicity and cellular defense mechanisms have been intensively studied. However, surprisingly little is known about the effects of protein aggregates on cellular signal transduction. It is also not understood whether the presence of aggregation-prone, but still soluble, proteins affects signal transduction.
In this study, the still soluble, aggregation-prone HttExon1Q74 and its amyloid aggregates were used to analyze the effect of amyloid aggregates on the internalization and receptor activation of G protein-coupled receptors (GPCRs), the largest family of mammalian cell surface receptors involved in signal transduction. The aggregated HttExon1Q74, but not its soluble form, inhibited ligand-induced clathrin-mediated endocytosis (CME) of various GPCRs. Most likely, this inhibitory effect is based on the terminal sequestration to the aggregates of the HSC70 chaperone, which is necessary for CME. Using the vasopressin V1a receptor (V1aR) and the corticotropin-releasing factor receptor 1 (CRF1R) as models, it could be shown that the presence of HttExon1Q74 aggregates and the resulting inhibition of ligand-induced CME lead to an accumulation of desensitized receptors at the plasma membrane. In turn, this disrupts the Gq-mediated Ca2+ signaling of the V1aR and the Gs-mediated cAMP signaling of the CRF1R. In contrast to HttExon1Q74 amyloid aggregates, soluble HttExon1Q74 as well as amorphous aggregates did not inhibit GPCR internalization and signaling, demonstrating that cellular signal transduction mechanisms are specifically impaired in response to the formation of amyloid aggregates.
In addition, preliminary experiments showed that HttExon1Q74 aggregates provoke an increase in the membrane expression of a protein from a structurally and functionally unrelated membrane protein family, the serotonin transporter SERT. As SERT is the main pharmacological target in the treatment of depression, this finding could shed light on depression as a commonly occurring comorbidity of neurodegenerative diseases, in particular in early disease states.
Adaptive Force (AF) reflects the capability of the neuromuscular system to adapt adequately to external forces with the intention of maintaining a position or motion. One specific approach to assessing AF is to measure force and limb position during a pneumatically applied, increasing external force. Through this method, the highest (AFmax), the maximal isometric (AFisomax) and the maximal eccentric Adaptive Force (AFeccmax) can be determined. The main question of the study was whether the AFisomax is a specific and independent parameter of muscle function compared to other maximal forces. In 13 healthy subjects (9 male, 4 female), the maximal voluntary isometric contraction (pre- and post-MVIC), the three AF parameters and the MVIC with a prior concentric contraction (MVICpri-con) of the elbow extensors were measured 4 times on two days. Arithmetic mean (M) and maximal (Max) torques of all force types were analyzed. Regarding the reliability of the AF parameters between days, the mean changes were 0.31–1.98 Nm (0.61%–5.47%, p = 0.175–0.552), the standard errors of measurement (SEM) were 1.29–5.68 Nm (2.53%–15.70%) and the ICCs(3,1) were 0.896–0.996. M and Max of AFisomax, AFmax and pre-MVIC correlated highly (r = 0.85–0.98). The M and Max of AFisomax were significantly lower (by 6.12–14.93 Nm; p ≤ 0.001–0.009) and more variable between trials (coefficients of variation (CV) ≥ 21.95%) compared to those of pre-MVIC and AFmax (CV ≤ 5.4%). The results suggest that the novel measuring procedure is suitable to reliably quantify the AF, whereby the presented measurement errors should be taken into consideration. The AFisomax seems to reflect a distinct strength capacity and should be assessed separately. It is suggested that its normalization to the MVIC or AFmax could serve as an indicator of neuromuscular function.
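The reliability statistics reported above — ICC(3,1), SEM, and per-subject CV — can be reproduced from raw day-to-day torque data. The following Python sketch uses synthetic values rather than the study's measurements; it computes ICC(3,1) from the standard two-way ANOVA decomposition (Shrout & Fleiss) and the SEM from the pooled SD:

```python
import numpy as np

def icc_3_1(data):
    """ICC(3,1): two-way mixed model, consistency (Shrout & Fleiss, 1979).
    data: (n_subjects, k_sessions) array of torques in Nm."""
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()    # between sessions
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# synthetic day-1 vs day-2 torques for 13 subjects (NOT the study's data)
rng = np.random.default_rng(0)
subject_torque = rng.normal(60.0, 10.0, size=13)                    # true values, Nm
data = subject_torque[:, None] + rng.normal(0.0, 2.0, size=(13, 2)) # + trial noise

icc = icc_3_1(data)
sem = data.std(ddof=1) * np.sqrt(1.0 - icc)                # SEM = SD * sqrt(1 - ICC)
cv = 100.0 * data.std(axis=1, ddof=1) / data.mean(axis=1)  # per-subject CV (%)
print(f"ICC(3,1) = {icc:.3f}, SEM = {sem:.2f} Nm, mean CV = {cv.mean():.1f}%")
```

With reliable measurements (small trial noise relative to between-subject spread), the ICC approaches 1 and the SEM shrinks toward the trial noise level, mirroring the pattern reported in the abstract.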
One third of the world's population lives in areas where earthquakes causing at least slight damage are frequently expected. Thus, the development and testing of global seismicity models is essential to improving seismic hazard estimates and earthquake-preparedness protocols for effective disaster-risk mitigation. Currently, the availability and quality of geodetic data along plate-boundary regions provide the opportunity to construct global models of plate motion and strain rate, which can be translated into global maps of forecasted seismicity. Moreover, the broad coverage of existing earthquake catalogs now facilitates the calibration and testing of global seismicity models. As a result, modern global seismicity models can integrate two independent factors necessary for physics-based, long-term earthquake forecasting, namely interseismic crustal strain accumulation and sudden lithospheric stress release.
In this dissertation, I present the construction of and testing results for two global ensemble seismicity models, aimed at providing mean rates of shallow (0-70 km) earthquake activity for seismic hazard assessment. These models depend on the Subduction Megathrust Earthquake Rate Forecast (SMERF2), a stationary seismicity approach for subduction zones based on the conservation-of-moment principle and the use of regional "geodesy-to-seismicity" parameters, such as corner magnitudes, seismogenic thicknesses and subduction dip angles. Specifically, this interface-earthquake model combines geodetic strain rates with instrumentally recorded seismicity to compute long-term rates of seismic and geodetic moment. Based on this, I derive analytical solutions for seismic coupling and earthquake activity, which provide this earthquake model with the initial abilities to properly forecast interface seismicity. Then, I integrate SMERF2 interface-seismicity estimates with earthquake computations in non-subduction zones, provided by the Seismic Hazard Inferred From Tectonics seismicity approach based on the second iteration of the Global Strain Rate Map, to construct the global Tectonic Earthquake Activity Model (TEAM). TEAM is designed to reduce the inconsistencies in earthquake numbers, and potentially in their spatial distribution, shown by its predecessor tectonic earthquake model during the 2015-2017 period. Finally, I combine this new geodesy-based earthquake approach with a global smoothed-seismicity model to create the World Hybrid Earthquake Estimates based on Likelihood scores (WHEEL) model. This updated hybrid model serves as an alternative earthquake-rate approach to the Global Earthquake Activity Rate model for forecasting long-term rates of shallow seismicity everywhere on Earth.
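The moment-conservation step described above can be illustrated numerically. The following Python sketch is not the SMERF2 implementation: it converts a geodetic strain rate for one grid cell into a long-term earthquake rate via a Kostrov-style moment rate and a truncated Gutenberg-Richter magnitude distribution, with all parameter values (shear modulus, thickness, coupling, b-value, magnitude bounds) chosen as plausible assumptions for illustration:

```python
import numpy as np

# Illustrative only: convert a geodetic strain rate into a long-term
# earthquake rate via conservation of moment. All values are assumptions.
MU = 3.0e10                  # shear modulus, Pa
H = 14e3                     # seismogenic thickness, m
AREA = (100e3) ** 2          # 100 km x 100 km cell, m^2
SEC_PER_YR = 3.156e7
EDOT = 1e-7 / SEC_PER_YR     # max principal strain rate, 1/s (~100 nanostrain/yr)
COUPLING = 0.6               # assumed seismic coupling fraction

# Kostrov-style tectonic moment rate released seismically (N*m per second)
moment_rate = COUPLING * 2.0 * MU * H * AREA * EDOT

def mean_moment(b=1.0, m_min=5.0, m_max=8.5, n=4000):
    """Mean seismic moment per event for a truncated Gutenberg-Richter law."""
    m = np.linspace(m_min, m_max, n)
    dm = m[1] - m[0]
    pdf = b * np.log(10.0) * 10.0 ** (-b * (m - m_min))
    pdf /= pdf.sum() * dm                      # renormalize for truncation
    m0 = 10.0 ** (1.5 * m + 9.05)              # Hanks-Kanamori moment, N*m
    return (m0 * pdf).sum() * dm

# earthquake rate = seismic moment budget / mean moment per event
rate_per_year = moment_rate * SEC_PER_YR / mean_moment()
print(f"forecast rate of M>=5 events in the cell: {rate_per_year:.3f} per year")
```

Dividing the moment budget by the mean moment per event is what makes the forecast sensitive to the assumed corner/maximum magnitude: raising m_max shifts the budget toward rare large events and lowers the total event rate.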
Global seismicity models provide scientific hypotheses about when and where earthquakes may occur, and how big they might be. Nonetheless, the veracity of these hypotheses can only be either confirmed or rejected after prospective forecast evaluation. Therefore, I finally test the consistency and relative performance of these global seismicity models with independent observations recorded during the 2014-2019 pseudo-prospective evaluation period. As a result, hybrid earthquake models based on both geodesy and seismicity are the most informative seismicity models during the testing time frame, as they obtain higher information scores than their constituent model components. These results support the combination of interseismic strain measurements with earthquake-catalog data for improved seismicity modeling. However, further prospective evaluations are required to more accurately describe the capacities of these global ensemble seismicity models to forecast longer-term earthquake activity.
The particle noch (‘still’) can have an additive reading similar to auch (‘also’). We argue that both particles indicate that a previously partially answered QUD is re-opened to add a further answer. The particles differ in that the QUD, in the case of auch, can be re-opened with respect to the same topic situation, whereas noch indicates that the QUD is re-opened with respect to a new topic situation. This account predicts a difference in the accommodation behavior of the two particles. We present an experiment whose results are in line with this prediction.
Portal Wissen = Change
(2021)
Change makes everything different. Let’s be honest: Just about everything is constantly in transformation. Even huge massifs that seem like eternity turned to stone will eventually dissolve into dust. So is change itself the only constant? The Greek philosopher Heraclitus certainly thought so. He said, “The only thing that is constant is change.”
Change is frightening. A change that we cannot explain throws us into turmoil – like a magic trick we cannot decipher. Viruses that mutate, ecosystems that collapse, stars that perish – they all seem to threaten the fragile balance that makes our existence possible. Humanity is late in recognizing that we ourselves are all too often the impetus for dangerous transformations.
Change gives hope. People have always been fascinated by change and felt compelled to explore its origin and essence. Quite successfully. We understand many things much better than generations before. But well enough? Not at all. Alexander von Humboldt said, “Every law of nature that reveals itself to the observer suggests a higher, as yet unrecognized one.” There is still much to be done.
The current issue of Portal Wissen is all about change. We spoke to an astrophysicist who has found her happiness in researching the formation and change of stars. We also look at different aspects of the very earthly climate change and its consequences: A geoscientist explains how global warming affects the stability of mountain ranges.
A legal expert makes clear that the call for a right to climate protection has gone largely unheard until now. How human land use affects biodiversity is being investigated by young researchers of the “BioMove” research training group, who have provided us with insights into their work on brown hares, water fleas, and mallard ducks. Other researchers focus on change in human contexts. A group of nutrition scientists at the German Institute of Human Nutrition (DIfE) and sports scientists at the University of Potsdam are investigating the factors that cause our bodies to change as we age – and why some people lose muscle more quickly than others.
Despite all these changes, we do not lose sight of the diversity of research at the University of Potsdam. A visit to the laboratory of the project “OptiZeD” gives us an idea of the possibilities offered by optical sensors for the personalized medicine of tomorrow, while an educational researcher explains why cultural diversity is an asset beneficial to our education. In addition, a cultural scientist reports on the fascination of comics. They are all part of the hopeful change that science is initiating and accomplishing! Enjoy the read!
The mitochondrial chaperone complex HSP60/HSP10 facilitates mitochondrial protein homeostasis by folding more than 300 mitochondrial matrix proteins. It has been shown previously that HSP60 is downregulated in the brains of type 2 diabetic (T2D) mice and patients, causing mitochondrial dysfunction and insulin resistance. As HSP60 is also decreased in peripheral tissues of T2D animals, this thesis investigated the effect of an overall reduction of HSP60 on the development of obesity and associated comorbidities.
To this end, female and male C57Bl/6N control mice (i.e., without further alterations in their genome; Ctrl) and heterozygous whole-body Hsp60 knock-out (Hsp60+/-) mice, which exhibit a 50% reduction of HSP60 in all tissues, were fed a normal chow diet (NCD) or a high-fat diet (HFD, 60% of calories from fat) for 16 weeks and were subjected to extensive metabolic phenotyping, including indirect calorimetry, NMR spectroscopy, insulin, glucose and pyruvate tolerance tests, vena cava insulin injections, as well as histological and molecular analyses.
Interestingly, NCD feeding did not result in any striking phenotype, only a mild increase in energy expenditure in Hsp60+/- mice. Exposing mice to a HFD, however, revealed an increased body weight due to higher muscle mass in female Hsp60+/- mice, with a simultaneous decrease in energy expenditure. Additionally, these mice displayed decreased fasting glycemia. Conversely, male Hsp60+/- mice showed lower body weight gain than controls due to decreased fat mass and an increased energy expenditure, strikingly independent of lean mass. Further, only male Hsp60+/- mice displayed improved HOMA-IR and Matsuda insulin sensitivity indices.
Despite the opposite phenotypes with regard to body weight development, Hsp60+/- mice of both sexes showed a significantly higher cell number, as well as a reduction in adipocyte size, in the subcutaneous and gonadal white adipose tissue (sc/gWAT). Curiously, this adipocyte hyperplasia – usually associated with positive aspects of WAT function – is disconnected from metabolic improvements, as the gWAT of male Hsp60+/- mice shows mitochondrial dysfunction, oxidative stress, and insulin resistance. Transcriptomic analysis of gWAT shows an upregulation of genes involved in macroautophagy. Consistent with this, expression of microtubule-associated protein 1A/1B light chain 3B (LC3), a protein marker of autophagy, and directly measured lysosomal activity are increased in the gWAT of male Hsp60+/- mice.
In summary, this thesis revealed a novel gene-nutrient interaction: the reduction of the crucial chaperone HSP60 had little effect in mice fed a NCD but impacted metabolism during diet-induced obesity (DIO) in a sex-specific manner, where, despite opposing body weight and body composition phenotypes, both female and male Hsp60+/- mice show signs of protection from HFD-induced systemic insulin resistance.
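The two insulin sensitivity indices mentioned above follow standard published formulas (Matthews et al. for HOMA-IR; Matsuda-DeFronzo for the composite index), which are conventionally also applied to rodent data. A minimal Python sketch with purely hypothetical input values, not the thesis data:

```python
import math

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """HOMA-IR (Matthews et al.): higher values = more insulin resistant."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def matsuda_index(glucose_mg_dl, insulin_uU_ml):
    """Matsuda-DeFronzo composite insulin sensitivity index from glucose
    tolerance test time courses (fasting sample first); higher = more sensitive."""
    g0, i0 = glucose_mg_dl[0], insulin_uU_ml[0]
    g_mean = sum(glucose_mg_dl) / len(glucose_mg_dl)
    i_mean = sum(insulin_uU_ml) / len(insulin_uU_ml)
    return 10000.0 / math.sqrt(g0 * i0 * g_mean * i_mean)

# hypothetical fasting glucose 8 mmol/L and insulin 12 uU/mL
print(f"HOMA-IR: {homa_ir(8.0, 12.0):.2f}")
# hypothetical tolerance-test time courses (mg/dL and uU/mL)
print(f"Matsuda: {matsuda_index([90, 160, 140, 120], [10, 45, 35, 20]):.2f}")
```

An "improved" index in the male Hsp60+/- mice thus means a lower HOMA-IR and a higher Matsuda value than in controls.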
Learning to read in German
(2021)
In the present dissertation, the development of eye movement behavior and the perceptual span of German beginning readers was investigated in Grades 1 to 3 (Study 1), longitudinally within a one-year interval (Study 2), and in relation to intrinsic and extrinsic reading motivation (Study 3). The presented results are intended to fill a gap: information on young readers’ eye movements is sparse, and information on German young readers’ perceptual span and its development is missing entirely. Moreover, reading motivation data have so far been scrutinized with respect to reciprocal effects on reading comprehension, but not with respect to the more immediate, basic cognitive processing (e.g., word decoding) indicated by different eye movement measures. Based on a longitudinal study design, children in Grades 1–3 participated in a moving-window reading experiment with eye movement recordings in two successive years. All children were participants in a larger longitudinal study on intrapersonal developmental risk factors in childhood and adolescence (the PIER study). Motivation data and other psychometric reading data were collected during individual inquiries and tests at school. Data analyses were realized in three separate studies that focused on different but related aspects of reading and perceptual span development. Study 1 presents the first cross-sectional report on the perceptual span of beginning German readers. The focus was on reading rate changes in Grades 1 to 3 and on the onset of perceptual span development and its dependence on basic foveal reading processes. Study 2, a successor of Study 1, provides the first longitudinal data on the perceptual span in elementary school children. It also includes information on the stability of observed and predicted reading rates and perceptual span sizes and introduces a new measure of the perceptual span based on nonlinear mixed-effects models.
Another issue addressed in this study is the longitudinal between-group comparison of slower and faster readers which refers to the detection of developmental patterns. Study 3 includes longitudinal reading motivation data and investigates the relation between different eye movement measures including perceptual span and intrinsic as well as extrinsic reading motivation. In Study 1, a decelerated increase in reading rate was observed between Grades 1 to 3. Grade effects were also reported for saccade length, refixation probability, and different fixation duration measures. With higher grade, mean saccade length increased, whereas refixation probability, first-fixation duration, gaze duration, and total reading time decreased. Perceptual span development was indicated by an increase in window size effects with grade level. Grade level differences with respect to window size effects were stronger between Grades 2 and 3 than between Grades 1 and 2. These results were replicated longitudinally in Study 2. Again, perceptual span size significantly changed between Grades 2 and 3, but not between Grades 1 and 2 or Grades 3 and 4. Observed and predicted reading rates were found to be highly stable after first grade, whereas stability of perceptual span was only moderate for all grade levels. Group differences between slower and faster readers in Year 1 remained observable in Year 2 showing a pattern of stable achievement differences rather than a compensatory pattern. Between Grades 2 and 3, between-group differences in reading rate even increased resulting in a Matthew effect. A similar effect was observed for perceptual span development between Grades 3 and 4. Finally, in Study 3, significant relations between beginning readers’ eye movements and their reading motivation were observed. 
In both years of measurement, higher intrinsic reading motivation was related to more skilled eye movement patterns as indicated by short fixations, longer saccades, and higher reading rates. In Year 2, intrinsic reading motivation was also significantly and negatively correlated with refixation probability. These correlational patterns were confirmed in cross-sectional linear models controlling for grade level and reading amount and including both reading motivation measures, extrinsic and intrinsic motivation. While there were significant positive relations between intrinsic reading motivation and word decoding as indicated by the above stated eye movement measures, extrinsic reading motivation only predicted variance in eye movements in Year 2 (significant for fixation durations and reading rate), with a consistently opposite pattern of effects as compared to intrinsic reading motivation. Finally, longitudinal effects of Year 1 intrinsic reading motivation on Year 2 word decoding were observed for gaze duration, total reading time, refixation probability, and perceptual span within cross-lagged panel models. These effects were reciprocal because all eye movement measures significantly predicted variance in intrinsic reading motivation. Extrinsic reading motivation in Year 1 did not affect any eye movement measure in Year 2, and vice versa, except for a significant, negative relation with perceptual span. Concluding, the present dissertation demonstrates that largest gains in reading development in terms of eye movement changes are observable between Grades 1 and 2. Together with the observed pattern of stable differences between slower and faster readers and a widening achievement gap between Grades 2 and 3 for reading rate, these results underline the importance of the first year(s) of formal reading instruction. The development of the perceptual span lags behind as it is most apparent between Grades 2 and 3. 
This suggests that efficient parafoveal processing presupposes a certain degree of foveal reading proficiency (e.g., word decoding). Finally, this dissertation demonstrates that intrinsic reading motivation—but not extrinsic motivation—effectively supports the development of skilled reading.
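The moving-window logic behind the perceptual span estimates can be illustrated with a simplified, single-child curve fit (the dissertation itself uses nonlinear mixed-effects models, and the exact model form here is an assumption): reading rate saturates as the window widens, and the span can be read off as the window size at which the rate approaches its asymptote. A Python sketch on synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def rate_model(w, asymptote, scale, decay):
    """Reading rate rises toward an asymptote as the moving window widens."""
    return asymptote - scale * np.exp(-decay * w)

# synthetic child: window sizes in letters right of fixation, rates in words/min
windows = np.array([1, 3, 5, 7, 9, 12, 15], dtype=float)
rng = np.random.default_rng(1)
rates = rate_model(windows, 120.0, 80.0, 0.35) + rng.normal(0.0, 3.0, windows.size)

(asym, scale, decay), _ = curve_fit(rate_model, windows, rates, p0=[100.0, 50.0, 0.3])
# read off the span as the window size where the rate reaches 95% of its asymptote
span = -np.log(0.05 * asym / scale) / decay
print(f"estimated span: {span:.1f} letters (asymptote {asym:.0f} wpm)")
```

A wider estimated span for older children would then show up as the fitted curve continuing to rise over larger window sizes before leveling off.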
“Embodied Practices – Looking From Small Places” is an edited transcript of a conversation between theatre and performance scholar Sruti Bala (University of Amsterdam) and sociologist, criminologist and anthropologist Dylan Kerrigan (University of Leicester) that took place as an online event in November 2020. Throughout their talk, Bala and Kerrigan engage with the legacy of Haitian anthropologist Michel-Rolph Trouillot. Specifically, they focus on his approach of looking from small units, such as small villages in Dominica, outwards to larger political structures such as global capitalism, social inequalities and the distribution of power. They also share insights from their own research on embodied practices in the Caribbean, Europe and India and answer questions such as: What can research on and through embodied practices tell us about systems of power and domination that move between the local and the global? How can performance practices which are informed by multiple locations and cultures be read and appreciated adequately? Sharing insights from his research into Guyanese prisons, Kerrigan outlines how he aims to connect everyday experiences and struggles of Caribbean people to trans-historical and transnational processes such as racial capitalism and post/coloniality. Furthermore, he elaborates on how he uses performance practices such as spoken word poetry and data verbalisation to connect with systematically excluded groups. Bala challenges naïve notions about the inherent transformative potential of performance in her research on performance and translation. She points to the way in which performance and its reception is always already inscribed in what she calls global or planetary asymmetries. At the conclusion of this conversation, they broach the question: are small places truly as small as they seem?
This open access book presents a topical, comprehensive and differentiated analysis of Germany’s public administration and reforms. It provides an overview on key elements of German public administration at the federal, Länder and local levels of government as well as on current reform activities of the public sector. It examines the key institutional features of German public administration; the changing relationships between public administration, society and the private sector; the administrative reforms at different levels of the federal system and numerous sectors; and new challenges and modernization approaches like digitalization, Open Government and Better Regulation. Each chapter offers a combination of descriptive information and problem-oriented analysis, presenting key topical issues in Germany which are relevant to an international readership.
One of the key challenges in modern Facility Management (FM) is to digitally reflect the current state of the built environment, referred to as the as-is or as-built (as opposed to as-designed) representation. While the use of Building Information Modeling (BIM) can address the issue of digital representation, the generation and maintenance of BIM data require a considerable amount of manual work and domain expertise. Another key challenge is monitoring the current state of the built environment, which is used to provide feedback and enhance decision making. The need for an integrated solution for all data associated with the operational life cycle of a building is becoming more pronounced as practices from Industry 4.0 are currently being evaluated and adopted for FM use. This research presents an approach for the digital representation of indoor environments in their current state within the life cycle of a given building. Such an approach requires the fusion of various sources of digital data. The key to solving this complex issue of digital data integration, processing and representation is the use of a Digital Twin (DT). A DT is a digital duplicate of the physical environment, its states, and its processes. A DT fuses as-designed and as-built digital representations of the built environment, typically in the form of floorplans, point clouds and BIMs, with as-is data, i.e., additional information layers pertaining to the current and predicted states of an indoor environment or a complete building (e.g., sensor data). The design, implementation and initial testing of prototypical DT software services for indoor environments are presented and described. These DT software services are implemented within a service-oriented paradigm, and their feasibility is demonstrated through functioning and tested key software components within prototypical Service-Oriented System (SOS) implementations.
The main outcome of this research shows that key data related to the built environment can be semantically enriched and combined to enable digital representations of indoor environments, based on the concept of a DT. Furthermore, the outcomes of this research show that digital data, related to FM and Architecture, Construction, Engineering, Owner and Occupant (AECOO) activity, can be combined, analyzed and visualized in real-time using a service-oriented approach. This has great potential to benefit decision making related to Operation and Maintenance (O&M) procedures within the scope of the post-construction life cycle stages of typical office buildings.
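A minimal sketch of the data-fusion idea behind such a DT service — illustrative only, with hypothetical class and field names, and no claim about the prototype's actual API — might merge static BIM-derived room attributes with live sensor layers into one as-is state:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Room:
    guid: str                 # e.g. an IFC GlobalId taken from the BIM model
    name: str
    area_m2: float            # static as-designed/as-built attribute
    sensors: Dict[str, float] = field(default_factory=dict)  # live as-is layer

class DigitalTwinService:
    """Fuses static BIM-derived attributes with live sensor readings."""
    def __init__(self) -> None:
        self._rooms: Dict[str, Room] = {}

    def register_room(self, room: Room) -> None:
        self._rooms[room.guid] = room

    def ingest_reading(self, guid: str, channel: str, value: float) -> None:
        """Attach or update one sensor channel (e.g. a temperature feed)."""
        self._rooms[guid].sensors[channel] = value

    def as_is_state(self, guid: str) -> dict:
        """One fused view: BIM attributes plus the current sensor layer."""
        r = self._rooms[guid]
        return {"guid": r.guid, "name": r.name, "area_m2": r.area_m2, **r.sensors}

dt = DigitalTwinService()
dt.register_room(Room("room-001", "Office 1.02", 24.5))
dt.ingest_reading("room-001", "temperature_C", 21.5)
print(dt.as_is_state("room-001"))
```

In a service-oriented deployment of this kind, `register_room` would be fed from the BIM pipeline and `ingest_reading` from sensor gateways, while `as_is_state` would back the query endpoint consumed by O&M dashboards.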
The ubiquitin-proteasome system (UPS) is a cellular cascade in which proteins are ubiquitinated in three enzymatic steps, targeting them to the 26S proteasome for proteolytic degradation. Several components of the UPS have been shown to be central to the regulation of defense responses during infections with phytopathogenic bacteria. Upon recognition of the pathogen, local defense is induced, which also primes the plant to acquire systemic acquired resistance (SAR) for enhanced immune responses upon challenging infections. Here, ubiquitinated proteins were shown to accumulate locally and systemically during infections with Psm and after treatment with the SAR-inducing metabolites salicylic acid (SA) and pipecolic acid (Pip). The role of the 26S proteasome in local defense has been described in several studies, but its potential role during SAR remains elusive and was therefore investigated in this project by characterizing the Arabidopsis proteasome mutants rpt2a-2 and rpn12a-1 during priming and infections with Pseudomonas. Bacterial replication assays revealed decreased basal and systemic immunity in both mutants, which was verified on the molecular level by impaired activation of defense and SAR genes. rpt2a-2 and rpn12a-1 accumulate wild-type-like levels of camalexin but less SA. Exogenous SA treatment restores local PR gene expression but does not rescue the SAR phenotype. An RNAseq experiment with Col-0 and rpt2a-2 revealed weak or absent induction of defense genes in the proteasome mutant during priming. Thus, a functional 26S proteasome was found to be required for the induction of SAR, while compensatory mechanisms can still be initiated.
E3-ubiquitin ligases conduct the last step of substrate ubiquitination and thereby convey specificity to proteasomal protein turnover. Using RNAseq, 11 E3-ligases were found to be differentially expressed during priming in Col-0 of which plant U-box 54 (PUB54) and ariadne 12 (ARI12) were further investigated to gain deeper understanding of their potential role during priming.
PUB54 was shown to be expressed during priming and/or triggering with virulent Pseudomonas. pub54-I and pub54-II mutants display local and systemic defense comparable to Col-0. The heavy-metal-associated protein 35 (HMP35) was identified as a potential substrate of PUB54 in yeast, which was verified in vitro and in vivo. PUB54 was shown to be an active E3 ligase exhibiting auto-ubiquitination activity and performing ubiquitination of HMP35. Proteasomal turnover of HMP35 was observed, indicating that PUB54 targets HMP35 for ubiquitination and subsequent proteasomal degradation. Furthermore, hmp35-I exhibits increased resistance in bacterial replication assays. Thus, HMP35 is potentially a negative regulator of defense that is targeted and ubiquitinated by PUB54 to regulate downstream defense signaling. ARI12 is transcriptionally activated during priming or triggering and hyperinduced during combined priming and triggering. Its expression is not inducible by the defense-related hormone salicylic acid (SA) and is dampened in npr1 and fmo1 mutants, and consequently depends on functional SA and Pip pathways, respectively. ARI12 accumulates systemically after priming with SA, Pip or Pseudomonas. ari12 mutants are not altered in resistance, but stable overexpression leads to increased resistance in local and systemic tissue. During priming and triggering, unbalanced ARI12 levels (i.e. knockout or overexpression) lead to enhanced FMO1 activation, indicating a role of ARI12 in Pip-mediated SAR. ARI12 was shown to be an active E3 ligase with auto-ubiquitination activity, likely required for its activation, with an identified ubiquitination site at K474. Potential substrates identified by mass spectrometry have not yet been verified by additional experiments but suggest an involvement of ARI12 in the regulation of ROS, which in turn regulates Pip-dependent SAR pathways.
Thus, the data from this project provide strong indications of the involvement of the 26S proteasome in SAR and identify a central role of the two hitherto barely described E3 ubiquitin ligases PUB54 and ARI12 as novel components of plant defense.
The business problem of having inefficient processes, imprecise process analyses and simulations, as well as non-transparent artificial neural network models, can be overcome by an easy-to-use modeling concept. With the aim of developing a flexible and efficient approach to modeling, simulating, and optimizing processes, this paper proposes a flexible Concept of Neuronal Modeling (CoNM). The modeling concept, described by a purpose-designed modeling language with a mathematical formulation and connected to a technical substantiation, is based on a collection of novel sub-artifacts. As these have been implemented as a computational model, the set of CoNM tools carries out novel kinds of Neuronal Process Modeling (NPM), Neuronal Process Simulations (NPS), and Neuronal Process Optimizations (NPO). The efficacy of the designed artifacts was demonstrated rigorously by means of six experiments and a simulator of real industrial production processes.
This paper reads ‘The Detainee’s Tale as told to Ali Smith’ (2016) as an exemplary demonstration of the work of world literature. Smith’s story articulates an ethics of reading that is grounded in the recipient’s openness to the singular, unpredictable, and unverifiable text of the other. More specifically, Smith’s account enables the very event that it painstakingly stages: the encounter with alterity and newness, which is both the theme of the narrative and the effect of the text on the reader. At the same time, however, the text urges to move from an ethics of literature understood as the responsible reception of the other by an individual reader to a more explicitly convivial and political ethics of commitment beyond the scene of reading.
Promoting the decarbonization of economic activity through climate policies raises many questions. From a macroeconomic perspective, it is important to understand how these policies perform under uncertainty, how they affect short-run dynamics and to what extent they have distributional effects. In addition, uncertainties directly associated with climate policies, such as uncertainty about the carbon budget or emission intensities, become relevant aspects. We study the implications of emission reduction schemes within a Two-Agent New-Keynesian (TANK) model. This quantitative exercise, based on data for the German economy, provides various insights. In the light of frictions and fluctuations, compared to other instruments, a carbon price (i.e. tax) is associated with lower volatility in output and consumption. In terms of aggregate welfare, price instruments are found to be preferable. Conditional on the distribution of revenues from climate policies, quantity instruments can exert regressive effects, posing a larger economic loss on wealth-poor households, whereas price instruments are moderately progressive. Finally, we find that unexpected changes in climate policies can induce substantial aggregate adjustments. With uncertainty about the carbon budget, the costs of adjustment are larger under quantity instruments.
Deoxyribonucleic acid (DNA) nanostructures enable the attachment of functional molecules to nearly any unique location on their underlying structure. Owing to the nanostructures' single-base-pair structural resolution, several ligands can be spatially arranged and closely controlled according to the geometry of their desired target, resulting in optimized binding and/or signaling interactions.
This dissertation covers three main projects. All of them use variations of functionalized DNA nanostructures that act as platforms for the oligovalent presentation of ligands. The purpose of this work was to evaluate the ability of DNA nanostructures to precisely display different types of functional molecules and to consequently enhance their efficacy according to the concept of multivalency. Moreover, functionalized DNA structures were examined for their suitability in functional screening assays. The developed DNA-based compound ligands were used to target structures in different biological systems.
One part of this dissertation attempted to bind pathogens with small modified DNA nanostructures. Pathogens like viruses and bacteria are known for their multivalent attachment to host cell membranes. The objective was to block, in an oligovalent manner, the receptors these pathogens use for recognition of and/or fusion with their host, and thereby impede their ability to adhere to and invade cells. For influenza A, only enhanced binding of oligovalent peptide-DNA constructs compared to the monovalent peptide could be observed, whereas in the case of respiratory syncytial virus (RSV), binding as well as blocking of the target receptors led to an increased inhibition of infection in vitro.
In the final part, the ability of chimeric DNA-peptide constructs to bind to and activate signaling receptors on the surface of cells was investigated. Specific binding of DNA trimers, conjugated with up to three peptides, to EphA2-receptor-expressing cells was evaluated in flow cytometry experiments. Subsequently, their ability to activate these receptors via phosphorylation was assessed. EphA2 phosphorylation was significantly increased by DNA trimers carrying three peptides compared to the monovalent peptide. As a result of activation, cells underwent characteristic morphological changes, whereby they "round up" and retract their periphery.
The results obtained in this work comprehensively prove the capability of DNA nanostructures to serve as stable, biocompatible, controllable platforms for the oligovalent presentation of functional ligands. Functionalized DNA nanostructures were used to enhance biological effects and as tool for functional screening of bio-activity. This work demonstrates that modified DNA structures have the potential to improve drug development and to unravel the activation of signaling pathways.
While W.E.B. Du Bois’s first novel, The Quest of the Silver Fleece (1911), is set squarely in the USA, his second work of fiction, Dark Princess: A Romance (1928), abandons this national framework, depicting the treatment of African Americans in the USA as embedded into an international system of economic exploitation based on racial categories. Ultimately, the political visions offered in the novels differ starkly, but both employ a Western literary canon – so-called ‘classics’ from Greek, German, English, French, and US American literature. With this, Du Bois attempts to create a new space for African Americans in the world (literature) of the 20th century. Weary of the traditions of this ‘world literature’, the novels complicate and begin to decenter the canon that they draw on. This reading traces what I interpret as subtle signs of frustration over the limits set by the literature that underlies Dark Princess, while its predecessor had been more optimistic in its appropriation of Eurocentric fiction for its propagandist aims.
Massive Open Online Courses (MOOCs) open up new opportunities to learn a wide variety of skills online and are thus well suited for individual education, especially where proficient teachers are not available locally. At the same time, modern society is undergoing a digital transformation, requiring the training of large numbers of current and future employees. Abstract thinking, logical reasoning, and the need to formulate instructions for computers are becoming increasingly relevant. A holistic way to train these skills is to learn how to program. Programming, in addition to being a mental discipline, is also considered a craft, and practical training is required to achieve mastery. In order to effectively convey programming skills in MOOCs, practical exercises are incorporated into the course curriculum to offer students the necessary hands-on experience to reach an in-depth understanding of the programming concepts presented. Our preliminary analysis showed that while being an integral and rewarding part of courses, practical exercises bear the risk of overburdening students who are struggling with conceptual misunderstandings and unknown syntax. In this thesis, we develop, implement, and evaluate different interventions with the aim to improve the learning experience, sustainability, and success of online programming courses. Data from four programming MOOCs, with a total of over 60,000 participants, are employed to determine criteria for practical programming exercises best suited for a given audience.
Based on over five million executions and scoring runs from students' task submissions, we deduce exercise difficulties, students' patterns in approaching the exercises, and potential flaws in exercise descriptions as well as preparatory videos. The primary issue in online learning is that students face a social gap caused by their isolated physical situation. Each individual student usually learns alone in front of a computer and suffers from the absence of a pre-determined time structure as provided in traditional school classes. Furthermore, online learning usually presses students into a one-size-fits-all curriculum, which presents the same content to all students, regardless of their individual needs and learning styles. Any personalization of content, or individual feedback on the problems students encounter, is mostly ruled out by the discrepancy between the number of learners and the number of instructors. This results in a high demand for self-motivation and determination from MOOC participants. Social distance exists between individual students as well as between students and course instructors. It decreases engagement and poses a threat to learning success. Within this research, we approach the identified issues within MOOCs and suggest scalable technical solutions, improving social interaction and balancing content difficulty.
Our contributions include situational interventions, approaches for personalizing educational content as well as concepts for fostering collaborative problem-solving. With these approaches, we reduce counterproductive struggles and create a universal improvement for future programming MOOCs. We evaluate our approaches and methods in detail to improve programming courses for students as well as instructors and to advance the state of knowledge in online education.
Data gathered from our experiments show that receiving peer feedback on one's programming problems improves overall course scores by up to 17%. Merely the act of phrasing a question about one's problem improved overall scores by about 14%. The rate of students reaching out for help was significantly improved by situational just-in-time interventions. Request for Comment interventions increased the share of students asking for help by up to 158%. Data from our four MOOCs further provide detailed insight into the learning behavior of students. We outline additional significant findings with regard to student behavior and demographic factors. Our approaches, the technical infrastructure, the numerous educational resources developed, and the data collected provide a solid foundation for future research.
Paths Are Made by Walking
(2021)
Introduction
(2021)
This edited volume examines entanglements and disentanglements between Africa and East Germany during and after the Cold War from a global history perspective. Extending the view beyond political elites, it examines the negotiated and plural character of socialism in these encounters and sheds light on migration, media, development, and solidarity through personal and institutional agency. With its distinctive focus on moorings and unmoorings, the volume shows how the encounters, albeit often brief, significantly influenced both African and East German histories.
Simultaneously speculative and inspired by everyday experiences, this volume develops an aesthetics of metabolism that offers a new perspective on the human-environment relation, one that is processual, relational, and not dependent on conscious thought. In art installations, design prototypes, and research-creation projects that utilize air, light, or temperature to impact subjective experience, the author finds aesthetic milieus that shift our awareness to the role of different sense modalities in aesthetic experience. Metabolic and atmospheric processes allow for an aesthetics besides and beyond the usually dominant visual sense.
Against a background of increasing violence against non-natives, we estimate the effect of hate crime on refugees’ mental health in Germany. For this purpose, we combine two datasets: administrative records on xenophobic crime against refugee shelters by the Federal Criminal Office and the IAB-BAMF-SOEP Survey of Refugees. We apply a regression discontinuity in time design to estimate the effect of interest. Our results indicate that hate crime has a substantial negative effect on several mental health indicators, including the Mental Component Summary score and the Patient Health Questionnaire-4 score. The effects are stronger for refugees with closer geographic proximity to the focal hate crime and refugees with low country-specific human capital. While the estimated effect is only transitory, we argue that negative mental health shocks during the critical period after arrival have important long-term consequences.
Keywords: mental health, hate crime, migration, refugees, human capital.
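The regression discontinuity in time (RDiT) design described above can be illustrated with a minimal sketch: an OLS regression with a post-event dummy and separate linear time trends on each side of the cutoff. The data below are synthetic and all variable names and the simulated effect size are illustrative assumptions, not the study's records:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic interview dates centred on the day of a hypothetical focal event.
t = rng.uniform(-30, 30, size=2000)      # days relative to the event
post = (t >= 0).astype(float)            # exposure dummy
true_effect = -0.5                       # assumed drop in a mental health score
y = 10.0 + 0.01 * t + true_effect * post + rng.normal(0, 1, size=t.size)

# RDiT as OLS: intercept, exposure dummy, and a time trend on each side.
# The coefficient on `post` estimates the jump in the outcome at the cutoff.
X = np.column_stack([np.ones_like(t), post, t, post * t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated effect at the cutoff: {beta[1]:.2f}")
```

In practice such designs additionally restrict the bandwidth around the cutoff and cluster standard errors; this sketch only shows the identifying regression.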
This article is a discussion of Plin. Ep. 7.29 and Ep. 8.6, in which he presents his reaction to seeing the grave monument of Marcus Antonius Pallas, the freedman and minister of the Emperor Claudius, beside the Via Tiburtina. The monument records a senatorial vote of thanks to Pallas, and Pliny expresses intense indignation at the Senate’s subservience and at the power and influence wielded by a freedman. This article compares Pliny’s letters with Tacitus’ account of the senatorial vote of thanks to Pallas at Ann. 12.52–3 and explores the differences between the ways in which the two authors encourage readers to relate to past events. It is noted that the Pallas letters are unusual amongst Pliny’s letters for their treatment of material unconnected with the life and career of Pliny and his friends, and argued that in Ep. 7.29 Pliny uses language and attitudes drawn from satire to evoke the past. Ep. 8.6 is read as an idiosyncratic piece of historical enquiry, considering Pliny’s use of citation and his anonymization of historical individuals. Both letters are considered in the context of the surrounding letters, and a hypothesis is offered regarding the identity of their addressee Montanus, considering evidence from Tacitus’ Histories and Annals. Discussion of Tac. Ann. 12.52–3 focusses on the use of irony. Pliny’s evocation of enargeia (‘vividness’) is compared with that of Tacitus. The article concludes with comparison of the historical accounts offered by Pliny and Tacitus through reflection on Juvenal, Satire 1.
The Central Andes region in South America is characterized by a complex and heterogeneous deformation system. Recorded seismic activity and mapped neotectonic structures indicate that most of the intraplate deformation is located along the margins of the orogen, in the transitions to the foreland and the forearc. Furthermore, the actively deforming provinces of the foreland exhibit distinct deformation styles that vary along strike, as well as characteristic distributions of seismicity with depth. The style of deformation transitions from thin-skinned in the north to thick-skinned in the south, and the thickness of the seismogenic layer increases to the south. Based on geological/geophysical observations and numerical modelling, the most commonly invoked causes for the observed heterogeneity are variations in sediment thickness and composition, the presence of inherited structures, and changes in the dip of the subducting Nazca plate. However, there are still no comprehensive investigations of the relationship between the lithospheric composition of the Central Andes, its rheological state and the observed deformation processes. The central aim of this dissertation is therefore to explore the link between the nature of the lithosphere in the region and the location of active deformation. The study of the lithospheric composition by means of independent-data integration establishes a strong base to assess the thermal and rheological state of the Central Andes and its adjacent lowlands, which in turn provides new foundations for understanding the complex deformation of the region. Along these lines, the general workflow of the dissertation consists of the construction of a 3D data-derived and gravity-constrained density model of the Central Andean lithosphere, followed by the simulation of the steady-state conductive thermal field and the calculation of the strength distribution.
Additionally, the dynamic response of the orogen-foreland system to intraplate compression is evaluated by means of 3D geodynamic modelling.
The results of the modelling approach suggest that the inherited heterogeneous composition of the lithosphere controls the present-day thermal and rheological state of the Central Andes, which in turn influence the location and depth of active deformation processes. Most of the seismic activity and neotectonic structures are spatially correlated with regions of modelled high strength gradients, in the transition from the felsic, hot and weak orogenic lithosphere to the more mafic, cooler and stronger lithosphere beneath the forearc and the foreland. Moreover, the results of the dynamic simulation show a strong localization of deviatoric strain rate second invariants in the same region, suggesting that shortening is accommodated at the transition zones between weak and strong domains. The vertical distribution of seismic activity appears to be influenced by the rheological state of the lithosphere as well. The depth at which the frequency distribution of hypocenters starts to decrease in the different morphotectonic units correlates with the position of the modelled brittle-ductile transitions; accordingly, a fraction of the seismic activity is located within the ductile part of the crust. An exhaustive analysis shows that practically all the seismicity in the region is restricted to above the 600°C isotherm, coinciding with the upper temperature limit for brittle behavior of olivine. Therefore, the occurrence of earthquakes below the modelled brittle-ductile transition could be explained by the presence of strong residual mafic rocks from past tectonic events. Another potential cause of deep earthquakes is the existence of inherited shear zones in which brittle behavior is favored through a decrease in the friction coefficient. This hypothesis is particularly suitable for the broken foreland provinces of the Santa Barbara System and the Pampean Ranges, where geological studies indicate successive reactivation of structures through time.
Particularly in the Santa Barbara System, the results indicate that both mafic rocks and a reduction in friction are required to account for the observed deep seismic events.
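The strength distribution and brittle-ductile transition (BDT) invoked above can be illustrated with a textbook one-dimensional yield-strength envelope: frictional (Byerlee-type) strength grows linearly with depth, while dislocation-creep strength decays with temperature, and the BDT sits where the two curves cross. All parameter values below are generic textbook numbers (wet-quartzite creep, a linear geotherm), not the values of the dissertation's 3D model:

```python
import math

# Generic parameters (illustrative, not the dissertation's model values).
RHO, G, F, LAMBDA = 2800.0, 9.81, 0.6, 0.36   # density (kg/m^3), gravity, friction, pore-fluid factor
A, N, Q, R = 3.2e-20, 3.0, 2.76e5, 8.314      # wet-quartzite creep: pre-factor (Pa^-n/s), exponent, activation energy (J/mol), gas constant
STRAIN_RATE = 1e-15                           # reference tectonic strain rate (1/s)
GEOTHERM = 25.0                               # linear geotherm (K/km)

def brittle(z_km):
    """Frictional (Byerlee-type) strength in Pa at depth z."""
    return F * RHO * G * (z_km * 1e3) * (1 - LAMBDA)

def ductile(z_km):
    """Dislocation-creep strength in Pa at depth z for the given geotherm."""
    T = 273.0 + GEOTHERM * z_km
    return (STRAIN_RATE / A) ** (1 / N) * math.exp(Q / (N * R * T))

# Brittle-ductile transition: shallowest depth (0.1 km steps) where
# creep strength falls below frictional strength.
bdt = next(z / 10 for z in range(1, 600) if ductile(z / 10) < brittle(z / 10))
print(f"BDT at about {bdt:.1f} km")
```

With a hotter geotherm the creep curve weakens and the BDT shallows, which is the mechanism by which the modelled thermal field controls the depth extent of seismicity in the text above.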
While the last few decades have seen impressive improvements in several areas in Natural Language Processing, asking a computer to make sense of the discourse of utterances in a text remains challenging. There are several different theories that aim to describe and analyse the coherent structure that a well-written text exhibits. These theories have varying degrees of applicability and feasibility for practical use. Presumably the most data-driven of these theories is the paradigm that comes with the Penn Discourse TreeBank, a corpus annotated for discourse relations containing over 1 million words. Any language other than English, however, can be considered a low-resource language when it comes to discourse processing.
This dissertation is about shallow discourse parsing (discourse parsing following the paradigm of the Penn Discourse TreeBank) for German. The limited availability of annotated data for German means the potential of modern, deep-learning based methods relying on such data is also limited. This dissertation explores to what extent machine-learning and more recent deep-learning based methods can be combined with traditional, linguistic feature engineering to improve performance for the discourse parsing task. A pivotal role is played by connective lexicons that exhaustively list the discourse connectives of a particular language along with some of their core properties.
To facilitate training and evaluation of the methods proposed in this dissertation, an existing corpus (the Potsdam Commentary Corpus) has been extended and additional data has been annotated from scratch. The approach to end-to-end shallow discourse parsing for German adopts a pipeline architecture and either presents the first results or improves over the state of the art for German for the individual sub-tasks of the discourse parsing task, which are, in processing order, connective identification, argument extraction and sense classification. The end-to-end shallow discourse parser for German that has been developed for the purpose of this dissertation is open-source and available online.
In the course of writing this dissertation, work has been carried out on several connective lexicons in different languages. Due to their central role and demonstrated usefulness for the methods proposed in this dissertation, strategies are discussed for creating or further developing such lexicons for a particular language, as well as suggestions on how to further increase their usefulness for shallow discourse parsing.
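The role a connective lexicon plays in the first pipeline sub-task, connective identification, can be sketched as follows. This is a toy illustration only: the entries, ambiguity flags, and function names are invented for the example and are not taken from an actual lexicon resource or from the parser developed in the dissertation:

```python
# Toy connective lexicon: each entry lists a German connective and whether it
# is ambiguous, i.e. also occurs in non-connective readings.
# Entries and flags are illustrative assumptions.
LEXICON = {
    "aber": {"ambiguous": False},
    "weil": {"ambiguous": False},
    "und": {"ambiguous": True},       # frequently non-connective
    "während": {"ambiguous": True},   # connective vs. preposition reading
}

def candidate_connectives(tokens):
    """Return (index, token) pairs matching a lexicon entry.

    Ambiguous entries are candidates only; in a real pipeline a downstream
    classifier decides whether they function as discourse connectives here.
    """
    return [(i, tok) for i, tok in enumerate(tokens)
            if tok.lower() in LEXICON]

tokens = "Er blieb zu Hause , weil es regnete".split()
print(candidate_connectives(tokens))  # [(5, 'weil')]
```

An exhaustive lexicon thus turns connective identification into lookup plus disambiguation, which is why the quality and coverage of such lexicons matters for the whole pipeline.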
Ionizing radiation is used in cancer radiation therapy to effectively damage the DNA of tumors, leading to cell death and reduction of the tumor tissue. The main damage is due to the generation of highly reactive secondary species such as low-energy electrons (LEE), with the most probable energy around 10 eV, through ionization of water molecules in the cells. A simulation of the dose distribution in the patient is required to optimize the irradiation modality in cancer radiation therapy, which must be based on the fundamental physical processes of high-energy radiation interacting with the tissue. In the present work the accurate quantification of DNA radiation damage, in the form of absolute cross sections for LEE-induced DNA strand breaks (SBs) between 5 and 20 eV, is achieved using the DNA origami technique. This method is based on the analysis of well-defined DNA target sequences attached to DNA origami triangles with atomic force microscopy (AFM) at the single-molecule level. The present work focuses on poly-adenine sequences (5'-d(A4), 5'-d(A8), 5'-d(A12), 5'-d(A16), and 5'-d(A20)) irradiated with 5.0, 7.0, 8.4, and 10 eV electrons. Independent of the DNA length, the strand break cross section shows a maximum around 7.0 eV electron energy for all investigated oligonucleotides, confirming that strand breakage occurs through the initial formation of negative ion resonances. Additionally, DNA double strand breaks from a DNA hairpin 5'-d(CAC)4T(Bt-dT)T2(GTG)4 are examined for the first time and are compared with those of the DNA single strands 5'-d(CAC)4 and 5'-d(GTG)4. The irradiation is performed in the most likely energy range of 5 to 20 eV, with an anionic resonance maximum around 10 eV independent of the DNA sequence. There is a clear difference between σSSB and σDSB of DNA single and double strands: the strand break cross sections for ssDNA are consistently higher than those for dsDNA by a factor of 3 at all electron energies.
A further part of this work deals with the characterization and analysis of new types of radiosensitizers used in chemoradiotherapy, which selectively increase the DNA damage upon irradiation. Fluorinated DNA sequences with 2'-fluoro-2'-deoxycytidine (dFC) show an increased sensitivity at 7 and 10 eV compared to the unmodified DNA sequences, with an enhancement factor between 2.1 and 2.5. In addition, light-induced oxidative damage of 5'-d(GTG)4- and 5'-d((CAC)4T(Bt-dT)T2(GTG)4)-modified DNA origami triangles by singlet oxygen (1O2), generated from the three photoexcited DNA groove binders [ANT994], [ANT1083] and [Cr(ddpd)2][BF4]3 illuminated in different experiments with UV-Vis light at 430, 435 and 530 nm, is demonstrated. The singlet-oxygen-induced DNA damage could be detected in both aqueous and dry environments for [ANT1083] and [Cr(ddpd)2][BF4]3.
Emotions are a complex concept and they are present in our everyday life. Persons on the autism spectrum are said to have difficulties in social interactions, showing deficits in emotion recognition in comparison to neurotypically developed persons. But social-emotional skills are believed to be positively augmented by training. A new adaptive social cognition training tool, “E.V.A.”, is introduced which teaches emotion recognition from face, voice and body language. One cross-sectional and one longitudinal study with adult neurotypical and autistic participants were conducted. The aim of the cross-sectional study was to characterize the two groups and see if differences in their social-emotional skills exist. The longitudinal study, on the other hand, aimed to detect possible training effects following training with the new tool. In addition, in both studies usability assessments were conducted to investigate the perceived usability of the new tool for neurotypical as well as autistic participants. Differences were found between autistic and neurotypical participants in their social-emotional and emotion recognition abilities. Training effects for neurotypical participants in an emotion recognition task were found after two weeks of home training. Similar perceived usability was found for the neurotypical and autistic participants. The current findings suggest that persons with ASC do not have a general deficit in emotion recognition, but need more time to correctly recognize emotions. In addition, the findings suggest that training emotion recognition abilities is possible. Further studies are needed to verify whether the training effects found for neurotypical participants also manifest in a larger ASC sample.
Media artists have been struggling for financial survival ever since media art came into being. The non-material value of the artwork, a provocative attitude towards the traditional arts world, and the originally anti-capitalist mindset of the movement make it particularly difficult to provide a constructive solution. However, a cultural entrepreneurial approach can be used to build a framework in order to find a balance between culture and business while ensuring that the cultural mission remains the top priority.
Precipitation forecasting has an important place in everyday life – during the day we may have tens of small talks discussing the likelihood that it will rain this evening or weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
There are two major components comprising a system for precipitation nowcasting: radar-based precipitation estimates, and models to extrapolate that precipitation to the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on the model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for nowcast errors diagnosis and isolation that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
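The tracking-and-advection scheme underlying these benchmark models can be sketched in a much-reduced form: estimate a displacement field between two consecutive radar frames, then advect the latest frame forward. The sketch below replaces the library's dense/sparse optical flow and image warping with a brute-force search for a single field-wide integer displacement; it illustrates the idea, not the rainymotion implementation:

```python
import numpy as np

def global_displacement(prev, curr, max_shift=5):
    """Estimate one field-wide displacement (dy, dx) by brute-force search
    over integer shifts, minimising the mean squared difference."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            err = np.mean((shifted - curr) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def extrapolate(curr, disp, steps):
    """Advect the latest field forward by repeating the displacement
    (Lagrangian persistence: motion is assumed constant)."""
    out = curr
    for _ in range(steps):
        out = np.roll(np.roll(out, disp[0], axis=0), disp[1], axis=1)
    return out

# Synthetic example: a rain cell moving one pixel to the right per frame.
field = np.zeros((20, 20))
field[8:12, 4:8] = 1.0
prev, curr = field, np.roll(field, 1, axis=1)

disp = global_displacement(prev, curr)   # (0, 1)
nowcast = extrapolate(curr, disp, steps=3)
```

Real nowcasting models estimate a spatially varying motion field and use sub-pixel warping, but the two-step structure (tracking, then extrapolation) is the same.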
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
The high benefit of training on "big data" is among the main reasons for that. Hence, the emerging interest in deep learning in the atmospheric sciences is likewise driven by the increasing availability of data, both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
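The quoted figure can be checked with a quick back-of-the-envelope calculation (assuming the 900 km x 900 km domain at 1 km resolution described below):

```python
# 900 km x 900 km domain at 1 km resolution, one composite every 5 min
pixels = 900 * 900
composites_per_year = 12 * 24 * 365
data_points = pixels * composites_per_year
print(data_points)  # 85147200000, i.e. about 85 billion
```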
To this aim, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
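The CSI used above is a standard categorical verification score; a minimal implementation with illustrative values (not DWD data) looks as follows:

```python
import numpy as np

def csi(obs, fcst, thr):
    """Critical success index at an intensity threshold:
    hits / (hits + misses + false alarms)."""
    o, f = obs >= thr, fcst >= thr
    hits = np.sum(o & f)
    misses = np.sum(o & ~f)
    false_alarms = np.sum(~o & f)
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan

# toy fields (mm/h): one hit, one miss, one false alarm at thr = 1 mm/h
obs  = np.array([0.0, 2.0, 6.0, 0.5])
fcst = np.array([1.2, 0.5, 4.0, 0.0])
score = csi(obs, fcst, 1.0)
```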
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
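The recursive application is a generic scheme: a model trained for a single 5-min step is fed its own output to reach longer lead times. A schematic version, assuming a model that consumes the four most recent frames and using a toy averaging "model" in place of RainNet:

```python
def recursive_nowcast(model_step, frames, n_steps):
    """Apply a single-step nowcast model recursively: each prediction is
    appended to the input sequence and used to reach the next lead time
    (the model is assumed to consume the four most recent frames)."""
    seq = list(frames)
    preds = []
    for _ in range(n_steps):
        nxt = model_step(seq[-4:])
        preds.append(nxt)
        seq.append(nxt)
    return preds

# toy stand-in for the trained network: predict the mean of its inputs
toy_model = lambda xs: sum(xs) / len(xs)
preds = recursive_nowcast(toy_model, [1.0, 2.0, 3.0, 4.0], n_steps=2)
```

Even this toy makes the smoothing artifact plausible: from the second step onward, the model averages over its own already-averaged output.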
The model development, together with the verification experiments for both conventional and deep learning model predictions, also revealed the need to better understand the sources of forecast errors. Understanding the dominant sources of error in specific situations should help guide further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not allowed the location error to be isolated, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
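Given matched observed and predicted feature tracks, the location error at each lead time is simply the Euclidean distance between corresponding positions; a sketch with illustrative coordinates (in km):

```python
import numpy as np

def location_error(obs_track, pred_track):
    """Euclidean distance between observed and predicted feature
    locations at each lead time (tracks: sequences of (x, y) in km)."""
    obs, pred = np.asarray(obs_track), np.asarray(pred_track)
    dx, dy = (obs - pred).T
    return np.hypot(dx, dy)

obs  = [(0, 0), (3, 4), (6, 8)]   # observed corner positions
pred = [(0, 0), (0, 0), (0, 0)]   # a (bad) persistence-like forecast
errs = location_error(obs, pred)
```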
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites of the DWD. We evaluated the performance of four extrapolation models: two are based on the linear extrapolation of corner motion, and the remaining two are based on the Dense Inverse Search (DIS) method, in which motion vectors obtained from DIS are used to predict feature locations by linear and semi-Lagrangian extrapolation.
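The linear-extrapolation variants share the same core idea: continue a feature's most recent displacement into the future. A minimal sketch (per-step velocity taken from the last two observed positions; coordinates are illustrative):

```python
import numpy as np

def linear_extrapolation(track, lead_steps):
    """Continue the last observed displacement of a tracked feature
    linearly over the requested number of lead steps."""
    track = np.asarray(track, dtype=float)
    velocity = track[-1] - track[-2]              # displacement per step
    steps = np.arange(1, lead_steps + 1)[:, None]
    return track[-1] + steps * velocity

track = [(0.0, 0.0), (1.0, 0.5)]                  # two observed positions
future = linear_extrapolation(track, lead_steps=3)
```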
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5% of the forecasts have a location error of more than 10 km after 45 min. When we relate such errors to the application scenarios typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: their order of magnitude is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales, as a result of location errors alone, can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that aim, we also point to the high potential of deep learning architectures designed for the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
We developed an orbitally tuned age model for the composite Chew Bahir sediment core, obtained from the Chew Bahir basin (CHB), southern Ethiopia. To account for the effects of sedimentation rate changes on the spectral expression of the orbital cycles, we developed a new method: the Multi-band Wavelet Age modeling technique (MUBAWA). Using a continuous wavelet transformation, we were able to track frequency shifts that result from changing sedimentation rates and thus to calculate a tuned age model encompassing the last 620 kyrs. The results show good agreement with the directly dated age model available from the dating of volcanic ashes. We then used the XRF data from the CHB core to develop a new and robust humid–arid index of East African climate during the last 620 kyrs. To disentangle the relationships among the selected elements, we performed a principal component analysis (PCA). In a following step, we applied a continuous wavelet transformation to PC1, using the directly dated age model. The resulting wavelet power spectrum, unlike an ordinary power spectrum, displays the occurrence of cycles/frequencies over time. The results highlight that the precession cycles are most strongly expressed during the 400 kyrs eccentricity maximum, whereas they are only weakly expressed during the eccentricity minimum. This suggests that insolation is a key driver of the climatic variability observed at CHB throughout the last 620 kyrs. In addition, the prevalence of half-precession and obliquity signals was documented. The latter is attributed to the inter-tropical insolation gradient and is not interpreted as an imprint of high-latitude forcing on climatic changes in the tropics. Finally, a windowed analysis of variability was used to detect changes in variance over time; it showed that strong climate variability occurred especially along the transition from a dominantly insolation-controlled humid climate background state towards a predominantly dry and less insolation-controlled climate.
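The PCA step, which reduces multi-element XRF counts to a single humid–arid index (PC1), can be sketched with synthetic data; the shared signal below stands in for the common wet–dry control on all elements (an assumption for illustration, not the CHB record):

```python
import numpy as np

def pca_first_component(X):
    """Scores on the first principal component of centred data:
    rows are samples (core depths), columns are XRF elements."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data matrix; the first right-singular vector
    # is the leading principal axis, scores are the projections onto it
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]

rng = np.random.default_rng(0)
common = rng.normal(size=100)   # shared wet/dry signal across elements
X = np.column_stack([common + 0.1 * rng.normal(size=100) for _ in range(4)])
pc1 = pca_first_component(X)
# PC1 should track the shared signal closely (up to an arbitrary sign)
```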
The last chapter deals with non-linear aspects of the climate changes represented by the sediments of the CHB. We use recurrence quantification analysis to detect non-linear changes in the potassium concentration of the Chew Bahir sediment cores during the last 620 kyrs. The concentration of potassium in the lake sediments is governed by geochemical processes related to the evaporation rate of the lake water at the time of deposition. Based on the recurrence analysis, two types of variability could be distinguished. Type 1 represents slow variations within the precession period bandwidth of 20 kyrs and a tendency towards extreme climatic events, whereas type 2 represents fast, highly variable climatic transitions between wet and dry climate states. While type 1 variability is linked to eccentricity maxima, type 2 variability occurs during the 400 kyrs eccentricity minimum. The climate history presented here shows that during high eccentricity a strongly insolation-driven climate system prevailed, whereas during low eccentricity the climate was more strongly affected by short-term variability. These short-term environmental changes, reflected in the increased variability, might have influenced the evolution, technological advances, and expansion of the early modern humans who lived in this region. In the Olorgesailie Basin, the temporal changes in the occurrence of stone tools, which bracket the transition from Acheulean to Middle Stone Age (MSA) technologies between 499 and 320 kyrs, could potentially correlate with the marked transition from a rather stable climate with low variability to a climate with increased variability in the CHB. We conclude that populations of early anatomically modern humans are more likely to have experienced climatic stress during episodes of low eccentricity, associated with dry and highly variable climate conditions, which may have led to technological innovation, such as the transition from the Acheulean to the Middle Stone Age.
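At the core of recurrence quantification analysis is the recurrence matrix, which marks all pairs of time points at which the system revisits a similar state. A minimal sketch on a toy sine series (illustrative only, not the potassium record):

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix R[i, j] = 1 where states i and j of a
    time series lie closer together than eps; this matrix is the basis
    of recurrence quantification analysis."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

# toy series: a slow oscillation revisits similar states periodically,
# producing the diagonal line structures that RQA measures quantify
t = np.linspace(0, 4 * np.pi, 50)
R = recurrence_matrix(np.sin(t), eps=0.1)
```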
Foresight in networks
(2021)
The goal of this dissertation is to contribute to the corporate foresight research field by investigating capabilities, practices, and challenges particularly in the context of interorganizational settings and networked organizations informed by the theoretical perspectives of the relational view and dynamic capabilities.
Firms are facing an increasingly complex environment and highly complex product and service landscapes that often require multiple organizations to collaborate on innovation and offerings. Public-private partnerships targeted at supporting such collaboration have been introduced by policy-makers in the recent past. One example of such a partnership is the European Institute of Innovation and Technology (EIT) with its multiple Knowledge and Innovation Communities (KICs). The EIT was initiated by the European Commission in 2008 with the ambition of addressing grand societal challenges, driving the innovativeness of European companies, and supporting systemic change. The resulting network organizations are managed similarly to corporations, with managers, boards, and firm-like governance structures. EIT Digital, one of the EIT KICs, is a central case of this work.
Research in this dissertation was based on the expectation that corporate foresight activities will increasingly be embedded in such interorganizational settings and a) can draw on such settings for their own benefit and b) may contribute to shared visions, trust building, and planning in these network organizations. In this dissertation the EIT Digital (formerly EIT ICT Labs) is a central case, supplemented with insights from three additional cases. I draw on the rich theoretical understanding of the resource-based view, dynamic capabilities, and particularly the relational view to advance the discussion in the field of corporate foresight—defined as foresight in organizations, in contrast to foresight with a macro-economic perspective—towards a relational understanding. Further, I use and revisit Rohrbeck’s Maturity Model for the Future Orientation of Firms as a conceptual frame for corporate foresight in interorganizational settings. The analyses—available as four individual publications complemented by one additional chapter—are designed as exploratory case studies based on multiple data sources, including an interview series with 49 persons, two surveys (N=54, n=20), three supplementary interviews, access to key documents and presentations, and observation through participation in meetings and activities of the EIT Digital. This research setting allowed me to contribute to corporate foresight research and practice by 1) integrating relational constructs primarily drawn from the relational view and dynamic capabilities research into the corporate foresight research stream, 2) exploring and understanding the capabilities that are required for corporate foresight in interorganizational and networked organizations, 3) discussing and extending the Maturity Model for network organizations, and 4) supporting individual organizations in tying their foresight systems effectively to networked foresight systems.
There is a general consensus that diverse ecological communities are better equipped to adapt to changes in their environment, but our understanding of the mechanisms by which they do so remains incomplete. Accurately predicting how the global biodiversity crisis affects the functioning of ecosystems, and the services they provide, requires extensive knowledge about these mechanisms.
Mathematical models of food webs have been successful in uncovering many aspects of the link between diversity and ecosystem functioning in small food web modules, containing at most two adaptive trophic levels. Meaningful extrapolation of this understanding to the functioning of natural food webs remains difficult, due to the presence of complex interactions that are not always accurately captured by bitrophic descriptions of food webs. In this dissertation, we expand this approach to tritrophic food web models by including the third trophic level. Using a functional trait approach, coexistence of all species is ensured using fitness-balancing trade-offs. For example, the defense-growth trade-off implies that species may be defended against predation, but this defense comes at the cost of a lower maximal growth rate. In these food webs, the functional diversity on a given trophic level can be varied by modifying the trait differences between the species on that level.
In the first project, we find that functional diversity promotes high biomass on the top level, which, in turn, reduces temporal variability through compensatory dynamical patterns governed by the top level. Next, these results are generalized by investigating the average behavior of tritrophic food webs over wide intervals of all parameters describing species interactions in the food web. We find that diversity on the top level is most important for determining the biomass and temporal variability of all other trophic levels, and show that biomass is only transferred efficiently to the top level when diversity is high everywhere in the food web. In the third project, we compare the response of a simple food chain to a nutrient pulse perturbation with that of a food web with diversity on every trophic level. By jointly considering resistance, resilience, and elasticity, we uncover that the response is efficiently buffered when biomass on the top level is high, which is facilitated by functional diversity on every trophic level in the food web. Finally, in the fourth project, we show that even in a simple consumer-resource model without any diversity, top-down control on the intermediate level frequently causes the phase difference between the intermediate and basal level to deviate from the quarter-cycle lag rule. By adding a top predator, we show that these deviations become even more likely, and anti-phase cycles are often observed.
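The backbone of such models is a tritrophic dynamical system. The following is a deliberately simplified sketch (a single chain without trait diversity, type I functional responses, explicit Euler integration, and purely illustrative parameters, not the exact equations of the dissertation):

```python
def tritrophic_chain(days=200, dt=0.01):
    """Explicit-Euler sketch of a tritrophic food chain with a logistic
    basal level and linear (type I) functional responses; all parameter
    values are illustrative only."""
    B, I, T = 1.0, 0.5, 0.25           # basal, intermediate, top biomass
    r, aBI, aIT = 1.0, 1.0, 1.0        # basal growth rate, attack rates
    eps, mI, mT = 0.5, 0.2, 0.1        # conversion efficiency, mortalities
    for _ in range(int(days / dt)):
        dB = r * B * (1 - B) - aBI * B * I
        dI = eps * aBI * B * I - aIT * I * T - mI * I
        dT = eps * aIT * I * T - mT * T
        B, I, T = B + dt * dB, I + dt * dI, T + dt * dT
    return B, I, T

B, I, T = tritrophic_chain()
```

Adding trait diversity would replace each scalar level with several species whose parameters differ along a trade-off, e.g. lower attack rates on defended prey in exchange for lower maximal growth.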
The combined results of these projects show how the properties of the top trophic level, including its functional diversity, have a decisive influence on the functioning of tritrophic food webs from a mechanistic perspective. Because top species are often among the most vulnerable to extinction, our results emphasize the importance of their conservation in ecosystem management and restoration strategies.
In her writings on ancient myth, the British author Natalie Haynes moves women to the centre of attention. Her two latest books, A Thousand Ships and Pandora’s Jar – a novel and a work of non-fiction – approach this topic from two different perspectives. This interview takes stock of Haynes’ motives and methodology, as well as of the challenges she faces in the process of writing.
Interview with Alana Jelinek
(2021)
Alana Jelinek is an art historian and artist — “an artist making art, and also writing about art”, in her words — , a former European Research Council artist in residence at the Museum of Anthropology and Archaeology at the University of Cambridge, and currently teaching in the School of Creative Arts at the University of Hertfordshire. Her art has revolved mostly around the issues of post- and neocolonialism and their connections with neoliberalism — a more implicit topic in her works from the 1990s on the “tourist gaze” developed into an interest in museums, collecting and ethnography throughout the past two decades. In this interview, she talks to thersites about the role of classical heritage and ancient art in her own work.
This article focuses on the feminist reception of Zenobia of Palmyra in Great Britain during the long nineteenth century and the early twentieth century. A special focus lies on her reception by the British suffragettes who belonged to the Women’s Social and Political Union. Even though Zenobia’s story did not end happily, the warrior queen’s example served to inspire these early feminists. Several products of historical culture – such as books, pieces of art, newspaper articles and theatre plays – provide insight into the reception of her as an historical figure, which is dominated by the image of a strong and courageous woman. The article will shed light on how exactly Zenobia’s example was instrumentalised throughout the first feminist movement in Britain.
Spring Issue
(2021)
The suitability of a newly developed cell-based functional assay was tested for the detection of the activity of a range of neurotoxins and neuroactive pharmaceuticals which act by stimulation or inhibition of calcium-dependent neurotransmitter release. In this functional assay, a reporter enzyme is released concomitantly with the neurotransmitter from neurosecretory vesicles. The current study showed that the release of a luciferase from a differentiated human neuroblastoma-based reporter cell line (SIMA-hPOMC1-26-GLuc cells) can be stimulated by a carbachol-mediated activation of the Gq-coupled muscarinic-acetylcholine receptor and by the Ca2+-channel forming spider toxin α-latrotoxin. Carbachol-stimulated luciferase release was completely inhibited by the muscarinic acetylcholine receptor antagonist atropine and α-latrotoxin-mediated release by the Ca2+-chelator EGTA, demonstrating the specificity of luciferase-release stimulation. SIMA-hPOMC1-26-GLuc cells express mainly L- and N-type and to a lesser extent T-type VGCC on the mRNA and protein level. In accordance with the expression profile a depolarization-stimulated luciferase release by a high K+-buffer was effectively and dose-dependently inhibited by L-type VGCC inhibitors and to a lesser extent by N-type and T-type inhibitors. P/Q- and R-type inhibitors did not affect the K+-stimulated luciferase release. In summary, the newly established cell-based assay may represent a versatile tool to analyze the biological efficiency of a range of neurotoxins and neuroactive pharmaceuticals which mediate their activity by the modulation of calcium-dependent neurotransmitter release.
Mental health problems remain among the main generators of costs within and beyond the health care system. Psychotherapy, the tool of choice in their treatment, is characterized by social interaction and cooperation within the therapist–patient dyad. Research into the factors influencing therapy success is, to date, neither exhaustive nor conclusive. Among many others, the quality of the relationship between therapist and patient stands out, regardless of the psychotherapy school followed. Emerging research points to a connection between interpersonal synchronization within sessions and therapy outcome; synchronization can consequently be considered significant for the shaping of this relationship. The framework of Embodied Cognition assumes bodily and neuronal correlates of thinking. Therefore, the present paper reviews investigations of interpersonal, non-verbal synchrony in two domains: firstly, studies on interpersonal synchrony in psychotherapy (synchronization of movement) are reviewed. Secondly, findings on the neurological correlates of interpersonal synchrony (assessed with EEG, fMRI, and fNIRS) are summarized in a narrative manner. In addition, the question is asked whether interpersonal synchrony can be achieved voluntarily on an individual level. It is concluded that there might be mechanisms which could give more insight into therapy success but as of yet remain uninvestigated. Further, the framework of Embodied Cognition fits the current body of evidence better than classical cognitivist views. Deeper research into interpersonal physical and neurological processes within the framework of Embodied Cognition thus emerges as a possible route towards lower drop-out rates and improved, quality-controlled therapeutic interventions, thereby significantly reducing healthcare costs.
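Movement synchrony in such studies is commonly operationalized as the peak cross-correlation between two motion time series within a small lag window. A simplified sketch with synthetic signals (not a specific published pipeline):

```python
import numpy as np

def max_lagged_correlation(a, b, max_lag):
    """Peak Pearson correlation between two motion time series over
    lags in [-max_lag, max_lag] samples: a common operationalization
    of non-verbal movement synchrony."""
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            r = np.corrcoef(a[:lag], b[-lag:])[0, 1]
        elif lag > 0:
            r = np.corrcoef(a[lag:], b[:-lag])[0, 1]
        else:
            r = np.corrcoef(a, b)[0, 1]
        best = max(best, r)
    return best

t = np.linspace(0, 10, 200)
a = np.sin(t)            # e.g. one partner's head movement
b = np.sin(t - 0.5)      # the same movement, slightly delayed
peak = max_lagged_correlation(a, b, max_lag=20)
```

Allowing a lag window matters: with zero lag the delayed signal would correlate less, whereas within the window the near-perfect synchrony is recovered.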
The propagation of test fields, such as electromagnetic, Dirac or linearized gravity, on a fixed spacetime manifold is often studied by using the geometrical optics approximation. In the limit of infinitely high frequencies, the geometrical optics approximation provides a conceptual transition between the test field and an effective point-particle description. The corresponding point-particles, or wave rays, coincide with the geodesics of the underlying spacetime. For most astrophysical applications of interest, such as the observation of celestial bodies, gravitational lensing, or the observation of cosmic rays, the geometrical optics approximation and the effective point-particle description represent a satisfactory theoretical model. However, the geometrical optics approximation gradually breaks down as test fields of finite frequency are considered.
In this thesis, we consider the propagation of test fields on spacetime, beyond the leading-order geometrical optics approximation. By performing a covariant Wentzel-Kramers-Brillouin analysis for test fields, we show how higher-order corrections to the geometrical optics approximation can be considered. The higher-order corrections are related to the dynamics of the spin internal degree of freedom of the considered test field. We obtain an effective point-particle description, which contains spin-dependent corrections to the geodesic motion obtained using geometrical optics. This represents a covariant generalization of the well-known spin Hall effect, usually encountered in condensed matter physics and in optics. Our analysis is applied to electromagnetic and massive Dirac test fields, but it can easily be extended to other fields, such as linearized gravity. In the electromagnetic case, we present several examples where the gravitational spin Hall effect of light plays an important role. These include the propagation of polarized light rays on black hole spacetimes and cosmological spacetimes, as well as polarization-dependent effects on the shape of black hole shadows. Furthermore, we show that our effective point-particle equations for polarized light rays reproduce well-known results, such as the spin Hall effect of light in an inhomogeneous medium, and the relativistic Hall effect of polarized electromagnetic wave packets encountered in Minkowski spacetime.
Modern knowledge bases contain and organize knowledge from many different topic areas. Apart from specific entity information, they also store information about their relationships amongst each other. Combining this information results in a knowledge graph that can be particularly helpful in cases where relationships are of central importance. Among other applications, modern risk assessment in the financial sector can benefit from the inherent network structure of such knowledge graphs by assessing the consequences and risks of certain events, such as corporate insolvencies or fraudulent behavior, based on the underlying network structure. As public knowledge bases often do not contain the necessary information for the analysis of such scenarios, the need arises to create and maintain dedicated domain-specific knowledge bases.
This thesis investigates the process of creating domain-specific knowledge bases from structured and unstructured data sources. In particular, it addresses the topics of named entity recognition (NER), duplicate detection, and knowledge validation, which represent essential steps in the construction of knowledge bases.
First, we present a novel method for duplicate detection based on a Siamese neural network that learns a dataset-specific similarity measure used to identify duplicates. Building on this specialized network architecture, we design and implement a knowledge transfer between two deduplication networks, which leads to significant performance improvements and a reduction of the required training data.
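The role of the Siamese network is to learn a task-specific similarity between two records. As a stand-in for the learned measure, a character-trigram Jaccard similarity already conveys the interface such a component exposes (the company names below are invented):

```python
def char_ngrams(s, n=3):
    """Set of character n-grams of a string, padded so that word
    boundaries contribute n-grams as well."""
    s = f"  {s.lower()}  "
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def similarity(a, b):
    """Jaccard similarity over character trigrams: a hand-crafted
    stand-in for the similarity a Siamese network would learn."""
    A, B = char_ngrams(a), char_ngrams(b)
    union = A | B
    return len(A & B) / len(union) if union else 0.0

same = similarity("ACME Corp.", "ACME Corporation")
diff = similarity("ACME Corp.", "Umbrella Inc.")
```

A learned measure replaces the fixed trigram overlap with embeddings trained on labeled duplicate pairs, but the decision interface, a score compared against a threshold, stays the same.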
Furthermore, we propose a named entity recognition approach that is able to identify company names by integrating external knowledge in the form of dictionaries into the training process of a conditional random field classifier. In this context, we study the effects of different dictionaries on the performance of the NER classifier. We show that both the inclusion of domain knowledge as well as the generation and use of alias names results in significant performance improvements.
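Dictionary knowledge typically enters a CRF as additional token-level features. A simplified feature extractor (the dictionary, legal-form list, and feature names here are invented for illustration):

```python
def dictionary_features(tokens, company_dict):
    """Token-level gazetteer features for a CRF-style NER tagger:
    flag dictionary hits, common legal-form suffixes, and casing."""
    legal_forms = {"gmbh", "ag", "inc", "ltd", "corp"}
    feats = []
    for tok in tokens:
        feats.append({
            "in_dict": tok.lower() in company_dict,
            "is_legal_form": tok.lower().rstrip(".") in legal_forms,
            "is_capitalized": tok[:1].isupper(),
        })
    return feats

tokens = ["Siemens", "AG", "hired", "analysts"]
feats = dictionary_features(tokens, {"siemens"})
```

In training, such feature dictionaries are handed to the CRF alongside standard contextual features, which is how external dictionaries and generated alias names influence the classifier.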
For the validation of knowledge represented in a knowledge base, we introduce Colt, a framework for knowledge validation based on the interactive quality assessment of logical rules. In its most expressive implementation, we combine Gaussian processes with neural networks to create Colt-GP, an interactive algorithm for learning rule models. Unlike other approaches, Colt-GP uses knowledge graph embeddings and user feedback to cope with data quality issues of knowledge bases. The learned rule model can be used to conditionally apply a rule and assess its quality.
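Rule quality in such settings is often summarized by support and confidence over the knowledge graph. A toy version over a plain triple list (the relation names are invented; Colt-GP itself additionally uses embeddings and user feedback):

```python
def rule_confidence(triples, body, head):
    """Support and confidence of a Horn rule body(X, Y) => head(X, Y)
    over a toy triple store: support counts entity pairs satisfying
    both body and head, confidence normalizes by the body matches."""
    body_pairs = {(s, o) for s, p, o in triples if p == body}
    head_pairs = {(s, o) for s, p, o in triples if p == head}
    support = len(body_pairs & head_pairs)
    return support, support / len(body_pairs) if body_pairs else 0.0

triples = [
    ("a", "subsidiary_of", "b"), ("a", "controlled_by", "b"),
    ("c", "subsidiary_of", "d"),
]
result = rule_confidence(triples, "subsidiary_of", "controlled_by")
```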
Finally, we present CurEx, a prototypical system for building domain-specific knowledge bases from structured and unstructured data sources. Its modular design is based on scalable technologies, which, in addition to processing large datasets, ensures that the modules can be easily exchanged or extended. CurEx offers multiple user interfaces, each tailored to the individual needs of a specific user group and is fully compatible with the Colt framework, which can be used as part of the system.
We conduct a wide range of experiments with different datasets to determine the strengths and weaknesses of the proposed methods. To ensure the validity of our results, we compare the proposed methods with competing approaches.
Focus on English Linguistics
(2021)
Arctic environments constitute rich and dynamic ecosystems, dominated by microorganisms extremely well adapted to survive and function under severe conditions. A range of physiological adaptations allow the microbiota in these habitats to withstand low temperatures, low water and nutrient availability, high levels of UV radiation, and more. In addition, other adaptations of a clearly competitive nature are directed not only at surviving but at thriving in these environments, by disrupting the metabolism of neighboring cells and affecting intermicrobial communication. Since Arctic microbes are bioindicators that amplify climate alterations in the environment, the Arctic region presents the opportunity to study the local microbiota and to carry out research on interesting, potentially virulent phenotypes that could be dispersed into other habitats around the globe as a consequence of accelerating climate change. In this context, explorations of Arctic habitats and descriptions of the microbes inhabiting them are abundant, but microbial competitive strategies commonly associated with virulence and pathogens are rarely reported. In this project, environmental samples from the Arctic region were collected and microorganisms (bacteria and fungi) were isolated. The clinical relevance of these microorganisms was assessed by examining the following virulence markers: the ability to grow at a range of temperatures, the expression of antimicrobial resistance, and the production of hemolysins. The aim of this project is to determine the frequency and relevance of these characteristics in an effort to understand microbial adaptations in habitats threatened by climate change. The isolates obtained and described here were able to grow at a range of temperatures, in some cases more than 30 °C higher than their original isolation temperature.
A considerable number of them consistently expressed compounds capable of lysing sheep and bovine erythrocytes on blood agar at different incubation temperatures. Ethanolic extracts of these bacteria caused rapid and complete lysis of erythrocyte suspensions and might even be hemolytic when assayed on human blood. In silico analyses revealed a variety of resistance elements, some of them novel, against natural and synthetic antimicrobial compounds. In vitro experiments against a number of antimicrobial compounds showed resistance phenotypes belonging to wild-type populations, as well as some non-wild-type phenotypes that clearly denote human influence on the acquisition of antimicrobial resistance. The results of this project demonstrate the presence of virulence-associated factors expressed by microorganisms from natural, non-clinical environments. This study contains some of the first reports, to the best of our knowledge, of hemolytic microbes isolated from the Arctic region. In addition, it provides further information about the presence and expression of intrinsic and acquired antimicrobial resistance in environmental isolates, contributing to the understanding of the evolution of relevant pathogenic species and opportunistic pathogens. Finally, this study highlights some of the potential risks associated with changes in the polar regions (habitat melting and destruction, ecosystem transition and re-colonization) as important indirect consequences of global warming and altered climatic conditions around the planet.
Introduction
(2021)
Food intake is driven by the need for energy but also by the demand for essential nutrients such as protein. Whereas it is well established how diets high in protein mediate satiety, it remained unclear how diets low in protein induce appetite. This thesis therefore aims to contribute to research on the detection of dietary protein restriction and the adaptive responses to it.
This thesis provides clear evidence that the liver-derived hormone fibroblast growth factor 21 (FGF21) is an endocrine signal of dietary protein restriction, with the cellular amino acid sensor general control nonderepressible 2 (GCN2) kinase acting as an upstream regulator of FGF21 during protein restriction. In the brain, FGF21 mediates the metabolic responses to protein restriction, e.g. increased energy expenditure, food intake and insulin sensitivity, and improved glucose homeostasis. Furthermore, endogenous FGF21 induced by dietary protein or methionine restriction prevents the onset of type 2 diabetes in the New Zealand Obese mouse.
Overall, FGF21 plays an important role in the detection of protein restriction and macronutrient imbalance in rodents and humans, and mediates both the behavioral and metabolic responses to dietary protein restriction. This makes FGF21 a critical physiological signal of dietary protein restriction, highlighting the important but often overlooked impact of dietary protein on metabolism and eating behavior, independent of dietary energy content.
Polymeric semiconductors are strong contenders for replacing traditional inorganic semiconductors in electronic applications requiring low power, low cost and flexibility, such as biosensors, flexible solar cells and electronic displays. Molecular doping has the potential to enable this revolution by improving the conductivity and charge transport properties of this class of materials. Despite decades of research in this field, gaps in our understanding of the nature of dopant–polymer interactions have resulted in limited commercialization of this technology. This work aims at providing a deeper insight into the underlying mechanisms of molecular p-doping of semiconducting polymers in solution and in the solid state, thereby bringing the scientific community closer to realizing the dream of making organic semiconductors commonplace in the electronics industry. The role of 1) dopant size/shape, 2) polymer chain aggregation and 3) charge delocalization in the doping mechanism and efficiency is addressed using optical (UV-Vis-NIR) and electron paramagnetic resonance (EPR) spectroscopies. By conducting a comprehensive study of the nature and concentration of the doping-induced species in solutions of the polymer poly(3-hexylthiophene) (P3HT) with 3 different dopants, we identify the unique optical signatures of the delocalized polaron, the localized polaron and the charge-transfer complex, and report their extinction coefficient values. Furthermore, with X-ray diffraction, atomic force microscopy and electrical conductivity measurements, we study the impact of processing technique and doping mechanism on the morphology and, thereby, charge transport through the doped films.
This work demonstrates that the doping mechanism and type of doping-induced species formed are strongly influenced by the polymer backbone arrangement rather than dopant shape/size. The ability of the polymer chain to aggregate is found to be crucial for efficient charge transfer (ionization) and polaron delocalization. At the same time, our results suggest that the high ionization efficiency of a dopant–polymer system in solution may subsequently hinder efficient charge transport in the solid-state due to the reduction in the fraction of tie chains, which enable charges to move efficiently between aggregated domains in the films. This study demonstrates the complex multifaceted nature of polymer doping while providing important hints for the future design of dopant-host systems and film fabrication techniques.
Noise is ubiquitous in nature and usually gives rise to rich dynamics in stochastic systems such as oscillatory systems, which arise in fields as varied as physics, biology and complex networks. The correlation and synchronization of two or many oscillators have been widely studied in recent years.
In this thesis, we mainly investigate two problems: the stochastic bursting phenomenon in noisy excitable systems and synchronization in a three-dimensional Kuramoto model with noise. Stochastic bursting here refers to a coherent spike train in which each spike has a random number of followers, due to the combined effects of time delay and noise. Synchronization, a universal phenomenon in nonlinear dynamical systems, is well illustrated by the Kuramoto model, a prominent model for describing collective motion.
In the first part of this thesis, an idealized point process, valid if the characteristic timescales in the problem are well separated, is used to describe statistical properties such as the power spectral density and the interspike interval distribution. We show how the main parameters of the point process, namely the spontaneous excitation rate and the probability of inducing a spike during the delayed action, can be calculated from the solutions of a stationary and a forced Fokker-Planck equation. We extend this approach to the delay-coupled case and derive analytically the statistics of the spikes in each neuron, the pairwise correlations between any two neurons, and the spectrum of the total output from the network.
In the second part, we investigate the three-dimensional noisy Kuramoto model, which can be used to describe synchronization in a swarming model with helical trajectories. In the case without natural frequencies, the Kuramoto model can be connected with the Vicsek model, which is widely studied in the context of collective motion and swarming of active matter. We analyze the linear stability of the incoherent state and derive the critical coupling strength above which the incoherent state loses stability. In the limit of no natural frequencies, an exact self-consistent equation for the mean field is derived and extended straightforwardly to arbitrary higher-dimensional cases.
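The mean-field picture described above can be illustrated with a minimal numerical sketch of the classical (one-dimensional) noisy Kuramoto model; the three-dimensional variant studied in the thesis follows the same logic with phases replaced by unit vectors. All parameter values here are illustrative, not taken from the thesis:

```python
import numpy as np

def simulate_kuramoto(n=500, coupling=2.0, noise=0.5, dt=0.01, steps=5000, seed=0):
    """Euler-Maruyama simulation of the noisy Kuramoto model,
    dtheta_i = (omega_i + K r sin(psi - theta_i)) dt + sigma dW_i,
    written in its mean-field form with order parameter r e^{i psi}."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal(n)            # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)      # initial phases
    for _ in range(steps):
        mean_field = np.mean(np.exp(1j * theta))       # complex order parameter
        r, psi = np.abs(mean_field), np.angle(mean_field)
        drift = omega + coupling * r * np.sin(psi - theta)
        theta += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n)
    return np.abs(np.mean(np.exp(1j * theta)))         # final order parameter r

# Above the critical coupling the incoherent state loses stability,
# so strong coupling should yield a larger order parameter than weak coupling.
r_strong = simulate_kuramoto(coupling=4.0)
r_weak = simulate_kuramoto(coupling=0.1)
```

The order parameter r lies between 0 (incoherence) and 1 (full phase synchronization); sweeping `coupling` reproduces the synchronization transition discussed in the abstract.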
History of Forgetfulness
(2021)
This open access book presents a topical, comprehensive and differentiated analysis of Germany's public administration and its reforms. It provides an overview of key elements of German public administration at the federal, Länder and local levels of government, as well as of current reform activities in the public sector. It examines the key institutional features of German public administration; the changing relationships between public administration, society and the private sector; the administrative reforms at different levels of the federal system and in numerous sectors; and new challenges and modernization approaches like digitalization, Open Government and Better Regulation. Each chapter offers a combination of descriptive information and problem-oriented analysis, presenting key topical issues in Germany which are relevant to an international readership.
Federal Administration
(2021)
The federal administration is comparatively small (around 10 percent of all public employees). This peculiarity of the German administrative system stems from the division of responsibilities: the central (federal) level drafts and adopts most of the laws and public programmes, and the state level (together with the municipal level) implements them. The administration of the federal level comprises the ministries, subordinated agencies for special and selected operational tasks (e.g. the authorisation of drugs, information security and the registration of refugees), and administrations in distinct administrative sectors (e.g. foreign service, armed forces and federal police). The capacity for preparing and monitoring government bills and statutory instruments is well developed. Moreover, the instruments and tools of coordination are exemplary compared with other countries, although the recent digital turn has been embraced less readily than elsewhere.
This chapter describes the most prominent public management reform trajectories in German public administration over the past decades since unification. In the 1990s, the New Steering Model emerged as a German variant of the NPM. Since the mid-2000s, local governments in Germany have been subjected to a mandatory reform of their budgeting and accounting system known as the New Municipal Financial Management reforms. Both reforms have led to a substantial change in terms of internal decentralisation, customer orientation, transparency in resource use and the financial situation of administrative bodies. But the emerging reform patterns and their impacts have not replaced the dominance of a strong legalist culture with hierarchical, centralised control. However, in the course of the reforms, a citizen-customer perspective, more participation of citizens and limited application of new management instruments have been accommodated within the persisting bureaucratic system.
Over the last decades, Better Regulation has become a major reform topic at the federal and—in some cases—also at the Länder level. Although the debate about improving regulatory quality and reducing unnecessary burdens created by bureaucracy and red tape dates back to the 1960s and 1970s, the introduction by law in 2006 of a new independent institutionalised body for regulatory control at the federal level of government has brought a new quality to the discourse and practice of Better Regulation in Germany. This chapter introduces the basic features of the legislative process at the federal level in Germany, addresses the issue of Better Regulation and outlines the role of the National Regulatory Control Council (Nationaler Normenkontrollrat—NKR) as a ‘watchdog’ for compliance costs, red tape and regulatory impacts.
Although German bureaucracy is typically categorised as Weberian, a clear distinction between politics and administration has never been a defining characteristic of the German political-administrative system. Many close interrelations and interactions between elected politicians and appointed civil servants can be observed at all levels of administration. Higher-ranking civil servants in Germany are used to and generally appreciate the functional politicisation of their jobs, that is their close involvement in all stages of the policy process, from policy formation, goal definition, negotiation within and outside government to the implementation and evaluation of policies. For top positions, therefore, a class of ‘political civil servants’ is a special feature of the German system, and obtaining ‘political craft’ has become an important part of the learning and job experience of higher-ranking civil servants.
German Public Administration
(2021)
The international community of public administration and administrative sciences shows great interest in the basic features of the German administrative system. German public administration, with its formative decentralisation (so-called administrative federalism), is regarded as a prime example of multilevel governance and strong local self-government. Furthermore, over the past decades, the traditional profile of the German administrative system has been significantly reshaped and remoulded through reforms, processes of modernisation and the transformation process in East Germany. Studies on the German administrative system should focus especially on
key institutional features of public administration;
changing relationships between public administration, society and the private sector;
administrative reforms at different levels of the federal system; and
new challenges and modernisation approaches, such as digitalisation, open government and better regulation.
Human resource management (HRM) reform has not been the focus of attention in Germany despite its obvious relevance for effective policy implementation. Although there is a general trend worldwide towards convergence between public and private HRM strategies and practices, management of the workforce in German public administration still remains largely traditional and bureaucratic. This chapter describes and analyses German practices regarding the central functions and elements of HRM such as planning, recruitment, training and leadership. Furthermore, it explores the importance and contribution of public service motivation, performance-related pay and diversity management in the context of German practices. The chapter concludes by highlighting some of the major paradoxes of German public HRM in light of current challenges, such as demographic change, digital transformation and organisational development capabilities.
The chapter analyses recent reforms in the multilevel system of the Länder, specifically territorial, functional and structural reforms, which represent three of the most crucial and closely interconnected reform trajectories at the subnational level. It sheds light on the variety of reform approaches pursued in the different Länder and also highlights some factors that account for these differences. The transfer of state functions to local governments is addressed as well as the restructuring of Länder administrations (e.g. abolishment of the meso level of the Länder administration and of single-purpose state agencies) and the rescaling of territorial boundaries at county and municipal levels, including a brief review of the recently failed (territorial) reforms in Eastern Germany.
The German system of public sector employment (including civil servants and public employees) qualifies as a classical European continental civil service model moulded in traditional forms of a Weberian bureaucracy. Its features include a career-based employment system with entry based on levels of formal qualification. Coordinated by legal frames and centralised collective bargaining, the civil service is, at the same time, decentralised and flexible enough to accommodate regional differences and societal changes. In comparison, the civil service system stands out for its high degrees of professionalism and legal fairness with low levels of corruption or cronyism.
This work develops hybrid methods of imaging spectroscopy for open pit mining and examines their feasibility compared with the state of the art. The material distribution within a mine face varies on a small scale and within daily assigned extraction segments. These changes can be relevant to subsequent processing steps but are not always visually identifiable prior to extraction. Misclassifications that cause false allocations of extracted material need to be minimized in order to reduce energy-intensive material re-handling. The use of imaging spectroscopy aims at allocating relevant deposit-specific materials before extraction, and allows for efficient material handling after extraction. The aim of this work is the parameterization of imaging spectroscopy for pit mining applications and the development and evaluation of a workflow for a ground-based spectral characterization of the mine face. In this work, an application-based sensor adaptation is proposed. The sensor complexity is reduced by down-sampling the spectral resolution of the system based on the samples' spectral characteristics. This was achieved by evaluating existing hyperspectral outcrop analysis approaches based on laboratory sample scans from the iron quadrangle in Minas Gerais, Brazil, and by developing a spectral mine face monitoring workflow, which was tested for both an operating and an inactive open pit copper mine in the Republic of Cyprus.
The workflow presented here is applied to three regional data sets: 1) iron ore samples from Brazil (laboratory); 2) samples and hyperspectral mine face imagery from the copper-gold-pyrite mine Apliki, Republic of Cyprus (laboratory and mine face data); and 3) samples and hyperspectral mine face imagery from the copper-gold-pyrite deposit Three Hills, Republic of Cyprus (laboratory and mine face data). The hyperspectral laboratory dataset of fifteen Brazilian iron ore samples was used to evaluate different analysis methods and different sensor models. Nineteen commonly used methods to analyze and map hyperspectral data were compared with regard to their resulting data products, mapping accuracy and computation time. Four of the evaluated methods were selected for subsequent analyses as the best-performing algorithms: the spectral angle mapper (SAM), a support vector machine algorithm (SVM), the binary feature fitting algorithm (BFF) and the EnMAP geological mapper (EnGeoMap). Next, commercially available imaging spectroscopy sensors were evaluated for their usability under open pit mining conditions. Step-wise downsampling of the data, i.e. reducing the number of bands while increasing each band's bandwidth, was performed to investigate the possible simplification and ruggedization of a sensor without a quality fall-off in the mapping results. The impact of the atmosphere, visible in the spectrum between 1300 and 2010 nm, was reduced by excluding this spectral range from the data for mapping. This tested the feasibility of the method under realistic open pit data conditions. Thirteen datasets based on the different, downsampled sensors were analyzed with the four predetermined methods. The optimum sensor for spectral mine face material distinction was determined to be a VNIR-SWIR sensor with 40 nm bandwidths in the VNIR and 15 nm bandwidths in the SWIR spectral range, excluding the atmospherically impacted bands.
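Of the selected algorithms, the spectral angle mapper (SAM) admits a compact illustration. The sketch below (toy spectral library and an illustrative angle threshold, not the thesis's parameters) assigns each pixel of a hyperspectral cube to the reference spectrum with the smallest spectral angle:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference
    spectrum. Smaller angles mean higher similarity; the angle is insensitive
    to overall brightness (illumination) differences between spectra."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(cube, library, max_angle=0.1):
    """Assign each pixel of a (rows, cols, bands) cube to the library spectrum
    with the smallest spectral angle; -1 where no angle falls below max_angle."""
    pixels = cube.reshape(-1, cube.shape[-1])
    angles = np.stack([np.apply_along_axis(spectral_angle, 1, pixels, ref)
                       for ref in library], axis=1)   # (n_pixels, n_classes)
    best = angles.argmin(axis=1)
    best[angles.min(axis=1) > max_angle] = -1         # leave dissimilar pixels unclassified
    return best.reshape(cube.shape[:2])
```

Because the angle ignores brightness scaling, a shadowed pixel of the same material maps to the same class, which is one reason SAM is popular for mine face imagery.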
The Apliki mine sample dataset was used for the application of the optimal analyses and sensors identified above. Thirty-six samples were analyzed geochemically and mineralogically. The sample spectra were compiled into two spectral libraries, both distinguishing between seven different geochemical-spectral clusters. The reflectance dataset was downsampled to five different sensors. The five resulting datasets were mapped with the SAM, BFF and SVM methods, achieving mapping accuracies of 85-72%, 85-76% and 57-46%, respectively. One mine face scan of Apliki was used for the application of the developed workflow. The mapping results were validated against the geochemistry and mineralogy of thirty-six documented field sampling points and against a zonation map of the mine face based on sixty-six samples and field mapping. The mine face was analyzed with SAM and BFF. The analysis maps were visualized on top of a Structure-from-Motion-derived 3D model of the open pit. The mapped geological units and zones correlate well with the expected zonation of the mine face. The third set of hyperspectral imagery, from Three Hills, was available for applying the fully developed workflow. Geochemical sample analyses and laboratory spectral data of fifteen different samples from the Three Hills mine, Republic of Cyprus, were used to analyse a downsampled mine face scan of the open pit. Here, areas of low, medium and high ore content were identified.
The developed workflow is successfully applied to the open pit mines Apliki and Three Hills, and the resulting spectral maps reflect the prevailing geological conditions. This work guides through the acquisition, preparation and processing of imaging spectroscopy data, the optimal choice of analysis methodology, and the use of simplified, robust sensors that meet the requirements of open pit mining conditions. It accentuates the importance of a site- and deposit-specific spectral library for the mine face analysis and underlines the need for geological and spectral analysis experts to successfully implement imaging spectroscopy in the field of open pit mining.
In this introductory chapter, the editors describe the main theoretical basis of analysis of this book and the methodological approach. The core of this book consists of 14 country-specific chapters, which allow a European comparison and show the increasing variance in migration policy approaches within and between European countries. The degree of local autonomy, the level of centralisation and the traditional forms of migration policy are factors that especially influence the possibilities for local authorities to formulate their own integration policies.
This chapter focuses on the relationship between public opinion on migration and its media coverage. Different explanatory models, including individual characteristics, cultural factors and the impact of media and politics, have been proposed to explain public attitudes towards migrants. Understanding the local context is important, as the shares of migrants living in each region and city vary considerably. Providing correct statistical information, stressing the diversity of current migration patterns in Europe and taking part in media and public discussions are ways in which to impact public attitudes at the local level.
The chapter begins with a brief historical overview of Germany’s transition in the twentieth and twenty-first centuries from a transit and emigration country to a country of immigration. The next part of this chapter looks at the challenges and problems facing German immigration policy within a multi-level federal system. Finally, the chapter analyses some of the trends in German migration policy since the refugee crisis in 2015, such as changes in the party system and in the concepts underlying migration policies to better manage, control and limit immigration to Germany.
As expected, the traditions of national-state migration policies continue to play a very important role, and path dependence in this policy field remains high. The distribution of competences in migration policy and the integration of migrants in the nation states continue to differ greatly. When implementing integration strategies at the grassroots level, the respective policies should be tailored to the profile of both the local migrant community and the native population. Besides better migration management in local administration, the interaction of top-down and bottom-up efforts to integrate migrants is of importance.
This book presents an overview of European migration policy and the various institutional arrangements within and between various actors, such as local councils, local media, local economies, and local civil society initiatives. Both the role of local authorities in this policy field and their cooperation with civil society initiatives or networks are under-explored topics for research. In response, this book provides a range of detailed case studies focusing on the six main groups of national and administrative traditions in Europe: Germanic, Scandinavian, Napoleonic, Southeastern European, Central-Eastern European and Anglo-Saxon.
Energy is at the heart of the climate crisis, but also at the heart of any effort for climate change mitigation. Energy consumption is responsible for approximately three quarters of global anthropogenic greenhouse gas (GHG) emissions. Therefore, central to any serious plan to stave off a climate catastrophe is a major transformation of the world's energy system, which would move society away from fossil fuels and towards a net-zero energy future. Considering that fossil fuels are also a major source of air pollutant emissions, the energy transition has important implications for air quality as well, and thus also for human and environmental health. Both Europe and Germany have set the goal of becoming GHG neutral by 2050, and moreover have demonstrated their deep commitment to a comprehensive energy transition. Two of the most significant developments in energy policy over the past decade have been the prospective expansion of shale gas and of hydrogen, which accordingly have garnered great interest and debate among public, private and political actors.
In this context, sound scientific information can play an important role by informing stakeholder dialogue and future research investments, and by supporting evidence-based decision-making. This thesis examines anticipated environmental impacts from possible, relevant changes in the European energy system, in order to impart valuable insight and fill critical gaps in knowledge. Specifically, it investigates possible future shale gas development in Germany and the United Kingdom (UK), as well as a hypothetical, complete transition to hydrogen mobility in Germany. Moreover, it assesses the impacts on GHG and air pollutant emissions, and on tropospheric ozone (O3) air quality. The analysis is facilitated by constructing emission scenarios and performing air quality modeling via the Weather Research and Forecasting model coupled with chemistry (WRF-Chem). The work of this thesis is presented in three research papers.
The first paper finds that methane (CH4) leakage rates from upstream shale gas development in Germany and the UK would range between 0.35% and 1.36% in a realistic, business-as-usual case, while they would be significantly lower - between 0.08% and 0.15% - in an optimistic, strict regulation and high compliance case, thus demonstrating the value and potential of measures to substantially reduce emissions. Yet, while the optimistic case is technically feasible, it is unlikely that the practices and technologies assumed would be applied and accomplished on a systematic, regular basis, owing to economics and limited monitoring resources. The realistic CH4 leakage rates estimated in this study are comparable to values reported by studies carried out in the US and elsewhere. In contrast, the optimistic rates are similar to official CH4 leakage data from upstream gas production in Germany and in the UK. Considering that there is a lack of systematic, transparent and independent reports supporting the official values, this study further highlights the need for more research efforts in this direction. Compared with national energy sector emissions, this study suggests that shale gas emissions of volatile organic compounds (VOCs) could be significant, though relatively insignificant for other air pollutants. Similar to CH4, measures could be effective for reducing VOCs emissions.
The second paper shows that VOC and nitrogen oxides (NOx) emissions from a future shale gas industry in Germany and the UK have potentially harmful consequences for European O3 air quality on both the local and regional scale. The results indicate a peak increase in maximum daily 8-hour average O3 (MDA8) ranging from 3.7 µg m⁻³ to 28.3 µg m⁻³. Findings suggest that shale gas activities could result in additional exceedances of MDA8 at a substantial percentage of regulatory measurement stations both locally and in neighboring and distant countries, with up to circa one third of stations in the UK and one fifth of stations in Germany experiencing additional exceedances. Moreover, the results reveal that the shale gas impact on the cumulative health-related metric SOMO35 (annual Sum of Ozone Means Over 35 ppb) could be substantial, with a maximum increase of circa 28%. Overall, the findings suggest that shale gas VOC emissions could play a critical role in O3 enhancement, while NOx emissions would contribute to a lesser extent. Thus, the results indicate that stringent regulation of VOC emissions would be important in the event of future European shale gas development to minimize deleterious health outcomes.
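The MDA8 metric referred to above is derived from hourly ozone concentrations. A simplified sketch of its computation follows (windows crossing midnight are omitted here for brevity, whereas the regulatory definition includes them):

```python
import numpy as np

def mda8(hourly_o3):
    """Maximum daily 8-hour average ozone from 24 hourly values (e.g. µg/m³).
    Computes the running 8-hour mean for every start hour whose window fits
    within the day, and returns the maximum of those means."""
    hourly_o3 = np.asarray(hourly_o3, dtype=float)
    windows = np.lib.stride_tricks.sliding_window_view(hourly_o3, 8)  # 17 windows
    return windows.mean(axis=1).max()
```

An exceedance count, as used in the paper, would then compare each day's MDA8 against the regulatory threshold (120 µg/m³ in the EU) and tally the days above it.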
The third paper demonstrates that a hypothetical, complete transition of the German vehicle fleet to hydrogen fuel cell technology could contribute substantially to Germany's climate and air quality goals. The results indicate that if the hydrogen were to be produced via renewable-powered water electrolysis (green hydrogen), German carbon dioxide equivalent (CO2eq) emissions would decrease by 179 MtCO2eq annually, though if electrolysis were powered by the current electricity mix, emissions would instead increase by 95 MtCO2eq annually. The findings generally reveal a notable anticipated decrease in German energy emissions of regulated air pollutants. The results suggest that vehicular hydrogen demand is 1000 PJ annually, which would require between 446 TWh and 525 TWh for electrolysis, hydrogen transport and storage. When only the heavy duty vehicle segment (HDVs) is shifted to green hydrogen, the results of this thesis show that vehicular hydrogen demand drops to 371 PJ, while a deep emissions cut is still realized (-57 MtCO2eq), suggesting that HDVs are a low-hanging fruit for contributing to decarbonization of the German road transport sector with hydrogen energy.
Compound values are not universally supported in virtual machine (VM)-based programming systems and languages. However, providing data structures with value characteristics can be beneficial. On the one hand, programming systems and languages can adequately represent physical quantities with compound values and avoid inconsistencies, for example in the representation of large numbers. On the other hand, just-in-time (JIT) compilers, which are often found in VMs, can rely on the fact that compound values are immutable, which is an important property for optimizing programs. Considering this, compound values have an optimization potential that can be put to use by implementing them in VMs in a way that is efficient in memory usage and execution time. Yet, optimized compound values in VMs face certain challenges: to maintain consistency, it should not be observable by the program whether compound values are represented in an optimized way by a VM; an optimization should take into account that the usage of compound values can exhibit certain patterns at run-time; and value-incompatible properties necessitated by implementation restrictions should be kept to a minimum.
We propose a technique to detect and compress common patterns of compound value usage at run-time to improve memory usage and execution speed. Our approach identifies patterns of frequently inter-referenced compound values and introduces abbreviated forms for them. Thus, it is possible to store multiple inter-referenced compound values in an inlined memory representation, reducing the overhead of metadata and object references. We extend our approach with a notion of limited mutability, using cells that act as barriers for our approach and provide a location for shared, mutable access with the possibility of type specialization. We devise an extension to our approach that allows us to express automatic unboxing of boxed primitive data types in terms of our initial technique. We show that our approach is versatile enough to express another optimization technique that relies on values, such as Booleans, that are unique throughout a programming system. Furthermore, we demonstrate how to re-use learned usage patterns and optimizations across program runs, thus reducing the performance impact of pattern recognition.
We show with a best-case prototype that the implementation of our approach is feasible and that it can also be applied to general-purpose programming systems, namely implementations of the Racket language and Squeak/Smalltalk. In several micro-benchmarks, we found that our approach can effectively reduce memory consumption and improve execution speed.
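The core idea of the pattern-detection step can be illustrated with a hypothetical Python sketch. Nested tuples stand in for immutable compound values; all names, the shape encoding and the frequency threshold are invented for illustration and do not reflect the thesis' VM-level implementation:

```python
from collections import Counter

def shape(value):
    """Describe the reference structure of a compound value.

    Nested tuples stand in for immutable compound values; the shape
    records arity recursively, with None marking leaf (non-compound) slots.
    """
    if isinstance(value, tuple):
        return tuple(shape(v) for v in value)
    return None

def find_frequent_shapes(values, threshold=2):
    """Collect reference patterns that occur often enough to abbreviate."""
    counts = Counter(shape(v) for v in values if isinstance(v, tuple))
    return {s for s, n in counts.items() if n >= threshold}

def inline(value, frequent):
    """Flatten a compound value with a frequent shape into
    (shape, leaves): one header instead of one per sub-object."""
    s = shape(value)
    if s not in frequent:
        return value
    leaves = []
    def collect(v):
        if isinstance(v, tuple):
            for w in v:
                collect(w)
        else:
            leaves.append(v)
    collect(value)
    return (s, tuple(leaves))

points = [((1, 2), (3, 4)), ((5, 6), (7, 8)), (9, 10)]
frequent = find_frequent_shapes(points)
compressed = [inline(p, frequent) for p in points]
```

The pair-of-pairs shape occurs twice, so both matching values are stored in the flattened form, while the infrequent flat pair is kept as-is; a real VM would additionally make this representation unobservable to the program.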
While a growing body of literature finds positive impacts of Start-Up Subsidies (SUS) on labor market outcomes of participants, little is known about how the design of these programs shapes their effectiveness and hence how to improve policy. As experimental variation in program design is unavailable, we exploit the 2011 reform of the current German SUS program for the unemployed, which strengthened caseworkers' discretionary power, increased entry requirements and reduced monetary support. We estimate the impact of the reform on the program's effectiveness using samples of participants and non-participants from before and after the reform. To control for time-constant unobserved heterogeneity as well as differential selection patterns based on observable characteristics over time, we combine Difference-in-Differences with inverse probability weighting using covariate balancing propensity scores. Holding participants' observed characteristics as well as macroeconomic conditions constant, the results suggest that the reform was successful in raising employment effects on average. As these findings may be contaminated by changes in selection patterns based on unobserved characteristics, we assess our results using simulation-based sensitivity analyses and find that our estimates are highly robust to changes in unobserved characteristics. Hence, the reform most likely had a positive impact on the effectiveness of the program, suggesting that increasing entry requirements and reducing support increased the program's impacts while reducing the cost per participant.
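The estimation strategy of combining a difference-in-differences contrast with inverse probability weighting can be sketched on simulated data. The toy example below uses a plain logistic propensity score fitted by gradient ascent (the study uses covariate balancing propensity scores); all numbers are simulated and the true effect of 0.5 is an assumption of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                      # observed characteristic
d = rng.binomial(1, 1 / (1 + np.exp(-x)))   # participation depends on x
effect = 0.5                                # true treatment effect (assumed)
dy = 0.2 * x + effect * d + rng.normal(scale=0.5, size=n)  # outcome change

# Fit a logistic propensity score P(D=1|x) by gradient ascent.
X = np.column_stack([np.ones(n), x])
w = np.zeros(2)
for _ in range(1000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.5 * X.T @ (d - p) / n

p = 1 / (1 + np.exp(-X @ w))
# IPW difference-in-differences: reweight controls toward the treated
# using the odds of participation, then contrast mean outcome changes.
odds = p / (1 - p)
att = dy[d == 1].mean() - np.average(dy[d == 0], weights=odds[d == 0])
```

Because `dy` is already a pre/post change, differencing out the weighted controls removes the selection on `x`, and `att` recovers the treatment effect up to sampling noise.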
The large literature that aims to find evidence of climate migration delivers mixed findings. This meta-regression analysis i) summarizes direct links between adverse climatic events and migration, ii) maps patterns of climate migration, and iii) explains the variation in outcomes. Using a set of limited dependent variable models, we meta-analyze the most comprehensive sample to date, comprising 3,625 estimates from 116 original studies, and produce novel insights on climate migration. We find that extremely high temperatures and drying conditions increase migration. We do not find a significant effect of sudden-onset events. Climate migration is most likely to emerge due to contemporaneous events, to originate in rural areas and to take place in middle-income countries, internally, to cities. The likelihood of becoming trapped in affected areas is higher for women and in low-income countries, particularly in Africa. We uniquely quantify how pitfalls typical of the broader empirical climate impact literature affect climate migration findings. We also find evidence of different publication biases.
Reciprocal space slicing
(2021)
An experimental technique that allows faster assessment of out-of-plane strain dynamics of thin film heterostructures via x-ray diffraction is presented. In contrast to conventional high-speed reciprocal space-mapping setups, our approach reduces the measurement time drastically due to a fixed measurement geometry with a position-sensitive detector. This means that neither the incident (ω) nor the exit (2θ) diffraction angle is scanned during the strain assessment via x-ray diffraction. Shifts of diffraction peaks on the fixed x-ray area detector originate from an out-of-plane strain within the sample. Quantitative strain assessment requires the determination of a factor relating the observed shift to the change in the reciprocal lattice vector. The factor depends only on the widths of the peak along certain directions in reciprocal space, the diffraction angle of the studied reflection, and the resolution of the instrumental setup. We provide a full theoretical explanation and exemplify the concept with picosecond strain dynamics of a thin layer of NbO2.
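The final step of the quantitative strain assessment, converting a change in the reciprocal lattice vector into an out-of-plane strain, can be sketched as follows. The wavelength, diffraction angle and peak shift below are generic illustrative values, not the paper's measurements, and the paper-specific geometry- and width-dependent factor relating the detector shift to delta_q is assumed to have been applied already:

```python
import numpy as np

# Illustrative parameters (assumed): Cu K-alpha radiation and a
# symmetric out-of-plane reflection of the thin film.
wavelength = 1.5406      # x-ray wavelength in Angstrom
two_theta = 37.0         # diffraction angle of the studied reflection (deg)

theta = np.radians(two_theta / 2)
q0 = 4 * np.pi * np.sin(theta) / wavelength   # out-of-plane scattering vector (1/A)

def strain_from_shift(delta_q, q=q0):
    """Out-of-plane strain from the change in the reciprocal lattice vector.

    On the fixed area detector, the observed peak shift is first converted
    to delta_q via the setup-specific factor; here we start from delta_q.
    """
    return -delta_q / q

# A shift of -0.001 1/A (peak moves to smaller q) means lattice expansion:
eta = strain_from_shift(-0.001)
```

A negative delta_q yields a positive strain, i.e. the film has expanded along the surface normal, as expected for laser-induced heating in picosecond strain dynamics experiments.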
By regulating the concentration of carbon in our atmosphere, the global carbon cycle drives changes in our planet’s climate and habitability. Earth surface processes play a central, yet insufficiently constrained role in regulating fluxes of carbon between terrestrial reservoirs and the atmosphere. River systems drive global biogeochemical cycles by redistributing significant masses of carbon across the landscape. During fluvial transit, the balance between carbon oxidation and preservation determines whether this mass redistribution is a net atmospheric CO2 source or sink. Existing models for fluvial carbon transport fail to integrate the effects of sediment routing processes, resulting in large uncertainties in fluvial carbon fluxes to the oceans.
In this Ph.D. dissertation, I address this knowledge gap through three studies that focus on the timescale and routing pathways of fluvial mass transfer and show their effect on the composition and fluxes of organic carbon exported by rivers. The hypotheses posed in these three studies were tested in an analog lowland alluvial river system – the Rio Bermejo in Argentina. The Rio Bermejo annually exports more than 100 Mt of sediment and organic matter from the central Andes, and transports this material nearly 1300 km downstream across the lowland basin without influence from tributaries, allowing me to isolate the effects of geomorphic processes on fluvial organic carbon cycling. These studies focus primarily on the geochemical composition of suspended sediment collected from river depth profiles along the length of the Rio Bermejo.
In Chapter 3, I aimed to determine the mean fluvial sediment transit time for the Rio Bermejo and evaluate the geomorphic processes that regulate the rate of downstream sediment transfer. I developed a framework to use meteoric cosmogenic ¹⁰Be (¹⁰Beₘ) as a chronometer to track the duration of sediment transit from the mountain front downstream along the ~1300 km channel of the Rio Bermejo. I measured ¹⁰Beₘ concentrations in suspended sediment sampled from depth profiles, and found a 230% increase along the fluvial transit pathway. I applied a simple model for the time-dependent accumulation of ¹⁰Beₘ on the floodplain to estimate a mean sediment transit time of 8.5±2.2 kyr. Furthermore, I show that sediment transit velocity is influenced by lateral migration rate and channel morphodynamics. This approach to measuring sediment transit time is much more precise than other methods previously used and shows promise for future applications.
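The chronometer logic can be reduced to a minimal sketch: the longer sediment resides on the floodplain, the more meteoric 10Be it accumulates, so the downstream concentration gain divided by a net accumulation rate yields a transit time. All numbers below are illustrative stand-ins, not the study's measured values or its actual accumulation model:

```python
# Minimal sketch of meteoric 10Be as a transit-time chronometer.
# Concentrations and the accumulation rate are illustrative (assumed).
n_upstream = 1.0e8        # 10Be concentration at the mountain front (atoms/g)
n_downstream = 3.3e8      # concentration after ~1300 km of transit
                          # (a 230% increase, as reported in the abstract)
accumulation_rate = 2.7e4 # assumed net 10Be gain during floodplain storage
                          # (atoms per gram per year)

# Linear time-dependent accumulation: transit time = concentration gain / rate
transit_time_yr = (n_downstream - n_upstream) / accumulation_rate
```

For these illustrative inputs the estimate lands on the order of 10^3-10^4 years, the same order as the 8.5±2.2 kyr mean transit time reported in the chapter.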
In Chapter 4, I aimed to quantify the effects of hydrodynamic sorting on the composition and quantity of particulate organic carbon (POC) transported and exported by lowland rivers. I first used scanning electron microscopy (SEM) coupled with nanoscale secondary ion mass spectrometry (NanoSIMS) analyses to show that the Bermejo transports two principal types of POC: 1) mineral-bound organic carbon associated with <4 µm, platy grains, and 2) coarse discrete organic particles. Using n-alkane stable isotope data and particle shape analysis, I showed that these two carbon pools are vertically sorted in the water column due to differences in particle settling velocity. This vertical sorting may allow modern POC to be transported efficiently from source to sink, driving efficient CO2 drawdown. Simultaneously, vertical sorting may cause degraded, mineral-bound POC to be deposited overbank and stored on the floodplain for centuries to millennia, resulting in enhanced POC remineralization. In the Rio Bermejo, selective deposition of coarse material causes the proportion of mineral-bound POC to increase with distance downstream, but the majority of exported POC is composed of discrete organic particles, suggesting that the river is a net carbon sink. In summary, this study shows that selective deposition and hydraulic sorting control the composition and fate of POC during fluvial transit.
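A first-order feel for why the two POC pools separate vertically comes from Stokes' law for small particles, v = 2(ρp − ρf)g r² / (9μ). The sketch below uses generic grain sizes and densities (not the study's measurements, which also involve platy rather than spherical grains):

```python
# Stokes settling velocity for small spheres -- a first-order illustration
# of differential settling between the two POC pools (values are generic).
g = 9.81          # gravitational acceleration, m/s^2
mu = 1.0e-3       # dynamic viscosity of water, Pa s
rho_f = 1000.0    # density of water, kg/m^3

def stokes_velocity(radius_m, rho_p):
    """Terminal settling velocity of a small sphere in still water (m/s)."""
    return 2 * (rho_p - rho_f) * g * radius_m**2 / (9 * mu)

# Fine, dense mineral-bound grain vs coarse, low-density organic particle:
v_mineral = stokes_velocity(2e-6, 2650.0)    # ~2 um radius, quartz-like density
v_organic = stokes_velocity(100e-6, 1100.0)  # ~100 um radius, organic density
```

Because velocity scales with the radius squared, the coarse organic particle settles orders of magnitude faster than the fine mineral-bound grain despite its lower density, so the two pools occupy different depths in the flow and are deposited selectively.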
In Chapter 5, I characterized and quantified POC transformation and oxidation during fluvial transit. I analyzed the radiocarbon content and stable carbon isotopic composition of Rio Bermejo suspended sediment and found that POC ages during fluvial transit, but is also degraded and oxidized during transient floodplain storage. Using these data, I developed a conceptual model for fluvial POC cycling that allows the estimation of POC oxidation relative to POC export, and ultimately reveals whether a river is a net source or sink of CO2 to the atmosphere. Through this study, I found that the Rio Bermejo annually exports more POC than is oxidized during transit, largely due to high rates of lateral migration that cause erosion of floodplain vegetation and soil into the river. These results imply that human engineering of rivers could alter the fluvial carbon balance, by reducing lateral POC inputs and increasing the mean sediment transit time.
Together, these three studies quantitatively link geomorphic processes to rates of POC transport and degradation across time scales from sub-annual to millennial and spatial scales from the nanoscale to 10³ km, laying the groundwork for a global-scale fluvial organic carbon cycling model.
Investigation of Sirtuin 3 overexpression as a genetic model of fasting in hypothalamic neurons
(2021)
Geomagnetic field modeling using spherical harmonics requires the inversion for hundreds to thousands of parameters. This large-scale problem can always be formulated as an optimization problem, where a global minimum of a certain cost function has to be calculated. A variety of approaches is known for solving this inverse problem, e.g. derivative-based methods or least-squares methods and their variants. Each of these methods has its own advantages and disadvantages, concerning, for example, the applicability to non-differentiable functions or the runtime of the corresponding algorithm.
In this work, we pursue the goal of finding an algorithm that is faster than the established methods and applicable to non-linear problems. Such non-linear problems occur, for example, when estimating Euler angles or when the more robust L_1 norm is applied. Therefore, we investigate the usability of stochastic optimization methods from the CMA-ES family for modeling the geomagnetic field of Earth's core. On the one hand, the basics of core field modeling and its parameterization are discussed using examples from the literature. On the other hand, the theoretical background of the stochastic methods is provided. A specific CMA-ES algorithm was successfully applied to invert data of the Swarm satellite mission and to derive the core field model EvoMag. The EvoMag model agrees well with established models and with observatory data from Niemegk. Finally, we present some observed difficulties and discuss the results of our model.
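Why a stochastic method suits a non-differentiable L1 misfit can be sketched with a toy linear inversion. The example below is a stripped-down (mu/mu, lambda) evolution strategy, a much simplified relative of CMA-ES without covariance adaptation; the forward operator, coefficients and schedule are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear forward problem standing in for spherical-harmonic modeling:
# data = G @ m_true + noise; we invert for m under a robust L1 misfit,
# which is non-differentiable and thus a natural fit for stochastic search.
G = rng.normal(size=(50, 5))
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.5])
data = G @ m_true + rng.laplace(scale=0.05, size=50)

def misfit(m):
    """Robust L1 data misfit."""
    return np.abs(G @ m - data).sum()

# Stripped-down (mu/mu, lambda) evolution strategy, for illustration only
# (no covariance matrix adaptation, fixed geometric step-size decay).
lam, mu, sigma = 20, 5, 1.0
mean = np.zeros(5)
for _ in range(300):
    pop = mean + sigma * rng.normal(size=(lam, 5))          # sample offspring
    best = pop[np.argsort([misfit(p) for p in pop])[:mu]]   # select the mu best
    mean = best.mean(axis=0)                                # recombine
    sigma *= 0.97                                           # shrink step size

# mean now approximates m_true under the L1 norm
```

Full CMA-ES additionally adapts a covariance matrix and the step size from the selected samples, which is what makes it competitive on ill-conditioned problems with hundreds to thousands of parameters.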
Sephardim and Ashkenazim
(2021)
Sephardic and Ashkenazic Judaism have long been studied separately. Yet, scholars are becoming ever more aware of the need to merge them into a single field of Jewish Studies. This volume opens new perspectives and bridges traditional gaps. The authors are not simply contributing to their respective fields of Sephardic or Ashkenazic Studies. Rather, they all include both Sephardic and Ashkenazic perspectives as they reflect on different aspects of encounters and reconsider traditional narratives. Subjects range from medieval and early modern Sephardic and Ashkenazic constructions of identities, influences, and entanglements in the fields of religious art, halakhah, kabbalah, messianism, and charity to modern Ashkenazic Sephardism and Sephardic admiration for Ashkenazic culture. For reasons of coherency, the contributions all focus on European contexts between the fourteenth and the nineteenth centuries.
We develop a model of optimal carbon taxation and redistribution that takes horizontal equity concerns into account by considering heterogeneous energy efficiencies. By deriving first- and second-best rules for policy instruments including carbon taxes, transfers and energy subsidies, we then investigate analytically how horizontal equity is considered in the social welfare maximizing tax structure. We calibrate the model to German household data and a 30 percent emission reduction goal. Our results show that energy-intensive households should receive more redistributive resources than energy-efficient households if and only if social inequality aversion is sufficiently high. We further find that redistribution of carbon tax revenue via household-specific transfers is the first-best policy. Equal per-capita transfers do not suffer from informational problems, but increase mitigation costs by around 15 percent compared to the first-best for unity inequality aversion. Adding renewable energy subsidies or non-linear energy subsidies reduces mitigation costs further without relying on observability of households' energy efficiency.
The aim of the doctoral project was to answer the question of whether structural word-initial noun capitalization, which apart from German can otherwise only be found in Luxembourgish, has a function that is advantageous for the reader. The overriding hypothesis was that an advantage arises from activating a syntactic category, namely the core of a noun phrase, through the parafoveal perception of the capital letters. This perception from the corner of the eye should make it possible to preprocess the following noun. As a result, sentence processing should be facilitated, which should ultimately be reflected in overall faster reading times and fixation durations.
The structure of the project includes three studies, some of which included different participant groups:
Study 1:
Study design: Semantic priming using garden-path sentences should bring out the functionality of noun capitalization for the reader
Participant groups: German natives reading German
Study 2:
Study design: same design as study 1, but in English
Participant groups:
English natives without any knowledge of German reading English
English natives who regularly read German reading English
German natives with high proficiency in English reading English
Study 3:
Study design:
Influence of the noun frequency on a potential preprocessing using the boundary paradigm; Study languages: German and English
Participant groups:
German natives reading German
English natives without any knowledge of German reading English
German natives with high proficiency in English reading English
Brief summary: Noun capitalization clearly has an impact on sentence processing in both German and English. However, it cannot be confirmed that this impact constitutes a substantial, decisive advantage.
We investigate how the economic consequences of the pandemic, and of the government-mandated measures to contain its spread, affect the self-employed – particularly women – in Germany. For our analysis, we use representative, real-time survey data in which respondents were asked about their situation during the COVID-19 pandemic. Our findings indicate that among the self-employed, who generally face a higher likelihood of income losses due to COVID-19 than employees, women are 35% more likely to experience income losses than their male counterparts. Conversely, we do not find a comparable gender gap among employees. Our results further suggest that the gender gap among the self-employed is largely explained by the fact that women disproportionately work in industries that are more severely affected by the COVID-19 pandemic. Our analysis of potential mechanisms reveals that women are significantly more likely to be impacted by government-imposed restrictions, i.e. regulations on opening hours. We conclude that future policy measures intended to mitigate the consequences of such shocks should account for this considerable variation in economic hardship.
In this paper, we study the effect of exogenous global crop price changes on migration from agricultural and non-agricultural households in Sub-Saharan Africa. We show that, similar to the effect of positive local weather shocks, the effect of a locally relevant global crop price increase on household out-migration depends on the initial household wealth. Higher international producer prices relax the budget constraint of poor agricultural households and facilitate migration. The order of magnitude of a standardized price effect is approximately one third of the standardized effect of a local weather shock. Unlike positive weather shocks, which mostly facilitate internal rural-urban migration, positive income shocks through rising producer prices only increase migration to neighboring African countries, likely due to the simultaneous decrease in real income in nearby urban areas. Finally, we show that while higher producer prices induce conflict, conflict does not play a role in the household's decision to send a member as a labor migrant.
Diabetes is a major public health problem with increasing global prevalence. Type 2 diabetes (T2D), which accounts for 90% of all diagnosed cases, is a complex polygenic disease also modulated by epigenetics and lifestyle factors. For the identification of T2D-associated genes, linkage analyses combined with mouse breeding strategies and bioinformatic tools were useful in the past. In a previous study in which a backcross population of the lean and diabetes-prone dilute brown non-agouti (DBA) mouse and the obese and diabetes-susceptible New Zealand obese (NZO) mouse was characterized, a major diabetes quantitative trait locus (QTL) was identified on chromosome 4. The locus was designated non-insulin dependent diabetes from DBA (Nidd/DBA). The aim of this thesis was (i) to perform a detailed phenotypic characterization of the Nidd/DBA mice, (ii) to further narrow the critical region and (iii) to identify the responsible genetic variant(s) of the Nidd/DBA locus. The phenotypic characterization of recombinant congenic mice carrying a 13.6 Mbp Nidd/DBA fragment with 284 genes presented a gradually worsening metabolic phenotype. Nidd/DBA allele carriers exhibited severe hyperglycemia (~19.9 mM) and impaired glucose clearance at 12 weeks of age. Ex vivo perifusion experiments with islets of 13-week-old congenic mice revealed a tendency towards reduced insulin secretion in homozygous DBA mice. In addition, 16-week-old mice showed a severe loss of β-cells and reduced pancreatic insulin content. Pathway analysis of transcriptome data from islets of congenic mice pointed towards a downregulation of cell survival genes. Morphological analysis of pancreatic sections displayed a reduced number of bi-hormonal cells co-expressing glucagon and insulin in homozygous DBA mice, which could indicate a reduced plasticity of endocrine cells in response to hyperglycemic stress. 
Further generation and phenotyping of recombinant congenic mice enabled the isolation of a 3.3 Mbp fragment that was still able to induce hyperglycemia and contained 61 genes. Bioinformatic analyses including haplotype mapping, sequence and transcriptome analysis were integrated in order to further reduce the number of candidate genes and to identify the presumable causative gene variant. Four putative candidate genes (Ttc39a, Kti12, Osbpl9, Calr4) were defined, which were either differentially expressed or carried a sequence variant. In addition, in silico ChIP-Seq analyses of the 3.3 Mbp region indicated a high number of SNPs located in active regions of binding sites of β-cell transcription factors. This points towards potentially altered cis-regulatory elements that could be responsible for the phenotype conferred by the Nidd/DBA locus. In summary, the Nidd/DBA locus mediates impaired glucose homeostasis and reduced insulin secretion capacity which finally leads to β-cell death. The downregulation of cell survival genes and reduced plasticity of endocrine cells could further contribute to the β-cell loss. The critical region was narrowed down to a 3.3 Mbp fragment containing 61 genes, of which four might be involved in the development of the diabetogenic Nidd/DBA phenotype.
‘Smart’ Janus emulsions
(2021)
Emulsions constitute one of the most prominent and continuously evolving research areas in Colloid Chemistry, which involves the preparation of mixtures or dispersions of immiscible components in a continuous medium. Besides conventional oil-in-water or water-in-oil emulsions, other emulsions of complex droplet morphologies have recently attracted significant research interest. Especially Janus emulsions, in which each droplet is comprised of two distinct sub-regions, have shown versatile potential applications. One of their advantages is the possibility of compartmentalization, which makes it possible to employ two different chemistries in a single droplet. Though microfluidic methods are conventionally used to prepare Janus emulsions, their industrial applications are largely hindered by low throughput and extensive instrumentation. Recently, it has been discovered that simple one-pot moderate/high energy emulsification is also capable of developing Janus morphology, although preparation and stabilization remain substantially challenging. This cumulative doctoral thesis focuses on the preparation and characterization of 'smart' Janus emulsions, i.e. Janus emulsions with special stimuli-responsive features. One-step moderate/high energy emulsification of olive and silicone oil in an aqueous medium was carried out. Special consideration was devoted to the interfacial tensions among the components to maintain the criteria of forming characteristic droplet architectures, in addition to avoiding multiple emulsion destabilization phenomena like imminent phase separation or even separated droplet formation. A series of investigations was conducted on the formation of complexes of charged macromolecules and their role as stabilizers in achieving stable Janus emulsions over a realistic timeframe (more than 3 months). The correlation between the size of the stabilizer particles and the droplet size of the emulsion was established.
Furthermore, it was observed that Janus emulsion gels with interesting rheological properties can be fabricated in the presence of suitable polyelectrolyte complexes. Janus emulsions that could be influenced by pH, temperature or magnetic field were successfully produced in the presence of characteristic stimuli-responsive stabilizers. Afterwards, the effect of these changes was studied by different characterization techniques. The size and morphology could be tuned easily by changing the pH. The incorporation of iron oxide magnetic nanoparticles (synthesized separately by a co-precipitation method) into one component of the Janus emulsion was carried out so that the movement and orientation of the complex droplets in aqueous media could be controlled by an external magnetic field. Additionally, temperature-triggered instantaneous reversible breakdown of Janus droplets was also accomplished. The responses of the Janus droplets to the stimuli were well documented and explained. Another goal of the present contribution was to exploit this special morphological feature of emulsions as a template for producing porous materials. This was demonstrated by the preparation of ultralight, magnetically responsive aerogels, utilizing Janus emulsion gels. The produced aerogels also showed the capacity to separate toxic dye from water. To the best of our knowledge, this is the first example of investigation towards batch-scale production of Janus emulsions with such special stimuli-responsive properties by a simple bulk emulsification method.
In the light of climate change, rising demands for agricultural products and the intensification and specialization of agricultural systems, ensuring an adequate and reliable supply of food is fundamental for food security. Maintaining diversity and redundancy has been postulated as one generic principle to increase the resilience of agricultural production and other ecosystem services. For example, if one crop fails due to climate instability and extreme events, others can compensate the losses. Crop diversity might be particularly important if different crops show asynchronous production trends. Furthermore, spatial heterogeneity has been suggested to increase stability at larger scales as production losses in some areas can be buffered by surpluses in undisturbed ones. Besides systematically investigating the mechanisms underlying stability, identifying transformative pathways that foster them is important.
In my thesis, I aim at answering the following questions: (i) How does yield stability differ between nations, regions and farms, and what is the effect of crop diversity on yield stability in relation to agricultural inputs, climate heterogeneity, climate instability and time at the national, regional or farm level? (ii) Is asynchrony between crops a better predictor of production stability than crop diversity? (iii) What is the effect of asynchrony between and within crops on stability and how is it related to crop diversity and space, respectively? (iv) What is the state of the art and what are knowledge gaps in exploring resilience and its multidimensionality in ecological and social-ecological systems with agent-based models and what are potential ways forward?
In the first chapter, I provide the theoretical background for the subsequent analyses. I stress the need to better understand the resilience of social-ecological systems and particularly the stability of agricultural production. Moreover, I introduce diversity and spatial heterogeneity as two prominently discussed resilience mechanisms and describe approaches to assess resilience.
In the second chapter, I combined agriculture and climate data at three levels of organization and spatial extents to investigate yield stability patterns and their relation to crop diversity, fertilizer, irrigation, climate heterogeneity and instability and time of nations globally, regions in Europe and farms in Germany using statistical analyses. Yield stability decreased from the national to the farm level. Several nations and regions substantially contributed to larger-scale stability. Crop diversity was positively associated with yield stability across all three levels of organization. This effect was typically more profound at smaller scales and in variable climates. In addition to crop diversity, climate heterogeneity was an important stabilizing mechanism especially at larger scales. These results confirm the stabilizing effect of crop diversity and spatial heterogeneity, yet their importance depends on the scale and agricultural management.
Building on the findings of the second chapter, in the third chapter I deepened my research on the effect of crop diversity at the national level. In particular, I tested if asynchrony between crops, i.e. between the temporal production patterns of different crops, better predicts agricultural production stability than crop diversity. The stabilizing effect of asynchrony was multiple times higher than the effect of crop diversity, i.e. asynchrony is one important property that can explain why a higher diversity supports the stability of national food production. Therefore, strategies to stabilize agricultural production through crop diversification also need to account for the asynchrony of the crops considered.
The previous chapters suggest that both asynchrony between crops and spatial heterogeneity are important stabilizing mechanisms. In the fourth chapter, I therefore aimed at better understanding the relative importance of asynchrony between and within crops, i.e. between the temporal production patterns of different crops and between the temporal production patterns of different cultivation areas of the same crop. Better understanding their relative importance is important to inform agricultural management decisions, but so far this has been hardly assessed. To address this, I used crop production data to study the effect of asynchrony between and within crops on the stability of agricultural production in regions in Germany and nations in Europe. Both asynchrony between and within crops consistently stabilized agricultural production. Adding crops increased asynchrony between crops, yet this effect levelled off after eight crops in regions in Germany and after four crops in nations in Europe. Combining already ten farms within a region led to high asynchrony within crops, indicating distinct production patterns, while this effect was weaker when combining multiple regions within a nation. The results suggest that both mechanisms need to be considered in agricultural management strategies that strive for more resilient farming systems.
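The quantities at the heart of these chapters, temporal stability and asynchrony, can be sketched on simulated production data. The example below uses the common mean-over-standard-deviation stability measure and one minus the Loreau & de Mazancourt synchrony index; the crops, time span and noise levels are invented for illustration and are not the thesis' data or necessarily its exact metrics:

```python
import numpy as np

rng = np.random.default_rng(2)
years = 30

# Illustrative yearly production of three crops (arbitrary units): two are
# anti-correlated (asynchronous), the third fluctuates independently.
common = rng.normal(size=years)
wheat  = 10 + common + rng.normal(scale=0.5, size=years)
maize  = 10 - common + rng.normal(scale=0.5, size=years)
barley = 10 + rng.normal(size=years)
crops = np.vstack([wheat, maize, barley])

def stability(series):
    """Temporal stability as mean over standard deviation."""
    return series.mean() / series.std()

def asynchrony(crop_matrix):
    """One minus the synchrony index of Loreau & de Mazancourt:
    variance of total production over the squared sum of crop-level sds."""
    total_var = crop_matrix.sum(axis=0).var()
    sd_sum = crop_matrix.std(axis=1).sum()
    return 1 - total_var / sd_sum**2

total = crops.sum(axis=0)
```

Because the wheat and maize fluctuations cancel, the portfolio's asynchrony is high and the total production is considerably more stable than any single crop, the mechanism the chapter quantifies at the regional and national level.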
The analyses in the foregoing chapters focused on different levels of organization, scales and factors potentially influencing agricultural stability. However, these statistical analyses are restricted by data availability and investigate correlative relationships, thus they cannot provide a mechanistic understanding of the actual processes underlying resilience. In this regard, agent-based models (ABM) are a promising tool. Besides their ability to measure different properties and to integrate multiple situations through extensive manipulation in a fully controlled system, they can capture the emergence of system resilience from individual interactions and feedbacks across different levels of organization. In the fifth chapter, I therefore reviewed the state of the art and potential knowledge gaps in exploring resilience and its multidimensionality in ecological and social-ecological systems with ABMs. Next, I derived recommendations for a more effective use of ABMs in resilience research. The review suggests that the potential of ABMs is not utilized in most models as they typically focus on a single dimension of resilience and are mostly limited to one reference state, disturbance type and scale. Moreover, only few studies explicitly test the ability of different mechanisms to support resilience. To solve real-world problems related to the resilience of complex systems, ABMs need to assess multiple stability properties for different situations and under consideration of the mechanisms that are hypothesized to render a system resilient.
In the sixth chapter, I discuss the major conclusions that can be drawn from the previous chapters. Moreover, I showcase the use of simulation models to identify management strategies to enhance asynchrony and thus stability, and the potential of ABMs to identify pathways to implement such strategies.
The results of my thesis confirm the stabilizing effect of crop diversity, yet its importance depends on the scale, agricultural management and climate. Moreover, strategies to stabilize agricultural production through crop diversification also need to account for the asynchrony of the crops considered. As spatial heterogeneity and particularly asynchrony within crops strongly enhances stability, integrated management approaches are needed that simultaneously address multiple resilience mechanisms at different levels of organization, scales and time horizons. For example, the simulation suggests that only increasing the number of crops at both the pixel and landscape level avoids trade-offs between asynchrony between and within crops. If their potential is better exploited, agent-based models have the capacity to systematically assess resilience and to identify comprehensive pathways towards resilient farming systems.
The co-occurrence of warm spells and droughts can lead to detrimental socio-economic and ecological impacts, largely surpassing the impacts of either warm spells or droughts alone. We quantify changes in the number of compound warm spells and droughts from 1979 to 2018 in the Mediterranean Basin using the ERA5 data set. We analyse two types of compound events: 1) warm season compound events, which are extreme in absolute terms in the warm season from May to October, and 2) year-round deseasonalised compound events, which are extreme in relative terms with respect to the time of year. The number of compound events increases significantly, and warm spells in particular increase strongly, with annual growth rates of 3.9 % (3.5 %) for warm season (deseasonalised) compound events and 4.6 % (4.4 %) for warm spells, whereas for droughts the change is more ambiguous and depends on the applied definition. The rise in the number of compound events is therefore primarily driven by temperature changes rather than a lack of precipitation. July and August show the highest increases in warm season compound events, whereas the highest increases in deseasonalised compound events occur in spring and early summer. This increase in deseasonalised compound events can potentially have a significant impact on the functioning of Mediterranean ecosystems, as spring and early summer mark the peak phase of ecosystem productivity and a vital phenophase.
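The deseasonalised event definition can be illustrated on synthetic data. In the sketch below, the 90th/10th percentile thresholds, the 30-day precipitation proxy for drought and all parameter values are illustrative assumptions, not the study's exact definitions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_doy = 40, 365
# synthetic daily temperature and 30-day accumulated precipitation,
# arranged as (year, day-of-year) so thresholds can vary with the calendar day
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n_doy) / 365) \
       + rng.normal(0, 3, (n_years, n_doy))
prec30 = rng.gamma(2.0, 20.0, (n_years, n_doy))

# deseasonalised thresholds: percentile across years, per calendar day
t_hot = np.percentile(temp, 90, axis=0)    # day-wise 90th percentile of temperature
p_dry = np.percentile(prec30, 10, axis=0)  # day-wise 10th percentile of precipitation

hot = temp > t_hot        # warm anomaly relative to the time of year
dry = prec30 < p_dry      # dry anomaly relative to the time of year
compound = hot & dry      # both conditions met on the same day

print("compound days per year:", compound.sum(axis=1).mean())
```

Because the thresholds follow the seasonal cycle, a mild spring day can still count as "hot", which is exactly how springtime compound events become detectable in the deseasonalised analysis.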
Compound weather events may lead to extreme impacts that can affect many aspects of society, including agriculture. Identifying the underlying mechanisms that cause extreme impacts, such as crop failure, is of crucial importance to improve their understanding and forecasting. In this study, we investigate whether key meteorological drivers of extreme impacts can be identified using the least absolute shrinkage and selection operator (LASSO) in a model environment, a method that allows for automated variable selection and is able to handle collinearity between variables. As an example of an extreme impact, we investigate crop failure using annual wheat yield as simulated by the Agricultural Production Systems sIMulator (APSIM) crop model driven by 1600 years of daily weather data from a global climate model (EC-Earth) under present-day conditions for the Northern Hemisphere. We then apply LASSO logistic regression to determine which weather conditions during the growing season lead to crop failure. We obtain good model performance in central Europe and the eastern half of the United States, while crop failure years in regions in Asia and the western half of the United States are less accurately predicted. Model performance correlates strongly with annual mean and variability of crop yields; that is, model performance is highest in regions with relatively large annual crop yield mean and variability. Overall, for nearly all grid points, the inclusion of temperature, precipitation and vapour pressure deficit is key to predicting crop failure. In addition, meteorological predictors during all seasons are required for a good prediction. These results illustrate the omnipresence of compounding effects of both meteorological drivers and different periods of the growing season in creating crop failure events.
In particular, vapour pressure deficit and climate extreme indicators such as diurnal temperature range and the number of frost days are selected by the statistical model as relevant predictors for crop failure at most grid points, underlining their overarching relevance. We conclude that the LASSO regression model is a useful tool to automatically detect compound drivers of extreme impacts and could be applied to other weather impacts such as wildfires or floods. As the detected relationships are purely correlative in nature, more detailed analyses are required to establish the causal structure between drivers and impacts.
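The variable-selection step described above can be sketched without the crop model. The following numpy-only example fits an L1-penalised logistic regression by proximal gradient descent (ISTA) on synthetic "seasonal" predictors; the data, penalty strength and predictor names are assumptions for illustration, while the study itself applied the LASSO to APSIM-simulated yields:

```python
import numpy as np

def lasso_logistic(X, y, lam=0.05, lr=0.3, n_iter=5000):
    """L1-penalised logistic regression via proximal gradient (ISTA).

    Minimises mean log-loss + lam * ||w||_1; the soft-threshold step
    drives coefficients of weak or redundant predictors towards zero.
    """
    n, p = X.shape
    w, b = np.zeros(p), 0.0
    for _ in range(n_iter):
        prob = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w = w - lr * (X.T @ (prob - y) / n)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft threshold
        b -= lr * (prob - y).mean()
    return w, b

rng = np.random.default_rng(2)
n = 1600  # one synthetic 'season' per row
tmax = rng.normal(25, 3, n)
vpd = 0.8 * tmax + rng.normal(0, 1, n)   # deliberately collinear with tmax
prec = rng.gamma(2.0, 30.0, n)
frost = rng.poisson(3, n).astype(float)

X = np.column_stack([tmax, vpd, prec, frost])
X = (X - X.mean(axis=0)) / X.std(axis=0)
# synthetic 'crop failure' label driven by high vpd and low precipitation only
logit = 1.5 * X[:, 1] - 1.0 * X[:, 2] - 2.0
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

w, b = lasso_logistic(X, y)
for name, coef in zip(["tmax", "vpd", "prec", "frost"], w):
    print(f"{name:6s} {coef:+.3f}")
```

With collinear predictors (here tmax and vpd), the L1 penalty tends to retain the one that genuinely drives the outcome and shrink the redundant one, which is the property the study exploits for automated driver detection.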
Flooding is a vast problem in many parts of the world, including Europe. It occurs mainly due to extreme weather conditions (e.g. heavy rainfall and snowmelt), and the consequences of flood events can be devastating. Flood risk is commonly defined as a combination of the probability of an event and its potential adverse impacts. It therefore covers three major dynamic components: hazard (the physical characteristics of a flood event), exposure (the people and their physical environment that are exposed to floods), and vulnerability (the elements at risk). Floods are natural phenomena and cannot be fully prevented, but their risk can be managed and mitigated. Sound flood risk management and mitigation require a proper risk assessment, which in turn requires a clear understanding of flood risk dynamics. For instance, human activity may contribute to an increase in flood risk. Anthropogenic climate change causes more intense rainfall and sea level rise, and therefore an increase in the scale and frequency of flood events. On the other hand, inappropriate risk management and structural protection measures may be ineffective for risk reduction. Additionally, risk increases as the number of assets and people within flood-prone areas grows. To address these issues, the first objective of this thesis is to perform a sensitivity analysis to understand the impacts of changes in each flood risk component on overall risk, as well as their mutual interactions. A multitude of changes along the risk chain are simulated with a regional flood model (RFM) in which all processes from the atmosphere through the catchment and river system to the damage mechanisms are taken into consideration. The impacts of changes in risk components are explored via plausible change scenarios for the mesoscale Mulde catchment (a sub-basin of the Elbe) in Germany.
A proper risk assessment requires a reasonable representation of real-world flood events. Traditionally, flood risk is assessed by assuming homogeneous return periods of flood peaks throughout the considered catchment. In reality, however, flood events are spatially heterogeneous, so the traditional assumption misestimates flood risk, especially for large regions. In this thesis, two studies investigate the importance of spatial dependence in large-scale flood risk assessment at different spatial scales. In the first, the "real" spatial dependence of the return periods of flood damages is represented by a continuous risk modelling approach that includes spatially coherent patterns of hydrological and meteorological controls (i.e. soil moisture and weather patterns). The risk estimates under this modelled dependence assumption are then compared with those under two other assumptions on the spatial dependence of the return periods of flood damages: complete dependence (homogeneous return periods) and independence (randomly generated heterogeneous return periods), for the Elbe catchment in Germany. The second study represents the "real" spatial dependence by multivariate dependence models. Similar to the first study, the three assumptions on the spatial dependence of the return periods of flood damages are compared, but at national (United Kingdom and Germany) and continental (Europe) scales. Furthermore, the impacts of the different models, of tail dependence, and of the structural flood protection level on the flood risk under the different spatial dependence assumptions are investigated.
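The effect of the dependence assumptions on aggregate risk can be illustrated with toy regional damage curves (hypothetical functional forms, not the thesis's models): sampling the same annual non-exceedance probability for all regions mimics complete dependence, while sampling them independently mimics independence:

```python
import numpy as np

rng = np.random.default_rng(3)
n_regions, n_years = 10, 100_000
scale = rng.uniform(0.5, 2.0, n_regions)   # hypothetical regional exposure levels

def damage(u):
    """Illustrative damage curve: damage grows with the annual
    non-exceedance probability u, where u = 1 - 1/T for return period T."""
    return scale * (-np.log1p(-u)) ** 2    # convex, with a heavy upper tail

# complete dependence: the same return-period quantile hits every region
u_comon = np.repeat(rng.uniform(size=(n_years, 1)), n_regions, axis=1)
# independence: each region draws its own quantile every year
u_indep = rng.uniform(size=(n_years, n_regions))

total_comon = damage(u_comon).sum(axis=1)
total_indep = damage(u_indep).sum(axis=1)

# the 200-year event corresponds to the 99.5th percentile of annual damage
print("200-yr damage, complete dependence:", np.percentile(total_comon, 99.5))
print("200-yr damage, independence:       ", np.percentile(total_indep, 99.5))
```

Under complete dependence, the rare-event aggregate damage is the sum of every region's rare-event damage, which inflates the tail quantile; this is the mechanism behind the overestimation percentages reported below for the UK, Germany and Europe.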
The outcomes of the sensitivity analysis framework suggest that flood risk can vary dramatically as a result of possible change scenarios. Risk components that have so far received little attention (e.g. changes in dike systems and in vulnerability) may mask the influence of climate change, which is the most frequently investigated component.
The results of the spatial dependence research in this thesis further show that the damage under the false assumption of complete dependence is 100 % larger than the damage under the modelled dependence assumption for events with return periods greater than approximately 200 years in the Elbe catchment. The complete dependence assumption overestimates the 200-year flood damage, a benchmark indicator for the insurance industry, by 139 %, 188 % and 246 % for the UK, Germany and Europe, respectively. The misestimation of risk under the different assumptions can vary from upstream to downstream of the catchment. In addition, tail dependence in the model and the flood protection level in the catchments can affect the risk estimates and the differences between the spatial dependence assumptions.
In conclusion, a broader consideration of the risk components that can affect flood risk, together with consideration of the spatial dependence of flood return periods, is strongly recommended for a better understanding of flood risk and, consequently, for sound flood risk management and mitigation.
As society paves its way towards device miniaturization and precision medicine, micro-scale actuation and guided transport become increasingly prominent research fields with high potential impact in both technological and clinical contexts. In order to accomplish directed motion of micron-sized objects, such as biosensors and drug-releasing microparticles, towards specific target sites, a promising strategy is the use of living cells as smart, biochemically powered carriers, building so-called bio-hybrid systems. Inspired by leukocytes, native cells of living organisms that efficiently migrate to critical targets such as tumor tissue, an emerging concept is to exploit the amoeboid crawling motility of such cells as a means of transport for drug delivery applications.
In the research work described in this thesis, I synergistically applied experimental, computational and theoretical modeling approaches to investigate the behaviour and transport mechanism of a novel kind of bio-hybrid system for active transport at the micro-scale, referred to as a cellular truck. This system consists of an amoeboid crawling cell, the carrier, attached to a microparticle, the cargo, which may ideally be drug-loaded for specific therapeutic treatments.
For the experimental investigation, I employed the amoeba Dictyostelium discoideum as the crawling cellular carrier, as it is a renowned model organism for leukocyte migration and, more generally, for eukaryotic cell motility. The experiments revealed a complex, recurrent cell-cargo relative motion, together with an intermittent motility of the cellular truck as a whole. The evidence suggests that cargoes attached to amoeboid cells act as a mechanical stimulus driving cell polarization, thus promoting cell motility and giving rise to the observed intermittent dynamics of the truck. In particular, bursts of cytoskeletal polarity along the cell-cargo axis were found to occur at a rate that depends on geometrical features of the cargo, such as particle diameter. Overall, the collected experimental evidence points to a pivotal role of cell-cargo interactions in the emergent motion dynamics of the cellular truck. Notably, these interactions can determine the transport capabilities of amoeboid cells, as the cargo size significantly impacts the cytoskeletal activity and the repolarization dynamics along the cell-cargo axis, the latter being responsible for truck displacement and reorientation.
Furthermore, I developed a modeling framework, built upon the experimental evidence on cellular truck behaviour, that connects the relative dynamics and interactions arising at the truck scale with the actual particle transport dynamics. Numerical simulations of the proposed model successfully reproduced the phenomenology of the cell-cargo system while enabling the prediction of the transport properties of cellular trucks over larger spatial and temporal scales. The theoretical analysis provided a deeper understanding of the role of cell-cargo interactions in mass transport, unveiling in particular how the long-time transport efficiency is governed by the interplay between the persistence time of cell polarity and the time scales of the relative dynamics stemming from the cell-cargo interaction. Interestingly, the model predicts the existence of an optimal cargo size that enhances the diffusivity of cellular trucks; this is in line with previous independent experimental data, which appeared rather counterintuitive and had no explanation prior to this study.
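The predicted optimum can be sketched with the standard result for a two-dimensional persistent random walk, D = v²τ/2, where v is the crawling speed and τ the polarity persistence time. The size dependences of v and τ below are hypothetical, chosen only to mirror the qualitative mechanism described above (larger cargoes slow the cell but prolong polarization):

```python
import numpy as np

# hypothetical cargo-size dependences, for illustration only:
# speed decreases with cargo drag, persistence time grows (and saturates)
# with the polarising mechanical stimulus of a larger cargo
sizes = np.linspace(0.5, 15, 200)       # cargo diameter, arbitrary units
v = 1.0 / (1.0 + 0.2 * sizes)           # crawling speed [a.u.]
tau = sizes / (1.0 + 0.05 * sizes)      # polarity persistence time [a.u.]

# long-time diffusivity of a 2D persistent random walk: D = v^2 * tau / 2
D = v**2 * tau / 2.0
opt = sizes[np.argmax(D)]
print(f"optimal cargo size ~ {opt:.2f} (a.u.), D_max = {D.max():.4f}")
```

Because v² decreases while τ increases with size, their product has an interior maximum, which qualitatively reproduces the optimal cargo size enhancing truck diffusivity.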
In conclusion, my research work sheds light on the importance of cargo-carrier interactions in the context of crawling-cell-mediated particle transport and provides a prototypical, multifaceted framework for the analysis and modelling of such complex bio-hybrid systems and their prospective optimization.
Future Outlook and Scenarios
(2021)
Where is local self-government heading in the future? Among the trends identified is, first, an intensification of multilevel, intermunicipal, and cross-border governance. In the future, even more cooperation and coordination among different political and administrative levels will be required, as territorial boundaries have become increasingly incongruent with functional public activities. Secondly, the innovative potential of introducing markets as templates for organisational reform has been exhausted. Future reforms will most likely try to adapt market reforms to local public contexts, or even reverse the development. Finally, a tightening of state steering and an increased dependence on state funding to uphold local services is expected. Waves of amalgamations might slow down this process, but they will not make financial problems disappear completely.
The digital transformation of the local public sector is an important step towards making local service delivery more citizen-centred and user-oriented. The state of digitalisation in German public administration, however, lags well behind the far-reaching hopes associated with this modernisation theme. This chapter explores to what extent digital tools have been introduced in German local governments, more specifically in local one-stop shops (Bürgerämter), which hurdles local actors face when coping with the digital transformation, and what impact these tools have had on citizens and local employees, including unintended effects and dysfunctionalities so far. A comprehensive standardised survey among mayors and heads of staff councils in German municipalities, as well as citizen and employee surveys and case studies, forms the empirical basis of this chapter.
Beyond Charter and Index
(2021)
The chapter examines the concept of local autonomy in modern European states by analysing theoretical approaches. The classical, deductive approach defines local autonomy mostly through legal, economic and financial conditions, especially through formal structures. This proves too weak to capture the internal strength of local authorities and their real political-administrative power. A more multidimensional definition of autonomy is needed, including indicators such as importance and capacity as well as discretion and democracy at the local level. The authors draw on the indicators used by the Local Autonomy Index (LAI) developed by Ladner et al. and by the European Charter of Local Self-Government to find out what is still missing. The contribution thus aims to stimulate the scientific debate on local autonomy in Europe. Until the concept of local autonomy fits all European states with their extremely differentiated local authorities, research in this field remains a conceptual and heuristic endeavour, especially because local government and democracy are still territory-based, whereas the reality is one of multilevel and cross-border governance.