The wide distribution of location-acquisition technologies means that large volumes of spatio-temporal data are continuously being accumulated. Positioning systems such as GPS enable the tracking of various moving objects' trajectories, which are usually represented by a chronologically ordered sequence of observed locations. The analysis of movement patterns based on detailed positional information creates opportunities for applications that can improve business decisions and processes in a broad spectrum of industries (e.g., transportation, traffic control, or medicine). Due to the large data volumes generated in these applications, the cost-efficient storage of spatio-temporal data is desirable, especially when in-memory database systems are used to achieve interactive performance requirements.
To efficiently utilize the available DRAM capacities, modern database systems support various tuning options to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional index structures). By considering horizontal data partitioning, we can independently apply different tuning options on a fine-grained level. However, selecting cost- and performance-balancing configurations is challenging due to the vast number of possible setups consisting of mutually dependent individual decisions.
In this thesis, we introduce multiple approaches to improve spatio-temporal data management by automatically optimizing diverse tuning options for the application-specific access patterns and data characteristics. Our contributions are as follows:
(1) We introduce a novel approach to determine fine-grained table configurations for spatio-temporal workloads. Our linear programming (LP) approach jointly optimizes (i) data compression, (ii) ordering, (iii) indexing, and (iv) tiering. We propose models that address cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload, memory budget, and data characteristics. To yield maintainable and robust configurations, we further extend our LP-based approach to incorporate reconfiguration costs as well as optimizations for multiple potential workload scenarios.
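The joint selection of tuning options can be framed as a 0/1 assignment problem: pick exactly one option per data partition so that the frequency-weighted scan cost is minimal under a memory budget. The following sketch is a toy illustration of that model structure only, with invented option names and costs; it solves the tiny instance by exhaustive search, whereas a real system would hand such a model to an LP/ILP solver:

```python
from itertools import product

# Hypothetical per-partition tuning options: (name, memory_cost, scan_cost).
# All numbers are invented for illustration.
OPTIONS = [
    ("uncompressed",     100, 1.0),
    ("dictionary",        40, 1.3),
    ("run-length",        25, 1.6),
    ("dictionary+index",  55, 0.4),
]

def best_configuration(n_partitions, access_freq, memory_budget):
    """Pick one option per partition, minimising frequency-weighted scan
    cost under a memory budget (exhaustive search for this toy instance)."""
    best, best_cost = None, float("inf")
    for assignment in product(range(len(OPTIONS)), repeat=n_partitions):
        mem = sum(OPTIONS[i][1] for i in assignment)
        if mem > memory_budget:          # budget constraint
            continue
        cost = sum(access_freq[p] * OPTIONS[i][2]
                   for p, i in enumerate(assignment))
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best, best_cost

# Hot first partition, colder tail; the budget forces a trade-off.
config, cost = best_configuration(3, access_freq=[10, 2, 1], memory_budget=150)
print([OPTIONS[i][0] for i in config])
```

Even this toy instance exhibits the coupling between decisions: compressing a cold partition more aggressively frees budget to index a hot one.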
(2) To optimize the storage layout of timestamps in columnar databases, we present a heuristic approach for the workload-driven combined selection of a data layout and compression scheme. By considering attribute decomposition strategies, we are able to apply application-specific optimizations that reduce the memory footprint and improve performance.
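As a toy illustration of attribute decomposition (an invented example, not the thesis implementation): a Unix-epoch timestamp column can be split losslessly into a "day" and a "second of day" sub-attribute, so that each part can be compressed independently, e.g. dictionary encoding for the few distinct days and bit-packing for the bounded time-of-day values:

```python
SECONDS_PER_DAY = 86_400

def decompose(timestamps):
    """Split Unix-epoch seconds into (day number, second of day)."""
    days = [t // SECONDS_PER_DAY for t in timestamps]
    secs = [t % SECONDS_PER_DAY for t in timestamps]
    return days, secs

def recompose(days, secs):
    """Inverse of decompose: reconstruct the original timestamps."""
    return [d * SECONDS_PER_DAY + s for d, s in zip(days, secs)]

ts = [1_700_000_000, 1_700_000_060, 1_700_086_400]
days, secs = decompose(ts)
assert recompose(days, secs) == ts   # lossless round trip
print(len(set(days)), "distinct days vs", len(set(ts)), "distinct timestamps")
```

The "day" column has far fewer distinct values than the raw column, which is exactly the kind of data characteristic a workload-driven layout selection can exploit.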
(3) We introduce an approach that leverages past trajectory data to improve the dispatch processes of transportation network companies. Based on location probabilities, we develop risk-averse dispatch strategies that reduce critical delays.
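One way a risk-averse rule can differ from a naive one (a hypothetical sketch, not the dispatch algorithm from the thesis): instead of assigning the driver with the lowest expected pickup time, assign the one with the lowest tail quantile of historical pickup times:

```python
def quantile(samples, q):
    """Empirical q-quantile (nearest-rank, toy implementation)."""
    s = sorted(samples)
    return s[min(int(q * len(s)), len(s) - 1)]

def dispatch(candidates, q=0.9):
    """candidates: {driver: historical pickup times in minutes}.
    Risk-averse choice: minimise the q-quantile, not the mean."""
    return min(candidates, key=lambda d: quantile(candidates[d], q))

history = {
    "driver_a": [2, 3, 4, 5, 30],   # fast on average, occasionally very late
    "driver_b": [8, 9, 9, 10, 10],  # slower on average, but reliable
}
print(dispatch(history))  # prints driver_b: the mean favours driver_a,
                          # but the 90th percentile avoids the 30-minute risk
```

A mean-based rule would pick driver_a here (mean 8.8 vs. 9.2 minutes), illustrating how a tail criterion trades a little average speed for far fewer critical delays.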
(4) Finally, we evaluate our database optimizations on a real-world dataset from a transportation network company. We demonstrate that workload-driven fine-grained optimizations reduce the memory footprint (by up to 71% at equal performance) or increase performance (by up to 90% at equal memory size) compared to established rule-based heuristics.
Individually, our contributions provide novel approaches to the current challenges in spatio-temporal data mining and database research. Combining them allows in-memory databases to store and process spatio-temporal data more cost-efficiently.
The present thesis looks at cultural conceptualisations in relation to DEATH in Irish English from a Cultural Linguistic perspective and puts a special focus on the diachronic development of these conceptualisations. For the study, a corpus consisting of 1,400 death notices from the Dublin-based national newspaper The Irish Times from 14 historical periods between 1859 and 2023 was compiled, resulting in a highly specialised 70,000-word corpus. First, the manual qualitative analysis of the death notices produced evidence for eight superordinate cultural conceptualisations surrounding DEATH, namely, in the order of their frequency THE DEAD ARE TO BE REMEMBERED OR REGRETTED, DEATH IS SOMETHING POSITIVE, DEATH IS REST, DEATH IS A JOURNEY, DYING IS THE BEGINNING OF ANOTHER LIFE, DEATH IS (NOT) A TABOO, DEATH IS GOD’S WILL, and DEATH IS THE END. These conceptualisations were derived from linguistic expressions in the death notices that have these conceptualisations as a cognitive basis. Second, the quantitative comparison of the individual conceptualisations detected diachronic variation, which is interconnected with historical and social developments in Ireland. The thesis, therefore, illustrates the applicability of Cultural Linguistics as an adequate method for diachronic studies interested in culturally determined developments of conceptualisations.
A new challenger seeks to enter the German party system: Bündnis Sahra Wagenknecht (BSW). With her new party, former Die Linke politician Sahra Wagenknecht combines a left-authoritarian profile (economically left-leaning, but culturally conservative) with anti-US, pro-Russia and anti-elitist stances. This article provides the first large-n academic study of the voter potential of this new party, using a quasi-representative sample (n = 6,000) drawn from a Voting Advice Application-like dataset that comes from a website designed to explore the BSW's positions. The results show that congruence with foreign policy positions and anti-elitism are strong predictors of the propensity to vote for the BSW. In contrast, social/welfare and immigration policies are less predictive of the party's potential. Among the different socio-demographic groups, the BSW has strong potential among baby boomers, the less educated and East Germans. Regarding party voters, the BSW is favoured by supporters of some minor parties like dieBasis, Freie Wähler and Die PARTEI, but also by non-voters. Among the established parties, the party's potential is high among Die Linke voters and, to a lesser extent, voters of the Social Democrats (SPD) and Alternative for Germany (AfD). Below-average potential is reported for the supporters of the Liberals (FDP) and Christian Democrats (CDU/CSU) and, most clearly, for Green and Volt voters.
Spring Issue
(2024)
This chapter focuses on the features of Article 1, paragraph 1 of the 1951 Convention. The article primarily determines the Convention's scope of application ratione personae while outlining the basis of the protection of refugees. Additionally, Article 1 addresses the concerns surrounding the inclusion, cessation, and exclusion of refugees. The chapter then tackles the historical development of the article by considering the instruments used prior to the 1951 Convention. It also notes that the Constitution of the International Refugee Organization (IRO) appears to contain an ambiguity as to how the refugee notion was perceived, so refugees only became the IRO Constitution's concern when they had valid objections to returning to their home country.
Article 22 1951 Convention
(2024)
This chapter covers the 1951 Convention's Article 22. It explains the provision's aim to grant refugees access to the contracting States' national educational systems. Moreover, Article 22 encompasses learning at all different levels of education in schools, universities, and other educational institutions. However, the provision does not address any issues related to the upbringing of children by their parents. The chapter mentions the relevancy of Article 22 when it comes to durable solutions for refugees in an effort to enable them to integrate into the host country's society. It also discusses the drafting history, declarations, and reservations of Article 22 and the instruments used prior to the 1951 Convention.
This chapter examines the extent of the 1951 Convention's Article 44 and the 1967 Protocol's Article IX. It starts by identifying the standard denunciation clause in Article 44 and Article IX. Multilateral treaties of unlimited duration allow States parties an unconditional right to withdraw. A denunciation releases the denouncing party from any further obligation to perform the treaty in relation to the other parties of the 1967 Protocol. The chapter clarifies that denunciation and withdrawal express the same legal concept, since each is a procedure initiated unilaterally by a State that wants to terminate its legal engagements under a treaty.
This chapter tackles the analysis and function of Article 43 of the 1951 Convention and Article VIII of the 1967 Protocol. It explains that a multilateral treaty enters into force when the necessary conditions are met, such as those set out in Article 24 of the Vienna Convention on the Law of Treaties (VCLT). The provision also regulates the 1951 Convention's entry into force upon States' ratification or accession. The chapter notes that the 1967 Protocol entered into force after Sweden deposited its instrument of accession. It elaborates on the specific details needed for ratification or accession prior to the entry into force.
This study pushes our understanding of research reliability by reproducing and replicating claims from 110 papers in leading economic and political science journals. The analysis involves computational reproducibility checks and robustness assessments. It reveals several patterns. First, we uncover a high rate of fully computationally reproducible results (over 85%). Second, excluding minor issues like missing packages or broken pathways, we uncover coding errors for about 25% of studies, with some studies containing multiple errors. Third, we test the robustness of the results to 5,511 re-analyses. We find a robustness reproducibility of about 70%. Robustness reproducibility rates are relatively higher for re-analyses that introduce new data and lower for re-analyses that change the sample or the definition of the dependent variable. Fourth, 52% of re-analysis effect size estimates are smaller than the original published estimates and the average statistical significance of a re-analysis is 77% of the original. Lastly, we rely on six teams of researchers working independently to answer eight additional research questions on the determinants of robustness reproducibility. Most teams find a negative relationship between replicators' experience and reproducibility, while finding no relationship between reproducibility and the provision of intermediate or even raw data combined with the necessary cleaning codes.
This chapter focuses on Article 46 of the 1951 Convention and Article X of the 1967 Protocol. It explains that the depositary of a treaty plays an essential procedural role in ensuring the smooth operation of a multilateral treaty. Article 46 enumerates the Secretary-General's functions as depositary, performed by the Treaty Section of the Office of Legal Affairs in the United Nations Secretariat. Similarly, Article X confirms and details the Secretary-General's designation and role as depositary of the 1967 Protocol. The chapter mentions that the enumeration of Article X's depositary notifications is exemplary rather than conclusive. It examines the depositary notifications of declarations, signatures, and reservations under Article 46 and Article X.
This chapter covers the function of the testimonium of the 1951 Convention and Article XI of the 1967 Protocol. It looks into the relevance of the 1951 Convention's testimonium, which primarily concerns the Convention's authentic languages, the regulation of deposition, and the delivery of certified true copies to all members of the UN and non-member States. Article XI, in turn, contains the standard procedures regulating the deposition of a copy of the 1967 Protocol in the Secretariat of the United Nations and foreseeing the transmission of certified copies thereof by the Secretary-General. The chapter mentions that both elements are not commonly indicated explicitly in modern treaties.
This chapter looks into the 1951 Convention's Article 39 and the 1967 Protocol's Article V. In 2000, the Secretary-General identified the 1951 Convention as belonging to a core group of 25 multilateral treaties representative of the key objectives of the UN and the spirit of its Charter. Additionally, the rules found in the Vienna Convention on the Law of Treaties (VCLT) apply to the 1951 Convention as a matter of customary international law. The 1967 Protocol, on the other hand, does not amend the 1951 Convention but binds its parties to observe its substantive provisions. The chapter notes that the 1967 Protocol constitutes an independent and complete international instrument that is open to accession not only by the States parties to the 1951 Convention.
Article 1 E 1951 Convention
(2024)
This chapter elaborates on the function of Article 1 E of the 1951 Convention, which was originally aimed at German refugees. It refers to a special group of people who qualify for refugee status but enjoy the rights of national citizens despite their lack of formal citizenship. The article's object and purpose revolve around excluding persons from refugee protection who do not need any international protection since they have the status of national citizens. Additionally, access to refugee status is excluded ipso facto because the individual may resort to effective protection similar to that of citizenship upon being admitted to the country of sojourn. The chapter explains how Article 1 E is an integral part of the balanced system of international refugee protection prescribed by the Convention.
Article 34 1951 Convention
(2024)
This chapter tackles the features and historical development of the 1951 Convention's Article 34. It explains the function of the provision, which primarily requests Contracting States to facilitate the assimilation and naturalization of refugees. Moreover, the provision forms the legal basis for local integration and naturalization as two of the traditional durable solutions to refugeehood. The soft obligation imposed by Article 34 primarily focuses on the long-term solution of naturalization. The chapter then elaborates on how the balance between local integration, naturalization, and voluntary return was disrupted by the fall of the Iron Curtain in 1989.
In 2022, there were 4.62 billion social media users worldwide. Social media generates a wealth of data which migration scholars have recently started to explore in pursuit of a variety of methodological and thematic research questions. Scholars use social media data to estimate migration stocks, forecast migration flows, or recruit migrants for targeted online surveys. Social media has also been used to understand how migrants get information about their planned journeys and destination countries, how they organize and mobilize online, how migration issues are politicized online, and how migrants integrate culturally into destination countries by sharing common interests. While social media data drives innovative research, it also poses severe challenges regarding data privacy, data protection, and methodological questions relating to external validity. In this chapter, I briefly introduce various strands of migration research using social media data and discuss the advantages, disadvantages, and opportunities.
Over the last decades, therapeutic proteins have risen to great significance in the pharmaceutical industry. As non-human proteins introduced into the human body provoke a distinct immune reaction that triggers their rapid clearance, most newly approved protein pharmaceuticals are shielded by modification with synthetic polymers to significantly improve their blood circulation time. All clinically approved protein-polymer conjugates of this kind contain polyethylene glycol (PEG), and its conjugation is denoted as PEGylation. However, many patients develop anti-PEG antibodies, which cause rapid clearance of PEGylated molecules upon repeated administration. Therefore, the search for alternative polymers that can replace PEG in therapeutic applications has become important. In addition, although the blood circulation time is significantly prolonged, the therapeutic activity of some conjugates is decreased compared to the unmodified protein. The reason is that these conjugates are formed by the traditional conjugation method, which addresses the protein's lysine side chains. As proteins have many solvent-exposed lysines, this results in a somewhat uncontrolled attachment of polymer chains, leading to a mixture of regioisomers, some of which eventually affect the therapeutic performance.
This thesis investigates a novel method for ligating macromolecules in a site-specific manner, using enzymatic catalysis. Sortase A is used as the enzyme: It is a well-studied transpeptidase which is able to catalyze the intermolecular ligation of two peptides. This process is commonly referred to as sortase-mediated ligation (SML). SML constitutes an equilibrium reaction, which limits product yield. Two previously reported methods to overcome this major limitation were tested with polymers without using an excessive amount of one reactant.
Specific C- or N-terminal peptide sequences (recognition sequence and nucleophile) as part of the protein are required for SML. The complementary peptide was located at the polymer chain end. Grafting-to was used to avoid damaging the protein during polymerization. To be able to investigate all possible combinations (protein-recognition sequence and nucleophile-protein as well as polymer-recognition sequence and nucleophile-polymer) all necessary building blocks were synthesized. Polymerization via reversible deactivation radical polymerization (RDRP) was used to achieve a narrow molecular weight distribution of the polymers, which is required for therapeutic use.
The synthesis of the polymeric building blocks was started by synthesizing the peptide via automated solid-phase peptide synthesis (SPPS) to avoid post-polymerization attachment and to enable easy adaptation of changes in the peptide sequence. To account for the different functionalities (free N- or C-terminus) required for SML, different linker molecules between resin and peptide were used.
To facilitate purification, the chain transfer agent (CTA) for reversible addition-fragmentation chain-transfer (RAFT) polymerization was coupled to the resin-immobilized recognition sequence peptide. The acrylamide and acrylate-based monomers used in this thesis were chosen for their potential to replace PEG.
Following that, surface-initiated (SI) ATRP and RAFT polymerization were attempted, but failed. As a result, the newly developed method of xanthate-supported photo-iniferter (XPI) RAFT polymerization in solution was used successfully to obtain a library of various peptide-polymer conjugates with different chain lengths and narrow molar mass distributions.
After peptide side chain deprotection, these constructs were used first to ligate two polymers via SML, which was successful but revealed a limit in polymer chain length (max. 100 repeat units). When utilizing equimolar amounts of reactants, the use of Ni2+ ions in combination with a histidine after the recognition sequence to remove the cleaved peptide from the equilibrium maximized product formation with conversions of up to 70 %.
Finally, a model protein and a nanobody with promising properties for therapeutical use were biotechnologically modified to contain the peptide sequences required for SML. Using the model protein for C- or N-terminal SML with various polymers did not result in protein-polymer conjugates. The reason is most likely the lack of accessibility of the protein termini to the enzyme. Using the nanobody for C-terminal SML, on the other hand, was successful. However, a similar polymer chain length limit was observed as in polymer-polymer SML. Furthermore, in case of the synthesis of protein-polymer conjugates, it was more effective to shift the SML equilibrium by using an excess of polymer than by employing the Ni2+ ion strategy.
Overall, the experimental data from this work provide a good foundation for future research in this promising field; however, more research is required to fully understand the potential and limitations of using SML for protein-polymer conjugate synthesis. In the future, the method explored in this dissertation could prove to be a very versatile pathway to obtain therapeutic protein-polymer conjugates that exhibit high activities and long blood circulation times.
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components that might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of research projects executed in 2019. Selected projects presented their results on April 9 and November 12, 2019 at the Future SOC Lab Day events.
This thesis explores word order variability in verb-final languages. Verb-final languages have a reputation for a high degree of word order variability. However, that reputation amounts to an urban myth due to a lack of systematic investigation. This thesis provides such a systematic investigation by presenting original data from several verb-final languages with a focus on four Uralic ones: Estonian, Udmurt, Meadow Mari, and South Sámi. As with every urban myth, there is a kernel of truth: many unrelated verb-final languages share a particular kind of word order variability, A-scrambling, in which the fronted elements do not receive a special information-structural role, such as topic or contrastive focus. That word order variability goes hand in hand with placing focussed phrases further to the right, in the position directly in front of the verb. Variations on this pattern are exemplified by Uyghur, Standard Dargwa, Eastern Armenian, and three of the Uralic languages, Estonian, Udmurt, and Meadow Mari. So much for the kernel of truth; the fourth Uralic language, South Sámi, is comparably rigid and does not feature this particular kind of word order variability. Further comparably rigid, non-scrambling verb-final languages are Dutch, Afrikaans, Amharic, and Korean. In contrast to scrambling languages, non-scrambling languages feature obligatory subject movement, causing word order rigidity next to other typical EPP effects.
The EPP is a defining feature of South Sámi clause structure in general. South Sámi exhibits a one-of-a-kind alternation between SOV and SAuxOV order that is captured by the assumption of the EPP and obligatory movement of auxiliaries but not lexical verbs. Other languages that allow for SAuxOV order either lack an alternation because the auxiliary is obligatorily present (Macro-Sudan SAuxOVX languages), or feature an alternation between SVO and SAuxOV (Kru languages; V2 with underlying OV as a fringe case). In the SVO–SAuxOV languages, both auxiliaries and lexical verbs move. Hence, South Sámi shows that the textbook difference between the VO languages English and French, whether verb movement is restricted to auxiliaries, also extends to OV languages. SAuxOV languages are an outlier among OV languages in general but are united by the presence of the EPP.
Word order variability is not restricted to the preverbal field in verb-final languages, as most of them feature postverbal elements (PVE). PVE challenge the notion of verb-finality in a language. Strictly verb-final languages without any clause-internal PVE are rare. This thesis charts the first structural and descriptive typology of PVE. Verb-final languages vary in the categories they allow as PVE. Allowing for non-oblique PVE is a pivotal threshold: when non-oblique PVE are allowed, PVE can be used for information-structural effects. Many areally and genetically unrelated languages only allow for given PVE but differ in whether the PVE are contrastive. In those languages, verb-finality is not at stake since verb-medial orders are marked. In contrast, the Uralic languages Estonian and Udmurt allow for any PVE, including information focus. Verb-medial orders can be used in the same contexts as verb-final orders without semantic and pragmatic differences. As such, verb placement is subject to actual free variation. The underlying verb-finality of Estonian and Udmurt can only be inferred from a range of diagnostics indicating optional verb movement in both languages. In general, it is not possible to account for PVE with a uniform analysis: rightwards merge, leftward verb movement, and rightwards phrasal movement are required to capture the cross- and intralinguistic variation.
Knowing that a language is verb-final does not allow one to draw conclusions about word order variability in that language. There are patterns of homogeneity, such as the word order variability driven by directly preverbal focus and the givenness of postverbal elements, but those are not brought about by verb-finality alone. Preverbal word order variability is restricted by the more abstract property of obligatory subject movement, whereas the determinant of postverbal word order variability has to be determined in the future.
Among the different meanings carried by numerical information, cardinality is fundamental for survival and for the development of basic as well as of higher numerical skills. Importantly, the human brain inherits from evolution a predisposition to map cardinality onto space, as revealed by the presence of spatial-numerical associations (SNAs) in humans and animals. Here, the mapping of cardinal information onto physical space is addressed as a hallmark signature characterizing numerical cognition.
According to traditional approaches, cognition is defined as complex forms of internal information processing, taking place in the brain (cognitive processor). On the contrary, embodied cognition approaches define cognition as functionally linked to perception and action, in the continuous interaction between a biological body and its physical and sociocultural environment.
Embracing the principles of the embodied cognition perspective, I conducted four novel studies designed to unveil how SNAs originate, develop, and adapt, depending on characteristics of the organism, the context, and their interaction. I structured my doctoral thesis in three levels. At the grounded level (Study 1), I unfold the biological foundations underlying the tendency to map cardinal information across space; at the embodied level (Study 2), I reveal the impact of atypical motor development on the construction of SNAs; at the situated level (Study 3), I document the joint influence of visuospatial attention and task properties on SNAs. Furthermore, I experimentally investigate the presence of associations between physical and numerical distance, another numerical property fundamental for the development of efficient mathematical minds (Study 4).
In Study 1, I present the Brain’s Asymmetric Frequency Tuning hypothesis that relies on hemispheric asymmetries for processing spatial frequencies, a low-level visual feature that the (in)vertebrate brain extracts from any visual scene to create a coherent percept of the world. Computational analyses of the power spectra of the original stimuli used to document the presence of SNAs in human newborns and animals, support the brain’s asymmetric frequency tuning as a theoretical account and as an evolutionarily inherited mechanism scaffolding the universal and innate tendency to represent cardinality across horizontal space.
In Study 2, I explore SNAs in children with rare genetic neuromuscular diseases: spinal muscular atrophy (SMA) and Duchenne muscular dystrophy (DMD). SMA children never accomplish independent motoric exploration of their environment; in contrast, DMD children do explore but later lose this ability. The different SNAs reported by the two groups support the critical role of early sensorimotor experiences in the spatial representation of cardinality.
In Study 3, I directly compare the effects of overt attentional orientation during explicit and implicit processing of numerical magnitude. First, the different effects of attentional orienting based on the type of assessment support different mechanisms underlying SNAs during explicit and implicit assessment of numerical magnitude. Secondly, the impact of vertical shifts of attention on the processing of numerical distance sheds light on the correspondence between numerical distance and peri-personal distance.
In Study 4, I document the presence of different SNAs, driven by numerical magnitude and numerical distance, by employing different response mappings (left vs. right and near vs. distant).
In the field of numerical cognition, the four studies included in the present thesis contribute to unveiling how the characteristics of the organism and the environment influence the emergence, the development, and the flexibility of our attitude to represent cardinal information across space, thus supporting the predictions of the embodied cognition approach. Furthermore, they inform a taxonomy of body-centred factors (biological properties of the brain and sensorimotor system) modulating the spatial representation of cardinality throughout the course of life, at the grounded, embodied, and situated levels.
While awareness of the different variables influencing SNAs over the course of life is important, it is equally important to consider the organism as a whole in its sensorimotor interaction with the world. Inspired by my doctoral research, here I propose a holistic perspective that considers the role of evolution, embodiment, and environment in the association of cardinal information with directional space. The new perspective advances the current approaches to SNAs, both at the conceptual and at the methodological levels.
Unveiling how the mental representation of cardinality emerges, develops, and adapts is necessary to shape efficient mathematical minds and achieve economic productivity, technological progress, and a higher quality of life.
Despite the high hopes associated with public sector digitalization, especially in times of crisis, it does not yet live up to its potential. Both the negotiation and the implementation of digitalization policy present a challenge for all levels of government, requiring extensive coordination efforts. In general, there are conflicting views on whether more centralized or decentralized policy processes are more effective for coordination, a tension further exacerbated in the context of digitalization policy within multilevel systems, where the imperative of standardization collides with the decentralization forces inherent in federalism.
Based on the analysis of expert interviews (n = 29), this chapter examines how digitalization policy is located and negotiated within German federal intergovernmental relations, and how this relates to local policy implementation. Focusing on the decentralized German tax administration as a case study, the analysis reveals a shift from a conflicted to a multi-layered policy process, underpinned by a mechanism of “concentration without centralization.” Strategic and operational competencies are bundled in an institutionalized and legally regulated network for digitalization in order to achieve the necessary standardization of digital infrastructure. Furthermore, the research emphasizes the influence of intergovernmental relations on local implementation and the associated challenges and opportunities.
Consumer behaviour changes and strategic management decisions are driving adaptations in manufacturing routines. Based on the theory of situational strength, we investigated how contextual and person-related factors influence workers’ adaptation in a two-worker position routine. Contextual factors, such as retrieval cues (Study 1), time pressure (Study 2), and convenience (Study 3), were varied. Person-related factors included retentivity, general and routine-specific self-efficacy, and perceived adaptation costs. Dependent variables included various error types and production time before and after adaptation. In each study, 148 participants were trained in a production routine at t1 and executed an adapted routine at t2, one week later. A repeated-measures ANOVA on performance at t1 and t2, and a MANOVA on performance at t2, revealed that production time increased for all groups at t2. For participants in Studies 1 and 2, error rates remained consistent. Retentivity significantly affected errors at both t1 and t2, emphasising that routine changes in a ‘running business’ take time, regardless of contextual factors. Workers with lower retentivity may require additional support.
This systematic literature review highlights the gap in demand forecasting in the manufacturing sector, which is challenged by complex supply chains and rapid market change. Traditional methods fall short in this dynamic environment, highlighting the need for an approach that combines advanced forecasting techniques, high-quality data, and industry-specific insights. Our research contributes by evaluating advanced forecasting methods and the effectiveness of AI and data strategies in improving accuracy. Our analysis reveals a shift towards machine learning and deep learning and highlights the untapped potential of external data sources. Key findings provide both researchers and practitioners with guidance on effective forecasting strategies and key data types, and offer an integrated framework for improving forecasting accuracy and strategic decision-making in manufacturing. This work fills a critical research gap and provides stakeholders with actionable insights for managing the complexity of modern manufacturing, representing a significant advance in forecasting practice.
Without fear or favour
(2024)
Learning in virtual, immersive environments must be well designed to foster learning instead of overwhelming and distracting the learner. So far, design guidelines based on cognitive load theory recommend keeping learning instructions clean and simple in order to reduce the learner's extraneous cognitive load and thereby foster learning performance. The advantages of immersive learning, such as multiple options for realistic simulation, movement, and feedback, raise questions about the tension between increasing excitement and flow through highly realistic environments on the one hand and reducing cognitive load through clean and simple surroundings on the other. This study aims to gain insights into learners' cognitive responses during the learning process by continuously assessing cognitive load through eye tracking. The experiment compares two distinct immersive learning environments and varying methods of content presentation.
We study the effect of energy and transport policies on pollution in two developing-country cities. We use a quantitative equilibrium model with choice of housing, energy use, residential location, transport mode, and energy technology. Pollution comes from commuting and residential energy use. The model parameters are calibrated to replicate key variables for two developing-country cities, Maputo, Mozambique, and Yogyakarta, Indonesia. In the counterfactual simulations, we study how various transport and energy policies affect equilibrium pollution. Policies may induce rebound effects by increasing residential energy use or prompting switches to high-emission modes or locations. In general, these rebound effects tend to be largest for subsidies to public transport or to modern residential energy technology.
Migrant integration is a prime example of intergovernmental coordination and multilevel governance: first, because no level of government can carry out this task alone, and second, because its cross-cutting nature often leads to fragmented institutional structures that must be overcome. Within the research strand of intergovernmental relations (IGR), the focus has been on executive actors and governmental decision-makers, resulting in an underexposure of the role of public administration, known as inter-administrative relations (IAR). Against this backdrop, we aim to remedy some of the deficits in IGR research by (1) adopting an explicit IAR perspective that systematically addresses the role of local governments; (2) including a comparative dimension in IAR research that accounts for different administrative ‘starting conditions’ in European countries; and (3) using the policy area of migrant integration as a case in point to empirically investigate developments of institutional convergence and divergence in IAR patterns. It is argued that the coordination of migrant integration in the three countries examined has moved towards more intergovernmental coordination, on the one hand, and that the role of municipalities in this context has been enhanced, varying degrees of (de-)centralization notwithstanding. While certain convergent patterns of intergovernmental coordination have become apparent during the migration crisis, historical path dependencies and administrative cultures still appear to be factors that influence institutional development.
Urban climate strategies have become central tools for steering climate policy in cities. Local policymakers must coordinate a wide range of actors, among them sub-municipal administrative units and neighbouring administrations, in order to ensure legitimate, socially accepted and effective policy. The study examines, from a comparative perspective, how intergovernmental relations (IGR) play out in the formulation and implementation of climate strategies in the metropolitan areas of Berlin and Paris. Embedded in different institutional contexts, both cities followed a trajectory initiated by relatively centralized strategy formulation with an ongoing shift towards more decentralized and coordinated intergovernmental approaches with their respective district administrations. In terms of horizontal IGR, Berlin took a decoupled approach with limited coordination with the state of Brandenburg, whereas Paris was much more closely integrated with its surrounding areas through the inter-municipal metropolis of Greater Paris. Institutional capacity, multilevel coordination and participation demands are identified as three challenges for the existing IGR structures. Addressing these challenges places significant strains on local administrative capacity. The findings highlight the limitations of centralized approaches to IGR at the local level and the importance of aligning the distribution of functional responsibilities with the rights of consultation and participation in climate policy formulation processes.
This open access book assesses the consequences of contemporary economic and political crises for intergovernmental relations in Europe. Focusing on the crises arising from the Covid-19 pandemic, climate change, surges in migration, and the resurgence of regional nationalist movements, it explores the shifting power balances within intergovernmental relations’ systems. The book takes a comparative analytical perspective on how intergovernmental relations are changing across Europe, and how central governments have responded to coordination challenges as recent crises have disrupted established service delivery chains and their underpinning political and bureaucratic arrangements. It also examines the relationship between recent crises and the sub-national resurgence of territorial politics in many European countries. The book will appeal to those with interests in public administration, sub-national governance and European politics.
In 2015, German Chancellor Angela Merkel decided to allow over a million asylum seekers to cross the border into Germany. One key concern was that her decision would signal an open-door policy to aspiring migrants worldwide, thus further increasing migration to Germany and making the country permanently more attractive to irregular and humanitarian migrants. This ‘pull-effect’ hypothesis has been a mainstay of policy discussions ever since. With the continued global rise in forced displacement, not appearing welcoming to migrants has become a guiding principle of asylum policy in many large receiving countries. In this article, we exploit the unique case study that Merkel's 2015 decision provides to answer the fundamental question of whether welcoming migration policies have sustained effects on migration towards destination countries. We analyze an extensive range of data on migration inflows, migration aspirations, and online search interest between 2000 and 2020. The results reject the ‘pull-effect’ hypothesis while reaffirming states’ capacity to adapt to changing contexts and regulate migration.
The field of healthcare is characterized by constant innovation, with gender-specific medicine emerging as a new subfield that addresses sex and gender disparities in clinical manifestations, outcomes, treatment, and prevention of disease. Despite its importance, the adoption of gender-specific medicine remains understudied, posing potential risks to patient outcomes due to a lack of awareness of the topic. Building on the Innovation Decision Process Theory, this study examines the spread of information about gender-specific medicine in online networks. The study applies social network analysis to a Twitter dataset reflecting online discussions about the topic to gain insights into its adoption by health professionals and patients online. Results show that the network has a community structure with limited information exchange between sub-communities and that mainly medical experts dominate the discussion. The findings suggest that the adoption of gender-specific medicine might be in its early stages, focused on knowledge exchange. Understanding the diffusion of gender-specific medicine among medical professionals and patients may facilitate its adoption and ultimately improve health outcomes.
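The community structure and limited exchange between sub-communities described above can be illustrated on a toy network. This is a minimal sketch using networkx's greedy modularity heuristic with invented node labels, not the study's actual Twitter dataset or pipeline:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy interaction network: a dense "expert" cluster and a "patient" cluster
# joined by a single bridge edge (all labels are hypothetical).
G = nx.Graph()
G.add_edges_from([("e1", "e2"), ("e1", "e3"), ("e2", "e3"),
                  ("e2", "e4"), ("e3", "e4")])               # experts
G.add_edges_from([("p1", "p2"), ("p1", "p3"), ("p2", "p3")])  # patients
G.add_edge("e4", "p1")  # sparse information exchange between sub-communities

# Detect communities by greedy modularity maximization.
communities = greedy_modularity_communities(G)
q = modularity(G, communities)  # values above 0 indicate community structure
```

On a real retweet/mention graph the same calls apply; the sparse bridge edge here mimics the limited information flow between expert and patient sub-communities reported in the study.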
In the aftermath of the Shoah and the ostensible triumph of nationalism, it became common in historiography to relegate Jews to the position of the “eternal other” in a series of binaries: Christian/Jewish, Gentile/Jewish, European/Jewish, non-Jewish/Jewish, and so forth. For the longest time, these binaries remained characteristic of Jewish historiography, including in the Central European context. Assuming instead, as the more recent approaches in Habsburg studies do, that pluriculturalism was the basis of common experience in formerly Habsburg Central Europe, and accepting that no single “majority culture” existed, but rather hegemonies were imposed in certain contexts, then the often used binaries are misleading and conceal the complex and sometimes even paradoxical conditions that shaped Jewish life in the region before the Shoah.
The very complexity of Habsburg Central Europe both in synchronic and diachronic perspective precludes any singular historical narrative of “Habsburg Jewry,” and it is not the intention of this volume to offer an overview of “Habsburg Jewish history.” The selected articles in this volume illustrate instead how important it is to reevaluate categories, deconstruct historical narratives, and reconceptualize implemented approaches in specific geographic, temporal, and cultural contexts in order to gain a better understanding of the complex and pluricultural history of the Habsburg Empire and the region as a whole.
Assessing the impact of global change on hydrological systems is one of the greatest hydrological challenges of our time. Changes in land cover, land use, and climate have an impact on water quantity, quality, and temporal availability. There is a widespread consensus that, given the far-reaching effects of global change, hydrological systems can no longer be viewed as static in their structure; instead, they must be regarded as entire ecosystems, wherein hydrological processes interact and coevolve with biological, geomorphological, and pedological processes. To accurately predict the hydrological response under the impact of global change, it is essential to understand this complex coevolution. The knowledge of how hydrological processes, in particular the formation of subsurface (preferential) flow paths, evolve within this coevolution and how they feed back to the other processes is still very limited due to a lack of observational data.
At the hillslope scale, this intertwined system of interactions is known as the hillslope feedback cycle. This thesis aims to enhance our understanding of the hillslope feedback cycle by studying the coevolution of hillslope structure and hillslope hydrological response. Using chronosequences of moraines in two glacial forefields developed from siliceous and calcareous glacial till, the four studies shed light on the complex coevolution of hydrological, biological, and structural hillslope properties, as well as subsurface hydrological flow paths, over an evolutionary period of 10 millennia in these two contrasting geologies. The findings indicate that the contrasting properties of siliceous and calcareous parent materials lead to variations in soil structure, permeability, and water storage. As a result, different plant species and vegetation types are favored on siliceous versus calcareous parent material, leading to diverse ecosystems with distinct hydrological dynamics. The siliceous parent material was found to show a higher activity level in driving the coevolution. The soil pH resulting from parent material weathering emerges as a crucial factor, influencing vegetation development, soil formation, and consequently, hydrology. The acidic weathering of the siliceous parent material favored the accumulation of organic matter, increasing the soils’ water storage capacity and attracting acid-loving shrubs, which further promoted organic matter accumulation and ultimately led to podsolization after 10 000 years. Tracer experiments revealed that the subsurface flow path evolution was influenced by soil and vegetation development, and vice versa. Subsurface flow paths changed from vertical, heterogeneous matrix flow to finger-like flow paths over a few hundred years, evolving into macropore flow, water storage, and lateral subsurface flow after several thousand years. The changes in flow paths among younger age classes were driven by weathering processes altering soil structure, as well as by vegetation development and root activity. In the older age class, the transition to more water storage and lateral flow was attributed to substantial organic matter accumulation and ongoing podsolization. The rapid vertical water transport in the finger-like flow paths, along with the conductive sandy material, contributed to podsolization and thus to the shift in the hillslope hydrological response.
In contrast, the calcareous site possesses a high pH buffering capacity, creating a neutral to basic environment with relatively low accumulation of dead organic matter, resulting in a lower water storage capacity and the establishment of predominantly grass vegetation. The coevolution was found to be less dynamic over the millennia. Similar to the siliceous site, significant changes in subsurface flow paths occurred between the young age classes. However, unlike the siliceous site, the subsurface flow paths at the calcareous site only altered in shape and not in direction. Tracer experiments showed that flow paths changed from vertical, heterogeneous matrix flow to vertical, finger-like flow paths after a few hundred to thousands of years, which was driven by root activities and weathering processes. Despite having a finer soil texture, water storage at the calcareous site was significantly lower than at the siliceous site, and water transport remained primarily rapid and vertical, contributing to the flourishing of grass vegetation.
The studies elucidated that changes in flow paths are predominantly shaped by the characteristics of the parent material and its weathering products, along with their complex interactions with initial water flow paths and vegetation development. Time, on the other hand, was not found to be a primary factor in describing the evolution of the hydrological response. This thesis makes a valuable contribution to closing the gap in the observations of the coevolution of hydrological processes within the hillslope feedback cycle, which is important to improve predictions of hydrological processes in changing landscapes. Furthermore, it emphasizes the importance of interdisciplinary studies in addressing the hydrological challenges arising from global change.
The plant cell wall plays several crucial roles during plant development, with its integrity acting as a key signalling component for growth regulation during biotic and abiotic stresses. Cellulose microfibrils, the principal load-bearing components, are the major constituent of the primary cell wall; their synthesis is mediated by microtubule-associated CELLULOSE SYNTHASE (CESA) COMPLEXES (CSC). Previous studies have shown that the CSC-interacting COMPANION OF CELLULOSE SYNTHASE (CC) proteins facilitate sustained cellulose synthesis during salt stress by promoting repolymerization of cortical microtubules. However, our understanding of cellulose synthesis during salt stress remains incomplete.
In this study, a pull-down of the CC1 protein led to the identification of a novel interactor, termed LEA-like. Phylogenetic analysis revealed that LEA-like belongs to the LATE EMBRYOGENESIS ABUNDANT (LEA) protein family, specifically to the LEA_2 subgroup, and shows a close relationship with the CC proteins. Roots of double mutants of lea-like and its closest homolog emb3135 exhibited hypersensitivity when grown on cellulose synthesis inhibitors. Further analysis of higher-order mutants of lea-like, emb3135, and cesa6 demonstrated a genetic interaction between them, indicating a significant role in cellulose synthesis.
Live-cell imaging revealed that both LEA-like and EMB3135 migrated with the CSC at the plasma membrane along microtubule tracks, both under control conditions and after treatment with oryzalin, which destabilizes microtubules, suggesting a tight interaction. Investigation of fluorescently labeled lines of different domains of the LEA-like protein revealed that the N-terminal cytosolic domain of LEA-like colocalizes with microtubules, suggesting a physical association between the two.
Considering the established role of LEA proteins in abiotic stress tolerance, we performed phenotypic analyses of the mutants under various stresses. Growth of double mutants of lea-like and emb3135 on NaCl-containing media resulted in swelling of root cells, indicating a putative role in salt stress tolerance. Supporting this, the quadruple mutant, lacking the LEA-like, EMB3135, CC1, and CC2 proteins, exhibited a severe root growth defect on NaCl media compared to control conditions. Live-cell imaging revealed that under salt stress the LEA-like protein forms aggregates in the plasma membrane.
In conclusion, this study has unveiled two novel interactors of the CSC that act together with the CC proteins to regulate plant growth in response to salt stress, providing new insights into the intricate regulation of cellulose synthesis, particularly under such stress conditions.
Hardy inequalities on graphs
(2024)
The dissertation deals with a central inequality of non-linear potential theory, the Hardy inequality. It states that the non-linear energy functional can be estimated from below by the p-th power of a weighted p-norm, p > 1. The energy functional consists of a divergence part and an arbitrary potential part. Locally summable infinite graphs were chosen as the underlying space. Previous publications on Hardy inequalities on graphs have mainly considered the special case p = 2, or locally finite graphs without a potential part.
Two fundamental questions now arise quite naturally: For which graphs is there a Hardy inequality at all? And, if it exists, is there a way to obtain an optimal weight? Answers to these questions are given in Theorem 10.1 and Theorem 12.1. Theorem 10.1 gives a number of characterizations; among others, there is a Hardy inequality on a graph if and only if there is a Green's function. Theorem 12.1 gives an explicit formula to compute optimal Hardy weights for locally finite graphs under some additional technical assumptions. Examples show that Green's functions are good candidates to be used in the formula.
Emphasis is also placed on illustrating the theory with examples. The focus is on natural numbers, Euclidean lattices, trees and star graphs. Finally, a non-linear version of the Heisenberg uncertainty principle and a Rellich inequality are derived from the Hardy inequality.
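In standard notation for a graph over a set X with edge weights b and potential c, the inequality described above takes the following general form (a sketch only; the thesis's exact normalisation and assumptions may differ):

```latex
% Hardy inequality on a graph (X, b) with potential c, for p > 1:
% the p-energy functional dominates a weighted p-norm for all
% finitely supported functions \varphi on X, with Hardy weight w.
\frac{1}{2}\sum_{x,y\in X} b(x,y)\,\lvert \varphi(x)-\varphi(y)\rvert^{p}
  \;+\; \sum_{x\in X} c(x)\,\lvert \varphi(x)\rvert^{p}
  \;\ge\; \sum_{x\in X} w(x)\,\lvert \varphi(x)\rvert^{p},
\qquad \varphi \in C_{c}(X).
```

The first two sums are the divergence part and the potential part of the energy functional; an optimal Hardy weight is a pointwise largest admissible w.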
Laser-induced switching offers an attractive possibility for manipulating small magnetic domains on ultrashort time scales for prospective memory and logic devices. Moreover, optical control of magnetization without high applied magnetic fields allows magnetic domains to be manipulated individually and locally, without expensive heat dissipation. One of the major challenges in developing novel optically controlled magnetic memory and logic devices is the reliable formation and annihilation of non-volatile magnetic domains that can serve as memory bits under ambient conditions. Magnetic skyrmions, topologically non-trivial spin textures, have been studied intensively since their discovery due to their stability and scalability in potential spintronic devices. However, skyrmion formation and, especially, annihilation processes are still not completely understood, and further investigation of these mechanisms is needed. The aim of this thesis is to contribute to a better understanding of the physical processes behind the optical control of magnetism in thin films, with the goal of optimizing material parameters and methods for their potential use in next-generation memory and logic devices.
The first part of the thesis is dedicated to the investigation of all-optical helicity-dependent switching (AO-HDS) as a method for magnetization manipulation. AO-HDS in Co/Pt multilayers and CoFeB alloys, with and without the presence of the Dzyaloshinskii-Moriya interaction (DMI), a type of exchange interaction, has been investigated by magnetic imaging using photo-emission electron microscopy (PEEM) in combination with X-ray magnetic circular dichroism (XMCD). The results show that, in a narrow range of laser fluences, circularly polarized laser light induces a drag on domain walls. This enables a local deterministic transformation of the magnetic domain pattern from stripes to bubbles in out-of-plane magnetized Co/Pt multilayers, controlled solely by the helicity of ultrashort laser pulses. The temperature and characteristic fields at which the stripe-bubble transformation occurs have been calculated using the theory of isolated magnetic bubbles, with the experimentally determined average size of the stripe domains and the magnetic layer thickness as parameters.
The second part of the work aims at the purely optical formation and annihilation of magnetic skyrmions by a single laser pulse. The presence of a skyrmion phase in the investigated CoFeB alloys was first confirmed using a Kerr microscope. The helicity-dependent skyrmion manipulation was then studied using AO-HDS at different laser fluences. It was found that the formation or annihilation of individual skyrmions using AO-HDS is possible, but not always reliable, as fluctuations in the laser fluence or position can easily overwrite the helicity-dependent effect of AO-HDS. However, the experimental results and magnetic simulations showed that the threshold laser fluences for the formation and annihilation of skyrmions are different: a higher fluence is required for skyrmion formation, and existing skyrmions can be annihilated by pulses with a slightly lower fluence. This provides a further option for controlling the formation and annihilation of skyrmions via the laser fluence. Micromagnetic simulations provide additional insights into the formation and annihilation mechanisms.
The ability to manipulate the magnetic state of individual skyrmions is of fundamental importance for magnetic data storage technologies. Our results show for the first time that the optical formation and annihilation of skyrmions is possible without changing the external field. These results enable further investigations to optimise the magnetic layer so as to maximise the energy gap between the formation and annihilation barriers. As a result, unwanted switching due to small laser fluctuations can be avoided and fully deterministic optical switching can be achieved.
Cities and other human settlements are major contributors to climate change and are highly vulnerable to its impacts. They are also uniquely positioned to reduce greenhouse gas emissions and lead adaptation efforts. These compound challenges and opportunities require a comprehensive perspective on the public policy of human settlements. Drawing on core literature that has driven debate around cities and climate over recent decades, we put forward a set of boundary objects that can be applied to connect the knowledge of epistemic communities and support an integrated urbanism. We then use these boundary objects to develop the Goals-Intervention-Stakeholder-Enablers (GISE) framework for a public policy of human settlements that is both place-specific and provides insights and tools useful for climate action in cities and other human settlements worldwide. Using examples from Berlin, we apply this framework to show that climate mitigation and adaptation, public health, and well-being goals are closely linked and mutually supportive when a comprehensive approach to urban public policy is applied.
The global drylands cover nearly half of the terrestrial surface and are home to more than two billion people. In many drylands, ongoing land-use change transforms near-natural savanna vegetation into agricultural land to increase food production. In Southern Africa, these heterogeneous savanna ecosystems are also recognized as habitats of many protected animal species, such as elephants, lions, and large herds of diverse herbivores, which are of great value for the tourism industry. Here, subsistence farmers and livestock-herding communities often live in close proximity to nature conservation areas. Although these land-use transformations differ in the future they aspire to, both processes, nature conservation with large herbivores and agricultural intensification, have in common that they change the vegetation structure of savanna ecosystems, usually leading to the destruction of trees, shrubs, and the woody biomass they consist of.
Such changes in woody vegetation cover and biomass are often regarded as forms of land degradation and forest loss. Global forest conservation approaches and international programs aim to stop degradation processes, partly to keep the carbon bound within wood from volatilizing into the Earth's atmosphere. In the search for mitigation options against global climate change, savannas are increasingly discussed as potential carbon sinks. Savannas, however, are not forests, in that they are naturally shaped by and adapted to disturbances such as wildfires and herbivory. Unlike in forests, disturbances are necessary for stable, functioning savanna ecosystems and prevent these ecosystems from forming closed forest stands. Their consequently lower levels of carbon storage in woody vegetation have long been the reason savannas were overlooked as a potential carbon sink, but recently the question has been raised whether carbon sequestration programs (such as REDD+) could also be applied to savanna ecosystems. However, heterogeneous vegetation structure and chronic disturbances hamper the quantification of carbon stocks in savannas, and current procedures of carbon storage estimation entail high uncertainties due to methodological obstacles. It is therefore challenging to assess how future land-use changes such as agricultural intensification or increasing wildlife densities will affect the carbon storage balance of African drylands.
In this thesis, I address the research gap of accurately quantifying carbon storage in vegetation and soils of disturbance-prone savanna ecosystems. I further analyse relevant drivers for both ecosystem compartments and their implications for future carbon storage under land-use change. Moreover, I show that in savannas different carbon storage pools vary in their persistence to disturbance, causing carbon bound in shrub vegetation to be most likely to experience severe losses under land-use change while soil organic carbon stored in subsoils is least likely to be impacted by land-use change in the future.
I start by summarizing conventional approaches to carbon storage assessment and where, and for which reasons, they fail to accurately estimate savanna ecosystem carbon storage. Furthermore, I outline which future-making processes drive land-use change in Southern Africa along two pathways of land-use transformation, and how these are likely to influence carbon storage. In the following chapters, I propose a new method of carbon storage estimation that is adapted to the specific conditions of disturbance-prone ecosystems and demonstrate the advantages of this approach in relation to existing forestry methods. Specifically, I highlight sources of previous over- and underestimation of savanna carbon stocks that the proposed methodology resolves. I then apply the new method to analyse the impacts of land-use change on carbon storage in woody vegetation in conjunction with the soil compartment. With this interdisciplinary approach, I can demonstrate that both agricultural intensification and nature conservation with large herbivores indeed reduce woody carbon storage above- and belowground, but that part of this carbon is sequestered into the soil organic carbon stock. I then quantify whole-ecosystem carbon storage in different ecosystem compartments (above- and belowground woody carbon in shrubs and trees, respectively, as well as topsoil and subsoil organic carbon) of two savanna vegetation types (scrub savanna and savanna woodland). Moreover, in a space-for-time substitution, I analyse how land-use changes impact carbon storage in each compartment and in the whole ecosystem. The carbon storage compartments are found to differ in their persistence under land-use change, with carbon bound in shrub biomass being least persistent to future changes and subsoil organic carbon being most stable under changing land-use.
Using Generalized Additive Models (GAMs), I then explore which individual land-use change effects act as drivers of carbon storage and uncover non-linear effects, especially of elephant browsing, with implications for future carbon storage. In the last chapter, I place my findings in the larger context of this thesis and discuss relevant implications for land-use change and future-making decisions in rural Africa.
Resolving the evolutionary history of two hippotragin antelopes using archival and ancient DNA
(2024)
African antelopes are iconic but surprisingly understudied in terms of their genetics, especially when it comes to their evolutionary history and genetic diversity. The age of genomics provides an opportunity to investigate evolution using whole nuclear genomes. Decreasing sequencing costs enable the recovery of multiple loci per genome, giving more power to single-specimen analyses and providing higher-resolution insights into species and populations that can help guide conservation efforts. This age of genomics has only recently begun for African antelopes. Many African bovids have a declining population trend and hence are often endangered. Consequently, contemporary samples from the wild are often hard to collect. In these cases, ex situ samples, from contemporary captive populations or in the form of archival or ancient DNA (aDNA) from historical museum or archaeological/paleontological specimens, present a great research opportunity, with the latter two even offering a window onto the past. However, the recovery of aDNA is still considered challenging from regions with climatic conditions deemed adverse for DNA preservation, such as the African continent. This raises the question of whether DNA recovery from fossils as old as the early Holocene is possible in these regions.
This thesis focuses on investigating the evolutionary history and genetic diversity of two species: the addax (Addax nasomaculatus) and the blue antelope (Hippotragus leucophaeus). The addax is critically endangered and might even already be extinct in the wild, while the blue antelope became extinct around 1800 AD, making it the first large African mammal species to become extinct in historical times. Together, the addax and the blue antelope can inform us about current and past extinction events, and the knowledge gained can help guide conservation efforts for threatened species. The three studies used ex situ samples and present the first nuclear whole-genome data for both species. The addax study used historical museum specimens and a contemporary sample from a captive population. The two studies on the blue antelope used mainly historical museum specimens but also fossils, and resulted in the recovery of what was at the time the oldest paleogenome from Africa.
The aim of the first study was to assess the genetic diversity and the evolutionary history of the addax. It found that the historical wild addax population showed only limited phylogeographic structuring, indicating that the addax was a highly mobile and panmictic population and suggesting that the current European captive population might be missing the majority of the historical mitochondrial diversity. It also found the nuclear and mitochondrial diversity in the addax to be rather low compared to other wild ungulate species. Suggestions on how to best save the remaining genetic diversity are presented. The European zoo population was shown to exhibit no or only minor levels of inbreeding, indicating good prospects for the restoration of the species in the wild. The trajectory of the addax’s effective population size indicated a major bottleneck in the late Pleistocene and a low effective population size well before recent human impact led to the species being critically endangered today.
The second study set out to investigate the identities of historical blue antelope specimens using aDNA techniques. Results showed that six out of ten investigated specimens were misidentified, demonstrating the blue antelope to be one of the scarcest mammal species in historical natural history collections, with almost no bone reference material. The preliminary analysis of the mitochondrial genomes suggested a low diversity and hence low population size at the time of the European colonization of southern Africa.
Study three presents the results of the analyses of two blue antelope nuclear genomes, one ~200 years old and another dating to the early Holocene, 9,800–9,300 cal years BP. A fossil-calibrated phylogeny dated the divergence time of the three historically extant Hippotragus species to ~2.86 Ma and demonstrated the blue and the sable antelope (H. niger) to be sister species. In addition, ancient gene flow from the roan (H. equinus) into the blue antelope was detected. A comparison with the roan and the sable antelope indicated that the blue antelope had a much lower nuclear diversity, suggesting a low population size since at least the early Holocene. This concurs with findings from the fossil record that show a considerable decline in abundance after the Pleistocene–Holocene transition. Moreover, it suggests that the blue antelope persisted throughout the Holocene regardless of a low population size, indicating that human impact in the colonial era was a major factor in the blue antelope’s extinction.
This thesis uses aDNA analyses to provide deeper insights into the evolutionary history and genetic diversity of the addax and the blue antelope. Human impact was likely the main driver of extinction in the blue antelope, and is likely the main factor threatening the addax today. This thesis demonstrates the value of ex situ samples for science and conservation, and suggests including genetic data in conservation assessments of species. It further demonstrates the beneficial use of aDNA for the taxonomic identification of historically important specimens in natural history collections. Finally, the successful retrieval of a paleogenome from the early Holocene of Africa using shotgun sequencing shows that DNA retrieval from samples of that age is possible from regions generally deemed unfavorable for DNA preservation, opening up new research opportunities. All three studies enhance our knowledge of African antelopes, contributing to the general understanding of African large mammal evolution and to the conservation of these and similarly threatened species.
State space models enjoy wide popularity in mathematical and statistical modelling across disciplines and research fields. Popular solutions to problems of estimation and forecasting of a latent signal, such as the celebrated Kalman filter, rely on a set of strong assumptions, including linearity of the system dynamics and Gaussianity of the noise terms.
We investigate the fallacy of mis-specifying the noise terms, that is, signal noise and observation noise, with regard to heavy-tailedness: the true dynamics frequently produce observation outliers or abrupt jumps of the signal state due to realizations of these heavy tails that are not captured by the model. We propose a formalisation of observation noise mis-specification in terms of Huber's ε-contamination as well as a computationally cheap solution via generalised Bayesian posteriors with a diffusion Stein divergence loss, resulting in the diffusion score matching Kalman filter, a modified algorithm akin in complexity to the regular Kalman filter. For this new filter, interpretations of the novel terms, stability, and an ensemble variant are discussed. Regarding signal noise mis-specification, we propose a formalisation in the framework of change point detection and join ideas from the popular CUSUM algorithm with ideas from Bayesian online change point detection to combine frequentist reliability constraints with online inference, resulting in a Gaussian mixture model variant of multiple Kalman filters. We hereby exploit open-ended sequential probability ratio tests on the evidence of Kalman filters on observation sub-sequences for aggregated inference under notions of plausibility.
Both proposed methods are combined to investigate the double mis-specification problem and are discussed with regard to their capabilities in reliable and well-tuned uncertainty quantification. Each section provides an introduction to the required terminology and tools as well as simulation experiments on the popular target tracking task and the non-linear, chaotic Lorenz-63 system to showcase the practical performance of the theoretical considerations.
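The baseline recursion underlying both proposed methods is the classical Kalman filter. The following minimal NumPy sketch, with illustrative matrices and data rather than any setup from the thesis, shows the predict and update steps that the robust variants modify:

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, m0, P0):
    """Standard Kalman filter for the linear-Gaussian state space model
    x_t = A x_{t-1} + q_t,  y_t = C x_t + r_t,  q_t ~ N(0, Q), r_t ~ N(0, R)."""
    m, P = m0, P0
    means, covs = [], []
    I = np.eye(len(m0))
    for yt in y:
        # predict step: propagate mean and covariance through the dynamics
        m = A @ m
        P = A @ P @ A.T + Q
        # update step: correct with the observation via the Kalman gain
        S = C @ P @ C.T + R                # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)     # Kalman gain
        m = m + K @ (yt - C @ m)
        P = (I - K @ C) @ P
        means.append(m.copy())
        covs.append(P.copy())
    return np.array(means), np.array(covs)

# illustrative 1D random-walk signal observed in Gaussian noise
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0.0, 0.1, 50))            # latent signal
y = (x + rng.normal(0.0, 1.0, 50)).reshape(-1, 1)  # noisy observations
means, covs = kalman_filter(
    y, A=np.eye(1), C=np.eye(1),
    Q=0.01 * np.eye(1), R=1.0 * np.eye(1),
    m0=np.zeros(1), P0=np.eye(1),
)
```

For the 1D random walk above, the posterior variance converges to roughly 0.1, well below the observation noise variance of 1, illustrating the uncertainty reduction the filter achieves when the noise terms are correctly specified; the heavy-tailed mis-specifications studied in the thesis break exactly this behaviour.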
Floods continue to be the leading cause of economic damages and fatalities among natural disasters worldwide. As future climate and exposure changes are projected to intensify these damages, the need for more accurate and scalable flood risk models is rising. Over the past decade, macro-scale flood risk models have evolved from initial proof-of-concepts to indispensable tools for decision-making at the global, national and, increasingly, the local level. This progress has been propelled by the advent of high-performance computing and the availability of global, space-based datasets. However, despite such advancements, these models are rarely validated and consistently fall short of the accuracy achieved by high-resolution local models. While capabilities have improved, significant gaps persist in understanding the behaviours of such macro-scale models, particularly their tendency to overestimate risk. This dissertation aims to address these gaps by examining the scale transfers inherent in the construction and application of coarse macro-scale models. To achieve this, four studies are presented that, collectively, address the exposure, hazard, and vulnerability components of risk affected by upscaling or downscaling.
The first study focuses on a type of downscaling where coarse flood hazard inundation grids are enhanced to a finer resolution. While such inundation downscaling has been employed in numerous global model chains, ours is the first study to focus specifically on this component, providing an evaluation of the state of the art and a novel algorithm. Findings demonstrate that our novel algorithm is eight times faster than existing methods, offers a slight improvement in accuracy, and generates more physically coherent flood maps in hydraulically challenging regions. When applied to a case study, the algorithm generated a 4m resolution inundation map from 30m hydrodynamic model outputs in 33 s, a 60-fold improvement in runtime with a 25% increase in RMSE compared with direct hydrodynamic modelling. All evaluated downscaling algorithms yielded better accuracy than the coarse hydrodynamic model when compared to observations, corroborating the limits of coarse hydrodynamic models reported by others. Substituting downscaling into flood risk model chains, in place of high-resolution modelling, can drastically improve the lead time of impact-based forecasts and the efficiency of hazard map production. With downscaling, local regions could obtain high-resolution local inundation maps by post-processing a global model without the need for expensive modelling or expertise.
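As a point of reference for what inundation downscaling does, the sketch below implements the simplest generic baseline from this literature: resample the coarse water surface elevation (WSE) onto the fine terrain and recompute depth. This is an illustrative baseline under assumed toy values, not the thesis's novel algorithm.

```python
import numpy as np

def simple_downscale(wse_coarse, dem_fine, factor):
    """Naive inundation downscaling: nearest-neighbour upsampling of the coarse
    water surface elevation, then depth = WSE - fine DEM where positive.
    A generic baseline approach, not the algorithm proposed in the thesis."""
    wse_fine = np.kron(wse_coarse, np.ones((factor, factor)))  # upsample WSE
    depth = wse_fine - dem_fine                                # recompute depth
    return np.where(depth > 0, depth, 0.0)                     # dry where WSE is below terrain

# illustrative example: one coarse cell split into 4 fine cells
wse_coarse = np.array([[10.0]])          # coarse WSE (m)
dem_fine = np.array([[9.0, 9.5],
                     [10.5, 11.0]])      # fine-resolution terrain (m)
depth_fine = simple_downscale(wse_coarse, dem_fine, factor=2)
# → [[1.0, 0.5], [0.0, 0.0]]: the fine DEM re-introduces dry high ground
```

Even this naive scheme recovers sub-grid dry areas that the coarse grid cannot represent, which is why downscaled maps can outperform the coarse hydrodynamic model against observations.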
The second study focuses on hazard aggregation and its implications for exposure, investigating the implicit aggregations commonly used to intersect hazard grids with coarse exposure models. This research introduces a novel spatial classification framework to understand the effects of rescaling flood hazard grids to a coarser resolution. The study derives closed-form analytical solutions for the location and direction of bias from flood grid aggregation, showing that bias will always be present in regions near the edge of inundation. For example, inundation area will be positively biased when water depth grids are aggregated, while volume will be negatively biased when water elevation grids are aggregated. Extending the analysis to the effects of hazard aggregation on building exposure, this study shows that exposure in regions at the edge of inundation is an order of magnitude more sensitive to aggregation errors than hazard alone. Among the two aggregation routines considered, averaging water surface elevation grids better preserved flood depths at buildings than averaging water depth grids. The study provides the first mathematical proof and generalizable treatment of flood hazard grid aggregation, demonstrating important mechanisms to help flood risk modellers understand and control model behaviour.
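The direction of the depth-averaging bias can be reproduced with a toy grid: averaging water depths assigns a small positive depth to coarse blocks that were mostly dry, so the wet area at the inundation edge can only grow. A minimal sketch with assumed values:

```python
import numpy as np

def wet_area(depth, cell_area):
    """Inundated area = number of wet cells (depth > 0) times the cell area."""
    return np.count_nonzero(depth > 0) * cell_area

# fine 4x4 water-depth grid (m): wet core, dry fringe at the inundation edge
fine = np.array([
    [2.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])
# aggregate to 2x2 by block-averaging depths (each coarse cell covers 4 fine cells)
coarse = fine.reshape(2, 2, 2, 2).mean(axis=(1, 3))

area_fine = wet_area(fine, cell_area=1.0)      # 3 wet fine cells  -> 3.0
area_coarse = wet_area(coarse, cell_area=4.0)  # 1 wet coarse cell -> 4.0
# the partially wet block becomes entirely "wet" after averaging: positive bias
```

Cells far inside or far outside the flood are unaffected by averaging, which is why the analytical solutions in the study locate the bias at the edge of inundation.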
The final two studies focus on the aggregation of vulnerability models, or flood damage functions, investigating the practice of applying per-asset functions to aggregate exposure models. Both studies extend Jensen's inequality, a well-known mathematical result from 1906, to demonstrate how the aggregation of flood damage functions leads to bias. Applying Jensen's proof in this new context, the results show that the typically concave flood damage functions introduce a positive bias (overestimation) when aggregated. This behaviour was further investigated with a simulation experiment including 2 million buildings in Germany, four global flood hazard simulations, and three aggregation scenarios. The results show that the positive aggregation bias is not distributed evenly in space, meaning some regions identified as "hot spots of risk" in assessments may in fact just be hot spots of aggregation bias. This study provides the first application of Jensen's inequality to explain the overestimates reported elsewhere and offers advice for modellers on minimizing such artifacts.
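The core argument can be checked numerically: for a concave damage function f, Jensen's inequality gives f(E[x]) ≥ E[f(x)], so evaluating a per-asset function at an aggregated (mean) depth overestimates the true mean damage. A sketch with an assumed square-root-shaped damage curve, illustrative rather than a curve from the study:

```python
import numpy as np

# assumed concave depth-damage function (fraction of asset value lost);
# real curves differ, but concavity is the property that matters here
def damage(depth_m):
    return np.minimum(1.0, np.sqrt(depth_m) / 3.0)

depths = np.array([0.1, 0.4, 0.9, 2.5])   # per-building flood depths (m)

per_asset = damage(depths).mean()          # "true" mean damage, per building
aggregated = damage(depths.mean())         # damage at the aggregated mean depth

# Jensen's inequality for concave f: f(E[x]) >= E[f(x)], i.e. positive bias
```

Here `aggregated` (~0.33) exceeds `per_asset` (~0.29); the gap grows with the spread of depths within an aggregation unit, which is why the bias concentrates in some regions and not others.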
In total, this dissertation investigates the complex ways aggregation and disaggregation influence the behaviour of risk models, focusing on the scale transfers underpinning macro-scale flood risk assessments. Extending a key finding of the flood hazard literature to the broader context of flood risk, this dissertation concludes that, all else being equal, coarse models overestimate risk. It goes beyond previous studies by providing mathematical proofs for how and where such bias emerges in aggregation routines, offering a mechanistic explanation for coarse model overestimates. It shows that this bias is spatially heterogeneous, necessitating a deep understanding of how rescaling may bias models in order to effectively reduce or communicate uncertainties. Further, the dissertation offers specific recommendations to help modellers minimize scale transfers in problematic regions. In conclusion, I argue that such aggregation errors are epistemic, stemming from choices in model structure, and therefore hold greater potential and impetus for study and mitigation. This deeper understanding of uncertainties is essential for improving macro-scale flood risk models and their effectiveness in equitable, holistic, and sustainable flood management.
Wars and the world
(2024)
This book offers a descriptive analysis of the Soviet/Russian wars in Afghanistan, Chechnya, and Georgia, as well as an in-depth exploration of the ways in which these wars are framed in the collective consciousness created by global popular culture. Russian and Western modalities of remembrance have been, and remain, engaged in a world war that takes place (not exclusively, but intensively) on the level of popular culture. The action/reaction dynamic, confrontational narratives, and othering between the two "camps" never ceased. The Cold War, in many ways and contrary to the views of the many who hoped for the end of history, never really ended.
Phobic cosmopolitanism
(2024)
Plate tectonic boundaries constitute the suture zones between tectonic plates. They are shaped by a variety of distinct and interrelated processes and play a key role in geohazards and georesource formation. Many of these processes have been previously studied, while many others remain unaddressed or undiscovered. In this work, the geodynamic numerical modeling software ASPECT is applied to shed light on further process interactions at continental plate boundaries. In contrast to natural data, geodynamic modeling has the advantage that processes can be directly quantified and that all parameters can be analyzed over the entire evolution of a structure. Furthermore, processes and interactions can be singled out from complex settings because the modeler has full control over all of the parameters involved. To account for the simplifying character of models in general, I have chosen to study generic geological settings with a focus on the processes and interactions rather than precisely reconstructing a specific region of the Earth.
In Chapter 2, 2D models of continental rifts with different crustal thicknesses between 20 and 50 km and extension velocities in the range of 0.5-10 mm/yr are used to obtain a speed limit for the thermal steady-state assumption, commonly employed to address the temperature fields of continental rifts worldwide. Because the tectonic deformation from ongoing rifting outpaces heat conduction, the temperature field is not in equilibrium, but is characterized by a transient, tectonically-induced heat flow signal. As a result, I find that isotherm depths of the geodynamic evolution models are shallower than a temperature distribution in equilibrium would suggest. This is particularly important for deep isotherms and narrow rifts. In narrow rifts, the magnitude of the transient temperature signal limits a well-founded applicability of the thermal steady-state assumption to extension velocities of 0.5-2 mm/yr. Estimation of the crustal temperature field affects conclusions on all temperature-dependent processes ranging from mineral assemblages to the feasible exploitation of a geothermal reservoir.
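The mismatch of time scales behind this result can be illustrated with a standard back-of-the-envelope estimate (textbook values, not numbers from the thesis): the conductive diffusion time over a crustal length scale is comparable to or longer than the time rifting needs to accumulate the same amount of extension at the slow end of the velocity range, so heat conduction cannot fully equilibrate the temperature field during deformation.

```python
# Back-of-the-envelope comparison of thermal diffusion vs. rifting time scales.
# Values are illustrative textbook numbers, not results from the thesis.

kappa = 1e-6                 # thermal diffusivity of crustal rock (m^2/s)
L = 30e3                     # crustal length scale (m), ~30 km
seconds_per_myr = 3.156e13   # seconds in one million years

# characteristic conductive diffusion time: tau ~ L^2 / kappa
tau_myr = L**2 / kappa / seconds_per_myr

# time to accumulate 30 km of extension at 2 mm/yr
v = 2e-3 / 3.156e7           # 2 mm/yr converted to m/s
t_ext_myr = L / v / seconds_per_myr

print(f"diffusion time ~ {tau_myr:.0f} Myr, extension time ~ {t_ext_myr:.0f} Myr")
```

With these assumed values the diffusion time (~29 Myr) exceeds the extension time (~15 Myr) even at a slow 2 mm/yr, consistent with the transient, tectonically induced heat flow signal described above.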
In Chapter 3, I model the interactions of different rheologies with the kinematics of folding and faulting using the example of fault-propagation folds in the Andean fold-and-thrust belt. The evolution of the velocity fields from geodynamic models are compared with those from trishear models of the same structure. While the latter use only geometric and kinematic constraints of the main fault, the geodynamic models capture viscous, plastic, and elastic deformation in the entire model domain. I find that both models work equally well for early, and thus relatively simple stages of folding and faulting, while results differ for more complex situations where off-fault deformation and secondary faulting are present. As fault-propagation folds can play an important role in the formation of reservoirs, knowledge of fluid pathways, for example via fractures and faults, is crucial for their characterization.
Chapter 4 deals with a bending transform fault and the interconnections between tectonics and surface processes. In particular, the tectonic evolution of the Dead Sea Fault is addressed where a releasing bend forms the Dead Sea pull-apart basin, while a restraining bend further to the North resulted in the formation of the Lebanese mountains. I ran 3D coupled geodynamic and surface evolution models that included both types of bends in a single setup. I tested various randomized initial strain distributions, showing that basin asymmetry is a consequence of strain localization. Furthermore, by varying the surface process efficiency, I find that the deposition of sediment in the pull-apart basin not only controls basin depth, but also results in a crustal flow component that increases uplift at the restraining bend.
Finally, in Chapter 5, I present the computational basis for adding further complexity to plate boundary models in ASPECT with the implementation of earthquake-like behavior using the rate-and-state friction framework. Despite earthquakes happening on a relatively small time scale, there are many interactions between the seismic cycle and the long time spans of other geodynamic processes. Amongst others, the crustal state of stress as well as the presence of fluids or changes in temperature may alter the frictional behavior of a fault segment. My work provides the basis for a realistic setup of involved structures and processes, which is therefore important to obtain a meaningful estimate for earthquake hazards.
While these findings improve our understanding of continental plate boundaries, further development of geodynamic software may help to reveal even more processes and interactions in the future.
Large parts of the Earth’s interior are inaccessible to direct observation, yet global geodynamic processes are governed by the physical material properties under extreme pressure and temperature conditions. It is therefore essential to investigate the deep Earth’s physical properties through in-situ laboratory experiments. With this goal in mind, the optical properties of mantle minerals at high pressure offer a unique way to determine a variety of physical properties in a straightforward, reproducible, and time-effective manner, thus providing valuable insights into the physical processes of the deep Earth. This thesis focusses on the system Mg-Fe-O, specifically on the optical properties of periclase (MgO) and its iron-bearing variant ferropericlase ((Mg,Fe)O), which forms a major planetary building block. The primary objective is to establish links between physical material properties and optical properties. In particular, the spin transition in ferropericlase, the second-most abundant phase of the lower mantle, is known to change the physical material properties. Although the spin transition region likely extends down to the core-mantle boundary, the effects of the mixed-spin state, where both high- and low-spin states are present, remain poorly constrained.
In the studies presented herein, we show how optical properties are linked to physical properties such as electrical conductivity, radiative thermal conductivity, and viscosity. We also show how the optical properties reveal changes in the chemical bonding. Furthermore, we unveil how the chemical bonding, the optical and other physical properties are affected by the iron spin transition. We find opposing trends in the pressure dependence of the refractive index of MgO and (Mg,Fe)O. From 1 atm to ~140 GPa, the refractive index of MgO decreases by ~2.4% from 1.737 to 1.696 (±0.017). In contrast, the refractive index of (Mg0.87Fe0.13)O (Fp13) and (Mg0.76Fe0.24)O (Fp24) ferropericlase increases with pressure, likely because Fe-Fe interactions between adjacent iron sites hinder a strong decrease of polarizability, as is observed with increasing density in the case of pure MgO. An analysis of the index dispersion in MgO (decreasing by ~23% from 1 atm to ~103 GPa) reflects a widening of the band gap from ~7.4 eV at 1 atm to ~8.5 (±0.6) eV at ~103 GPa. The index dispersion (between 550 and 870 nm) of Fp13 reveals a decrease by a factor of ~3 over the spin transition range (~44–100 GPa). We show that the electrical band gap of ferropericlase widens significantly, up to ~4.7 eV, in the mixed-spin region, equivalent to an increase by a factor of ~1.7. We propose that this is due to a lower electron mobility between adjacent Fe2+ sites of opposite spin, explaining the previously observed low electrical conductivity in the mixed-spin region. From the study of absorbance spectra in Fp13, we show an increasing covalency of the Fe-O bond with pressure for high-spin ferropericlase, whereas in the low-spin state a trend towards a more ionic nature of the Fe-O bond is observed, indicating a bond-weakening effect of the spin transition.
We found that the spin transition is ultimately caused by both an increase of the ligand field-splitting energy and a decreasing spin-pairing energy of high-spin Fe2+.
Overcoming natural biomass limitations in gram-negative bacteria through synthetic carbon fixation
(2024)
The carbon demands of an ever-increasing human population and the concomitant rise in net carbon emissions require CO2-sequestering approaches for the production of carbon-containing molecules. Microbial production of carbon-containing products from plant-based sugars could replace current fossil-based production. However, this form of sugar-based microbial production directly competes with human food supply and natural ecosystems. Instead, one-carbon feedstocks derived from CO2 and renewable energy have been proposed as an alternative. The one-carbon molecule formate is a stable, readily soluble, and safe-to-store energetic mediator that can be electrochemically generated from CO2 and (excess off-peak) renewable electricity. Formate-based microbial production could represent a promising approach for a circular carbon economy. However, easy-to-engineer and efficient formate-utilizing microbes are lacking. Multiple synthetic metabolic pathways have been designed for better-than-nature carbon fixation. Among them, the reductive glycine pathway was proposed as the most efficient pathway for aerobic formate assimilation. While some of these pathways have been successfully engineered in microbial hosts, these synthetic strains have so far not exceeded the performance of natural strains. In this work, I engineered and optimized two different synthetic formate assimilation pathways in gram-negative bacteria to exceed the limits of a natural carbon fixation pathway, the Calvin cycle.
The first chapter established Cupriavidus necator as a promising formatotrophic host for producing value-added chemicals. The formate tolerance of C. necator was assessed and a production pathway for crotonate was established in a modularized fashion. Finally, bioprocess optimization was leveraged to produce crotonate from formate at a titer of 148 mg/L.
In the second chapter, I chromosomally integrated and optimized the synthetic reductive glycine pathway in C. necator using a transposon-mediated selection approach. The insertion methodology allowed selection for condition-specific, tailored pathway expression, as improved pathway performance led to better growth. I then showed that my engineered strains exceed the biomass yields of the Calvin-cycle-utilizing wildtype C. necator on formate. This demonstrated for the first time the superiority of a synthetic formate assimilation pathway and, by extension, of synthetic carbon fixation efforts as a whole.
In chapter 3, I engineered a segment of a synthetic carbon fixation cycle in Escherichia coli. The GED cycle was proposed as a Calvin cycle alternative that does not perform a wasteful oxygenation reaction and is more energy-efficient. The pathway's simple architecture and reasonable driving force made it a promising candidate for enhanced carbon fixation. I created a deletion strain that coupled growth to carboxylation via the GED pathway segment. The CO2 dependence of the engineered strain and 13C-tracer analysis confirmed operation of the pathway in vivo.
In the final chapter, I present my efforts to implement the GED cycle also in C. necator, which might be a better-suited host, as it is accustomed to formatotrophic and hydrogenotrophic growth. To provide the carboxylation substrate in vivo, I engineered C. necator to utilize xylose as a carbon source and created a selection strain for carboxylase activity. I verified the activity of the key enzyme, the carboxylase, in the decarboxylative direction. Although CO2-dependent growth of the strain was not obtained, I showed that all enzymes required for operation of the GED cycle are active in vivo in C. necator.
I then evaluate my success in engineering a linear and a cyclical one-carbon fixation pathway in two different microbial hosts. The linear reductive glycine pathway presents itself as a much simpler metabolic solution for formate-dependent growth than the sophisticated establishment of hard-to-balance carbon fixation cycles. Finally, I highlight advantages and disadvantages of C. necator as an upcoming microbial benchmark organism for synthetic metabolism efforts and give an outlook on its potential for the future of C1-based manufacturing.
Virtual Reality (VR) achieves the highest level of immersion when presented using a 1:1 mapping of virtual space to physical space, also known as real walking. The advent of inexpensive consumer VR headsets, all capable of running inside-out position tracking, has brought VR to the home. However, many VR applications do not feature full real walking; instead, they use a less immersive space-saving technique known as instant teleportation. Given that only 0.3% of home users run their VR experiences in spaces larger than 4m2, the most likely explanation is the lack of the physical space required for meaningful use of real walking. In this thesis, we investigate how to overcome this hurdle. We demonstrate how to run 1:1-mapped VR experiences in small physical spaces and explore the trade-off between space and immersion. (1) We start with a space limit of 15cm. We present DualPanto, a device that allows (blind) VR users to experience the virtual world from a 1:1-mapped bird's-eye perspective by leveraging haptics. (2) We then relax our space constraints to 50cm, which is what seated users (e.g., on an airplane or train ride) have at their disposal. We leverage this space to represent a standing user in 1:1 mapping, while compressing only the user's arm movement. We demonstrate our prototype VirtualArms with the example of VR experiences limited to arm movement, such as boxing. (3) Finally, we relax our space constraints further to 3m2 of walkable space, which is what 75% of home users have access to. As well established in the literature, we implement real walking with the help of portals, also known as "impossible spaces". While impossible spaces under such dramatic space constraints tend to degenerate into incomprehensible mazes (as demonstrated, for example, by "TraVRsal"), we propose plausibleSpaces: presenting meaningful virtual worlds by adapting various visual elements to impossible spaces.
Our techniques push the boundary of spatially meaningful VR interaction in various small spaces. We see future challenges in new design approaches to immersive VR experiences for the smallest physical spaces in our daily life.
Organizations are investing billions on innovation and agility initiatives to stay competitive in their increasingly uncertain business environments. Design Thinking, an innovation approach based on human-centered exploration, ideation, and experimentation, has gained increasing popularity. The market for Design Thinking, including software products and general services, is projected to reach 2,500 million US dollars by 2028. A dispersed set of positive outcomes has been attributed to Design Thinking. However, there is no clear understanding of what exactly comprises the impact of Design Thinking and how it is created. To support a billion-dollar market, it is essential to understand the value Design Thinking brings to organizations, not only to justify large investments, but also to continuously improve the approach and its application.
Following a qualitative research approach combined with results from a systematic literature review, the results presented in this dissertation offer a structured understanding of Design Thinking impact. The results are structured along two main perspectives of impact: the individual and the organizational perspective. First, insights from qualitative data analysis demonstrate that measuring and assessing the impact of Design Thinking is currently one central challenge for Design Thinking practitioners in organizations. Second, the interview data revealed several effects Design Thinking has on individuals, demonstrating how Design Thinking can impact boundary management behaviors and enable employees to craft their jobs more actively.
Contributing to innovation management research, the work presented in this dissertation systematically explains the impact of Design Thinking, allowing other researchers to better locate and integrate their work. The results of this research advance the theoretical rigor of Design Thinking impact research, offering multiple theoretical underpinnings that explain the variety of Design Thinking impact. Furthermore, this dissertation contains three specific propositions on how Design Thinking creates an impact: through integration, enablement, and engagement. Integration refers to how Design Thinking enables organizations by effectively combining things, such as fostering a balance between exploitation and exploration activities. Through Engagement, Design Thinking impacts organizations by involving users and other relevant stakeholders in their work. Moreover, Design Thinking creates impact through Enablement, making it possible for individuals to enact a specific behavior or experience certain states.
By synthesizing multiple theoretical streams into these three overarching themes, the results of this research can help bridge disciplinary boundaries, for example between business, psychology, and design, and foster future collaborative research. Practitioners benefit from the results because this thesis details multiple desirable outcomes that can be expected from practicing Design Thinking, such as successful individual job crafting behaviors. This allows practitioners to make more evidence-based decisions concerning Design Thinking implementation. Overall, considering multiple levels of impact as well as a broad range of theoretical underpinnings is paramount to understanding and fostering Design Thinking impact.
The trace elements copper, iron, manganese, selenium, and zinc are essential micronutrients involved in a variety of cellular processes, each with distinct functions. Because of this importance, their concentrations are tightly regulated in mammalian organisms. The maintenance of these levels is termed trace element homeostasis and is mediated by a combination of processes regulating absorption, cellular and systemic transport mechanisms, storage and effector proteins, and excretion. Due to their chemical properties, some functions of trace elements overlap, as seen, for example, in antioxidative defence, which comprises an expansive spectrum of antioxidative proteins and molecules. The same is true for regulatory mechanisms, causing trace elements to influence each other's homeostases. To mimic physiological conditions, trace elements should therefore not be evaluated separately but considered in parallel. While many of these homeostatic mechanisms are well studied, new pathways are still being discovered for some elements. Additionally, the connections between dietary trace element intake, trace element status, and health are not yet fully unraveled. Given current demographic developments, the influence of ageing as well as of certain pathological conditions is also of increasing interest. Against this background, the TraceAge research unit was initiated, aiming to elucidate the homeostases of, and interactions between, essential trace elements in healthy and diseased elderly individuals. While human cohort studies offer insights into trace element profiles, in vivo model organisms are also used to identify the underlying molecular mechanisms. This is achieved through a set of feeding studies in which mice of various age groups receive diets of reduced trace element content.
To account for the cognitive deterioration observed with ageing, for neurodegenerative diseases, and for genetic mutations triggering imbalances in cerebral trace element concentrations, one TraceAge work package focuses on trace elements in the murine brain, specifically the cerebellum. In this context, the concentrations of the five essential trace elements of interest, copper, iron, manganese, selenium, and zinc, were quantified via inductively coupled plasma-tandem mass spectrometry, revealing differences between brain and liver in the prioritization of trace element homeostases. Upon moderate reduction of dietary trace element supply, cerebellar concentrations of copper and manganese deviated from those in adequately supplied animals. Upon further reduction of dietary trace element contents, cerebellar iron and selenium concentrations were also affected, although not as strongly as observed in liver tissue. In contrast, zinc concentrations remained stable. Investigation of aged mice revealed cerebellar accumulation of copper and iron, possibly contributing to oxidative stress on account of their redox properties. Oxidative stress affects a multitude of cellular components and processes, among them, next to proteins and lipids, the DNA. Direct insults impairing DNA integrity are of relevance here, but so are indirect effects mediated by the machinery ensuring genomic stability and its functionality. This system includes the DNA damage response, which comprises the detection of endogenous and exogenous DNA lesions, the decision on subsequent cell fate, and the enabling of DNA repair, another pillar of genomic stability maintenance. In the proteins of this machinery, too, trace elements act as cofactors, giving rise to the hypothesis that genomic stability maintenance is impaired under conditions of disturbed trace element homeostasis.
To investigate this hypothesis, a variety of approaches was used: applying guidelines of the OECD (Organisation for Economic Co-operation and Development), adapting existing protocols for use in cerebellum tissue, and establishing new methods. To assess the impact of age and dietary trace element depletion on selected endpoints estimating genomic instability, DNA damage and DNA repair were investigated. DNA damage analysis, in particular of DNA strand breaks and oxidatively modified DNA bases, revealed stable physiological levels that were affected neither by age nor by trace element supply. To examine whether this is the result of increased repair rates, two steps characteristic of base excision repair, namely DNA incision and ligation activity, were studied. The activities of DNA glycosylases and DNA ligases were not reduced by age or trace element depletion, either. The gene expression of major proteins involved in genomic stability maintenance was also analysed, mirroring the results obtained at the protein level. In conclusion, the present work describes the homeostatic regulation of trace elements in the brain, which, in the absence of genetic mutations, is able to retain physiological levels to a certain extent even under conditions of reduced trace element supply. This is reflected by the functionality of genomic stability maintenance mechanisms, illuminating the prioritization of the brain as a vital organ.