The University of Potsdam positions itself as a university of the digital age, with the goal of making the comprehensive use of digital media in teaching and studying a lived culture of teaching, learning and examination for all students, teachers and staff.
Building on the experience and preparatory work of recent years, such as the e-learning inventory as well as earlier strategies and mission statements through which digital media were increasingly integrated into teaching and studying, the University of Potsdam has a strong starting position in the field of digital teaching. The current e-learning strategy (2023–2028) therefore aims at the further development and consolidation of these approaches. It identifies six central fields of action: "Exchange and Networking", "Content", "Innovation and Consolidation", "Media Literacy", "Quality Development" and "UP and the World".
The strategy was developed in a participatory process coordinated by the E-Learning Steering Group and supported by representatives from all areas and all status groups of the university. It was adopted at the 319th meeting of the Senate on 5 July 2023 and published with editorial changes in 2024.
Terrestrial landscape dynamics are dominated by the production, mobilisation, transfer and deposition of sediment. Sediments carry numerous chemical elements, making them a key component of ecological processes: soil constitution, and thus plant and animal ecosystems, depends on them, and by extension so does the human species. They are also essential for climate evolution and regulation, as marine sedimentation acts as a carbon sink. However, the processes behind their production, mobilisation and transfer, such as volcanic eruptions, mass wasting, flooding events and wildfires, can occur suddenly and with high energy content, shaking ecosystems and shaping landforms. Moreover, in the most recent era, the human species has shown its ability to disturb landscape dynamics and change sediment cycles. There is thus a need for a predictive understanding of the processes involved. This relies on an understanding of the mechanisms of key processes and their controls, and on knowledge of the state and evolution of the Earth's surface. Classic approaches to these challenges include empirical observations and numerical modeling of geochemical fluxes and surface processes, as well as the study of terrestrial sedimentary archives, to better understand the parameters at stake in landscape dynamics and climate change, and the various actions and feedbacks between the production, mobilisation, transfer and deposition of sediments that ultimately shape landscapes. Environmental seismology complements these approaches.
Environmental seismology is the field that investigates the source functions and propagation properties of seismic vibrations triggered by processes occurring at or near the Earth's surface, below and above it (cryosphere, hydrosphere, atmosphere, human habitat, biosphere, etc.), in order to gain insight into these physical processes. Indeed, from mass wasting events to rivers, from wild species to humans, all these processes generate seismic waves. Environmental seismology is a rather recent field, with new branches expanding rapidly and at various stages of scientific progress. This thesis is motivated by the goal of learning more about two major natural process hazards (river bedload transport and mass wasting) as well as about human-generated acoustic hazard, while covering the axis of fundamental research progression, from data exploration and method and theory development to proof of concept, with the twin aims of developing a better understanding of the operation of these specific processes and of advancing the methods we have at our disposal to study them.
First, I provide a benchmark for assessing the reliability of existing seismic bedload model inversions to retrieve bedload flux from seismic data. Bedload flux measurements are essential for a better understanding of river dynamics, and they can be obtained with environmental seismology. However, due to a lack of well-constrained validation data, the accuracy of the resulting inversions has been unknown. I address this gap in Chapter 2.2, reporting a seismic field experiment and comparing the results to high-quality independent bedload measurements to constrain a seismic bedload model. The study shows that the quality of bedload flux estimates from seismic data strongly depends on the quality of the model's input data. Direct measurements of the relevant parameters, chiefly the seismic ground properties needed for the Green's function and the grain size distribution of the moving bedload, considerably improve the model quality over generic approaches using empirical or theoretical functions. I also provide a numerical tool to facilitate the use of water turbulence and bedload seismic inversion models: such models require painstaking work to constrain the parameters describing the ground properties, by active seismic study or analysis of passive seismic data, and the grain size distribution, via independent measurements. Reasonable predictions can be achieved by using a Monte Carlo approach to optimize the free parameters with respect to the target parameters. The validation of the tool in Chapter 2.3, with independent measurements of water depth and bedload flux at a study site on the Eshtemoa River in Israel, makes it available for reliable use at other sites. The work reported in this chapter has been published in Lagarde et al. 2021 and Dietze et al. 2019b.
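To make the Monte Carlo step concrete, here is a minimal sketch of such a parameter search, assuming a hypothetical forward model forward_psd (a stand-in for the seismic bedload/turbulence spectrum model) and illustrative parameter bounds; it is not the thesis' actual tool.

```python
import numpy as np

# Hypothetical bounds for the free parameters (e.g. ground quality factor,
# Rayleigh wave phase velocity at 1 Hz, median grain size); illustrative
# stand-ins, not values from the thesis.
BOUNDS = {"q": (10, 60), "v0": (400, 1200), "d50": (0.01, 0.2)}

def forward_psd(params, freqs):
    # Placeholder for the physical forward model predicting the seismic
    # power spectral density; replace with the actual bedload/turbulence model.
    return np.zeros_like(freqs)

def misfit(params, freqs, psd_obs):
    """Mean squared distance between modelled and observed PSD."""
    return np.mean((forward_psd(params, freqs) - psd_obs) ** 2)

def monte_carlo_fit(freqs, psd_obs, n_draws=10000, seed=42):
    """Randomly sample the free parameters and keep the best-fitting set."""
    rng = np.random.default_rng(seed)
    best, best_cost = None, np.inf
    for _ in range(n_draws):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}
        cost = misfit(params, freqs, psd_obs)
        if cost < best_cost:
            best, best_cost = params, cost
    return best, best_cost
```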
In a second study, reported in Chapter 3, I investigate the formation of a failure plane prior to a rockslide. A better understanding of the dynamics of the preparation phase is essential to determine the timing, volume and mobilization mechanism of a rock slope failure, and this can be achieved with environmental seismology. I take advantage of a network of seismic stations close to an unstable slope that recorded cracking signals prior to the slope failure, and use a machine learning technique based on hidden Markov models to isolate these signals from the seismic data, retrieving the cumulative number of cracking events over a period of 20 days before a large rockslide and 10 days after. The trajectory of the cumulative number of cracks shifts from a rather linear shape in the two weeks prior to the rockslide to an S-shaped development in the last 27 h before failure. I interpret this change as a switch from initially distributed cracking to localised damage accumulation in the hours prior to the failure. I develop a simple physical model to explain the temporal evolution of crack activity during the S-shaped phase, revealing the importance of an internal parameter, the total crack boundary length, as the dominant control on failure plane evolution. This study has been published as Lagarde et al. 2023.
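As an illustration of the detection idea (a generic sketch, not the study's actual pipeline), a two-state Gaussian hidden Markov model from the hmmlearn library can segment a windowed seismic feature series into noise and crack states and count contiguous crack segments:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def count_crack_events(features, seed=0):
    """features: (n_windows, n_features) array, e.g. log energy and
    spectral centroid per short time window of the seismic record."""
    model = GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=100, random_state=seed)
    model.fit(features)
    states = model.predict(features)
    # Call the state with the higher mean of the first feature (log energy)
    # the "crack" state; the other state is background noise.
    crack = int(np.argmax(model.means_[:, 0]))
    is_crack = (states == crack).astype(int)
    # Each contiguous run of crack-state windows counts as one event.
    onsets = np.flatnonzero(np.diff(np.r_[0, is_crack]) == 1)
    return len(onsets), np.cumsum(is_crack)  # event count, cumulative curve
```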
Third, I develop a model converting acoustic signals to seismic signals. Part of the acoustic vibrations generated at the Earth's surface is converted to seismic signals at the ground interface. Consequently, noise pollution may translate into slope fatigue and rock micro- (or macro-) fracturing, with a degrading effect on landforms. Moreover, this pollution can have negative impacts, such as physical, physiological and psychological effects on animal species. At present, the impact of seismic pollution generated by acoustic sources is difficult to evaluate. In Chapter 4, I improve and implement a model converting the acoustic pressure generated by a source in the atmosphere into the corresponding seismic signal for a receiver within the ground. The ground is treated as a porous elastic medium in which wave behaviour can be approximated by the Biot-Stoll model. The model is extended to accept a temporal pressure pulse as input and to produce output on a 2D plan-view map, on which the effect of wind on the acoustic-to-seismic coupling can be reproduced. I invest extensive effort in making the model user-friendly, as the project aims to reach a large audience comprising, for example, geomorphologists, biologists and sociologists. Finally, the model is subjected to synthetic testing as well as a qualitative comparison of the predicted ground particle velocity with the seismic signal of a real helicopter flight as the source of acoustic input.
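In the frequency domain, such a conversion amounts to multiplying the spectrum of the pressure pulse by an acoustic-to-seismic transfer function. The sketch below uses a placeholder transfer function standing in for the Biot-Stoll ground response derived in the thesis:

```python
import numpy as np

def transfer_function(freqs):
    # Placeholder acoustic-to-seismic admittance ((m/s)/Pa) with a simple
    # low-pass shape; the thesis derives the real ground response from the
    # Biot-Stoll poroelastic model.
    return 1e-6 / (1.0 + (freqs / 50.0) ** 2)

def pressure_to_ground_velocity(pressure, dt):
    """Convert a temporal pressure pulse (Pa) at the surface into ground
    particle velocity (m/s) at a receiver, via the frequency domain."""
    spec = np.fft.rfft(pressure)                  # pressure spectrum
    freqs = np.fft.rfftfreq(len(pressure), dt)    # matching frequency axis
    return np.fft.irfft(spec * transfer_function(freqs), n=len(pressure))
```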
These studies advance understanding of the operation of specific natural processes in channels and on hillslopes, and bring us closer to designing functioning early warning systems for mass wasting and flood events. This thesis also raises questions that have not been considered before, such as the contribution of human acoustic pollution to the seismic hum and its impact on the natural environment, or the importance of cracks in the self-development of the failure plane prior to slope failure. Together, these studies question general assumptions usually made regarding the triggering of mass wasting or hillslope-channel connectivity. Beyond this, the thesis covers the axis of fundamental research progression, from data exploration and method and theory development to proof of concept, and shows how, in the rapidly developing field of environmental seismology, an active awareness of progress can help strengthen and accelerate general advances.
On 21 April 2021, the European Commission presented its long-awaited proposal for a Regulation “laying down harmonized rules on Artificial Intelligence”, the so-called “Artificial Intelligence Act” (AIA). This article takes a critical look at the proposed regulation. After an introduction (1), the paper analyzes the unclear preemptive effect of the AIA and EU competences (2), the scope of application (3), the prohibited uses of Artificial Intelligence (AI) (4), the provisions on high-risk AI systems (5), the obligations of providers and users (6), the requirements for AI systems with limited risks (7), the enforcement system (8), the relationship of the AIA with the existing legal framework (9), and the regulatory gaps (10). The last section draws some final conclusions (11).
As the climate changes, the open society necessarily changes with it, and with the society, in turn, the constitution and its interpretation change as well. Periodically recurring health and security crises demand a dynamic response by the Basic Law (Grundgesetz) to the problems accompanying them. In enduring crises such as the environmental crisis, the constitution must at the same time be sustainable in many respects. In the process, what we understand by freedom and by climate, environmental or animal protection must always remain in flux.
Jurisdiction
(2022)
Keine Reform für die Zukunft
(2021)
On 1 January 2021, the latest reform of the Renewable Energy Sources Act (Erneuerbare-Energien-Gesetz, EEG) entered into force. By giving municipalities a financial share in the revenues from wind energy, it quietly introduced an unconstitutional levy: through the interplay of the new § 36k EEG 2021 with the long-established EEG surcharge, a levy collected from electricity end consumers flows into municipal budgets. This cannot be based on any legislative competence. Moreover, the capping of the EEG surcharge in 2021 and 2022, in conjunction with § 36k EEG 2021, results in federal funds being placed at the free disposal of the municipalities in an unconstitutional manner.
Strength of weakness
(2020)
The paper investigates quality management in teaching and learning in higher education institutions from a principal-agent perspective. Based on data gained from semi-structured interviews and from a nation-wide survey with quality managers of German higher education institutions, the study shows how quality managers position themselves in relation to their perception of the interests of other actors in higher education institutions. The paper describes the various interests and discusses the main implications of this constellation of actors. It argues that quality managers, although they may be considered as rather weak actors within the higher education institution, may be characterised as having a strength of weakness due to diverging interests of their principals.
Strategic social media use positively influences organizational goals such as the long-term accrual of social capital, and thus social media information governance has become an increasingly important organizational objective. It is particularly important for humanitarian nongovernmental organizations (HNGOs), whose work relies on accurate and timely information regarding socially altruistic behavior (donations, volunteerism, etc.). Despite the potential of social media for increasing social capital, tensions in governing social media information across an organization's different operational levels (regional, intermediate, and national) pose a difficult challenge. Prominent governance frameworks offer little guidance, as their focus on control and incremental policymaking is largely incompatible with the processes, roles, standards, and metrics needed for managing self-governing social media. This study offers a notion of dynamic and co-evolutionary process management of multi-level organizations as a means of conceptualizing social media information governance for the accrual of organizational social capital. Based on interviews with members of HNGOs, this study reveals tensions that emerge within eight focus areas of accruing social capital in multi-level organizations, explains how dynamic process management can ease those tensions, and proposes corresponding strategy recommendations.
With the latest technological developments and the associated new possibilities in teaching, the personalisation of learning is gaining ever more importance. It assumes that individual learning experiences and outcomes can generally be improved when personal learning preferences are taken into account. To do justice to the complexity of the possibilities for personalising teaching and learning processes, we illustrate the components of learning and teaching in the digital environment and their interdependencies in an initial model. Furthermore, in a pre-study, we investigate the relationships between the learner's capacity for (digital) self-organisation, the learner's prior knowledge, learning in different variants of delivery mode, and learning outcomes as one part of this model. With this pre-study, we take the first step towards a holistic model of teaching and learning in digital environments.
This book subjects the role of municipalities to a European comparison. Categories such as municipal autonomy, task profiles, and territorial, political and financial framework conditions are compared. Past and ongoing reform trends and discourses are also described and contextualized. The study is a comprehensive secondary analysis and processes current figures from various sources. It was conducted by a team led by Prof. Sabine Kuhlmann of the Chair of Political Science, Administration and Organization at the University of Potsdam.
Alcohol use disorder (AUD) is the most common substance use disorder worldwide. Although dopamine-related findings were often observed in AUD, associated neurobiological mechanisms are still poorly understood. Therefore, in the present study, we investigate D2/3 receptor availability in healthy participants, participants at high risk (HR) to develop addiction (not diagnosed with AUD), and AUD patients in a detoxified stage, applying F-18-fallypride positron emission tomography (F-18-PET). Specifically, D2/3 receptor availability was investigated in (1) 19 low-risk (LR) controls, (2) 19 HR participants, and (3) 20 AUD patients after alcohol detoxification. Quality and severity of addiction were assessed with clinical questionnaires and (neuro)psychological tests. PET data were corrected for age of participants and smoking status. In the dorsal striatum, we observed significant reductions of D2/3 receptor availability in AUD patients compared with LR participants. Further, receptor availability in HR participants was observed to be intermediate between LR and AUD groups (linearly decreasing). Still, in direct comparison, no group difference was observed between LR and HR groups or between HR and AUD groups. Further, the score of the Alcohol Dependence Scale (ADS) was inversely correlated with D2/3 receptor availability in the combined sample. Thus, in line with a dimensional approach, striatal D2/3 receptor availability showed a linear decrease from LR participants to HR participants to AUD patients, which was paralleled by clinical measures. Our study shows that a core neurobiological feature in AUD seems to be detectable in an early, subclinical state, allowing more individualized alcohol prevention programs in the future.
White mica and tourmaline are the dominant hydrothermal alteration minerals at the world-class Panasqueira W-Sn-Cu deposit in Portugal. Thus, understanding the controls on their chemical composition helps to constrain ore formation processes at this deposit and determine their usefulness as pathfinder minerals for mineralization in general. We combine whole-rock geochemistry of altered and unaltered metasedimentary host rocks with in situ LA-ICP-MS measurements of tourmaline and white mica from the alteration halo. Principal component analysis (PCA) is used to better identify geochemical patterns and trends of hydrothermal alteration in the datasets. The hydrothermally altered metasediments are enriched in As, Sn, Cs, Li, W, F, Cu, Rb, Zn, Tl, and Pb relative to unaltered samples. In situ mineral analyses show that most of these elements preferentially partition into white mica over tourmaline (Li, Rb, Cs, Tl, W, and Sn), whereas Zn is enriched in tourmaline. White mica has distinct compositions in different settings within the deposit (greisen, vein selvages, wall rock alteration zone, late fault zone), indicating a compositional evolution with time. In contrast, tourmaline from different settings overlaps in composition, which is ascribed to a stronger dependence on host rock composition and also to the effects of chemical zoning and microinclusions affecting the LA-ICP-MS analyses. Hence, in this deposit, white mica is the better recorder of the fluid composition. The calculated trace-element contents of the Panasqueira mineralizing fluid based on the mica data and estimates of mica-fluid partition coefficients are in good agreement with previous fluid-inclusion analyses. A compilation of mica and tourmaline trace-element compositions from Panasqueira and other W-Sn deposits shows that white mica has good potential as a pathfinder mineral, with characteristically high Li, Cs, Rb, Sn, and W contents. The trace-element contents of hydrothermal tourmaline are more variable. Nevertheless, the compiled data suggest that high Sn and Li contents are distinctive for tourmaline from W-Sn deposits.
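A minimal sketch of such a PCA workflow on whole-rock data, with a hypothetical input table (file name and column layout are assumptions for illustration):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical table: one row per sample, columns = element concentrations.
df = pd.read_csv("whole_rock_geochem.csv")
elements = ["As", "Sn", "Cs", "Li", "W", "F", "Cu", "Rb", "Zn", "Tl", "Pb"]

X = StandardScaler().fit_transform(df[elements])  # centre and scale each element
pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)  # sample coordinates along the main trends

# Loadings reveal which elements drive each alteration trend.
loadings = pd.DataFrame(pca.components_.T, index=elements,
                        columns=["PC1", "PC2", "PC3"])
print(pca.explained_variance_ratio_)
print(loadings.round(2))
```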
In nature, plants are constantly exposed to many transient, but recurring, stresses. Thus, to complete their life cycles, plants require a dynamic balance between capacities to recover following cessation of stress and maintenance of stress memory. Recently, we uncovered a new functional role for macroautophagy/autophagy in regulating recovery from heat stress (HS) and resetting cellular memory of HS in Arabidopsis thaliana. Here, we demonstrated that NBR1 (next to BRCA1 gene 1) plays a crucial role as a receptor for selective autophagy during recovery from HS. Immunoblot analysis and confocal microscopy revealed that levels of the NBR1 protein, NBR1-labeled puncta, and NBR1 activity are all higher during the HS recovery phase than before. Co-immunoprecipitation analysis of proteins interacting with NBR1 and comparative proteomic analysis of an nbr1-null mutant and wild-type plants identified 58 proteins as potential novel targets of NBR1. Cellular, biochemical and functional genetic studies confirmed that NBR1 interacts with HSP90.1 (heat shock protein 90.1) and ROF1 (rotamase FKBP 1), a member of the FKBP family, and mediates their degradation by autophagy, which represses the response to HS by attenuating the expression of HSP genes regulated by the HSFA2 transcription factor. Accordingly, loss-of-function mutation of NBR1 resulted in a stronger HS memory phenotype. Together, our results provide new insights into the mechanistic principles by which autophagy regulates plant response to recurrent HS.
Induced point mutations are important genetic resources for their ability to create hypo- and hypermorphic alleles that are useful for understanding gene functions and breeding. However, such mutant populations have only been developed for a few temperate maize varieties, mainly B73 and W22, and no tropical maize inbred lines have been mutagenized and made available to the public to date. We developed a novel ethyl methanesulfonate (EMS) induced mutation resource in maize comprising 2050 independent M2 mutant families in the elite tropical maize inbred ML10. By phenotypic screening, we showed that this population is comparable in quality to other mutagenized populations in maize. To illustrate the usefulness of this population for gene discovery, we performed rapid mapping-by-sequencing to clone a fasciated-ear mutant and identify a causal promoter deletion in ZmCLE7 (CLE7). Our mapping procedure does not require crossing to an unrelated parent and is thus suitable for mapping subtle traits and traits affected by heterosis. This first EMS population in tropical maize is expected to be very useful for the maize research community. Moreover, the EMS mutagenesis and rapid mapping-by-sequencing pipeline described here illustrates the power of performing forward genetics in diverse maize germplasms of choice, which can lead to novel gene discovery due to divergent genetic backgrounds.
The European Alps are among the regions with the highest glacier mass loss rates of the last decades. Under the threat of ongoing climate change, the ability to predict glacier mass balance changes for water and risk management purposes has become imperative. This raises an urgent need for reliable glacier models. The European Alps host not only glaciers but also numerous caves containing carbonate formations, called speleothems. Previous studies have shown that these speleothems also grew during times when the cave was covered by a warm-based glacier. In this thesis, I utilise speleothems from the European Alps as archives of local environmental conditions related to mountain glacier evolution.
Previous studies have shown that speleothem isotope data from the Alps can be strongly affected by in-cave processes. Therefore, part of this thesis focusses on developing an isotope evolution model, which successfully reproduces differences between contemporaneously growing speleothems. The model is used to propose correction approaches for the effects of prior calcite precipitation on speleothem oxygen isotopes (δ18O). Applications to speleothem records from caves outside the Alps demonstrate that the corrected δ18O agrees better with other records and climate model simulations.
Existing speleothem growth histories and carbon isotope (δ13C) records from Alpine caves located at different elevations are used to infer soil versus glacier cover and the thermal regime of the glacier over the last glacial cycle. The compatibility with glacier evolution models is statistically assessed. A general agreement between speleothem δ13C-derived information on soil versus glacier presence and modelled glacier coverage is found. However, glacier retreat during Marine Isotope Stage (MIS) 3 seems to be underestimated by the model. Furthermore, the speleothem data provide evidence of surface temperatures above the freezing point that are, however, not fully reproduced by the simulations.
The history of glacier cover and its thermal regime is explored for the high-elevation Melchsee-Frutt cave system in the Swiss Alps. Based on new (MIS 9b – MIS 7b, MIS 2) and existing speleothem δ13C (MIS 7a – 5d) data, warm-based glacier cover is inferred for MIS 8, 7d, 6, and 2. A short period of cold-based ice cover is also found for early MIS 6. A detailed multi-proxy analysis (δ18O, δ13C, Mg/Ca and Sr/Ca) reveals millennial-scale changes in the glacier-related source of the water infiltrating the karst during MIS 8 and 7d, which are linked to Northern Hemisphere climate variability.
While speleothem records from high-elevation cave sites in the Alps exhibit huge potential for glacier reconstruction, several limitations remain, which are discussed throughout this thesis. Ultimately, recommendations are given to further leverage subglacial speleothems as an archive of glacier dynamics.
Galaxy morphology is a fossil record of how galaxies formed and evolved and can be regarded as a function of the dynamical state of a galaxy. It encodes the physical processes that dominate a galaxy's evolutionary history and is strongly aligned with physical properties such as stellar mass, star formation rate and local environment. At distances of ∼50 and 60 kpc, the Magellanic Clouds represent the nearest interacting pair of dwarf irregular galaxies to the Milky Way, rendering them an important test bed for galaxy morphology in the context of galaxy interactions and of the local environment in which they reside. The Large Magellanic Cloud is classified as the prototype of Magellanic spiral galaxies, with one prominent spiral arm, an offset bar and an inclined rotating disc, while the Small Magellanic Cloud is classified as a dwarf irregular galaxy and is known for its unstructured shape and large depth along the line of sight. Resolved stellar populations are powerful probes of a wide range of astrophysical phenomena; the proximity of the Magellanic Clouds allows us to resolve their stellar populations into individual stars that share coherent chemical and age distributions. The coherent properties of resolved stellar populations enable us to analyse them as a function of position within the Magellanic Clouds, offering a picture of the growth of the galaxies' substructures over time and yielding a comprehensive view of their morphology. Furthermore, investigating the kinematics of the Magellanic Clouds offers valuable insights into their dynamics and evolutionary history. By studying the motions and velocities of stars within these galaxies, we can trace their past interactions, with the Milky Way or with each other, and unravel the complex interplay of forces that has influenced the Magellanic Clouds' formation and evolution.
In Chapter 2, the VISTA survey of the Magellanic Clouds was employed to generate unprecedented high-resolution morphological maps of the Magellanic Clouds in the near-infrared. Utilising colour-magnitude diagrams and theoretical evolutionary models to segregate stellar populations, this approach enabled a comprehensive age tomography of the galaxies. It revealed previously uncharacterised features in their central regions at spatial resolutions of 0.13 kpc (Large Magellanic Cloud) and 0.16 kpc (Small Magellanic Cloud); the findings showcased the impact of tidal interactions on their inner regions. Notably, the study highlighted the enhanced coherent structures in the Large Magellanic Cloud, shedding light on the significant role of the recent interaction between the Magellanic Clouds, about 200 Myr ago, in shaping many of the fine structures. The Small Magellanic Cloud revealed asymmetry in its younger populations and irregularities in intermediate-age ones, pointing towards the influence of past tidal interactions.
In Chapter 3, an examination of the outskirts of the Magellanic Clouds led to the identification of new substructures through the use of near-infrared photometry from the VISTA Hemisphere Survey and multi-dimensional phase-space information from Gaia. The distances and proper motions of these substructures were investigated. This analysis revealed the impact of past Magellanic Clouds’ interactions and the influence of the Milky Way’s tidal field on the morphology and kinematics of the Magellanic Clouds. A bi-modal distance distribution was identified within the luminosity function of the red clump stars in the Small Magellanic Cloud, notably in its eastern regions, with the foreground substructure being attributed to the Magellanic Clouds’ interaction around 200 Myr ago. Furthermore, associations with the Counter Bridge and Old Bridge were uncovered through the detection of background and foreground structures in various regions of the SMC.
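The bi-modal distance distribution can be illustrated by fitting a two-component Gaussian mixture to red clump magnitudes; the sketch below uses simulated values, not the actual VISTA measurements:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated Ks magnitudes of red clump stars: a main body plus a brighter
# foreground population (all values illustrative only).
rng = np.random.default_rng(0)
mags = np.r_[rng.normal(17.0, 0.15, 4000), rng.normal(16.6, 0.15, 1200)]

gmm = GaussianMixture(n_components=2, random_state=0).fit(mags.reshape(-1, 1))
mu = np.sort(gmm.means_.ravel())
# For a standard candle, a magnitude offset maps to a relative distance:
# d2 / d1 = 10 ** (delta_m / 5)
print(f"peaks at {mu[0]:.2f} and {mu[1]:.2f} mag,"
      f" distance ratio {10 ** ((mu[1] - mu[0]) / 5):.2f}")
```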
In Chapter 4, a detailed kinematic analysis of the Small Magellanic Cloud was conducted using spectra from the European Southern Observatory Science Archive Facility. The study reveals distinct kinematics in the Wing and bar regions, attributed to interactions with the Large Magellanic Cloud and variations in star formation history. Notably, velocity disparities are observed in the bar's young main sequence stars, aligning with specific star-forming episodes and suggesting potential galactic stretching or tidal stripping, as corroborated by proper motion studies.
As followers become more educated and better connected, empowering leadership has gained traction as an alternative to traditional top-down models of leadership. Several scholars have investigated the relationship between empowering leadership and other variables in different contexts. As most previous studies have focused on the positive aspects of empowering leadership, research on its potential dark side is scarce. Furthermore, no previous study has examined whether and how the transfer of workload from followers to leaders can occur over time, a transfer that I propose can lead to emotional exhaustion and work-family conflict among leaders. I therefore propose that despite the positive outcomes of empowering leadership for both followers and leaders, it may also trigger negative outcomes capable of affecting the well-being of leaders. Drawing on Conservation of Resources (COR) theory, Job Demands-Resources (JD-R) theory, and the Too-Much-of-a-Good-Thing (TMGT) effect model, I investigated this idea. Using follower workload as a moderator, I proposed that the relationship between empowering leadership and leader workload is positive when follower workload is high and negative when follower workload is low. In addition, I examined how empowering leadership interacts with follower workload to affect leader emotional exhaustion and work-family conflict, mediated by leader workload. I proposed that this interaction results in a negative relationship between empowering leadership and both outcomes when follower workload is low, and a positive relationship when it is high.
I tested these hypotheses using data from a three-wave time-lagged field study with 65 leader-follower dyads consisting of civil servants from different administrative entities of India and Pakistan. The time lag between measurement waves was four weeks. At Time 1 (T1), followers answered questions about demographic characteristics, virtual interaction with their leaders, their workload, and the extent to which their leaders practice empowering leadership; leaders answered questions about demographic characteristics and their job satisfaction. At Time 2 (T2), leaders provided data on their own workload. Finally, at Time 3 (T3), leaders rated their emotional exhaustion and work-family conflict. A moderated mediation model was tested using PROCESS Model 7 in R. The findings reveal that empowering leadership, in combination with high follower workload, significantly increases the leader's workload. This increased leader workload, in turn, carries the interactive effect over to the levels of emotional exhaustion and work-family conflict experienced by leaders.
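The structure of such a first-stage moderated mediation (the pattern PROCESS Model 7 tests) can be sketched as two regressions plus a bootstrapped index of moderated mediation; the study itself used PROCESS in R, and the column names below are hypothetical:

```python
import numpy as np
import statsmodels.formula.api as smf

def index_of_moderated_mediation(df):
    """a3 * b1: how the indirect effect X -> M -> Y changes per unit of W."""
    # Mediator model: leader workload ~ empowering leadership x follower workload
    m = smf.ols("m_leader_workload ~ x_empowering * w_follower_workload", df).fit()
    # Outcome model: exhaustion ~ leader workload + direct effect of X
    y = smf.ols("y_exhaustion ~ m_leader_workload + x_empowering", df).fit()
    a3 = m.params["x_empowering:w_follower_workload"]
    b1 = y.params["m_leader_workload"]
    return a3 * b1

def bootstrap_ci(df, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI; an interval excluding 0 supports moderated mediation."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(df))
    stats = [index_of_moderated_mediation(df.iloc[rng.choice(idx, len(idx))])
             for _ in range(n_boot)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```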
This research offers various contributions to the leadership literature. While empowering leadership has been commonly associated with positive outcomes, my study reveals that it can also lead to negative outcomes. In addition, it shifts the focus of existing research from the effect of empowering leadership on followers to the consequences that it might have for leaders themselves. Overall, my research underscores the need for leaders to consider the potential counterproductive effects of empowering leadership and tailor their approach accordingly.
Human-induced climate change is impacting the global water cycle by, e.g., causing changes in precipitation patterns, evapotranspiration dynamics, cryosphere shrinkage, and complex streamflow trends. These changes, coupled with the increased frequency and severity of extreme hydrometeorological events like floods, droughts, and heatwaves, contribute to hydroclimatic disasters, posing significant implications for local and global infrastructure, human health, and overall productivity.
In the tropical Andes, climate change is evident through warming trends, glacier retreats, and shifts in precipitation patterns, leading to altered risks of floods and droughts, e.g., in the upper Amazon River basin. Projections for the region indicate rising temperatures, potential glacier disappearance or substantial shrinkage, and altered streamflow patterns, highlighting challenges in water availability due to these expected changes and growing human water demand. The evolving trends in hydroclimatic conditions in the tropical Andes present significant challenges to socioeconomic and environmental systems, emphasizing the need for a comprehensive understanding to guide effective adaptation policies and strategies in response to the impacts of climate change in the region.
The main objective of this thesis is to investigate current hydrological dynamics in the tropical Andes of Peru and Ecuador and their responses to climate change. Given the scarcity of hydrometeorological data in the region, this objective was accomplished through a comprehensive data preparation and analysis in combination with hydrological modeling using the Soil and Water Assessment Tool (SWAT) eco-hydrological model. In this context, the initial steps involved assessing, identifying, and/or generating more reliable climate input data to address data limitations.
The thesis introduces RAIN4PE, a high-resolution precipitation dataset for Peru and Ecuador, developed by merging satellite, reanalysis, and ground-based data with surface elevation through the random forest method. Further adjustments of precipitation estimates were made for catchments influenced by fog/cloud water input on the eastern side of the Andes using streamflow data and applying the method of reverse hydrology. RAIN4PE surpasses other global and local precipitation datasets, showcasing superior reliability and accuracy in representing precipitation patterns and simulating hydrological processes across the tropical Andes. This establishes it as the optimal precipitation product for hydrometeorological applications in the region.
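Conceptually, the random forest merging step can be sketched as a regression from candidate precipitation products and terrain onto gauge observations; the feature and file names below are assumptions for illustration, not the actual RAIN4PE configuration:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# One row per gauge-day: satellite and reanalysis precipitation estimates
# plus elevation as predictors, gauge precipitation as the target.
df = pd.read_csv("gauge_training_table.csv")
features = ["sat_precip", "reanalysis_precip", "elevation"]

rf = RandomForestRegressor(n_estimators=500, min_samples_leaf=5,
                           n_jobs=-1, random_state=0)
# Cross-validated skill against held-out gauges before producing the grid.
print(cross_val_score(rf, df[features], df["gauge_precip"],
                      scoring="neg_mean_absolute_error", cv=5).mean())

# After validation, fit on all gauges and predict on every grid cell.
rf.fit(df[features], df["gauge_precip"])
grid = pd.read_csv("grid_cells.csv")        # same feature columns per cell
grid["precip_merged"] = rf.predict(grid[features])
```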
Due to the significant biases and limitations of global climate models (GCMs) in representing key atmospheric variables over the tropical Andes, this study developed regionally adapted GCM simulations specifically tailored for Peru and Ecuador. These simulations are known as the BASD-CMIP6-PE dataset, and they were derived using reliable, high-resolution datasets like RAIN4PE as reference data. The BASD-CMIP6-PE dataset shows notable improvements over raw GCM simulations, reflecting enhanced representations of observed climate properties and accurate simulation of streamflow, including high and low flow indices. This renders it suitable for assessing regional climate change impacts on agriculture, water resources, and hydrological extremes.
In addition to generating more accurate climatic input data, a reliable hydrological model is essential for simulating watershed hydrological processes. To tackle this challenge, the thesis presents an innovative multiobjective calibration framework integrating remote sensing vegetation data, baseflow index, discharge goodness-of-fit metrics, and flow duration curve signatures. In contrast to traditional calibration strategies relying solely on discharge goodness-of-fit metrics, this approach enhances the simulation of vegetation, streamflow, and the partitioning of flow into surface runoff and baseflow in a typical Andean catchment. The refined hydrological model calibration strategy was applied to conduct reliable simulations and understand current and future hydrological trajectories in the tropical Andes.
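The spirit of such a multiobjective calibration can be condensed into a single weighted score over several diagnostics; the weights and signature choices below are placeholders, not the thesis' exact formulation:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs. observed discharge."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def fdc_error(sim, obs, quantiles=(0.05, 0.3, 0.7, 0.95)):
    """Relative error of flow duration curve signatures."""
    qs, qo = (np.quantile(x, quantiles) for x in (sim, obs))
    return np.mean(np.abs(qs - qo) / qo)

def objective(sim_q, sim_baseflow, obs_q, bfi_obs, w=(0.5, 0.25, 0.25)):
    """Lower is better; candidate parameter sets are ranked by this score."""
    bfi_sim = sim_baseflow.sum() / sim_q.sum()
    return (w[0] * (1 - nse(sim_q, obs_q))      # discharge goodness of fit
            + w[1] * abs(bfi_sim - bfi_obs)     # baseflow partitioning
            + w[2] * fdc_error(sim_q, obs_q))   # flow regime signatures
```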
By establishing a region-suitable and thoroughly tested hydrological model with high-resolution and reliable precipitation input data from RAIN4PE, this study provides new insights into the spatiotemporal distribution of water balance components in Peru and transboundary catchments. Key findings underscore the estimation of Peru's total renewable freshwater resource (total river runoff of 62,399 m³/s), with the Peruvian Amazon basin contributing 97.7%. Within this basin, the Amazon-Andes transition region emerges as a pivotal hotspot for water yield (precipitation minus evapotranspiration), characterized by abundant rainfall and lower atmospheric water demand/evapotranspiration. This finding underlines its paramount role in influencing the hydrological variability of the entire Amazon basin.
Subsurface hydrological pathways, particularly baseflow from aquifers, strongly influence water yield in lowland and Andean catchments, sustaining streamflow, especially during the extended dry season. Water yield demonstrates an elevation- and latitude-dependent increase in the Pacific Basin (catchments draining into the Pacific Ocean), while it follows an unimodal curve in the Peruvian Amazon Basin, peaking in the Amazon-Andes transition region. This observation indicates an intricate relationship between water yield and elevation.
In Amazon lowland rivers, particularly the Ucayali River, floodplains play a significant role in shaping streamflow seasonality by attenuating and delaying peak flows for up to two months during periods of high discharge. This observation underscores the critical importance of incorporating floodplain dynamics into hydrological simulations and river management strategies for accurate modeling and effective water resource management.
Hydrological responses vary across different land use types in high Andean catchments. Pasture areas exhibit the highest water yield, while agricultural areas and mountain forests show lower yields, emphasizing the importance of puna (high-altitude) ecosystems, such as pastures, páramos, and bofedales, in regulating natural storage.
Projected future hydrological trajectories were analyzed by driving the hydrological model with regionalized GCM simulations provided by the BASD-CMIP6-PE dataset. The analysis considered sustainable (low warming, SSP1-2.6) and fossil fuel-based development (high-end warming, SSP5-8.5) scenarios for the mid (2035-2065) and end (2065-2095) of the century. The projected changes in water yield and streamflow across the tropical Andes exhibit distinct regional and seasonal variations, particularly amplified under a high-end warming scenario towards the end of the century. Projections suggest year-round increases in water yield and streamflow in the Andean regions and decreases in the Amazon lowlands, with exceptions such as the northern Amazon expecting increases during wet seasons. Despite these regional differences, the upper Amazon River's streamflow is projected to remain relatively stable throughout the 21st century. Additionally, projections anticipate a decrease in low flows in the Amazon lowlands and an increased risk of high flows (floods) in the Andean and northern Amazon catchments.
This thesis significantly contributes to enhancing climatic data generation, overcoming regional limitations that previously impeded hydrometeorological research, and creating new opportunities. It plays a crucial role in advancing hydrological model calibration, improving the representation of internal hydrological processes, and achieving accurate results for the right reasons. Novel insights into current hydrological dynamics in the tropical Andes are fundamental for improving water resource management. The anticipated intensified changes in water flows and hydrological extreme patterns under a high-end warming scenario highlight the urgency of implementing emissions mitigation and adaptation measures to address the heightened impacts on water resources.
In fact, the new datasets (RAIN4PE and BASD-CMIP6-PE) have already been utilized by researchers and experts in regional and local-scale projects and catchments in Peru and Ecuador. For instance, they have been applied in river catchments such as Mantaro, Piura, and San Pedro to analyze local historical and future developments in climate and water resources.
Prediction is often regarded as a central and domain-general aspect of cognition. This proposal extends to language, where predictive processing might enable the comprehension of rapidly unfolding input by anticipating upcoming words or their semantic features. To make these predictions, the brain needs to form a representation of the predictive patterns in the environment. Predictive processing theories suggest a continuous learning process that is driven by prediction errors, but much is still to be learned about this mechanism in language comprehension. This thesis therefore combined three electroencephalography (EEG) experiments to explore the relationship between prediction and implicit learning at the level of meaning.
Results from Study 1 support the assumption that the brain constantly infers and updates probabilistic representations of the semantic context, potentially across multiple levels of complexity. N400 and P600 brain potentials could be predicted by semantic surprise based on a probabilistic estimate of previous exposure and on a more complex probability representation, respectively.
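The notion of semantic surprise can be made concrete as surprisal, the negative log probability of a word given its context. The toy exposure-count model below is an illustration, not the study's actual probabilistic models:

```python
import numpy as np
from collections import Counter

class ExposureModel:
    """Estimates p(word | context) from accumulated exposure counts."""
    def __init__(self):
        self.counts = Counter()          # (context, word) co-occurrences
        self.context_totals = Counter()  # exposures per context

    def update(self, context, word):
        self.counts[(context, word)] += 1
        self.context_totals[context] += 1

    def surprisal(self, context, word, vocab_size=1000):
        # Add-one smoothing so unseen words receive finite surprisal.
        p = ((self.counts[(context, word)] + 1)
             / (self.context_totals[context] + vocab_size))
        return -np.log2(p)  # in bits; larger values = more unexpected input
```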
Subsequent work investigated the influence of prediction errors on the update of semantic predictions during sentence comprehension. In line with error-based learning, unexpected sentence continuations in Study 2, characterized by large N400 amplitudes, were associated with increased implicit memory compared to expected continuations. Further, Study 3 indicates that prediction errors not only strengthen the representation of the unexpected word, but also update specific predictions made from the respective sentence context. The study additionally provides initial evidence that the amount of unpredicted information, as reflected in N400 amplitudes, drives this update of predictions, irrespective of the strength of the original incorrect prediction.
Together, these results support a central assumption of predictive processing theories: a probabilistic predictive representation at the level of meaning that is updated by prediction errors. They further propose the N400 ERP component as a possible learning signal. The results also emphasize the need for further research on the role of late positive ERP components in error-based learning. The continuous error-based adaptation described in this thesis allows the brain to improve its predictive representation with the aim of making better predictions in the future.
On the effects of disorder on the ability of oscillatory or directional dynamics to synchronize
(2024)
In this thesis I present a collection of publications of my work, containing analytic results and observations from numerical experiments on the effects of various inhomogeneities on the ability of coupled oscillators to synchronize their collective dynamics. Most of these works are concerned with the effects of Gaussian and non-Gaussian noise acting on the phase of autonomous oscillators (Secs. 2.1-2.4) or on the direction of higher-dimensional state vectors (Secs. 2.5, 2.6). I obtain exact and approximate solutions to the non-linear equations governing the distributions of phases, or perform linear stability analysis of the uniform distribution to obtain the transition point from a completely disordered state to partial order or more complicated collective behavior. Other inhomogeneities that can affect the synchronization of coupled oscillators are irregular, chaotic oscillations or a complex, possibly random, structure of the coupling network. In Section 2.9 I present a new method to define the phase and frequency linear response functions for chaotic oscillators. In Sections 2.4, 2.7 and 2.8 I study synchronization in complex networks of coupled oscillators. Each section in Chapter 2 (Manuscripts) is devoted to one research paper and begins with a list of the main results, a description of my contributions to the work, and a short account of the scientific context, i.e. the questions and challenges that started the research and the relation of the work to my other research projects. The manuscripts in this thesis are reproductions of the arXiv versions, i.e. preprints under the Creative Commons licence.
Data preparation is a cornerstone of data science workflows, commanding approximately 80% of a data scientist's time. This extensive time consumption is primarily attributed to the challenge of devising tailored solutions for downstream tasks. The complexity is further magnified by the inadequate availability of metadata, the often ad hoc nature of preparation tasks, and the necessity for data scientists to master a diverse range of sophisticated tools, each with its own intricacies and demands for proficiency.
Previous research in data management has traditionally concentrated on preparing the content within columns and rows of a relational table, addressing tasks, such as string disambiguation, date standardization, or numeric value normalization, commonly referred to as data cleaning. This focus assumes a perfectly structured input table. Consequently, the mentioned data cleaning tasks can be effectively applied only after the table has been successfully loaded into the respective data cleaning environment, typically in the later stages of the data processing pipeline.
While current data cleaning tools are well-suited for relational tables, extensive data repositories frequently contain data stored in plain text files, such as CSV files, due to their adaptable standard. Consequently, these files often exhibit tables with a flexible layout of rows and columns, lacking a relational structure. This flexibility often results in data being distributed across cells in arbitrary positions, typically guided by user-specified formatting guidelines.
Effectively extracting and leveraging these tables in subsequent processing stages necessitates accurate parsing. This thesis emphasizes what we define as the "structure" of a data file: the fundamental characters within a file that are essential for parsing and comprehending its content. Concentrating on the initial stages of the data preprocessing pipeline, this thesis addresses two crucial aspects: comprehending the structural layout of a table within a raw data file, and automatically identifying and rectifying any structural issues that might hinder its parsing. Although these issues may not directly affect the table's content, they pose significant challenges to parsing the table within the file.
Our initial contribution comprises an extensive survey of commercially available data preparation tools. This survey thoroughly examines their distinctive features, the features they lack, and the preliminary data processing still required despite these tools. The primary goal is to elucidate the current state of the art in data preparation systems while identifying areas for enhancement. Furthermore, the survey explores the challenges encountered in data preprocessing, emphasizing opportunities for future research and improvement.
Next, we propose a novel data preparation pipeline designed for detecting and correcting structural errors. The aim of this pipeline is to assist users at the initial preprocessing stage by ensuring the correct loading of their data into their preferred systems. Our approach begins by introducing SURAGH, an unsupervised system that utilizes a pattern-based method to identify dominant patterns within a file, independent of external information, such as data types, row structures, or schemata. By identifying deviations from the dominant pattern, it detects ill-formed rows. Subsequently, our structure correction system, TASHEEH, gathers the identified ill-formed rows along with dominant patterns and employs a novel pattern transformation algebra to automatically rectify errors. Our pipeline serves as an end-to-end solution, transforming a structurally broken CSV file into a well-formatted one, usually suitable for seamless loading.
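To illustrate the pattern-based idea (a toy sketch, not the actual SURAGH algorithm), one can abstract each CSV line into a syntactic pattern and flag lines deviating from the dominant pattern as ill-formed:

```python
import re
from collections import Counter

def row_pattern(line: str) -> str:
    """Abstract character runs: digits -> D, letters -> A; keep delimiters/quotes."""
    line = re.sub(r"[0-9]+", "D", line)
    return re.sub(r"[A-Za-z]+", "A", line)

def detect_ill_formed(lines):
    """Return indices of rows deviating from the most frequent row pattern."""
    patterns = [row_pattern(l.rstrip("\n")) for l in lines]
    dominant, _ = Counter(patterns).most_common(1)[0]
    return [i for i, p in enumerate(patterns) if p != dominant], dominant

rows = ['1,"alpha",3.5\n', '2,"beta",4.1\n',
        '# comment dropped by loader\n', '3,"gamma",5.0,stray\n']
bad, dom = detect_ill_formed(rows)
print(dom, bad)   # dominant pattern of well-formed rows; flags rows 2 and 3
```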
Finally, we introduce MORPHER, a user-friendly GUI integrating the functionalities of both SURAGH and TASHEEH. This interface empowers users to access the pipeline's features through visual elements. Our extensive experiments demonstrate the effectiveness of our data preparation systems, requiring no user involvement. Both SURAGH and TASHEEH outperform existing state-of-the-art methods significantly in both precision and recall.
The aim of this thesis is to convey the ancient relationship between humans and their natural environment in Latin instruction, and to compare it with the situation today. That relationship is explored using the example of ancient mining, a particularly vivid field of environmental history: it is highly topical and offers great potential for drawing insights for the present from its study.
A teaching concept is presented that at the same time analyses human perception of nature. First, the heterogeneity of this perception in antiquity is demonstrated and related to the criticism of mining voiced at the time. The following aspects are then addressed: 1. ancient mining technology and practice, 2. the working conditions prevailing at the time, 3. the raw materials extracted and their uses, and 4. the consequences of mining for humans and the environment. The didactic part consists of a plan for three double lessons; it contains the teaching materials, the accompanying explanations, and the expected student outcomes.
Das Professionswissen von Studierenden des Lehramts Primarstufe im Bereich „Haus der Vierecke“
(2024)
The professionalization of prospective teachers, as a significant lever for school education, is a key task of university teaching. It constitutes one pillar of the university reform project "PSI-Potsdam" within the framework of the "Qualitätsoffensive Lehrerbildung". The aim is quality assurance through the evaluation and further development of courses, using design principles for imparting professional knowledge.
This thesis focuses on the effectiveness of the course "Geometrie und ihre Didaktik 1 und 2" and examines, by way of example, the extent to which pre-service primary school mathematics teachers have acquired the subject-matter and subject-didactic knowledge of concept formation targeted there, using the "house of quadrilaterals" as an example. Building appropriate mental models of the various types of quadrilaterals and relating them to one another hierarchically requires an active process, in line with the didactic model of learning geometric concepts, and thus poses a difficulty for learners at school and university alike.
To answer the research question, a qualitative study with a mixed-methods design first surveyed 95 students in writing about their knowledge of the topic. A focus group interview was then conducted to identify learning hurdles and difficulties. The data were evaluated by computer-assisted qualitative content analysis.
The results reflect a wide variety of competence levels across all relevant facets. Deficits, particularly in the form of misconceptions, emerged in the required perspective-taking, the identification of causes, and model-guided proposals for their prevention. There were also difficulties in applying and integrating the required professional knowledge across all knowledge components considered. From this, suggestions are derived, on the one hand, for developing the course in order to strengthen the subject-matter basis of future teachers: handling prototypical representations more sensitively and strengthening students' concept formation, for instance by making the relationships within the house of quadrilaterals in the spiral curriculum explicit at a meta-level. On the other hand, suggestions concern the study design, specifically the structure of the survey for effectively eliciting the professional knowledge in focus; among other things, an explicit elicitation of students' own conceptions and a reformulation of the knowledge-test task using operators are proposed.
"Over the past decades, the call for sustainable development has grown ever louder in the face of numerous global challenges affecting humanity as a whole" (Kropp, 2019, p. 4).
Education for sustainable development (ESD; German: Bildung für nachhaltige Entwicklung, BNE) pursues the goal of enabling people to actively confront these global challenges, to help shape their own future, and to take responsibility for the future of subsequent generations. General science teaching (Sachunterricht) in primary school likewise faces the task of translating the principles of ESD into school practice. At its centre stands the question of suitable approaches to this perspective-connecting topic, approaches that should be both motivating for pupils and educationally effective. When implemented appropriately, beekeeping can provide such an approach within school teaching.
Volume 3 of the Potsdamer Beiträge zur Innovation des Sachunterrichts, oriented towards school practice, therefore uses the example of beekeeping to present a concept for how primary school Sachunterricht can enable children's practical learning activity in line with the goals, dimensions and competence expectations of education for sustainable development. As a foundational work, the volume is addressed to all teachers of Sachunterricht and its related subjects, as well as to other interested readers.
Quantified self, the proactive self-tracking of individuals, has developed from a niche application into a mass phenomenon in recent years. Users today have a wide range of technical aids at their disposal, for example smartphones, fitness trackers or health apps, which permit nearly seamless monitoring of various contextual factors of an individual's everyday life.
This thesis therefore addresses, among other things, the question of the extent to which this intensive, self-initiated engagement, particularly with health-related data that are widely regarded as objectified and therefore reliable, can increase the health literacy of such active individuals. It also examines the extent to which the new technologies are able to deepen specific medical insights and, consequently, to change the resulting treatment processes.
While the origins of the quantified self movement lie in the second (privately paid) healthcare market, this thesis asks which structural, personnel and procedural points of contact will exist in the first (statutory) healthcare market once a potential patient, in a more emancipated manner, wishes or demands to integrate his or her collected health data into medical treatment as comprehensively as possible.
On the one hand, current developments in the second healthcare market are examined, which are characterized by high dynamism and considerable opacity. On the other hand stands the first healthcare market, regarded as heavily regulated and barely digitalized, with its long development cycles and the pronounced particular interests of its various stakeholders.
In this context, current developments in the underlying legal framework are examined, especially with regard to more patient-centred and digitalized norms, with the Digital Healthcare Act (Digitale-Versorgung-Gesetz) playing a particularly important role.
The aim of the thesis is a deeper understanding of the interactions at the interface between the two healthcare markets with respect to the use of self-tracking technologies, in order to identify future business potential for established service providers and new market entrants.
The central method is a Delphi study which, in an interprofessional approach, attempts to outline a picture of these still very young developments for the year 2030. The results are embedded in an examination of the general societal acceptance of the outlined changes.
Due to their sessile lifestyle, plants are constantly exposed to pathogens and possess a multi-layered immune system that prevents infection. The first layer of immunity, called pattern-triggered immunity (PTI), enables plants to recognise highly conserved molecules present in pathogens, resulting in immunity against non-adapted pathogens. Adapted pathogens interfere with PTI; however, the second layer of plant immunity can recognise these virulence factors, resulting in a constant evolutionary battle between plant and pathogen. Xanthomonas campestris pv. vesicatoria (Xcv) is the causal agent of bacterial leaf spot disease in tomato and pepper plants. Like many Gram-negative bacteria, Xcv possesses a type-III secretion system, which it uses to translocate type-III effectors (T3Es) into plant cells. Xcv has over 30 T3Es that interfere with the immune response of the host and are important for successful infection. One such effector is the Xanthomonas outer protein M (XopM), which shows no similarity to any other known protein. The characterisation of XopM and its role in virulence was the focus of this work.
While screening a tobacco cDNA library for potential host target proteins, the vesicle-associated membrane protein (VAMP)-associated protein 1-2 like (VAP12) was identified. The interaction between XopM and VAP12 was confirmed in the model species Nicotiana benthamiana and Arabidopsis, as well as in tomato, an Xcv host. As plants possess multiple VAP proteins, it was determined that the interaction of XopM and VAP is isoform-specific.
It could be confirmed that the major sperm protein (MSP) domain of NtVAP12 is sufficient for binding XopM and that binding can be disrupted by substituting a single amino acid (T47) within this domain. Most VAP interactors have at least one FFAT-related motif (two phenylalanines [FF] in an acidic tract); screening the amino acid sequence of XopM revealed two such FFAT-related motifs. Substitution of the second residue of each FFAT motif (Y61/F91) disrupts NtVAP12 binding, suggesting that these motifs cooperatively mediate the interaction. Structural modelling using AlphaFold further indicated that the unstructured N-terminus of XopM binds NtVAP12 at its MSP domain, which was corroborated by the generation of truncated XopM variants.
Infection of pepper leaves with a XopM-deficient Xcv strain did not reduce virulence compared with the Xcv wild type, showing that the function of XopM during infection is redundant. Virus-induced gene silencing of NbVAP12 in N. benthamiana plants also did not affect Xcv virulence, further indicating that the interaction with VAP12 is non-essential for Xcv virulence. Despite these findings, ectopic expression of wild-type XopM and XopMY61A/F91A in transgenic Arabidopsis seedlings enhanced the growth of a non-pathogenic Pseudomonas syringae pv. tomato (Pst) DC3000 strain. XopM was found to interfere with the PTI response, allowing Pst growth independent of its binding to VAP. Furthermore, transiently expressed XopM could suppress the production of reactive oxygen species (ROS; one of the earliest PTI responses) in N. benthamiana leaves. The FFAT double mutant XopMY61A/F91A as well as the C-terminal truncation variant XopM106-519 could still suppress the ROS response, while the N-terminal variant XopM1-105 did not. Suppression of ROS production is therefore independent of VAP binding. In addition, tagging the C-terminal variant of XopM with a nuclear localisation signal (NLS; NLS-XopM106-519) resulted in significantly higher ROS production than the membrane-localising XopM106-519 variant, indicating that XopM-induced ROS suppression is localisation-dependent.
To further characterise XopM, mass spectrometry techniques were used to identify post-translational modifications (PTMs) and potential interaction partners. PTM analysis revealed that XopM contains up to 21 phosphorylation sites, which could influence VAP binding. Furthermore, proteins of the Rab family were identified as potential plant interaction partners. Rab proteins serve a multitude of functions, including vesicle trafficking, and have previously been identified as T3E host targets. Taking this into account, a model of XopM virulence was proposed, in which XopM anchors itself to VAP proteins to potentially access plasma membrane-associated proteins. XopM possibly interferes with vesicle trafficking, which in turn suppresses ROS production through an unknown mechanism.
In this work it was shown that XopM targets VAP proteins. The data collected suggest that this T3E uses VAP12 to anchor itself in the right place to carry out its function. While more work is needed to determine how XopM contributes to the virulence of Xcv, this study sheds light on how adapted pathogens overcome the immune response of their hosts. It is hoped that such knowledge will contribute to the development of crops resistant to Xcv in the future.
Efraim Frisch (1873–1942) and Albrecht Mendelssohn Bartholdy (1874–1936) were, in the classical age of the intellectuals, (among other things) journal entrepreneurs and founders of the little magazines Der Neue Merkur (1914–1916/1919–1925) and Europäische Gespräche (1923–1933). They stand (not only with their journals) for one of modernity's repeated attempts to activate the resources opened up by the Enlightenment (democratic republicanism and universal, equal rights for all human beings), trusting in their global realizability. During the Weimar Republic they belonged to those republicans "who took Weimar seriously as a symbol and strove tenaciously and courageously to give the ideal concrete content" (Peter Gay). Their hitherto untold example takes its place in the history of democracy in European modernity, in the history of international societal relations, and in the history of the self-assertion of intellectual autonomy.
Spanning historical caesuras, the study covers the period from 1900 to around 1940 and offers essential insights into the biographies of Frisch and Mendelssohn Bartholdy, into the German-French and European-transatlantic world of the little (literary-political) magazines of the early twentieth century, and into the media-intellectual field of the late Kaiserreich and the Weimar Republic in its humanist-democratic-republican tendency. It also contains new findings on the history of the 'Heidelberger Vereinigung' (the working group for a politics of law) around Prince Max von Baden; on the German peace delegation at Versailles in 1919 and its afterlife in Hamburg; on the Handbuch der Politik; on the first official records publication of the Auswärtiges Amt, Die Große Politik der Europäischen Kabinette 1871–1914; and, finally, on the efforts of the 'internationalists' of the 1920s to bring about an effective outlawing of wars of aggression.
Archives have the task of preserving knowledge and making it accessible. The collections of the Museum für Naturkunde Berlin (MfN) grew substantially during the era of European colonial expansion. Natural history specimens from all over the world arrived in Berlin, accompanied by scholarly correspondence about them. The traces of these objects and letters can be followed in the museum's archive. Today, colonial contexts are largely regarded as contexts of injustice whose critical reappraisal is being demanded. To enable provenance research, it is therefore essential that museums and archives disclose their collections (as far as legally and ethically possible) and grant access to outside researchers.
This Master's thesis critically reflects on the respectful handling of archival records from colonial contexts and identifies fields of action for a culturally appropriate treatment of sensitive content. Specifically, the proposed options for action concern archival records from colonial contexts relating to Australia. Provenance research, sensitivity, multilingualism, Indigenous Cultural and Intellectual Property (ICIP), and platform and interface options for linking data and content are considered. The aim is to reflect on the handling of archival records from colonial contexts against the background of archives as sites of cultural memory.
Ensuring needs-based care in old age is one of the decisive tasks of our time. Germany's shortage of skilled workers and its demographic change strain the care system in several respects: in an ageing society, more and more people depend on continuous support, while low birth rates, and with them a shrinking share of the population of working age, entail a shortage of professional caregivers that is already palpable today.
To guarantee humane care in the long term, existing resources must be deployed in a more targeted way and additional reserves must be unlocked. Many hopes rest on technological innovation: digitalization is expected to make the healthcare system more efficient, for instance by simplifying or even automating time-consuming processes through artificial intelligence. In the context of care, the use of robotic assistance systems is being discussed.
For this reason, the Potsdam citizens' conference "Robotik in der Altenpflege?" ("Robotics in elderly care?") was initiated. To shape the future of care together, 3,500 Potsdam citizens were contacted and twenty-five participants were ultimately selected. In spring 2024 they came together to discuss the responsible use of robotics in care.
The declaration presented here is the outcome of the citizens' conference. It contains the participants' central positions.
The citizens' conference is part of the project E-cARE ("Ethics Guidelines for Socially Assistive Robots in Elderly Care: An Empirical-Participatory Approach"), conducted by the Junior Professorship for Medical Ethics with a Focus on Digitalization at the Faculty of Health Sciences Brandenburg, University of Potsdam.
Massive stars (M_ini > 8 M_sol) are the key feedback agents within galaxies, as they shape their surroundings via their powerful winds, ionizing radiation, and explosive supernovae. Most massive stars are born in binary systems, where interactions with their companions significantly alter their evolution and the feedback they deposit in their host galaxy. Understanding binary evolution, particularly in low-metallicity environments that serve as proxies for the early Universe, is crucial for interpreting the rest-frame ultraviolet spectra observed in high-redshift galaxies by telescopes like Hubble and James Webb.
This thesis aims to tackle this challenge by investigating in detail massive binaries within the low-metallicity environment of the Small Magellanic Cloud (SMC). From ultraviolet and multi-epoch optical spectroscopic data, we uncovered post-interaction binaries. To comprehensively characterize these binary systems, their stellar winds, and their orbital parameters, we use a multifaceted approach. The Potsdam Wolf-Rayet stellar atmosphere code is employed to obtain the stellar and wind parameters of the stars. Additionally, we perform consistent light and radial velocity fitting with the Physics of Eclipsing Binaries software, allowing for the independent determination of orbital parameters and component masses. Finally, we use these results to challenge the standard picture of stellar evolution and improve our understanding of low-metallicity stellar populations by calculating binary evolution models with the Modules for Experiments in Stellar Astrophysics code.
We discovered the first four O-type post-interaction binaries in the SMC (Chapters 2, 5, and 6). Their primary stars have temperatures similar to other OB stars and reside far from the helium zero-age main sequence, challenging the traditional view of binary evolution. Our stellar evolution models suggest this may be due to enhanced mixing after core-hydrogen burning. Furthermore, we discovered the most massive binary system known to date undergoing mass transfer (Chapter 3), offering a unique opportunity to test mass-transfer efficiency in extreme conditions. Our binary evolution calculations revealed unexpected evolutionary pathways for accreting stars in binaries, potentially providing the missing link to understanding the observed Wolf-Rayet population within the SMC (Chapter 4). The results presented in this thesis unveil the properties of massive binaries at low metallicity, which challenge the way the spectra of high-redshift galaxies are currently being analyzed, as well as our understanding of massive-star feedback within galaxies.
Astrophysical shocks, driven by explosive events such as supernovae, efficiently accelerate charged particles to relativistic energies. The majority of these shocks occur in collisionless plasmas, where the energy transfer is dominated by particle-wave interactions. Strong nonrelativistic shocks found in supernova remnants are plausible sites of galactic cosmic-ray production, and the observed emission indicates the presence of nonthermal electrons. To participate in the primary mechanism of energy gain, Diffusive Shock Acceleration, electrons must have highly suprathermal energies, implying a need for very efficient pre-acceleration. This poorly understood aspect of shock acceleration theory is known as the electron injection problem. Studying electron-scale phenomena requires fully kinetic particle-in-cell (PIC) simulations, which describe collisionless plasma from first principles.
Most published studies consider a homogeneous upstream medium, but turbulence is ubiquitous in astrophysical environments and is typically driven at magnetohydrodynamic scales, cascading down to kinetic scales. For the first time, I investigate how preexisting turbulence affects electron acceleration at nonrelativistic shocks using the fully kinetic approach. To accomplish this, I developed a novel simulation framework that allows the study of shocks propagating in turbulent media. It involves simulating slabs of turbulent plasma separately, which are then continuously inserted into a shock simulation. This demands matching of the plasma slabs at the interface. A new procedure for matching electromagnetic fields and currents prevents numerical transients, and the plasma evolves self-consistently. The versatility of this framework has the potential to render simulations more consistent with turbulent systems in various astrophysical environments.
In this thesis, I present the results of 2D3V PIC simulations of high-Mach-number nonrelativistic shocks with preexisting compressive turbulence in an electron-ion plasma. The chosen amplitudes of the density fluctuations (≲ 15%) agree with in situ measurements in the heliosphere and the local interstellar medium. I explored how these fluctuations impact the dynamics of upstream electrons, the driving of plasma instabilities, and electron heating and acceleration. My results indicate that while the presence of turbulence enhances variations in the upstream magnetic field, their levels remain too low to significantly influence the behavior of electrons at perpendicular shocks. However, the situation is different at oblique shocks. An external magnetic field inclined at an angle of 50° ≲ θ_Bn ≲ 75° relative to the shock normal allows fast electrons to escape toward the upstream region. An extended electron foreshock region forms, where these particles drive various instabilities. Results for an oblique shock with θ_Bn = 60° propagating in preexisting compressive turbulence show that the foreshock becomes significantly shorter and the shock-reflected electrons have higher temperatures. Furthermore, the energy spectrum of downstream electrons shows a well-pronounced nonthermal tail that follows a power law with an index of up to −2.3.
The methods and results presented in this Thesis could serve as a starting point for more realistic modeling of interactions between shocks and turbulence in plasmas from first principles.
Condensation and crystallization are omnipresent phenomena in nature. The formation of droplets or crystals on a solid surface is a familiar process which, beyond its scientific interest, is required in many technological applications. In recent years, experimental techniques have been developed that allow patterning a substrate with surface domains of molecular thickness, surface areas on the mesoscopic scale, and different wettabilities (i.e., different degrees of preference for a substance in contact with the substrate). The existence of such patterned surfaces has led to increased theoretical efforts to understand wetting phenomena in these systems.
In this thesis, we deal with some problems related to the equilibrium of phases (e.g., liquid-vapor coexistence) and the kinetics of phase separation in the presence of chemically patterned surfaces. Two different cases are considered: (i) patterned surfaces in contact with liquid and vapor, and (ii) patterned surfaces in contact with a crystalline phase. One of the problems that we have studied is the following: It is widely believed that if air containing water vapor is cooled to its dew point, droplets of water are immediately formed. Although common experience seems to support this view, it is not correct. It is only when air is cooled well below its dew point that the phase transition occurs immediately. A vapor cooled slightly below its dew point is in a metastable state, meaning that the liquid phase is more stable than the vapor, but the formation of droplets requires some time to occur, which can be very long.
It was first pointed out by J. W. Gibbs that the metastability of a vapor depends on the energy necessary to form a nucleus (a droplet of a critical size). Droplets smaller than the critical size will tend to disappear, while droplets larger than the critical size will tend to grow. This is consistent with an energy barrier that has its maximum at the critical size, as is the case for droplets formed directly in the vapor or in contact with a chemically uniform planar wall. Classical nucleation theory describes the time evolution of the condensation in terms of the random process of droplet growth through this energy barrier. This process is activated by thermal fluctuations, which eventually will form a droplet of the critical size.
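For reference, the textbook single-barrier case behind this picture (homogeneous classical nucleation theory; the double-barrier kinetics studied below generalize it): the free energy of a droplet of radius r balances a surface cost against a volume gain,

\[ \Delta G(r) = 4\pi \gamma r^{2} - \tfrac{4}{3}\pi r^{3}\,\Delta g , \]

where γ is the liquid-vapor surface tension and Δg the free-energy gain per unit volume of liquid. Setting dΔG/dr = 0 gives the critical radius and the barrier height,

\[ r^{*} = \frac{2\gamma}{\Delta g}, \qquad \Delta G^{*} = \frac{16\pi \gamma^{3}}{3\,\Delta g^{2}} , \]

and the steady-state nucleation rate scales as J ∝ exp(−ΔG*/k_B T), which is why slightly supersaturated vapors can remain metastable for very long times.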
We consider nucleation of droplets from a vapor on a substrate patterned with easily wettable (lyophilic) circular domains. Under certain conditions of pressure and temperature, the condensation of a droplet on a lyophilic circular domain proceeds through a barrier with two maxima (a double barrier). We have extended classical nucleation theory to account for the kinetics of nucleation through a double barrier, and applied this extension to nucleation on lyophilic circular domains.
Genome-scale metabolic models are mathematical representations of all known reactions occurring in a cell. Combined with constraints based on physiological measurements, these models have been used to accurately predict metabolic fluxes and the effects of perturbations (e.g. knock-outs) and to inform metabolic engineering strategies. Recently, protein-constrained models have been shown to increase predictive potential (especially in overflow metabolism), while alleviating the need for measurement of nutrient uptake rates. The resulting modelling frameworks quantify the upkeep cost of a certain metabolic flux as the minimum amount of enzyme required for catalysis. These improvements are based on the use of in vitro turnover numbers or in vivo apparent catalytic rates of enzymes for model parameterization. In this thesis, several tools for the estimation and refinement of these parameters based on in vivo proteomics data of Escherichia coli, Saccharomyces cerevisiae, and Chlamydomonas reinhardtii have been developed and applied. The difference between in vitro and in vivo catalytic rate measures for the three microorganisms was systematically analyzed. The results for the facultatively heterotrophic microalga C. reinhardtii considerably expanded the apparent catalytic rate estimates for photosynthetic organisms. Our general finding pointed to a global reduction of enzyme efficiency in heterotrophy compared to other growth scenarios. Independent of the modelled organism, in vivo estimates were shown to improve the accuracy of predictions of protein abundances compared to in vitro values for turnover numbers. To further improve the protein abundance predictions, machine learning models were trained that integrate features derived from protein-constrained modelling and codon usage. Combining the two types of features outperformed single-feature models and yielded good prediction results without relying on experimental transcriptomic data. The presented work reports valuable advances in the prediction of enzyme allocation in unseen scenarios using protein-constrained metabolic models. It marks the first successful application of this modelling framework in the biotechnologically important taxon of green microalgae, substantially increasing our knowledge of the enzyme catalytic landscape of phototrophic microorganisms.
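To illustrate the modelling idea (a toy sketch with invented numbers, not the thesis's models or parameters for E. coli, S. cerevisiae, or C. reinhardtii), the following Python snippet shows how an enzyme-capacity constraint enters a flux optimization: each flux v_i ties up at least v_i/k_cat,i of its catalyzing enzyme, and the total enzyme mass is capped.

    import numpy as np
    from scipy.optimize import linprog

    # Toy network: substrate uptake v_up feeds a high-yield (v_resp) and a
    # low-yield (v_ferm) ATP pathway. All numbers are illustrative only.
    kcat = np.array([200.0, 20.0, 400.0])   # assumed turnover numbers (1/s)
    mw   = np.array([1.0, 2.0, 1.0])        # assumed enzyme masses (arb. units)

    c    = np.array([0.0, -30.0, -2.0])     # maximize ATP = 30*v_resp + 2*v_ferm
    A_eq = np.array([[1.0, -1.0, -1.0]])    # mass balance: v_up = v_resp + v_ferm
    b_eq = np.array([0.0])
    A_ub = (mw / kcat).reshape(1, -1)       # enzyme demand sum(mw_i*v_i/kcat_i) ...
    b_ub = np.array([0.01])                 # ... must stay below the enzyme budget

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 10.0), (0.0, None), (0.0, None)])
    v_up, v_resp, v_ferm = res.x
    print(f"uptake {v_up:.3f}, respiration {v_resp:.3f}, fermentation {v_ferm:.3f}")

Genome-scale frameworks write the same inequality over thousands of reactions, which is why the choice between in vitro turnover numbers and in vivo apparent catalytic rates changes the predicted proteome allocation.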
Organizations are investing billions in innovation and agility initiatives to stay competitive in their increasingly uncertain business environments. Design Thinking, an innovation approach based on human-centered exploration, ideation, and experimentation, has gained increasing popularity. The market for Design Thinking, including software products and general services, is projected to reach US$2,500 million by 2028. A dispersed set of positive outcomes has been attributed to Design Thinking. However, there is no clear understanding of what exactly comprises the impact of Design Thinking and how it is created. To support a billion-dollar market, it is essential to understand the value Design Thinking brings to organizations, not only to justify large investments but also to continuously improve the approach and its application.
Following a qualitative research approach combined with results from a systematic literature review, the results presented in this dissertation offer a structured understanding of Design Thinking impact. The results are structured along two main perspectives of impact: the individual and the organizational perspective. First, insights from qualitative data analysis demonstrate that measuring and assessing the impact of Design Thinking is currently one central challenge for Design Thinking practitioners in organizations. Second, the interview data revealed several effects Design Thinking has on individuals, demonstrating how Design Thinking can impact boundary management behaviors and enable employees to craft their jobs more actively.
Contributing to innovation management research, the work presented in this dissertation systematically explains the impact of Design Thinking, allowing other researchers to both locate and better integrate their work. The results of this research advance the theoretical rigor of Design Thinking impact research, offering multiple theoretical underpinnings that explain the variety of Design Thinking impact. Furthermore, this dissertation contains three specific propositions on how Design Thinking creates impact: through integration, enablement, and engagement. Integration refers to how Design Thinking enables organizations by effectively combining things, for example fostering a balance between exploitation and exploration activities. Through engagement, Design Thinking impacts organizations by involving users and other relevant stakeholders in their work. Finally, Design Thinking creates impact through enablement, making it possible for individuals to enact a specific behavior or experience certain states.
By synthesizing multiple theoretical streams into these three overarching themes, the results of this research can help bridge disciplinary boundaries, for example between business, psychology and design, and enhance future collaborative research. Practitioners benefit from the results as multiple desirable outcomes are detailed in this thesis, such as successful individual job crafting behaviors, which can be expected from practicing Design Thinking. This allows practitioners to enact more evidence-based decision-making concerning Design Thinking implementation. Overall, considering multiple levels of impact as well as a broad range of theoretical underpinnings are paramount to understanding and fostering Design Thinking impact.
Plate tectonic boundaries constitute the suture zones between tectonic plates. They are shaped by a variety of distinct and interrelated processes and play a key role in geohazards and georesource formation. Many of these processes have been previously studied, while many others remain unaddressed or undiscovered. In this work, the geodynamic numerical modeling software ASPECT is applied to shed light on further process interactions at continental plate boundaries. In contrast to natural data, geodynamic modeling has the advantage that processes can be directly quantified and that all parameters can be analyzed over the entire evolution of a structure. Furthermore, processes and interactions can be singled out from complex settings because the modeler has full control over all of the parameters involved. To account for the simplifying character of models in general, I have chosen to study generic geological settings with a focus on the processes and interactions rather than precisely reconstructing a specific region of the Earth.
In Chapter 2, 2D models of continental rifts with different crustal thicknesses between 20 and 50 km and extension velocities in the range of 0.5-10 mm/yr are used to obtain a speed limit for the thermal steady-state assumption, commonly employed to address the temperature fields of continental rifts worldwide. Because the tectonic deformation from ongoing rifting outpaces heat conduction, the temperature field is not in equilibrium, but is characterized by a transient, tectonically-induced heat flow signal. As a result, I find that isotherm depths of the geodynamic evolution models are shallower than a temperature distribution in equilibrium would suggest. This is particularly important for deep isotherms and narrow rifts. In narrow rifts, the magnitude of the transient temperature signal limits a well-founded applicability of the thermal steady-state assumption to extension velocities of 0.5-2 mm/yr. Estimation of the crustal temperature field affects conclusions on all temperature-dependent processes ranging from mineral assemblages to the feasible exploitation of a geothermal reservoir.
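A back-of-the-envelope scaling (our illustration, not the chapter's analysis) makes such a speed limit plausible. Advective and conductive heat transport over a characteristic length L, here the crustal thickness, compare through the Péclet number

\[ \mathrm{Pe} = \frac{v\,L}{\kappa} . \]

Taking a thermal diffusivity κ ≈ 10⁻⁶ m²/s and L ≈ 30 km, an extension velocity of 0.5 mm/yr yields Pe ≈ 0.5, whereas 2 mm/yr already yields Pe ≈ 1.9. Around Pe ≈ 1, tectonic advection begins to outpace conduction, consistent with the 0.5-2 mm/yr applicability limit found for narrow rifts.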
In Chapter 3, I model the interactions of different rheologies with the kinematics of folding and faulting, using the example of fault-propagation folds in the Andean fold-and-thrust belt. The velocity fields from the geodynamic models are compared with those from trishear models of the same structure. While the latter use only geometric and kinematic constraints of the main fault, the geodynamic models capture viscous, plastic, and elastic deformation in the entire model domain. I find that both models work equally well for early, and thus relatively simple, stages of folding and faulting, while results differ for more complex situations where off-fault deformation and secondary faulting are present. As fault-propagation folds can play an important role in the formation of reservoirs, knowledge of fluid pathways, for example via fractures and faults, is crucial for their characterization.
Chapter 4 deals with a bending transform fault and the interconnections between tectonics and surface processes. In particular, the tectonic evolution of the Dead Sea Fault is addressed where a releasing bend forms the Dead Sea pull-apart basin, while a restraining bend further to the North resulted in the formation of the Lebanese mountains. I ran 3D coupled geodynamic and surface evolution models that included both types of bends in a single setup. I tested various randomized initial strain distributions, showing that basin asymmetry is a consequence of strain localization. Furthermore, by varying the surface process efficiency, I find that the deposition of sediment in the pull-apart basin not only controls basin depth, but also results in a crustal flow component that increases uplift at the restraining bend.
Finally, in Chapter 5, I present the computational basis for adding further complexity to plate boundary models in ASPECT with the implementation of earthquake-like behavior using the rate-and-state friction framework. Despite earthquakes happening on a relatively small time scale, there are many interactions between the seismic cycle and the long time spans of other geodynamic processes. Amongst others, the crustal state of stress as well as the presence of fluids or changes in temperature may alter the frictional behavior of a fault segment. My work provides the basis for a realistic setup of involved structures and processes, which is therefore important to obtain a meaningful estimate for earthquake hazards.
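For context, the standard rate-and-state friction equations (the general framework named here; the specific implementation in ASPECT may differ in detail) express the friction coefficient in terms of the slip rate V and a state variable θ:

\[ \mu(V,\theta) = \mu_{0} + a \ln\frac{V}{V_{0}} + b \ln\frac{V_{0}\,\theta}{D_{c}}, \qquad \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_{c}} \quad (\text{aging law}). \]

At steady state, θ = D_c/V and μ_ss = μ₀ + (a − b) ln(V/V₀): fault segments with a − b < 0 weaken with increasing slip rate and can host earthquake-like stick-slip, while a − b > 0 produces stable creep.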
While these findings improve our understanding of continental plate boundaries, further development of geodynamic software may help to reveal even more processes and interactions in the future.
Mantodea, commonly known as mantids, have captivated researchers owing to their enigmatic behavior and ecological significance. This order comprises a diverse array of predatory insects, boasting over 2,400 species globally and inhabiting a wide spectrum of ecosystems. In Iran, the mantid fauna displays remarkable diversity, yet numerous facets of this fauna remain poorly understood, with a significant dearth of systematic and ecological research. This substantial knowledge gap underscores the pressing need for a comprehensive study to advance our understanding of Mantodea in Iran and its neighboring regions.
The principal objective of this investigation was to delve into the ecology and phylogeny of Mantodea within these areas. To accomplish this, our research efforts concentrated on three distinct genera within Iranian Mantodea. These genera were selected due to their limited existing knowledge base and feasibility for in-depth study. Our comprehensive methodology encompassed a multifaceted approach, integrating morphological analysis, molecular techniques, and ecological observations.
Our research encompassed a comprehensive revision of the genus Holaptilon, resulting in the description of four previously unknown species. This extensive effort substantially advanced our understanding of the ecological roles played by Holaptilon and refined its systematic classification. Furthermore, our investigation into Nilomantis floweri expanded its known distribution range to include Iran. By conducting thorough biological assessments, genetic analyses, and ecological niche modeling, we obtained invaluable insights into distribution patterns and genetic diversity within this species. Additionally, our research provided a thorough comprehension of the life cycle, behaviors, and ecological niche modeling of Blepharopsis mendica, shedding new light on the distinctive characteristics of this mantid species. Moreover, we contributed essential knowledge about parasitoids that infect mantid ootheca, laying the foundation for future studies aimed at uncovering the intricate mechanisms governing ecological and evolutionary interactions between parasitoids and Mantodea.
Virtual Reality (VR) leads to the highest level of immersion if presented using a 1:1 mapping of virtual space to physical space, also known as real walking. The advent of inexpensive consumer VR headsets, all capable of running inside-out position tracking, has brought VR to the home. However, many VR applications do not feature full real walking, but instead a less immersive space-saving technique known as instant teleportation. Given that only 0.3% of home users run their VR experiences in spaces larger than 4 m², the most likely explanation is the lack of the physical space required for meaningful use of real walking. In this thesis, we investigate how to overcome this hurdle. We demonstrate how to run 1:1-mapped VR experiences in small physical spaces, and we explore the trade-off between space and immersion. (1) We start with a space limit of 15 cm. We present DualPanto, a device that allows (blind) VR users to experience the virtual world from a 1:1-mapped bird's-eye perspective, by leveraging haptics. (2) We then relax our space constraints to 50 cm, which is what seated users (e.g., on an airplane or train ride) have at their disposal. We leverage the space to represent a standing user in 1:1 mapping, while only compressing the user's arm movement. We demonstrate our prototype VirtualArms using the example of VR experiences limited to arm movement, such as boxing. (3) Finally, we relax our space constraints further to 3 m² of walkable space, which is what 75% of home users have access to. As well established in the literature, we implement real walking with the help of portals, also known as "impossible spaces". While impossible spaces under such dramatic space constraints tend to degenerate into incomprehensible mazes (as demonstrated, for example, by "TraVRsal"), we propose plausibleSpaces: presenting meaningful virtual worlds by adapting various visual elements to impossible spaces. Our techniques push the boundary of spatially meaningful VR interaction in various small spaces. We see further future challenges for new design approaches to immersive VR experiences for the smallest physical spaces in our daily life.
HPI Future SOC Lab
(2024)
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components that might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB main memory. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of research projects carried out in 2020. Selected projects presented their results on April 21 and November 10, 2020, at the Future SOC Lab Day events.
Floods continue to be the leading cause of economic damages and fatalities among natural disasters worldwide. As future climate and exposure changes are projected to intensify these damages, the need for more accurate and scalable flood risk models is rising. Over the past decade, macro-scale flood risk models have evolved from initial proofs-of-concept to indispensable tools for decision-making at the global, national and, increasingly, the local level. This progress has been propelled by the advent of high-performance computing and the availability of global, space-based datasets. However, despite such advancements, these models are rarely validated and consistently fall short of the accuracy achieved by high-resolution local models. While capabilities have improved, significant gaps persist in understanding the behaviours of such macro-scale models, particularly their tendency to overestimate risk. This dissertation aims to address these gaps by examining the scale transfers inherent in the construction and application of coarse macro-scale models. To achieve this, four studies are presented that, collectively, address the exposure, hazard, and vulnerability components of risk affected by upscaling or downscaling.
The first study focuses on a type of downscaling where coarse flood hazard inundation grids are enhanced to a finer resolution. While such inundation downscaling has been employed in numerous global model chains, ours is the first study to focus specifically on this component, providing an evaluation of the state of the art and a novel algorithm. Findings demonstrate that our novel algorithm is eight times faster than existing methods, offers a slight improvement in accuracy, and generates more physically coherent flood maps in hydraulically challenging regions. When applied to a case study, the algorithm generated a 4 m resolution inundation map from 30 m hydrodynamic model outputs in 33 s, a 60-fold improvement in runtime with a 25% increase in RMSE compared with direct hydrodynamic modelling. All evaluated downscaling algorithms yielded better accuracy than the coarse hydrodynamic model when compared to observations, confirming the limits of coarse hydrodynamic models reported by others. Substituting downscaling into flood risk model chains, in place of high-resolution modelling, can drastically improve the lead time of impact-based forecasts and the efficiency of hazard map production. With downscaling, local regions could obtain high-resolution local inundation maps by post-processing a global model, without the need for expensive modelling or expertise.
The second study focuses on hazard aggregation and its implications for exposure, investigating the implicit aggregations commonly used to intersect hazard grids with coarse exposure models. This research introduces a novel spatial classification framework to understand the effects of rescaling flood hazard grids to a coarser resolution. The study derives closed-form analytical solutions for the location and direction of bias from flood grid aggregation, showing that bias will always be present in regions near the edge of inundation. For example, inundation area will be positively biased when water depth grids are aggregated, while volume will be negatively biased when water elevation grids are aggregated. Extending the analysis to the effects of hazard aggregation on building exposure, this study shows that exposure in regions at the edge of inundation is an order of magnitude more sensitive to aggregation errors than hazard alone. Of the two aggregation routines considered, averaging water surface elevation grids better preserved flood depths at buildings than averaging water depth grids. The study provides the first mathematical proof and generalizable treatment of flood hazard grid aggregation, demonstrating important mechanisms that help flood risk modellers understand and control model behaviour.
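A two-cell toy example in Python (hypothetical depths, our illustration) shows the edge mechanism for depth-grid aggregation: averaging a dry and a wet cell turns the whole coarse cell wet, inflating inundation area just as the analytical result predicts.

    import numpy as np

    fine = np.array([0.0, 2.0])    # fine-resolution water depths (m): dry | wet
    coarse_depth = fine.mean()     # depth-grid aggregation -> 1.0 m everywhere

    print("fine wet-area fraction:  ", (fine > 0.0).mean())      # 0.5
    print("coarse wet-area fraction:", float(coarse_depth > 0))  # 1.0 -> overestimate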
The final two studies focus on the aggregation of vulnerability models, or flood damage functions, investigating the practice of applying per-asset functions to aggregate exposure models. Both studies extend Jensen's inequality, a well-known mathematical result from 1906, to demonstrate how the aggregation of flood damage functions leads to bias. Applying Jensen's proof in this new context, the results show that the typically concave flood damage functions will introduce a positive bias (overestimation) when aggregated. This behaviour was further investigated with a simulation experiment including 2 million buildings in Germany, four global flood hazard simulations, and three aggregation scenarios. The results show that positive aggregation bias is not distributed evenly in space, meaning some regions identified as "hot spots of risk" in assessments may in fact just be hot spots of aggregation bias. This study provides the first application of Jensen's inequality to explain the overestimates reported elsewhere and offers advice for modellers on minimizing such artifacts.
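The mechanism behind this result can be reproduced in a few lines (an illustrative sketch; the damage curve and depth distribution below are invented, not the dissertation's data). For a concave damage function f, Jensen's inequality gives f(E[X]) ≥ E[f(X)], so applying f to an aggregated (mean) depth overestimates the mean damage:

    import numpy as np

    # Hypothetical concave depth-damage function: rises steeply at shallow
    # depths and saturates at a damage fraction of 1.
    def damage_frac(depth_m):
        return np.minimum(np.sqrt(np.maximum(depth_m, 0.0) / 6.0), 1.0)

    rng = np.random.default_rng(0)
    depths = rng.exponential(scale=1.5, size=100_000)  # per-building depths (m)

    per_asset  = damage_frac(depths).mean()   # E[f(X)]: damage per building, then average
    aggregated = damage_frac(depths.mean())   # f(E[X]): one call on the mean depth

    print(f"per-asset mean damage fraction: {per_asset:.3f}")
    print(f"aggregated-exposure estimate:   {aggregated:.3f}  (larger: positive bias)")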
In total, this dissertation investigates the complex ways aggregation and disaggregation influence the behaviour of risk models, focusing on the scale-transfers underpinning macro-scale flood risk assessments. Extending a key finding of the flood hazard literature to the broader context of flood risk, this dissertation concludes that all else equal, coarse models overestimate risk. This dissertation goes beyond previous studies by providing mathematical proofs for how and where such bias emerges in aggregation routines, offering a mechanistic explanation for coarse model overestimates. It shows that this bias is spatially heterogeneous, necessitating a deep understanding of how rescaling may bias models to effectively reduce or communicate uncertainties. Further, the dissertation offers specific recommendations to help modellers minimize scale transfers in problematic regions. In conclusion, I argue that such aggregation errors are epistemic, stemming from choices in model structure, and therefore hold greater potential and impetus for study and mitigation. This deeper understanding of uncertainties is essential for improving macro-scale flood risk models and their effectiveness in equitable, holistic, and sustainable flood management.
State space models enjoy wide popularity in mathematical and statistical modelling across disciplines and research fields. Frequently used solutions to problems of estimating and forecasting a latent signal, such as the celebrated Kalman filter, rely on a set of strong assumptions, including linearity of the system dynamics and Gaussianity of the noise terms.
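As a baseline for the modifications developed in this thesis, the following minimal Python sketch shows one textbook predict/update cycle of the linear-Gaussian Kalman filter (variable names are ours):

    import numpy as np

    def kalman_step(x, P, y, F, Q, H, R):
        # Model: x_t = F x_{t-1} + w, w ~ N(0, Q);  y_t = H x_t + e, e ~ N(0, R).
        # x, P: posterior mean and covariance from the previous step.
        x_pred = F @ x                          # predict the state ...
        P_pred = F @ P @ F.T + Q                # ... and its covariance
        S = H @ P_pred @ H.T + R                # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
        x_new = x_pred + K @ (y - H @ x_pred)   # correct with the observation
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

Both mis-specifications studied here enter through Q and R: a heavy-tailed observation outlier makes the Gaussian R overconfident, and an abrupt jump of the signal state is a realization to which the Gaussian Q assigns vanishing probability.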
We investigate the fallacy of mis-specifying the noise terms, that is, the signal noise and the observation noise, with regard to heavy-tailedness: the true dynamics frequently produce observation outliers or abrupt jumps of the signal state as realizations of heavy tails not accounted for by the model. We propose a formalisation of observation-noise mis-specification in terms of Huber's ε-contamination, as well as a computationally cheap solution via generalised Bayesian posteriors with a diffusion Stein divergence loss, resulting in the diffusion score matching Kalman filter, a modified algorithm akin in complexity to the regular Kalman filter. For this new filter, interpretations of the novel terms, stability, and an ensemble variant are discussed. Regarding signal-noise mis-specification, we propose a formalisation in the framework of change point detection and join ideas from the popular CUSUM algorithm with ideas from Bayesian online change point detection, combining frequentist reliability constraints with online inference in a Gaussian mixture model variant of multiple Kalman filters. We hereby exploit open-end sequential probability ratio tests on the evidence of Kalman filters on observation sub-sequences for aggregated inference under notions of plausibility.
Both proposed methods are combined to investigate the double mis-specification problem and are discussed with regard to their capabilities for reliable and well-tuned uncertainty quantification. Each section provides an introduction to the required terminology and tools, as well as simulation experiments on the popular target-tracking task and the non-linear, chaotic Lorenz-63 system to showcase the practical performance of the theoretical considerations.
In this work, a reactive barrier was developed at small laboratory scale (length = 40 cm), designed to remove iron and sulfate loads from acid mine drainage (AMD) with efficiencies of up to 30.2% and 24.2%, respectively, over a period of 146 days (50 pore volumes, pv). The reactive material was a mixture of garden compost, beech wood, coconut shell, and calcium carbonate. The inflow conditions were an iron concentration of 1000 mg/L, a sulfate concentration of 3000 mg/L, and a pH of 6.2.
Differences in material composition produced no major changes in the remediation efficiency for iron and sulfate loads (12.0-15.4% and 7.0-10.1%, respectively) over an investigation period of 108 days (41-57 pv). The most important factor influencing the removal of sulfate and iron loads was the residence time of the AMD solution in the reactive material, which can be increased by reducing the flow rate or by lengthening the permeable reactive barrier (PRB). Halving the flow rate increased the remediation efficiencies for iron and sulfate to 23.4% and 32.7%, respectively. The remediation efficiency for iron loads also rose to 24.2% when the sulfate inflow concentration was raised to 6000 mg/L. Acidic initial conditions (pH = 2.2) could be neutralized over a period of 47 days (24 pv) by the calcium carbonate in the reactive material. This neutralization consumed calcium carbonate in the PRB and released calcium ions, which increased the sulfate remediation efficiency (24.9%). After the PRB was enlarged in width and depth and parameters were determined in 2D, preferential flow along the walls was observed; without its influence, the remediation efficiency for iron and sulfate loads increases (30.2% and 24.2%, respectively).
Optical sensors were used for in-situ monitoring of the PRB, measuring pH, oxygen concentration, and temperature. Stable oxygen concentrations and pH profiles were detected, resolved in space and time, and the temperature could likewise be resolved spatially. This work thus showed that optical sensors can be used to monitor the stability of a PRB for the treatment of AMD.
With the simulation program MIN3P, a simulation representing the developed PRB was set up; it reproduces the laboratory results well. A simulated PRB was then examined at different filter velocities ((4.0-23.5) × 10⁻⁷ m/s) and PRB lengths (25-400 cm). Relationships between the investigated parameters and the remediation efficiency for iron and sulfate loads were derived; these can be used to calculate the residence time of the AMD solution in a future PRB system that is required for the maximum achievable remediation performance.
Proceedings of TripleA 10
(2024)
The TripleA workshop series was founded in 2014 by linguists from Potsdam and Tübingen with the aim of providing a platform for researchers that conduct theoretically-informed linguistic fieldwork on meaning. Its focus is particularly on languages that are under-represented in the current research landscape, including but not limited to languages of Africa, Asia, and Australia, hence TripleA.
For its 10th anniversary, TripleA returned to the University of Potsdam on 7-9 June 2023.
The programme included 21 talks dealing with no less than 22 different languages, including three invited talks given by Sihwei Chen (Academia Sinica), Jérémy Pasquereau (Laboratoire de Linguistique de Nantes, CNRS) and Agata Renans (Ruhr-Universität Bochum). Nine of these (invited or peer-reviewed) talks are featured in this volume.