The Covid-19 pandemic and the war in Ukraine have imposed substantial additional financial burdens on municipalities, in the form of extra expenditures and reduced revenues. With the "Gesetz zur Isolierung der aus der Covid-19-Pandemie folgenden Belastungen der kommunalen Haushalte im Land Nordrhein-Westfalen (NKF-COVID-19-Isolierungsgesetz – NKF-CIG)" of 29 September 2020, the state of North Rhine-Westphalia therefore decided to temporarily ease the preparation of municipal budgets and to "isolate" these additional financial burdens on the balance sheet. The first amending act of 1 December 2021 revised the provisions and extended their period of application. The second amending act of 9 December 2022 broadened the act's substantive and temporal scope; at the same time, the act was renamed (NKF-CUIG) to reflect its substantive extension to the additional financial burdens arising from the war in Ukraine. Our position paper first critically examines the balance-sheet "isolation" of these additional financial burdens by means of an accounting aid (Bilanzierungshilfe) and identifies both the challenges of precisely determining these burdens and the practical problems of recognising, presenting and measuring such an accounting aid in municipal annual financial statements. Second, the effects of recognising such an accounting aid on the audit of the annual financial statements are examined in depth and critically discussed. Third, a legal-policy assessment of the NKF-CUIG is provided. In summary, no "aid" of the kind the term accounting aid suggests in everyday usage can be discerned. Situations comparable to the Covid pandemic and the war in Ukraine must be expected in the future as well.
Then too, additional financial burdens could threaten the municipalities' legal capacity to act. To preserve it, however, the state legislature should consider measures other than the capitalisation of an accounting aid. Such alternative measures should do justice to the particularities of the historical situation and to the goal of preserving the municipalities' legal capacity to act, while at the same time avoiding breaks in the system of accrual accounting (Doppik) and budget law as well as unnecessary bureaucratic burdens.
The University of Potsdam positions itself as a university in the digital age, aiming to realise the comprehensive use of digital media in teaching and studying as a lived culture of teaching, learning and examination for all students, teachers and staff.
Building on the experience and preparatory work of recent years, such as the E-Learning inventory and earlier strategies and mission statements through which digital media were progressively integrated into teaching and studying, the University of Potsdam starts from a strong position in digital teaching. The current E-Learning Strategy (2023–2028) therefore aims to further develop and consolidate these approaches. It identifies six central fields of action: "Exchange and Networking", "Content", "Innovation and Consolidation", "Media Literacy", "Quality Development" and "UP and the World".
The strategy was developed in a participatory process coordinated by the E-Learning steering group and supported by representatives of all areas and all status groups of the university. It was adopted at the 319th meeting of the Senate on 5 July 2023 and published, with editorial changes, in 2024.
Terrestrial landscape dynamics are dominated by the production, mobilisation, transfer and deposition of sediment. Numerous chemical elements are carried by sediments, making them a key component of ecological processes: soil constitution, and thus plant and animal ecosystems, and by extension the human species, depend on them. They are also essential for climate evolution and regulation, as marine sedimentation acts as a carbon sink. However, the processes behind their production, mobilisation and transfer can occur suddenly and with high energy content – such as volcanic eruptions, mass wasting or flooding events and wildfires – shaking ecosystems and shaping landforms. Moreover, in the most recent era, the human species has shown its ability to disturb landscape dynamics and change sediment cycles. There is thus a need for a predictive understanding of the processes involved. This relies on understanding the mechanisms of key processes and their controls, and on knowledge of the state and evolution of the Earth's surface. Classic approaches to these challenges include empirical observations and numerical modeling of geochemical fluxes and surface processes, as well as the study of terrestrial sedimentary archives, to better understand the parameters at stake in landscape dynamics and climate change and the various actions and feedbacks between the production, mobilisation, transfer and deposition of sediments which ultimately shape landscapes. Environmental seismology complements these approaches.
Environmental seismology is the field investigating the source functions and propagation properties of seismic vibrations triggered by processes happening at or near the Earth's surface, below and above it – cryosphere, hydrosphere, atmosphere, human habitat, biosphere, etc. – to obtain insight into these physical processes. Indeed, from mass wasting events to rivers, from wild species to humans, all these processes generate seismic waves. Environmental seismology is a rather recent field, with new branches rapidly expanding and at various stages of scientific progress. This thesis is motivated by the goal of learning more about two major natural process hazards (river bedload transport and mass wasting) as well as about human-generated acoustic hazard, while covering the axis of fundamental research progression, from data exploration and method and theory development to proof of concept, with the twin aims of developing a better understanding of the operation of these specific processes and of advancing the methods we have at our disposal to study them.
First, I provide a benchmark for assessing the reliability of existing seismic bedload model inversions in retrieving bedload flux from seismic data. Bedload flux measurements are essential for better understanding river dynamics, and they can be achieved with environmental seismology. However, due to a lack of well-constrained validation data, the accuracy of the resulting inversions is unknown. I address this gap in Chapter 2.2, reporting a seismic field experiment and comparing the results to high-quality independent bedload measurements to constrain a seismic bedload model. The study shows that the quality of bedload flux estimates from seismic data strongly depends on the quality of the model's input data. Direct measurements of the relevant parameters, chiefly the seismic ground properties needed for the Green's function and the grain size distribution of the moving bedload, considerably improve the model quality over generic approaches using empirical or theoretical functions. I also provide a numerical tool to facilitate the use of water turbulence and bedload seismic inversion models: these models require painstaking work to constrain the parameters describing the ground properties, by active seismic study or analysis of passive seismic data, and the grain size distribution, via independent measurements. Reasonable predictions can be achieved by using a Monte Carlo approach to optimize the free parameters with respect to the target parameters. The validation of the tool, in Chapter 2.3, with independent measurements of water depth and bedload flux at a study site on the Eshtemoa River in Israel makes it available for reliable use at other sites. The work reported in this chapter has been published in Lagarde et al. 2021 and Dietze et al. 2019b.
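The Monte Carlo optimisation mentioned above can be illustrated with a minimal sketch: randomly draw the free parameters, run a forward model, and keep the draw whose prediction has the lowest misfit to the observations. The function names and the toy two-parameter forward model below are hypothetical stand-ins, not the actual seismic bedload model or the published tool.

```python
import random

def monte_carlo_fit(forward_model, observed, param_ranges, n_draws=10000, seed=42):
    """Randomly sample the free parameters within their ranges and keep the
    draw whose forward prediction best matches the observed target
    (least-squares misfit). A generic sketch, not the published inversion."""
    rng = random.Random(seed)
    best_params, best_misfit = None, float("inf")
    for _ in range(n_draws):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in param_ranges.items()}
        predicted = forward_model(params)
        misfit = sum((p - o) ** 2 for p, o in zip(predicted, observed))
        if misfit < best_misfit:
            best_params, best_misfit = params, misfit
    return best_params, best_misfit

# Toy forward model standing in for the seismic bedload model: predicted
# spectral power at two frequencies as a linear function of two parameters.
def toy_model(p):
    return [p["flux"] * 2.0 + p["grain"], p["flux"] + 3.0 * p["grain"]]

observed = [7.0, 8.0]  # synthetic "measured" spectral power
fit, misfit = monte_carlo_fit(toy_model, observed, {"flux": (0, 5), "grain": (0, 5)})
```

In practice the forward model would be the bedload or turbulence seismic model, the free parameters the poorly constrained ground properties and grain sizes, and the target the observed seismic spectra.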
In a second study, reported in Chapter 3, I investigate the formation of a failure plane prior to a rockslide. A better understanding of the dynamics of the preparation phase is essential to determine the timing, volume and mobilization mechanism of a rock slope failure, and this can be achieved with environmental seismology. I take advantage of a network of seismic stations close to an unstable slope recording cracking signals prior to the slope failure, and use a machine learning technique based on hidden Markov models to isolate these signals from the seismic data, retrieving the cumulative number of cracking events over a period of 20 days prior to a large rockslide and 10 days after. The trajectory of the cumulative number of cracks shifts from a rather linear shape in the two weeks prior to the rockslide to an S-shaped development in the last 27 h before failure. I interpret this change as a switch from initially distributed cracking to localised damage accumulation in the hours prior to the failure. I develop a simple physical model to explain the temporal evolution of crack activity during the S-shape phase, revealing the importance of an internal parameter, the total crack boundary length, as the dominant control on failure plane evolution. This study has been published as Lagarde et al. 2023.
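The detect-and-count workflow behind such a cumulative-event curve can be sketched in a strongly simplified form. The snippet below uses a crude adaptive amplitude threshold as a stand-in for the hidden-Markov-model classifier used in the study; all names and parameter values are hypothetical.

```python
def detect_events(signal, window, threshold):
    """Classify fixed-length windows as 'event' when their mean absolute
    amplitude exceeds `threshold` times a slowly updated background level.
    A crude stand-in for the hidden-Markov-model classifier of the study."""
    events, background = [], None
    for start in range(0, len(signal) - window + 1, window):
        amp = sum(abs(x) for x in signal[start:start + window]) / window
        if background is None:
            background = amp
        if amp > threshold * background:
            events.append(start)
        else:
            # update the background estimate only with quiet windows
            background = 0.9 * background + 0.1 * amp
    return events

def cumulative_counts(events, n_samples, window):
    """Cumulative number of detected events per window, i.e. the kind of
    trajectory whose shape (linear vs. S-shaped) is analysed in the study."""
    counts, total, event_set = [], 0, set(events)
    for start in range(0, n_samples - window + 1, window):
        total += start in event_set
        counts.append(total)
    return counts

# a quiet synthetic signal with two bursts: events in the windows starting at 30 and 70
sig = [1.0] * 30 + [10.0] * 10 + [1.0] * 30 + [10.0] * 10 + [1.0] * 20
events = detect_events(sig, window=10, threshold=3.0)
counts = cumulative_counts(events, len(sig), window=10)
```

The published analysis then examines how this cumulative curve bends from linear to S-shaped as failure approaches.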
Third, I develop a model converting acoustic signals to seismic signals. Part of the acoustic vibrations generated at the Earth's surface is converted to seismic signals at the ground interface. Consequently, noise pollution may translate into slope fatigue and rock micro- (or macro-) fracturing, with a degrading effect on landforms. Moreover, this pollution can have negative impacts, such as physical, physiological and psychological effects, on animal species. At present, the impact of seismic pollution generated by acoustic sources is difficult to evaluate. In Chapter 4, I improve and implement a model converting the acoustic pressure generated by a source in the atmosphere into the corresponding seismic signal for a receiver within the ground. The ground is treated as a porous elastic medium in which wave behaviour can be approximated by the Biot-Stoll model. The model is extended to accept a temporal pressure pulse as input and to produce output on a 2D plan-view map, on which the effect of wind on the acoustic-to-seismic coupling can be reproduced. I invest extensive effort in making the model user-friendly, as the project aims to reach a large audience comprising, for example, geomorphologists, biologists and sociologists. Finally, the model is subjected to synthetic testing as well as a qualitative comparison of the predicted ground particle velocity with the seismic signal of a real helicopter flight as the source of acoustic input.
These studies advance our understanding of the operation of specific natural processes in channels and on hillslopes, and bring us closer to designing functioning early warning systems for mass wasting and flood events. This thesis also raises questions that have not been considered before, such as the contribution of human acoustic pollution to the seismic hum and its impact on the natural environment, or the importance of cracks in the self-development of the failure plane prior to slope failure. Together, these studies question general assumptions usually made regarding the triggering of mass wasting or hillslope-channel connectivity. Beyond this, the thesis covers the axis of fundamental research progression, from data exploration and method and theory development to proof of concept, and shows how, in the rapidly developing field of environmental seismology, an active awareness of progress can help strengthen and accelerate general advances.
On 21 April 2021, the European Commission presented its long-awaited proposal for a Regulation “laying down harmonized rules on Artificial Intelligence”, the so-called “Artificial Intelligence Act” (AIA). This article takes a critical look at the proposed regulation. After an introduction (1), the paper analyzes the unclear preemptive effect of the AIA and EU competences (2), the scope of application (3), the prohibited uses of Artificial Intelligence (AI) (4), the provisions on high-risk AI systems (5), the obligations of providers and users (6), the requirements for AI systems with limited risks (7), the enforcement system (8), the relationship of the AIA with the existing legal framework (9), and the regulatory gaps (10). The last section draws some final conclusions (11).
As the climate changes, the open society necessarily changes with it, and with the society, in turn, the constitution and its interpretation. Periodically recurring health and security crises demand that the Basic Law respond dynamically to the problems they bring with them. In enduring crises such as the environmental crisis, the constitution must at the same time be sustainable in many respects. What we understand by freedom and by climate, environmental or animal protection must therefore remain in constant flux.
Jurisdiction
(2022)
Keine Reform für die Zukunft
(2021)
On 1 January 2021, the most recent reform of the Renewable Energy Sources Act (Erneuerbare-Energien-Gesetz, EEG) entered into force. By giving municipalities a financial share in the proceeds of wind energy, it quietly introduced an unconstitutional levy: through the interplay of the new § 36k EEG 2021 with the long-established EEG surcharge, a charge collected from end consumers of electricity flows into municipal budgets. This cannot be based on any legislative competence. Moreover, the cap on the EEG surcharge in 2021 and 2022, in combination with § 36k EEG 2021, means that federal funds are placed at the municipalities' free disposal in an unconstitutional manner.
Strength of weakness
(2020)
The paper investigates quality management in teaching and learning in higher education institutions from a principal-agent perspective. Based on data gained from semi-structured interviews and from a nation-wide survey with quality managers of German higher education institutions, the study shows how quality managers position themselves in relation to their perception of the interests of other actors in higher education institutions. The paper describes the various interests and discusses the main implications of this constellation of actors. It argues that quality managers, although they may be considered as rather weak actors within the higher education institution, may be characterised as having a strength of weakness due to diverging interests of their principals.
Strategic social media use positively influences organizational goals such as the long-term accrual of social capital, and thus social media information governance has become an increasingly important organizational objective. It is particularly important for humanitarian nongovernmental organizations (HNGOs), whose work relies on accurate and timely information regarding socially altruistic behavior (donations, volunteerism, etc.). Despite the potential of social media for increasing social capital, tensions in governing social media information across an organization's different operational levels (regional, intermediate, and national) pose a difficult challenge. Prominent governance frameworks offer little guidance, as their focus on control and incremental policymaking is largely incompatible with the processes, roles, standards, and metrics needed for managing self-governing social media. This study offers a notion of dynamic and co-evolutionary process management of multi-level organizations as a means of conceptualizing social media information governance for the accrual of organizational social capital. Based on interviews with members of HNGOs, this study reveals tensions that emerge within eight focus areas of accruing social capital in multi-level organizations, explains how dynamic process management can ease those tensions, and proposes corresponding strategy recommendations.
With the latest technological developments and the new teaching possibilities they bring, the personalisation of learning is gaining ever more importance. It assumes that individual learning experiences and results can generally be improved when personal learning preferences are taken into account. To do justice to the complexity of personalising teaching and learning processes, we illustrate the components of learning and teaching in the digital environment and their interdependencies in an initial model. Furthermore, in a pre-study, we investigate the relationships between the learner's ability to self-organise (digitally), the learner's prior knowledge, learning in different modes of delivery, and learning outcomes as one part of this model. With this pre-study, we take the first step towards a holistic model of teaching and learning in digital environments.
This book subjects the role of municipalities to a European comparison. Categories such as municipal autonomy, task profiles, and territorial, political and financial framework conditions are compared. Past and current reform trends and discourses are also described and contextualised. The study is a comprehensive secondary analysis that compiles up-to-date figures from various sources. It was carried out by a team led by Prof. Sabine Kuhlmann at the Chair of Political Science, Administration and Organisation at the University of Potsdam.
Alcohol use disorder (AUD) is the most common substance use disorder worldwide. Although dopamine-related findings were often observed in AUD, associated neurobiological mechanisms are still poorly understood. Therefore, in the present study, we investigate D2/3 receptor availability in healthy participants, participants at high risk (HR) to develop addiction (not diagnosed with AUD), and AUD patients in a detoxified stage, applying F-18-fallypride positron emission tomography (F-18-PET). Specifically, D2/3 receptor availability was investigated in (1) 19 low-risk (LR) controls, (2) 19 HR participants, and (3) 20 AUD patients after alcohol detoxification. Quality and severity of addiction were assessed with clinical questionnaires and (neuro)psychological tests. PET data were corrected for age of participants and smoking status. In the dorsal striatum, we observed significant reductions of D2/3 receptor availability in AUD patients compared with LR participants. Further, receptor availability in HR participants was observed to be intermediate between LR and AUD groups (linearly decreasing). Still, in direct comparison, no group difference was observed between LR and HR groups or between HR and AUD groups. Further, the score of the Alcohol Dependence Scale (ADS) was inversely correlated with D2/3 receptor availability in the combined sample. Thus, in line with a dimensional approach, striatal D2/3 receptor availability showed a linear decrease from LR participants to HR participants to AUD patients, which was paralleled by clinical measures. Our study shows that a core neurobiological feature in AUD seems to be detectable in an early, subclinical state, allowing more individualized alcohol prevention programs in the future.
White mica and tourmaline are the dominant hydrothermal alteration minerals at the world-class Panasqueira W-Sn-Cu deposit in Portugal. Thus, understanding the controls on their chemical composition helps to constrain ore formation processes at this deposit and determine their usefulness as pathfinder minerals for mineralization in general. We combine whole-rock geochemistry of altered and unaltered metasedimentary host rocks with in situ LA-ICP-MS measurements of tourmaline and white mica from the alteration halo. Principal component analysis (PCA) is used to better identify geochemical patterns and trends of hydrothermal alteration in the datasets. The hydrothermally altered metasediments are enriched in As, Sn, Cs, Li, W, F, Cu, Rb, Zn, Tl, and Pb relative to unaltered samples. In situ mineral analyses show that most of these elements preferentially partition into white mica over tourmaline (Li, Rb, Cs, Tl, W, and Sn), whereas Zn is enriched in tourmaline. White mica has distinct compositions in different settings within the deposit (greisen, vein selvages, wall rock alteration zone, late fault zone), indicating a compositional evolution with time. In contrast, tourmaline from different settings overlaps in composition, which is ascribed to a stronger dependence on host rock composition and also to the effects of chemical zoning and microinclusions affecting the LA-ICP-MS analyses. Hence, in this deposit, white mica is the better recorder of the fluid composition. The calculated trace-element contents of the Panasqueira mineralizing fluid based on the mica data and estimates of mica-fluid partition coefficients are in good agreement with previous fluid-inclusion analyses. A compilation of mica and tourmaline trace-element compositions from Panasqueira and other W-Sn deposits shows that white mica has good potential as a pathfinder mineral, with characteristically high Li, Cs, Rb, Sn, and W contents. 
The trace-element contents of hydrothermal tourmaline are more variable. Nevertheless, the compiled data suggest that high Sn and Li contents are distinctive for tourmaline from W-Sn deposits.
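The role PCA plays in such a study, finding the correlated enrichment direction in a multi-element dataset, can be sketched for the two-variable case, where the first principal component has a closed form. The element pair and numbers below are invented for illustration; a real analysis would run a full PCA library over many elements of the whole-rock and LA-ICP-MS data.

```python
import math

def pca_2d(xs, ys):
    """First principal component of paired (x, y) measurements, from the
    closed-form eigendecomposition of the 2x2 covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # eigenvalues of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam1 = tr / 2 + math.sqrt(tr * tr / 4 - det)  # largest eigenvalue
    # eigenvector for lam1, handling the axis-aligned (sxy == 0) case
    if abs(sxy) > 1e-12:
        vx, vy = lam1 - syy, sxy
    else:
        vx, vy = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return lam1, (vx / norm, vy / norm)

# hypothetical paired Li and Cs contents (ppm) in altered samples,
# perfectly correlated for illustration:
li = [10.0, 20.0, 30.0, 40.0, 50.0]
cs = [10.0, 20.0, 30.0, 40.0, 50.0]
var1, pc1 = pca_2d(li, cs)  # pc1 points along the joint enrichment trend
```

When two elements are enriched together by the same alteration process, the first component captures that joint trend and its eigenvalue measures how much of the variance it explains.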
In nature, plants are constantly exposed to many transient, but recurring, stresses. Thus, to complete their life cycles, plants require a dynamic balance between capacities to recover following cessation of stress and maintenance of stress memory. Recently, we uncovered a new functional role for macroautophagy/autophagy in regulating recovery from heat stress (HS) and resetting cellular memory of HS in Arabidopsis thaliana. Here, we demonstrated that NBR1 (next to BRCA1 gene 1) plays a crucial role as a receptor for selective autophagy during recovery from HS. Immunoblot analysis and confocal microscopy revealed that levels of the NBR1 protein, NBR1-labeled puncta, and NBR1 activity are all higher during the HS recovery phase than before. Co-immunoprecipitation analysis of proteins interacting with NBR1 and comparative proteomic analysis of an nbr1-null mutant and wild-type plants identified 58 proteins as potential novel targets of NBR1. Cellular, biochemical and functional genetic studies confirmed that NBR1 interacts with HSP90.1 (heat shock protein 90.1) and ROF1 (rotamase FKBP 1), a member of the FKBP family, and mediates their degradation by autophagy, which represses the response to HS by attenuating the expression of HSP genes regulated by the HSFA2 transcription factor. Accordingly, loss-of-function mutation of NBR1 resulted in a stronger HS memory phenotype. Together, our results provide new insights into the mechanistic principles by which autophagy regulates plant response to recurrent HS.
Induced point mutations are important genetic resources for their ability to create hypo- and hypermorphic alleles that are useful for understanding gene functions and for breeding. However, such mutant populations have been developed for only a few temperate maize varieties, mainly B73 and W22, and no tropical maize inbred lines have been mutagenized and made available to the public to date. We developed a novel ethyl methanesulfonate (EMS)-induced mutation resource in maize comprising 2050 independent M2 mutant families in the elite tropical maize inbred ML10. By phenotypic screening, we showed that this population is comparable in quality to other mutagenized maize populations. To illustrate the usefulness of this population for gene discovery, we performed rapid mapping-by-sequencing to clone a fasciated-ear mutant and identify a causal promoter deletion in ZmCLE7 (CLE7). Our mapping procedure does not require crossing to an unrelated parent and is thus suitable for mapping subtle traits and traits affected by heterosis. This first EMS population in tropical maize is expected to be very useful for the maize research community. The EMS mutagenesis and rapid mapping-by-sequencing pipeline described here also illustrates the power of performing forward genetics in diverse maize germplasms of choice, which can lead to novel gene discovery due to divergent genetic backgrounds.
The European Alps are amongst the regions with highest glacier mass loss rates over the last decades. Under the threat of ongoing climate change, the ability to predict glacier mass balance changes for water and risk management purposes has become imperative. This raises an urgent need for reliable glacier models. The European Alps do not only host glaciers, but also numerous caves containing carbonate formations, called speleothems. Previous studies have shown that those speleothems also grew during times when the cave was covered by a warm-based glacier. In this thesis, I utilise speleothems from the European Alps as archives of local, environmental conditions related to mountain glacier evolution.
Previous studies have shown that speleothem isotope data from the Alps can be strongly affected by in-cave processes. Therefore, part of this thesis focusses on developing an isotope evolution model, which successfully reproduces differences between contemporaneous growing speleothems. The model is used to propose correction approaches for prior calcite precipitation effects on speleothem oxygen isotopes (δ18O). Applications on speleothem records from caves outside of the Alps demonstrate that corrected δ18O agrees better with other records and climate model simulations.
Existing speleothem growth histories and carbon isotope (δ13C) records from Alpine caves at different elevations are used to infer soil vs. glacier cover and the thermal regime of the glacier over the last glacial cycle. Their compatibility with glacier evolution models is statistically assessed. A general agreement between speleothem δ13C-derived information on soil vs. glacier presence and modelled glacier coverage is found. However, glacier retreat during Marine Isotope Stage (MIS) 3 seems to be underestimated by the model. Furthermore, the speleothem data provide evidence of surface temperatures above the freezing point which are, however, not fully reproduced by the simulations.
The history of glacier cover and the glacier's thermal regime is explored for the high-elevation cave system Melchsee-Frutt in the Swiss Alps. Based on new (MIS 9b – MIS 7b, MIS 2) and available speleothem δ13C (MIS 7a – 5d) data, warm-based glacier cover is inferred for MIS 8, 7d, 6, and 2. A short period of cold-based ice cover is also found for early MIS 6. In a detailed multi-proxy analysis (δ18O, δ13C, Mg/Ca and Sr/Ca), millennial-scale changes in the glacier-related source of the water infiltrating the karst during MIS 8 and 7d are found and linked to Northern Hemisphere climate variability.
While speleothem records from high-elevation cave sites in the Alps exhibit huge potential for glacier reconstruction, several limitations remain, which are discussed throughout this thesis. Ultimately, recommendations are given to further leverage subglacial speleothems as an archive of glacier dynamics.
Galaxy morphology is a fossil record of how galaxies formed and evolved and can be regarded as a function of the dynamical state of a galaxy. It encodes the physical processes that dominate a galaxy's evolutionary history and is strongly aligned with physical properties such as stellar mass, star formation rate and local environment. At distances of ∼50 and 60 kpc, the Magellanic Clouds represent the nearest interacting pair of dwarf irregular galaxies to the Milky Way, rendering them an important test bed for galaxy morphology in the context of galaxy interactions and of the local environment in which they reside. The Large Magellanic Cloud is classified as the prototype for Magellanic spiral galaxies, with one prominent spiral arm, an offset bar and an inclined rotating disc, while the Small Magellanic Cloud is classified as a dwarf irregular galaxy and is known for its unstructured shape and large depth along the line-of-sight. Resolved stellar populations are powerful probes of a wide range of astrophysical phenomena, and the proximity of the Magellanic Clouds allows us to resolve their stellar populations into individual stars that share coherent chemical and age distributions. These coherent properties enable us to analyse resolved stellar populations as a function of position within the Magellanic Clouds, offering a picture of the growth of the galaxies' substructures over time and yielding a comprehensive view of their morphology. Furthermore, investigating the kinematics of the Magellanic Clouds offers valuable insights into their dynamics and evolutionary history. By studying the motions and velocities of stars within these galaxies, we can trace their past interactions with the Milky Way or with each other and unravel the complex interplay of forces that have influenced the Magellanic Clouds' formation and evolution.
In Chapter 2, the VISTA survey of the Magellanic Clouds was employed to generate unprecedented high-resolution morphological maps of the Magellanic Clouds in the near-infrared. Utilising colour-magnitude diagrams and theoretical evolutionary models to segregate stellar populations, this approach enabled a comprehensive age tomography of the galaxies. It revealed previously uncharacterised features in their central regions at spatial resolutions of 0.13 kpc (Large Magellanic Cloud) and 0.16 kpc (Small Magellanic Cloud), and the findings showcased the impact of tidal interactions on their inner regions. Notably, the study highlighted the enhanced coherent structures in the Large Magellanic Cloud, shedding light on the significant role of the recent Magellanic Clouds' interaction 200 Myr ago in shaping many of the fine structures. The Small Magellanic Cloud revealed asymmetry in younger populations and irregularities in intermediate-age ones, pointing towards the influence of past tidal interactions.
In Chapter 3, an examination of the outskirts of the Magellanic Clouds led to the identification of new substructures through the use of near-infrared photometry from the VISTA Hemisphere Survey and multi-dimensional phase-space information from Gaia. The distances and proper motions of these substructures were investigated. This analysis revealed the impact of past Magellanic Clouds’ interactions and the influence of the Milky Way’s tidal field on the morphology and kinematics of the Magellanic Clouds. A bi-modal distance distribution was identified within the luminosity function of the red clump stars in the Small Magellanic Cloud, notably in its eastern regions, with the foreground substructure being attributed to the Magellanic Clouds’ interaction around 200 Myr ago. Furthermore, associations with the Counter Bridge and Old Bridge were uncovered through the detection of background and foreground structures in various regions of the SMC.
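A bi-modal distance distribution like the one identified in the red clump luminosity function can be characterised by fitting a two-component Gaussian mixture. The sketch below is a minimal one-dimensional expectation-maximisation fit with invented distances (in kpc), not the survey's actual red clump data or analysis code.

```python
import math

def fit_two_gaussians(data, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture by expectation-maximisation,
    a minimal way to characterise a bi-modal (e.g. distance) distribution."""
    mu = [min(data), max(data)]            # initialise means at the data extremes
    sigma, w = [1.0, 1.0], [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-((x - mu[k]) ** 2) / (2 * sigma[k] ** 2))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: update weights, means and widths from the responsibilities
        for k in range(2):
            rk = sum(r[k] for r in resp)
            w[k] = rk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / rk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / rk
            sigma[k] = max(math.sqrt(var), 1e-3)  # guard against collapse

    return mu, sigma, w

# invented foreground/background red clump distances (kpc):
distances = [49.5, 49.8, 50.0, 50.2, 50.5, 59.5, 59.8, 60.0, 60.2, 60.5]
mu, sigma, w = fit_two_gaussians(distances)
```

The fitted means then give the two distance peaks, and the component weights indicate how much of the population sits in the foreground substructure versus the main body.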
In Chapter 4, a detailed kinematic analysis of the Small Magellanic Cloud was conducted using spectra from the European Southern Observatory Science Archive Facility. The study reveals distinct kinematics in the Wing and bar regions, attributed to interactions with the Large Magellanic Cloud and to variations in star formation history. Notably, velocity disparities are observed in the bar's young main-sequence stars, aligning with specific star-forming episodes and suggesting potential galactic stretching or tidal stripping, as corroborated by proper-motion studies.
As followers become more educated and better connected, empowering leadership has gained traction in recent times as an alternative to traditional top-down models of leadership. Several scholars have investigated the relationship between empowering leadership and other variables in different contexts, but because most previous studies have focused on its positive aspects, research on its potential dark side is scarce. Furthermore, no previous study has examined whether and how workload can transfer from followers to leaders over time, a process that I propose can lead to emotional exhaustion and work-family conflict among leaders. I therefore proposed that, despite the positive outcomes of empowering leadership for both followers and leaders, it may also trigger negative outcomes capable of affecting leaders' well-being. Drawing on Conservation of Resources (COR) theory, Job Demands-Resources (JD-R) theory, and the Too-Much-of-a-Good-Thing (TMGT) effect model, I investigated this idea. Using follower workload as a moderator, I proposed that the relationship between empowering leadership and leader workload is positive when follower workload is high and negative when it is low. In addition, I examined how empowering leadership interacts with follower workload to affect leader emotional exhaustion and work-family conflict, mediated by leader workload, proposing that this interaction yields a negative relationship between empowering leadership and both outcomes when follower workload is low and a positive relationship when it is high.
I tested these hypotheses using data from a three-wave, time-lagged field study with 65 leader-follower dyads consisting of civil servants from different administrative entities of India and Pakistan, with a time lag of four weeks between measurement waves. At Time 1 (T1), followers answered questions about demographic characteristics, virtual interaction with their leaders, their workload, and the extent to which their leaders practice empowering leadership; at the same time, leaders answered questions about demographic characteristics and their job satisfaction. At Time 2 (T2), leaders provided data on their own workload. Finally, at Time 3 (T3), leaders rated their emotional exhaustion and work-family conflict. A moderated mediation model was tested using PROCESS Model 7 in R. The findings reveal that when empowering leadership coincides with a significant increase in follower workload, the leader's workload increases as well; this increased leader workload, in turn, carries the interactive effect over into the emotional exhaustion and work-family conflict experienced by leaders.
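The moderation hypothesis at the heart of this model can be sketched with simulated data (illustrative only; the study itself used PROCESS Model 7 in R on the dyadic survey data, and all variable names below are invented): the empowering-leadership → leader-workload slope is estimated separately for followers with high versus low workload.

```python
# Hypothetical "simple slopes" sketch of the first-stage moderation:
# empowering leadership raises leader workload only when follower
# workload is high. Data are simulated, not the study's survey data.
import random

random.seed(1)
rows = []
for _ in range(400):
    empowering = random.gauss(0, 1)     # follower-rated empowering leadership (T1)
    follower_load = random.gauss(0, 1)  # follower workload (moderator, T1)
    # hypothesized crossover interaction plus noise
    leader_load = 0.5 * empowering * follower_load + random.gauss(0, 0.5)
    rows.append((empowering, follower_load, leader_load))

def slope(sample):
    """OLS slope of leader workload on empowering leadership."""
    xs = [r[0] for r in sample]
    ys = [r[2] for r in sample]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

high = [r for r in rows if r[1] > 0.5]   # followers with high workload
low = [r for r in rows if r[1] < -0.5]   # followers with low workload
print(f"slope (high follower load): {slope(high):+.2f}")
print(f"slope (low follower load):  {slope(low):+.2f}")
```

Under the simulated crossover pattern, the slope is positive in the high-workload subsample and negative in the low-workload subsample, mirroring the hypothesized interaction.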
This research offers various contributions to the leadership literature. While empowering leadership has been commonly associated with positive outcomes, my study reveals that it can also lead to negative outcomes. In addition, it shifts the focus of existing research from the effect of empowering leadership on followers to the consequences that it might have for leaders themselves. Overall, my research underscores the need for leaders to consider the potential counterproductive effects of empowering leadership and tailor their approach accordingly.
Human-induced climate change is impacting the global water cycle by, e.g., causing changes in precipitation patterns, evapotranspiration dynamics, cryosphere shrinkage, and complex streamflow trends. These changes, coupled with the increased frequency and severity of extreme hydrometeorological events like floods, droughts, and heatwaves, contribute to hydroclimatic disasters, posing significant implications for local and global infrastructure, human health, and overall productivity.
In the tropical Andes, climate change is evident through warming trends, glacier retreats, and shifts in precipitation patterns, leading to altered risks of floods and droughts, e.g., in the upper Amazon River basin. Projections for the region indicate rising temperatures, potential glacier disappearance or substantial shrinkage, and altered streamflow patterns, highlighting challenges in water availability due to these expected changes and growing human water demand. The evolving trends in hydroclimatic conditions in the tropical Andes present significant challenges to socioeconomic and environmental systems, emphasizing the need for a comprehensive understanding to guide effective adaptation policies and strategies in response to the impacts of climate change in the region.
The main objective of this thesis is to investigate current hydrological dynamics in the tropical Andes of Peru and Ecuador and their responses to climate change. Given the scarcity of hydrometeorological data in the region, this objective was accomplished through a comprehensive data preparation and analysis in combination with hydrological modeling using the Soil and Water Assessment Tool (SWAT) eco-hydrological model. In this context, the initial steps involved assessing, identifying, and/or generating more reliable climate input data to address data limitations.
The thesis introduces RAIN4PE, a high-resolution precipitation dataset for Peru and Ecuador, developed by merging satellite, reanalysis, and ground-based data with surface elevation through the random forest method. Further adjustments of precipitation estimates were made for catchments influenced by fog/cloud water input on the eastern side of the Andes using streamflow data and applying the method of reverse hydrology. RAIN4PE surpasses other global and local precipitation datasets, showcasing superior reliability and accuracy in representing precipitation patterns and simulating hydrological processes across the tropical Andes. This establishes it as the optimal precipitation product for hydrometeorological applications in the region.
Due to the significant biases and limitations of global climate models (GCMs) in representing key atmospheric variables over the tropical Andes, this study developed regionally adapted GCM simulations specifically tailored for Peru and Ecuador. These simulations are known as the BASD-CMIP6-PE dataset, and they were derived using reliable, high-resolution datasets like RAIN4PE as reference data. The BASD-CMIP6-PE dataset shows notable improvements over raw GCM simulations, reflecting enhanced representations of observed climate properties and accurate simulation of streamflow, including high and low flow indices. This renders it suitable for assessing regional climate change impacts on agriculture, water resources, and hydrological extremes.
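A common ingredient of bias adjustment, useful for building intuition about what such regionalized simulations involve, is empirical quantile mapping: each simulated value is replaced by the observed value at the same rank in the historical distributions. The sketch below is a generic illustration under simplifying assumptions, not the trend-preserving BASD method used for BASD-CMIP6-PE.

```python
# Minimal empirical quantile mapping: map a model value to the observed
# value at the same empirical quantile. Illustrative only; real bias
# adjustment (e.g. for BASD-CMIP6-PE) is considerably more elaborate.
import bisect

def quantile_map(value, sim_hist, obs_hist):
    """Map a model value onto the observed distribution by rank."""
    sim_sorted = sorted(sim_hist)
    obs_sorted = sorted(obs_hist)
    # empirical quantile of the value within the simulated climatology
    rank = bisect.bisect_left(sim_sorted, value) / len(sim_sorted)
    # corresponding value in the observed climatology
    idx = min(int(rank * len(obs_sorted)), len(obs_sorted) - 1)
    return obs_sorted[idx]

# Toy example: the model is uniformly 2 mm/day too dry.
obs = [3.0, 5.0, 7.0, 9.0, 11.0]
sim = [1.0, 3.0, 5.0, 7.0, 9.0]
corrected = [quantile_map(v, sim, obs) for v in sim]
print(corrected)
```

In this toy case the constant dry bias is removed exactly; with real data the mapping corrects the full distribution, not just the mean.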
In addition to generating more accurate climatic input data, a reliable hydrological model is essential for simulating watershed hydrological processes. To tackle this challenge, the thesis presents an innovative multiobjective calibration framework integrating remote sensing vegetation data, baseflow index, discharge goodness-of-fit metrics, and flow duration curve signatures. In contrast to traditional calibration strategies relying solely on discharge goodness-of-fit metrics, this approach enhances the simulation of vegetation, streamflow, and the partitioning of flow into surface runoff and baseflow in a typical Andean catchment. The refined hydrological model calibration strategy was applied to conduct reliable simulations and understand current and future hydrological trajectories in the tropical Andes.
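To make the idea of combining calibration objectives concrete, the sketch below pairs one discharge goodness-of-fit metric (Nash-Sutcliffe efficiency) with one illustrative flow-duration-curve signature. The weights and the chosen signature are hypothetical, not the thesis's exact formulation.

```python
# Hedged sketch of two ingredients of a multiobjective calibration:
# a goodness-of-fit metric (NSE) and a flow-duration-curve signature
# (here: relative error of the 90%-exceedance low flow).
def nse(obs, sim):
    """Nash-Sutcliffe efficiency (1 = perfect fit)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def exceedance_flow(series, percent):
    """Flow exceeded `percent` of the time (a point on the FDC)."""
    ranked = sorted(series, reverse=True)
    idx = min(int(percent / 100 * len(ranked)), len(ranked) - 1)
    return ranked[idx]

def combined_objective(obs, sim, w=0.5):
    """Weighted combination: maximize NSE, minimize signature error."""
    q90_err = abs(exceedance_flow(sim, 90) - exceedance_flow(obs, 90)) \
              / exceedance_flow(obs, 90)
    return w * nse(obs, sim) - (1 - w) * q90_err

obs = [10.0, 8.0, 6.0, 5.0, 4.0, 3.0, 2.5, 2.0, 1.5, 1.0]
perfect = combined_objective(obs, obs)  # perfect simulation scores w * 1.0
print(perfect)
```

Optimizing such a combined score rewards parameter sets that reproduce both the overall hydrograph and the low-flow behaviour, rather than the hydrograph alone.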
By establishing a region-suitable and thoroughly tested hydrological model with high-resolution, reliable precipitation input from RAIN4PE, this study provides new insights into the spatiotemporal distribution of water balance components in Peru and its transboundary catchments. Key findings include an estimate of Peru's total renewable freshwater resource (a total river runoff of 62,399 m³/s), with the Peruvian Amazon basin contributing 97.7%. Within this basin, the Amazon-Andes transition region emerges as a pivotal hotspot for water yield (precipitation minus evapotranspiration), characterized by abundant rainfall and lower atmospheric water demand and evapotranspiration. This finding underlines its paramount role in influencing the hydrological variability of the entire Amazon basin.
Subsurface hydrological pathways, particularly baseflow from aquifers, strongly influence water yield in lowland and Andean catchments, sustaining streamflow, especially during the extended dry season. Water yield demonstrates an elevation- and latitude-dependent increase in the Pacific Basin (catchments draining into the Pacific Ocean), while it follows an unimodal curve in the Peruvian Amazon Basin, peaking in the Amazon-Andes transition region. This observation indicates an intricate relationship between water yield and elevation.
In Amazon lowlands rivers, particularly in the Ucayali River, floodplains play a significant role in shaping streamflow seasonality by attenuating and delaying peak flows for up to two months during periods of high discharge. This observation underscores the critical importance of incorporating floodplain dynamics into hydrological simulations and river management strategies for accurate modeling and effective water resource management.
Hydrological responses vary across different land use types in high Andean catchments. Pasture areas exhibit the highest water yield, while agricultural areas and mountain forests show lower yields, emphasizing the importance of puna (high-altitude) ecosystems, such as pastures, páramos, and bofedales, in regulating natural storage.
Projected future hydrological trajectories were analyzed by driving the hydrological model with regionalized GCM simulations provided by the BASD-CMIP6-PE dataset. The analysis considered sustainable (low warming, SSP1-2.6) and fossil fuel-based development (high-end warming, SSP5-8.5) scenarios for the mid (2035-2065) and end (2065-2095) of the century. The projected changes in water yield and streamflow across the tropical Andes exhibit distinct regional and seasonal variations, particularly amplified under a high-end warming scenario towards the end of the century. Projections suggest year-round increases in water yield and streamflow in the Andean regions and decreases in the Amazon lowlands, with exceptions such as the northern Amazon expecting increases during wet seasons. Despite these regional differences, the upper Amazon River's streamflow is projected to remain relatively stable throughout the 21st century. Additionally, projections anticipate a decrease in low flows in the Amazon lowlands and an increased risk of high flows (floods) in the Andean and northern Amazon catchments.
This thesis significantly contributes to enhancing climatic data generation, overcoming regional limitations that previously impeded hydrometeorological research, and creating new opportunities. It plays a crucial role in advancing hydrological model calibration, improving the representation of internal hydrological processes, and achieving accurate results for the right reasons. Novel insights into current hydrological dynamics in the tropical Andes are fundamental for improving water resource management. The anticipated intensified changes in water flows and hydrological extreme patterns under a high-end warming scenario highlight the urgency of implementing emissions mitigation and adaptation measures to address the heightened impacts on water resources.
In fact, the new datasets (RAIN4PE and BASD-CMIP6-PE) have already been utilized by researchers and experts in regional and local-scale projects and catchments in Peru and Ecuador. For instance, they have been applied in river catchments such as Mantaro, Piura, and San Pedro to analyze local historical and future developments in climate and water resources.
Prediction is often regarded as a central and domain-general aspect of cognition. This proposal extends to language, where predictive processing might enable the comprehension of rapidly unfolding input by anticipating upcoming words or their semantic features. To make these predictions, the brain needs to form a representation of the predictive patterns in the environment. Predictive processing theories suggest a continuous learning process that is driven by prediction errors, but much is still to be learned about this mechanism in language comprehension. This thesis therefore combined three electroencephalography (EEG) experiments to explore the relationship between prediction and implicit learning at the level of meaning.
Results from Study 1 support the assumption that the brain constantly infers and updates probabilistic representations of the semantic context, potentially across multiple levels of complexity. N400 and P600 brain potentials could be predicted by semantic surprise based on a probabilistic estimate of previous exposure and on a more complex probability representation, respectively.
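As a toy illustration of the exposure-based estimate (a hypothetical count-based sketch, not the models actually fitted in Study 1), semantic surprise can be operationalized as surprisal, -log₂ P(word | context):

```python
# Surprisal from simple exposure counts: rarely encountered
# continuations carry more bits of surprise. Counts are invented.
import math

# hypothetical exposure counts of continuations for one sentence context
exposure = {"coffee": 8, "tea": 4, "ink": 1}
total = sum(exposure.values())

def surprisal(word):
    """-log2 of the relative frequency of the continuation."""
    return -math.log2(exposure[word] / total)

for word in exposure:
    print(f"{word}: {surprisal(word):.2f} bits")
```

The unexpected continuation ("ink") yields the largest surprisal, the pattern mirrored by larger N400 amplitudes for unexpected words.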
Subsequent work investigated the influence of prediction errors on the update of semantic predictions during sentence comprehension. In line with error-based learning, unexpected sentence continuations in Study 2 – characterized by large N400 amplitudes – were associated with increased implicit memory compared to expected continuations. Further, Study 3 indicates that prediction errors not only strengthen the representation of the unexpected word, but also update specific predictions made from the respective sentence context. The study additionally provides initial evidence that the amount of unpredicted information as reflected in N400 amplitudes drives this update of predictions, irrespective of the strength of the original incorrect prediction.
Together, these results support a central assumption of predictive processing theories: A probabilistic predictive representation at the level of meaning that is updated by prediction errors. They further propose the N400 ERP component as a possible learning signal. The results also emphasize the need for further research regarding the role of the late positive ERP components in error-based learning. The continuous error-based adaptation described in this thesis allows the brain to improve its predictive representation with the aim to make better predictions in the future.
On the effects of disorder on the ability of oscillatory or directional dynamics to synchronize
(2024)
In this thesis I present a collection of publications of my work, containing analytic results and observations from numerical experiments on the effects of various inhomogeneities on the ability of coupled oscillators to synchronize their collective dynamics. Most of these works concern the effects of Gaussian and non-Gaussian noise acting on the phase of autonomous oscillators (Secs. 2.1-2.4) or on the direction of higher-dimensional state vectors (Secs. 2.5, 2.6). I obtain exact and approximate solutions to the non-linear equations governing the distributions of phases, or perform a linear stability analysis of the uniform distribution to obtain the transition point from a completely disordered state to partial order or more complicated collective behavior. Other inhomogeneities that can affect the synchronization of coupled oscillators are irregular, chaotic oscillations or a complex, possibly random structure of the coupling network. In Section 2.9 I present a new method to define the phase and frequency linear response functions for chaotic oscillators. In Sections 2.4, 2.7 and 2.8 I study synchronization in complex networks of coupled oscillators. Each section in Chapter 2 (Manuscripts) is devoted to one research paper and begins with a list of the main results, a description of my contributions to the work, and a short account of the scientific context, i.e. the questions and challenges that started the research and the relation of the work to my other research projects. The manuscripts in this thesis are reproductions of the arXiv versions, i.e. preprints under the Creative Commons licence.
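The basic setting of noisy phase oscillators with mean-field coupling can be illustrated with a toy simulation (a minimal sketch with invented parameters, not one of the thesis's models): identical Kuramoto oscillators are pulled together by the coupling while Gaussian phase noise disorders them, and the degree of synchrony is tracked by the order parameter r = |⟨e^{iθ}⟩|.

```python
# Euler-Maruyama simulation of identical Kuramoto phase oscillators
# with mean-field coupling K and Gaussian phase noise of intensity D.
# For K well above the critical coupling, a partially ordered state
# with r close to 1 emerges; for strong noise, r stays near 0.
import cmath
import math
import random

random.seed(0)
N, K, D, dt, steps = 200, 2.0, 0.3, 0.01, 3000
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_parameter(phases):
    """Kuramoto order parameter r = |mean of exp(i*theta)|."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

for _ in range(steps):
    z = sum(cmath.exp(1j * p) for p in theta) / N
    r, psi = abs(z), cmath.phase(z)
    theta = [p + dt * K * r * math.sin(psi - p)
             + math.sqrt(2 * D * dt) * random.gauss(0, 1)
             for p in theta]

print(f"order parameter r = {order_parameter(theta):.2f}")
```

With these parameters the coupling dominates the noise, so the completely disordered initial state relaxes to partial order with a large r; raising D past the transition point would keep the uniform phase distribution stable instead.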
Data preparation stands as a cornerstone in the landscape of data science workflows, commanding a significant portion—approximately 80%—of a data scientist's time. The extensive time consumption in data preparation is primarily attributed to the intricate challenge faced by data scientists in devising tailored solutions for downstream tasks. This complexity is further magnified by the inadequate availability of metadata, the often ad-hoc nature of preparation tasks, and the necessity for data scientists to grapple with a diverse range of sophisticated tools, each presenting its unique intricacies and demands for proficiency.
Previous research in data management has traditionally concentrated on preparing the content within the columns and rows of a relational table, addressing tasks such as string disambiguation, date standardization, or numeric value normalization, commonly referred to as data cleaning. This focus assumes a perfectly structured input table. Consequently, the mentioned data cleaning tasks can be applied effectively only after the table has been successfully loaded into the respective data cleaning environment, typically in the later stages of the data processing pipeline.
While current data cleaning tools are well-suited for relational tables, extensive data repositories frequently contain data stored in plain text files, such as CSV files, due to their adaptable standard. Consequently, these files often exhibit tables with a flexible layout of rows and columns, lacking a relational structure. This flexibility often results in data being distributed across cells in arbitrary positions, typically guided by user-specified formatting guidelines.
Effectively extracting and leveraging these tables in subsequent processing stages necessitates accurate parsing. This thesis emphasizes what we define as the “structure” of a data file—the fundamental characters within a file essential for parsing and comprehending its content. Concentrating on the initial stages of the data preprocessing pipeline, this thesis addresses two crucial aspects: comprehending the structural layout of a table within a raw data file and automatically identifying and rectifying any structural issues that might hinder its parsing. Although these issues may not directly impact the table's content, they pose significant challenges in parsing the table within the file.
Our initial contribution comprises an extensive survey of commercially available data preparation tools. This survey thoroughly examines their distinct features, the features they lack, and the continued necessity for preliminary data processing despite these tools. The primary goal is to elucidate the current state of the art in data preparation systems while identifying areas for enhancement. Furthermore, the survey explores the challenges encountered in data preprocessing, emphasizing opportunities for future research and improvement.
Next, we propose a novel data preparation pipeline designed for detecting and correcting structural errors. The aim of this pipeline is to assist users at the initial preprocessing stage by ensuring the correct loading of their data into their preferred systems. Our approach begins by introducing SURAGH, an unsupervised system that utilizes a pattern-based method to identify dominant patterns within a file, independent of external information, such as data types, row structures, or schemata. By identifying deviations from the dominant pattern, it detects ill-formed rows. Subsequently, our structure correction system, TASHEEH, gathers the identified ill-formed rows along with dominant patterns and employs a novel pattern transformation algebra to automatically rectify errors. Our pipeline serves as an end-to-end solution, transforming a structurally broken CSV file into a well-formatted one, usually suitable for seamless loading.
Finally, we introduce MORPHER, a user-friendly GUI integrating the functionalities of both SURAGH and TASHEEH. This interface empowers users to access the pipeline's features through visual elements. Our extensive experiments demonstrate the effectiveness of our data preparation systems, requiring no user involvement. Both SURAGH and TASHEEH outperform existing state-of-the-art methods significantly in both precision and recall.
This study aims to convey the ancient relationship between humans and their natural environment in Latin instruction and to compare it with the present-day situation. That relationship is explored through the example of ancient mining, a particularly vivid field of environmental history: it is highly topical and offers great potential for drawing lessons for the present.
A teaching concept is presented that at the same time analyses human perceptions of nature. First, the heterogeneity of this perception in antiquity is demonstrated and related to the criticism of mining voiced at the time. The following aspects are then addressed: 1. ancient mining technology and practice, 2. the working conditions of the time, 3. the raw materials extracted and their uses, and 4. the consequences of mining for humans and the environment. The didactic part consists of a plan for three double lessons; it contains the teaching materials, the accompanying explanations, and the expected outcomes.
The professional knowledge of pre-service primary school teachers in the area of the „Haus der Vierecke" (house of quadrilaterals)
(2024)
The professionalisation of future teachers, a key lever for school education, is an essential task of university teaching. It forms one pillar of the university reform project "PSI-Potsdam" within the "Qualitätsoffensive Lehrerbildung". The goal is quality assurance through the evaluation and further development of courses, guided by design principles for imparting professional knowledge.
This thesis focuses on the effectiveness of the course "Geometrie und ihre Didaktik 1 und 2" and examines, by way of example, to what extent pre-service primary mathematics teachers have acquired the subject-matter and didactical knowledge on concept formation targeted there, using the house of quadrilaterals as an example. Building adequate mental models of the various types of quadrilaterals and relating them to one another hierarchically requires an active process, in line with the didactic model of learning geometric concepts, and therefore poses a difficulty for learners at school and university alike.
To answer the research question, a qualitative study with a mixed-methods design first surveyed 95 students in writing about their knowledge of the topic. A focus-group interview was then conducted to identify learning hurdles and difficulties. The data were analysed with computer-assisted qualitative content analysis.
The results reveal a wide range of competence levels across all relevant facets. Deficits, particularly in the form of misconceptions, emerged in the required perspective-taking, identification of causes, and model-guided proposals for their prevention. There were also difficulties in applying and integrating the required professional knowledge across all knowledge components considered. From this, suggestions are derived, on the one hand, for developing the course in order to strengthen the subject-matter basis of future teachers; these include handling prototypical representations more sensitively and strengthening students' concept formation, for instance by making the relationships within the house of quadrilaterals explicit at a meta-level within the spiral curriculum. On the other hand, suggestions concern the study design, specifically the structure of the survey for eliciting the targeted professional knowledge; among other things, an explicit elicitation of students' own conceptions and a reformulation of the knowledge-test task using operators are recommended.
"Over the past decades, the call for sustainable development has grown ever louder in the face of numerous global challenges affecting all of humanity" (Kropp, 2019, p. 4).
Education for sustainable development (German: Bildung für nachhaltige Entwicklung, BNE) aims to empower people to confront these global challenges actively, to help shape their own future, and to take responsibility for the future of coming generations. Primary school Sachunterricht (general science and social studies) likewise faces the task of translating the principles of BNE into classroom practice. A central question is which approaches to this perspective-connecting topic are both motivating for pupils and educationally effective. Beekeeping, if implemented appropriately, can provide such an approach within school teaching.
Volume 3 of the Potsdamer Beiträge zur Innovation des Sachunterrichts, oriented towards school practice, therefore uses beekeeping as an example to present a concept for how primary school Sachunterricht can enable children's practical learning activity in line with the goals, dimensions, and competence expectations of education for sustainable development. As a foundational work, the volume addresses all teachers of Sachunterricht and its reference subjects, as well as other interested readers.
Quantified Self, the proactive self-tracking of individuals, has grown from a niche application into a mass phenomenon in recent years. Users today have a wide range of technical aids at their disposal, such as smartphones, fitness trackers, and health apps, which permit nearly seamless monitoring of various contextual factors of an individual's everyday life.
This thesis therefore addresses, among other things, the question of how far this intensive, self-initiated engagement, particularly with health-related data that are widely regarded as objective and thus reliable, can increase the health literacy of such active individuals. It further examines to what extent the new technologies can deepen specific medical insights and, in consequence, change the resulting treatment processes.
While the origins of Quantified Self lie in the second (privately financed) healthcare market, this thesis asks what structural, personnel, and procedural points of contact will exist in the first (statutory) healthcare market once a potential patient, in a more emancipated manner, wishes or demands to integrate the health data they have collected as comprehensively as possible into medical treatment.
On the one hand, current developments in the second healthcare market are examined, which are characterised by high dynamism and considerable opacity. On the other hand stands the first healthcare market, regarded as heavily regulated and only weakly digitalised, with its long development cycles and the pronounced particular interests of its various stakeholders.
In this context, current developments in the underlying legal framework are examined, especially with regard to more patient-centred and digitalised norms, with the Digitale-Versorgung-Gesetz (Digital Healthcare Act) playing a particularly important role.
The aim of the thesis is a deeper understanding of the interactions at the interface between the two healthcare markets with respect to the use of self-tracking technologies, in order to identify future business potential for existing providers or for new entrants to the market.
The central method is a Delphi study that, in an interprofessional approach, seeks to outline a picture of these still very young developments for the year 2030. The results are embedded in an examination of the general societal acceptance of the changes outlined.
Due to their sessile lifestyle, plants are constantly exposed to pathogens and possess a multi-layered immune system that prevents infection. The first layer of immunity, called pattern-triggered immunity (PTI), enables plants to recognise highly conserved molecules present in pathogens, conferring immunity against non-adapted pathogens. Adapted pathogens interfere with PTI; however, the second layer of plant immunity can recognise these virulence factors, resulting in a constant evolutionary battle between plant and pathogen. Xanthomonas campestris pv. vesicatoria (Xcv) is the causal agent of bacterial leaf spot disease in tomato and pepper plants. Like many Gram-negative bacteria, Xcv possesses a type-III secretion system, which it uses to translocate type-III effectors (T3Es) into plant cells. Xcv has over 30 T3Es that interfere with the host's immune response and are important for successful infection. One such effector is the Xanthomonas outer protein M (XopM), which shows no similarity to any other known protein. The characterisation of XopM and its role in virulence was the focus of this work.
While screening a tobacco cDNA library for potential host target proteins, the vesicle-associated membrane protein (VAMP)-associated protein 1-2 like (VAP12) was identified. The interaction between XopM and VAP12 was confirmed in the model species Nicotiana benthamiana and Arabidopsis as well as in tomato, a Xcv host. As plants possess multiple VAP proteins, it was determined that the interaction of XopM and VAP is isoform specific.
It could be confirmed that the major sperm protein (MSP) domain of NtVAP12 is sufficient for binding XopM and that binding can be disrupted by substituting a single amino acid (T47) within this domain. Most VAP interactors have at least one FFAT (two phenylalanines [FF] in an acidic tract) related motif; screening the amino acid sequence of XopM revealed two FFAT-related motifs. Substitution of the second residue of each FFAT motif (Y61/F91) disrupts NtVAP12 binding, suggesting that these motifs cooperatively mediate the interaction. Structural modelling using AlphaFold indicated that the unstructured N-terminus of XopM binds NtVAP12 at its MSP domain, which was further confirmed by the generation of truncated XopM variants.
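As a purely illustrative aside, motif screening of this kind can be sketched as a pattern scan over the amino acid sequence. The sequence and the regular expression below are invented for the example (the canonical FFAT consensus is roughly E-F-F-D-A-x-E, with an aromatic F/Y tolerated in FFAT-related motifs); real FFAT detection, and the motifs actually found in XopM, rely on more permissive position-weighted scoring.

```python
# Toy scan for FFAT-like cores: acidic residue, two aromatics,
# acidic-A-x-acidic. Sequence and pattern are hypothetical, NOT the
# real XopM sequence or the published FFAT scoring scheme.
import re

seq = "MSTDEYFDACEQQGGSDDFFDAKEPL"  # invented toy sequence

ffat_like = re.compile(r"[DE][FY]{2}[DE]A.[DE]")

# report 1-based start positions of candidate motifs
hits = [(m.start() + 1, m.group()) for m in ffat_like.finditer(seq)]
print(hits)
```

On the toy sequence the scan finds two FFAT-like cores, echoing the situation in XopM, where two such motifs appear to cooperate in VAP binding.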
Infection of pepper leaves with a XopM-deficient Xcv strain did not result in reduced virulence compared to the Xcv wild type, showing that the function of XopM during infection is redundant. Virus-induced gene silencing of NbVAP12 in N. benthamiana plants also did not affect Xcv virulence, further indicating that the interaction with VAP12 is non-essential for Xcv virulence. Despite these findings, ectopic expression of wild-type XopM and of XopMY61A/F91A in transgenic Arabidopsis seedlings enhanced the growth of a non-pathogenic Pseudomonas syringae pv. tomato (Pst) DC3000 strain. XopM was found to interfere with the PTI response, allowing Pst growth independently of its binding to VAP. Furthermore, transiently expressed XopM suppressed the production of reactive oxygen species (ROS; one of the earliest PTI responses) in N. benthamiana leaves. The FFAT double mutant XopMY61A/F91A as well as the C-terminal truncation variant XopM106-519 could still suppress the ROS response, while the N-terminal variant XopM1-105 did not; suppression of ROS production is therefore independent of VAP binding. In addition, tagging the C-terminal variant of XopM with a nuclear localisation signal (NLS; NLS-XopM106-519) resulted in significantly higher ROS production than with the membrane-localising XopM106-519 variant, indicating that XopM-induced ROS suppression is localisation dependent.
To further characterise XopM, mass spectrometry was used to identify post-translational modifications (PTMs) and potential interaction partners. PTM analysis revealed that XopM contains up to 21 phosphorylation sites, which could influence VAP binding. Furthermore, proteins of the Rab family were identified as potential plant interaction partners. Rab proteins serve a multitude of functions, including vesicle trafficking, and have previously been identified as T3E host targets. Taking this into account, a model of XopM virulence was proposed in which XopM anchors itself to VAP proteins, potentially to gain access to plasma-membrane-associated proteins. XopM possibly interferes with vesicle trafficking, which in turn suppresses ROS production through an as yet unknown mechanism.
In this work it was shown that XopM targets VAP proteins. The data collected suggest that this T3E uses VAP12 to anchor itself in the right place to carry out its function. While more work is needed to determine how XopM contributes to the virulence of Xcv, this study sheds light on how adapted pathogens overcome the immune response of their hosts. It is hoped that such knowledge will contribute to the development of Xcv-resistant crops in the future.
Efraim Frisch (1873–1942) and Albrecht Mendelssohn Bartholdy (1874–1936) were, in the classical age of the intellectuals, (among other things) journal entrepreneurs and founders of the little magazines Der Neue Merkur (1914–1916/1919–1925) and Europäische Gespräche (1923–1933). They stand (not only through their journals) for one of the attempts repeatedly undertaken in modernity to activate the resources opened up by the Enlightenment (democratic republicanism and universal, equal rights for all human beings) in the confidence that they could be realized globally. In the era of the Weimar Republic they belonged to those republicans "who took Weimar seriously as a symbol and strove tenaciously and courageously to give the ideal concrete content" (Peter Gay). Their hitherto untold example takes its place in the history of democracy in European modernity, in the history of international societal relations, and in the history of the self-assertion of intellectual autonomy.
Spanning the period from 1900 to around 1940 across the conventional caesuras, the study offers substantial insights into the biographies of Frisch and Mendelssohn Bartholdy, into the Franco-German and European-transatlantic world of the little (literary-political) magazines of the early twentieth century, and into the media-intellectual field of the late German Empire and the Weimar Republic in its humanist-democratic-republican tendency. It also contains new findings on the history of the 'Heidelberger Vereinigung' (the working group for a politics of law) around Prince Max von Baden, on the German peace delegation at Versailles in 1919 and its afterlife in Hamburg, on the Handbuch der Politik, and on the first official publication of records by the German Foreign Office, Die Große Politik der Europäischen Kabinette 1871–1914. Finally, it addresses the efforts of the 'internationalists' of the 1920s to bring about an effective outlawing of wars of aggression.
Archives have the task of preserving knowledge and making it accessible. The collections of the Museum für Naturkunde Berlin (MfN) grew considerably during the era of European colonial expansion. Natural history specimens from all over the world arrived in Berlin, accompanied by scholarly correspondence about them. The traces of these objects and of the correspondence can be followed in the museum's archive. Today, colonial contexts are largely regarded as contexts of injustice whose critical reappraisal is being demanded. To make provenance research possible, it is therefore essential that museums and archives disclose their collections (as far as legally and ethically possible) and grant access to outside researchers.
This master's thesis critically reflects on the respectful handling of archival records from colonial contexts and identifies fields of action for a culturally appropriate treatment of sensitive content. Specifically, the proposed options concern archival records from colonial contexts relating to Australia. Provenance research, sensitivity, multilingualism, Indigenous Cultural and Intellectual Property (ICIP), and platform and interface options for linking data and content are all considered. Against the background of archives as sites of cultural memory, the aim is to reflect on how archival records from colonial contexts should be handled.
Securing needs-based care in old age is one of the decisive tasks of our time. Germany's shortage of skilled workers and its demographic change strain the care system in several ways: in an ageing society, ever more people depend on continuous support, while low birth rates and the associated shrinking share of the working-age population entail a shortage of professional caregivers that is already noticeable today.
To guarantee humane care in the long term, existing resources must be deployed in a more targeted way and additional reserves must be tapped. Many hopes rest on technological innovation: digitalization is expected to make the healthcare system more efficient, for example by using artificial intelligence to simplify or even automate time-consuming processes. In the context of care, the use of robotic assistance systems is under discussion.
For this reason, the Potsdam citizens' conference "Robotik in der Altenpflege?" ("Robotics in elderly care?") was initiated. To shape the future of care together, 3,500 Potsdam citizens were contacted, from whom twenty-five participants were ultimately selected. They convened in spring 2024 to discuss the responsible use of robotics in care.
The declaration presented here is the outcome of the citizens' conference and contains the participants' central positions.
The citizens' conference is part of the project E-cARE ("Ethics Guidelines for Socially Assistive Robots in Elderly Care: An Empirical-Participatory Approach"), carried out by the Junior Professorship for Medical Ethics with a Focus on Digitalization at the Faculty of Health Sciences Brandenburg, University of Potsdam.
Massive stars (M_ini > 8 M_sol) are the key feedback agents within galaxies, shaping their surroundings via powerful winds, ionizing radiation, and explosive supernovae. Most massive stars are born in binary systems, where interactions with their companions significantly alter their evolution and the feedback they deposit in their host galaxy. Understanding binary evolution, particularly in low-metallicity environments that serve as proxies for the early Universe, is crucial for interpreting the rest-frame ultraviolet spectra of high-redshift galaxies observed by telescopes such as Hubble and James Webb.
This thesis tackles this challenge by investigating in detail massive binaries in the low-metallicity environment of the Small Magellanic Cloud (SMC). From ultraviolet and multi-epoch optical spectroscopic data, we uncovered post-interaction binaries. To comprehensively characterize these binary systems, their stellar winds, and their orbital parameters, we use a multifaceted approach. The Potsdam Wolf-Rayet stellar atmosphere code is employed to obtain the stellar and wind parameters of the stars. Additionally, we perform consistent light-curve and radial-velocity fitting with the Physics of Eclipsing Binaries software, allowing for the independent determination of orbital parameters and component masses. Finally, we use these results to challenge the standard picture of stellar evolution and to improve our understanding of low-metallicity stellar populations by computing binary evolution models with the Modules for Experiments in Stellar Astrophysics code.
We discovered the first four O-type post-interaction binaries in the SMC (Chapters 2, 5, and 6). Their primary stars have temperatures similar to those of other OB stars and reside far from the helium zero-age main sequence, challenging the traditional view of binary evolution. Our stellar evolution models suggest this may be due to enhanced mixing after core-hydrogen burning. Furthermore, we discovered the most massive binary system known to be undergoing mass transfer (Chapter 3), offering a unique opportunity to test mass-transfer efficiency under extreme conditions. Our binary evolution calculations revealed unexpected evolutionary pathways for accreting stars in binaries, potentially providing the missing link to understanding the observed Wolf-Rayet population in the SMC (Chapter 4). The results presented in this thesis unveil the properties of massive binaries at low metallicity, which challenge both the way the spectra of high-redshift galaxies are currently analyzed and our understanding of massive-star feedback within galaxies.
Astrophysical shocks, driven by explosive events such as supernovae, efficiently accelerate charged particles to relativistic energies. The majority of these shocks occur in collisionless plasmas, where the energy transfer is dominated by particle-wave interactions. Strong nonrelativistic shocks found in supernova remnants are plausible sites of galactic cosmic-ray production, and the observed emission indicates the presence of nonthermal electrons. To participate in the primary mechanism of energy gain, diffusive shock acceleration, electrons must have highly suprathermal energies, implying a need for very efficient pre-acceleration. This poorly understood aspect of shock acceleration theory is known as the electron injection problem. Studying electron-scale phenomena requires fully kinetic particle-in-cell (PIC) simulations, which describe collisionless plasma from first principles.
Most published studies consider a homogeneous upstream medium, but turbulence is ubiquitous in astrophysical environments; it is typically driven at magnetohydrodynamic scales and cascades down to kinetic scales. For the first time, I investigate how preexisting turbulence affects electron acceleration at nonrelativistic shocks using the fully kinetic approach. To accomplish this, I developed a novel simulation framework for studying shocks that propagate in turbulent media: slabs of turbulent plasma are simulated separately and then continuously inserted into a shock simulation, which requires matching the plasma slabs at the interface. A new procedure for matching electromagnetic fields and currents prevents numerical transients, and the plasma evolves self-consistently. The versatility of this framework has the potential to render simulations more consistent with turbulent systems in various astrophysical environments.
In this thesis, I present the results of 2D3V PIC simulations of high-Mach-number nonrelativistic shocks with preexisting compressive turbulence in an electron-ion plasma. The chosen amplitudes of the density fluctuations (≲15%) are consistent with in situ measurements in the heliosphere and the local interstellar medium. I explored how these fluctuations affect the dynamics of upstream electrons, the driving of plasma instabilities, and electron heating and acceleration. My results indicate that while the presence of turbulence enhances variations in the upstream magnetic field, their levels remain too low to significantly influence the behavior of electrons at perpendicular shocks. The situation is different at oblique shocks, however: an external magnetic field inclined at an angle of 50° ≲ θ_Bn ≲ 75° relative to the shock normal allows fast electrons to escape toward the upstream region. An extended electron foreshock region forms, where these particles drive various instabilities. Results for an oblique shock with θ_Bn = 60° propagating in preexisting compressive turbulence show that the foreshock becomes significantly shorter and the shock-reflected electrons attain higher temperatures. Furthermore, the energy spectrum of downstream electrons shows a well-pronounced nonthermal tail that follows a power law with an index of up to -2.3.
The methods and results presented in this thesis could serve as a starting point for more realistic first-principles modeling of interactions between shocks and turbulence in plasmas.
Condensation and crystallization are omnipresent phenomena in nature. The formation of droplets or crystals on a solid surface is a familiar process which, beyond its scientific interest, is required in many technological applications. In recent years, experimental techniques have been developed which allow patterning a substrate with surface domains of molecular thickness, surface area on the mesoscopic scale, and different wettabilities (i.e., different degrees of preference for a substance that is in contact with the substrate). The existence of such patterned surfaces has led to increased theoretical efforts to understand wetting phenomena in these systems.
In this thesis, we deal with problems related to the equilibrium of phases (e.g., liquid-vapor coexistence) and the kinetics of phase separation in the presence of chemically patterned surfaces. Two different cases are considered: (i) patterned surfaces in contact with liquid and vapor, and (ii) patterned surfaces in contact with a crystalline phase. One of the problems that we have studied is the following: It is widely believed that if air containing water vapor is cooled to its dew point, droplets of water form immediately. Although common experience seems to support this view, it is not correct; only when air is cooled well below its dew point does the phase transition occur immediately. A vapor cooled slightly below its dew point is in a metastable state, meaning that the liquid phase is more stable than the vapor, but the formation of droplets requires some time, which can be very long.
It was first pointed out by J. W. Gibbs that the metastability of a vapor depends on the energy necessary to form a nucleus (a droplet of a critical size). Droplets smaller than the critical size will tend to disappear, while droplets larger than the critical size will tend to grow. This is consistent with an energy barrier that has its maximum at the critical size, as is the case for droplets formed directly in the vapor or in contact with a chemically uniform planar wall. Classical nucleation theory describes the time evolution of the condensation in terms of the random process of droplet growth through this energy barrier. This process is activated by thermal fluctuations, which eventually will form a droplet of the critical size.
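The single-barrier picture described here can be made concrete with the standard expressions of classical nucleation theory for homogeneous condensation (a textbook summary, not a formula taken from the thesis itself):

```latex
\Delta G(r) = 4\pi r^{2}\gamma \;-\; \frac{4}{3}\pi r^{3}\, n\,\Delta\mu ,
\qquad
r^{*} = \frac{2\gamma}{n\,\Delta\mu},
\qquad
\Delta G^{*} = \frac{16\pi \gamma^{3}}{3\,(n\,\Delta\mu)^{2}},
```

where γ is the liquid-vapor surface tension, Δμ the chemical-potential gain per particle of the liquid relative to the supersaturated vapor, and n the particle number density of the liquid. The barrier maximum ΔG* at the critical radius r* controls the nucleation rate through an Arrhenius-like factor, J ∝ exp(−ΔG*/k_B T).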
We consider nucleation of droplets from a vapor on a substrate patterned with easily wettable (lyophilic) circular domains. Under certain conditions of pressure and temperature, the condensation of a droplet on a lyophilic circular domain proceeds through a barrier with two maxima (a double barrier). We have extended classical nucleation theory to account for the kinetics of nucleation through a double barrier, and applied this extension to nucleation on lyophilic circular domains.
Genome-scale metabolic models are mathematical representations of all known reactions occurring in a cell. Combined with constraints based on physiological measurements, these models have been used to accurately predict metabolic fluxes and the effects of perturbations (e.g. knock-outs) and to inform metabolic engineering strategies. Recently, protein-constrained models have been shown to increase predictive potential (especially in overflow metabolism) while alleviating the need for measuring nutrient uptake rates. The resulting modelling frameworks quantify the upkeep cost of a given metabolic flux as the minimum amount of enzyme required for catalysis. These improvements rest on the use of in vitro turnover numbers or in vivo apparent catalytic rates of enzymes for model parameterization. In this thesis, several tools for the estimation and refinement of these parameters based on in vivo proteomics data of Escherichia coli, Saccharomyces cerevisiae, and Chlamydomonas reinhardtii were developed and applied. The difference between in vitro and in vivo catalytic rate measures for the three microorganisms was systematically analyzed. The results for the facultatively heterotrophic microalga C. reinhardtii considerably expanded the apparent catalytic rate estimates for photosynthetic organisms. Our general finding points to a global reduction of enzyme efficiency in heterotrophy compared with other growth scenarios. Independent of the modelled organism, in vivo estimates were shown to improve the accuracy of protein abundance predictions compared with in vitro turnover numbers. To further improve the protein abundance predictions, machine learning models were trained that integrate features derived from protein-constrained modelling and codon usage. Combining the two types of features outperformed single-feature models and yielded good prediction results without relying on experimental transcriptomic data.
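The core enzyme-cost coupling behind protein-constrained models can be sketched in a few lines. All reaction names, flux values, turnover numbers, molecular weights, and the protein budget below are invented for illustration; they are not parameters from the thesis or from any of the models it describes:

```python
# Minimal sketch of a protein (enzyme-capacity) constraint: each flux v_i
# requires at least E_i = v_i / kcat_i of its catalyzing enzyme, and the
# summed enzyme mass must fit within an overall protein budget.

fluxes = {"PGI": 2.0, "PFK": 1.5, "PYK": 3.0}        # mmol/gDW/h (hypothetical)
kcats = {"PGI": 200.0, "PFK": 150.0, "PYK": 300.0}   # 1/s (hypothetical)
mw = {"PGI": 60.0, "PFK": 35.0, "PYK": 55.0}         # kDa = g/mmol (hypothetical)

def min_enzyme_mass(fluxes, kcats, mw):
    """Return the minimum enzyme mass (mg/gDW) implied by each flux."""
    demand = {}
    for rxn, v in fluxes.items():
        v_per_s = v / 3600.0                 # mmol/gDW/h -> mmol/gDW/s
        e_mmol = v_per_s / kcats[rxn]        # mmol enzyme per gDW
        demand[rxn] = e_mmol * mw[rxn] * 1e3  # g -> mg per gDW
    return demand

demand = min_enzyme_mass(fluxes, kcats, mw)
total = sum(demand.values())
budget = 250.0  # mg metabolic protein per gDW (hypothetical)
print(f"total enzyme demand: {total:.4f} mg/gDW, feasible: {total <= budget}")
```

Fitting in vivo apparent catalytic rates, as done in the thesis, amounts to replacing the kcat values above with rates estimated from measured fluxes and proteomics data.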
The presented work reports valuable advances in predicting enzyme allocation in unseen scenarios using protein-constrained metabolic models. It marks the first successful application of this modelling framework in the biotechnologically important taxon of green microalgae, substantially increasing our knowledge of the enzyme catalytic landscape of phototrophic microorganisms.
Organizations are investing billions in innovation and agility initiatives to stay competitive in their increasingly uncertain business environments. Design Thinking, an innovation approach based on human-centered exploration, ideation, and experimentation, has gained increasing popularity. The market for Design Thinking, including software products and general services, is projected to reach US$2,500 million by 2028. A dispersed set of positive outcomes has been attributed to Design Thinking. However, there is no clear understanding of what exactly constitutes the impact of Design Thinking and how it is created. To support a billion-dollar market, it is essential to understand the value Design Thinking brings to organizations, not only to justify large investments but also to continuously improve the approach and its application.
Following a qualitative research approach combined with a systematic literature review, this dissertation offers a structured understanding of Design Thinking impact. The results are organized along two main perspectives of impact: the individual and the organizational. First, insights from qualitative data analysis demonstrate that measuring and assessing the impact of Design Thinking is currently a central challenge for Design Thinking practitioners in organizations. Second, the interview data revealed several effects Design Thinking has on individuals, demonstrating how it can shape boundary management behaviors and enable employees to craft their jobs more actively.
Contributing to innovation management research, the work presented in this dissertation systematically explains Design Thinking impact, allowing other researchers to better locate and integrate their own work. The results advance the theoretical rigor of Design Thinking impact research, offering multiple theoretical underpinnings that explain the variety of Design Thinking impact. Furthermore, this dissertation contains three specific propositions on how Design Thinking creates impact: through integration, enablement, and engagement. Integration refers to how Design Thinking enables organizations by effectively combining things, for example by fostering a balance between exploitation and exploration activities. Through engagement, Design Thinking impacts organizations by involving users and other relevant stakeholders in their work. Finally, Design Thinking creates impact through enablement, making it possible for individuals to enact a specific behavior or experience certain states.
By synthesizing multiple theoretical streams into these three overarching themes, this research can help bridge disciplinary boundaries, for example between business, psychology, and design, and foster future collaborative research. Practitioners benefit from the results, as this thesis details multiple desirable outcomes that can be expected from practicing Design Thinking, such as successful individual job crafting. This allows practitioners to make more evidence-based decisions concerning Design Thinking implementation. Overall, considering multiple levels of impact as well as a broad range of theoretical underpinnings is paramount to understanding and fostering Design Thinking impact.
Plate tectonic boundaries constitute the suture zones between tectonic plates. They are shaped by a variety of distinct and interrelated processes and play a key role in geohazards and georesource formation. Many of these processes have been previously studied, while many others remain unaddressed or undiscovered. In this work, the geodynamic numerical modeling software ASPECT is applied to shed light on further process interactions at continental plate boundaries. In contrast to natural data, geodynamic modeling has the advantage that processes can be directly quantified and that all parameters can be analyzed over the entire evolution of a structure. Furthermore, processes and interactions can be singled out from complex settings because the modeler has full control over all of the parameters involved. To account for the simplifying character of models in general, I have chosen to study generic geological settings with a focus on the processes and interactions rather than precisely reconstructing a specific region of the Earth.
In Chapter 2, 2D models of continental rifts with different crustal thicknesses between 20 and 50 km and extension velocities in the range of 0.5-10 mm/yr are used to obtain a speed limit for the thermal steady-state assumption, commonly employed to address the temperature fields of continental rifts worldwide. Because the tectonic deformation from ongoing rifting outpaces heat conduction, the temperature field is not in equilibrium, but is characterized by a transient, tectonically-induced heat flow signal. As a result, I find that isotherm depths of the geodynamic evolution models are shallower than a temperature distribution in equilibrium would suggest. This is particularly important for deep isotherms and narrow rifts. In narrow rifts, the magnitude of the transient temperature signal limits a well-founded applicability of the thermal steady-state assumption to extension velocities of 0.5-2 mm/yr. Estimation of the crustal temperature field affects conclusions on all temperature-dependent processes ranging from mineral assemblages to the feasible exploitation of a geothermal reservoir.
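The competition between tectonic deformation and heat conduction that underlies this speed limit can be illustrated with a back-of-envelope Péclet number. The length scale and thermal diffusivity below are generic textbook values chosen for illustration, not parameters taken from the models in Chapter 2:

```python
# Péclet number Pe = v * L / kappa compares advective heat transport by
# tectonic deformation (velocity v) with conductive equilibration over a
# length scale L. Pe >~ 1 means the transient thermal signal matters and
# the steady-state assumption becomes questionable.

SECONDS_PER_YEAR = 3.156e7
kappa = 1.0e-6   # thermal diffusivity of crustal rock, m^2/s (typical value)
L = 30e3         # crustal length scale, m (illustrative)

def peclet(v_mm_per_yr):
    v = v_mm_per_yr * 1e-3 / SECONDS_PER_YEAR  # mm/yr -> m/s
    return v * L / kappa

for v in (0.5, 2.0, 10.0):
    print(f"v = {v:4.1f} mm/yr -> Pe = {peclet(v):.2f}")
```

With these values, Pe stays below ~2 for extension velocities of 0.5-2 mm/yr but approaches 10 at 10 mm/yr, consistent with the finding that the steady-state assumption is only defensible at the slow end of the velocity range.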
In Chapter 3, I model the interactions of different rheologies with the kinematics of folding and faulting using the example of fault-propagation folds in the Andean fold-and-thrust belt. The evolution of the velocity fields from geodynamic models is compared with that from trishear models of the same structure. While the latter use only geometric and kinematic constraints on the main fault, the geodynamic models capture viscous, plastic, and elastic deformation in the entire model domain. I find that both models work equally well for early, and thus relatively simple, stages of folding and faulting, while results differ in more complex situations where off-fault deformation and secondary faulting are present. As fault-propagation folds can play an important role in the formation of reservoirs, knowledge of fluid pathways, for example via fractures and faults, is crucial for their characterization.
Chapter 4 deals with a bending transform fault and the interconnections between tectonics and surface processes. In particular, the tectonic evolution of the Dead Sea Fault is addressed where a releasing bend forms the Dead Sea pull-apart basin, while a restraining bend further to the North resulted in the formation of the Lebanese mountains. I ran 3D coupled geodynamic and surface evolution models that included both types of bends in a single setup. I tested various randomized initial strain distributions, showing that basin asymmetry is a consequence of strain localization. Furthermore, by varying the surface process efficiency, I find that the deposition of sediment in the pull-apart basin not only controls basin depth, but also results in a crustal flow component that increases uplift at the restraining bend.
Finally, in Chapter 5, I present the computational basis for adding further complexity to plate boundary models in ASPECT with the implementation of earthquake-like behavior using the rate-and-state friction framework. Although earthquakes happen on a comparatively short time scale, there are many interactions between the seismic cycle and geodynamic processes acting over long time spans. Among other factors, the crustal state of stress, the presence of fluids, and changes in temperature may alter the frictional behavior of a fault segment. My work provides the basis for a realistic setup of the structures and processes involved, which is important for obtaining meaningful estimates of earthquake hazard.
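The rate-and-state friction framework mentioned here is conventionally written in the standard Dieterich-Ruina form (reproduced for reference; the specific formulation implemented in Chapter 5 may differ in detail):

```latex
\mu(V,\theta) = \mu_0 + a \ln\!\frac{V}{V_0} + b \ln\!\frac{V_0\,\theta}{D_c},
\qquad
\frac{\mathrm{d}\theta}{\mathrm{d}t} = 1 - \frac{V\theta}{D_c}
\quad \text{(aging law)},
```

where V is the slip velocity, θ a state variable describing contact history, D_c the characteristic slip distance, and a, b empirical constants; velocity-weakening segments (a − b < 0) can host the stick-slip instabilities that produce earthquake-like behavior.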
While these findings improve our understanding of continental plate boundaries, further development of geodynamic software may help to reveal even more processes and interactions in the future.
Mantodea, commonly known as mantids, have captivated researchers owing to their enigmatic behavior and ecological significance. This order comprises a diverse array of predatory insects, boasting over 2,400 species globally and inhabiting a wide spectrum of ecosystems. In Iran, the mantid fauna displays remarkable diversity, yet numerous facets of this fauna remain poorly understood, with a significant dearth of systematic and ecological research. This substantial knowledge gap underscores the pressing need for a comprehensive study to advance our understanding of Mantodea in Iran and its neighboring regions.
The principal objective of this investigation was to delve into the ecology and phylogeny of Mantodea within these areas. To accomplish this, our research efforts concentrated on three distinct genera within Iranian Mantodea. These genera were selected due to their limited existing knowledge base and feasibility for in-depth study. Our comprehensive methodology encompassed a multifaceted approach, integrating morphological analysis, molecular techniques, and ecological observations.
Our research encompassed a comprehensive revision of the genus Holaptilon, resulting in the description of four previously unknown species. This extensive effort substantially advanced our understanding of the ecological roles played by Holaptilon and refined its systematic classification. Furthermore, our investigation into Nilomantis floweri expanded its known distribution range to include Iran. By conducting thorough biological assessments, genetic analyses, and ecological niche modeling, we obtained invaluable insights into distribution patterns and genetic diversity within this species. Additionally, our research provided a thorough comprehension of the life cycle, behaviors, and ecological niche modeling of Blepharopsis mendica, shedding new light on the distinctive characteristics of this mantid species. Moreover, we contributed essential knowledge about parasitoids that infect mantid ootheca, laying the foundation for future studies aimed at uncovering the intricate mechanisms governing ecological and evolutionary interactions between parasitoids and Mantodea.
Virtual Reality (VR) leads to the highest level of immersion if presented using a 1:1 mapping of virtual space to physical space—also known as real walking. The advent of inexpensive consumer virtual reality (VR) headsets, all capable of running inside-out position tracking, has brought VR to the home. However, many VR applications do not feature full real walking, but instead a less immersive space-saving technique known as instant teleportation. Given that only 0.3% of home users run their VR experiences in spaces larger than 4 m², the most likely explanation is the lack of the physical space required for meaningful use of real walking. In this thesis, we investigate how to overcome this hurdle. We demonstrate how to run 1:1-mapped VR experiences in small physical spaces, and we explore the trade-off between space and immersion. (1) We start with a space limit of 15 cm. We present DualPanto, a device that allows (blind) VR users to experience the virtual world from a 1:1-mapped bird's-eye perspective—by leveraging haptics. (2) We then relax our space constraints to 50 cm, which is what seated users (e.g., on an airplane or train ride) have at their disposal. We leverage this space to represent a standing user in 1:1 mapping while compressing only the user's arm movement. We demonstrate our prototype VirtualArms using the example of VR experiences limited to arm movement, such as boxing. (3) Finally, we relax our space constraints further to 3 m² of walkable space, which is what 75% of home users have access to. As well-established in the literature, we implement real walking with the help of portals, also known as "impossible spaces". While impossible spaces under such dramatic space constraints tend to degenerate into incomprehensible mazes (as demonstrated, for example, by "TraVRsal"), we propose plausibleSpaces: presenting meaningful virtual worlds by adapting various visual elements to impossible spaces.
Our techniques push the boundary of spatially meaningful VR interaction in various small spaces. We see future challenges in new design approaches to immersive VR experiences for the smallest physical spaces in our daily lives.
HPI Future SOC Lab
(2024)
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components which might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of research projects executed in 2020. Selected projects presented their results on April 21 and November 10, 2020, at the Future SOC Lab Day events.
Floods continue to be the leading cause of economic damage and fatalities among natural disasters worldwide. As future climate and exposure changes are projected to intensify these damages, the need for more accurate and scalable flood risk models is growing. Over the past decade, macro-scale flood risk models have evolved from initial proofs of concept to indispensable tools for decision-making at the global, national and, increasingly, the local level. This progress has been propelled by the advent of high-performance computing and the availability of global, space-based datasets. However, despite such advancements, these models are rarely validated and consistently fall short of the accuracy achieved by high-resolution local models. While capabilities have improved, significant gaps persist in understanding the behaviour of such macro-scale models, particularly their tendency to overestimate risk. This dissertation addresses these gaps by examining the scale transfers inherent in the construction and application of coarse macro-scale models. To this end, four studies are presented that collectively address the exposure, hazard, and vulnerability components of risk affected by upscaling or downscaling.
The first study focuses on a type of downscaling where coarse flood hazard inundation grids are enhanced to a finer resolution. While such inundation downscaling has been employed in numerous global model chains, ours is the first study to focus specifically on this component, providing an evaluation of the state of the art and a novel algorithm. Findings demonstrate that our novel algorithm is eight times faster than existing methods, offers a slight improvement in accuracy, and generates more physically coherent flood maps in hydraulically challenging regions. When applied to a case study, the algorithm generated a 4 m resolution inundation map from 30 m hydrodynamic model outputs in 33 s, a 60-fold improvement in runtime with a 25% increase in RMSE compared with direct hydrodynamic modelling. All evaluated downscaling algorithms yielded better accuracy than the coarse hydrodynamic model when compared to observations, demonstrating limits of coarse hydrodynamic models similar to those reported by others. The substitution of downscaling into flood risk model chains, in place of high-resolution modelling, can drastically improve the lead time of impact-based forecasts and the efficiency of hazard map production. With downscaling, local regions could obtain high-resolution local inundation maps by post-processing a global model, without the need for expensive modelling or expertise.
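The general idea of inundation downscaling can be sketched with a generic baseline, resampling a coarse water-surface-elevation (WSE) grid onto a finer terrain grid and recovering depths; this is an illustrative toy, not the thesis's novel algorithm, and all grid values are hypothetical:

```python
import numpy as np

# Toy example of WSE-resampling downscaling (a simple baseline, not the
# thesis's method): resample coarse water-surface elevation onto a fine
# DEM, then recover depth = WSE - terrain, masking dry cells.
wse_coarse = np.array([[2.0, 1.5],
                       [1.8, 0.0]])          # coarse WSE grid (0 = dry)
dem_fine = np.array([[1.0, 1.2, 1.4, 1.9],
                     [0.8, 1.1, 1.6, 2.1],
                     [0.9, 1.3, 2.0, 2.4],
                     [1.0, 1.5, 2.2, 2.6]])  # fine terrain (hypothetical)

wse_fine = np.kron(wse_coarse, np.ones((2, 2)))    # nearest-neighbour resample
depth_fine = np.clip(wse_fine - dem_fine, 0.0, None)
depth_fine[wse_fine == 0.0] = 0.0                  # keep dry cells dry

print(depth_fine)  # fine-resolution depths follow the fine terrain
```

The resulting fine-scale depths vary with the local terrain even though the water surface is only known at the coarse scale, which is the core trade-off such downscaling methods exploit.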
The second study focuses on hazard aggregation and its implications for exposure, investigating implicit aggregations commonly used to intersect hazard grids with coarse exposure models. This research introduces a novel spatial classification framework to understand the effects of rescaling flood hazard grids to a coarser resolution. The study derives closed-form analytical solutions for the location and direction of bias from flood grid aggregation, showing that bias will always be present in regions near the edge of inundation. For example, inundation area will be positively biased when water depth grids are aggregated, while volume will be negatively biased when water elevation grids are aggregated. Extending the analysis to the effects of hazard aggregation on building exposure, this study shows that exposure in regions at the edge of inundation is an order of magnitude more sensitive to aggregation errors than hazard alone. Of the two aggregation routines considered, averaging water surface elevation grids better preserved flood depths at buildings than averaging water depth grids. The study provides the first mathematical proof and generalizable treatment of flood hazard grid aggregation, demonstrating important mechanisms to help flood risk modellers understand and control model behaviour.
The final two studies focus on the aggregation of vulnerability models or flood damage functions, investigating the practice of applying per-asset functions to aggregate exposure models. Both studies extend Jensen’s inequality, a well-known 1906 mathematical proof, to demonstrate how the aggregation of flood damage functions leads to bias. Applying Jensen’s proof in this new context, results show that typically concave flood damage functions will introduce a positive bias (overestimation) when aggregated. This behaviour was further investigated with a simulation experiment including 2 million buildings in Germany, four global flood hazard simulations and three aggregation scenarios. The results show that positive aggregation bias is not distributed evenly in space, meaning some regions identified as “hot spots of risk” in assessments may in fact just be hot spots of aggregation bias. This study provides the first application of Jensen’s inequality to explain the overestimates reported elsewhere, and offers advice for modellers to minimize such artifacts.
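The Jensen's-inequality argument can be reproduced in a few lines: for a concave damage function f, f(mean depth) ≥ mean of f(depth), so applying the function to aggregated exposure overestimates damage. The concave function below is an illustrative stand-in, not one of the study's damage functions:

```python
import numpy as np

# Hypothetical concave depth-damage function: damage fraction rises
# steeply for shallow water and saturates at 1.0.
def damage(depth_m):
    return np.minimum(1.0, np.sqrt(np.maximum(depth_m, 0.0) / 4.0))

depths = np.array([0.1, 0.3, 0.5, 2.0])  # per-building water depths (m)

per_building = damage(depths).mean()  # exact: damage per asset, then mean
aggregated = damage(depths.mean())    # aggregated: mean depth, then damage

# Jensen's inequality for concave f: f(E[x]) >= E[f(x)],
# i.e. the aggregated estimate overstates the true mean damage.
print(aggregated >= per_building)  # True
```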
In total, this dissertation investigates the complex ways aggregation and disaggregation influence the behaviour of risk models, focusing on the scale-transfers underpinning macro-scale flood risk assessments. Extending a key finding of the flood hazard literature to the broader context of flood risk, this dissertation concludes that all else equal, coarse models overestimate risk. This dissertation goes beyond previous studies by providing mathematical proofs for how and where such bias emerges in aggregation routines, offering a mechanistic explanation for coarse model overestimates. It shows that this bias is spatially heterogeneous, necessitating a deep understanding of how rescaling may bias models to effectively reduce or communicate uncertainties. Further, the dissertation offers specific recommendations to help modellers minimize scale transfers in problematic regions. In conclusion, I argue that such aggregation errors are epistemic, stemming from choices in model structure, and therefore hold greater potential and impetus for study and mitigation. This deeper understanding of uncertainties is essential for improving macro-scale flood risk models and their effectiveness in equitable, holistic, and sustainable flood management.
State space models enjoy wide popularity in mathematical and statistical modelling across disciplines and research fields. Frequent solutions to problems of estimation and forecasting of a latent signal, such as the celebrated Kalman filter, hereby rely on a set of strong assumptions, such as linearity of the system dynamics and Gaussianity of the noise terms.
We investigate the fallacy of mis-specifying the noise terms, that is, signal noise and observation noise, with regard to heavy-tailedness: the true dynamic frequently produces observation outliers or abrupt jumps of the signal state due to realizations of heavy tails not considered by the model. We propose a formalisation of observation-noise mis-specification in terms of Huber’s ε-contamination, as well as a computationally cheap solution via generalised Bayesian posteriors with a diffusion Stein divergence loss, resulting in the diffusion score matching Kalman filter, a modified algorithm akin in complexity to the regular Kalman filter. For this new filter, interpretations of the novel terms, stability, and an ensemble variant are discussed. Regarding signal-noise mis-specification, we propose a formalisation in the framework of change point detection and join ideas from the popular CUSUM algorithm with ideas from Bayesian online change point detection to combine frequentist reliability constraints with online inference, resulting in a Gaussian mixture model variant of multiple Kalman filters. We hereby exploit open-end sequential probability ratio tests on the evidence of Kalman filters on observation sub-sequences for aggregated inference under notions of plausibility.
Both proposed methods are combined to investigate the double mis-specification problem and are discussed with regard to their capabilities in reliable and well-tuned uncertainty quantification. Each section provides an introduction to the required terminology and tools, as well as simulation experiments on the popular target-tracking task and the non-linear, chaotic Lorenz-63 system, to showcase the practical performance of the theoretical considerations.
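As context for the observation-noise mis-specification problem, the standard scalar Kalman filter can be run under Huber-style ε-contaminated observation noise; this is only the classical filter with an illustrative contamination model, not the proposed diffusion score matching variant, and all parameters are made up:

```python
import random

# Scalar Kalman filter on a random walk. Observations are drawn from
# Huber's eps-contamination: with prob. 1-eps from N(x, r), with prob.
# eps from a heavy-tailed component N(x, r_heavy) the filter ignores.
random.seed(0)
q, r, r_heavy, eps = 0.1, 0.5, 25.0, 0.1

x_true, m, p = 0.0, 0.0, 1.0  # true state, filter mean, filter variance
for _ in range(200):
    x_true += random.gauss(0.0, q ** 0.5)                # signal dynamics
    noise_var = r_heavy if random.random() < eps else r  # contamination
    y = x_true + random.gauss(0.0, noise_var ** 0.5)     # observation
    p += q                                               # predict step
    k = p / (p + r)        # Kalman gain; the model assumes variance r
    m += k * (y - m)       # update mean
    p *= (1.0 - k)         # update variance

print(abs(m - x_true))  # typically inflated by the unmodelled heavy tails
```

Note that the reported posterior variance p converges to the nominal-model steady state regardless of the data, which is exactly why a mis-specified filter is over-confident under contamination.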
In this work, a reactive barrier was developed at a small laboratory scale (length = 40 cm), intended to remove iron and sulfate loads from acid mine drainage (AMD) with efficiencies of up to 30.2 % and 24.2 %, respectively, over a period of 146 days (50 pore volumes, pv). A mixture of garden compost, beech wood, coconut shell and calcium carbonate was used as the reactive material. The influent conditions were an iron concentration of 1000 mg/L, a sulfate concentration of 3000 mg/L and a pH of 6.2.
Differences in material composition produced no major changes in the remediation efficiency for iron and sulfate loads (12.0–15.4 % and 7.0–10.1 %, respectively) over an investigation period of 108 days (41–57 pv). The most important factor influencing the removal of sulfate and iron loads was the residence time of the AMD solution in the reactive material. This can be increased by reducing the flow rate or by increasing the length of the permeable reactive barrier (PRB). Halving the flow rate increased the remediation efficiencies for iron and sulfate to 23.4 % and 32.7 %, respectively. Furthermore, the remediation efficiency for iron loads rose to 24.2 % when the sulfate influent concentration was increased to 6000 mg/L. Acidic starting conditions (pH = 2.2) could be neutralized by the calcium carbonate in the reactive material over a period of 47 days (24 pv). Neutralizing the acidic starting conditions consumed calcium carbonate in the PRB and released calcium ions, which increased the sulfate remediation efficiency (24.9 %). After enlarging the PRB in width and depth and determining parameters in 2D, preferential flow along the barrier edges was observed; without its influence, the remediation efficiency for iron and sulfate loads increases (30.2 % and 24.2 %, respectively).
For in-situ monitoring of the PRB, optical sensors were used to determine pH, oxygen concentration and temperature. Stable oxygen concentrations and pH profiles were detected, resolved in space and time. Temperature could also be determined with spatial resolution. This work thus showed that optical sensors can be used to monitor the stability of a PRB for the treatment of AMD.
A simulation representing the developed PRB was set up with the simulation program MIN3P. The simulation reproduces the laboratory results well. Subsequently, a simulated PRB was examined at different filter velocities ((4.0–23.5) × 10⁻⁷ m/s) and PRB lengths (25–400 cm). Relationships between the examined parameters and the remediation efficiency for iron and sulfate loads were determined. These relationships can be used to calculate the residence time of the AMD solution, required for the maximum achievable remediation performance, in a future PRB system.
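The residence time invoked above follows from the barrier length, the filter (Darcy) velocity and an effective porosity; a minimal calculation with an assumed porosity (not a value from the study) might look like:

```python
# Residence time of the AMD solution in a PRB (illustrative numbers):
L = 0.40       # barrier length (m), as in the laboratory-scale barrier
v_f = 8.0e-7   # filter velocity (m/s), within the simulated range
n_e = 0.4      # effective porosity -- an assumption, not from the study

# seepage velocity = v_f / n_e, so t = L / (v_f / n_e) = L * n_e / v_f
t_days = L * n_e / v_f / 86400
print(round(t_days, 1))  # ~2.3 days
```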
Proceedings of TripleA 10
(2024)
The TripleA workshop series was founded in 2014 by linguists from Potsdam and Tübingen with the aim of providing a platform for researchers who conduct theoretically informed linguistic fieldwork on meaning. Its focus is particularly on languages that are under-represented in the current research landscape, including but not limited to languages of Africa, Asia, and Australia, hence TripleA.
For its 10th anniversary, TripleA returned to the University of Potsdam on 7–9 June 2023.
The programme included 21 talks dealing with no less than 22 different languages, including three invited talks given by Sihwei Chen (Academia Sinica), Jérémy Pasquereau (Laboratoire de Linguistique de Nantes, CNRS) and Agata Renans (Ruhr-Universität Bochum). Nine of these (invited or peer-reviewed) talks are featured in this volume.
To convey an understanding of computational processes early in school, the new computer science subject Digitale Welt (Digital World) was designed for grade 5, combining, in a way unique in Germany, computer science with application-oriented and socially relevant links to ecology and economics. This technical report provides guidance on introducing the new subject.
The global drylands cover nearly half of the terrestrial surface and are home to more than two billion people. In many drylands, ongoing land-use change transforms near-natural savanna vegetation into agricultural land to increase food production. In Southern Africa, these heterogeneous savanna ecosystems are also recognized as habitats of many protected animal species, such as elephants, lions and large herds of diverse herbivores, which are of great value for the tourism industry. Here, subsistence farmers and livestock-herding communities often live in close proximity to nature conservation areas. Although these land-use transformations differ in the futures they aspire to, both processes, nature conservation with large herbivores and agricultural intensification, have in common that they change the vegetation structure of savanna ecosystems, usually leading to the destruction of trees, shrubs and the woody biomass they contain.
Such changes in woody vegetation cover and biomass are often regarded as forms of land degradation and forest loss. Global forest conservation approaches and international programs aim to stop degradation processes, also to keep the carbon bound within wood from being released into Earth’s atmosphere. In the search for mitigation options against global climate change, savannas are increasingly discussed as potential carbon sinks. Savannas, however, are not forests: they are naturally shaped by and adapted to disturbances such as wildfires and herbivory. Unlike in forests, disturbances are necessary for stable, functioning savanna ecosystems and prevent these ecosystems from forming closed forest stands. Their consequently lower levels of carbon storage in woody vegetation have long caused savannas to be overlooked as a potential carbon sink, but recently the question has been raised whether carbon sequestration programs (such as REDD+) could also be applied to savanna ecosystems. However, heterogeneous vegetation structure and chronic disturbances hamper the quantification of carbon stocks in savannas, and current procedures of carbon storage estimation entail high uncertainties due to methodological obstacles. It is therefore challenging to assess how future land-use changes, such as agricultural intensification or increasing wildlife densities, will impact the carbon storage balance of African drylands.
In this thesis, I address the research gap of accurately quantifying carbon storage in vegetation and soils of disturbance-prone savanna ecosystems. I further analyse relevant drivers for both ecosystem compartments and their implications for future carbon storage under land-use change. Moreover, I show that in savannas different carbon storage pools vary in their persistence to disturbance, causing carbon bound in shrub vegetation to be most likely to experience severe losses under land-use change while soil organic carbon stored in subsoils is least likely to be impacted by land-use change in the future.
I start by summarizing conventional approaches to carbon storage assessment and where, and for which reasons, they fail to accurately estimate savanna ecosystem carbon storage. Furthermore, I outline which future-making processes drive land-use change in Southern Africa along two pathways of land-use transformation, and how these are likely to influence carbon storage. In the following chapters, I propose a new method of carbon storage estimation which is adapted to the specific conditions of disturbance-prone ecosystems and demonstrate the advantages of this approach in relation to existing forestry methods. Specifically, I highlight sources of previous over- and underestimation of savanna carbon stocks which the proposed methodology resolves. I then apply the new method to analyse impacts of land-use change on carbon storage in woody vegetation in conjunction with the soil compartment. With this interdisciplinary approach, I can demonstrate that both agricultural intensification and nature conservation with large herbivores indeed reduce woody carbon storage above- and belowground, but that part of this carbon is sequestered into the soil organic carbon stock. I then quantify whole-ecosystem carbon storage in different ecosystem compartments (above- and belowground woody carbon in shrubs and trees, respectively, as well as topsoil and subsoil organic carbon) of two savanna vegetation types (scrub savanna and savanna woodland). Moreover, in a space-for-time substitution, I analyse how land-use changes impact carbon storage in each compartment and in the whole ecosystem. The carbon storage compartments are found to differ in their persistence to land-use change, with carbon bound in shrub biomass being least persistent to future changes and subsoil organic carbon being most stable under changing land-use.
I then explore which individual land-use change effects act as drivers of carbon storage using Generalized Additive Models (GAMs) and uncover non-linear effects, especially of elephant browsing, with implications for future carbon storage. In the last chapter, I place my findings in the larger context of this thesis and discuss relevant implications for land-use change and future-making decisions in rural Africa.
Laser-induced switching offers an attractive possibility for manipulating small magnetic domains on ultrashort time scales for prospective memory and logic devices. Moreover, optical control of magnetization without high applied magnetic fields allows magnetic domains to be manipulated individually and locally, without expensive heat dissipation. One of the major challenges in developing novel optically controlled magnetic memory and logic devices is the reliable formation and annihilation of non-volatile magnetic domains that can serve as memory bits under ambient conditions. Magnetic skyrmions, topologically nontrivial spin textures, have been studied intensively since their discovery due to their stability and scalability in potential spintronic devices. However, skyrmion formation and, especially, annihilation processes are still not completely understood, and further investigation of such mechanisms is needed. The aim of this thesis is to contribute to a better understanding of the physical processes behind the optical control of magnetism in thin films, with the goal of optimizing material parameters and methods for their potential use in next-generation memory and logic devices.
The first part of the thesis is dedicated to the investigation of all-optical helicity-dependent switching (AO-HDS) as a method for magnetization manipulation. AO-HDS in Co/Pt multilayers and CoFeB alloys, with and without the presence of the Dzyaloshinskii-Moriya interaction (DMI), a type of exchange interaction, has been investigated by magnetic imaging using photo-emission electron microscopy (PEEM) in combination with X-ray magnetic circular dichroism (XMCD). The results show that, in a narrow range of laser fluence, circularly polarized laser light induces a drag on domain walls. This enables a local deterministic transformation of the magnetic domain pattern from stripes to bubbles in out-of-plane magnetized Co/Pt multilayers, controlled only by the helicity of ultrashort laser pulses. The temperature and characteristic fields at which the stripe-bubble transformation occurs have been calculated using the theory of isolated magnetic bubbles, with the experimentally determined average stripe-domain size and the magnetic layer thickness as parameters.
The second part of the work aims at the purely optical formation and annihilation of magnetic skyrmions by a single laser pulse. The presence of a skyrmion phase in the investigated CoFeB alloys was first confirmed using a Kerr microscope. Then, helicity-dependent skyrmion manipulation was studied using AO-HDS at different laser fluences. It was found that the formation or annihilation of individual skyrmions using AO-HDS is possible, but not always reliable, as fluctuations in the laser fluence or position can easily overwrite the helicity-dependent effect of AO-HDS. However, the experimental results and magnetic simulations showed that the threshold laser fluences for the formation and annihilation of skyrmions differ: a higher fluence is required for skyrmion formation, while existing skyrmions can be annihilated by pulses with a slightly lower fluence. This provides a further option for controlling the formation and annihilation of skyrmions via the laser fluence. Micromagnetic simulations provide additional insights into the formation and annihilation mechanisms.
The ability to manipulate the magnetic state of individual skyrmions is of fundamental importance for magnetic data storage technologies. Our results show for the first time that the optical formation and annihilation of skyrmions is possible without changing the external field. These results enable further investigations to optimise the magnetic layer to maximise the energy gap between the formation and annihilation barrier. As a result, unwanted switching due to small laser fluctuations can be avoided and fully deterministic optical switching can be achieved.
“We Jews administer the intellectual possessions of a people that denies us the right and the ability to do so.” This sentence is the culmination of the essay “Deutsch-jüdischer Parnaß” (German-Jewish Parnassus), which the Jewish author Moritz Goldstein published in 1912 in the national-conservative art and culture journal Der Kunstwart.
In his treatise, Goldstein examines the cultural life and work of his Jewish and non-Jewish contemporaries and their social meeting places. He denounces an alleged passivity of Jewish-German artists, who, he claims, engage with German culture in an administrative act but are not themselves creative. His criticism of non-Jewish Germans proceeds in the same manner: he accuses them of denying Jews their cultural creativity and their Germanness. Despite all efforts and feelings, they see Jews as “entirely un-German.” From this diagnosed distance between the two groups, he confidently demands dissimilation and the establishment of a distinct Jewish cultural landscape.
On the eve of the First World War, Goldstein’s text evoked a debate within cultural-conservative German circles in which renowned authors revealed their views on Jewishness and Germanness. Among them were the Kunstwart editor Ferdinand Avenarius, the poet Ernst Lissauer, the völkisch writer Philipp Stauff, the Zionist Ludwig Strauss, and Jakob Loewenberg, a member of the Centralverein deutscher Staatsbürger jüdischen Glaubens.
Moritz Goldstein’s “Deutsch-jüdischer Parnaß” sparked a debate that became a blueprint for the relationship between Jewish and non-Jewish Germans in the late German Empire and had a long echo in German-Jewish history.
Jews and Muslims have lived in the territory of modern-day Austria for centuries untold, yet often continue to be construed as the essential “other.” This essay explores a selection of sometimes divergent, sometimes convergent historical experiences amongst these two broad population groups, focusing specifically on demographic diversity, community-building, discrimination and persecution, and the post-war situation. The ultimate aim is to illuminate paradigmatically, through the Austrian case study, the complex multicultural mosaic of historical Central Europe, the understanding of which, we contend, sheds a critical light on the often divisive present-day debates concerning immigration and diversity in Austria and Central Europe more broadly. It furthermore opens up a hitherto understudied field of historical research, namely the entangled history of Jews and Muslims in modern Europe.
The Jewish museums established in the fin-de-siècle Habsburg Empire postulated the unity of “the Jewish people,” with custodians constructing an “us” (Jews) in distinction to the “other” (non-Jews). In the difference-oriented frenzy of the time, Jewish identity was predominantly presented as Central European, enlightened, not overly religious, and middle-class. When the Viennese Jewish Museum opened its doors in 1895, the painters Isidor Kaufmann and David Kohn created an installation called “Die Gute Stube” (The Parlor). This exhibit housed books, furniture, as well as decorative and ritual objects of the kind thought to be found in typical Eastern European Jewish households. However, as this article argues, this attempted visualization of the essence of Judaism and the range of Jewish life worlds promoted a paradigmatic stereotype with which Jewish museums would have to struggle for decades to come.
Even though Salonican Jews are not typically associated with the Habsburg Empire, some of them nonetheless lived there. This paper aims to examine the formation of these Salonican Jews’ (self-)identification by studying their social interactions with the local Viennese population, such as the Viennese Sephardi or the Greek-Orthodox communities. The change of milieu within which they found themselves subsequently impacted their self-perception. Thus, the issue of the surrounding environment and their relations with other groups became central to their self-understanding, as will be demonstrated. By examining different aspects, such as migration patterns, financial decisions and family ties, one can understand how their intersection influenced Salonican Jews’ self-identification, which, at the same time, shaped and was shaped by the surrounding milieu. Within this framework, these people perceived themselves and were perceived as Salonican, Sephardi, Jewish, and as subjects of the Emperor.
“Domestic Foreigners”
(2024)
This paper examines the relationship between the Sephardic Jewish community of Vienna and the Ottoman and Habsburg Empires in the latter half of the 19th century. The community’s legal status was transformed following the emancipation of Austrian Jews, but very few first-hand accounts of these changes exist today. The primary sources analyzed in this paper are Judezmo-language newspapers published in Vienna at that time. The paper emphasizes the historical and political contexts surrounding these sources, particularly the community’s close ties to the Ottoman and Habsburg regimes.
Shared Spaces
(2024)
Galicia was home to the largest Jewish population of the Cisleithanian part of the Habsburg Empire. After the Josephinian “German-Jewish schools” had closed already in 1806, educational patterns differed from those in Moravia and Bohemia, where Jewish children received a secular education in a more consistent “Jewish” space. In Galicia in the constitutional era (post-1867), however, with mandatory education enforced, public schools became a shared space in which Jews and (Catholic) Christians functioned together. In Galicia, most Jewish children received public education but usually constituted a religious minority in the student body. The article analyzes how the school space, calendar, and routines were adjusted to accommodate the multi-religious character of the student body.
The article analyzes the interdependences between the history of the Habsburg Empire and the names of its Jewish inhabitants. Until today, these names tell stories about this close relationship and they are an everlasting symbol of this era. By focusing on names, this paper shows how state policies towards Jews shifted over time, and how the perspective on names and name regulations can be a tool to connect and investigate both Habsburg and Jewish studies.
This article aims to demonstrate the exceptional potential of Habsburg military records for the study of Jewish history during Europe’s Age of Revolution. We begin with the random discovery of six Jewish veterans of Freikorps Grün Loudon – a unit of mercenary freebooters – which fought for the Habsburgs during the first war against the French Republic (1792–97). A careful re-reading of the available archival evidence reveals that these men were the survivors of a much larger group numbering at least two dozen Jewish soldiers. While Jewish conscripts had been drafted into the Habsburg army since 1788, the fact that Jews could also serve – even volunteer – as professional soldiers in that period is completely new to us. When considered together, the personal circumstances and service experiences of the Jewish soldiers of Freikorps Grün Loudon enable us to make several observations about their motivation as well as their position vis-à-vis their non-Jewish comrades.
This article brings two seemingly disconnected historiographic models of periodization into conversation: Habsburg studies and Habsburg Jewish studies. It argues for an expansion of the temporal frameworks of both fields to highlight historical continuities connecting the Holy Roman and Habsburg Empire at least from a structural perspective. These historical continuums are a useful analytical lens when applied to marginalized groups, like early modern Jews, in tandem with a central group of contemporary powerholders, such as the Habsburg nobility. Using Bohemia as a case study, this essay juxtaposes questions of transregional transfer of cultural, economic, and social capital with the challenges of Jewish marginalization and discrimination to highlight the changing yet interconnected imperial landscapes.
Hardy inequalities on graphs
(2024)
The dissertation deals with a central inequality of non-linear potential theory, the Hardy inequality. It states that the non-linear energy functional can be estimated from below by the pth power of a weighted p-norm, p > 1. The energy functional consists of a divergence part and an arbitrary potential part. Locally summable infinite graphs were chosen as the underlying space. Previous publications on Hardy inequalities on graphs have mainly considered the special case p = 2, or locally finite graphs without a potential part.
Two fundamental questions now arise quite naturally: For which graphs is there a Hardy inequality at all? And, if it exists, is there a way to obtain an optimal weight? Answers to these questions are given in Theorem 10.1 and Theorem 12.1. Theorem 10.1 gives a number of characterizations; among others, there is a Hardy inequality on a graph if and only if there is a Green's function. Theorem 12.1 gives an explicit formula to compute optimal Hardy weights for locally finite graphs under some additional technical assumptions. Examples show that Green's functions are good candidates to be used in the formula.
Emphasis is also placed on illustrating the theory with examples. The focus is on natural numbers, Euclidean lattices, trees and star graphs. Finally, a non-linear version of the Heisenberg uncertainty principle and a Rellich inequality are derived from the Hardy inequality.
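The model case on the natural numbers can be stated explicitly; the following is the well-known discrete Hardy inequality in standard notation, not the thesis's general graph formulation: for $p > 1$ and all finitely supported $f$ with $f(0) = 0$,

```latex
\[
  \sum_{n=1}^{\infty} \lvert f(n) - f(n-1) \rvert^{p}
  \;\geq\;
  \Bigl(\frac{p-1}{p}\Bigr)^{p}
  \sum_{n=1}^{\infty} \frac{\lvert f(n) \rvert^{p}}{n^{p}}.
\]
```

Here the left-hand side is the energy functional of the path graph on the natural numbers with unit edge weights and zero potential, and the weight $w(n) = \bigl(\tfrac{p-1}{p}\bigr)^{p} n^{-p}$ with its sharp constant is a classical example of an optimal Hardy weight.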
Resolving the evolutionary history of two hippotragin antelopes using archival and ancient DNA
(2024)
African antelopes are iconic but surprisingly understudied in terms of their genetics, especially when it comes to their evolutionary history and genetic diversity. The age of genomics provides an opportunity to investigate evolution using whole nuclear genomes. Decreasing sequencing costs enable the recovery of multiple loci per genome, giving more power to single-specimen analyses and providing higher-resolution insights into species and populations that can help guide conservation efforts. This age of genomics has only recently begun for African antelopes. Many African bovids have a declining population trend and are therefore often endangered. Consequently, contemporary samples from the wild are often hard to obtain. In these cases, ex situ samples, from contemporary captive populations or in the form of archival or ancient DNA (aDNA) from historical museum or archaeological/paleontological specimens, present a great research opportunity, with the latter two even offering a window onto the past. However, recovering aDNA is still considered challenging in regions whose prevailing climatic conditions are deemed adverse for DNA preservation, such as the African continent. This raises the question of whether DNA recovery from fossils as old as the early Holocene is possible in these regions.
This thesis focuses on investigating the evolutionary history and genetic diversity of two species: the addax (Addax nasomaculatus) and the blue antelope (Hippotragus leucophaeus). The addax is critically endangered and might even already be extinct in the wild, while the blue antelope became extinct ~1800 AD, becoming the first extinct large African mammal species in historical times. Together, the addax and the blue antelope can inform us about current and past extinction events and the knowledge gained can help guide conservation efforts of threatened species. The three studies used ex situ samples and present the first nuclear whole genome data for both species. The addax study used historical museum specimens and a contemporary sample from a captive population. The two studies on the blue antelope used mainly historical museum specimens but also fossils, and resulted in the recovery of the oldest paleogenome from Africa at that time.
The aim of the first study was to assess the genetic diversity and evolutionary history of the addax. It found that the historical wild addax showed only limited phylogeographic structuring, indicating that the species formed a highly mobile, panmictic population, and suggesting that the current European captive population might be missing the majority of the historical mitochondrial diversity. It also found the nuclear and mitochondrial diversity of the addax to be rather low compared to other wild ungulate species. Suggestions on how best to preserve the remaining genetic diversity are presented. The European zoo population was shown to exhibit no or only minor levels of inbreeding, indicating good prospects for the restoration of the species in the wild. The trajectory of the addax's effective population size indicated a major bottleneck in the late Pleistocene and a low effective population size well before recent human impact left the species critically endangered today.
The second study set out to investigate the identities of historical blue antelope specimens using aDNA techniques. Results showed that six out of ten investigated specimens were misidentified, demonstrating the blue antelope to be one of the scarcest mammal species in historical natural history collections, with almost no bone reference material. The preliminary analysis of the mitochondrial genomes suggested a low diversity and hence low population size at the time of the European colonization of southern Africa.
Study three presents the results of the analyses of two blue antelope nuclear genomes, one ~200 years old and another dating to the early Holocene, 9,800–9,300 cal years BP. A fossil-calibrated phylogeny dated the divergence time of the three historically extant Hippotragus species to ~2.86 Ma and demonstrated the blue and the sable antelope (H. niger) to be sister species. In addition, ancient gene flow from the roan (H. equinus) into the blue antelope was detected. A comparison with the roan and the sable antelope indicated that the blue antelope had a much lower nuclear diversity, suggesting a low population size since at least the early Holocene. This concurs with findings from the fossil record that show a considerable decline in abundance after the Pleistocene–Holocene transition. Moreover, it suggests that the blue antelope persisted throughout the Holocene regardless of a low population size, indicating that human impact in the colonial era was a major factor in the blue antelope’s extinction.
This thesis uses aDNA analyses to provide deeper insights into the evolutionary history and genetic diversity of the addax and the blue antelope. Human impact was likely the main driver of the blue antelope's extinction, and is likely the main factor threatening the addax today. This thesis demonstrates the value of ex situ samples for science and conservation, and suggests including genetic data in conservation assessments of species. It further demonstrates the usefulness of aDNA for the taxonomic identification of historically important specimens in natural history collections. Finally, the successful retrieval of a paleogenome from the early Holocene of Africa using shotgun sequencing shows that DNA retrieval from samples of that age is possible in regions generally deemed unfavorable for DNA preservation, opening up new research opportunities. All three studies enhance our knowledge of African antelopes, contributing to the general understanding of African large mammal evolution and to the conservation of these and similarly threatened species.
Background: Societies worldwide have become more diverse yet continue to be inequitable. Understanding how youth growing up in these societies are socialized and consequently develop racial knowledge has important implications not only for their well-being but also for building more just societies. Importantly, there is a lack of research on these topics in Germany and Europe in general.
Aim and Method: The overarching aim of the dissertation is to investigate 1) where and how ethnic-racial socialization (ERS) happens in inequitable societies and 2) how it relates to youth's development of racial knowledge, which comprises racial beliefs (e.g., prejudice, attitudes), behaviors (e.g., actions preserving or disrupting inequities), and identities (e.g., inclusive, cultural). Guided by developmental, cultural, and ecological theories of socialization and development, I first explored how family, as a crucial socialization context, contributes to the preservation or disruption of racism and xenophobia in inequitable societies through its influence on children's racial beliefs and behaviors. I conducted a literature review and developed a conceptual model bridging research on ethnic-racial socialization and intergroup relations (Study 1). After documenting the lack of research on the socialization and development of racial knowledge within and beyond family contexts outside of the U.S., I conducted a qualitative study to explore ERS in Germany through the lens of racially marginalized youth (Study 2). Then, I conducted two quantitative studies to explore the separate and interacting relations of multiple (i.e., family, school) socialization contexts to the development of racial beliefs and behaviors (Study 3) and identities (Studies 3, 4) in Germany. Participants of Study 2 were 26 young adults (aged between 19 and 32) of Turkish, Kurdish, East, and Southeast Asian heritage living across different cities in Germany. Study 3 was conducted with 503 eighth graders of immigrant and non-immigrant descent (Mage = 13.67) in Berlin; Study 4 included 311 early to mid-adolescents of immigrant descent (Mage = 13.85) with diverse cultural backgrounds in North Rhine-Westphalia.
Results and Conclusion: The findings revealed that the privileged or marginalized positions of families in society, in relation to their ethnic-racial and religious background, entail differential experiences and are thus an important determining factor for the content and process of socialization and the development of youth's racial knowledge. Until recently, ERS research mostly focused on investigating how racially marginalized families have been sources of support for their children in resisting racism and how racially privileged families contribute to the transmission of information upholding racism (Study 1). ERS for racially marginalized youth in Germany centered on heritage culture, discrimination, and resistance strategies against racism; yet the resistance strategies transmitted to youth mostly helped them survive racism (e.g., by working hard), thereby upholding it, rather than liberating them from racism by disrupting it (e.g., through self-advocacy, Study 2). Furthermore, when families and schools foster heritage and intercultural learning, both contexts may separately promote stronger identification with heritage culture and German identities, and more prosocial intentions towards disadvantaged groups (i.e., refugees) among youth (Studies 3, 4). However, equal treatment in the school context led to mixed results: equal treatment was either unrelated to inclusive identity, or positively related to German and negatively related to heritage culture identities (Studies 3, 4). Additionally, youth receiving messages highlighting strained and preferential intergroup relations at home while attending schools promoting assimilation may develop a stronger heritage culture identity (Study 4). In conclusion, ERS happened across various social contexts (i.e., family, community centers, school, neighborhood, peers).
ERS promoting heritage and intercultural learning, at least in one social context (family or school), might foster youth’s racial knowledge manifesting in stronger belonging to multiple cultures and in prosocial intentions toward disadvantaged groups. However, there is a need for ERS targeting increasing awareness of discrimination across social contexts of youth and teaching youth resistance strategies for liberation from racism.
Over the last decades, therapeutic proteins have gained great significance in the pharmaceutical industry. Because non-human proteins introduced into the human body provoke a distinct immune reaction that triggers their rapid clearance, most newly approved protein pharmaceuticals are shielded by modification with synthetic polymers to significantly improve their blood circulation time. All clinically approved protein-polymer conjugates to date contain polyethylene glycol (PEG), and its conjugation is denoted as PEGylation. However, many patients develop anti-PEG antibodies, which cause a rapid clearance of PEGylated molecules upon repeated administration. The search for alternative polymers that can replace PEG in therapeutic applications has therefore become important. In addition, although the blood circulation time is significantly prolonged, the therapeutic activity of some conjugates is decreased compared to the unmodified protein. The reason is that these conjugates are formed by the traditional conjugation method, which addresses the protein's lysine side chains. As proteins have many solvent-exposed lysines, this results in a somewhat uncontrolled attachment of polymer chains, leading to a mixture of regioisomers, some of which can affect the therapeutic performance.
This thesis investigates a novel method for ligating macromolecules in a site-specific manner, using enzymatic catalysis. Sortase A is used as the enzyme: It is a well-studied transpeptidase which is able to catalyze the intermolecular ligation of two peptides. This process is commonly referred to as sortase-mediated ligation (SML). SML constitutes an equilibrium reaction, which limits product yield. Two previously reported methods to overcome this major limitation were tested with polymers without using an excessive amount of one reactant.
Specific C- or N-terminal peptide sequences (recognition sequence and nucleophile) as part of the protein are required for SML. The complementary peptide was located at the polymer chain end. Grafting-to was used to avoid damaging the protein during polymerization. To be able to investigate all possible combinations (protein-recognition sequence and nucleophile-protein as well as polymer-recognition sequence and nucleophile-polymer) all necessary building blocks were synthesized. Polymerization via reversible deactivation radical polymerization (RDRP) was used to achieve a narrow molecular weight distribution of the polymers, which is required for therapeutic use.
The synthesis of the polymeric building blocks was started by synthesizing the peptide via automated solid-phase peptide synthesis (SPPS) to avoid post-polymerization attachment and to enable easy adaptation of changes in the peptide sequence. To account for the different functionalities (free N- or C-terminus) required for SML, different linker molecules between resin and peptide were used.
To facilitate purification, the chain transfer agent (CTA) for reversible addition-fragmentation chain-transfer (RAFT) polymerization was coupled to the resin-immobilized recognition sequence peptide. The acrylamide and acrylate-based monomers used in this thesis were chosen for their potential to replace PEG.
Following that, surface-initiated (SI) ATRP and RAFT polymerization were attempted, but failed. As a result, the newly developed method of xanthate-supported photo-iniferter (XPI) RAFT polymerization in solution was used successfully to obtain a library of various peptide-polymer conjugates with different chain lengths and narrow molar mass distributions.
After peptide side chain deprotection, these constructs were used first to ligate two polymers via SML, which was successful but revealed a limit in polymer chain length (max. 100 repeat units). When utilizing equimolar amounts of reactants, the use of Ni2+ ions in combination with a histidine after the recognition sequence to remove the cleaved peptide from the equilibrium maximized product formation with conversions of up to 70 %.
Finally, a model protein and a nanobody with promising properties for therapeutic use were biotechnologically modified to contain the peptide sequences required for SML. Using the model protein for C- or N-terminal SML with various polymers did not result in protein-polymer conjugates, most likely because the protein termini were not accessible to the enzyme. Using the nanobody for C-terminal SML, on the other hand, was successful. However, a polymer chain length limit similar to that in polymer-polymer SML was observed. Furthermore, for the synthesis of protein-polymer conjugates, it was more effective to shift the SML equilibrium by using an excess of polymer than by employing the Ni2+ ion strategy.
Overall, the experimental data from this work provides a good foundation for future research in this promising field; however, more research is required to fully understand the potential and limitations of using SML for protein-polymer synthesis. In future, the method explored in this dissertation could prove to be a very versatile pathway to obtain therapeutic protein-polymer conjugates that exhibit high activities and long blood circulation times.
First published in:
Alexander von Humboldt-Stiftung. Mitteilungen, vol. 5, no. 38, October 1980, pp. 27–36.
Off-road adventures
(2024)
This article focuses on the visual qualities of Alexander von Humboldt’s statistical tables in his Political Essay on the Kingdom of New Spain (1808–1811, 2nd ed. 1825–1827) with special attention to how such composites of numbers, alphabetical script, and semiotic elements relate to narrative writing. I argue that Humboldt’s tables/tableaus open up spaces inside his narrative that fragment the reading process, inviting new conversations, connections, and ideas.
During its project period (2019–2023), the German-Cuban research and digitization initiative "Proyecto Humboldt Digital" (ProHD) made important sources on the topic of "Humboldt and Cuba" digitally accessible for the first time. As a cooperation between the Berlin-Brandenburgische Akademie der Wissenschaften and the Oficina del Historiador de la Ciudad de La Habana, ProHD has thereby set important accents for archival digitization, digital scholarly editing, and digital science communication in Cuba. The corpus of the newly accessible holdings is presented here in five spotlights: 1) sources on Humboldt's research expedition, 2) Juan Luis de la Cuesta, 3) materials on Cuba from the Humboldt papers, 4) censorship and confiscation of the Essai politique sur l'île de Cuba, 5) Francisco de Arango y Parreño.
The Chinese translation of the first volume of Alexander von Humboldt's monumental scientific work "Cosmos" appeared in 2023 under the aegis of the prestigious Peking University Press. In her illuminating afterword, the translator, Gao Hong, introduces Chinese readers to the cosmic panorama sketched by Humboldt, revealing the interconnections between natural phenomena and their significance at the scale of the entire universe. Gao Hong recounts her own journey alongside Humboldt while offering personal reflections on this cosmic fresco. Humboldt's "Cosmos", a scientific work par excellence, also reaches into the aesthetic and artistic spheres, invariably expressing a profound reverence for the universe. Rendering in Chinese the "geometric beauty" of the German language, marked by its structural rigour, poses a singular challenge, since Chinese is characterized by fluidity, suppleness, and pictorial poetry, in complete antithesis to German. As a translator, one must navigate freely between these two distinct linguistic worlds.
This essay publishes for the first time a letter in which Alexander von Humboldt, in 1849, commended the merits of a young professor teaching at the University of Marburg to a minister of the liberal government of Electoral Hesse. The professor in question was the physiologist Carl Ludwig, who later became famous for groundbreaking discoveries. The letter was conveyed through the physician and physiologist Emil du Bois-Reymond, who was close to Humboldt. The letter of recommendation, with which Humboldt sought to improve Ludwig's financial situation, is an example of his support for young researchers as well as for independent scientific institutions.
Personalmanagement und KWI
(2024)
Digitalization is an essential component of current administrative reforms. Despite its great importance and years of effort, the record of administrative digitalization in Germany remains ambivalent. This study focuses on three successful digitalization projects under the Online Access Act (Onlinezugangsgesetz, OZG) and uses problem-centred expert interviews to analyze the factors influencing the implementation of OZG projects and the role of management in this process. The analysis is theory-guided, based on the concept of bounded rationality and the economic theory of bureaucracy. The results suggest that the identified factors affect the reusability and maturity level of administrative services in different ways and can be interpreted as consequences of bounded rationality in human problem-solving. Managers support operational actors during implementation by addressing their bounded rationality with suitable strategies: they can provide resources, contribute their expertise, make information accessible, change decision-making paths, and help resolve conflicts. The study offers valuable insights into actual management practice and derives recommendations for implementing public digitalization projects and for steering public administrations. It makes an important contribution to understanding the influence of management in administrative digitalization and underscores the need for further research in this area, in order to better understand and effectively address the practices and challenges of administrative digitalization.
This thesis explores word order variability in verb-final languages. Verb-final languages have a reputation for a high degree of word order variability. However, that reputation amounts to an urban myth, owing to a lack of systematic investigation. This thesis provides such a systematic investigation by presenting original data from several verb-final languages, with a focus on four Uralic ones: Estonian, Udmurt, Meadow Mari, and South Sámi. As with every urban myth, there is a kernel of truth: many unrelated verb-final languages share a particular kind of word order variability, A-scrambling, in which the fronted elements do not receive a special information-structural role, such as topic or contrastive focus. That word order variability goes hand in hand with placing focussed phrases further to the right, in the position directly in front of the verb. Variations on this pattern are exemplified by Uyghur, Standard Dargwa, Eastern Armenian, and three of the Uralic languages, Estonian, Udmurt, and Meadow Mari. So much for the kernel of truth; the fourth Uralic language, South Sámi, is comparatively rigid and does not feature this particular kind of word order variability. Further comparatively rigid, non-scrambling verb-final languages are Dutch, Afrikaans, Amharic, and Korean. In contrast to scrambling languages, non-scrambling languages feature obligatory subject movement, causing word order rigidity next to other typical EPP effects.
The EPP is a defining feature of South Sámi clause structure in general. South Sámi exhibits a one-of-a-kind alternation between SOV and SAuxOV order that is captured by the assumption of the EPP and obligatory movement of auxiliaries but not lexical verbs. Other languages that allow for SAuxOV order either lack an alternation because the auxiliary is obligatorily present (Macro-Sudan SAuxOVX languages), or feature an alternation between SVO and SAuxOV (Kru languages; V2 with underlying OV as a fringe case). In the SVO–SAuxOV languages, both auxiliaries and lexical verbs move. Hence, South Sámi shows that the textbook difference between the VO languages English and French, whether verb movement is restricted to auxiliaries, also extends to OV languages. SAuxOV languages are an outlier among OV languages in general but are united by the presence of the EPP.
Word order variability is not restricted to the preverbal field in verb-final languages, as most of them feature postverbal elements (PVE). PVE challenge the notion of verb-finality in a language. Strictly verb-final languages without any clause-internal PVE are rare. This thesis charts the first structural and descriptive typology of PVE. Verb-final languages vary in the categories they allow as PVE. Allowing for non-oblique PVE is a pivotal threshold: when non-oblique PVE are allowed, PVE can be used for information-structural effects. Many areally and genetically unrelated languages only allow for given PVE but differ in whether the PVE are contrastive. In those languages, verb-finality is not at stake since verb-medial orders are marked. In contrast, the Uralic languages Estonian and Udmurt allow for any PVE, including information focus. Verb-medial orders can be used in the same contexts as verb-final orders without semantic and pragmatic differences. As such, verb placement is subject to actual free variation. The underlying verb-finality of Estonian and Udmurt can only be inferred from a range of diagnostics indicating optional verb movement in both languages. In general, it is not possible to account for PVE with a uniform analysis: rightwards merge, leftward verb movement, and rightwards phrasal movement are required to capture the cross- and intralinguistic variation.
Knowing that a language is verb-final does not allow one to draw conclusions about word order variability in that language. There are patterns of homogeneity, such as the word order variability driven by directly preverbal focus and the givenness of postverbal elements, but those are not brought about by verb-finality alone. Preverbal word order variability is restricted by the more abstract property of obligatory subject movement, whereas the factor governing postverbal word order variability remains to be identified.
The automotive industry is a prime example of digital technologies reshaping mobility. Connected, autonomous, shared, and electric (CASE) trends give rise to new players that threaten existing industrial-age companies. To respond, incumbents need to bridge the gap between contrasting product architecture and organizational principles in the physical and digital realms. Over-the-air (OTA) technology, which enables seamless software updates and on-demand feature additions for customers, is an example of CASE-driven digital product innovation. Through an extensive longitudinal case study of an OTA initiative by an industrial-age automaker, this dissertation explores how incumbents accomplish digital product innovation. Building on modularity, liminality, and the mirroring hypothesis, it presents a process model that explains the triggers, mechanisms, and outcomes of this process. In contrast to the literature, the findings emphasize the primacy of addressing product architecture challenges over organizational ones and highlight the managerial implications for success.
Human activities modify nature worldwide via changes in the environment, biodiversity and the functioning of ecosystems, which in turn disrupt ecosystem services and feed back negatively on humans. A pressing challenge is thus to limit our impact on nature, and this requires detailed understanding of the interconnections between the environment, biodiversity and ecosystem functioning. These three components of ecosystems each include multiple dimensions, which interact with each other in different ways, but we lack a comprehensive picture of their interconnections and underlying mechanisms. Notably, diversity is often viewed as a single facet, namely species diversity, while many more facets exist at different levels of biological organisation (e.g. genetic, phenotypic, functional, multitrophic diversity), and multiple diversity facets together constitute the raw material for adaptation to environmental changes and shape ecosystem functioning. Consequently, investigating the multidimensionality of ecosystems, and in particular the links between multifaceted diversity, environmental changes and ecosystem functions, is crucial for ecological research, management and conservation. This thesis aims to explore several aspects of this question theoretically.
I investigate three broad topics in this thesis. First, I focus on how food webs with varying levels of functional diversity across three trophic levels buffer environmental changes, such as a sudden addition of nutrients or long-term changes (e.g. warming or eutrophication). I observed that functional diversity generally enhanced ecological stability (i.e. the buffering capacity of the food web) by increasing trophic coupling. More precisely, two aspects of ecological stability (resistance and resilience) increased, even though a third aspect (the inverse of the time required for the system to reach its post-perturbation state) decreased with increasing functional diversity. Second, I explore how several diversity facets serve as raw material for different sources of adaptation, and how these sources affect multiple ecosystem functions across two trophic levels. Considering several sources of adaptation enabled the interplay between ecological and evolutionary processes, which affected trophic coupling and thereby ecosystem functioning. Third, I reflect further on the multifaceted nature of diversity by developing an index, K, able to quantify the facet of functional diversity, which is itself multifaceted. K can provide a comprehensive picture of functional diversity and is a rather good predictor of ecosystem functioning. Finally, I synthesise the interdependent mechanisms (complementarity and selection effects, trophic coupling and adaptation) underlying the relationships between multifaceted diversity, ecosystem functioning and the environment, and discuss the generalisation of my findings across ecosystems as well as further perspectives towards elaborating an operational biodiversity-ecosystem functioning framework for research and conservation.
Homomorphisms are a fundamental concept in mathematics expressing the similarity of structures. They provide a framework that captures many of the central problems of computer science with close ties to various other fields of science. Thus, many studies over the last four decades have been devoted to the algorithmic complexity of homomorphism problems. Despite their generality, it has been found that non-uniform homomorphism problems, where the target structure is fixed, frequently feature complexity dichotomies. Exploring the limits of these dichotomies represents the common goal of this line of research.
We investigate the problem of counting homomorphisms to a fixed structure over a finite field of prime order and its algorithmic complexity. Our emphasis is on graph homomorphisms and the resulting problem #_{p}Hom[H] for a graph H and a prime p. The main research question is how counting over a finite field of prime order affects the complexity.
In the first part of this thesis, we tackle the research question in its generality and develop a framework for studying the complexity of counting problems based on category theory. In the absence of problem-specific details, results in the language of category theory provide a clear picture of the properties needed and highlight common ground between different branches of science. The proposed problem #Mor^{C}[B] of counting the number of morphisms to a fixed object B of C is abstract in nature and encompasses important problems like constraint satisfaction problems, which serve as a leading example for all our results. We find explanations and generalizations for a plethora of results in counting complexity. Our main technical result is that specific matrices of morphism counts are non-singular. The strength of this result lies in its algebraic nature. First, our proofs rely on carefully constructed systems of linear equations, which we know to be uniquely solvable. Second, by exchanging the field over which the matrix is defined for a finite field of order p, we obtain analogous results for modular counting. For the latter, cancellations are implied by automorphisms of order p, but intriguingly we find that these present the only obstacle to translating our results from exact counting to modular counting. If we restrict our attention to reduced objects without automorphisms of order p, we obtain results analogous to those for exact counting. This is underscored by a confluent reduction that enables this restriction by constructing a reduced object for any given object. We emphasize the strength of the categorial perspective by applying the duality principle, which yields immediate consequences for the dual problem of counting the number of morphisms from a fixed object.
In the second part of this thesis, we focus on graphs and the problem #_{p}Hom[H]. We conjecture that automorphisms of order p capture all possible cancellations and that, for a reduced graph H, the problem #_{p}Hom[H] features the complexity dichotomy analogous to the one given for exact counting by Dyer and Greenhill. This serves as a generalization of the conjecture by Faben and Jerrum for the modulus 2. The criterion for tractability is that H is a collection of complete bipartite and reflexive complete graphs. From the findings of part one, we show that the conjectured dichotomy implies dichotomies for all quantum homomorphism problems, in particular counting vertex-surjective homomorphisms and compactions modulo p. Since the tractable cases in the dichotomy are solved by trivial computations, the study of the intractable cases remains. As an initial problem in a series of reductions capable of implying hardness, we employ the problem of counting weighted independent sets in a bipartite graph modulo a prime p. A dichotomy for this problem is shown, stating that the trivial cases occurring when a weight is congruent to 0 modulo p are the only tractable cases. We reduce the possible structure of H to the bipartite case by a reduction to the restricted homomorphism problem #_{p}Hom^{bip}[H] of counting modulo p the number of homomorphisms between bipartite graphs that maintain a given order of bipartition. This reduction does not have an impact on the accessibility of the technical results, thanks to the generality of the findings of part one. In order to prove the conjecture, it suffices to show that for a connected bipartite graph that is not complete, #_{p}Hom^{bip}[H] is #_{p}P-hard. Through a rigorous structural study of bipartite graphs, we establish this result for the rich class of bipartite graphs that are (K_{3,3}\{e}, domino)-free.
This overcomes in particular the substantial hurdle imposed by squares, which leads us to explore the global structure of H and prove the existence of explicit structures that imply hardness.
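The central object #_{p}Hom[H] — the number of homomorphisms from an input graph G to the fixed graph H, reduced modulo a prime p — can be pinned down with a small brute-force sketch (purely illustrative; the graph encoding and function name are my own, and real instances of interest are of course not solvable by enumeration):

```python
from itertools import product

def count_homs_mod_p(G, H, p):
    """Count graph homomorphisms from G to H, modulo the prime p.

    Graphs are (vertex list, edge collection) pairs. Edges of H are
    stored as frozensets, so the graphs are undirected and a loop at
    vertex v is written frozenset({v}).
    """
    vg, eg = G
    vh, eh = H
    count = 0
    # Enumerate every map f: V(G) -> V(H) and keep the edge-preserving ones.
    for f in product(vh, repeat=len(vg)):
        assign = dict(zip(vg, f))
        if all(frozenset({assign[u], assign[v]}) in eh for (u, v) in eg):
            count += 1
    return count % p

# Example: a single edge maps into the triangle K_3 in 6 ways.
edge = (["a", "b"], [("a", "b")])
triangle = ([0, 1, 2], {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
print(count_homs_mod_p(edge, triangle, 5))  # 6 mod 5 = 1
```

On the tractable side of the conjectured dichotomy (H a disjoint collection of complete bipartite and reflexive complete graphs), such counts collapse to closed-form computations; the brute force above only serves to make explicit what is being counted.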
Among the different meanings carried by numerical information, cardinality is fundamental for survival and for the development of basic as well as of higher numerical skills. Importantly, the human brain inherits from evolution a predisposition to map cardinality onto space, as revealed by the presence of spatial-numerical associations (SNAs) in humans and animals. Here, the mapping of cardinal information onto physical space is addressed as a hallmark signature characterizing numerical cognition.
According to traditional approaches, cognition is defined as complex forms of internal information processing taking place in the brain (the cognitive processor). In contrast, embodied cognition approaches define cognition as functionally linked to perception and action, arising in the continuous interaction between a biological body and its physical and sociocultural environment.
Embracing the principles of the embodied cognition perspective, I conducted four novel studies designed to unveil how SNAs originate, develop, and adapt, depending on characteristics of the organism, the context, and their interaction. I structured my doctoral thesis in three levels. At the grounded level (Study 1), I unfold the biological foundations underlying the tendency to map cardinal information across space; at the embodied level (Study 2), I reveal the impact of atypical motor development on the construction of SNAs; at the situated level (Study 3), I document the joint influence of visuospatial attention and task properties on SNAs. Furthermore, I experimentally investigate the presence of associations between physical and numerical distance, another numerical property fundamental for the development of efficient mathematical minds (Study 4).
In Study 1, I present the Brain’s Asymmetric Frequency Tuning hypothesis, which relies on hemispheric asymmetries in processing spatial frequencies, a low-level visual feature that the (in)vertebrate brain extracts from any visual scene to create a coherent percept of the world. Computational analyses of the power spectra of the original stimuli used to document the presence of SNAs in human newborns and animals support the brain’s asymmetric frequency tuning as a theoretical account and as an evolutionarily inherited mechanism scaffolding the universal and innate tendency to represent cardinality across horizontal space.
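The kind of power-spectrum computation mentioned above can be sketched as follows. This is a generic rotationally averaged power spectrum of a 2-D stimulus image, assumed here purely for illustration; it is not the study's actual analysis pipeline.

```python
import numpy as np

def radial_power_spectrum(image, n_bins=32):
    """Rotationally averaged power spectrum of a 2-D stimulus:
    mean spectral power as a function of radial spatial frequency."""
    # 2-D FFT with the DC component shifted to the image centre.
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w / 2, y - h / 2)          # radial frequency of each bin
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    # Average the power falling into each radial-frequency ring.
    total = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return total / np.maximum(counts, 1)
```

A constant (zero-frequency) image concentrates all power in the lowest bin, whereas fine-grained texture shifts power toward higher spatial frequencies; comparing such profiles between stimulus halves is one way to quantify low- versus high-frequency content.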
In Study 2, I explore SNAs in children with rare genetic neuromuscular diseases: spinal muscular atrophy (SMA) and Duchenne muscular dystrophy (DMD). SMA children never achieve independent motor exploration of their environment; in contrast, DMD children do explore but later lose this ability. The different SNAs shown by the two groups support the critical role of early sensorimotor experiences in the spatial representation of cardinality.
In Study 3, I directly compare the effects of overt attentional orientation during explicit and implicit processing of numerical magnitude. First, the different effects of attentional orienting based on the type of assessment support different mechanisms underlying SNAs during explicit and implicit assessment of numerical magnitude. Secondly, the impact of vertical shifts of attention on the processing of numerical distance sheds light on the correspondence between numerical distance and peri-personal distance.
In Study 4, I document the presence of different SNAs, driven by numerical magnitude and numerical distance, by employing different response mappings (left vs. right and near vs. distant).
In the field of numerical cognition, the four studies included in the present thesis contribute to unveiling how the characteristics of the organism and the environment influence the emergence, development, and flexibility of our tendency to represent cardinal information across space, thus supporting the predictions of the embodied cognition approach. Furthermore, they inform a taxonomy of body-centred factors (biological properties of the brain and sensorimotor system) modulating the spatial representation of cardinality throughout the course of life, at the grounded, embodied, and situated levels.
While awareness of the different variables influencing SNAs over the course of life is important, it is equally important to consider the organism as a whole in its sensorimotor interaction with the world. Inspired by my doctoral research, here I propose a holistic perspective that considers the role of evolution, embodiment, and environment in the association of cardinal information with directional space. The new perspective advances the current approaches to SNAs, both at the conceptual and at the methodological level.
Unveiling how the mental representation of cardinality emerges, develops, and adapts is necessary to shape efficient mathematical minds and achieve economic productivity, technological progress, and a higher quality of life.
«Musik erfinden und gestalten» (inventing and shaping music) holds great potential for music education: experimenting with sounds, developing a feel for dramaturgical progressions, communicating nonverbally – inventing and shaping music opens up a broad field of musical activities and opportunities for experience. Yet in regular music lessons in Switzerland's compulsory schools, production-oriented didactic approaches are still rather the exception, and music teachers lack strategies for guiding such processes.
For the present book, the author conducted a design-based research study examining how primary school teachers gradually develop their guidance strategies when carrying out musical creation processes with their school classes. The researcher accompanied the teachers in their everyday school practice and intervened with targeted reflection prompts to support the professionalization process.
Three reflection tools were generated from this work: the reflection tool try-outs contains concrete suggestions for action and reflection questions for guiding musical creation processes; the online tool improspider is a self-reflection instrument for assessing personal orientations; and the competence model Kompetenzflyer offers a reflective frame for targeting independent steps of competence acquisition.
The reflection tools are also available online in the form of a learning object.
Water stored in the unsaturated soil as soil moisture is a key component of the hydrological cycle, influencing numerous hydrological processes including hydrometeorological extremes. Soil moisture influences flood generation processes, and during droughts, when precipitation is absent, it provides plants with transpirable water, thereby sustaining plant growth and survival in agriculture and natural ecosystems.
Soil moisture stored in deeper soil layers, e.g. below 100 cm, is of particular importance for providing plant-transpirable water during dry periods. Not being directly connected to the atmosphere and lying outside the soil layers with the highest root densities, water in these layers is less susceptible to rapid evaporation and transpiration. Instead, it provides longer-term soil water storage, increasing the drought tolerance of plants and ecosystems.
Given the importance of soil moisture in the context of hydro-meteorological extremes in a warming climate, its monitoring is part of official national adaptation strategies to a changing climate. Yet soil moisture is highly variable in time and space, which challenges its monitoring on the spatio-temporal scales relevant for flood and drought risk modelling and forecasting.
Introduced over a decade ago, Cosmic-Ray Neutron Sensing (CRNS) is a noninvasive geophysical method that allows soil moisture to be estimated at relevant spatial scales of several hectares and at a high, sub-daily temporal resolution. CRNS relies on the detection of secondary neutrons above the soil surface, which are produced from high-energy cosmic-ray particles in the atmosphere and the ground. Neutrons in a specific epithermal energy range are sensitive to the amount of hydrogen present in the surroundings of the CRNS neutron detector. Because a neutron has almost the same mass as a hydrogen nucleus, it loses kinetic energy upon collision with hydrogen and is subsequently absorbed once it reaches low, thermal energies. A higher amount of hydrogen therefore leads to fewer neutrons being detected per unit time. Since in most terrestrial ecosystems the largest amount of hydrogen is stored as soil moisture, changes in soil moisture can be estimated through an inverse relationship with observed neutron intensities.
Although important scientific advancements have been made to improve the methodological framework of CRNS, several open challenges remain, some of which are addressed in the scope of this thesis. These include the influence of atmospheric variables such as air pressure and absolute air humidity, as well as the impact of variations in incoming primary cosmic-ray intensity, on observed epithermal and thermal neutron signals, and the correction of these effects. Recently introduced advanced neutron-to-soil-moisture transfer functions are expected to improve CRNS-derived soil moisture estimates, but the potential improvements need to be investigated at study sites with differing environmental conditions. Sites with strongly heterogeneous, patchy soil moisture distributions challenge existing transfer functions, and further research is required to assess, and correct for, the impact of heterogeneous site conditions on derived soil moisture estimates. Despite its capability of measuring representative averages of soil moisture at the field scale, CRNS lacks integration depth below the first few decimetres of the soil. Given the importance of soil moisture in deeper soil layers as well, extending the observational window of CRNS through modelling approaches or in situ measurements is of high importance for hydrological monitoring applications.
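For orientation, the standard CRNS processing chain described above can be sketched as follows. The multiplicative correction factors and the Desilets-type transfer function are generic literature forms with typical coefficient values; the site-specific calibration parameter N0 and the exact coefficients are assumptions for illustration, not values from this thesis.

```python
import math

# Standard shape parameters of the Desilets-type transfer function
# (typical literature values; site calibration may differ).
A0, A1, A2 = 0.0808, 0.372, 0.115

def correct_neutrons(n_raw, pressure, p_ref, humidity, h_ref,
                     incoming, i_ref, beta=0.0076, alpha=0.0054):
    """Apply the standard multiplicative corrections to a raw epithermal
    neutron count rate. beta (per hPa) and alpha (per g/m^3) are typical
    literature coefficients, assumed here, not site-specific values."""
    f_p = math.exp(beta * (pressure - p_ref))   # air pressure
    f_h = 1.0 + alpha * (humidity - h_ref)      # absolute air humidity
    f_i = i_ref / incoming                      # incoming cosmic-ray flux
    return n_raw * f_p * f_h * f_i

def soil_moisture(n_corr, n0):
    """Invert the neutron-to-soil-moisture transfer function: water
    content as a decreasing function of the corrected count rate,
    with n0 the count rate over dry soil (site calibration parameter)."""
    return A0 / (n_corr / n0 - A1) - A2
```

At reference atmospheric conditions the corrections reduce to the identity, and lower corrected count rates map to higher soil moisture, reflecting the inverse relationship described above.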
By addressing these challenges, this thesis helps to close knowledge gaps and to answer some of the open questions in CRNS research. Influences of different environmental variables are quantified, and correction approaches are tested and developed. Neutron-to-soil-moisture transfer functions are evaluated, and approaches to reduce the effects of heterogeneous soil moisture distributions are presented. Lastly, soil moisture estimates from larger soil depths are derived from CRNS through modified, simple modelling approaches and through in situ estimates obtained by using CRNS as a downhole technique. Thereby, this thesis not only illustrates the potential of new, as yet unexplored applications of CRNS but also opens a new field of CRNS research. Consequently, this thesis advances the methodological framework of CRNS for above-ground and downhole applications. Although further research is needed to fully exploit the potential of CRNS, this thesis contributes to current hydrological research and, not least, to advancing hydrological monitoring approaches, which are of utmost importance in the context of intensifying hydro-meteorological extremes in a changing climate.
Overcoming natural biomass limitations in gram-negative bacteria through synthetic carbon fixation
(2024)
The carbon demands of an ever-increasing human population and the concomitant rise in net carbon emissions require CO2-sequestering approaches for the production of carbon-containing molecules. Microbial production of carbon-containing products from plant-based sugars could replace current fossil-based production. However, this form of sugar-based microbial production competes directly with the human food supply and natural ecosystems. Instead, one-carbon feedstocks derived from CO2 and renewable energy have been proposed as an alternative. The one-carbon molecule formate is a stable, readily soluble and safe-to-store energetic mediator that can be electrochemically generated from CO2 and (excess off-peak) renewable electricity. Formate-based microbial production could therefore represent a promising approach for a circular carbon economy. However, easy-to-engineer and efficient formate-utilizing microbes are lacking. Multiple synthetic metabolic pathways have been designed for better-than-nature carbon fixation; among them, the reductive glycine pathway was proposed as the most efficient pathway for aerobic formate assimilation. While some of these pathways have been successfully engineered in microbial hosts, the resulting synthetic strains have so far not exceeded the performance of natural strains. In this work, I engineered and optimized two different synthetic formate assimilation pathways in gram-negative bacteria to exceed the limits of a natural carbon fixation pathway, the Calvin cycle.
The first chapter solidified Cupriavidus necator as a promising formatotrophic host for producing value-added chemicals. The formate tolerance of C. necator was assessed, and a production pathway for crotonate was established in a modularized fashion. Finally, bioprocess optimization was leveraged to produce crotonate from formate at a titer of 148 mg/L.
In the second chapter, I chromosomally integrated and optimized the synthetic reductive glycine pathway in C. necator using a transposon-mediated selection approach. The insertion methodology allowed selection for condition-specific, tailored pathway expression, as improved pathway performance led to better growth. I then showed that my engineered strains exceed the biomass yields of the Calvin-cycle-utilizing wildtype C. necator on formate. This demonstrated for the first time the superiority of a synthetic formate assimilation pathway and, by extension, of synthetic carbon fixation efforts as a whole.
In chapter 3, I engineered a segment of a synthetic carbon fixation cycle in Escherichia coli. The GED cycle was proposed as a Calvin cycle alternative that does not perform a wasteful oxygenation reaction and is more energy efficient. The pathway's simple architecture and reasonable driving force made it a promising candidate for enhanced carbon fixation. I created a deletion strain that coupled growth to carboxylation via the GED pathway segment. The CO2 dependence of the engineered strain and 13C-tracer analysis confirmed operation of the pathway in vivo.
In the final chapter, I present my efforts to implement the GED cycle in C. necator as well, which might be a better-suited host, as it is accustomed to formatotrophic and hydrogenotrophic growth. To provide the carboxylation substrate in vivo, I engineered C. necator to utilize xylose as a carbon source and created a selection strain for carboxylase activity. I verified the activity of the key enzyme, the carboxylase, in the decarboxylative direction. Although CO2-dependent growth of the strain was not obtained, I showed that all enzymes required for operation of the GED cycle are active in vivo in C. necator.
I then evaluate my success in engineering a linear and a cyclical one-carbon fixation pathway in two different microbial hosts. The linear reductive glycine pathway presents itself as a much simpler metabolic solution for formate-dependent growth than the sophisticated establishment of hard-to-balance carbon fixation cycles. Finally, I highlight advantages and disadvantages of C. necator as an upcoming microbial benchmark organism for synthetic metabolism efforts and give an outlook on its potential for the future of C1-based manufacturing.
Portal Transfer 2024
(2024)
Dear readers, leaving one's own "bubble", changing perspectives, overcoming silo mentality – what science manages, and indeed must manage, internally in order to succeed still poses challenges for its outward impact. Yet it has long been part of the self-image of modern universities to explain publicly what is being researched within their walls, to contribute to societal debates, and to transfer their findings swiftly into practice.
The University of Potsdam has installed these transfer tasks as a third pillar alongside teaching and research, lending its edifice even more stability. For years it has ranked among the most successful universities in national comparison when it comes to promoting start-ups and founding companies out of research. In this magazine we report on Potassco Solutions GmbH, founded by computer scientist Torsten Schaub, whose AI system Clingo solves complex optimization problems for companies, and on SEQSTANT GmbH, whose innovative diagnostics can identify the pathogens of respiratory diseases in real time. We also show how research teams cooperate with industry, for example with K-UTEC in Sondershausen, Thuringia, contributing scientific know-how to ensure that no valuable lithium is lost in production waste there.
While technology transfer is aimed primarily at business, knowledge transfer benefits society as a whole. The University of Potsdam is particularly strong here in education, because with its teacher-training graduates it sends the current state of teaching research straight into school practice. Digitalization is entering classrooms ever more frequently; how this can succeed is described in this magazine. We also explain what sports science can contribute to the therapy of depression, and how environmental research aims to improve risk management in flood-prone regions. Whether in public administrations or political institutions – scientific expertise is in demand everywhere. We illustrate this with the example of Frauke Brosius-Gersdorf, a legal scholar who advises the Federal Government on the regulation of abortion.
The shortest path of knowledge from the university into practice undoubtedly runs through the alumni, who become effective as specialists and leaders in the state and beyond. That this path can begin during one's studies is proven by the many student initiatives that have their say here. None of them shy away from the limelight: whether at science slams on stages across the state of Brandenburg, at the TEDx talks in the Hans Otto Theater, at the art tour in Potsdam's Waschhaus-Arena, or with English-language drama at the university. Appearing in public and finding new formats to carry knowledge to the broader population – that, too, is part of transfer. Just like this magazine.