Prediction is often regarded as a central and domain-general aspect of cognition. This proposal extends to language, where predictive processing might enable the comprehension of rapidly unfolding input by anticipating upcoming words or their semantic features. To make these predictions, the brain needs to form a representation of the predictive patterns in the environment. Predictive processing theories suggest a continuous learning process that is driven by prediction errors, but much is still to be learned about this mechanism in language comprehension. This thesis therefore combined three electroencephalography (EEG) experiments to explore the relationship between prediction and implicit learning at the level of meaning.
Results from Study 1 support the assumption that the brain constantly infers and updates probabilistic representations of the semantic context, potentially across multiple levels of complexity. N400 and P600 brain potentials could be predicted by semantic surprise based on a probabilistic estimate of previous exposure and a more complex probability representation, respectively.
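The "semantic surprise" invoked here is commonly formalized as Shannon surprisal; the following definition is the standard one from the predictive processing literature, stated for orientation rather than quoted from the thesis itself:

$$ S(w_t) = -\log_2 P\!\left(w_t \mid w_1, \ldots, w_{t-1}\right), $$

so that less probable continuations carry more bits of unpredicted information; on this reading, larger N400 amplitudes correspond to larger values of $S(w_t)$.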
Subsequent work investigated the influence of prediction errors on the update of semantic predictions during sentence comprehension. In line with error-based learning, unexpected sentence continuations in Study 2 – characterized by large N400 amplitudes – were associated with increased implicit memory compared to expected continuations. Further, Study 3 indicates that prediction errors not only strengthen the representation of the unexpected word, but also update specific predictions made from the respective sentence context. The study additionally provides initial evidence that the amount of unpredicted information, as reflected in N400 amplitudes, drives this update of predictions, irrespective of the strength of the original incorrect prediction.
Together, these results support a central assumption of predictive processing theories: a probabilistic predictive representation at the level of meaning that is updated by prediction errors. They further suggest the N400 ERP component as a possible learning signal. The results also emphasize the need for further research regarding the role of late positive ERP components in error-based learning. The continuous error-based adaptation described in this thesis allows the brain to improve its predictive representation with the aim of making better predictions in the future.
On the effects of disorder on the ability of oscillatory or directional dynamics to synchronize
(2024)
In this thesis I present a collection of my publications, containing analytic results and numerical observations on how various inhomogeneities affect the ability of coupled oscillators to synchronize their collective dynamics. Most of these works are concerned with the effects of Gaussian and non-Gaussian noise acting on the phase of autonomous oscillators (Secs. 2.1-2.4) or on the direction of higher dimensional state vectors (Secs. 2.5, 2.6). I obtain exact and approximate solutions to the non-linear equations governing the distributions of phases, or perform linear stability analysis of the uniform distribution to obtain the transition point from a completely disordered state to partial order or more complicated collective behavior. Other inhomogeneities that can affect the synchronization of coupled oscillators are irregular, chaotic oscillations or a complex, possibly random, structure of the coupling network. In Section 2.9 I present a new method to define the phase and frequency linear response functions for chaotic oscillators. In Sections 2.4, 2.7 and 2.8 I study synchronization in complex networks of coupled oscillators. Each section in Chapter 2 (Manuscripts) is devoted to one research paper and begins with a list of the main results, a description of my contributions to the work, and a short account of the scientific context, i.e., the questions and challenges that motivated the research and its relation to my other research projects. The manuscripts in this thesis are reproductions of the arXiv versions, i.e., preprints published under a Creative Commons licence.
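The abstract does not commit to a specific model, but the canonical setting for such questions is the noisy Kuramoto model of $N$ coupled phase oscillators, stated here for orientation:

$$ \dot{\varphi}_i = \omega_i + \frac{K}{N} \sum_{j=1}^{N} \sin(\varphi_j - \varphi_i) + \xi_i(t), \qquad r\,e^{i\psi} = \frac{1}{N} \sum_{j=1}^{N} e^{i\varphi_j}, $$

where $\omega_i$ are natural frequencies, $K$ is the coupling strength, and $\xi_i(t)$ is (possibly non-Gaussian) noise acting on the phases. The order parameter $r$ distinguishes the completely disordered state ($r = 0$, a uniform phase distribution) from partial synchrony ($r > 0$); the linear stability analysis mentioned above determines where the uniform distribution loses stability.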
Data preparation stands as a cornerstone in the landscape of data science workflows, commanding a significant portion—approximately 80%—of a data scientist's time. The extensive time consumption in data preparation is primarily attributed to the intricate challenge faced by data scientists in devising tailored solutions for downstream tasks. This complexity is further magnified by the inadequate availability of metadata, the often ad-hoc nature of preparation tasks, and the necessity for data scientists to grapple with a diverse range of sophisticated tools, each presenting its unique intricacies and demands for proficiency.
Previous research in data management has traditionally concentrated on preparing the content within columns and rows of a relational table, addressing tasks such as string disambiguation, date standardization, or numeric value normalization, commonly referred to as data cleaning. This focus assumes a perfectly structured input table. Consequently, the mentioned data cleaning tasks can be effectively applied only after the table has been successfully loaded into the respective data cleaning environment, typically in the later stages of the data processing pipeline.
While current data cleaning tools are well-suited for relational tables, extensive data repositories frequently contain data stored in plain text files, such as CSV files, due to their adaptable standard. Consequently, these files often exhibit tables with a flexible layout of rows and columns, lacking a relational structure. This flexibility often results in data being distributed across cells in arbitrary positions, typically guided by user-specified formatting guidelines.
Effectively extracting and leveraging these tables in subsequent processing stages necessitates accurate parsing. This thesis emphasizes what we define as the “structure” of a data file—the fundamental characters within a file essential for parsing and comprehending its content. Concentrating on the initial stages of the data preprocessing pipeline, this thesis addresses two crucial aspects: comprehending the structural layout of a table within a raw data file and automatically identifying and rectifying any structural issues that might hinder its parsing. Although these issues may not directly impact the table's content, they pose significant challenges in parsing the table within the file.
Our initial contribution comprises an extensive survey of commercially available data preparation tools. This survey thoroughly examines their distinctive features, the features they lack, and the preliminary data processing that remains necessary despite these tools. The primary goal is to elucidate the current state of the art in data preparation systems while identifying areas for enhancement. Furthermore, the survey explores the challenges encountered in data preprocessing, emphasizing opportunities for future research and improvement.
Next, we propose a novel data preparation pipeline designed for detecting and correcting structural errors. The aim of this pipeline is to assist users at the initial preprocessing stage by ensuring the correct loading of their data into their preferred systems. Our approach begins by introducing SURAGH, an unsupervised system that utilizes a pattern-based method to identify dominant patterns within a file, independent of external information, such as data types, row structures, or schemata. By identifying deviations from the dominant pattern, it detects ill-formed rows. Subsequently, our structure correction system, TASHEEH, gathers the identified ill-formed rows along with dominant patterns and employs a novel pattern transformation algebra to automatically rectify errors. Our pipeline serves as an end-to-end solution, transforming a structurally broken CSV file into a well-formatted one, usually suitable for seamless loading.
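To make the pattern-based idea concrete, the following minimal Python sketch illustrates the detection step under simplifying assumptions (our own illustration with hypothetical helper names, not the actual SURAGH implementation): each row is abstracted into a coarse syntactic pattern, the most frequent pattern is taken as dominant, and rows deviating from it are flagged as ill-formed.

```python
import csv
from collections import Counter

def row_pattern(row):
    """Abstract a CSV row into a coarse syntactic pattern:
    one type tag per field (D = numeric-looking, T = text, E = empty)."""
    def tag(field):
        f = field.strip()
        if not f:
            return "E"
        if f.replace(".", "", 1).lstrip("-").isdigit():
            return "D"
        return "T"
    return tuple(tag(f) for f in row)

def find_ill_formed_rows(path, delimiter=","):
    """Return the dominant pattern and the (line_no, row) pairs deviating from it."""
    with open(path, newline="") as fh:
        rows = list(csv.reader(fh, delimiter=delimiter))
    patterns = [row_pattern(r) for r in rows]
    dominant, _ = Counter(patterns).most_common(1)[0]
    ill_formed = [(i, r) for i, (r, p) in enumerate(zip(rows, patterns)) if p != dominant]
    return dominant, ill_formed
```

A correction step in the spirit of TASHEEH would then transform each flagged row toward the dominant pattern (for example, merging fields split by an unescaped delimiter), which is where the pattern transformation algebra mentioned above comes in.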
Finally, we introduce MORPHER, a user-friendly GUI integrating the functionalities of both SURAGH and TASHEEH. This interface empowers users to access the pipeline's features through visual elements. Our extensive experiments demonstrate the effectiveness of our data preparation systems, requiring no user involvement. Both SURAGH and TASHEEH outperform existing state-of-the-art methods significantly in both precision and recall.
The aim of the present work is to convey the ancient relationship between humans and their natural environment in Latin instruction, together with a comparison to the present-day situation. This relationship is explored through the example of ancient mining, a particularly vivid field of environmental history: it is highly topical and offers great potential for drawing insights for the present from its study.
A teaching concept is presented that simultaneously undertakes an analysis of human perceptions of nature. First, the heterogeneity of this perception in antiquity is demonstrated and related to the criticism of mining voiced at the time. The following aspects are then addressed: 1. ancient mining technology and practice, 2. the working conditions prevailing at the time, 3. the raw materials extracted and their uses, and 4. the consequences of mining for humans and the environment. The didactic part consists of a design for three double lessons; it contains the teaching materials, the accompanying explanations, and the expected learning outcomes.
The professional knowledge of students in the primary-level teaching program in the area of the "Haus der Vierecke" (house of quadrilaterals)
(2024)
The professionalization of prospective teachers, as an important lever for school education, is an essential task of university teaching. It constitutes one pillar of the university reform project "PSI-Potsdam" within the "Qualitätsoffensive Lehrerbildung" (Quality Offensive for Teacher Education). The goal is quality assurance through the evaluation and further development of courses, guided by design principles for imparting professional knowledge.
The present work focuses on the effectiveness of the course "Geometrie und ihre Didaktik 1 und 2" (Geometry and Its Didactics 1 and 2) and examines, by way of example, the extent to which students in the primary-level mathematics teaching program have acquired the subject-matter and subject-didactic knowledge of concept formation targeted there, using the house of quadrilaterals as an example. Building appropriate mental models of the various types of quadrilaterals and relating them to one another hierarchically requires an active process, according to the didactic model of learning geometric concepts, and thus poses a difficulty for learners at school and university alike.
To answer the research question, a qualitative study with a mixed-methods design first surveyed 95 students in writing about their knowledge of the topic. A focus group interview was then conducted to identify learning hurdles and difficulties. The data were analyzed with computer support using qualitative content analysis.
The results reveal a wide variety of competence levels across all relevant facets. With regard to the required perspective-taking, identification of causes, and model-guided proposals for their prevention, deficits emerged in particular in the form of misconceptions. There were also difficulties in applying and integrating the required professional knowledge across all knowledge components considered. From this, suggestions are derived, on the one hand, for developing the course so as to strengthen the subject-matter foundation of future teachers; these include handling prototypical representations more sensitively and reinforcing students' concept formation, for instance by making the interconnections of the house of quadrilaterals within the spiral curriculum explicit at a meta-level. On the other hand, suggestions concern the study design, specifically the structure of the survey for effectively eliciting the targeted professional knowledge; here, an explicit elicitation of the students' own conceptions and a reformulation of the knowledge-test tasks using operators are suggested, among other things.
"Over the past decades, the call for sustainable development has grown ever louder in the face of numerous global challenges affecting all of humanity" (Kropp, 2019, p. 4).
Education for sustainable development (Bildung für nachhaltige Entwicklung, BNE) aims to empower people to confront these global challenges actively, to help shape their own future, and to take responsibility for the future of subsequent generations. General science teaching (Sachunterricht) in primary school likewise faces the task of translating the principles of BNE into school practice. At its center stands the question of suitable approaches to this perspective-connecting topic, approaches that should be both motivating and educationally effective for pupils. When implemented appropriately, beekeeping can provide such an approach within school teaching.
Volume 3 of the Potsdamer Beiträge zur Innovation des Sachunterrichts, oriented toward school practice, therefore uses beekeeping as an example to present a concept for how primary school science teaching can enable children's practical learning activity in line with the goals, dimensions, and competence expectations of education for sustainable development. As a foundational work, the volume addresses all teachers of Sachunterricht and its related subjects, as well as other interested readers.
Quantified Self, the proactive self-tracking of individuals, has developed in recent years from a niche application into a mass phenomenon. Users today have a wide range of technical support at their disposal, for example in the form of smartphones, fitness trackers, or health apps, which allow nearly seamless monitoring of various contextual factors of an individual's everyday life.
This work therefore addresses, among other things, the question of the extent to which this intensive, self-initiated engagement, particularly with health-related data that are widely regarded as objective and thus reliable, can increase the health literacy of such active individuals. It further examines the extent to which the new technologies are able to deepen specific medical insights and, as a consequence, change the resulting treatment processes.
While the origins of the Quantified Self movement lie in the second healthcare market (privately financed products and services), this work investigates which structural, personnel, and procedural points of contact will prospectively exist in the first healthcare market (collectively financed, regulated care) once a potential patient, in a more emancipated manner, wishes or demands to have the health data they have collected integrated as comprehensively as possible into medical treatment.
On the one hand, current developments in the second healthcare market are examined, which are characterized by high dynamism and considerable opacity. On the other hand stands the first healthcare market, regarded as heavily regulated and barely digitalized, with its long development cycles and the pronounced particular interests of its various stakeholders.
In this context, current developments in the underlying legal framework are examined, especially with regard to more patient-centered and digitalized norms, with the Digitale Versorgung Gesetz (Digital Healthcare Act) playing a particularly important role.
The aim of this work is a deeper understanding of the interactions at the interface between the two healthcare markets with respect to the use of self-tracking technologies, in order to identify future business potential for existing service providers and new entrants pushing into the market.
The central method is a Delphi study that, in an interprofessional approach, attempts to sketch a picture of these still very young developments for the year 2030. The results are embedded in an examination of the general societal acceptance of the changes outlined.
Due to their sessile lifestyle, plants are constantly exposed to pathogens and possess a multi-layered immune system that prevents infection. The first layer of immunity, called pattern-triggered immunity (PTI), enables plants to recognise highly conserved molecules that are present in pathogens, resulting in immunity against non-adapted pathogens. Adapted pathogens interfere with PTI; however, the second layer of plant immunity can recognise these virulence factors, resulting in a constant evolutionary battle between plant and pathogen. Xanthomonas campestris pv. vesicatoria (Xcv) is the causal agent of bacterial leaf spot disease in tomato and pepper plants. Like many Gram-negative bacteria, Xcv possesses a type-III secretion system, which it uses to translocate type-III effectors (T3E) into plant cells. Xcv has over 30 T3Es that interfere with the immune response of the host and are important for successful infection. One such effector is the Xanthomonas outer protein M (XopM), which shows no similarity to any other known protein. The characterisation of XopM and its role in virulence was the focus of this work.
A screen of a tobacco cDNA library for potential host target proteins identified the vesicle-associated membrane protein (VAMP)-associated protein 1-2-like (VAP12). The interaction between XopM and VAP12 was confirmed in the model species Nicotiana benthamiana and Arabidopsis, as well as in tomato, an Xcv host. As plants possess multiple VAP proteins, it was determined that the interaction of XopM and VAP is isoform specific.
It could be confirmed that the major sperm protein (MSP) domain of NtVAP12 is sufficient for binding XopM and that binding can be disrupted by substituting a single amino acid (T47) within this domain. Most VAP interactors have at least one FFAT-related motif (two phenylalanines [FF] in an acidic tract); screening the amino acid sequence of XopM revealed two such FFAT-related motifs. Substitution of the second residue of each FFAT motif (Y61/F91) disrupts NtVAP12 binding, suggesting that these motifs cooperatively mediate the interaction. Structural modelling using AlphaFold indicated that the unstructured N-terminus of XopM binds NtVAP12 at its MSP domain, which was further supported by the generation of truncated XopM variants.
Infection of pepper leaves with a XopM-deficient Xcv strain did not result in a reduction of virulence in comparison to the Xcv wild type, showing that the function of XopM during infection is redundant. Virus-induced gene silencing of NbVAP12 in N. benthamiana plants also did not affect Xcv virulence, further indicating that the interaction with VAP12 is non-essential for Xcv virulence. Despite these findings, ectopic expression of wild-type XopM and XopMY61A/F91A in transgenic Arabidopsis seedlings enhanced the growth of a non-pathogenic Pseudomonas syringae pv. tomato (Pst) DC3000 strain. XopM was found to interfere with the PTI response, allowing Pst growth independent of its binding to VAP. Furthermore, transiently expressed XopM could suppress the production of reactive oxygen species (ROS; one of the earliest PTI responses) in N. benthamiana leaves. The FFAT double mutant XopMY61A/F91A as well as the C-terminal truncation variant XopM106-519 could still suppress the ROS response, while the N-terminal variant XopM1-105 could not. Suppression of ROS production is therefore independent of VAP binding. In addition, tagging the C-terminal variant of XopM with a nuclear localisation signal (NLS; NLS-XopM106-519) resulted in significantly higher ROS production than with the membrane-localising XopM106-519 variant, indicating that XopM-induced ROS suppression is localisation dependent.
To further characterise XopM, mass spectrometry techniques were used to identify post-translational modifications (PTM) and potential interaction partners. PTM analysis revealed that XopM contains up to 21 phosphorylation sites, which could influence VAP binding. Furthermore, proteins of the Rab family were identified as potential plant interaction partners. Rab proteins serve a multitude of functions, including vesicle trafficking, and have previously been identified as T3E host targets. Taking this into account, a model of XopM virulence was proposed, in which XopM anchors itself to VAP proteins, potentially to access plasma membrane-associated proteins. XopM possibly interferes with vesicle trafficking, which in turn suppresses ROS production through an unknown mechanism.
In this work it was shown that XopM targets VAP proteins. The data collected suggest that this T3E uses VAP12 to anchor itself in the right place to carry out its function. While more work is needed to determine how XopM contributes to the virulence of Xcv, this study sheds light on how adapted pathogens overcome the immune response of their hosts. It is hoped that such knowledge will contribute to the development of crops resistant to Xcv in the future.
In the classical age of the intellectuals, Efraim Frisch (1873–1942) and Albrecht Mendelssohn Bartholdy (1874–1936) were (among other things) journal entrepreneurs and founders of the little magazines Der Neue Merkur (1914–1916/1919–1925) and Europäische Gespräche (1923–1933). They stand (and not only with their journals) for one of the attempts, undertaken repeatedly throughout modernity, to activate the resources opened up by the Enlightenment (democratic republicanism and universal and equal rights for all people), trusting in their global realizability. During the Weimar Republic, they belonged to those republicans "who took Weimar seriously as a symbol and strove tenaciously and courageously to give the ideal concrete substance" (Peter Gay). Their example, which has hitherto gone unrecorded, takes its place in the history of democracy in European modernity, in the history of international societal relations, and in the history of the self-assertion of intellectual autonomy.
The study, which spans the period from 1900 to around 1940 across conventional historical caesuras, offers substantial insights into the biographies of Frisch and Mendelssohn Bartholdy, into the Franco-German and European-transatlantic world of the little (literary-political) magazines of the early twentieth century, and into the media-intellectual field of the late Empire and the Weimar Republic in its humanist-democratic-republican tendency. Beyond this, it contains new findings on the history of the 'Heidelberger Vereinigung' (the working group for a politics of law) around Prince Max von Baden, on the German peace delegation at Versailles in 1919 and its afterlife in Hamburg, on the Handbuch der Politik, and on the first official records publication of the Auswärtiges Amt, the Große Politik der Europäischen Kabinette 1871–1914, and, finally, on the efforts of the 'internationalists' of the 1920s to bring about an effective outlawing of wars of aggression.
Archives have the task of preserving knowledge and making it accessible. The collections of the Museum für Naturkunde Berlin (MfN) grew substantially during the era of European colonial expansion. Natural history specimens from all over the world arrived in Berlin, accompanied by scholarly correspondence about them. The traces of these objects and correspondences can be followed in the museum's archive. Today, colonial contexts are largely regarded as contexts of injustice, and their critical reappraisal is being demanded. To make provenance research possible, it is therefore essential that museums and archives disclose their collections (as far as legally and ethically possible) and grant access to outside researchers.
This master's thesis critically reflects on the respectful handling of archival material from colonial contexts and identifies fields of action for a culturally appropriate treatment of sensitive content. Specifically, the proposed options for action concern archival material from colonial contexts relating to Australia. Provenance research, sensitivity, multilingualism, Indigenous Cultural and Intellectual Property (ICIP), and platform and interface options for linking data and content are all considered. The aim is to reflect on the handling of archival material from colonial contexts against the background of archives as sites of cultural memory.
Ensuring needs-based care in old age is one of the decisive tasks of our time. The shortage of skilled workers in Germany and demographic change strain the care system in several respects: in an ageing society, ever more people depend on continuous support, while low birth rates, and with them a shrinking share of the population of working age, entail a shortage of professional caregivers that is already noticeable today.
To guarantee humane care in the long term, existing resources must be deployed in a more targeted manner and additional reserves must be unlocked. Many hopes rest on technological innovation: digitalization is expected to make healthcare more efficient, for instance by using artificial intelligence to simplify or even automate time-consuming processes. In the context of care, the use of robotic assistance systems is under discussion.
For this reason, the Potsdam citizens' conference "Robotik in der Altenpflege?" (Robotics in Elderly Care?) was initiated. To shape the future of care together, 3,500 Potsdam citizens were contacted and twenty-five participants were ultimately selected. In spring 2024 they came together to discuss the responsible use of robotics in care.
The declaration presented here is the outcome of the citizens' conference. It contains the participants' central positions.
The citizens' conference is part of the project E-cARE ("Ethics Guidelines for Socially Assistive Robots in Elderly Care: An Empirical-Participatory Approach"), conducted by the Junior Professorship for Medical Ethics with a Focus on Digitalization at the Faculty of Health Sciences Brandenburg, University of Potsdam.
Massive stars ($M_\text{ini} > 8\,M_\odot$) are the key feedback agents within galaxies, as they shape their surroundings via their powerful winds, ionizing radiation, and explosive supernovae. Most massive stars are born in binary systems, where interactions with their companions significantly alter their evolution and the feedback they deposit in their host galaxy. Understanding binary evolution, particularly in low-metallicity environments that serve as proxies for the early Universe, is crucial for interpreting the rest-frame ultraviolet spectra observed in high-redshift galaxies by telescopes like Hubble and James Webb.
This thesis aims to tackle this challenge by investigating in detail massive binaries within the low-metallicity environment of the Small Magellanic Cloud (SMC) galaxy. From ultraviolet and multi-epoch optical spectroscopic data, we uncovered post-interaction binaries. To comprehensively characterize these binary systems, their stellar winds, and orbital parameters, we use a multifaceted approach. The Potsdam Wolf-Rayet stellar atmosphere code is employed to obtain the stellar and wind parameters of the stars. Additionally, we perform consistent light and radial velocity fitting with the Physics of Eclipsing Binaries software, allowing for the independent determination of orbital parameters and component masses. Finally, we utilize these results to challenge the standard picture of stellar evolution and to improve our understanding of low-metallicity stellar populations by calculating binary evolution models with the Modules for Experiments in Stellar Astrophysics code.
We discovered the first four O-type post-interaction binaries in the SMC (Chapters 2, 5, and 6). Their primary stars have temperatures similar to other OB stars and reside far from the helium zero-age main sequence, challenging the traditional view of binary evolution. Our stellar evolution models suggest this may be due to enhanced mixing after core-hydrogen burning. Furthermore, we discovered the most massive binary system undergoing mass transfer known so far (Chapter 3), offering a unique opportunity to test mass-transfer efficiency under extreme conditions. Our binary evolution calculations revealed unexpected evolutionary pathways for accreting stars in binaries, potentially providing the missing link to understanding the observed Wolf-Rayet population within the SMC (Chapter 4). The results presented in this thesis unveil the properties of massive binaries at low metallicity, which challenge the way the spectra of high-redshift galaxies are currently analyzed, as well as our understanding of massive-star feedback within galaxies.
Astrophysical shocks, driven by explosive events such as supernovae, efficiently accelerate charged particles to relativistic energies. The majority of these shocks occur in collisionless plasmas, where the energy transfer is dominated by particle-wave interactions. Strong nonrelativistic shocks found in supernova remnants are plausible sites of galactic cosmic-ray production, and the observed emission indicates the presence of nonthermal electrons. To participate in the primary mechanism of energy gain, Diffusive Shock Acceleration, electrons must have a highly suprathermal energy, implying a need for very efficient pre-acceleration. This poorly understood aspect of shock acceleration theory is known as the electron injection problem. Studying electron-scale phenomena requires the use of fully kinetic particle-in-cell (PIC) simulations, which describe collisionless plasma from first principles.
Most published studies consider a homogeneous upstream medium, but turbulence is ubiquitous in astrophysical environments and is typically driven at magnetohydrodynamic scales, cascading down to kinetic scales. For the first time, I investigate how preexisting turbulence affects electron acceleration at nonrelativistic shocks using the fully kinetic approach. To accomplish this, I developed a novel simulation framework that allows the study of shocks propagating in turbulent media. It involves simulating slabs of turbulent plasma separately, which are then continuously inserted into a shock simulation. This requires matching the plasma slabs at the interface. A new procedure for matching electromagnetic fields and currents prevents numerical transients, and the plasma evolves self-consistently. The versatility of this framework has the potential to render simulations more consistent with turbulent systems in various astrophysical environments.
In this Thesis, I present the results of 2D3V PIC simulations of high-Mach-number nonrelativistic shocks with preexisting compressive turbulence in an electron-ion plasma. The chosen amplitudes of the density fluctuations ($\lesssim 15\%$) are consistent with in situ measurements in the heliosphere and the local interstellar medium. I explored how these fluctuations impact the dynamics of upstream electrons, the driving of plasma instabilities, and electron heating and acceleration. My results indicate that while the presence of turbulence enhances variations in the upstream magnetic field, their levels remain too low to significantly influence the behavior of electrons at perpendicular shocks. However, the situation is different at oblique shocks. An external magnetic field inclined at an angle of $50^\circ \lesssim \theta_\text{Bn} \lesssim 75^\circ$ relative to the shock normal allows fast electrons to escape toward the upstream region. An extended electron foreshock region is formed, where these particles drive various instabilities. Results for an oblique shock with $\theta_\text{Bn}=60^\circ$ propagating in preexisting compressive turbulence show that the foreshock becomes significantly shorter and the shock-reflected electrons have higher temperatures. Furthermore, the energy spectrum of downstream electrons shows a well-pronounced nonthermal tail that follows a power law with an index of up to $-2.3$.
The methods and results presented in this Thesis could serve as a starting point for more realistic modeling of interactions between shocks and turbulence in plasmas from first principles.
Condensation and crystallization are omnipresent phenomena in nature. The formation of droplets or crystals on a solid surface are familiar processes which, beyond their scientific interest, are required in many technological applications. In recent years, experimental techniques have been developed which allow patterning a substrate with surface domains of molecular thickness, surface area in the mesoscopic scale, and different wettabilities (i.e., different degrees of preference for a substance that is in contact with the substrate). The existence of new patterned surfaces has led to increased theoretical efforts to understand wetting phenomena in such systems.
In this thesis, we deal with some problems related to the equilibrium of phases (e.g., liquid-vapor coexistence) and the kinetics of phase separation in the presence of chemically patterned surfaces. Two different cases are considered: (i) patterned surfaces in contact with liquid and vapor, and (ii) patterned surfaces in contact with a crystalline phase. One of the problems that we have studied is the following: It is widely believed that if air containing water vapor is cooled to its dew point, droplets of water are immediately formed. Although common experience seems to support this view, it is not correct. It is only when air is cooled well below its dew point that the phase transition occurs immediately. A vapor cooled slightly below its dew point is in a metastable state, meaning that the liquid phase is more stable than the vapor, but the formation of droplets takes time, and this time can be very long.
It was first pointed out by J. W. Gibbs that the metastability of a vapor depends on the energy necessary to form a nucleus (a droplet of a critical size). Droplets smaller than the critical size will tend to disappear, while droplets larger than the critical size will tend to grow. This is consistent with an energy barrier that has its maximum at the critical size, as is the case for droplets formed directly in the vapor or in contact with a chemically uniform planar wall. Classical nucleation theory describes the time evolution of the condensation in terms of the random process of droplet growth through this energy barrier. This process is activated by thermal fluctuations, which eventually will form a droplet of the critical size.
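For a droplet nucleating homogeneously in the vapor, this barrier takes a simple closed form; the following is the standard textbook result of classical nucleation theory, given here for orientation:

$$ \Delta G(r) = 4\pi r^2 \gamma - \frac{4}{3}\pi r^3\, n\, \Delta\mu, \qquad r^* = \frac{2\gamma}{n\,\Delta\mu}, \qquad \Delta G^* = \frac{16\pi \gamma^3}{3\,(n\,\Delta\mu)^2}, $$

where $\gamma$ is the liquid-vapor surface tension, $n$ the particle density of the liquid, and $\Delta\mu$ the chemical-potential difference between vapor and liquid. Droplets with $r < r^*$ tend to shrink, droplets with $r > r^*$ tend to grow, and the nucleation rate scales as $\exp(-\Delta G^*/k_B T)$.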
We consider nucleation of droplets from a vapor on a substrate patterned with easily wettable (lyophilic) circular domains. Under certain conditions of pressure and temperature, the condensation of a droplet on a lyophilic circular domain proceeds through a barrier with two maxima (a double barrier). We have extended classical nucleation theory to account for the kinetics of nucleation through a double barrier, and applied this extension to nucleation on lyophilic circular domains.
Genome-scale metabolic models are mathematical representations of all known reactions occurring in a cell. Combined with constraints based on physiological measurements, these models have been used to accurately predict metabolic fluxes and effects of perturbations (e.g. knock-outs) and to inform metabolic engineering strategies. Recently, protein-constrained models have been shown to increase predictive potential (especially in overflow metabolism), while alleviating the need for measurement of nutrient uptake rates. The resulting modelling frameworks quantify the upkeep cost of a certain metabolic flux as the minimum amount of enzyme required for catalysis. These improvements are based on the use of in vitro turnover numbers or in vivo apparent catalytic rates of enzymes for model parameterization. In this thesis, several tools for the estimation and refinement of these parameters based on in vivo proteomics data of Escherichia coli, Saccharomyces cerevisiae, and Chlamydomonas reinhardtii have been developed and applied. The difference between in vitro and in vivo catalytic rate measures for the three microorganisms was systematically analyzed. The results for the facultatively heterotrophic microalga C. reinhardtii considerably expanded the apparent catalytic rate estimates for photosynthetic organisms. Our general findings pointed to a global reduction of enzyme efficiency in heterotrophy compared to other growth scenarios. Independent of the modelled organism, in vivo estimates were shown to improve the accuracy of predictions of protein abundances compared to in vitro values for turnover numbers. To further improve the protein abundance predictions, machine learning models were trained that integrate features derived from protein-constrained modelling and codon usage. Combining the two types of features outperformed single-feature models and yielded good prediction results without relying on experimental transcriptomic data. The presented work reports valuable advances in the prediction of enzyme allocation in unseen scenarios using protein-constrained metabolic models. It marks the first successful application of this modelling framework in the biotechnologically important taxon of green microalgae, substantially increasing our knowledge of the enzyme catalytic landscape of phototrophic microorganisms.
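The apparent catalytic rate mentioned above is commonly estimated as the flux through a reaction divided by the abundance of its catalyzing enzyme, $k_\text{app} = v/E$. The following Python sketch with hypothetical example values illustrates this computation under simplifying assumptions; it is not the thesis' actual tooling:

```python
# Hypothetical inputs: fluxes from flux balance analysis (mmol/gDW/h)
# and enzyme abundances from quantitative proteomics (mmol/gDW).
fluxes = {"PGI": 7.6, "PFK": 8.1, "FBA": 8.0}
enzyme_abundance = {"PGI": 1.2e-4, "PFK": 4.0e-5, "FBA": 2.5e-4}

def apparent_kcat(fluxes, abundances):
    """k_app = v / E per reaction (1/h); the maximum of k_app over many
    conditions is often used as an in vivo proxy for the turnover number."""
    return {rxn: v / abundances[rxn]
            for rxn, v in fluxes.items() if abundances.get(rxn, 0) > 0}

for rxn, k in apparent_kcat(fluxes, enzyme_abundance).items():
    print(f"{rxn}: k_app = {k / 3600:.1f} s^-1")  # convert 1/h to 1/s
```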
Organizations are investing billions in innovation and agility initiatives to stay competitive in their increasingly uncertain business environments. Design Thinking, an innovation approach based on human-centered exploration, ideation, and experimentation, has gained increasing popularity. The market for Design Thinking, including software products and general services, is projected to reach 2,500 million US dollars by 2028. A dispersed set of positive outcomes has been attributed to Design Thinking. However, there is no clear understanding of what exactly comprises the impact of Design Thinking and how it is created. To support a billion-dollar market, it is essential to understand the value Design Thinking brings to organizations, not only to justify large investments but also to continuously improve the approach and its application.
Following a qualitative research approach combined with results from a systematic literature review, the results presented in this dissertation offer a structured understanding of Design Thinking impact. The results are structured along two main perspectives of impact: the individual and the organizational perspective. First, insights from qualitative data analysis demonstrate that measuring and assessing the impact of Design Thinking is currently one central challenge for Design Thinking practitioners in organizations. Second, the interview data revealed several effects Design Thinking has on individuals, demonstrating how Design Thinking can impact boundary management behaviors and enable employees to craft their jobs more actively.
Contributing to innovation management research, the work presented in this dissertation systematically explains Design Thinking impact, allowing other researchers to both locate and integrate their work better. The results of this research advance the theoretical rigor of Design Thinking impact research, offering multiple theoretical underpinnings that explain the variety of Design Thinking impact. Furthermore, this dissertation contains three specific propositions on how Design Thinking creates an impact: through Integration, Enablement, and Engagement. Integration refers to how Design Thinking enables organizations by effectively combining things, for example by fostering a balance between exploitation and exploration activities. Through Engagement, Design Thinking impacts organizations by involving users and other relevant stakeholders in their work. Finally, Design Thinking creates impact through Enablement, making it possible for individuals to enact a specific behavior or experience certain states.
By synthesizing multiple theoretical streams into these three overarching themes, the results of this research can help bridge disciplinary boundaries, for example between business, psychology and design, and enhance future collaborative research. Practitioners benefit from the results as multiple desirable outcomes are detailed in this thesis, such as successful individual job crafting behaviors, which can be expected from practicing Design Thinking. This allows practitioners to enact more evidence-based decision-making concerning Design Thinking implementation. Overall, considering multiple levels of impact as well as a broad range of theoretical underpinnings are paramount to understanding and fostering Design Thinking impact.
Plate tectonic boundaries constitute the suture zones between tectonic plates. They are shaped by a variety of distinct and interrelated processes and play a key role in geohazards and georesource formation. Many of these processes have been previously studied, while many others remain unaddressed or undiscovered. In this work, the geodynamic numerical modeling software ASPECT is applied to shed light on further process interactions at continental plate boundaries. In contrast to natural data, geodynamic modeling has the advantage that processes can be directly quantified and that all parameters can be analyzed over the entire evolution of a structure. Furthermore, processes and interactions can be singled out from complex settings because the modeler has full control over all of the parameters involved. To account for the simplifying character of models in general, I have chosen to study generic geological settings with a focus on the processes and interactions rather than precisely reconstructing a specific region of the Earth.
In Chapter 2, 2D models of continental rifts with different crustal thicknesses between 20 and 50 km and extension velocities in the range of 0.5-10 mm/yr are used to obtain a speed limit for the thermal steady-state assumption, commonly employed to address the temperature fields of continental rifts worldwide. Because the tectonic deformation from ongoing rifting outpaces heat conduction, the temperature field is not in equilibrium, but is characterized by a transient, tectonically induced heat flow signal. As a result, I find that isotherm depths of the geodynamic evolution models are shallower than a temperature distribution in equilibrium would suggest. This is particularly important for deep isotherms and narrow rifts. In narrow rifts, the magnitude of the transient temperature signal limits a well-founded applicability of the thermal steady-state assumption to extension velocities of 0.5-2 mm/yr. The estimation of the crustal temperature field affects conclusions on all temperature-dependent processes, ranging from mineral assemblages to the feasible exploitation of a geothermal reservoir.
In Chapter 3, I model the interactions of different rheologies with the kinematics of folding and faulting using the example of fault-propagation folds in the Andean fold-and-thrust belt. The evolution of the velocity fields from geodynamic models is compared with those from trishear models of the same structure. While the latter use only geometric and kinematic constraints of the main fault, the geodynamic models capture viscous, plastic, and elastic deformation in the entire model domain. I find that both models work equally well for early, and thus relatively simple, stages of folding and faulting, while results differ for more complex situations where off-fault deformation and secondary faulting are present. As fault-propagation folds can play an important role in the formation of reservoirs, knowledge of fluid pathways, for example via fractures and faults, is crucial for their characterization.
Chapter 4 deals with a bending transform fault and the interconnections between tectonics and surface processes. In particular, the tectonic evolution of the Dead Sea Fault is addressed where a releasing bend forms the Dead Sea pull-apart basin, while a restraining bend further to the North resulted in the formation of the Lebanese mountains. I ran 3D coupled geodynamic and surface evolution models that included both types of bends in a single setup. I tested various randomized initial strain distributions, showing that basin asymmetry is a consequence of strain localization. Furthermore, by varying the surface process efficiency, I find that the deposition of sediment in the pull-apart basin not only controls basin depth, but also results in a crustal flow component that increases uplift at the restraining bend.
Finally, in Chapter 5, I present the computational basis for adding further complexity to plate boundary models in ASPECT through the implementation of earthquake-like behavior using the rate-and-state friction framework. Although earthquakes happen on relatively short time scales, there are many interactions between the seismic cycle and geodynamic processes operating over much longer time spans. Among other factors, the crustal state of stress, the presence of fluids, and changes in temperature may alter the frictional behavior of a fault segment. My work provides the basis for a realistic setup of the structures and processes involved, which is important for obtaining meaningful estimates of earthquake hazard.
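The rate-and-state friction framework referred to here is a standard formulation in earthquake mechanics; in its common form with the ageing law for the state variable $\theta$, the friction coefficient evolves as

$$ \mu(V, \theta) = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c}, \qquad \dot{\theta} = 1 - \frac{V\,\theta}{D_c}, $$

where $V$ is the slip velocity, $\mu_0$ and $V_0$ are reference values, and $D_c$ is a characteristic slip distance. The sign of $a - b$ determines whether a fault segment is velocity-strengthening (stable creep) or velocity-weakening (capable of stick-slip, i.e., earthquake-like behavior), which is why stress, fluids, and temperature can alter a segment's seismic character.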
While these findings improve our understanding of continental plate boundaries, further development of geodynamic software may help to reveal even more processes and interactions in the future.
Mantodea, commonly known as mantids, have captivated researchers owing to their enigmatic behavior and ecological significance. This order comprises a diverse array of predatory insects, boasting over 2,400 species globally and inhabiting a wide spectrum of ecosystems. In Iran, the mantid fauna displays remarkable diversity, yet numerous facets of this fauna remain poorly understood, with a significant dearth of systematic and ecological research. This substantial knowledge gap underscores the pressing need for a comprehensive study to advance our understanding of Mantodea in Iran and its neighboring regions.
The principal objective of this investigation was to delve into the ecology and phylogeny of Mantodea within these areas. To accomplish this, our research efforts concentrated on three distinct genera within Iranian Mantodea. These genera were selected due to their limited existing knowledge base and feasibility for in-depth study. Our comprehensive methodology encompassed a multifaceted approach, integrating morphological analysis, molecular techniques, and ecological observations.
Our research encompassed a comprehensive revision of the genus Holaptilon, resulting in the description of four previously unknown species. This extensive effort substantially advanced our understanding of the ecological roles played by Holaptilon and refined its systematic classification. Furthermore, our investigation into Nilomantis floweri expanded its known distribution range to include Iran. By conducting thorough biological assessments, genetic analyses, and ecological niche modeling, we obtained invaluable insights into distribution patterns and genetic diversity within this species. Additionally, our research provided a thorough comprehension of the life cycle, behaviors, and ecological niche modeling of Blepharopsis mendica, shedding new light on the distinctive characteristics of this mantid species. Moreover, we contributed essential knowledge about parasitoids that infect mantid ootheca, laying the foundation for future studies aimed at uncovering the intricate mechanisms governing ecological and evolutionary interactions between parasitoids and Mantodea.
Virtual Reality (VR) leads to the highest level of immersion if presented using a 1:1 mapping of virtual space to physical space—also known as real walking. The advent of inexpensive consumer VR headsets, all capable of running inside-out position tracking, has brought VR to the home. However, many VR applications do not feature full real walking, but instead feature a less immersive space-saving technique known as instant teleportation. Given that only 0.3% of home users run their VR experiences in spaces of more than 4 m², the most likely explanation is the lack of the physical space required for meaningful use of real walking. In this thesis, we investigate how to overcome this hurdle. We demonstrate how to run 1:1-mapped VR experiences in small physical spaces, and we explore the trade-off between space and immersion. (1) We start with a space limit of 15 cm. We present DualPanto, a device that allows (blind) VR users to experience the virtual world from a 1:1-mapped bird's-eye perspective—by leveraging haptics. (2) We then relax our space constraints to 50 cm, which is what seated users (e.g., on an airplane or train ride) have at their disposal. We leverage the space to represent a standing user in 1:1 mapping, while only compressing the user's arm movement. We demonstrate our prototype VirtualArms using the example of VR experiences limited to arm movement, such as boxing. (3) Finally, we relax our space constraints further to 3 m² of walkable space, which is what 75% of home users have access to. As well established in the literature, we implement real walking with the help of portals, also known as "impossible spaces". While impossible spaces under such dramatic space constraints tend to degenerate into incomprehensible mazes (as demonstrated, for example, by "TraVRsal"), we propose plausibleSpaces: presenting meaningful virtual worlds by adapting various visual elements to impossible spaces. Our techniques push the boundary of spatially meaningful VR interaction in various small spaces. We see further future challenges for new design approaches to immersive VR experiences for the smallest physical spaces in our daily life.
HPI Future SOC Lab
(2024)
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components which might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings address researchers particularly, but not exclusively, from the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and In-Memory technologies.
This technical report presents the results of research projects executed in 2020. Selected projects presented their results on April 21 and November 10, 2020, at the Future SOC Lab Day events.
Floods continue to be the leading cause of economic damages and fatalities among natural disasters worldwide. As future climate and exposure changes are projected to intensify these damages, the need for more accurate and scalable flood risk models is rising. Over the past decade, macro-scale flood risk models have evolved from initial proof-of-concepts to indispensable tools for decision-making at the global, national, and, increasingly, the local level. This progress has been propelled by the advent of high-performance computing and the availability of global, space-based datasets. However, despite such advancements, these models are rarely validated and consistently fall short of the accuracy achieved by high-resolution local models. While capabilities have improved, significant gaps persist in understanding the behaviours of such macro-scale models, particularly their tendency to overestimate risk. This dissertation aims to address such gaps by examining the scale transfers inherent in the construction and application of coarse macro-scale models. To achieve this, four studies are presented that, collectively, address the exposure, hazard, and vulnerability components of risk affected by upscaling or downscaling.
The first study focuses on a type of downscaling where coarse flood hazard inundation grids are enhanced to a finer resolution. While such inundation downscaling has been employed in numerous global model chains, ours is the first study to focus specifically on this component, providing an evaluation of the state of the art and a novel algorithm. Findings demonstrate that our novel algorithm is eight times faster than existing methods, offers a slight improvement in accuracy, and generates more physically coherent flood maps in hydraulically challenging regions. When applied to a case study, the algorithm generated a 4 m resolution inundation map from 30 m hydrodynamic model outputs in 33 s, a 60-fold improvement in runtime with a 25% increase in RMSE compared with direct hydrodynamic modelling. All evaluated downscaling algorithms yielded better accuracy than the coarse hydrodynamic model when compared to observations, demonstrating similar limits of coarse hydrodynamic models reported by others. The substitution of downscaling into flood risk model chains, in place of high-resolution modelling, can drastically improve the lead time of impact-based forecasts and the efficiency of hazard map production. With downscaling, local regions could obtain high-resolution local inundation maps by post-processing a global model without the need for expensive modelling or expertise.
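The downscaling task can be illustrated with a common baseline scheme, sketched below under simplifying assumptions (this is a generic illustration, not the novel algorithm proposed in the study): resample the coarse water-surface-elevation (WSE) grid onto the fine DEM and retain water only where the resampled surface lies above the terrain.

```python
import numpy as np

def downscale_inundation(coarse_wse, dem, scale):
    """Naive inundation downscaling: broadcast each coarse WSE cell over its
    scale x scale block of fine DEM cells, then depth = WSE - terrain, clipped at 0.
    coarse_wse: 2D array (NaN where dry); dem: 2D fine-resolution terrain array."""
    fine_wse = np.kron(coarse_wse, np.ones((scale, scale)))  # nearest-neighbour resample
    fine_wse = fine_wse[:dem.shape[0], :dem.shape[1]]        # crop to the DEM extent
    depth = fine_wse - dem
    depth[~np.isfinite(depth) | (depth < 0)] = 0.0           # dry where WSE is below terrain
    return depth

# Toy example: one wet and one dry 30 m cell downscaled onto an 8x finer grid.
coarse = np.array([[101.0, np.nan]])
dem = np.tile(np.linspace(99.5, 102.0, 16), (8, 1))
print(downscale_inundation(coarse, dem, 8).round(2))
```

More sophisticated variants smooth the resampled water surface and enforce hydraulic connectivity, which is where the accuracy and coherence gains reported above come from.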
The second study focuses on hazard aggregation and its implications for exposure, investigating implicit aggregations commonly used to intersect hazard grids with coarse exposure models. This research introduces a novel spatial classification framework to understand the effects of rescaling flood hazard grids to a coarser resolution. The study derives closed-form analytical solutions for the location and direction of bias from flood grid aggregation, showing that bias will always be present in regions near the edge of inundation. For example, inundation area will be positively biased when water depth grids are aggregated, while volume will be negatively biased when water elevation grids are aggregated. Extending the analysis to the effects of hazard aggregation on building exposure, this study shows that exposure in regions at the edge of inundation is an order of magnitude more sensitive to aggregation errors than hazard alone. Among the two aggregation routines considered, averaging water surface elevation grids better preserved flood depths at buildings than averaging of water depth grids. The study provides the first mathematical proof and generalizable treatment of flood hazard grid aggregation, demonstrating important mechanisms to help flood risk modellers understand and control model behaviour.
The final two studies focus on the aggregation of vulnerability models, or flood damage functions, investigating the practice of applying per-asset functions to aggregated exposure models. Both studies extend Jensen's inequality, a well-known mathematical result from 1906, to demonstrate how the aggregation of flood damage functions leads to bias. Applying Jensen's proof in this new context, the results show that typically concave flood damage functions will introduce a positive bias (overestimation) when aggregated. This behaviour was further investigated with a simulation experiment including 2 million buildings in Germany, four global flood hazard simulations, and three aggregation scenarios. The results show that positive aggregation bias is not distributed evenly in space, meaning some regions identified as "hot spots of risk" in assessments may in fact just be hot spots of aggregation bias. This study provides the first application of Jensen's inequality to explain the overestimates reported elsewhere, along with advice for modellers on minimizing such artifacts.
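In symbols, for a concave damage function $f$ and per-building water depths $x_1, \ldots, x_n$, Jensen's inequality gives

$$ f\!\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) \;\geq\; \frac{1}{n}\sum_{i=1}^{n} f(x_i), $$

so evaluating the damage function at the aggregated (mean) depth, as coarse exposure models implicitly do, can only overstate the mean of the per-asset damages; equality holds only when $f$ is linear over the observed depth range or all depths coincide.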
In total, this dissertation investigates the complex ways aggregation and disaggregation influence the behaviour of risk models, focusing on the scale transfers underpinning macro-scale flood risk assessments. Extending a key finding of the flood hazard literature to the broader context of flood risk, this dissertation concludes that, all else being equal, coarse models overestimate risk. This dissertation goes beyond previous studies by providing mathematical proofs for how and where such bias emerges in aggregation routines, offering a mechanistic explanation for coarse model overestimates. It shows that this bias is spatially heterogeneous, necessitating a deep understanding of how rescaling may bias models in order to effectively reduce or communicate uncertainties. Further, the dissertation offers specific recommendations to help modellers minimize scale transfers in problematic regions. In conclusion, I argue that such aggregation errors are epistemic, stemming from choices in model structure, and therefore hold greater potential and impetus for study and mitigation. This deeper understanding of uncertainties is essential for improving macro-scale flood risk models and their effectiveness in equitable, holistic, and sustainable flood management.
State space models enjoy wide popularity in mathematical and statistical modelling across disciplines and research fields. Standard solutions to the estimation and forecasting of a latent signal, such as the celebrated Kalman filter, rely on a set of strong assumptions, such as linearity of the system dynamics and Gaussianity of the noise terms.
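For reference, a minimal sketch of the standard linear-Gaussian Kalman filter that such modifications build on (textbook notation, assumed here; this is not the thesis's modified filter):

```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """One predict-update cycle of the standard Kalman filter.

    m, P : posterior mean/covariance of the previous state
    y    : new observation
    A, Q : linear system dynamics and signal-noise covariance
    H, R : linear observation model and observation-noise covariance
    """
    # predict
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new

# 1-D constant-velocity toy: state = [position, velocity], observe position
A = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
m, P = np.zeros(2), np.eye(2)
for y in [np.array([1.1]), np.array([2.0]), np.array([2.8])]:
    m, P = kalman_step(m, P, y, A, Q, H, R)
print(m)  # estimated position and velocity
```

A single heavy-tailed outlier in y can pull this estimate far off track, which is the failure mode the robust modifications address.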
We investigate fallacies arising from mis-specification of the noise terms, that is, signal noise and observation noise, with regard to heavy-tailedness: the true dynamics frequently produce observation outliers or abrupt jumps of the signal state due to realizations of heavy tails not considered by the model. We propose a formalisation of observation-noise mis-specification in terms of Huber's ε-contamination, as well as a computationally cheap solution via generalised Bayesian posteriors with a diffusion Stein divergence loss, resulting in the diffusion score matching Kalman filter, a modified algorithm akin in complexity to the regular Kalman filter. For this new filter, interpretations of the novel terms, stability, and an ensemble variant are discussed. Regarding signal-noise mis-specification, we propose a formalisation in the framework of change point detection and join ideas from the popular CUSUM algorithm with ideas from Bayesian online change point detection to combine frequentist reliability constraints and online inference, resulting in a Gaussian mixture model variant of multiple Kalman filters. We hereby exploit open-end sequential probability ratio tests on the evidence of Kalman filters on observation sub-sequences for aggregated inference under notions of plausibility.
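For orientation, a classical one-sided CUSUM detector for an upward mean shift can be sketched as follows (parameters and data invented; the thesis's mixture construction is considerably more involved):

```python
def cusum(xs, mu0, drift, threshold):
    """Classical one-sided CUSUM: flag the first index at which the
    cumulative positive deviation from the pre-change mean mu0
    (less an allowance `drift`) exceeds `threshold`."""
    s = 0.0
    for t, x in enumerate(xs):
        s = max(0.0, s + (x - mu0) - drift)
        if s > threshold:
            return t          # change point alarm
    return None               # no change detected

# alarm fires at index 5, after the mean has shifted upwards
print(cusum([0.1, -0.2, 0.0, 1.4, 1.2, 1.6], mu0=0.0, drift=0.5, threshold=2.0))
```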
Both proposed methods are combined to investigate the double mis-specification problem and are discussed with regard to their capabilities for reliable and well-tuned uncertainty quantification. Each section provides an introduction to the required terminology and tools, as well as simulation experiments on the popular target tracking task and the non-linear, chaotic Lorenz-63 system, to showcase the practical performance of the theoretical considerations.
In this work, a reactive barrier was developed at a small laboratory scale (length = 40 cm) that was designed to remove iron and sulfate loads from acid mine drainage (AMD) with an efficiency of up to 30.2 % and 24.2 %, respectively, over a period of 146 days (50 pore volumes). A mixture of garden compost, beech wood, coconut shell, and calcium carbonate was used as the reactive material. The influent conditions were an iron concentration of 1000 mg/L, a sulfate concentration of 3000 mg/L, and a pH of 6.2.
Differences in the material composition produced no major changes in the remediation efficiency for iron and sulfate loads (12.0–15.4 % and 7.0–10.1 %, respectively) over an investigation period of 108 days (41–57 pore volumes). The most important factor influencing the removal of sulfate and iron loads was the residence time of the AMD solution in the reactive material. This residence time can be increased by reducing the flow rate or by increasing the length of the permeable reactive barrier (PRB). Halving the flow rate increased the remediation efficiencies for iron and sulfate to 23.4 % and 32.7 %, respectively. Furthermore, the remediation efficiency for iron loads rose to 24.2 % when the sulfate influent concentration was increased to 6000 mg/L. Acidic initial conditions (pH = 2.2) could be neutralized by the calcium carbonate in the reactive material over a period of 47 days (24 pore volumes). Neutralizing these acidic initial conditions consumed calcium carbonate in the PRB and released calcium ions, which increased the sulfate remediation efficiency (24.9 %). After enlarging the PRB in width and depth and determining its parameters in 2D, boundary flow effects could be observed; without their influence, the remediation efficiency for iron and sulfate loads increases (30.2 % and 24.2 %, respectively).
For in-situ monitoring of the PRB, optical sensors were used to determine the pH, the oxygen concentration, and the temperature. Stable oxygen concentrations and pH profiles were detected, resolved in space and time. The temperature could also be determined with spatial resolution. This work thus showed that optical sensors can be used to monitor the stability of a PRB for the treatment of AMD.
With the simulation program MIN3P, a simulation representing the developed PRB was set up. The simulation reproduces the laboratory results well. Subsequently, a simulated PRB was investigated at different filter velocities ((4.0–23.5) · 10⁻⁷ m/s) and PRB lengths (25–400 cm). Relationships between the investigated parameters and the remediation efficiency for iron and sulfate loads were determined. These relationships can be used to calculate the residence time of the AMD solution in a future PRB system that is required for the maximum possible remediation performance.
Proceedings of TripleA 10
(2024)
The TripleA workshop series was founded in 2014 by linguists from Potsdam and Tübingen with the aim of providing a platform for researchers who conduct theoretically informed linguistic fieldwork on meaning. Its focus is particularly on languages that are under-represented in the current research landscape, including but not limited to languages of Africa, Asia, and Australia, hence TripleA.
For its 10th anniversary, TripleA returned to the University of Potsdam on 7–9 June 2023.
The programme included 21 talks dealing with no fewer than 22 different languages, including three invited talks given by Sihwei Chen (Academia Sinica), Jérémy Pasquereau (Laboratoire de Linguistique de Nantes, CNRS) and Agata Renans (Ruhr-Universität Bochum). Nine of these (invited or peer-reviewed) talks are featured in this volume.
To convey an understanding of computational processes early on in school, the new computer science subject Digitale Welt (Digital World) was designed for grade 5, combining computer science with application-oriented and societally relevant links to ecology and economics, a combination unique in Germany. This technical report provides guidance for the introduction of the new subject.
The global drylands cover nearly half of the terrestrial surface and are home to more than two billion people. In many drylands, ongoing land-use change transforms near-natural savanna vegetation into agricultural land to increase food production. In Southern Africa, these heterogeneous savanna ecosystems are also recognized as habitats of many protected animal species, such as elephants, lions and large herds of diverse herbivores, which are of great value for the tourism industry. Here, subsistence farmers and livestock herder communities often live in close proximity to nature conservation areas. Although these land-use transformations differ in the futures they aspire to, both processes, nature conservation with large herbivores and agricultural intensification, have in common that they change the vegetation structure of savanna ecosystems, usually leading to the destruction of trees, shrubs and the woody biomass they consist of.
Such changes in woody vegetation cover and biomass are often regarded as forms of land degradation and forest loss. Global forest conservation approaches and international programs aim to stop degradation processes, also to conserve the carbon bound within wood from volatilization into earth's atmosphere. In the search for options to mitigate global climate change, savannas are increasingly discussed as potential carbon sinks. Savannas, however, are not forests: they are naturally shaped by and adapted to disturbances, such as wildfires and herbivory. Unlike in forests, disturbances are necessary for stable, functioning savanna ecosystems and prevent these ecosystems from forming closed forest stands. The consequently lower levels of carbon storage in woody vegetation have long been the reason for savannas to be overlooked as a potential carbon sink, but recently the question was raised whether carbon sequestration programs (such as REDD+) could also be applied to savanna ecosystems. However, heterogeneous vegetation structure and chronic disturbances hamper the quantification of carbon stocks in savannas, and current procedures of carbon storage estimation entail high uncertainties due to methodological obstacles. It is therefore challenging to assess how future land-use changes such as agricultural intensification or increasing wildlife densities will impact the carbon storage balance of African drylands.
In this thesis, I address the research gap of accurately quantifying carbon storage in vegetation and soils of disturbance-prone savanna ecosystems. I further analyse relevant drivers for both ecosystem compartments and their implications for future carbon storage under land-use change. Moreover, I show that different carbon storage pools in savannas vary in their persistence under disturbance, such that carbon bound in shrub vegetation is most likely to experience severe losses under land-use change, while soil organic carbon stored in subsoils is least likely to be impacted by land-use change in the future.
I start by summarizing conventional approaches to carbon storage assessment and where, and for which reasons, they fail to accurately estimate savanna ecosystem carbon storage. Furthermore, I outline which future-making processes drive land-use change in Southern Africa along two pathways of land-use transformation and how these are likely to influence carbon storage. In the following chapters, I propose a new method of carbon storage estimation adapted to the specific conditions of disturbance-prone ecosystems and demonstrate the advantages of this approach relative to existing forestry methods. Specifically, I highlight sources of previous over- and underestimation of savanna carbon stocks which the proposed methodology resolves. I then apply the new method to analyse impacts of land-use change on carbon storage in woody vegetation in conjunction with the soil compartment. With this interdisciplinary approach, I demonstrate that both agricultural intensification and nature conservation with large herbivores indeed reduce woody carbon storage above- and belowground, but that this carbon is partly sequestered into the soil organic carbon stock. I then quantify whole-ecosystem carbon storage in different ecosystem compartments (above- and belowground woody carbon in shrubs and trees, as well as topsoil and subsoil organic carbon) of two savanna vegetation types (scrub savanna and savanna woodland). Moreover, in a space-for-time substitution, I analyse how land-use changes impact carbon storage in each compartment and in the whole ecosystem. Carbon storage compartments are found to differ in their persistence under land-use change, with carbon bound in shrub biomass being least persistent to future changes and subsoil organic carbon being most stable under changing land use. I then explore which individual land-use change effects act as drivers of carbon storage using Generalized Additive Models (GAMs) and uncover non-linear effects, especially of elephant browsing, with implications for future carbon storage. In the last chapter, I discuss my findings in the larger context of this thesis and derive relevant implications for land-use change and future-making decisions in rural Africa.
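A GAM analysis of this kind might be set up as sketched below, using the pygam library with synthetic data and hypothetical predictors (the thesis's actual model specification is not reproduced here):

```python
import numpy as np
from pygam import LinearGAM, s

# Hypothetical predictors per plot: elephant browsing intensity and
# livestock density; response: woody carbon storage (t/ha).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = 30 * np.exp(-3 * X[:, 0]) + 5 * (1 - X[:, 1]) + rng.normal(0, 1, 200)

# One smooth term per predictor lets the model capture non-linear effects,
# e.g. a steep carbon decline at low browsing intensity that levels off.
gam = LinearGAM(s(0) + s(1)).fit(X, y)
gam.summary()
```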
Laser-induced switching offers an attractive possibility to manipulate small magnetic domains for prospective memory and logic devices on ultrashort time scales. Moreover, optical control of magnetization without high applied magnetic fields allows magnetic domains to be manipulated individually and locally, without expensive heat dissipation. One of the major challenges in developing novel optically controlled magnetic memory and logic devices is the reliable formation and annihilation of non-volatile magnetic domains that can serve as memory bits under ambient conditions. Magnetic skyrmions, topologically non-trivial spin textures, have been studied intensively since their discovery due to their stability and scalability in potential spintronic devices. However, skyrmion formation and, especially, annihilation processes are still not completely understood, and further investigation of such mechanisms is needed. The aim of this thesis is to contribute to a better understanding of the physical processes behind the optical control of magnetism in thin films, with the goal of optimizing material parameters and methods for their potential use in next-generation memory and logic devices.
The first part of the thesis is dedicated to the investigation of all-optical helicity-dependent switching (AO-HDS) as a method for magnetization manipulation. AO-HDS in Co/Pt multilayers and CoFeB alloys, with and without the presence of the Dzyaloshinskii-Moriya interaction (DMI), a type of exchange interaction, has been investigated by magnetic imaging using photo-emission electron microscopy (PEEM) in combination with X-ray magnetic circular dichroism (XMCD). The results show that, in a narrow range of the laser fluence, circularly polarized laser light induces a drag on domain walls. This enables a local, deterministic transformation of the magnetic domain pattern from stripes to bubbles in out-of-plane magnetized Co/Pt multilayers, controlled only by the helicity of ultrashort laser pulses. The temperature and characteristic fields at which the stripe-bubble transformation occurs have been calculated using the theory of isolated magnetic bubbles, with the experimentally determined average stripe-domain size and the magnetic layer thickness as parameters.
The second part of the work aims at the purely optical formation and annihilation of magnetic skyrmions by a single laser pulse. The presence of a skyrmion phase in the investigated CoFeB alloys was first confirmed using a Kerr microscope. The helicity-dependent skyrmion manipulation was then studied using AO-HDS at different laser fluences. It was found that the formation or annihilation of individual skyrmions using AO-HDS is possible, but not always reliable, as fluctuations in the laser fluence or position can easily overwrite the helicity-dependent effect of AO-HDS. However, the experimental results and magnetic simulations showed that the threshold values of the laser fluence for the formation and annihilation of skyrmions differ. A higher fluence is required for skyrmion formation, and existing skyrmions can be annihilated by pulses with a slightly lower fluence. This provides a further option for controlling the formation and annihilation of skyrmions via the laser fluence. Micromagnetic simulations provide additional insights into the formation and annihilation mechanisms.
The ability to manipulate the magnetic state of individual skyrmions is of fundamental importance for magnetic data storage technologies. Our results show for the first time that the optical formation and annihilation of skyrmions is possible without changing the external field. These results enable further investigations to optimise the magnetic layer so as to maximise the energy gap between the formation and annihilation barriers. As a result, unwanted switching due to small laser fluctuations can be avoided and fully deterministic optical switching can be achieved.
Hardy inequalities on graphs
(2024)
The dissertation deals with a central inequality of non-linear potential theory, the Hardy inequality. It states that the non-linear energy functional can be estimated from below by the pth power of a weighted p-norm, p > 1. The energy functional consists of a divergence part and an arbitrary potential part. Locally summable infinite graphs were chosen as the underlying space. Previous publications on Hardy inequalities on graphs have mainly considered the special case p = 2, or locally finite graphs without a potential part.
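Schematically, with edge weights b, a potential c, and a Hardy weight w (generic notation assumed here for illustration, up to normalisation constants), the inequality reads, for all finitely supported functions f:

```latex
\sum_{x,y} b(x,y)\,\lvert f(x)-f(y)\rvert^{p}
\;+\; \sum_{x} c(x)\,\lvert f(x)\rvert^{p}
\;\;\ge\;\; \sum_{x} w(x)\,\lvert f(x)\rvert^{p},
\qquad p > 1.
```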
Two fundamental questions now arise quite naturally: For which graphs is there a Hardy inequality at all? And, if it exists, is there a way to obtain an optimal weight? Answers to these questions are given in Theorem 10.1 and Theorem 12.1. Theorem 10.1 gives a number of characterizations; among others, there is a Hardy inequality on a graph if and only if there is a Green's function. Theorem 12.1 gives an explicit formula to compute optimal Hardy weights for locally finite graphs under some additional technical assumptions. Examples show that Green's functions are good candidates to be used in the formula.
Emphasis is also placed on illustrating the theory with examples. The focus is on natural numbers, Euclidean lattices, trees and star graphs. Finally, a non-linear version of the Heisenberg uncertainty principle and a Rellich inequality are derived from the Hardy inequality.
Resolving the evolutionary history of two hippotragin antelopes using archival and ancient DNA
(2024)
African antelopes are iconic but surprisingly understudied in terms of their genetics, especially when it comes to their evolutionary history and genetic diversity. The age of genomics provides an opportunity to investigate evolution using whole nuclear genomes. Decreasing sequencing costs enable the recovery of multiple loci per genome, giving more power to single-specimen analyses and providing higher-resolution insights into species and populations that can help guide conservation efforts. This age of genomics has only recently begun for African antelopes. Many African bovids have a declining population trend and hence are often endangered. Consequently, contemporary samples from the wild are often hard to collect. In these cases, ex situ samples from contemporary captive populations, or in the form of archival or ancient DNA (aDNA) from historical museum or archaeological/paleontological specimens, present a great research opportunity, with the latter two even offering a window onto the past. However, the recovery of aDNA is still considered challenging in regions with prevailing climatic conditions deemed adverse for DNA preservation, such as the African continent. This raises the question of whether DNA recovery from fossils as old as the early Holocene is possible in these regions.
This thesis focuses on investigating the evolutionary history and genetic diversity of two species: the addax (Addax nasomaculatus) and the blue antelope (Hippotragus leucophaeus). The addax is critically endangered and might even already be extinct in the wild, while the blue antelope became extinct ~1800 AD, becoming the first extinct large African mammal species in historical times. Together, the addax and the blue antelope can inform us about current and past extinction events and the knowledge gained can help guide conservation efforts of threatened species. The three studies used ex situ samples and present the first nuclear whole genome data for both species. The addax study used historical museum specimens and a contemporary sample from a captive population. The two studies on the blue antelope used mainly historical museum specimens but also fossils, and resulted in the recovery of the oldest paleogenome from Africa at that time.
The aim of the first study was to assess the genetic diversity and the evolutionary history of the addax. It found that the historical wild addax population showed only limited phylogeographic structuring, indicating that the addax formed a highly mobile and panmictic population, and suggesting that the current European captive population might be missing the majority of the historical mitochondrial diversity. It also found the nuclear and mitochondrial diversity of the addax to be rather low compared to other wild ungulate species. Suggestions on how best to preserve the remaining genetic diversity are presented. The European zoo population was shown to exhibit no or only minor levels of inbreeding, indicating good prospects for the restoration of the species in the wild. The trajectory of the addax's effective population size indicated a major bottleneck in the late Pleistocene and a low effective population size well before recent human impact led to the species being critically endangered today.
The second study set out to investigate the identities of historical blue antelope specimens using aDNA techniques. Results showed that six out of ten investigated specimens were misidentified, demonstrating the blue antelope to be one of the scarcest mammal species in historical natural history collections, with almost no bone reference material. The preliminary analysis of the mitochondrial genomes suggested a low diversity and hence low population size at the time of the European colonization of southern Africa.
Study three presents the results of the analyses of two blue antelope nuclear genomes, one ~200 years old and another dating to the early Holocene, 9,800–9,300 cal years BP. A fossil-calibrated phylogeny dated the divergence time of the three historically extant Hippotragus species to ~2.86 Ma and demonstrated the blue and the sable antelope (H. niger) to be sister species. In addition, ancient gene flow from the roan (H. equinus) into the blue antelope was detected. A comparison with the roan and the sable antelope indicated that the blue antelope had a much lower nuclear diversity, suggesting a low population size since at least the early Holocene. This concurs with findings from the fossil record that show a considerable decline in abundance after the Pleistocene–Holocene transition. Moreover, it suggests that the blue antelope persisted throughout the Holocene regardless of a low population size, indicating that human impact in the colonial era was a major factor in the blue antelope’s extinction.
This thesis uses aDNA analyses to provide deeper insights into the evolutionary history and genetic diversity of the addax and the blue antelope. Human impact was likely the main driver of the blue antelope's extinction and is likely the main factor threatening the addax today. This thesis demonstrates the value of ex situ samples for science and conservation, and suggests including genetic data in conservation assessments of species. It further demonstrates the beneficial use of aDNA for the taxonomic identification of historically important specimens in natural history collections. Finally, the successful retrieval of a paleogenome from the early Holocene of Africa using shotgun sequencing shows that DNA retrieval from samples of that age is possible in regions generally deemed unfavorable for DNA preservation, opening up new research opportunities. All three studies enhance our knowledge of African antelopes, contributing to the general understanding of African large mammal evolution and to the conservation of these and similarly threatened species.
Background: Societies worldwide have become more diverse yet continue to be inequitable. Understanding how youth growing up in these societies are socialized and consequently develop racial knowledge has important implications not only for their well-being but also for building more just societies. Importantly, there is a lack of research on these topics in Germany and Europe in general.
Aim and Method: The overarching aim of the dissertation is to investigate 1) where and how ethnic-racial socialization (ERS) happens in inequitable societies and 2) how it relates to youth's development of racial knowledge, which comprises racial beliefs (e.g., prejudice, attitudes), behaviors (e.g., actions preserving or disrupting inequities), and identities (e.g., inclusive, cultural). Guided by developmental, cultural, and ecological theories of socialization and development, I first explored how family, as a crucial socialization context, contributes to the preservation or disruption of racism and xenophobia in inequitable societies through its influence on children's racial beliefs and behaviors. I conducted a literature review and developed a conceptual model bridging research on ethnic-racial socialization and intergroup relations (Study 1). After documenting the lack of research on socialization and the development of racial knowledge within and beyond family contexts outside of the U.S., I conducted a qualitative study to explore ERS in Germany through the lens of racially marginalized youth (Study 2). Then, I conducted two quantitative studies to explore the separate and interacting relations of multiple (i.e., family, school) socialization contexts for the development of racial beliefs and behaviors (Study 3) and identities (Studies 3, 4) in Germany. Participants of Study 2 were 26 young adults (aged between 19 and 32) of Turkish, Kurdish, East, and Southeast Asian heritage living across different cities in Germany. Study 3 was conducted with 503 eighth graders of immigrant and non-immigrant descent (Mage = 13.67) in Berlin; Study 4 included 311 early to mid-adolescents of immigrant descent (Mage = 13.85) with diverse cultural backgrounds in North Rhine-Westphalia.
Results and Conclusion: The findings revealed that the privileged or marginalized positions of families in society, in relation to their ethnic-racial and religious background, entail differential experiences and are thus an important determining factor for the content and process of socialization and the development of youth's racial knowledge. Until recently, ERS research mostly focused on investigating how racially marginalized families have been sources of support for their children in resisting racism and how racially privileged families contribute to the transmission of information upholding racism (Study 1). ERS for racially marginalized youth in Germany centered on heritage culture, discrimination, and resistance strategies to racism, yet the resistance strategies transmitted to youth mostly helped them survive racism (e.g., by working hard), thereby upholding it, rather than liberating them from racism by disrupting it (e.g., through self-advocacy, Study 2). Furthermore, when families and schools foster heritage and intercultural learning, both contexts may separately promote stronger identification with heritage culture and German identities, and more prosocial intentions towards disadvantaged groups (i.e., refugees) among youth (Studies 3, 4). However, equal treatment in the school context led to mixed results: equal treatment was either unrelated to inclusive identity, or positively related to German and negatively related to heritage culture identities (Studies 3, 4). Additionally, youth receiving messages highlighting strained and preferential intergroup relations at home while attending schools promoting assimilation may develop a stronger heritage culture identity (Study 4). In conclusion, ERS happened across various social contexts (i.e., family, community centers, school, neighborhood, peers). ERS promoting heritage and intercultural learning in at least one social context (family or school) might foster youth's racial knowledge, manifesting in a stronger sense of belonging to multiple cultures and in prosocial intentions toward disadvantaged groups. However, there is a need for ERS that raises awareness of discrimination across youth's social contexts and teaches youth resistance strategies for liberation from racism.
During the last decades, therapeutical proteins have risen to great significance in the pharmaceutical industry. As non-human proteins that are introduced into the human body cause a distinct immune system reaction that triggers their rapid clearance, most newly approved protein pharmaceuticals are shielded by modification with synthetic polymers to significantly improve their blood circulation time. All such clinically approved protein-polymer conjugates contain polyethylene glycol (PEG) and its conjugation is denoted as PEGylation. However, many patients develop anti-PEG antibodies which cause a rapid clearance of PEGylated molecules upon repeated administration. Therefore, the search for alternative polymers that can replace PEG in therapeutic applications has become important. In addition, although the blood circulation time is significantly prolonged, the therapeutic activity of some conjugates is decreased compared to the unmodified protein. The reason is that these conjugates are formed by the traditional conjugation method that addresses the protein's lysine side chains. As proteins have many solvent exposed lysines, this results in a somewhat uncontrolled attachment of polymer chains, leading to a mixture of regioisomers, with some of them eventually affecting the therapeutic performance.
This thesis investigates a novel method for ligating macromolecules in a site-specific manner using enzymatic catalysis. Sortase A is used as the enzyme: it is a well-studied transpeptidase that is able to catalyze the intermolecular ligation of two peptides. This process is commonly referred to as sortase-mediated ligation (SML). SML constitutes an equilibrium reaction, which limits product yield. Two previously reported methods to overcome this major limitation were tested with polymers, without using an excessive amount of one reactant.
Specific C- or N-terminal peptide sequences (recognition sequence and nucleophile) as part of the protein are required for SML. The complementary peptide was located at the polymer chain end. A grafting-to approach was used to avoid damaging the protein during polymerization. To be able to investigate all possible combinations (protein-recognition sequence and nucleophile-protein, as well as polymer-recognition sequence and nucleophile-polymer), all necessary building blocks were synthesized. Polymerization via reversible deactivation radical polymerization (RDRP) was used to achieve a narrow molecular weight distribution of the polymers, which is required for therapeutic use.
The synthesis of the polymeric building blocks was started by synthesizing the peptide via automated solid-phase peptide synthesis (SPPS) to avoid post-polymerization attachment and to enable easy adaptation of changes in the peptide sequence. To account for the different functionalities (free N- or C-terminus) required for SML, different linker molecules between resin and peptide were used.
To facilitate purification, the chain transfer agent (CTA) for reversible addition-fragmentation chain-transfer (RAFT) polymerization was coupled to the resin-immobilized recognition sequence peptide. The acrylamide and acrylate-based monomers used in this thesis were chosen for their potential to replace PEG.
Following that, surface-initiated (SI) ATRP and RAFT polymerization were attempted, but failed. As a result, the newly developed method of xanthate-supported photo-iniferter (XPI) RAFT polymerization in solution was used successfully to obtain a library of various peptide-polymer conjugates with different chain lengths and narrow molar mass distributions.
After peptide side chain deprotection, these constructs were used first to ligate two polymers via SML, which was successful but revealed a limit in polymer chain length (max. 100 repeat units). When utilizing equimolar amounts of reactants, the use of Ni2+ ions in combination with a histidine after the recognition sequence to remove the cleaved peptide from the equilibrium maximized product formation with conversions of up to 70 %.
Finally, a model protein and a nanobody with promising properties for therapeutic use were biotechnologically modified to contain the peptide sequences required for SML. Using the model protein for C- or N-terminal SML with various polymers did not result in protein-polymer conjugates, most likely because the protein termini were not accessible to the enzyme. Using the nanobody for C-terminal SML, on the other hand, was successful. However, a polymer chain length limit similar to that in polymer-polymer SML was observed. Furthermore, in the case of the synthesis of protein-polymer conjugates, it was more effective to shift the SML equilibrium by using an excess of polymer than by employing the Ni2+ ion strategy.
Overall, the experimental data from this work provide a good foundation for future research in this promising field; however, more research is required to fully understand the potential and limitations of using SML for protein-polymer synthesis. In the future, the method explored in this dissertation could prove to be a very versatile pathway for obtaining therapeutic protein-polymer conjugates that exhibit high activities and long blood circulation times.
This essay publishes for the first time a letter in which Alexander von Humboldt, in 1849, highlighted to a minister of the liberal government of Electoral Hesse the merits of a young professor teaching at the University of Marburg: the physiologist Carl Ludwig, who later became famous for groundbreaking discoveries. The letter was conveyed through the physician and physiologist Emil du Bois-Reymond, who was close to Humboldt. The letter of recommendation, with which Humboldt sought to improve Ludwig's financial situation, is an example of Humboldt's support for young researchers as well as for independent scientific institutions.
Personalmanagement und KWI
(2024)
Digitalization is an essential component of current administrative reforms. Despite its great importance and many years of effort, the record of administrative digitalization in Germany remains mixed. This study focuses on three successful digitalization projects under the Online Access Act (Onlinezugangsgesetz, OZG) and uses problem-centred expert interviews to analyse the factors influencing the implementation of OZG projects and the influence of management in this process. The analysis is theory-guided, based on the concept of bounded rationality and the economic theory of bureaucracy. The results suggest that the identified factors affect the reusability and maturity of administrative services in different ways and can be interpreted as consequences of bounded rationality in the human problem-solving process. Managers support operational actors during implementation by addressing their bounded rationality with suitable strategies: they can provide resources, contribute their expertise, make information accessible, change decision-making paths, and help resolve conflicts. The study offers valuable insights into actual management practice and derives recommendations for the implementation of public digitalization projects and for the steering of public administrations. It makes an important contribution to understanding the influence of management in administrative digitalization and underscores the need for further research in this area in order to better understand and effectively address the practices and challenges of administrative digitalization.
This thesis explores word order variability in verb-final languages. Verb-final languages have a reputation for a high amount of word order variability. However, that reputation amounts to an urban myth due to a lack of systematic investigation. This thesis provides such a systematic investigation by presenting original data from several verb-final languages with a focus on four Uralic ones: Estonian, Udmurt, Meadow Mari, and South Sámi. As with every urban myth, there is a kernel of truth in that many unrelated verb-final languages share a particular kind of word order variability, A-scrambling, in which the fronted elements do not receive a special information-structural role, such as topic or contrastive focus. That word order variability goes hand in hand with placing focussed phrases further to the right in the position directly in front of the verb. Variations on this pattern are exemplified by Uyghur, Standard Dargwa, Eastern Armenian, and three of the Uralic languages, Estonian, Udmurt, and Meadow Mari. So much for the kernel of truth: the fourth Uralic language, South Sámi, is comparatively rigid and does not feature this particular kind of word order variability. Further such comparatively rigid, non-scrambling verb-final languages are Dutch, Afrikaans, Amharic, and Korean. In contrast to scrambling languages, non-scrambling languages feature obligatory subject movement, causing word order rigidity next to other typical EPP effects.
The EPP is a defining feature of South Sámi clause structure in general. South Sámi exhibits a one-of-a-kind alternation between SOV and SAuxOV order that is captured by the assumption of the EPP and obligatory movement of auxiliaries but not lexical verbs. Other languages that allow for SAuxOV order either lack an alternation because the auxiliary is obligatorily present (Macro-Sudan SAuxOVX languages), or feature an alternation between SVO and SAuxOV (Kru languages; V2 with underlying OV as a fringe case). In the SVO–SAuxOV languages, both auxiliaries and lexical verbs move. Hence, South Sámi shows that the textbook difference between the VO languages English and French, whether verb movement is restricted to auxiliaries, also extends to OV languages. SAuxOV languages are an outlier among OV languages in general but are united by the presence of the EPP.
Word order variability is not restricted to the preverbal field in verb-final languages, as most of them feature postverbal elements (PVE). PVE challenge the notion of verb-finality in a language. Strictly verb-final languages without any clause-internal PVE are rare. This thesis charts the first structural and descriptive typology of PVE. Verb-final languages vary in the categories they allow as PVE. Allowing for non-oblique PVE is a pivotal threshold: when non-oblique PVE are allowed, PVE can be used for information-structural effects. Many areally and genetically unrelated languages only allow for given PVE but differ in whether the PVE are contrastive. In those languages, verb-finality is not at stake since verb-medial orders are marked. In contrast, the Uralic languages Estonian and Udmurt allow for any PVE, including information focus. Verb-medial orders can be used in the same contexts as verb-final orders without semantic and pragmatic differences. As such, verb placement is subject to actual free variation. The underlying verb-finality of Estonian and Udmurt can only be inferred from a range of diagnostics indicating optional verb movement in both languages. In general, it is not possible to account for PVE with a uniform analysis: rightwards merge, leftward verb movement, and rightwards phrasal movement are required to capture the cross- and intralinguistic variation.
Knowing that a language is verb-final does not allow one to draw conclusions about word order variability in that language. There are patterns of homogeneity, such as the word order variability driven by directly preverbal focus and the givenness of postverbal elements, but these are not brought about by verb-finality alone. Preverbal word order variability is restricted by the more abstract property of obligatory subject movement, whereas the determinant of postverbal word order variability remains to be identified in future work.
The automotive industry is a prime example of digital technologies reshaping mobility. Connected, autonomous, shared, and electric (CASE) trends lead to new emerging players that threaten existing industrial-aged companies. To respond, incumbents need to bridge the gap between contrasting product architecture and organizational principles in the physical and digital realms. Over-the-air (OTA) technology, that enables seamless software updates and on-demand feature additions for customers, is an example of CASE-driven digital product innovation. Through an extensive longitudinal case study of an OTA initiative by an industrial-aged automaker, this dissertation explores how incumbents accomplish digital product innovation. Building on modularity, liminality, and the mirroring hypothesis, it presents a process model that explains the triggers, mechanisms, and outcomes of this process. In contrast to the literature, the findings emphasize the primacy of addressing product architecture challenges over organizational ones and highlight the managerial implications for success.
Human activities modify nature worldwide via changes in the environment, biodiversity and the functioning of ecosystems, which in turn disrupt ecosystem services and feed back negatively on humans. A pressing challenge is thus to limit our impact on nature, and this requires detailed understanding of the interconnections between the environment, biodiversity and ecosystem functioning. These three components of ecosystems each include multiple dimensions, which interact with each other in different ways, but we lack a comprehensive picture of their interconnections and underlying mechanisms. Notably, diversity is often viewed as a single facet, namely species diversity, while many more facets exist at different levels of biological organisation (e.g. genetic, phenotypic, functional, multitrophic diversity), and multiple diversity facets together constitute the raw material for adaptation to environmental changes and shape ecosystem functioning. Consequently, investigating the multidimensionality of ecosystems, and in particular the links between multifaceted diversity, environmental changes and ecosystem functions, is crucial for ecological research, management and conservation. This thesis aims to explore several aspects of this question theoretically.
I investigate three broad topics in this thesis. First, I focus on how food webs with varying levels of functional diversity across three trophic levels buffer environmental changes, such as a sudden addition of nutrients or long-term changes (e.g. warming or eutrophication). I observed that functional diversity generally enhanced ecological stability (i.e. the buffering capacity of the food web) by increasing trophic coupling. More precisely, two aspects of ecological stability (resistance and resilience) increased, even though a third aspect (the inverse of the time required for the system to reach its post-perturbation state) decreased with increasing functional diversity. Second, I explore how several diversity facets serve as raw material for different sources of adaptation and how these sources affect multiple ecosystem functions across two trophic levels. Considering several sources of adaptation enabled an interplay between ecological and evolutionary processes, which affected trophic coupling and thereby ecosystem functioning. Third, I reflect further on the multifaceted nature of diversity by developing an index K able to quantify the facet of functional diversity, which is itself multifaceted. K can provide a comprehensive picture of functional diversity and is a rather good predictor of ecosystem functioning. Finally, I synthesise the interdependent mechanisms (complementarity and selection effects, trophic coupling and adaptation) underlying the relationships between multifaceted diversity, ecosystem functioning and the environment, and discuss the generalisation of my findings across ecosystems as well as further perspectives towards elaborating an operational biodiversity-ecosystem functioning framework for research and conservation.
Homomorphisms are a fundamental concept in mathematics expressing the similarity of structures. They provide a framework that captures many of the central problems of computer science with close ties to various other fields of science. Thus, many studies over the last four decades have been devoted to the algorithmic complexity of homomorphism problems. Despite their generality, it has been found that non-uniform homomorphism problems, where the target structure is fixed, frequently feature complexity dichotomies. Exploring the limits of these dichotomies represents the common goal of this line of research.
We investigate the problem of counting homomorphisms to a fixed structure over a finite field of prime order and its algorithmic complexity. Our emphasis is on graph homomorphisms and the resulting problem #_{p}Hom[H] for a graph H and a prime p. The main research question is how counting over a finite field of prime order affects the complexity.
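To make the problem definition concrete, homomorphisms can be counted modulo p by brute force (exponential in the number of vertices of the input graph, so this illustrates the definition rather than an efficient algorithm):

```python
from itertools import product

def count_homs_mod_p(G, H, p):
    """Count graph homomorphisms from G to H modulo a prime p by brute
    force: check every vertex map and keep only the edge-preserving ones.
    Graphs are given as (vertex list, edge set of 2-element frozensets)."""
    VG, EG = G
    VH, EH = H
    count = 0
    for f in product(VH, repeat=len(VG)):
        mapping = dict(zip(VG, f))
        if all(frozenset((mapping[u], mapping[v])) in EH for u, v in EG):
            count += 1
    return count % p

# Homomorphisms from a triangle to itself: the 6 automorphisms, so 6 mod 5 = 1.
K3 = ([0, 1, 2], {frozenset((0, 1)), frozenset((1, 2)), frozenset((0, 2))})
print(count_homs_mod_p(K3, K3, 5))
```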
In the first part of this thesis, we tackle the research question in its generality and develop a framework for studying the complexity of counting problems based on category theory. In the absence of problem-specific details, results in the language of category theory provide a clear picture of the properties needed and highlight common ground between different branches of science. The proposed problem #Mor^{C}[B] of counting the number of morphisms to a fixed object B of C is abstract in nature and encompasses important problems like constraint satisfaction problems, which serve as a leading example for all our results. We find explanations and generalizations for a plethora of results in counting complexity. Our main technical result is that specific matrices of morphism counts are non-singular. The strength of this result lies in its algebraic nature. First, our proofs rely on carefully constructed systems of linear equations, which we know to be uniquely solvable. Second, by exchanging the field over which the matrix is defined for a finite field of order p, we obtain analogous results for modular counting. For the latter, cancellations are implied by automorphisms of order p, but, intriguingly, we find that these present the only obstacle to translating our results from exact counting to modular counting. If we restrict our attention to reduced objects without automorphisms of order p, we obtain results analogous to those for exact counting. This is underscored by a confluent reduction that allows this restriction by constructing a reduced object for any given object. We emphasize the strength of the categorical perspective by applying the duality principle, which yields immediate consequences for the dual problem of counting the number of morphisms from a fixed object.
In the second part of this thesis, we focus on graphs and the problem #_{p}Hom[H]. We conjecture that automorphisms of order p capture all possible cancellations and that, for a reduced graph H, the problem #_{p}Hom[H] features the complexity dichotomy analogous to the one given for exact counting by Dyer and Greenhill. This serves as a generalization of the conjecture by Faben and Jerrum for the modulus 2. The criterion for tractability is that H is a collection of complete bipartite and reflexive complete graphs. From the findings of part one, we show that the conjectured dichotomy implies dichotomies for all quantum homomorphism problems, in particular counting vertex-surjective homomorphisms and compactions modulo p. Since the tractable cases in the dichotomy are solved by trivial computations, the study of the intractable cases remains. As an initial problem in a series of reductions capable of implying hardness, we employ the problem of counting weighted independent sets in a bipartite graph modulo a prime p. A dichotomy for this problem is shown, stating that the trivial cases occurring when a weight is congruent to 0 modulo p are the only tractable cases. We reduce the possible structure of H to the bipartite case by a reduction to the restricted homomorphism problem #_{p}Hom^{bip}[H] of counting modulo p the number of homomorphisms between bipartite graphs that maintain a given order of bipartition. This reduction does not have an impact on the accessibility of the technical results, thanks to the generality of the findings of part one. In order to prove the conjecture, it suffices to show that for a connected bipartite graph that is not complete, #_{p}Hom^{bip}[H] is #_{p}P-hard. Through a rigorous structural study of bipartite graphs, we establish this result for the rich class of bipartite graphs that are (K_{3,3} minus an edge, domino)-free. This overcomes in particular the substantial hurdle imposed by squares, which leads us to explore the global structure of H and prove the existence of explicit structures that imply hardness.
Among the different meanings carried by numerical information, cardinality is fundamental for survival and for the development of basic as well as of higher numerical skills. Importantly, the human brain inherits from evolution a predisposition to map cardinality onto space, as revealed by the presence of spatial-numerical associations (SNAs) in humans and animals. Here, the mapping of cardinal information onto physical space is addressed as a hallmark signature characterizing numerical cognition.
According to traditional approaches, cognition is defined as complex forms of internal information processing taking place in the brain (the cognitive processor). By contrast, embodied cognition approaches define cognition as functionally linked to perception and action, arising in the continuous interaction between a biological body and its physical and sociocultural environment.
Embracing the principles of the embodied cognition perspective, I conducted four novel studies designed to unveil how SNAs originate, develop, and adapt depending on characteristics of the organism, the context, and their interaction. I structured my doctoral thesis into three levels. At the grounded level (Study 1), I unfold the biological foundations underlying the tendency to map cardinal information across space; at the embodied level (Study 2), I reveal the impact of atypical motor development on the construction of SNAs; at the situated level (Study 3), I document the joint influence of visuospatial attention and task properties on SNAs. Furthermore, I experimentally investigate the presence of associations between physical and numerical distance, another numerical property fundamental for the development of efficient mathematical minds (Study 4).
In Study 1, I present the Brain's Asymmetric Frequency Tuning hypothesis, which relies on hemispheric asymmetries in the processing of spatial frequencies, a low-level visual feature that the (in)vertebrate brain extracts from any visual scene to create a coherent percept of the world. Computational analyses of the power spectra of the original stimuli used to document the presence of SNAs in human newborns and animals support the brain's asymmetric frequency tuning as a theoretical account and as an evolutionarily inherited mechanism scaffolding the universal and innate tendency to represent cardinality across horizontal space.
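Such a power-spectrum analysis might be set up as sketched below (synthetic grating stimulus and radial averaging; an illustration, not the study's actual pipeline):

```python
import numpy as np

def radial_power_spectrum(image):
    """Compute the 2D Fourier power spectrum of a grayscale image and
    average it over concentric rings to get power vs. spatial frequency."""
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # mean power in each integer-frequency ring
    return np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())

# toy stimulus: vertical grating (8 cycles per image) plus noise
img = np.sin(2 * np.pi * 8 * np.linspace(0, 1, 128))[None, :] * np.ones((128, 1))
img = img + 0.1 * np.random.default_rng(0).normal(size=(128, 128))
spectrum = radial_power_spectrum(img)
print(spectrum[:12])  # peak near ring 8, the grating's spatial frequency
```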
In Study 2, I explore SNAs in children with rare genetic neuromuscular diseases: spinal muscular atrophy (SMA) and Duchenne muscular dystrophy (DMD). SMA children never accomplish independent motoric exploration of their environment; in contrast, DMD children do explore but later lose this ability. The different SNAs reported by the two groups support the critical role of early sensorimotor experiences in the spatial representation of cardinality.
In Study 3, I directly compare the effects of overt attentional orienting during explicit and implicit processing of numerical magnitude. First, the different effects of attentional orienting depending on the type of assessment support distinct mechanisms underlying SNAs during explicit and implicit assessment of numerical magnitude. Second, the impact of vertical shifts of attention on the processing of numerical distance sheds light on the correspondence between numerical distance and peri-personal distance.
In Study 4, I document the presence of different SNAs, driven by numerical magnitude and numerical distance, by employing different response mappings (left vs. right and near vs. distant).
In the field of numerical cognition, the four studies included in the present thesis contribute to unveiling how the characteristics of the organism and the environment influence the emergence, development, and flexibility of our tendency to represent cardinal information across space, thus supporting the predictions of the embodied cognition approach. Furthermore, they inform a taxonomy of body-centred factors (biological properties of the brain and sensorimotor system) modulating the spatial representation of cardinality throughout the course of life, at the grounded, embodied, and situated levels.
While awareness of the different variables influencing SNAs over the course of life is important, it is equally important to consider the organism as a whole in its sensorimotor interaction with the world. Inspired by my doctoral research, I propose here a holistic perspective that considers the role of evolution, embodiment, and environment in the association of cardinal information with directional space. The new perspective advances current approaches to SNAs, both at the conceptual and at the methodological level.
Unveiling how the mental representation of cardinality emerges, develops, and adapts is necessary to shape efficient mathematical minds and achieve economic productivity, technological progress, and a higher quality of life.
«Musik erfinden und gestalten» (inventing and shaping music) holds great potential for music education: experimenting with sounds, developing a feel for dramaturgical arcs, communicating non-verbally. Inventing and shaping music opens up a broad field of musical activities and opportunities for experience. Yet in regular music lessons in Switzerland's compulsory schools, production-oriented didactic approaches are still rather the exception, and music teachers lack strategies for guiding them.
For the present book, the author conducted a design-based research study on how primary school teachers step by step develop their guidance strategies when carrying out musical creation processes in their school classes. The researcher accompanied the teachers in their school practice and intervened with targeted reflection prompts to support the professionalization process.
From this, three reflection tools were generated: the reflection tool try-outs contains concrete suggestions for action and reflection questions for guiding musical creation processes; the online tool improspider is a self-reflection instrument for assessing personal orientations; the competence model Kompetenzflyer provides a reflective template for targeting independent steps of competence acquisition.
The reflection tools are also available online in the form of a learning object.
Water stored in the unsaturated soil as soil moisture is a key component of the hydrological cycle, influencing numerous hydrological processes including hydrometeorological extremes. Soil moisture influences flood generation processes, and during droughts, when precipitation is absent, it provides plants with transpirable water, thereby sustaining plant growth and survival in agriculture and natural ecosystems.
Soil moisture stored in deeper soil layers, e.g. below 100 cm, is of particular importance for providing plant-transpirable water during dry periods. Because it is not directly connected to the atmosphere and lies outside the soil layers with the highest root densities, water in these layers is less susceptible to rapid evaporation and transpiration. Instead, it provides longer-term soil water storage, increasing the drought tolerance of plants and ecosystems.
Given the importance of soil moisture in the context of hydro-meteorological extremes in a warming climate, its monitoring is part of official national climate adaptation strategies. Yet soil moisture is highly variable in time and space, which challenges its monitoring at the spatio-temporal scales relevant for flood and drought risk modelling and forecasting.
Introduced over a decade ago, Cosmic-Ray Neutron Sensing (CRNS) is a noninvasive geophysical method that allows for the estimation of soil moisture at relevant spatio-temporal scales of several hectares at a high, sub-daily temporal resolution. CRNS relies on the detection of secondary neutrons above the soil surface, which are produced from high-energy cosmic-ray particles in the atmosphere and the ground. Neutrons in a specific epithermal energy range are sensitive to the amount of hydrogen present in the surroundings of the CRNS neutron detector. Because a neutron has nearly the same mass as the hydrogen nucleus, it loses kinetic energy upon collision and is subsequently absorbed once it reaches low, thermal energies. A higher amount of hydrogen therefore leads to fewer neutrons being detected per unit time. Since in most terrestrial ecosystems the largest amount of hydrogen is stored as soil moisture, changes in soil moisture can be estimated through an inverse relationship with observed neutron intensities.
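To make this inverse relationship concrete, the following minimal Python sketch implements the widely used calibration function of Desilets et al. (2010). The shape parameters a0, a1, a2 are the commonly cited standard values; N0, the count rate over dry soil, is a site-specific calibration constant. This illustrates the general CRNS principle and is not necessarily the exact transfer function used in the thesis.

```python
def soil_moisture_from_neutrons(n, n0, a0=0.0808, a1=0.372, a2=0.115):
    """Gravimetric soil moisture (g water per g dry soil) from a
    corrected epithermal neutron count rate n, following the shape
    of the Desilets et al. (2010) transfer function.

    n0 is the site-specific count rate over dry soil (calibrated).
    Fewer neutrons (smaller n/n0) imply more hydrogen, i.e. wetter soil.
    """
    return a0 / (n / n0 - a1) - a2

# Example: a 10% drop in neutron intensity relative to dry conditions
print(soil_moisture_from_neutrons(n=0.9 * 1500, n0=1500))  # ~0.038 g/g
```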
Although important scientific advancements have been made to improve the methodological framework of CRNS, several open challenges remain, some of which are addressed in the scope of this thesis. These include the influence of atmospheric variables such as air pressure and absolute air humidity, as well as the impact of variations in incoming primary cosmic-ray intensity, on observed epithermal and thermal neutron signals, and how these influences can be corrected. Recently introduced advanced neutron-to-soil-moisture transfer functions are expected to improve CRNS-derived soil moisture estimates, but the potential improvements need to be investigated at study sites with differing environmental conditions. Sites with strongly heterogeneous, patchy soil moisture distributions challenge existing transfer functions, and further research is required to assess the impact of such heterogeneity and to correct derived soil moisture estimates under heterogeneous site conditions. Despite its capability of measuring representative averages of soil moisture at the field scale, CRNS lacks integration depth below the first few decimetres of the soil. Given the importance of soil moisture in deeper soil layers as well, extending the observational window of CRNS through modelling approaches or in situ measurements is of high importance for hydrological monitoring applications.
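In the CRNS literature, the three influences named above are commonly handled as multiplicative correction factors for barometric pressure, absolute humidity, and incoming cosmic-ray intensity (e.g. Zreda et al., 2012; Rosolem et al., 2013). A hedged sketch of this standard procedure follows; the reference values are illustrative and would in practice be site- and period-specific.

```python
import math

def correct_neutron_counts(n_raw, p_hpa, h_gm3, m_incoming,
                           p_ref=1013.25, h_ref=0.0, m_ref=150.0,
                           beta=0.0076):
    """Standard multiplicative CRNS corrections (illustrative values):
    f_p: barometric pressure, exp(beta * (p - p_ref)), beta in 1/hPa
    f_h: absolute air humidity, 1 + 0.0054 * (h - h_ref), h in g/m^3
    f_i: incoming primary cosmic-ray intensity, m_ref / m_incoming
    """
    f_p = math.exp(beta * (p_hpa - p_ref))
    f_h = 1.0 + 0.0054 * (h_gm3 - h_ref)
    f_i = m_ref / m_incoming
    return n_raw * f_p * f_h * f_i

print(correct_neutron_counts(1200, p_hpa=1005.0, h_gm3=8.0, m_incoming=145.0))
```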
By addressing these challenges, this thesis helps to close knowledge gaps and to answer some of the open questions in CRNS research. Influences of different environmental variables are quantified, and correction approaches are tested and developed. Neutron-to-soil-moisture transfer functions are evaluated, and approaches to reduce the effects of heterogeneous soil moisture distributions are presented. Lastly, soil moisture estimates from larger soil depths are derived from CRNS through modified, simple modelling approaches and through in situ estimates using CRNS as a downhole technique. Thereby, this thesis not only illustrates the potential of new, as yet unexplored applications of CRNS but also opens a new field of CRNS research. Consequently, it advances the methodological framework of CRNS for above-ground and downhole applications. Although further research is needed to fully exploit the potential of CRNS, this thesis contributes to current hydrological research and, not least, to advancing hydrological monitoring approaches, which are of utmost importance in the context of intensifying hydro-meteorological extremes in a changing climate.
Overcoming natural biomass limitations in gram-negative bacteria through synthetic carbon fixation
(2024)
The carbon demands of an ever-increasing human population and the concomitant rise in net carbon emissions require CO2-sequestering approaches for the production of carbon-containing molecules. Microbial production of carbon-containing products from plant-based sugars could replace current fossil-based production. However, this form of sugar-based microbial production directly competes with human food supply and natural ecosystems. Instead, one-carbon feedstocks derived from CO2 and renewable energy have been proposed as an alternative. The one-carbon molecule formate is a stable, readily soluble, and safe-to-store energetic mediator that can be electrochemically generated from CO2 and (excess off-peak) renewable electricity. Formate-based microbial production could represent a promising approach for a circular carbon economy. However, easy-to-engineer and efficient formate-utilizing microbes are lacking. Multiple synthetic metabolic pathways have been designed for better-than-nature carbon fixation. Among them, the reductive glycine pathway was proposed as the most efficient pathway for aerobic formate assimilation. While some of these pathways have been successfully engineered in microbial hosts, these synthetic strains have so far not exceeded the performance of natural strains. In this work, I engineered and optimized two different synthetic formate assimilation pathways in gram-negative bacteria to exceed the limits of a natural carbon fixation pathway, the Calvin cycle.
The first chapter solidified Cupriavidus necator as a promising formatotrophic host to produce value-added chemicals. The formate tolerance of C. necator was assessed and a production pathway for crotonate established in a modularized fashion. Last, bioprocess optimization was leveraged to produce crotonate from formate at a titer of 148 mg/L.
In the second chapter, I chromosomally integrated and optimized the synthetic reductive glycine pathway in C. necator using a transposon-mediated selection approach. The insertion methodology allowed selection for condition-specific, tailored pathway expression, as improved pathway performance led to better growth. I then showed that my engineered strains exceed the biomass yields of the Calvin-cycle-utilizing wild-type C. necator on formate. This demonstrated for the first time the superiority of a synthetic formate assimilation pathway and, by extension, of synthetic carbon fixation efforts as a whole.
In chapter 3, I engineered a segment of a synthetic carbon fixation cycle in Escherichia coli. The GED cycle was proposed as a Calvin cycle alternative that does not perform a wasteful oxygenation reaction and is more energy efficient. The pathway's simple architecture and reasonable driving force made it a promising candidate for enhanced carbon fixation. I created a deletion strain that coupled growth to carboxylation via the GED pathway segment. The CO2 dependence of the engineered strain and 13C-tracer analysis confirmed operation of the pathway in vivo.
In the final chapter, I present my efforts to implement the GED cycle in C. necator as well, which might be a better-suited host, as it is accustomed to formatotrophic and hydrogenotrophic growth. To provide the carboxylation substrate in vivo, I engineered C. necator to utilize xylose as a carbon source and created a selection strain for carboxylase activity. I verified activity of the key enzyme, the carboxylase, in the decarboxylative direction. Although CO2-dependent growth of the strain was not obtained, I showed that all enzymes required for operation of the GED cycle are active in vivo in C. necator.
I then evaluate my success in engineering a linear and a cyclical one-carbon fixation pathway in two different microbial hosts. The linear reductive glycine pathway presents itself as a much simpler metabolic solution for formate-dependent growth than the sophisticated establishment of hard-to-balance carbon fixation cycles. Last, I highlight advantages and disadvantages of C. necator as an upcoming microbial benchmark organism for synthetic metabolism efforts and give an outlook on its potential for the future of C1-based manufacturing.
Portal Transfer 2024
(2024)
Dear readers, leaving one's own "bubble", changing perspectives, overcoming silo mentality – what science achieves internally, indeed must achieve in order to be successful, still poses challenges in its outward impact. Yet it is by now part of the self-image of modern universities to explain publicly what is being researched within their walls, to contribute to societal discourse, and to transfer their findings swiftly into practice.
The University of Potsdam has installed these transfer tasks as a third pillar alongside teaching and research, thereby lending its edifice even more stability. For years, it has ranked among the most successful universities in national comparison when it comes to supporting start-ups and founding companies out of research: in this magazine, we report on Potassco Solutions GmbH, founded by computer scientist Torsten Schaub, who solves complex optimization problems in companies with his AI system Clingo, and on SEQSTANT GmbH, whose innovative diagnostics can identify pathogens of respiratory diseases in real time. We also show how research teams cooperate with industry, for example with K-UTEC in Sondershausen, Thuringia, contributing scientific know-how to ensure that no valuable lithium is lost from production waste there.
While technology transfer is aimed primarily at industry, knowledge transfer benefits society as a whole. The University of Potsdam is particularly strong here in education: with its teacher-training graduates, it sends the current state of teaching research straight into school practice. Digitalization is entering classrooms ever more frequently; how this can succeed is described in this magazine. We also explain what sport science can contribute to the treatment of depression and how environmental research aims to improve risk management in flood-prone regions. Whether in public administration or political institutions – scientific expertise is in demand everywhere. We illustrate this with the example of Frauke Brosius-Gersdorf, a legal scholar who advises the Federal Government on the regulation of abortion.
The shortest path of knowledge from the university into practice undoubtedly leads via the alumni, who become effective as professionals and leaders in the region and beyond. That this path can begin during one's studies is proven by the many student initiatives that have their say here. None of them shy away from the limelight: whether at science slams on stages across the state of Brandenburg, at TEDx talks in the Hans Otto Theater, at the art tour in Potsdam's Waschhaus-Arena, or with English-language drama at the university. Appearing in public and finding new ways to carry knowledge to the broader population – that, too, is part of transfer. Just like this magazine.
Have you already swiped or liked this morning? Have you taken part in a video conference at work, used or programmed a database? Have you paid with your smartphone on the way home, listened to a podcast, or renewed the loan on books you borrowed from the library? And in the evening, did you fill out your tax return on ELSTER.de on your tablet, shop online, or pay invoices before being tempted to watch a series on a streaming platform?
Our lives are entirely digitalized.
These changes make many things faster, easier, and more efficient. But keeping pace with these changes demands a lot from us, and not everyone succeeds. There are people who prefer to go to the bank to make a transfer, leave the programming to the experts, send their tax return by mail, and only use their smartphone to make phone calls. They don’t want to keep pace, or maybe they can’t. They haven’t learned these things. Others, younger people, grow up as “digital natives” surrounded by digital devices, tools, and processes. But does that mean they really know how to use them? Or do they also need digital education?
But what does successful digital education actually look like? Does it teach us how to use a tablet, how to google properly, and how to write Excel spreadsheets? Perhaps it’s about more than that. It’s about understanding the comprehensive change that has been taking hold of our world since it was broken down into digital ones and zeros and rebuilt virtually. But how do we learn to live in a world of digitality – with all that it entails, and to our benefit?
For the new issue of “Portal Wissen”, we looked around at the university and interviewed researchers about the role that the connection between digitalization and learning plays in the research of various disciplines. We spoke to Katharina Scheiter, Professor of Digital Education, about the future of German schools and had several experts show us examples of how digital tools can improve learning in schools. We also talked to computer science and agricultural researchers about how even experienced farmers can still learn a lot about their land and their work thanks to digital tools. We spoke to educational researchers who are using big data to analyze how boys and girls learn and what the possible causes for differences are. Education and political scientist Nina Kolleck, on the other hand, looks at education against the backdrop of globalization and relies on the analysis of large amounts of social media data.
Of course, we don’t lose sight of the diversity of research at the University of Potsdam. We learn, for example, what alternatives to antibiotics could soon be available. This magazine also looks at stress and how it makes us ill as well as the research into sustainable ore extraction.
A new feature of our magazine is a whole series of shorter articles that invite you to browse and read: from research news and photographic insights into laboratories to simple explanations of complex phenomena and outlooks into the wider world of research to a small scientific utopia and a personal thanks to research. All this in the name of education, of course. Enjoy your read!
Portal = Welt retten
(2024)
Answering questions, explaining the unknown, solving puzzles – and using the insights gained for the benefit of humanity: this is what drives scientists around the world. Research is not a secret science that happens behind closed doors. At its best, it serves everyone. It operates free of preconditions and with open outcomes, and precisely for this reason research findings can foster necessary innovations, transformation, or rethinking, and in this way change the world. For the better, so the hope goes. For this issue of "Portal", we asked University President Prof. Oliver Günther, Ph.D. and ecologist Prof. Dr. Damaris Zurell whether science can save the world. They agree: research helps many people lead fulfilled lives worth living. But they also emphasize that science cannot achieve this alone; real change requires politics, the economy, and society.
How important it is that scientific findings move us to action is also the subject of the many other stories in this issue. In Potsdam, not only scientists but also students and staff in technology and administration help to make the university, its surroundings, or "the world out there" better, piece by piece. Jonathan Schorsch, for example, Professor of Jewish Religious and Intellectual History, launched the "Green Sabbath": one day a week on which we grant the Earth – and ourselves – a small break. Legal scholar Andreas Zimmermann reports on proceedings on climate change before the International Court of Justice in which he is involved as a researcher, and his colleague Dr. Anna von Rebay fights as a lawyer for the rights of the ocean against exploitation and pollution. Voltaire Prize winner Gera Gizaw tells, from a refugee camp in Kenya, the stories of the people there, and medical ethicist Robert Ranisch shows how nursing care can provide even more well-being in the future. University members are committed to the educational advancement of people from non-academic families, and student Tobias Föhl fights global poverty with ONE. Staff from musicology extend the life of old furniture and musical instruments, and students work together with youth fire brigades from the region. The Better World Award shines a light on innovative ideas that should find their way from the university to the public as quickly as possible. How important the communication of scientific findings is, is shown by Julia Wandt and Kristin Küter, who advise people in academia who face hostility. For progress to be made and solutions to the world's problems to be found, one thing must not happen: research falling silent.
Skepticism
(2022)
This dissertation offers new and original readings of three major texts in the history of Western philosophy: Descartes's “First Meditation,” Kant's “Transcendental Deduction,” and his “Refutation of Idealism.” It argues that each text addresses the problem of skepticism and posits that they have a hitherto underappreciated, organic relationship to one another. The dissertation begins with an analysis of Descartes's “First Meditation,” which I argue offers two distinct and independent skeptical arguments that differ in both aim and scope. I call these the “veil of ideas” argument and the “author of my origin” argument. My reading counters the standard interpretation of the text, which sees it as offering three stages of doubt, namely the occasional fallibility of the senses, the dream hypothesis, and the evil demon hypothesis. Building on this, the central argument of the dissertation is that Kant's “Transcendental Deduction” actually transforms and radicalizes Descartes's author of my origin argument, reconceiving its meaning within the framework of Kant's own transcendental idealist philosophy. Finally, I argue that the “Refutation of Idealism” offers a similarly radicalized version of Descartes's veil of ideas argument, albeit translated into the framework of transcendental idealism.
The experience of premenstrual syndrome (PMS) affects up to 90% of individuals with an active menstrual cycle and involves a spectrum of aversive physiological and psychological symptoms in the days leading up to menstruation (Tschudin et al., 2010). Despite its high prevalence, the precise origins of PMS remain elusive, with influences ranging from hormonal fluctuations to cognitive, social, and cultural factors (Hunter, 2007; Matsumoto et al., 2013).
Biologically, hormonal fluctuations, particularly in gonadal steroids, are commonly believed to be implicated in PMS, with a central factor being the varying susceptibility to these fluctuations across individuals and cycles (Rapkin & Akopians, 2012). Allopregnanolone (ALLO), a neuroactive steroid and progesterone metabolite, has emerged as a potential link to PMS symptoms (Hantsoo & Epperson, 2020). ALLO is a positive allosteric modulator of the GABAA receptor, influencing inhibitory communication (Rupprecht, 2003; Andréen et al., 2006). Differing susceptibility to ALLO fluctuations throughout the cycle may lead to reduced GABAergic signal transmission during the luteal phase of the menstrual cycle.
The GABAergic system's broad influence means that a number of physiological systems are affected, including a consistent reduction in vagally mediated heart rate variability (vmHRV) during the luteal phase (Schmalenberger et al., 2019). This reduction in vmHRV is more pronounced in individuals with high PMS symptoms (Baker et al., 2008; Matsumoto et al., 2007). Fear conditioning studies have shown inconsistent associations with cycle phases, suggesting a complex interplay between physiological parameters and PMS-related symptoms (Carpenter et al., 2022; Epperson et al., 2007; Milad et al., 2006).
The neurovisceral integration model posits that vmHRV reflects the capacity of the central autonomic network (CAN), which is responsible for regulatory processes at the behavioral, cognitive, and autonomic levels (Thayer & Lane, 2000, 2009). Fear learning, mediated within the CAN, is suggested to be indicative of the capacity for successful regulation that vmHRV indexes (Battaglia & Thayer, 2022). Given the GABAergic mediation of central inhibitory functional connectivity in the CAN, which may be affected by ALLO fluctuations, this thesis proposes that fluctuating CAN activity in the luteal phase contributes to the diverse aversive symptoms of PMS.
A research program was designed to empirically test these propositions. Study 1 investigated fear discrimination during different menstrual cycle phases and its interaction with vmHRV, revealing nuanced effects on acoustic startle response and skin conductance response. While there was heightened fear discrimination in acoustic startle responses in participants in the luteal phase, there was an interaction between menstrual cycle phase and vmHRV in skin conductance responses. In this measure, heightened fear discrimination during the luteal phase was only visible in individuals with high resting vmHRV; those with low vmHRV showed reduced fear discrimination and higher overall responses.
Despite PMS affecting the vast majority of menstruating people, very few tools are available to reliably assess its symptoms in the German-speaking area. Study 2 aimed to close this gap by translating and validating a German version of the short form of the Premenstrual Assessment Form (Allen et al., 1991), providing a reliable instrument for future investigations in the German-speaking research area.
Study 3 employed a diary study paradigm to explore daily associations between vmHRV and PMS symptoms. The results showed clear simultaneous fluctuations between the two constructs with a peak in PMS and a low point in vmHRV a few days before menstruation onset. The association between vmHRV and PMS was driven by psychological PMS symptoms.
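For orientation, vmHRV in such studies is commonly quantified with time-domain indices such as RMSSD, the root mean square of successive differences of inter-beat (RR) intervals; that the studies summarised here used exactly this index is an assumption made for illustration. A minimal sketch:

```python
import numpy as np

def rmssd(rr_ms):
    """RMSSD in ms from a series of RR (inter-beat) intervals in ms,
    a standard time-domain index of vagally mediated HRV."""
    rr = np.asarray(rr_ms, dtype=float)
    return float(np.sqrt(np.mean(np.diff(rr) ** 2)))

print(rmssd([812, 790, 843, 801, 825]))  # ~37.5 ms for this toy series
```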
Based on the theoretical considerations regarding the neurovisceral perspective on PMS, another interesting construct to consider is attentional control, as it is closely related to functions of the CAN. Study 4 delved into attentional control and vmHRV differences between menstrual cycle phases, demonstrating an interaction between cycle phase and PMS symptoms. In a pilot, we found reduced vmHRV and attentional control during the luteal phase only in participants who reported strong PMS.
While Studies 1-4 provided evidence for the mechanisms underlying PMS, Studies 5 and 6 investigated short- and long-term intervention protocols to ameliorate PMS symptomatology. Study 5 explored the potential of heart rate variability biofeedback (HRVB) to alleviate PMS symptoms, alongside a number of other outcome measures. In a waitlist-control design, participants underwent a 4-week smartphone-based HRVB intervention. The results revealed positive effects on PMS, with larger effect sizes for psychological symptoms, as well as on depressive symptoms, anxiety/stress, and attentional control.
Finally, Study 6 examined the acute effects of HRVB on attentional control. The study found a positive impact, but only in highly stressed individuals.
The thesis, based on this comprehensive research program, expands our understanding of PMS as an outcome of CAN fluctuations mediated by GABAA receptor reactivity. The results largely support the model. These findings not only deepen our understanding of PMS but also offer potential avenues for therapeutic interventions. The promising results of smartphone-based HRVB training suggest a non-pharmacological approach to managing PMS symptoms, although further research is needed to confirm its efficacy.
In conclusion, this thesis illuminates the complex web of factors contributing to PMS, providing valuable insights into its etiological underpinnings and potential interventions. By elucidating the relationships between hormonal fluctuations, CAN activity, and psychological responses, this research contributes to more effective treatments for individuals grappling with the challenges of PMS. The findings hold promise for improving the quality of life for those affected by this prevalent and often debilitating condition.
Semi-parliamentarism describes the system of government in which the government is elected by, and can be dismissed by, one part of the parliament but is independent of another part, while both chambers must approve legislation. This system, classified by Steffen Ganghof, complements established typologies of government systems such as those used by David Samuels and Matthew Shugart. Semi-parliamentarism is the logical counterpart to semi-presidentialism: whereas in semi-presidentialism only part of the executive depends on the legislature, in semi-parliamentarism the executive depends on only part of the legislature. Semi-parliamentarism thus embodies a system of separation of powers without the executive personalism created by the direct election and independence of the head of government in presidentialism. This makes semi-parliamentarism suitable for tracing differences between parliamentarism and presidentialism back to the separate influence of the separation of powers and of executive personalism. The study of semi-parliamentarism is therefore of significance for the literature on government systems as a whole. Semi-parliamentarism is, moreover, not a purely theoretical construct: it exists at the Australian federal level, in the Australian sub-national states, and in Japan.
This dissertation is the first to comprehensively examine legislation in semi-parliamentary states as such. The focus lies on the second chambers, since their independence from the government makes them the actual site of legislation. Legislation in parliamentarism and presidentialism differs particularly in party unity, coalition building, and the legislative success of governments; these points are therefore of particular interest in the analysis of semi-parliamentarism. The semi-parliamentary states also differ among themselves, in some cases considerably, in their institutional design, such as their electoral systems or the instruments available for resolving deadlock. Describing and analysing the effects of these differences on legislation is, alongside the comparison of semi-parliamentarism with other systems, the second major aim of this work.
As the foundation of the analysis, I compiled an extensive dataset covering all legislative periods of the Australian states between 1997 and 2019. Its main components are all recorded (division) votes of both chambers, all government bills introduced and passed, and the party positions in the relevant policy fields at the sub-national level, collected by means of an expert survey.
Mainly with the help of mixed-effects and fractional-response analyses, I show that semi-parliamentarism resembles parliamentary systems more than presidential ones in many respects. Only coalition building is considerably more flexible and thus differs from typical parliamentary coalition building. The analyses suggest that essential differences between parliamentarism and presidentialism are attributable to executive personalism rather than to the separation of powers.
Among the semi-parliamentary states, the government's control of the median of both parliamentary chambers and its ability to dissolve the second chamber along with the first appear to produce the decisive differences in legislation. Control of the median enables flexible coalition building and leads to higher legislative success rates, as does an easier option to dissolve the second chamber. Party unity, independent of these aspects, is very high in both chambers of semi-parliamentary parliaments.
To mark the thirtieth anniversary of the Kommunalwissenschaftliches Institut (Institute for Local Government Studies, KWI) at the University of Potsdam, this commemorative volume brings together short essays by former and current board members, honorary board members, long-serving research staff of the institute, and current academic cooperation partners. The twelve contributions deal with local government studies and the history of the institute, with current research questions in the field, and with the KWI's academic cooperations. The volume, edited by the KWI board, is intended to offer a broad view of 30 years of local government studies in Brandenburg and at the University of Potsdam and an outlook on future research in the field.
Deep learning has seen widespread application in many domains, mainly for its ability to learn data representations from raw input data. Nevertheless, its success has so far been coupled with the availability of large annotated (labeled) datasets. This is a requirement that is difficult to fulfil in several domains, such as medical imaging. Annotation costs form a barrier to extending deep learning to clinically relevant use cases. The labels associated with medical images are scarce, since generating expert annotations of multimodal patient data at scale is non-trivial, expensive, and time-consuming. This substantiates the need for algorithms that learn from the increasing amounts of unlabeled data. Self-supervised representation learning algorithms offer a pertinent solution, as they allow real-world (downstream) deep learning tasks to be solved with fewer annotations. Self-supervised approaches leverage unlabeled samples to acquire generic features about different concepts, subsequently enabling annotation-efficient downstream task solving.
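As a concrete illustration of this idea (not a method from the thesis), the following minimal NumPy sketch shows the InfoNCE contrastive objective, in which two augmented "views" of the same unlabeled image serve as positives for one another:

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.1):
    """InfoNCE loss for two batches of L2-normalised embeddings z1, z2
    of shape (N, D); row i of z1 and row i of z2 come from two
    augmentations of the same unlabeled image (positive pairs)."""
    logits = z1 @ z2.T / tau                      # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = len(z1)
    return float(-log_prob[np.arange(n), np.arange(n)].mean())

# Toy usage with random unit vectors
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
z /= np.linalg.norm(z, axis=1, keepdims=True)
print(info_nce_loss(z, z))  # identical views give a low loss
```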
Nevertheless, medical images present multiple unique and inherent challenges for existing self-supervised learning approaches, which we seek to address in this thesis: (i) medical images are multimodal, and their multiple modalities, e.g. MRI and CT, are heterogeneous in nature and imbalanced in quantity; (ii) medical scans are multi-dimensional, often 3D instead of 2D; (iii) disease patterns in medical scans are numerous, and their incidence exhibits a long-tail distribution, so it is oftentimes essential to fuse knowledge from different data modalities, e.g. genomics or clinical data, to capture disease traits more comprehensively; (iv) medical scans usually exhibit more uniform color density distributions than natural images, e.g. in dental X-rays. Our proposed self-supervised methods meet these challenges, besides significantly reducing the amount of required annotations.
We evaluate our self-supervised methods on a wide array of medical imaging applications and tasks. Our experimental results demonstrate the obtained gains in both annotation-efficiency and performance; our proposed methods outperform many approaches from the related literature. Additionally, in the case of fusion with genetic modalities, our methods also allow for cross-modal interpretability. In this thesis, we not only show that self-supervised learning is capable of mitigating manual annotation costs, but our proposed solutions also demonstrate how to better utilize it in the medical imaging domain. Progress in self-supervised learning has the potential to extend the application of deep learning algorithms to clinical scenarios.
Arctic climate change is marked by intensified warming compared to global trends and a significant reduction in Arctic sea ice, which can intricately influence mid-latitude atmospheric circulation through tropospheric and stratospheric pathways. Achieving accurate simulations of current and future climate demands a realistic representation of Arctic climate processes in numerical climate models, which remains challenging.
Model deficiencies in replicating observed Arctic climate processes often arise from inadequacies in representing the turbulent boundary layer processes that govern interactions between the atmosphere, sea ice, and ocean. Many current climate models rely on parameterizations developed for mid-latitude conditions to handle Arctic turbulent boundary layer processes.
This thesis focuses on a modified representation of Arctic atmospheric processes and on understanding its resulting impact on the large-scale mid-latitude atmospheric circulation within climate models. Improved turbulence parameterizations, recently developed based on Arctic measurements, were implemented in the global atmospheric circulation model ECHAM6. This involved modifying the stability functions over sea ice and ocean for stable stratification and changing the roughness length over sea ice for all stratification conditions. Comprehensive analyses are conducted to assess the impacts of these modifications on ECHAM6's simulations of the Arctic boundary layer, the overall atmospheric circulation, and the dynamical pathways between the Arctic and mid-latitudes.
Through a step-wise implementation of these parameterizations in ECHAM6, a series of sensitivity experiments revealed that the combined impacts of the reduced roughness length and the modified stability functions are non-linear. Nevertheless, both modifications consistently lead to a general decrease in the heat transfer coefficient, in close agreement with the observations.
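For orientation, the surface sensible heat flux in such models follows the bulk-aerodynamic form H = rho * cp * C_H * U * (T_s - T_a), in which both the roughness lengths and the stability functions enter through the transfer coefficient C_H. The sketch below shows only the neutral-stratification coefficient; the stability functions modified in this thesis would appear as corrections to the two logarithmic terms, and all numerical values are illustrative, not taken from ECHAM6.

```python
import numpy as np

def sensible_heat_flux(wind, t_surf, t_air, z=10.0, z0=1e-3, z0t=1e-4,
                       rho=1.3, cp=1004.0, kappa=0.4):
    """Bulk-aerodynamic sensible heat flux (W/m^2) with a neutral
    transfer coefficient C_H = kappa^2 / (ln(z/z0) * ln(z/z0t));
    z0 and z0t are the roughness lengths for momentum and heat."""
    c_h = kappa**2 / (np.log(z / z0) * np.log(z / z0t))
    return rho * cp * c_h * wind * (t_surf - t_air)

# Smaller roughness lengths over smooth sea ice reduce C_H and hence H
print(sensible_heat_flux(wind=5.0, t_surf=271.0, t_air=268.0))
```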
Additionally, compared to the reference observations, the ECHAM6 model falls short in accurately representing unstable and strongly stable conditions.
The less frequent occurrence of strong stability restricts the influence of the modified stability functions by reducing the affected sample size. However, when focusing solely on specific instances of a strongly stable atmosphere, the sensible heat flux approaches near-zero values, in line with the observations. Models employing commonly used surface turbulence parameterizations have been shown to have difficulties replicating the near-zero sensible heat flux under strongly stable stratification.
I also found that these limited changes in surface layer turbulence parameterizations have a statistically significant impact on the temperature and wind patterns across multiple pressure levels, including the stratosphere, in both the Arctic and mid-latitudes. These significant signals vary in strength, extent, and direction depending on the specific month or year, indicating a strong reliance on the background state.
Furthermore, this research investigates how the modified surface turbulence parameterizations may influence the response of both stratospheric and tropospheric circulation to Arctic sea ice loss.
The most suitable parameterizations for accurately representing Arctic boundary layer turbulence were identified from the sensitivity experiments. Subsequently, the model's response to sea ice loss was evaluated through extended ECHAM6 simulations with different prescribed sea ice conditions.
The simulation with adjusted surface turbulence parameterizations better reproduced the vertical extent of the observed Arctic tropospheric warming, demonstrating improved alignment with the reanalysis data. Additionally, unlike the control experiments, this simulation successfully reproduced specific circulation patterns linked to the stratospheric pathway for Arctic-mid-latitude linkages: an increased occurrence of the Scandinavian-Ural blocking regime in early winter and of the negative phase of the North Atlantic Oscillation in late winter. Overall, it can be inferred that improving turbulence parameterizations at the surface layer can improve ECHAM6's response to sea ice loss.
Against the background of the growing relevance of digitality in school teaching and the resulting popularity of gaming and gamification as teaching methods, this thesis examines game design as a constructivist approach to computer games, analysing the method's suitability for art education. It discusses to what extent game design as an instructional method promotes learning in general and is suited to developing digital literacy. The focus lies on examining game design with regard to the central competence and learning dimensions of art education: artistic production and aesthetic reception as the two key artistic-aesthetic competences, and aesthetic experience as a special learning event, which in the discourse of art education is regarded, alongside these competences, as the highest goal of teaching. These three dimensions serve as the levels of analysis for the method under investigation. Game design turns out to be largely conducive to all three areas, although with respect to sensory perception in the process of aesthetic reception it takes on only a complementary function. Not all areas of the design fields of artistic production are addressed, and experimental, open-ended artistic work is not necessarily enabled either. However, all other components of these competence dimensions are addressed, and aesthetic experience in particular is fully supported. From the perspective of art education, digital game development can thus be legitimized for use in art lessons; with a view to STEAM education and project-oriented teaching, it can even be recommended.
The “HPI Future SOC Lab” is a cooperation of the Hasso Plattner Institute (HPI) and industry partners. Its mission is to enable and promote exchange and interaction between the research community and the industry partners.
The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components which might be too expensive for an ordinary research environment, such as servers with up to 64 cores and 2 TB of main memory. The offerings particularly address researchers from, but are not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies.
This technical report presents the results of research projects executed in 2019. Selected projects presented their results on April 9 and November 12, 2019, at the Future SOC Lab Day events.
The wide distribution of location-acquisition technologies means that large volumes of spatio-temporal data are continuously being accumulated. Positioning systems such as GPS enable the tracking of various moving objects' trajectories, which are usually represented by a chronologically ordered sequence of observed locations. The analysis of movement patterns based on detailed positional information creates opportunities for applications that can improve business decisions and processes in a broad spectrum of industries (e.g., transportation, traffic control, or medicine). Due to the large data volumes generated in these applications, the cost-efficient storage of spatio-temporal data is desirable, especially when in-memory database systems are used to achieve interactive performance requirements.
To efficiently utilize the available DRAM capacities, modern database systems support various tuning possibilities to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional index structures). By considering horizontal data partitioning, we can independently apply different tuning options at a fine-grained level. However, the selection of cost- and performance-balancing configurations is challenging due to the vast number of possible setups consisting of mutually dependent individual decisions.
In this thesis, we introduce multiple approaches to improve spatio-temporal data management by automatically optimizing diverse tuning options for the application-specific access patterns and data characteristics. Our contributions are as follows:
(1) We introduce a novel approach to determine fine-grained table configurations for spatio-temporal workloads. Our linear programming (LP) approach jointly optimizes the (i) data compression, (ii) ordering, (iii) indexing, and (iv) tiering. We propose different models which address cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload, memory budgets, and data characteristics. To yield maintainable and robust configurations, we further extend our LP-based approach to incorporate reconfiguration costs as well as optimizations for multiple potential workload scenarios.
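To illustrate the flavour of such a joint selection problem, the following simplified sketch (not the thesis's actual model, which also covers ordering, indexing, tiering, and cost dependencies) uses the Python library PuLP to pick one compression configuration per table chunk so as to minimise estimated workload cost under a memory budget; all names and numbers are hypothetical:

```python
import pulp

# Hypothetical inputs: per-chunk candidate configurations with
# estimated scan cost (ms) and memory footprint (MB).
chunks = {
    "trips_2019_q1": [("uncompressed", 5.0, 800), ("dict", 7.5, 350), ("runlength", 9.0, 200)],
    "trips_2019_q2": [("uncompressed", 4.0, 760), ("dict", 6.0, 330), ("runlength", 8.5, 190)],
}
memory_budget_mb = 700

prob = pulp.LpProblem("chunk_configuration", pulp.LpMinimize)
x = {(c, i): pulp.LpVariable(f"x_{c}_{i}", cat="Binary")
     for c, opts in chunks.items() for i in range(len(opts))}

# Objective: total estimated workload cost.
prob += pulp.lpSum(x[c, i] * opts[i][1]
                   for c, opts in chunks.items() for i in range(len(opts)))
# Exactly one configuration per chunk.
for c, opts in chunks.items():
    prob += pulp.lpSum(x[c, i] for i in range(len(opts))) == 1
# Stay within the memory budget.
prob += pulp.lpSum(x[c, i] * opts[i][2]
                   for c, opts in chunks.items() for i in range(len(opts))) <= memory_budget_mb

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (c, i), var in x.items():
    if var.value() == 1:
        print(c, "->", chunks[c][i][0])
```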
(2) To optimize the storage layout of timestamps in columnar databases, we present a heuristic approach for the workload-driven combined selection of a data layout and compression scheme. By considering attribute decomposition strategies, we are able to apply application-specific optimizations that reduce the memory footprint and improve performance.
(3) We introduce an approach that leverages past trajectory data to improve the dispatch processes of transportation network companies. Based on location probabilities, we developed risk-averse dispatch strategies that reduce critical delays.
(4) Finally, we used the use case of a transportation network company to evaluate our database optimizations on a real-world dataset. We demonstrate that workload-driven fine-grained optimizations allow us to reduce the memory footprint (by up to 71% at equal performance) or increase performance (by up to 90% at equal memory size) compared to established rule-based heuristics.
Individually, our contributions provide novel approaches to the current challenges in spatio-temporal data mining and database research. Combining them allows in-memory databases to store and process spatio-temporal data more cost-efficiently.
This thesis presents an attempt to use source code synthesised from Coq formalisations of device drivers for existing (micro)kernel operating systems, with a particular focus on the Linux Kernel.
In the first part, the technical background and related work are described. The focus here is on the possible approaches to synthesising certified software with Coq, namely extraction to functional languages using the Coq extraction plugin and extraction to Clight code using the CertiCoq plugin. The implementation of CertiCoq is verified, whereas this is not the case for the Coq extraction plugin; consequently, there is a correctness guarantee for the generated Clight code that does not hold for code generated by the Coq extraction plugin. Furthermore, the differences between user space and kernel space software are discussed in relation to Linux device drivers. It is elaborated that it is not possible to generate working Linux kernel module components using the Coq extraction plugin without significant modifications. In contrast, it is possible to produce working user space drivers both with the Coq extraction plugin and with CertiCoq. The subsequent parts describe the main contributions of the thesis.
In the second part, it is demonstrated how to extend the Coq extraction plugin to synthesise foreign function calls between the functional language OCaml and the imperative language C. This approach has the potential to improve the type-safety of user space drivers. Furthermore, it is shown that the code being synthesised by CertiCoq cannot be used in kernel space without modifications to the necessary runtime. Consequently, the necessary modifications to the runtimes of CertiCoq and VeriFFI are introduced, resulting in the runtimes becoming compatible components of a Linux kernel module. Furthermore, justifications for the transformations are provided and possible further extensions to both plugins and solutions to failing garbage collection calls in kernel space are discussed.
The third part presents a proof-of-concept device driver for the Linux Kernel. To achieve this, the event handler of the original PC Speaker driver is partially formalised in Coq, and some relevant formal properties of the formalised functionality are discussed. Subsequently, a kernel module is defined, utilising the modified variants of CertiCoq and VeriFFI to compile a working device driver. It is furthermore shown that it is possible to compile the synthesised code with CompCert, thereby extending the correctness guarantee to the assembly layer. This is followed by a performance evaluation that compares a naive formalisation of the PC speaker functionality with the original PC Speaker driver, pointing out weaknesses in the formalisation and possible improvements. The part closes with a summary of the results, their implications, and the open questions raised.
The last part lists all sources used, separated into scientific literature, documentation or reference manuals, and artifacts, i.e. source code.
This thesis focuses on the molecular evolution of Macroscelidea, commonly referred to as sengis. Sengis are a mammalian order belonging to the Afrotheria, one of the four major clades of placental mammals. They currently comprise twenty extant species, all of which are endemic to the African continent, and can be separated into two families, the soft-furred sengis (Macroscelididae) and the giant sengis (Rhynchocyonidae). While giant sengis are found exclusively in forest habitats, the different soft-furred sengi species dwell in a broad range of habitats, from tropical rainforests to rocky deserts.
Our knowledge of the evolutionary history of sengis is largely incomplete. The high level of superficial morphological resemblance among different sengi species (especially the soft-furred sengis) has, for example, led to misinterpretations of phylogenetic relationships based on morphological characters. With the rise of DNA-based taxonomic inference, multiple new genera were defined and new species described. Yet no full-taxon molecular phylogeny exists, hampering answers to basic taxonomic questions. This lack of knowledge can to some extent be attributed to the limited availability of fresh-tissue samples for DNA extraction: the broad African distribution, partly in politically unstable regions, and low population densities complicate contemporary sampling approaches. Furthermore, the available DNA information usually covers only short stretches of the mitochondrial genome and thus a single genetic locus with limited informational content.
Developments in DNA extraction and library preparation protocols nowadays make it possible to access DNA from museum specimens collected over the past centuries and stored in natural history museums throughout the world. The difficulties of fresh-sample acquisition for molecular biological studies can thus be overcome by applying museomics, the research field that emerged from these laboratory developments.
This thesis uses fresh-tissue samples as well as a vast collection of museum specimens to investigate multiple aspects of macroscelidean evolutionary history. Chapter 4 focuses on the phylogenetic relationships of all currently known sengi species. By accessing DNA information from museum specimens in combination with fresh-tissue samples and publicly available genetic resources, it produces the first full-taxon molecular phylogeny of sengis. It confirms the monophyly of the genus Elephantulus and discovers multiple deeply divergent lineages within different species, highlighting the need for species-specific approaches. The study furthermore focuses on the evolutionary time frame of sengis by evaluating the impact of commonly varied parameters on tree dating. The results show that the mitochondrial information used in previous studies to temporally calibrate the macroscelidean phylogeny led to an overestimation of node ages within sengis; soft-furred sengis in particular are thus much younger than previously assumed. The refined knowledge of node ages within sengis offers the opportunity to link, e.g., speciation events to environmental changes.
Chapter 5 focuses on the genus Petrodromus with its single representative, Petrodromus tetradactylus. It again exploits the opportunities of museomics and gathers a comprehensive multi-locus genetic dataset of P. tetradactylus individuals distributed across most of the known range of this species. It reveals multiple deeply divergent lineages within Petrodromus, some of which could possibly be associated with previously described subspecies, while at least one was formerly unknown. It underscores the necessity of revising the genus Petrodromus through the integration of both molecular and morphological evidence. The study furthermore identifies changing forest distributions caused by climatic oscillations as the main factor shaping the genetic structure of Petrodromus.
Chapter 6 uses fresh-tissue samples to extend the genomic resources of sengis by thirteen new nuclear genomes, of which two were assembled de novo. An extensive dataset of more than 8000 protein-coding one-to-one orthologs allows the temporal framework of sengi evolution found in Chapter 4 to be further refined and confirmed. The study moreover investigates the role of gene flow and incomplete lineage sorting (ILS) in sengi evolution. In addition, it identifies clade-specific genes of potentially outstanding evolutionary importance and links them to the phenotypic traits they may affect. A closer investigation of olfactory receptor proteins reveals clade-specific differences. A comparison of the demographic past of sengis with that of other small African mammals does not reveal a sengi-specific pattern.
The cooperation between teachers and other professionals is an important element in models of inclusive school and teaching development as well as school effectiveness. Although cooperation is postulated to be significant, studies show that it has so far been practised predominantly in autonomy-preserving forms, whereas it is above all the more complex forms of collaboration that are considered conducive to development. Against the background of inclusive education and the aspiration of the best possible individual development of students, the cooperation of teachers and other professionals is consequently a highly significant topic. It must be asked how cooperation between teachers and other professionals at inclusive primary and secondary schools is organised, which factors influence it, and what relevance the different forms of cooperation have in the process of inclusive school development. Taking up existing research desiderata, this dissertation focuses on the cooperation actually realised between teachers and other professionals at inclusive primary and secondary schools, using the German state of Brandenburg as an example. Besides the realised forms of cooperation, the core research interest lies in identifying cooperation patterns of teachers and professionals as well as of schools, and their associations with students' achievement development.
This dissertation addresses a total of six research questions in three sub-studies. First, descriptive analyses and multilevel models are used to capture the baseline of multiprofessional cooperation (first research question) and its framework conditions (second research question) at the primary and secondary levels (sub-study 1). Teachers and other professionals cooperated predominantly in autonomy-preserving, exchange-based forms. Furthermore, individual openness to cooperation and the subjectively perceived support from the school leadership in particular proved to be significant factors for the realisation of multiprofessional cooperation. Research questions three and four concern the identification of patterns in cooperation behaviour (sub-study 2): person-related profiles of teachers and other professionals (third research question) on the one hand, and school-related profiles (fourth research question) on the other, identified by means of the person-centred approach of latent profile analysis taking the multilevel structure into account. Four profiles were found for individual cooperation behaviour and three for school-specific cooperation behaviour. The majority of teachers and professionals could be located in the "regularly" profile, i.e. by their own assessment they cooperated with above-average frequency through exchange and division of labour, but also regularly co-constructively. At the school level, roughly every second inclusive school in Brandenburg exhibited a highly developed cooperation culture. Sub-study 3 examines how school-specific cooperation cultures relate to students' achievement development at the primary and secondary levels. Using autoregressive multilevel analyses, the association with the achievement development of all students (fifth research question) is investigated, with a specific focus on the development of students with and without special educational needs (sixth research question). A central result was that students with special educational needs at both primary and secondary level benefited most in their achievement development when they attended schools where teachers and professionals very regularly exchanged information about students' learning levels (exchange), developed and distributed work packages for differentiated learning opportunities (division of labour), and additionally occasionally developed solutions to problems together (co-construction).
The results are contextualised and discussed against the background of the postulated relevance of multiprofessional cooperation for inclusive school and teaching development processes. Furthermore, various practical implications for supporting multiprofessional collaboration at the primary and secondary levels are derived.
ADHS bei Jugendlichen
(2024)
ADHD was long regarded as a disorder of childhood, but up to 80% of patients are still affected as adolescents, and it is precisely they who need help with their problems.
At school, they more often have to repeat a grade; in the social and emotional domain, there are conflicts with peers and parents. Left untreated, they face mental disorders, substance abuse, or delinquent behaviour.
This learning training programme is the first multimodal treatment concept for adolescents aged 12 to 17. Concrete problems and tasks from school and everyday life are addressed in order to derive general strategies from them. Parents and teachers are intensively involved in the treatment.
This work addresses the synthesis and polymerization of monomers based on renewable raw materials: commercially available phenylpropanoids found, for example, in spices and essential oils (eugenol, isoeugenol, cinnamyl alcohol, anethole, and estragole), the terpenoid myrtenol, and feedstocks derived from the bark of birch (Betula pendula) and cork oak (Quercus suber). Selected phenylpropanoids (eugenol, isoeugenol, and cinnamyl alcohol) and the terpenoid myrtenol were first converted into the corresponding lauryl esters, and the olefinic moiety was then epoxidized. This yielded four new monofunctional epoxides (2-methoxy-4-(oxiran-2-ylmethyl)phenyl dodecanoate, 2-methoxy-4-(3-methyloxiran-2-yl)phenyl dodecanoate, (3-phenyloxiran-2-yl)methyl dodecanoate, and (7,7-dimethyl-3-oxatricyclo[4.1.1.0²,⁴]octan-2-yl)methyl dodecanoate) and two known ones (2-(4-methoxybenzyl)oxirane and 2-(4-methoxyphenyl)-3-methyloxirane), all characterized by ¹H NMR, ¹³C NMR, and FT-IR spectroscopy as well as by DSC. Photo-DSC measurements of the epoxide monomers in a cationic photopolymerization at 40 °C gave the maximum polymerization rate (Rp,max: 0.005 s⁻¹ to 0.038 s⁻¹) and the time to reach it (tmax: 13 s to 26 s), and produced liquid oligomers whose number-average degree of polymerization, 3 to 6, was determined by GPC. The reaction of 2-methoxy-4-(oxiran-2-ylmethyl)phenyl dodecanoate with methacrylic acid gave an isomer mixture (2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate and 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate), which was studied by photo-DSC in a free-radical photopolymerization (Rp,max: 0.105 s⁻¹, tmax: 5 s) yielding solid, chloroform-insoluble polymers.
From cork powder and ground birch bark, two crystalline ω-hydroxy fatty acids (9,10-epoxy-18-hydroxyoctadecanoic acid and 22-hydroxydocosanoic acid) were selectively isolated. Cationic photopolymerization of 9,10-epoxy-18-hydroxyoctadecanoic acid gave an almost colourless, transparent film that is elastic at room temperature and has application potential for surface coatings. The reaction of 9,10-epoxy-18-hydroxyoctadecanoic acid with methacrylic acid afforded a mixture of two constitutional isomers (9,18-dihydroxy-10-(methacryloyloxy)octadecanoic acid and 9-(methacryloyloxy)-10,18-dihydroxyoctadecanoic acid) that is liquid at room temperature (Tg: −60 °C). The radical photopolymerization of these constitutional isomers was likewise studied by photo-DSC (Rp,max: 0.098 s⁻¹, tmax: 3.8 s). The reaction of 22-hydroxydocosanoic acid with methacryloyl chloride gave crystalline 22-(methacryloyloxy)docosanoic acid, which was also studied in a radical photopolymerization by photo-DSC (Rp,max: 0.023 s⁻¹, tmax: 9.6 s).
The AIBN-initiated homopolymerization in dimethyl sulfoxide of 22-(methacryloyloxy)docosanoic acid and of the isomer mixtures of 2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate / 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate and of 9,18-dihydroxy-10-(methacryloyloxy)octadecanoic acid / 9-(methacryloyloxy)-10,18-dihydroxyoctadecanoic acid gave solid, soluble polymers, characterized by ¹H NMR and FT-IR spectroscopy, by GPC (poly(2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate / 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate): Pn = 94), and by DSC (the same copolymer: Tg: 52 °C; poly(9,18-dihydroxy-10-(methacryloyloxy)octadecanoic acid / 9-(methacryloyloxy)-10,18-dihydroxyoctadecanoic acid): Tg: 10 °C; poly(22-(methacryloyloxy)docosanoic acid): Tm: 74.1 °C, comparable to the melting point of the photopolymer (Tm = 76.8 °C)).
The known monomer 4-(4-methacryloyloxyphenyl)butan-2-one was prepared from 4-(4-hydroxyphenyl)butan-2-one, which can be obtained from birch bark, and was polymerized under identical conditions for comparison with the new monomers. Free-radical polymerization gave poly(4-(4-methacryloyloxyphenyl)butan-2-one) (Pn: 214, Tg: 83 °C). Besides the homopolymerizations, a statistical copolymerization of the isomer mixture 2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate / 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate with 4-(4-methacryloyloxyphenyl)butan-2-one was investigated; an equimolar feed of the starting monomers led to an increase in the yield, the molar-mass distribution, and the dispersity of the copolymer (Tg: 44 °C). Free-radical homopolymerizations of 4-(4-methacryloyloxyphenyl)butan-2-one and of lauryl methacrylate, initiated with AIBN in diethyl carbonate as a "green" solvent, gave comparable degrees of polymerization (Pn: 150) but, owing to the structural differences, markedly different glass transition temperatures (poly(4-(4-methacryloyloxyphenyl)butan-2-one): Tg: 70 °C; poly(lauryl methacrylate): Tg: −49 °C). A statistical copolymerization of equimolar amounts of the two monomers in diethyl carbonate led, at a polymerization time of 60 minutes, to a slightly preferential incorporation of 4-(4-methacryloyloxyphenyl)butan-2-one into the copolymer (Tg: 17 °C). Copolymerization diagrams for the free-radical copolymerizations of 4-(4-methacryloyloxyphenyl)butan-2-one with n-butyl methacrylate and with 2-(dimethylamino)ethyl methacrylate (t: 20 min to 60 min; mole fractions X of 4-(4-methacryloyloxyphenyl)butan-2-one: 0.2, 0.4, 0.6, and 0.8) showed nearly ideal azeotropic copolymerization behaviour, although a slightly preferential incorporation of 4-(4-methacryloyloxyphenyl)butan-2-one into the respective copolymer was observed. An increase in the yield and in the glass transition temperature of the resulting copolymers correlates with an increasing content of 4-(4-methacryloyloxyphenyl)butan-2-one in the reaction mixture. The glass transition temperatures calculated with the modified Gibbs-DiMarzio equation agreed well with the measured values, a good starting point for estimating the glass transition temperature of a copolymer of arbitrary composition.
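The last sentence refers to a modified Gibbs-DiMarzio equation for copolymer glass transition temperatures. As a hedged sketch, the following uses one commonly cited mole-fraction form with an adjustable parameter K; the exact variant used in the thesis may differ, and the homopolymer Tg values are taken from the abstract above:

```python
# Hedged sketch: estimating a binary copolymer Tg from the homopolymer
# values. One commonly cited form of the modified Gibbs-DiMarzio relation
# uses mole fractions x_i and an adjustable fit parameter K:
#     Tg = (x1*Tg1 + K*x2*Tg2) / (x1 + K*x2)
# The exact variant used in the thesis may differ; K = 1 recovers the ideal
# (linear-in-mole-fraction) behaviour.

def copolymer_tg(x1: float, tg1: float, tg2: float, k: float = 1.0) -> float:
    """Tg of a binary copolymer in Kelvin; x1 is the mole fraction of monomer 1."""
    x2 = 1.0 - x1
    return (x1 * tg1 + k * x2 * tg2) / (x1 + k * x2)

# Illustration with the homopolymer values reported above (converted to K):
# poly(4-(4-methacryloyloxyphenyl)butan-2-one) Tg = 70 °C, poly(lauryl
# methacrylate) Tg = -49 °C; equimolar copolymer, ideal case K = 1:
tg = copolymer_tg(0.5, 70 + 273.15, -49 + 273.15)
print(f"estimated Tg: {tg - 273.15:.0f} °C")  # ~10 °C; measured value was 17 °C
```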
We analyze how conventional emissions trading schemes (ETS) can be modified by introducing "clean-up certificates" to allow for a phase of net-negative emissions. Clean-up certificates bundle the permission to emit CO2 with the obligation for its removal. We show that demand for such certificates is determined by cost-saving technological progress, the discount rate, and the length of the compliance period. Introducing extra clean-up certificates into an existing ETS reduces near-term carbon prices and mitigation efforts. In contrast, substituting ETS allowances with clean-up certificates reduces cumulative emissions without depressing carbon prices or mitigation in the near term. We calibrate our model to the EU ETS and identify reforms in which simultaneously (i) ambition levels rise, (ii) climate damages fall, (iii) revenues from carbon prices rise, and (iv) carbon prices and aggregate mitigation costs fall. To reduce climate damages, roughly half of the issued clean-up certificates should replace conventional ETS allowances. In the context of the EU ETS, a European Carbon Central Bank could manage the implementation of clean-up certificates and serve as an enforcement mechanism.
Ecosystems play a pivotal role in addressing climate change but are also highly susceptible to drastic environmental changes. Investigating their historical dynamics can enhance our understanding of how they might respond to unprecedented future environmental shifts. With Arctic lakes currently under substantial pressure from climate change, lessons from the past can guide our understanding of potential disruptions to these lakes. However, individual lake systems are multifaceted and complex. Traditional isolated lake studies often fail to provide a global perspective because localized nuances—like individual lake parameters, catchment areas, and lake histories—can overshadow broader conclusions. In light of these complexities, a more nuanced approach is essential to analyze lake systems in a global context.
A key to addressing this challenge lies in the data-driven analysis of sedimentological records from various northern lake systems. This dissertation emphasizes lake systems in the northern Eurasian region, particularly in Russia (n=59). For this doctoral thesis, we collected sedimentological data from various sources, which required a standardized framework for further analysis. Therefore, we designed a conceptual model for integrating and standardizing heterogeneous multi-proxy data into a relational database management system (PostgreSQL). Creating a database from the collected data enabled comparative numerical analyses between spatially separated lakes as well as between different proxies.
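To make the standardization framework concrete, here is a hedged sketch of one possible relational layout for heterogeneous multi-proxy data, using Python's built-in sqlite3 as a lightweight stand-in for the PostgreSQL system mentioned above; all table and column names are hypothetical illustrations, not the schema of the dissertation:

```python
# Hedged sketch of a relational layout for heterogeneous lake multi-proxy
# data; sqlite3 stands in for PostgreSQL, and all names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE lake (
    lake_id    INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    lat        REAL, lon REAL
);
CREATE TABLE proxy (
    proxy_id   INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,     -- e.g. 'TOC', 'pollen'
    unit       TEXT
);
CREATE TABLE measurement (
    lake_id    INTEGER REFERENCES lake(lake_id),
    proxy_id   INTEGER REFERENCES proxy(proxy_id),
    depth_cm   REAL NOT NULL,     -- composite core depth
    value      REAL,
    age_cal_bp REAL               -- filled in after age-depth modeling
);
""")
# A standardized layout like this allows cross-lake, cross-proxy queries,
# e.g. all TOC values between 10 and 8 ka cal BP across all lakes.
```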
When analyzing numerous lakes, establishing a common frame of reference was crucial. We achieved this by converting proxy values from depth dependency to age dependency, which required consistent age calculations across all lakes and proxies using a single age-depth modeling tool. Recognizing the broader implications and potential pitfalls of this step, we developed the LANDO approach ("Linked Age and Depth Modelling"). LANDO integrates multiple age-depth modeling tools into a single, cohesive platform (a Jupyter Notebook). Beyond aggregating the output of five established age-depth modeling tools, LANDO enables users to filter out implausible model outcomes using robust geoscientific data. This approach is not only novel but also significantly enhances the accuracy and reliability of lake analyses.
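The core depth-to-age conversion can be illustrated with a minimal sketch: proxy values measured on a depth scale are mapped onto a common age scale through an age-depth model. Here the "model" is a simple monotone interpolation of hypothetical dated horizons; LANDO itself aggregates the output of dedicated modeling tools:

```python
# Minimal depth-to-age conversion sketch; dated horizons are hypothetical.
import numpy as np

# Dated horizons of one core (depth in cm, calibrated age in years BP).
model_depth = np.array([0.0, 50.0, 120.0, 200.0])
model_age   = np.array([-60.0, 2300.0, 7800.0, 14500.0])

# Proxy samples measured at arbitrary depths:
sample_depth = np.array([10.0, 75.0, 180.0])
sample_age = np.interp(sample_depth, model_depth, model_age)
print(sample_age)  # approx. [412., 4264.3, 12825.] years BP on the common scale
```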
Considering the preceding steps, this doctoral thesis further examines the relationship between carbon in sediments and temperature over the last 21,000 years. Initially, we hypothesized a positive correlation between carbon accumulation in lakes and modelled paleotemperature. Our homogenized dataset from heterogeneous lakes confirmed this association, even though the highest temperatures in our observation period do not coincide with the highest carbon values. We assume that rapid warming events contribute more to high accumulation, while sustained warming leads to carbon outgassing. Given the current high concentration of carbon in the atmosphere and rising temperatures, ongoing climate change could cause northern lake systems to contribute to a further increase in atmospheric carbon (a positive feedback loop). While our findings underscore the reliability of both our standardized data and the LANDO method, expanding the dataset would offer even greater confidence in our conclusions.
Improving permafrost dynamics in land surface models: insights from dual sensitivity experiments
(2024)
The thawing of permafrost and the subsequent release of greenhouse gases constitute one of the most significant and uncertain positive feedback loops in the context of climate change, making predictions regarding changes in permafrost coverage of paramount importance. To address these critical questions, climate scientists have developed Land Surface Models (LSMs) that encompass a multitude of physical soil processes. This thesis is committed to advancing our understanding and refining precise representations of permafrost dynamics within LSMs, with a specific focus on the accurate modeling of heat fluxes, an essential component for simulating permafrost physics.
The first research question provides an overview of the fundamental model prerequisites for representing permafrost soils in land surface modeling. It includes a first-of-its-kind comparison of the LSMs in CMIP6, revealing their differences and shortcomings in key permafrost physics parameters. Each of these LSMs represents a unique approach to simulating soil processes and their interactions with the climate system; choosing the most appropriate model for a particular application depends on factors such as the spatial and temporal scale of the simulation, the specific research question, and the available computational resources.
The second research question evaluates the performance of the state-of-the-art Community Land Model (CLM5) in simulating Arctic permafrost regions. Our approach overcomes traditional evaluation limitations by individually addressing depth, seasonality, and regional variations, providing a comprehensive assessment of permafrost and soil temperature dynamics. I compare CLM5's results with three extensive datasets: (1) soil temperatures from 295 borehole stations, (2) active layer thickness (ALT) data from the Circumpolar Active Layer Monitoring Network (CALM), and (3) soil temperatures, ALT, and permafrost extent from the ESA Climate Change Initiative (ESA-CCI). The results show that CLM5 aligns well with ESA-CCI and CALM for permafrost extent and ALT but reveals a significant global cold temperature bias, notably over Siberia. These results echo a persistent challenge identified in numerous studies: the existence of a systematic 'cold bias' in soil temperature over permafrost regions. To address this challenge, the following research questions propose dual sensitivity experiments.
The third research question represents the first study to apply a Plant Functional Type (PFT)-based approach to derive soil texture and soil organic matter (SOM), departing from the conventional use of coarse-resolution global data in LSMs. This novel method results in a more uniform distribution of soil organic matter density (OMD) across the domain, characterized by reduced OMD values in most regions. However, changes in soil texture exhibit a more intricate spatial pattern. Comparing the results to observations reveals a significant reduction in the cold bias observed in the control run. This method shows noticeable improvements in permafrost extent, but at the cost of an overestimation in ALT. These findings emphasize the model's high sensitivity to variations in soil texture and SOM content, highlighting the crucial role of soil composition in governing heat transfer processes and shaping the seasonal variation of soil temperatures in permafrost regions.
Expanding upon a site experiment conducted at Trail Valley Creek by Dutch et al. (2022), the fourth research question extends the application of the snow scheme proposed by Sturm et al. (1997) to the entire Arctic domain. By employing a snow scheme better suited to the snow density profile observed over permafrost regions, this thesis assesses its influence on simulated soil temperatures. Comparison with observational datasets reveals a substantial reduction, in most regions, of the cold bias present in the control run; in mountainous areas, however, there is a distinct overshoot towards a warm bias. The Sturm experiment effectively addressed the overestimation of permafrost extent in the control run, albeit with a substantial reduction of permafrost extent over mountainous areas. ALT results remain largely consistent with the control run. These outcomes align with our initial hypothesis, which anticipated that the reduced snow insulation in the Sturm run would lead to higher winter soil temperatures and a more accurate representation of permafrost physics.
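For context, a hedged sketch of the density-dependent snow thermal conductivity regression attributed to Sturm et al. (1997), the quantity the Sturm experiment changes; the coefficients are quoted from the literature and should be verified against the original paper before use:

```python
# Snow thermal conductivity after Sturm et al. (1997); rho in g cm^-3,
# k_eff in W m^-1 K^-1. Coefficients quoted from the literature -- verify
# against the original paper before use.
def sturm_conductivity(rho: float) -> float:
    """Effective thermal conductivity of snow as a function of density."""
    if rho < 0.156:
        return 0.023 + 0.234 * rho
    return 0.138 - 1.01 * rho + 3.233 * rho ** 2

for rho in (0.10, 0.20, 0.35):   # fresh, settled, and wind-packed snow
    print(f"rho={rho:.2f} g/cm^3 -> k={sturm_conductivity(rho):.3f} W/m/K")
```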
In summary, this thesis demonstrates significant advancements in understanding permafrost dynamics and its integration into LSMs. It has meticulously unraveled the intricacies involved in the interplay between heat transfer, soil properties, and snow dynamics in permafrost regions. These insights offer novel perspectives on model representation and performance.
In view of the current heightened public interest in models of working-time reduction with full wage compensation, this literature review aims to present and critically evaluate the current German- and English-language state of research on the potential benefits of working-time reductions with wage compensation (AZV+) for public-sector employers. The review is based on ten publications, most of which conclude that AZV+ produces no negative effects, but rather neutral or, in the majority of cases, positive effects for the employer side: in particular improved stress levels and health outcomes, stable or increased productivity and motivation/energy, and reduced absenteeism. The inducement-contribution theory lends itself well as an explanatory model for these results, since it specifies how incentive systems such as AZV+ can, via employees' subjective need satisfaction and within certain limits (no increase in contribution demands through workload adjustment), produce effects that indirectly also benefit organizational goals. The motivation-theoretical elements also applied, from Cognitive Evaluation Theory and Motivation Crowding Theory, are less suited to explaining the observed effects, since the differentiation of types of motivation appears to be immaterial in the studies examined. Overall, the body of research on AZV+ in general, and in the public sector in particular, is very thin and offers little basis for generalizing statements, so there is substantial need for further research on this topic.
Today, near-surface investigations are frequently conducted using non-destructive or minimally invasive methods of applied geophysics, particularly in the fields of civil engineering, archaeology, geology, and hydrology. One field that plays an increasingly central role in research and engineering is the examination of sedimentary environments, for example, for characterizing near-surface groundwater systems. A commonly employed method in this context is ground-penetrating radar (GPR). In this technique, short electromagnetic pulses are emitted into the subsurface by an antenna, which are then reflected, refracted, or scattered at contrasts in electromagnetic properties (such as the water table). A receiving antenna records these signals in terms of their amplitudes and travel times. Analysis of the recorded signals allows for inferences about the subsurface, such as the depth of the groundwater table or the composition and characteristics of near-surface sediment layers. Due to the high resolution of the GPR method and continuous technological advancements, GPR data acquisition is increasingly performed in three-dimensional (3D) fashion today.
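As a hedged aside, the two textbook relations underlying GPR depth estimation read as follows (a low-loss, non-magnetic medium is assumed, and the permittivity value in the example is illustrative):

```python
# Standard low-loss GPR relations: velocity from relative permittivity
# eps_r, reflector depth from two-way travel time.
C0 = 0.299792458  # speed of light in vacuum [m/ns]

def gpr_velocity(eps_r: float) -> float:
    """Electromagnetic wave velocity in a low-loss medium [m/ns]."""
    return C0 / eps_r ** 0.5

def reflector_depth(twt_ns: float, eps_r: float) -> float:
    """Depth [m] of a reflector recorded at two-way travel time twt_ns."""
    return gpr_velocity(eps_r) * twt_ns / 2.0

# Illustrative values only: a water-table reflection at 60 ns two-way
# travel time in unsaturated sand (eps_r ~ 5) lies at about 4 m depth.
print(round(reflector_depth(60.0, 5.0), 2))
```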
Despite the considerable temporal and technical efforts involved in data acquisition and processing, the resulting 3D data sets (providing high-resolution images of the subsurface) are typically interpreted manually. This is generally an extremely time-consuming analysis step. Therefore, representative 2D sections highlighting distinctive reflection structures are often selected from the 3D data set. Regions showing similar structures are then grouped into so-called radar facies. The results obtained from 2D sections are considered representative of the entire investigated area. Interpretations conducted in this manner are often incomplete and highly dependent on the expertise of the interpreters, making them generally non-reproducible.
A promising alternative or complement to manual interpretation is the use of GPR attributes. Instead of using the recorded data directly, derived quantities characterizing distinctive reflection structures in 3D are applied for interpretation. Using various field and synthetic data sets, this thesis investigates which attributes are particularly suitable for this purpose. Additionally, the study demonstrates how selected attributes can be utilized through specific processing and classification methods to create 3D facies models. The ability to generate attribute-based 3D GPR facies models allows for partially automated and more efficient interpretations in the future. Furthermore, the results obtained in this manner describe the subsurface in a reproducible and more comprehensive manner than what has typically been achievable through manual interpretation methods.
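A hedged sketch of one classic attribute of the kind meant above: the instantaneous amplitude (envelope), computed per trace from the analytic signal. Which attributes the thesis ultimately selects is established by its field and synthetic tests, not by this illustration:

```python
# Instantaneous-amplitude (envelope) attribute from the analytic signal;
# the toy wavelet below is illustrative, not field data.
import numpy as np
from scipy.signal import hilbert

def envelope(traces: np.ndarray) -> np.ndarray:
    """Instantaneous amplitude of GPR traces (samples along the last axis)."""
    return np.abs(hilbert(traces, axis=-1))

t = np.arange(0, 100)                        # time samples [ns]
trace = np.exp(-t / 30.0) * np.cos(2 * np.pi * 0.1 * t)  # decaying wavelet
env = envelope(trace[np.newaxis, :])         # smooth, positive envelope
# Attribute volumes like this, computed for every trace of a 3D data set,
# are the inputs to the clustering/classification into radar facies.
```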
Legitimiertes Unrecht
(2024)
The Supreme Court of the GDR was an integral part of the socialist state leadership and was subject to strict structures of thought and organization. It was closely tied to the political agenda of the SED and enjoyed no independence whatsoever. The court's interpretation of GDR law was guided exclusively by the domestic and foreign policy interests of the SED. This also applied to its jurisprudence in cases of Republikflucht (flight from the republic) and its statutory precursors. The highest court in the state was actively involved in shaping and implementing the criminal justice directed against those who fled the republic, which contributed substantially to consolidating the SED's power. The present study analyses judgments of the Supreme Court in their historical and political context and shows that its adjudication served party-political aims exclusively and was committed neither to the people nor to genuine legal reasoning. It further examines the court's decisive contribution to the step-by-step criminalization of GDR citizens. This casts a critical light on the role of legal systems in safeguarding the rule of law and human rights in authoritarian regimes.
The dynamic landscape of digital transformation has an impact on industrial-age manufacturing companies that goes beyond product offerings: it changes operational paradigms and requires an organization-wide metamorphosis. One initiative to address these challenges is the creation of Digital Innovation Units (DIUs), departments or distinct legal entities that use new structures and practices to develop digital products, services, and business models and support or drive incumbents' digital transformation. With more than 300 units in German-speaking countries alone and an increasing number of scientific publications, DIUs have become a widespread phenomenon in both research and practice.
This dissertation examines the evolution process of DIUs in the manufacturing industry during their first three years of operation, through an extensive longitudinal single-case study and several cross-case syntheses of seven DIUs. Building on the lenses of organizational change and development, time, and socio-technical systems, this research provides insights into the fundamentals, temporal dynamics, socio-technical interactions, and relational dynamics of a DIU's evolution process. Thus, the dissertation promotes a dynamic understanding of DIUs and adds a two-dimensional perspective to the often one-dimensional view of these units and their interactions with the main organization throughout the startup and growth phases of a DIU.
Furthermore, the dissertation constructs a phase model that depicts the early stages of DIU evolution based on these findings and by incorporating literature from information systems research. As a result, it illustrates the progressive intensification of collaboration between the DIU and the main organization. After being implemented, the DIU sparks initial collaboration and instigates change within (parts of) the main organization. Over time, it adapts to the corporate environment to some extent, responding to changing circumstances in order to contribute to long-term transformation. Temporally, the DIU drives the early phases of cooperation and adaptation in particular, while the main organization triggers the first major evolutionary step and realignment of the DIU.
Overall, the thesis identifies DIUs as malleable organizational structures that are crucial for digital transformation. Moreover, it provides guidance for practitioners on the process of building a new DIU from scratch or optimizing an existing one.
The present thesis looks at cultural conceptualisations in relation to DEATH in Irish English from a Cultural Linguistic perspective and puts a special focus on the diachronic development of these conceptualisations. For the study, a corpus consisting of 1,400 death notices from the Dublin-based national newspaper The Irish Times from 14 historical periods between 1859 and 2023 was compiled, resulting in a highly specialised 70,000-word corpus. First, the manual qualitative analysis of the death notices produced evidence for eight superordinate cultural conceptualisations surrounding DEATH, namely, in the order of their frequency THE DEAD ARE TO BE REMEMBERED OR REGRETTED, DEATH IS SOMETHING POSITIVE, DEATH IS REST, DEATH IS A JOURNEY, DYING IS THE BEGINNING OF ANOTHER LIFE, DEATH IS (NOT) A TABOO, DEATH IS GOD’S WILL, and DEATH IS THE END. These conceptualisations were derived from linguistic expressions in the death notices that have these conceptualisations as a cognitive basis. Second, the quantitative comparison of the individual conceptualisations detected diachronic variation, which is interconnected with historical and social developments in Ireland. The thesis, therefore, illustrates the applicability of Cultural Linguistics as an adequate method for diachronic studies interested in culturally determined developments of conceptualisations.
-Ottmar Ette, Ingo Schwarz: „Ein junges, neues Geschlecht wird besseres liefern als das alte“. Ein Empfehlungsbrief Alexander von Humboldts für Carl Ludwig
-GAO Hong: Nachgedanken zur Übersetzung des ersten Bandes von Humboldts Kosmos
-Tobias Kraft: Neue Quellen zu Humboldts Kuba-Forschung. Das „Digitale Dossier“ des Proyecto Humboldt Digital (2019 – 2023)
-Vera Kutzinski: Off-Road Adventures: Reading Statistics in Alexander von Humboldt’s Political Essay on the Kingdom of New Spain
-Krzysztof Zielnica: Alexander von Humboldt und Polen – zum 150. Jahrestag seiner Reise nach Warschau. Mit einleitenden Worten von Ingo Schwarz
The research project "Workflow-Management-Systeme für Open-Access-Hochschulverlage (OA-WFMS)" is a cooperation between HTWK Leipzig and the University of Potsdam. Its goal is to analyse the needs of university presses and their requirements for a workflow management system (WFMS) in order to derive a generic requirements specification. The WFMS is intended to simplify and accelerate the publication process in open-access presses and to promote the spread of open access and sustainable digital scholarly publishing.
The project builds on the results of the projects "Open-Access-Hochschulverlag (OA-HVerlag)" and "Open-Access-Strukturierte-Kommunikation (OA-STRUKTKOMM)". The kick-off workshop underlying this report took place in Leipzig in 2024 with representatives of ten institutions. The workshop served to identify challenges and requirements for a WFMS and to discuss existing approaches and tools.
The workshop addressed the following questions:
a. How can the organization and monitoring of publication processes in scholarly presses be made efficient by a WFMS?
b. Which requirements must a WFMS fulfil to optimally support publication processes?
c. Which interfaces must be taken into account to guarantee the interoperability of the systems?
d. Which existing approaches and tools are already in use, and what are their advantages and disadvantages?
The workshop was divided into two parts: part 1 dealt with challenges and requirements (questions a. to c.), part 2 with existing solutions and tools (question d.). The results of the workshop feed into the research project's needs analysis.
The results documented in this report show the multitude of challenges the existing approaches to open-access publication management face, particularly with regard to system heterogeneity, individual customization needs, and the necessity of systematic documentation. The support systems and tools currently in use, such as file repositories and project management and communication tools, cannot meet the requirements as a whole, although they remain usable for partial solutions. The integration of existing systems into an OA-WFMS yet to be developed must therefore be considered, and the interoperability of the interacting systems ensured. The workshop participants agreed that the OA-WFMS should be designed in a flexible and modular way, with preference given to consortial software development and joint operation in a network.
The workshop provided valuable insights into the work of university presses and thus forms a solid basis for the subsequent needs analysis and the drafting of the generic requirements specification.
A comprehensive understanding of seismic hazard and earthquake triggering is crucial for the effective mitigation of earthquake risks. The destructive nature of earthquakes motivates researchers to work on forecasting despite the apparent randomness of earthquake occurrences. Understanding their underlying mechanisms and patterns is vital, given their potential for widespread devastation and loss of life. This thesis combines methodologies, including Coulomb stress calculations and aftershock analysis, to shed light on earthquake complexities, ultimately enhancing seismic hazard assessment.
The Coulomb failure stress (CFS) criterion is widely used to predict the spatial distribution of aftershocks following large earthquakes. However, uncertainties in CFS calculations arise from non-unique slip inversions and unknown fault networks, particularly through the choice of the assumed aftershock (receiver) mechanisms. Recent studies have proposed alternative stress quantities and deep neural network approaches as superior to CFS with predefined receiver mechanisms. To challenge these propositions, I utilized 289 slip inversions from the SRCMOD database to calculate more realistic CFS values for a layered half-space and variable receiver mechanisms. The analysis also investigates the impact of the magnitude cutoff, grid size variation, and aftershock duration on the ranking of stress metrics using receiver operating characteristic (ROC) analysis. The results reveal that the performance of the stress metrics improves significantly after accounting for receiver variability, as well as for larger aftershocks and shorter time periods, without altering the relative ranking of the different stress metrics.
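A hedged sketch of the ROC ranking logic: grid cells are labelled by aftershock occurrence, the stress values serve as scores, and the AUC summarizes each metric's discrimination skill. The arrays below are synthetic placeholders, not the SRCMOD-derived values of the study:

```python
# ROC/AUC comparison of two hypothetical stress metrics on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_cells = 10_000
has_aftershock = rng.random(n_cells) < 0.1          # binary label per cell

# Two hypothetical stress metrics evaluated on the same grid:
cfs_variable_receivers = rng.normal(size=n_cells) + 1.2 * has_aftershock
cfs_fixed_receivers    = rng.normal(size=n_cells) + 0.6 * has_aftershock

for name, score in [("variable receivers", cfs_variable_receivers),
                    ("fixed receivers", cfs_fixed_receivers)]:
    print(name, round(roc_auc_score(has_aftershock, score), 3))
# Higher AUC -> better discrimination; in the study, accounting for receiver
# variability improved the skill without changing the metrics' relative ranks.
```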
To corroborate the Coulomb stress calculations with the findings of earthquake source studies in more detail, I studied the source properties of the 2005 Kashmir earthquake and its aftershocks, aiming to unravel the seismotectonics of the NW Himalayan syntaxis. I simultaneously relocated the mainshock and its largest aftershocks using phase data and then computed the Coulomb failure stress changes on the relocated aftershock planes. All large aftershocks lie in regions of positive stress change, indicating triggering by either co-seismic or post-seismic slip on the mainshock fault.
Finally, I investigated the relationship between mainshock-induced stress changes and the associated seismicity parameters, in particular those of the frequency-magnitude (Gutenberg-Richter) distribution and of the temporal aftershock decay (Omori-Utsu law). For that purpose, I used my global data set of 127 mainshock-aftershock sequences, with the calculated Coulomb stress changes (ΔCFS) and the alternative receiver-independent stress metrics in the vicinity of the mainshocks, and analyzed how the aftershock properties depend on the stress values. Surprisingly, the results show a clear positive correlation between the Gutenberg-Richter b-value and the induced stress, contrary to expectations from laboratory experiments. This observation highlights the significance of structural heterogeneity and strength variations for seismicity patterns. Furthermore, the study demonstrates that aftershock productivity increases nonlinearly with stress, while the Omori-Utsu parameters c and p systematically decrease with increasing stress changes. These partly unexpected findings have significant implications for future estimates of aftershock hazard.
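For reference, hedged sketches of the two empirical laws analysed here: Aki's (1965) maximum-likelihood b-value estimator (binning correction omitted for brevity) and the Omori-Utsu aftershock rate n(t) = K / (c + t)^p; the catalogue below is synthetic:

```python
# Gutenberg-Richter b-value (Aki, 1965 MLE) and Omori-Utsu rate; toy data.
import numpy as np

def b_value(magnitudes: np.ndarray, m_c: float) -> float:
    """Gutenberg-Richter b-value via maximum likelihood (Aki, 1965)."""
    m = magnitudes[magnitudes >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

def omori_rate(t: np.ndarray, K: float, c: float, p: float) -> np.ndarray:
    """Omori-Utsu aftershock rate as a function of time since the mainshock."""
    return K / (c + t) ** p

# Toy catalogue with a true b-value of 1.0 above completeness Mc = 2.5:
rng = np.random.default_rng(2)
mags = 2.5 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)
print(round(b_value(mags, 2.5), 2))        # close to 1.0
print(omori_rate(np.array([0.1, 1.0, 10.0]), K=100.0, c=0.1, p=1.1))
```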
The findings of this thesis provide valuable insights into earthquake-triggering mechanisms by examining the relationship between stress changes and aftershock occurrence. The results contribute to an improved understanding of earthquake behavior and can aid in the development of more accurate probabilistic seismic hazard forecasts and risk-reduction strategies.
We study the effect of energy and transport policies on pollution in two developing-country cities. We use a quantitative equilibrium model with choice of housing, energy use, residential location, transport mode, and energy technology. Pollution comes from commuting and residential energy use. The model parameters are calibrated to replicate key variables for two developing-country cities, Maputo, Mozambique, and Yogyakarta, Indonesia. In the counterfactual simulations, we study how various transport and energy policies affect equilibrium pollution. Policies may induce rebound effects through increased residential energy use or through switches to high-emission modes or locations. In general, these rebound effects tend to be largest for subsidies to public transport or to modern residential energy technology.
While the economic harm of cartels stems from their price-increasing effect, court sanctioning targets the underlying process by which firms reach a price-fixing agreement. This paper provides experimental evidence on whether such sanctioning meets the economic target, i.e., whether evidence of a collusive meeting of the firms and of the content of their communication reliably predicts subsequent prices. We find that the mere mutual agreement to meet already predicts a strong increase in prices. Conversely, express distancing from communication completely nullifies its otherwise price-increasing effect. Using machine learning, we show that communication only increases prices if it is very explicit about how the cartel plans to behave.
Interventional treatment of atrial fibrillation causes injury to adjacent tissues and organs more often than was recognized in the past. This work focuses on damage to the oesophagus, which is of particular relevance because of its poor predictability, its delayed onset, and the fatal prognosis once an atrio-oesophageal fistula has formed.
Atrial fibrillation itself does not pose an immediate vital threat, but it is prognostically relevant through its complications (e.g., heart failure, stroke). Antiarrhythmic drugs do not achieve adequate rhythm control (freedom from arrhythmia); catheter-based treatment is superior to drug therapy. Early and successful treatment of atrial fibrillation has been shown to improve clinical endpoints and prognosis. However, the risk of invasive treatment (particularly the occurrence of prognostically relevant complications) must be considered when establishing the indication and performing the procedure, and weighed against the beneficial effects of treatment.
Studies aiming to prevent the very rare atrio-oesophageal fistulas rely on surrogate parameters, so far exclusively on ablation-induced mucosal lesions of the oesophagus. The investigations in this work reveal a more complex picture of (peri-)oesophageal injury after atrial fibrillation ablation with thermal energy sources.
(1) A new definition of oesophageal injury: Oesophageal and peri-oesophageal damage occurs very frequently (in two thirds of patients according to the extended definition used here) and is independent of the ablation energy used. The manifestations of oesophageal injury differ between the energy protocols, though the underlying mechanism remains unexplained. This work describes the different presentations of thermal oesophageal injury, their determinants, and their pathophysiological relevance.
(2) The detection of (sometimes subtle) oesophageal injury depends critically on the intensity of follow-up. Relying on subjective reports (e.g., pain on swallowing, heartburn) is misleading: the majority of changes remain asymptomatic, and by the time an established atrio-oesophageal fistula becomes symptomatic (usually after several weeks), the prognosis is already very poor. In most electrophysiology centres, endoscopy of the oesophagus is performed either not at all or only in the case of persistent symptoms, and it can only detect mucosal lesions. The extent of oesophageal and peri-oesophageal injury is thereby vastly underestimated. Changes in the peri-oesophageal space, whose clinical relevance is (still) unclear, are not captured, so wall oedema and damage to the tissue between the left atrium and the oesophagus (including nerves and vessels) are ignored.
The studies also contribute to a reassessment of established measures and risk factors of oesophageal injury.
(3) Temperature monitoring in the oesophagus based on maximum deviations is informative only for extreme values and is therefore not helpful for avoiding oesophageal lesions. Complex analysis of the raw temperature data (so far only possible offline) yields, in the AUC for RF ablations, a predictive parameter for oesophageal injury that allows the subsequent endoscopic work-up to be structured. No comparable value could be found for cryoablations in these analyses.
(4) Chronic inflammation of the lower third of the oesophagus not only impedes the healing of a thermal oesophageal lesion but can also favour the occurrence of such lesions during ablation. The large number of pre-existing oesophageal changes, which indicate increased vulnerability, and their role in the development of thermal lesions may be the starting point for preventive measures.
In addition, manifestations of oesophageal injury that may be relevant on pathophysiological grounds are captured and described by means of extensive diagnostics.
(5) The systematic extension of imaging to the peri-oesophageal space by endosonography showed that mucosal lesions alone represent only a small part of oesophageal injury. Mucosal lesions resulting from instrumental injury are not associated with the risk of developing an atrio-oesophageal fistula, underlining the pathophysiological relevance of the peri-oesophageal changes.
(6) Functional diagnostics of thermal injury to the peri-oesophageal vagal plexus identify patients with oesophageal damage not captured by imaging, whose consequences (food retention and gastro-oesophageal reflux) may nevertheless contribute to lesion progression.
This thesis presents a comprehensive exploration of the application of DNA origami nanofork antennas (DONAs) in the field of spectroscopy, with a particular focus on the structural analysis of Cytochrome C (CytC) at the single-molecule level. The research encapsulates the design, optimization, and application of DONAs in enhancing the sensitivity and specificity of Raman spectroscopy, thereby offering new insights into protein structures and interactions.
The initial phase of the study involved the meticulous optimization of DNA origami structures. This process was pivotal in developing nanoscale tools that could significantly enhance the capabilities of Raman spectroscopy. The optimized DNA origami nanoforks, in both dimer and aggregate forms, demonstrated an enhanced ability to detect and analyze molecular vibrations, contributing to a more nuanced understanding of protein dynamics.
A key aspect of this research was the comparative analysis between the dimer and aggregate forms of DONAs. This comparison revealed that while both configurations effectively identified oxidation and spin states of CytC, the aggregate form offered a broader range of detectable molecular states due to its prolonged signal emission and increased number of molecules. This extended duration of signal emission in the aggregates was attributed to the collective hotspot area, enhancing overall signal stability and sensitivity.
Furthermore, the study delved into the analysis of the Amide III band using the DONA system. Observations included a transient shift in the Amide III band's frequency, suggesting dynamic alterations in the secondary structure of CytC. These shifts, indicative of transitions between different protein structures, were crucial in understanding the protein’s functional mechanisms and interactions.
The research presented in this thesis not only contributes significantly to the field of spectroscopy but also illustrates the potential of interdisciplinary approaches in biosensing. The use of DNA origami-based systems in spectroscopy has opened new avenues for research, offering a detailed and comprehensive understanding of protein structures and interactions. The insights gained from this research are expected to have lasting implications in scientific fields ranging from drug development to the study of complex biochemical pathways. This thesis thus stands as a testament to the power of integrating nanotechnology, biochemistry, and spectroscopic techniques in addressing complex scientific questions.
This study on the Messianic Jewish movement and its relationship to the Torah explores the various aspects of the relationship to the Torah on the basis of 10 interviews with selected Yeshua-believing Jews in leadership positions. The selection of interviewees results in a range of different positions typical of the movement as a whole, which overlap in many respects but are often fundamentally different and sometimes contradictory. Particular attention is paid to the theologically based, divergent and contradictory positions in an attempt to make these understandable.
After a brief introduction to the Messianic Jewish movement, aspects of the Messianic Jewish dual identity are examined and their relevance for the relationship to the Torah is demonstrated. This is followed by an overview of the forums in which Yeshua-believing Jews discuss their relationship to the Torah. The extensive bibliography at the end of the work provides an insight into a lively discussion process within the movement that is still far from complete. A briefly annotated differentiation of terms serves as an overview of the most important meanings of Torah used in the Messianic Jewish movement. Following this preliminary work, the field study is presented. A description of the research field and methodological reflections precede the interviews. In the interviews, the associations with the term Torah are first recorded and the conceptual meaning and use clarified. This already reveals some serious differences. The theological positions and understandings of Torah are presented with the biographical context and main field of influence, and the most important formative influences are named. The points on which they all agree are noted first, as they serve as a common basis. All study the written Torah and consider it, as well as the rest of the Tanakh and the writings of the New Testament in their present form, to be divinely inspired and authoritative. All have found a positive approach to the Torah according to their own definition of the term. For all of them, the written Torah and the Tanakh point to Yeshua. All agree that Yeshua did not abrogate the Torah, but fulfilled it. And all feel a responsibility as a Jew to the Torah in some way. With regard to keeping commandments, all say that no one can earn their way to heaven by doing so. G-d's faithfulness to His promises to Israel is affirmed by all, but whether the new covenant in Yeshua superseded the old covenant of Mt. Sinai, or whether it is simply added to the already existing covenant of Sinai, whether ritual commandments are to continue to be kept after Yeshua's death and resurrection and the destruction of the Temple, whether the commandments aiming at separation from the nations should continue to be kept, whether and under what conditions rabbinic halacha should be followed and what individuals do and teach in their families and communities - all this is discussed interview by interview. It becomes clear how different ways of reading and weighting key scriptures produce different positions. Just as the diversity of positions in relation to the Torah already suggests, the interview partners are divided on the question of a Messianic Jewish Halacha. But here too, the term halacha is interpreted differently by the representatives. At the end of the field study, the attempts to produce Messianic Jewish Halacha and the problems and points of criticism expressed by other interviewees are explained. The work concludes with a theological framework able to contain all the different positions and relationships to the Torah and some starting points for a possible Messianic Jewish hermeneutic theology of the Torah.
The United Nations Sustainable Development Goals (SDGs), adopted in 2015, have become the frame of reference for sustainability strategies at the federal, state, and municipal level. With the 2030 Agenda, cities have moved into the spotlight. Their administrations find themselves in a challenging field of tension: on the one hand, the SDGs carry the holistic claim of being fully integrated into municipal action; on the other hand, effective implementation requires strong adaptation of the SDGs to the local context. Based on a case study, this thesis examines how municipalities translate the United Nations sustainability goals into their action programmes and sustainability strategies, and which factors influence this process. It uses a translation-theoretical approach that understands the transfer of an idea into a local context as an active transfer, focusing on the actions of the actors involved and on their construction of the idea being adopted. The translation is traced and analysed by means of qualitative interviews. The results show that although the SDGs are filtered according to their relevance for the municipality, their normative claim is preserved and, given the municipality's progress being judged as limited, acquires particular weight. Central factors influencing the translation are the available human and financial resources, the acceptance of the SDGs in the administration, politics, and society, and, not least, the personal commitment of individual administrative staff members.
Knowledge about causal structures is crucial for decision support in various domains. For example, in discrete manufacturing, identifying the root causes of failures and quality deviations that interrupt the highly automated production process requires causal structural knowledge. However, in practice, root cause analysis is usually built upon individual expert knowledge about associative relationships. But, "correlation does not imply causation", and misinterpreting associations often leads to incorrect conclusions. Recent developments in methods for causal discovery from observational data have opened the opportunity for a data-driven examination. Despite its potential for data-driven decision support, omnipresent challenges impede causal discovery in real-world scenarios. In this thesis, we make a threefold contribution to improving causal discovery in practice.
(1) The growing interest in causal discovery has led to a broad spectrum of methods with specific assumptions on the data and various implementations. Hence, application in practice requires careful consideration of existing methods, which becomes laborious when dealing with various parameters, assumptions, and implementations in different programming languages. Additionally, evaluation is challenging due to the lack of ground truth in practice and limited benchmark data that reflect real-world data characteristics.
To address these issues, we present a platform-independent modular pipeline for causal discovery and a ground truth framework for synthetic data generation that provides comprehensive evaluation opportunities, e.g., to examine the accuracy of causal discovery methods in case of inappropriate assumptions.
(2) Applying constraint-based methods for causal discovery requires selecting a conditional independence (CI) test, which is particularly challenging in mixed discrete-continuous data omnipresent in many real-world scenarios. In this context, inappropriate assumptions on the data or the commonly applied discretization of continuous variables reduce the accuracy of CI decisions, leading to incorrect causal structures.
Therefore, we contribute a non-parametric CI test leveraging k-nearest neighbors methods and prove its statistical validity and power in mixed discrete-continuous data, as well as the asymptotic consistency when used in constraint-based causal discovery. An extensive evaluation of synthetic and real-world data shows that the proposed CI test outperforms state-of-the-art approaches in the accuracy of CI testing and causal discovery, particularly in settings with low sample sizes.
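To illustrate where such a CI test sits, here is a hedged sketch of a classical Fisher-z partial-correlation test for continuous data. This is a simple baseline, not the k-nearest-neighbour test contributed by the thesis, which additionally handles mixed discrete-continuous data:

```python
# Classical Fisher-z partial-correlation CI test on synthetic data; shown
# as a baseline only, NOT the thesis's k-NN test for mixed data.
import numpy as np
from scipy import stats

def fisher_z_ci_test(x, y, z, alpha=0.05):
    """Return True if X is judged independent of Y given the columns of z."""
    data = np.column_stack([x, y, z])
    prec = np.linalg.inv(np.corrcoef(data, rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])  # partial correlation
    n, k = data.shape[0], z.shape[1]
    z_stat = np.sqrt(n - k - 3) * np.arctanh(r)          # Fisher z transform
    p_value = 2 * stats.norm.sf(abs(z_stat))
    return p_value > alpha

rng = np.random.default_rng(3)
c = rng.normal(size=2000)               # common cause of x and y
x = c + rng.normal(size=2000)
y = c + rng.normal(size=2000)
print(fisher_z_ci_test(x, y, c.reshape(-1, 1)))  # True: X _||_ Y given C
```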
(3) To show the applicability and opportunities of causal discovery in practice, we examine our contributions in real-world discrete manufacturing use cases. For example, we showcase how causal structural knowledge helps to understand unforeseen production downtimes or adds decision support in case of failures and quality deviations in automotive body shop assembly lines.
The mobile-immobile model (MIM) has been established in geoscience in the context of contaminant transport in groundwater, where tracer particles effectively immobilise, e.g., due to diffusion into dead-end pores or sorption. The main idea of the MIM is to split the total particle density into a mobile and an immobile density. Individual tracers switch between the mobile and immobile state following a two-state telegraph process, i.e., the residence times in each state are distributed exponentially. In geoscience, the focus lies on the breakthrough curve (BTC), the concentration at a fixed location over time. We apply the MIM to biological experiments with a special focus on anomalous scaling regimes of the mean squared displacement (MSD) and non-Gaussian displacement distributions. As an exemplary system, we analysed the motion of tau proteins, which diffuse freely inside axons of neurons; their free diffusion corresponds to the mobile state of the MIM. Tau proteins stochastically bind to microtubules, which effectively immobilises them until they unbind and continue diffusing. Long immobilisation durations compared to the mobile durations give rise to distinct non-Gaussian, Laplace-shaped distributions, accompanied by a plateau in the MSD of initially mobile tracer particles at relevant intermediate timescales. An equilibrium fraction of initially mobile tracers gives rise to non-Gaussian displacements at intermediate timescales, while the MSD remains linear at all times. In another setting, biomolecules diffuse in a biosensor and transiently bind to specific receptors, and advection becomes relevant in the mobile state. The plateau in the MSD observed for the advection-free setting and long immobilisation durations persists in the case with advection. We find a new, clear regime of anomalous diffusion with non-Gaussian distributions and a cubic scaling of the MSD; this regime emerges for initially mobile as well as initially immobile tracers. For an equilibrium fraction of initially mobile tracers we observe an intermittent ballistic scaling of the MSD. The long-time effective diffusion coefficient is enhanced by advection, which we explain physically via the variance of the mobile durations. Finally, we generalize the MIM to arbitrary immobilisation-time distributions and focus on a Mittag-Leffler immobilisation-time distribution with power-law tail ~ t^(-1-mu), 0 < mu < 1, and diverging mean immobilisation duration. A fit of our model to the BTC of experimental data from tracer particles in aquifers matches the BTC including the power-law tail. We use the fit parameters to plot the displacement distributions and the MSD, finding Gaussian normal diffusion at short times and, at long times, a power-law decay of the mobile mass accompanied by anomalous diffusion. The long-time diffusion is subdiffusive in the advection-free setting, while with advection it is subdiffusive for 0 < mu < 1/2 and superdiffusive for 1/2 < mu < 1. In the long-time limit we show the equivalence of our model to a bi-fractional diffusion equation.
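A hedged simulation sketch of the two-state dynamics described above: diffusion only in the mobile state, constant switching rates (hence exponential residence times), and purely illustrative parameters rather than fitted values from the thesis. The ensemble MSD of initially mobile tracers then develops the intermediate-time plateau mentioned in the text:

```python
# Monte Carlo sketch of 1-D mobile-immobile dynamics (two-state telegraph
# process); all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)

def simulate_mim(n_particles=500, t_max=50.0, dt=0.01,
                 rate_off=1.0, rate_on=0.05, diffusivity=1.0):
    """Trajectories x(t) of initially mobile tracers."""
    n_steps = int(t_max / dt)
    x = np.zeros((n_particles, n_steps))
    mobile = np.ones(n_particles, dtype=bool)      # all tracers start mobile
    for i in range(1, n_steps):
        step = np.sqrt(2 * diffusivity * dt) * rng.normal(size=n_particles)
        x[:, i] = x[:, i - 1] + mobile * step      # diffusion only if mobile
        # constant switching rates <=> exponential residence times
        go_off = mobile & (rng.random(n_particles) < rate_off * dt)
        go_on = ~mobile & (rng.random(n_particles) < rate_on * dt)
        mobile = (mobile & ~go_off) | go_on
    return x

x = simulate_mim()
msd = (x ** 2).mean(axis=0)   # develops a plateau at intermediate times,
                              # once most initially mobile tracers are bound
```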
Long-term bacteria-fungi-plant associations in permafrost soils inferred from palaeometagenomics
(2024)
The Arctic is warming two to four times faster than the global average, with strong feedbacks on northern ecosystems such as boreal forests, which cover a vast area of the high northern latitudes. With ongoing global warming, the treeline is migrating northwards into tundra areas. The consequences of these shifting ecosystems are complex: on the one hand, boreal forests store large amounts of the global terrestrial carbon and act as a carbon sink, drawing carbon dioxide out of the global carbon cycle, which suggests enhanced carbon uptake with increased tree cover. On the other hand, as trees establish, the albedo of the tundra decreases, leading to enhanced soil warming. Meanwhile, permafrost thaws, releasing large amounts of previously stored carbon into the atmosphere. So far, mainly vegetation dynamics have been assessed when studying the impact of warming on ecosystems. Most land plants live in close symbiosis with bacterial and fungal communities that sustain their growth in nutrient-poor habitats, yet the impact of climate change on these subsoil communities alongside changing vegetation cover remains poorly understood. A better understanding of soil community dynamics on multi-millennial timescales is therefore essential when addressing the development of entire ecosystems. Unravelling long-term cross-kingdom dependencies between plants, fungi, and bacteria is not only a milestone for assessing the impact of warming on boreal ecosystems; it is also the basis for agricultural strategies to provide society with sufficient food in a future warming world.
The first objective of this thesis was to assess ancient DNA as a proxy for reconstructing the soil microbiome (Manuscripts I, II, III, IV). The findings across these projects provide comprehensive new insights into the relationships of soil microorganisms to the surrounding vegetation. First, this was achieved by establishing (Manuscript I) and applying (Manuscript II) a primer pair for the selective amplification of ancient fungal DNA from lake sediment samples with the metabarcoding approach. To assess fungal and plant co-variation, the selected primer combination (ITS67, 5.8S), amplifying the ITS1 region, was applied to samples from five boreal and arctic lakes. The data showed that the establishment of fungal communities is impacted by warming, as the functional ecological groups shift: yeast and saprotroph dominance during the Late Glacial declined with warming, while the abundance of mycorrhizae and parasites increased. Overall species richness also fluctuated. The results were compared to shotgun sequencing data reconstructing fungi and bacteria (Manuscripts III, IV), which yielded results overall comparable to the metabarcoding approach. Nonetheless, the comparison also pointed to a bias in the metabarcoding, potentially due to varying ITS lengths or copy numbers per genome.
The second objective was to trace changes in fungus-plant interactions over time (Manuscripts II, III). To address this, metabarcoding targeting the ITS1 region for fungi and the chloroplast P6 loop for plants was applied for selective DNA amplification (Manuscript II). Further, shotgun sequencing data were compared to the metabarcoding results (Manuscript III). Overall, the results of the metabarcoding and shotgun approaches were comparable, though a bias in the metabarcoding was assumed. We demonstrated that fungal shifts coincided with changes in the vegetation: yeasts and lichens were dominant mainly during the Late Glacial under tundra vegetation, while warming in the Holocene led to the expansion of boreal forests with increasing mycorrhizae and parasite abundance. In addition, we highlighted that the establishment of Pinaceae depends on mycorrhizal fungi such as Suillineae, Inocybaceae, or Hyaloscypha species, also on long-term scales.
The third objective of the thesis was to assess soil community development along a temporal gradient (Manuscripts III, IV). Shotgun sequencing was applied to sediment samples from the northern Siberian lake Lama, and the soil microbial community dynamics were compared to the ecosystem turnover. In parallel, podzolization processes from basaltic bedrock were traced (Manuscript III). Additionally, the recovered soil microbiome was compared to shotgun data from granite and sandstone catchments (Manuscript IV, Appendix). We assessed whether the establishment of the soil microbiome depends on the plant taxon, and is thus comparable between geographic locations, or whether community establishment is driven by abiotic soil properties and thus by the bedrock area. We showed that the development of soil communities is to a great extent driven by vegetation changes and temperature variation, while time plays only a minor role. The analyses showed general ecological similarities, especially between the granite and basalt locations, while the microbiome at species level was rather site-specific. A greater number of correlated soil taxa was detected for deep-rooting boreal taxa than for grasses with shallower roots. Additionally, differences between herbaceous taxa of the Late Glacial and taxa of the Holocene were revealed.
With this thesis, I demonstrate the necessity of investigating subsoil community dynamics on millennial time scales, as it deepens our understanding of long-term ecosystem and soil development processes and thus of plant establishment. Further, I trace long-term processes leading to podzolization, which supports the development of applied carbon capture strategies under future global warming.
We examine how the gender of business owners is related to the wages paid to female relative to male employees working in their firms. Using Finnish register data and employing firm fixed effects, we find that the gender pay gap in hourly wages (starting from a baseline gap of 11 to 12 percent) is two to three percentage points lower in female-owned firms than in male-owned firms. Results are robust to how the wage is measured, as well as to various further robustness checks. More importantly, we find substantial differences between industries. While in the manufacturing sector, for instance, the gender of the owner plays no role in the gender pay gap, in several service sector industries, such as ICT or business services, no or only a negligible gender pay gap can be found, but only when firms are led by female business owners; male-owned businesses maintain a gender pay gap of around 10 percent in these industries as well. With increasing firm size, however, the influence of the owner's gender fades. In large firms, it seems that others (firm managers) determine wages, and no differences in the pay gap are observed between male- and female-owned firms.
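A minimal sketch of the kind of firm fixed-effects wage regression this abstract describes might take the following form; the exact specification and controls used in the paper are not given in the abstract, so all symbols here are illustrative:

\[
\ln w_{ijt} = \beta_1\, \mathrm{Female}_i + \beta_2\, (\mathrm{Female}_i \times \mathrm{FemaleOwner}_j) + \mathbf{x}_{ijt}'\boldsymbol{\gamma} + \mu_j + \tau_t + \varepsilon_{ijt}
\]

Here \(\mu_j\) are firm fixed effects (which absorb the main effect of owner gender), \(\tau_t\) are time effects, \(\beta_1\) captures the baseline gender pay gap, and a positive \(\beta_2\) of roughly 0.02 to 0.03 would correspond to the reported two to three percentage-point narrowing of the gap in female-owned firms.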
Der russische Krimi
(2024)
The first comprehensive account of the crime genre in Russia. It discusses books and films and takes into account the debates in literary criticism, since during the Soviet decades cultural policy struggled to grant the crime genre any right to exist at all. Generating sympathy for the militia eventually became the official purpose of this genre, which politics had pushed into a niche existence. Accordingly, one focus of the study is on ideology, especially in the portrayal of the heroes and their adversaries and of the everyday world that readers were meant to recognize as their own. In doing so, readers learn a great deal about their society, above all about its otherwise largely concealed dark sides.
Not least because of the long deprivation of suspenseful reading, the crime novel became the bestselling genre par excellence after the end of socialism. Using the examples of the women's crime novel (Marinina and her successors) and the postmodern crime novel (Akunin), the post-Soviet development is traced into the 2010s.
The Arctic is the hot spot of ongoing global climate change. Over the last decades, near-surface temperatures in the Arctic have been rising almost four times faster than the global average. This amplified warming of the Arctic and the associated rapid changes of its environment are largely influenced by interactions between individual components of the Arctic climate system. On daily to weekly time scales, storms can have major impacts on the Arctic sea-ice cover and are thus an important part of these interactions. The sea-ice impacts of storms are related to high wind speeds, which enhance the drift and deformation of sea ice, as well as to changes in the surface energy budget associated with air mass advection, which affect seasonal sea-ice growth and melt.
The occurrence of storms in the Arctic is typically associated with the passage of transient cyclones. Even though the mechanisms described above, by which storms and cyclones impact the Arctic sea ice, are known in principle, a statistical quantification of these effects has been lacking. Accordingly, the overarching objective of this thesis is to statistically quantify cyclone impacts on sea-ice concentration (SIC) in the Atlantic Arctic Ocean over the last four decades. To further advance the understanding of the related mechanisms, an additional objective is to separate dynamic and thermodynamic cyclone impacts on sea ice and to assess their relative importance. Finally, this thesis aims to quantify recent changes in cyclone impacts on SIC. These research objectives are tackled using various data sets, including atmospheric and oceanic reanalysis data, a coupled model simulation, and a cyclone tracking algorithm.
Results from this thesis demonstrate that cyclones significantly impact SIC in the Atlantic Arctic Ocean from autumn to spring, while impacts in summer are mostly not significant. The strength and the sign (SIC decreasing or SIC increasing) of the cyclone impacts strongly depend on the considered daily time scale and the region of the Atlantic Arctic Ocean. Specifically, an initial decrease in SIC (day -3 to day 0 relative to the cyclone) is found in the Greenland, Barents, and Kara Seas, while SIC increases following cyclones (day 0 to day 5 relative to the cyclone) are mostly limited to the Barents and Kara Seas.
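A statistical quantification of this kind typically rests on compositing SIC changes around detected cyclone days. The following Python sketch averages synthetic daily SIC anomalies over a day -3 to day +5 window relative to toy cyclone dates; the thesis's actual data handling, cyclone tracking, and significance testing are necessarily more elaborate.

```python
# Hedged sketch: a simple composite of sea-ice concentration (SIC) anomalies
# around cyclone passages at one grid cell. Day 0 is the cyclone passage;
# the window (-3..+5 days) follows the thesis. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_days = 4000
sic_anom = rng.normal(0.0, 2.0, n_days)  # daily SIC anomaly (%, toy values)
cyclone_days = rng.choice(
    np.arange(3, n_days - 5), size=150, replace=False
)  # toy cyclone passage dates; real work uses a tracking algorithm

window = np.arange(-3, 6)  # day -3 .. day +5 relative to the cyclone
# Stack the SIC anomaly series around every cyclone day and average
composite = np.mean(
    [sic_anom[day + window] for day in cyclone_days], axis=0
)

for lag, value in zip(window, composite):
    print(f"day {lag:+d}: mean SIC anomaly = {value:+.2f}")
```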
For the cold season, this results in a pronounced regional difference between overall (day -3 to day 5 relative to the cyclone) SIC-decreasing cyclone impacts in the Greenland Sea and overall SIC-increasing cyclone impacts in the Barents and Kara Seas. A cyclone case study based on a coupled model simulation indicates that both dynamic and thermodynamic mechanisms contribute to cyclone impacts on sea ice in winter. A typical pattern was found, consisting of an initial dominance of dynamic sea-ice changes followed by enhanced thermodynamic ice growth after the cyclone passage. This enhanced ice growth most likely also explains the (statistical) overall SIC-increasing effects of cyclones in the Barents and Kara Seas in the cold season.
Significant changes in cyclone impacts on SIC over the last four decades have emerged throughout the year. These recent changes vary strongly from region to region and month to month. The strongest trends in cyclone impacts on SIC are found in autumn in the Barents and Kara Seas, where the magnitude of destructive cyclone impacts on SIC has approximately doubled over the last four decades. The SIC-increasing effects following cyclone passage have weakened particularly in the Barents Sea in autumn. As a consequence, the previously existing overall SIC-increasing cyclone impacts in this region in autumn have recently disappeared. Generally, results from this thesis show that changes in the state of the sea-ice cover (decreases in mean sea-ice concentration and thickness) and in near-surface air temperature are most important for changed cyclone impacts on SIC, while changes in cyclone properties (i.e., intensity) do not play a significant role.
This work analyzed functional and regulatory aspects of the so far little-characterized EPSIN N-terminal Homology (ENTH) domain-containing protein EPSINOID2 in Arabidopsis thaliana. ENTH domain proteins play accessory roles in the formation of clathrin-coated vesicles (CCVs) (Zouhar and Sauer 2014). Their ENTH domain interacts with membranes, and their typically long, unstructured C-terminus contains binding motifs for adaptor protein complexes and for clathrin itself. There are seven ENTH domain proteins in Arabidopsis. Four of them possess the canonical long C-terminus and participate in various, presumably CCV-related intracellular transport processes (Song et al. 2006; Lee et al. 2007; Sauer et al. 2013; Collins et al. 2020; Heinze et al. 2020; Mason et al. 2023). The remaining three ENTH domain proteins, however, have severely truncated C-termini and were termed EPSINOIDs (Zouhar and Sauer 2014; Freimuth 2015). Their functions are currently unclear. Preceding studies focusing on EPSINOID2 indicated a role in root hair formation: epsinoid2 T-DNA mutants exhibited an increased root hair density, and EPSINOID2-GFP was specifically located in non-hair cell files of the Arabidopsis root epidermis (Freimuth 2015, 2019).
In this work, analyses of three independent mutant alleles, including a newly generated CRISPR/Cas9 full-deletion mutant, clearly showed that loss of EPSINOID2 leads to an increase in root hair density. The ectopic root hairs emerging from non-hair positions in all epsinoid2 mutant alleles are most likely not a consequence of altered cell fate, because extensive genetic analyses placed EPSINOID2 downstream of the established epidermal patterning network. Thus, EPSINOID2 seems to act as a cell-autonomous inhibitor of root hair formation. Attempts to confirm this hypothesis by ectopically overexpressing EPSINOID2 led to the discovery of post-transcriptional and post-translational regulation through different mechanisms. One involves the little-characterized miRNA844-3p: interference with this pathway resulted in ectopic EPSINOID2 overexpression and decreased root hair density, confirming EPSINOID2 as a negative factor in root hair formation. A second mechanism likely involves proteasomal degradation: treatment with the proteasomal inhibitor MG132 led to EPSINOID2-GFP accumulation, and a KEN-box degron motif, associated with degradation through a ubiquitin/proteasome-dependent pathway, was identified in the EPSINOID2 sequence. In line with tight dose regulation, genetic analyses of all three mutant alleles indicate that EPSINOID2 is haploinsufficient. Lastly, it was revealed that, although EPSINOID2 promoter activity was found in all epidermal cells, protein accumulation was observed in N-cells only, hinting at yet another layer of regulation.
Jahresbericht 2023
(2024)
This annual report covers the 2023 reporting period, in which research and teaching could again take place in person. Encounters and exchange in lecture halls and seminar rooms, on conference panels, and during coffee breaks are possible again, but experience shows that the options of working from home and communicating online are here to stay.
As an interdisciplinary central academic institution of the University of Potsdam, the MenschenRechtsZentrum once again set out during the reporting period to combine legal, philosophical, historical, cultural, and political science perspectives on human rights in research and teaching.
The researchers of the MenschenRechtsZentrum teach at the faculties to which they belong. Therefore, only those activities are listed here that relate to the work of the MenschenRechtsZentrum or to human rights issues; further information can be found on the personal homepages of the respective staff members.
Mindful Eating
(2024)
Maladaptive eating behaviors such as emotional eating, external eating, and loss-of-control eating are widespread in the general population. Moreover, they are associated with adverse health outcomes and are well known for their role in the development and maintenance of eating disorders and obesity (i.e., eating and weight disorders). Eating and weight disorders place a substantial burden on affected individuals and generate high costs for society in general. At the same time, corresponding treatments yield poor outcomes. Thus, innovative concepts are needed to improve the prevention and treatment of these conditions.
The Buddhist concept of mindfulness (i.e., paying attention to the present moment without judgment) and its delivery via mindfulness-based intervention programs (MBPs) have gained wide popularity in the area of maladaptive eating behaviors and associated eating and weight disorders over the last two decades. Though previous findings on their effects seem promising, the current assessment of mindfulness and its application only via multi-component MBPs make it difficult to draw conclusions about the extent to which mindfulness-immanent qualities actually account for the effects (e.g., the modification of maladaptive eating behaviors). This knowledge, however, is pivotal for interpreting previous effects correctly and for avoiding harm in particularly vulnerable groups such as those with eating and weight disorders.
To address these shortcomings, recent research has focused on the context-specific approach of mindful eating (ME) to investigate underlying mechanisms of action. ME can be considered a subdomain of generic mindfulness that describes it specifically in relation to the process of eating and associated feelings, thoughts, and motives, thus comprising a variety of different attitudes and behaviors. However, there is no universal operationalization, and the current assessment of ME suffers from several limitations. Specifically, current measurement instruments are not suited for a comprehensive assessment of the multiple facets of the construct that are discussed as important in the literature. This in turn hampers comparisons of different ME facets that would allow evaluating their particular effects on maladaptive eating behaviors. Such knowledge is needed to tailor the prevention and treatment of associated eating and weight disorders properly and to explore potential underlying mechanisms of action, which have so far been proposed mainly on theoretical grounds.
The dissertation at hand aims to provide evidence-based fundamental research that contributes to our understanding of how mindfulness, more specifically its context-specific form of ME, impacts maladaptive eating behaviors and, consequently, how it could be used appropriately to enrich the current prevention and treatment approaches for eating and weight disorders in the future.
Specifically, in this thesis, three scientific manuscripts applying several qualitative and quantitative techniques in four sequential studies are presented. These manuscripts were published in or submitted to three scientific peer-reviewed journals to shed light on the following questions:
I. How can ME be measured comprehensively and in a reliable and valid way to advance the understanding of how mindfulness works in the context of eating?
II. Does the context-specific construct of ME have an advantage over the generic concept in advancing the understanding of how mindfulness is related to maladaptive eating behaviors?
III. Which ME facets are particularly useful in explaining maladaptive eating behaviors?
IV. Does training a particular ME facet result in changes in maladaptive eating behaviors?
To answer the first research question (Paper 1), a multi-method approach using three subsequent studies was applied to develop and validate a comprehensive self-report instrument for assessing the multidimensional construct of ME: the Mindful Eating Inventory (MEI). Study 1 aimed to create an initial version of the MEI following a three-step approach: First, a comprehensive item pool was compiled by including selected and adapted items from existing ME questionnaires and supplementing them with items derived from an extensive literature review. Second, the preliminary item pool was complemented and checked for content validity by experts in the field of eating behavior and/or mindfulness (N = 15). Third, the item pool was further refined through qualitative methods: three focus groups comprising laypersons (N = 16) served as a check of applicability, and think-aloud protocols (N = 10) subsequently served as a final check of comprehensibility and for the elimination of ambiguities.
The resulting initial MEI version was tested in Study 2 in an online convenience sample (N = 828) to explore its factor structure using exploratory factor analysis (EFA). The results were used to shorten the questionnaire in accordance with qualitative and quantitative criteria, yielding the final MEI version, which encompasses 30 items. These items were assigned to seven ME facets: (1) 'Accepting and Non-attached Attitude towards one's own eating experience' (ANA), (2) 'Awareness of Senses while Eating' (ASE), (3) 'Eating in Response to awareness of Fullness' (ERF), (4) 'Awareness of eating Triggers and Motives' (ATM), (5) 'Interconnectedness' (CON), (6) 'Non-Reactive Stance' (NRS), and (7) 'Focused Attention on Eating' (FAE).
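For illustration, an EFA of the kind used to shorten the questionnaire could be run as sketched below with the factor_analyzer package; the simulated item responses and the choice of oblique rotation are assumptions, as the abstract does not specify the settings.

```python
# Hedged sketch of an exploratory factor analysis (EFA) on questionnaire
# items. Item data are simulated; settings (7 factors, oblimin rotation)
# are assumptions based on the abstract.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(2)
# Toy responses: 828 participants x 30 items on a Likert-type scale
items = pd.DataFrame(
    rng.integers(1, 7, size=(828, 30)),
    columns=[f"item_{i+1}" for i in range(30)],
)

fa = FactorAnalyzer(n_factors=7, rotation="oblimin")
fa.fit(items)

loadings = pd.DataFrame(
    fa.loadings_, index=items.columns,
    columns=[f"F{k+1}" for k in range(7)],
)
# Items with weak primary loadings (< .40 is a common cut-off) would be
# candidates for removal when shortening a questionnaire.
print(loadings.round(2))
```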
Study 3 sought to confirm the identified facets and the corresponding factor structure in an independent online convenience sample (N = 612) using confirmatory factor analysis (CFA). The study provided further evidence of the assumed multidimensionality of ME (the correlated seven-factor model was shown to be superior to a single-factor model). The psychometric properties of the MEI, regarding factorial validity, internal consistency, retest reliability, and observed criterion validity across a wide range of eating-specific and general health-related outcomes, showed the inventory to be suitable for a comprehensive, reliable, and valid assessment of ME. These findings were complemented by demonstrating measurement invariance of the MEI regarding gender. In accordance with the factor structure of the MEI, Paper 1 offers an empirically derived definition of ME, overcoming ambiguities and problems of previous attempts at defining the construct.
To answer the second and third research questions (Paper 2), a subsample of Study 2 from the MEI validation studies (N = 292) was analyzed. Incremental validity of ME beyond generic mindfulness was shown using hierarchical regression models with the outcome variables of maladaptive eating behaviors (emotional eating and uncontrolled eating) and nutrition behavior (consumption of energy-dense food). Multiple regression analyses were applied to investigate the impact of the seven ME facets (identified in Paper 1) on the same outcome variables. The following ME facets contributed significantly to explaining variance in maladaptive eating and nutrition behaviors: an Accepting and Non-attached Attitude towards one's own eating experience (ANA), Eating in Response to awareness of Fullness (ERF), Awareness of eating Triggers and Motives (ATM), and a Non-Reactive Stance (NRS, i.e., an observing, non-impulsive attitude towards eating triggers). The results suggest that these ME facets are promising variables to consider when (a) investigating potential underlying mechanisms of mindfulness and MBPs in the context of eating and (b) addressing maladaptive eating behaviors in general as well as in the prevention and treatment of eating and weight disorders.
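A hierarchical regression test of incremental validity can be sketched as a comparison of nested models, as below; variable names and data are invented, and the paper's actual models may include further covariates.

```python
# Hedged sketch: incremental validity of ME facets beyond generic
# mindfulness via nested OLS models and an F-test on the R-squared gain.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
n = 292
df = pd.DataFrame({
    "emotional_eating": rng.normal(size=n),
    "generic_mindfulness": rng.normal(size=n),
    "ANA": rng.normal(size=n),
    "ERF": rng.normal(size=n),
    "ATM": rng.normal(size=n),
    "NRS": rng.normal(size=n),
})

# Step 1: generic mindfulness only
m1 = ols("emotional_eating ~ generic_mindfulness", data=df).fit()
# Step 2: add the ME facets
m2 = ols(
    "emotional_eating ~ generic_mindfulness + ANA + ERF + ATM + NRS",
    data=df,
).fit()

# F-test for the R-squared increment of the nested models
print(anova_lm(m1, m2))
print(f"Delta R^2 = {m2.rsquared - m1.rsquared:.3f}")
```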
To answer the fourth research question (Paper 3), a training based on an isolated exercise ('9 Hunger') targeting the previously identified ME facet ATM was designed to explore its particular association with changes in maladaptive eating behaviors and thus to preliminarily explore one possible mechanism of action. The online study was realized as a randomized controlled trial (RCT). Latent change scores (LCS) across three measurement points (before the training, directly after the training, and three months later) were compared between the intervention group (n = 211) and a waitlist control group (n = 188). Short- and longer-term effects of the training were shown on maladaptive eating behaviors (emotional eating, external eating, loss-of-control eating) and associated outcomes (intuitive eating, ME, self-compassion, well-being). The findings serve as preliminary empirical evidence that MBPs might influence maladaptive eating behaviors through an enhanced non-judgmental awareness of, and distinction between, eating motives and triggers (i.e., ATM). This mechanism of action had previously only been hypothesized from a theoretical perspective. Since maladaptive eating behaviors are associated with eating and weight disorders, the findings can enhance our understanding of the general effects of MBPs on these conditions.
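The paper's analysis relied on latent change score models; as a deliberately simplified, non-latent analogue of that group comparison, the sketch below contrasts observed pre-post changes between intervention and waitlist groups on simulated data.

```python
# Hedged sketch: a simple observed change-score comparison as a non-latent
# stand-in for the latent change score (LCS) models used in the paper.
# All data are simulated; group sizes follow the abstract.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
# Toy emotional-eating scores before and directly after the training
pre_int, post_int = rng.normal(3.5, 0.8, 211), rng.normal(3.1, 0.8, 211)
pre_ctl, post_ctl = rng.normal(3.5, 0.8, 188), rng.normal(3.5, 0.8, 188)

change_int = post_int - pre_int  # intervention group (n = 211)
change_ctl = post_ctl - pre_ctl  # waitlist control group (n = 188)

t, p = ttest_ind(change_int, change_ctl)
print(f"mean change (intervention) = {change_int.mean():+.2f}")
print(f"mean change (control)      = {change_ctl.mean():+.2f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```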
The integration of the different findings leads to several suggestions as to how ME might enrich future interventions targeting maladaptive eating behaviors to improve health in general or the prevention and treatment of eating and weight disorders in particular. Strengths of the thesis (e.g., a deliberately specific methodology, a variety of designs and methods, large numbers of participants) are emphasized. The main limitations, particularly regarding sample characteristics (e.g., higher levels of formal education, fewer males, self-selection), are discussed to arrive at an outline for future studies (e.g., including multi-modal multi-method approaches, clinical eating disorder samples, and youth samples) to improve upcoming research on ME and on the underlying mechanisms of action of MBPs for maladaptive eating behaviors and associated eating and weight disorders.
This thesis enriches current research on mindfulness in the context of eating by providing fundamental research on the core of the ME construct. Thereby it delivers a reliable and valid instrument to comprehensively assess ME in future studies as well as an operational definition of the construct. Findings on ME facet level might inform upcoming research and practice on how to address maladaptive eating behaviors appropriately in interventions. The ME skill ‘Awareness of eating Triggers and Motives (ATM)’ as one particular mechanism of action should be further investigated in representative community and specific clinical samples to examine the validity of the results in these groups and to justify an application of the concept to the general population as well as to subgroups with eating and weight disorders in particular.
In conclusion, the findings of the current thesis can be used to place future research on mindfulness, more specifically ME, and its underlying mechanisms in the context of eating on a more evidence-based footing. This knowledge can inform upcoming prevention and treatment efforts to tailor MBPs to maladaptive eating behaviors and associated eating and weight disorders appropriately.
Heat stress (HS) is a major abiotic stress that negatively affects plant growth and productivity. However, plants have developed various adaptive mechanisms to cope with HS, including the acquisition and maintenance of thermotolerance, which allows them to respond more effectively to subsequent stress episodes. HS memory includes type II transcriptional memory, which is characterized by the enhanced re-induction of a subset of HS memory genes upon recurrent HS. In this study, new regulators of HS memory in A. thaliana were identified through the characterization of rein mutants.
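One simple way to express type II memory quantitatively is a per-gene re-induction ratio, as in the hedged sketch below; the gene names and fold-change values are purely illustrative and not taken from the study.

```python
# Hedged sketch: type II transcriptional memory as a re-induction ratio.
# Values and gene names are invented for illustration.
import pandas as pd

# Toy expression fold-changes relative to untreated control
expr = pd.DataFrame(
    {"first_HS": [8.0, 12.0, 6.0], "second_HS": [24.0, 13.0, 19.0]},
    index=["geneA", "geneB", "geneC"],
)

# A memory index > 1 indicates enhanced re-induction (type II memory);
# in a mutant with impaired type II memory, this index would be expected
# to drop toward 1 for HS memory genes.
expr["memory_index"] = expr["second_HS"] / expr["first_HS"]
print(expr)
```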
The rein1 mutant carries a premature stop codon in CYCLIN-DEPENDENT KINASE 8 (CDK8), which is part of the cyclin kinase module of the Mediator complex. Seedlings of rein1 show impaired type II transcriptional memory of multiple heat-responsive genes upon re-exposure to HS. Additionally, the mutants exhibit a significant deficiency in HS memory at the physiological level. Interaction studies conducted in this work indicate that CDK8 associates with the memory HEAT SHOCK FACTORs HSFA2 and HSFA3. The results suggest that CDK8 plays a crucial role in HS memory in plants together with other memory HSFs, which may be potential targets of the CDK8 kinase function. Understanding the role and interaction network of the Mediator complex during HS-induced transcriptional memory will be an exciting aspect of future HS memory research.
The second characterized mutant, rein2, was selected based on its strongly impaired pAPX2::LUC re-induction phenotype. Gene expression analysis revealed additional defects in the initial induction of HS memory genes. In line with this observation, basal thermotolerance in rein2 was impaired to a similar extent as HS memory at the physiological level. Sequencing of backcrossed bulk segregants with subsequent fine mapping narrowed the location of REIN2 down to a 1 Mb region on chromosome 1. This interval contains the At1g65440 gene, which encodes the histone chaperone SPT6L. SPT6L interacts with chromatin remodelers and bridges them to the transcription machinery to regulate nucleosome and Pol II occupancy around the transcriptional start site. The EMS-induced missense mutation in SPT6L may cause the altered HS-induced gene expression in rein2, possibly triggered by changes in the chromatin environment resulting from altered histone chaperone function.
Expanding research on screen-derived factors that modify type II transcriptional memory has the potential to enhance our understanding of HS memory in plants. Discovering connections between previously identified memory factors will help to elucidate the underlying network of HS memory. This knowledge can initiate new approaches to improve heat resilience in crops.
Background: The worldwide prevalence of diabetes has been increasing in recent years, with a projected 700 million patients by 2045, placing economic burdens on societies. Type 2 diabetes mellitus (T2DM), representing more than 95% of all diabetes cases, is a multifactorial metabolic disorder characterized by insulin resistance, leading to an imbalance between insulin requirements and supply. Overweight and obesity are the main risk factors for developing T2DM. Lifestyle modification, i.e., following a healthy diet and engaging in physical activity, is the primary successful treatment and prevention method for T2DM. However, patients often do not achieve the recommended levels of physical activity. Electrical muscle stimulation (EMS) is an increasingly popular training method and has become a focus of research in recent years. It involves the external application of an electric field to muscles, which can lead to muscle contraction. Positive effects of EMS training have been found in healthy individuals as well as in various patient groups. New EMS devices offer a wide range of mobile applications for whole-body electrical muscle stimulation (WB-EMS) training, e.g., the intensification of dynamic low-intensity endurance exercises through WB-EMS. This dissertation project investigates whether WB-EMS is suitable for intensifying low-intensity dynamic exercises such as walking and Nordic walking.
Methods: Two independent studies were conducted. The first study investigated the reliability of exercise parameters during the 10-meter Incremental Shuttle Walk Test (10MISWT) using superimposed WB-EMS (research question 1, sub-question a) and the difference in exercise intensity compared to conventional walking (CON-W; research question 1, sub-question b). The second study compared differences in exercise parameters between superimposed WB-EMS walking (WB-EMS-W) and conventional walking (CON-W), as well as between superimposed WB-EMS Nordic walking (WB-EMS-NW) and conventional Nordic walking (CON-NW) on a treadmill (research question 2). Both studies were conducted with groups of healthy, moderately active men aged 35-70 years. During all measurements, the Easy Motion Skin® WB-EMS low-frequency stimulation device with adjustable intensities for eight muscle groups was used. The current intensity was individually adjusted for each participant at each trial to ensure safety and to avoid pain and muscle cramps. In study 1, thirteen individuals were included for each sub-question. A randomized cross-over design with three measurement appointments was used to avoid confounding factors such as delayed-onset muscle soreness. The 10MISWT was performed until the participants no longer met the test criteria, and five outcome measures were recorded: peak oxygen uptake (VO2peak), relative VO2peak (rel.VO2peak), maximum walk distance (MWD), blood lactate concentration, and the rating of perceived exertion (RPE).
Eleven participants were included in study 2. A randomized cross-over design with four measurement appointments was used to avoid confounding factors. A treadmill test protocol at constant velocity (6.5 km/h) was developed to compare exercise intensities. Oxygen uptake (VO2), relative VO2 (rel.VO2), blood lactate, and the RPE were used as outcome variables. Test-retest reliability between measurements was determined using a compilation of absolute and relative measures of reliability. Outcome measures in study 2 were analyzed using multifactorial analyses of variance.
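As an illustration of a relative reliability measure of the kind compiled here, the sketch below computes intraclass correlation coefficients (ICC) for simulated test-retest data in long format; the thesis combined several absolute and relative measures, and all values below are invented.

```python
# Hedged sketch: test-retest reliability via the intraclass correlation
# coefficient (ICC) with pingouin. Data are simulated.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(5)
n = 13  # participants as in study 1
true_vo2 = rng.normal(35, 5, n)  # toy rel.VO2peak values (ml/min/kg)

long = pd.DataFrame({
    "subject": np.tile(np.arange(n), 2),
    "session": np.repeat(["test", "retest"], n),
    "vo2": np.concatenate([true_vo2 + rng.normal(0, 1.5, n),
                           true_vo2 + rng.normal(0, 1.5, n)]),
})

icc = pg.intraclass_corr(
    data=long, targets="subject", raters="session", ratings="vo2"
)
print(icc[["Type", "ICC", "CI95%"]])
```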
Results: The reliability analysis showed good reliability for VO2peak, rel.VO2peak, MWD, and RPE, with no statistically significant differences for WB-EMS-W during the 10MISWT. However, no differences compared to conventional walking were found in the outcome variables. The analysis of the treadmill tests showed significant effects of the factors CON/WB-EMS and W/NW on the outcome variables VO2, rel.VO2, and lactate, with both factors leading to higher results. However, the differences in VO2 and rel.VO2 lie within the range of biological variability of ± 12%. The factor combination EMS∗W/NW was statistically non-significant for all three variables. WB-EMS resulted in higher RPE values; RPE differences for W/NW and EMS∗W/NW were not significant.
Discussion: The present project found good reliability for measuring VO2peak, rel.VO2peak, MWD, and RPE during WB-EMS-W in the 10MISWT, confirming prior research on the test. The test appears to be limited technically rather than physiologically in healthy, moderately active men. However, it is unsuitable for investigating differences in exercise intensity between WB-EMS-W and CON-W due to different perceptions of current intensity during exercise and at rest. For the second part of the project, a treadmill test with constant walking speed was therefore conducted, with the individual maximum tolerable current intensity adjusted beforehand. The treadmill test showed a significant increase in metabolic demands during WB-EMS-W and WB-EMS-NW, reflected in increased VO2 and blood lactate concentrations. However, the clinical relevance of these findings remains debatable. The study also found that WB-EMS-superimposed exercises are perceived as more strenuous than conventional exercise. While some comparable studies reported higher VO2 values, our results are in line with those of other studies using the same stimulation frequency. Given the minor clinical relevance, the use of WB-EMS as an exercise intensification tool during walking and Nordic walking is limited; the high device cost should also be considered. Habituation to WB-EMS could increase the tolerance of current intensity and thus VO2, potentially making it a meaningful method in the treatment of T2DM. Recent figures show that WB-EMS is used by obese people to achieve health and weight goals. This supposed benefit should be investigated further scientifically.
Increasingly fast development cycles and individualized products pose major challenges for today's smart production systems in times of Industry 4.0. The systems must be flexible and continuously adapt to changing conditions while still guaranteeing high throughput and robustness against external disruptions. Deep reinforcement learning (RL) algorithms, which already achieved impressive success with Google DeepMind's AlphaGo, are increasingly being transferred to production systems to meet related requirements. Unlike supervised and unsupervised machine learning techniques, deep RL algorithms learn from recently collected sensor and process data in direct interaction with the environment and are able to make decisions in real time. As such, deep RL algorithms seem promising given their potential to provide decision support in complex environments, such as production systems, while simultaneously adapting to changing circumstances. While different use cases for deep RL have emerged, a structured overview and integration of findings on their application are missing. To address this gap, this contribution provides a systematic literature review of existing deep RL applications in the field of production planning and control as well as production logistics. From a performance perspective, it became evident that deep RL can significantly outperform heuristics and provides superior solutions to various industrial use cases. Nevertheless, safety and reliability concerns must be overcome before the widespread use of deep RL is possible, which presumes more intensive testing of deep RL in real-world applications in addition to the already ongoing intensive simulations.
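To make the interaction loop that RL algorithms rely on concrete, the sketch below trains a tabular Q-learning agent on an invented toy dispatching task; deep RL replaces the table with a neural network, and real production use cases are far more complex than this illustration.

```python
# Hedged sketch: tabular Q-learning on a toy machine-dispatching task.
# Everything here (environment, rewards, state space) is invented to
# illustrate the trial-and-error loop that deep RL scales up.
import numpy as np

rng = np.random.default_rng(6)
N_STATES = 5    # e.g., discretized queue length at a machine
N_ACTIONS = 2   # 0 = keep processing, 1 = dispatch to a parallel machine

def step(state: int, action: int) -> tuple[int, float]:
    """Toy environment: dispatching empties the queue but costs a setup."""
    if action == 1:
        return 0, 1.0 - 0.5               # throughput reward minus setup cost
    next_state = min(state + 1, N_STATES - 1)
    return next_state, -0.1 * next_state  # waiting cost grows with the queue

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1
state = 0
for _ in range(20_000):
    # epsilon-greedy action selection
    if rng.random() < eps:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Q-learning temporal-difference update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

print("learned action per queue state:", Q.argmax(axis=1))
```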
Do all roads lead to Rome?
(2020)
Content website providers have two main goals: they seek to attract consumers and to keep them on their websites as long as possible. To reach potential consumers, they can utilize several online channels, such as paid search results or advertisements on social media, all of which usually require a substantial marketing budget. However, with rising user numbers of online communication tools, website providers increasingly integrate social sharing buttons on their websites to encourage existing consumers to refer content to their social networks. While little is known about this social form of guiding consumers to a content website, the study proposes that the way in which consumers reach a website is related to their stickiness to the website and their propensity to refer content to others. Using a unique clickstream data set of a video-on-demand website, the study compares consumers referred by their social network with consumers arriving at the website via organic search or social media advertisements in terms of stickiness (e.g., visit length, number of page views, video starts) and referral likelihood. The results show that consumers who arrive through social referrals spend more time on the website, view more pages, and start more videos than consumers who respond to social media advertisements, but less than those coming through organic search. Concerning referral propensity, the results indicate that consumers attracted to a website through social referrals are more likely to refer content to others than those who came through organic search or social media advertisements. The study offers direct insights for managers and recommends that they increase their efforts to promote social referrals on their websites.
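A channel comparison of this kind can be sketched as a simple groupby over session-level clickstream records, as below; the column names, channel labels, and values are invented, and the study's actual data set and models are richer.

```python
# Hedged sketch: comparing stickiness and referral propensity across
# acquisition channels in simulated session-level clickstream data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
channels = ["social_referral", "organic_search", "social_media_ad"]
sessions = pd.DataFrame({
    "channel": rng.choice(channels, size=5000),
    "visit_minutes": rng.exponential(8.0, 5000),
    "page_views": rng.poisson(6, 5000),
    "video_starts": rng.poisson(2, 5000),
    "made_referral": rng.random(5000) < 0.05,
})

summary = sessions.groupby("channel").agg(
    visit_minutes=("visit_minutes", "mean"),
    page_views=("page_views", "mean"),
    video_starts=("video_starts", "mean"),
    referral_rate=("made_referral", "mean"),
)
print(summary.round(2))
```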