Eskalation des Commitments in Wirtschaftsinformatik Projekten: eine kognitiv-affektive Perspektive
(2024)
Projects in the field of information systems (IS projects) are central to executing corporate strategies and sustaining competitive advantage, yet they frequently exceed their budgets, overrun their schedules, and exhibit high failure rates. This dissertation examines the psychological foundations of human behavior, in particular cognition and emotion, in connection with a widespread problem in IS project management: the tendency to persist with failing courses of action, known as escalation of commitment (EoC).
Using a mixed-methods research approach combining qualitative and quantitative methods, my dissertation investigates the emotional and cognitive foundations of the decision-making behind escalating commitment to failing IS projects, and how it develops over time. The results of a psychophysiological laboratory experiment provide evidence on the predictions of cognitive dissonance theory versus coping theory regarding the role of negative and complex situational emotions, and contribute to a better understanding of how escalation tendencies change during sequential decision-making as a result of cognitive learning effects. Using psychophysiological measurements, including data triangulation between electrodermal and cardiovascular activity as well as AI-based analysis of facial microexpressions, this research reveals physiological markers of escalating commitment. Complementing the experiment, a qualitative analysis of text-based reflections during escalation situations shows that decision-makers employ various cognitive reasoning patterns to justify escalating behavior, pointing to a sequence of four distinct cognitive phases.
By integrating the qualitative and quantitative findings, this dissertation develops a comprehensive theoretical model of how cognition and emotion influence escalating commitment over time. I propose that escalating commitment is a cyclical adaptation of mental models, characterized by changes in cognitive reasoning patterns, variations in temporal cognition mode, and interactions with situational emotions and their anticipation. The main contribution of this work lies in disentangling the emotional and cognitive mechanisms that drive escalating commitment in the context of IS projects. The findings help improve the quality of decisions under uncertainty and provide a foundation for developing de-escalation strategies. Stakeholders in IS projects that are going off track should be aware of the tendency to persist with failing actions and of the importance of the underlying emotional and cognitive dynamics.
The remarkable antifouling properties of zwitterionic polymers in controlled environments are often counteracted by their poor mechanical stability. To improve the mechanical stability of zwitterionic hydrogels, the effect of increased crosslinker densities was therefore explored. In a first approach, terpolymers of the zwitterionic monomer 3-[N-2-(methacryloyloxy)ethyl-N,N-dimethyl]ammonio propane-1-sulfonate (SPE), the hydrophobic monomer butyl methacrylate (BMA), and the photo-crosslinker 2-(4-benzoylphenoxy)ethyl methacrylate (BPEMA) were synthesized. Thin hydrogel coatings of the copolymers were then produced and photo-crosslinked. Studies of the swollen hydrogel films showed that not only the mechanical stability but also, unexpectedly, the antifouling properties were improved by the presence of hydrophobic BMA units in the terpolymers.
Based on the positive results obtained with the amphiphilic terpolymers, and in order to further test the impact of hydrophobicity on both the antifouling properties of zwitterionic hydrogels and their mechanical stability, a new amphiphilic zwitterionic methacrylic monomer, 3-((2-(methacryloyloxy)hexyl)dimethylammonio)propane-1-sulfonate (M1), was synthesized in good yields in a multistep synthesis. Homopolymers of M1 were obtained by free-radical polymerization. Similarly, terpolymers of M1, the zwitterionic monomer SPE, and the photo-crosslinker BPEMA were synthesized by free-radical copolymerization and thoroughly characterized, including their solubilities in selected solvents.
Also, a new family of vinyl amide zwitterionic monomers, namely 3-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)propane-1-sulfonate (M2), 4-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)butane-1-sulfonate (M3), and 3-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)propyl sulfate (M4), together with the new photo-crosslinker 4-benzoyl-N-vinylbenzamide (M5), which is well-suited for copolymerization with vinyl amides, is introduced within the scope of the present work. The monomers are synthesized in good yields via a multistep synthesis. Homopolymers of the new vinyl amide zwitterionic monomers are obtained by free-radical polymerization and thoroughly characterized. Solubility tests show that the homopolymers produced are fully soluble in water, evidencing their high hydrophilicity. Copolymerization of the vinyl amide zwitterionic monomers M2, M3, and M4 with the vinyl amide photo-crosslinker M5 proved to require very specific polymerization conditions. Nevertheless, copolymers were successfully obtained by free-radical copolymerization under appropriate conditions.
Moreover, in an attempt to mitigate the intrinsic hydrophobicity introduced into the copolymers by the photo-crosslinkers, and based on the proven affinity of quaternized diallylamines to copolymerize with vinyl amides, a new quaternized diallylamine sulfobetaine photo-crosslinker, 3-(diallyl(2-(4-benzoylphenoxy)ethyl)ammonio)propane-1-sulfonate (M6), is synthesized. However, despite its a priori promising suitability for copolymerization, copolymerization with the vinyl amide zwitterionic monomers could not be achieved.
Beerdigen oder verbrennen?
(2024)
During the last decades, therapeutic proteins have risen to great significance in the pharmaceutical industry. As non-human proteins introduced into the human body cause a distinct immune reaction that triggers their rapid clearance, most newly approved protein pharmaceuticals are shielded by modification with synthetic polymers to significantly improve their blood circulation time. All such clinically approved protein-polymer conjugates contain polyethylene glycol (PEG), and its conjugation is denoted as PEGylation. However, many patients develop anti-PEG antibodies, which cause a rapid clearance of PEGylated molecules upon repeated administration. Therefore, the search for alternative polymers that can replace PEG in therapeutic applications has become important. In addition, although the blood circulation time is significantly prolonged, the therapeutic activity of some conjugates is decreased compared to the unmodified protein. The reason is that these conjugates are formed by the traditional conjugation method that addresses the protein's lysine side chains. As proteins have many solvent-exposed lysines, this results in a somewhat uncontrolled attachment of polymer chains, leading to a mixture of regioisomers, with some of them eventually affecting the therapeutic performance.
This thesis investigates a novel method for ligating macromolecules in a site-specific manner, using enzymatic catalysis. Sortase A is used as the enzyme: It is a well-studied transpeptidase which is able to catalyze the intermolecular ligation of two peptides. This process is commonly referred to as sortase-mediated ligation (SML). SML constitutes an equilibrium reaction, which limits product yield. Two previously reported methods to overcome this major limitation were tested with polymers without using an excessive amount of one reactant.
Specific C- or N-terminal peptide sequences (recognition sequence and nucleophile) as part of the protein are required for SML. The complementary peptide was located at the polymer chain end. Grafting-to was used to avoid damaging the protein during polymerization. To be able to investigate all possible combinations (protein-recognition sequence and nucleophile-protein as well as polymer-recognition sequence and nucleophile-polymer) all necessary building blocks were synthesized. Polymerization via reversible deactivation radical polymerization (RDRP) was used to achieve a narrow molecular weight distribution of the polymers, which is required for therapeutic use.
The synthesis of the polymeric building blocks was started by synthesizing the peptide via automated solid-phase peptide synthesis (SPPS) to avoid post-polymerization attachment and to enable easy adaptation of changes in the peptide sequence. To account for the different functionalities (free N- or C-terminus) required for SML, different linker molecules between resin and peptide were used.
To facilitate purification, the chain transfer agent (CTA) for reversible addition-fragmentation chain-transfer (RAFT) polymerization was coupled to the resin-immobilized recognition sequence peptide. The acrylamide and acrylate-based monomers used in this thesis were chosen for their potential to replace PEG.
Following that, surface-initiated (SI) ATRP and RAFT polymerization were attempted, but failed. As a result, the newly developed method of xanthate-supported photo-iniferter (XPI) RAFT polymerization in solution was used successfully to obtain a library of various peptide-polymer conjugates with different chain lengths and narrow molar mass distributions.
After peptide side chain deprotection, these constructs were used first to ligate two polymers via SML, which was successful but revealed a limit in polymer chain length (max. 100 repeat units). When utilizing equimolar amounts of reactants, the use of Ni2+ ions in combination with a histidine after the recognition sequence to remove the cleaved peptide from the equilibrium maximized product formation with conversions of up to 70 %.
Finally, a model protein and a nanobody with promising properties for therapeutic use were biotechnologically modified to contain the peptide sequences required for SML. Using the model protein for C- or N-terminal SML with various polymers did not result in protein-polymer conjugates. The reason is most likely the lack of accessibility of the protein termini to the enzyme. Using the nanobody for C-terminal SML, on the other hand, was successful. However, a similar polymer chain length limit was observed as in polymer-polymer SML. Furthermore, in the synthesis of protein-polymer conjugates, it was more effective to shift the SML equilibrium by using an excess of polymer than by employing the Ni2+ ion strategy.
Overall, the experimental data from this work provides a good foundation for future research in this promising field; however, more research is required to fully understand the potential and limitations of using SML for protein-polymer synthesis. In future, the method explored in this dissertation could prove to be a very versatile pathway to obtain therapeutic protein-polymer conjugates that exhibit high activities and long blood circulation times.
This article analyses incremental institutional change and subsequent organizational and performance outcomes of the digital transformation from a comparative perspective. Through 31 expert interviews, the authors compare two digitalized public services in Germany. Two digitalization approaches are identified. The voluntary, decentralized bottom-up approach involves layering of new rules, limited organizational restructuring, and performance deficits. Conversely, the compulsory, top-down approach with centralized control facilitates displacement of existing rules and far-reaching organizational change; in this study, it is also associated with improved performance.
This article compares municipal administrative digitalization in Germany, Austria, and Switzerland (the DACH countries) as representatives of the continental-European federal administrative tradition with differing digitalization approaches and levels of progress. Based on interviews with 22 experts, observations in one municipality per country, and analyses of documents, literature, and secondary data, the study examines how administrative digitalization is organized in the multi-level system, what role the administrative profile plays, and which innovation priorities the municipalities set with regard to service delivery and internal processes. The results show that the high degree of local autonomy enables municipalities to set their own priorities in administrative digitalization. At the same time, the strongly intertwined, complex decision-making structures and high coordination requirements of administratively federal systems, which are most pronounced in Germany, somewhat weaker in Austria, and weakest in Switzerland, act as barriers to digitalization. Furthermore, the findings point to a unitarizing effect of administrative digitalization as a reform area. Overall, the study contributes to a better understanding of the challenges that administrative digitalization poses for federal-decentralized administrative models.
The use of information and communication technology (ICT), specialized software applications, and process automation is changing case processing and service delivery in public administration, and with them the tasks, working conditions, and personnel structures. In processing applications and issuing decisions in regulatory and benefits administration, ICT is taking on not only a supporting role but increasingly also a guiding or decisive one. Depending on its concrete design, advancing digitalization can enable holistic case processing, but can also restrict it. Overall, it may lead to a reorganization of the public-sector occupational field.
The growing use of digital tools in policy implementation has altered the work of street-level bureaucrats, who are granted substantial discretionary power in decision-making. Digital tools can constrain discretionary power, as the curtailment thesis proposes, or serve as action resources, as the enablement thesis suggests. This article assesses empirical evidence of the impact of digital tools on street-level work and decision-making in service-oriented and regulation-oriented organisations, based on a systematic literature review and thematic qualitative content analysis of 36 empirical studies published until 2021. The findings demonstrate different effects with regard to the role of digital tools and the core tasks of the public administration, depending on political and managerial goals and the consequent system design. Leading or decisive digital tools mostly curtail discretion, especially in service-oriented organisations. In contrast, an enhanced information base or recommendations for action enable decision-making, in particular in regulation-oriented organisations. By showing how street-level bureaucrats actively try to resist the curtailing effects of rigid design in order to address individual circumstances, for instance by establishing ways of coping such as rule bending or rule breaking, using personal resources, or prioritising among clients, this study demonstrates the importance of the continuation thesis and the persistently crucial role of human judgement in policy implementation.
Legitimiertes Unrecht
(2024)
The Supreme Court of the GDR was an integral part of the socialist state leadership and was subject to rigid ideological and organizational structures. It was closely embedded in the political agenda of the SED and enjoyed no independence whatsoever. The court's interpretation of GDR law was guided exclusively by the domestic and foreign policy interests of the SED. This also applied to its jurisprudence in cases of Republikflucht (flight from the republic) and its statutory predecessors. The highest judicial instance in the state was actively involved in shaping and implementing criminal justice against those who fled the republic, which contributed significantly to consolidating the SED's power. The present study analyzes rulings of the Supreme Court in their historical-political context and shows that its sentencing practice served party-political goals exclusively and was committed neither to the people nor to the genuine administration of justice. Furthermore, the study highlights the Supreme Court's decisive contribution to the gradual criminalization of GDR citizens. This casts a critical light on the role of the legal system in securing the rule of law and human rights in authoritarian regimes.
Citizenship
(2024)
The dynamic landscape of digital transformation entails an impact on industrial-age manufacturing companies that goes beyond product offerings, changing operational paradigms, and requiring an organization-wide metamorphosis. An initiative to address the given challenges is the creation of Digital Innovation Units (DIUs) – departments or distinct legal entities that use new structures and practices to develop digital products, services, and business models and support or drive incumbents’ digital transformation. With more than 300 units in German-speaking countries alone and an increasing number of scientific publications, DIUs have become a widespread phenomenon in both research and practice.
This dissertation examines the evolution process of DIUs in the manufacturing industry during their first three years of operation, through an extensive longitudinal single-case study and several cross-case syntheses of seven DIUs. Building on the lenses of organizational change and development, time, and socio-technical systems, this research provides insights into the fundamentals, temporal dynamics, socio-technical interactions, and relational dynamics of a DIU's evolution process. Thus, the dissertation promotes a dynamic understanding of DIUs and adds a two-dimensional perspective to the often one-dimensional view of these units and their interactions with the main organization throughout the startup and growth phases of a DIU.
Furthermore, the dissertation constructs a phase model that depicts the early stages of DIU evolution based on these findings and by incorporating literature from information systems research. As a result, it illustrates the progressive intensification of collaboration between the DIU and the main organization. After being implemented, the DIU sparks initial collaboration and instigates change within (parts of) the main organization. Over time, it adapts to the corporate environment to some extent, responding to changing circumstances in order to contribute to long-term transformation. Temporally, the DIU drives the early phases of cooperation and adaptation in particular, while the main organization triggers the first major evolutionary step and realignment of the DIU.
Overall, the thesis identifies DIUs as malleable organizational structures that are crucial for digital transformation. Moreover, it provides guidance for practitioners on the process of building a new DIU from scratch or optimizing an existing one.
The icosahedral non-hydrostatic large eddy model (ICON-LEM) was applied around the drift track of the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) in 2019 and 2020. The model was set up with horizontal grid scales between 100 m and 800 m on areas with radii of 17.5 km and 140 km. At its lateral boundaries, the model was driven by analysis data from the German Weather Service (DWD), downscaled by ICON in limited-area mode (ICON-LAM) with a horizontal grid scale of 3 km.
The aim of this thesis was to investigate the atmospheric boundary layer near the surface in the central Arctic during polar winter with a high-resolution mesoscale model. With its default settings, ICON-LEM does not represent the exchange processes in the Arctic boundary layer in accordance with the MOSAiC observations. The sea-ice scheme implemented in ICON does not include a snow layer on sea ice, which causes the sea-ice surface temperature to respond too slowly to atmospheric changes. To allow the sea-ice surface to respond faster to changes in the atmosphere, the sea-ice parameterization implemented in ICON was extended with an adapted heat capacity term.
The adapted sea-ice parameterization resulted in better agreement with the MOSAiC observations. However, the sea-ice surface temperature in the model is generally lower than observed due to biases in the downwelling long-wave radiation and the lack of complex surface structures, like leads. The large eddy resolving turbulence closure yielded a better representation of the lower boundary layer under strongly stable stratification than the non-eddy-resolving turbulence closure. Furthermore, the integration of leads into the sea-ice surface reduced the overestimation of the sensible heat flux for different weather conditions.
The results of this work help to better understand boundary layer processes in the central Arctic during the polar night. High-resolution mesoscale simulations are able to represent interactions at small temporal and spatial scales and help to further develop parameterizations, also for application in regional and global models.
ADHS bei Jugendlichen
(2024)
ADHD was long considered a disorder of childhood, but up to 80% of patients are still affected as adolescents. They in particular need help with their problems.
At school they more often have to repeat a grade; in the social and emotional sphere there are conflicts with peers and parents. Left untreated, they risk mental disorders, substance abuse, or delinquent behavior.
The present learning training program is the first multimodal treatment concept for adolescents aged 12 to 17. It addresses concrete problems and tasks from school and everyday life in order to derive general strategies from them. Parents and teachers are intensively involved in the treatment.
Concepts and techniques for 3D-embedded treemaps and their application to software visualization
(2024)
This thesis addresses concepts and techniques for interactive visualization of hierarchical data using treemaps. It explores (1) how treemaps can be embedded in 3D space to improve their information content and expressiveness, (2) how the readability of treemaps can be improved using level-of-detail and degree-of-interest techniques, and (3) how to design and implement a software framework for the real-time web-based rendering of treemaps embedded in 3D. With a particular emphasis on their application, use cases from software analytics are taken to test and evaluate the presented concepts and techniques.
Concerning the first challenge, this thesis shows that a 3D attribute space offers enhanced possibilities for the visual mapping of data compared to classical 2D treemaps. In particular, embedding in 3D allows for improved implementation of visual variables (e.g., by sketchiness and color weaving), provision of new visual variables (e.g., by physically based materials and in situ templates), and integration of visual metaphors (e.g., by reference surfaces and renderings of natural phenomena) into the three-dimensional representation of treemaps.
For the second challenge, the readability of an information visualization, the work shows that the generally higher visual clutter and increased cognitive load typically associated with three-dimensional information representations can be kept low in treemap-based representations of both small and large hierarchical datasets. By introducing an adaptive level-of-detail technique, we can not only declutter the visualization results, thereby reducing cognitive load and mitigating occlusion problems, but also summarize and highlight relevant data. Furthermore, this approach facilitates automatic labeling, supports the emphasis on data outliers, and allows visual variables to be adjusted via degree-of-interest measures.
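Abstracting from the thesis' concrete technique, the interplay of level-of-detail and degree-of-interest can be sketched as pruning a hierarchy wherever a DOI score (intrinsic interest minus distance from the focus, in the spirit of Furnas' classic formulation) falls below a threshold. All names, values, and thresholds below are illustrative, not taken from the thesis:

```python
# Minimal sketch of degree-of-interest-driven level-of-detail pruning
# for a hierarchy. Illustrative only, not the thesis' implementation.

class Node:
    def __init__(self, name, children=None, interest=0.0):
        self.name = name
        self.children = children or []
        self.interest = interest  # a priori importance, e.g. a software metric

def doi(node, depth, focus_depth=0, bias=1.0):
    """Degree of interest: intrinsic interest minus distance from the focus."""
    return bias * node.interest - abs(depth - focus_depth)

def prune(node, depth=0, threshold=-1.0):
    """Collapse subtrees whose DOI falls below the threshold (LOD reduction)."""
    if doi(node, depth) < threshold:
        return Node(node.name + " (collapsed)", [], node.interest)
    return Node(node.name,
                [prune(c, depth + 1, threshold) for c in node.children],
                node.interest)

root = Node("src", [
    Node("core", [Node("render.ts", interest=2.0)], interest=1.5),
    Node("vendor", [Node("lib.js", interest=0.1)], interest=0.2),
], interest=3.0)

lod_root = prune(root, threshold=0.5)
print([c.name for c in lod_root.children])
```

Collapsed nodes would then be drawn as aggregated blocks, which is where decluttering and automatic labeling come in.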
The third challenge is addressed by developing a real-time rendering framework with WebGL and accumulative multi-frame rendering. The framework removes hardware constraints and graphics API requirements, reduces interaction response times, and simplifies high-quality rendering. At the same time, the implementation effort for a web-based deployment of treemaps is kept reasonable.
The presented visualization concepts and techniques are applied and evaluated for use cases in software analysis. In this domain, data about software systems, especially about the state and evolution of the source code, does not have a descriptive appearance or natural geometric mapping, making information visualization a key technology here. In particular, software source code can be visualized with treemap-based approaches because of its inherently hierarchical structure. With treemaps embedded in 3D, we can create interactive software maps that visually map software metrics, software developer activities, or information about the evolution of software systems alongside their hierarchical module structure.
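As a hypothetical illustration of the space-filling principle behind such software maps, the following sketch lays out a metric-weighted module hierarchy with a basic slice-and-dice treemap. The thesis embeds such layouts in 3D; this 2D version only shows how rectangle areas become proportional to a software metric such as lines of code:

```python
# Basic slice-and-dice treemap layout over a module hierarchy given as
# nested dicts whose leaves are metric values. Illustrative sketch only.

def weight(tree):
    """Total metric weight of a (sub)tree given as {name: subtree-or-number}."""
    if isinstance(tree, (int, float)):
        return tree
    return sum(weight(c) for c in tree.values())

def layout(tree, x, y, w, h, horizontal=True, out=None, prefix=""):
    """Partition the rectangle (x, y, w, h) proportionally to subtree
    weights, alternating the split direction per hierarchy level."""
    if out is None:
        out = {}
    if isinstance(tree, (int, float)):
        out[prefix] = (x, y, w, h)
        return out
    total = weight(tree)
    offset = 0.0
    for name, child in tree.items():
        frac = weight(child) / total
        if horizontal:
            rect = (x + offset * w, y, w * frac, h)
        else:
            rect = (x, y + offset * h, w, h * frac)
        offset += frac
        layout(child, *rect, horizontal=not horizontal, out=out,
               prefix=prefix + "/" + name)
    return out

# Hypothetical module tree weighted by lines of code.
modules = {"core": {"render": 600, "scene": 200}, "util": 200}
rects = layout(modules, 0, 0, 1, 1)
print(rects["/util"])  # occupies 20% of the unit square
```

A 3D embedding would additionally map metrics to heights and materials of the resulting blocks.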
Discussions on remaining challenges and opportunities for future research for 3D-embedded treemaps and their applications conclude the thesis.
In this visualization, the authors show changes in family patterns by different race groups across two cohorts. Using data from the National Longitudinal Survey of Youth 1979 (born from 1957 to 1965) and 1997 (born from 1980 to 1984), the authors visualize the relationship-parenthood state distributions at each age between 15 and 35 years by race and cohort. The results suggest the rise of cohabiting mothers and the decline of married and divorced mothers among women born from 1980 to 1984. Black women born from 1980 to 1984 were more likely to experience single/childless and single/parent status compared with Black women born from 1957 to 1965. Although with some visible postponement in the recent cohort, white women in both cohorts were more likely to experience married/parent status than other race groups. The decline in married/parent status across the two generations was sharpest among Hispanic women. These descriptive findings highlight the importance of identifying race when discussing changes in family formation and dissolution trends across generations.
Rapidly growing seismic and macroseismic databases and simplified access to advanced machine learning methods have in recent years opened up vast opportunities to address challenges in engineering and strong-motion seismology from novel, data-centric perspectives. In this thesis, I explore the opportunities of such perspectives for the tasks of ground motion modeling and rapid earthquake impact assessment, tasks with major implications for long-term earthquake disaster mitigation.
In my first study, I utilize the rich strong motion database from the Kanto basin, Japan, and apply the U-Net artificial neural network architecture to develop a deep learning based ground motion model. The operational prototype provides statistical estimates of expected ground shaking, given descriptions of a specific earthquake source, wave propagation paths, and geophysical site conditions. The U-Net interprets ground motion data in its spatial context, potentially taking into account, for example, the geological properties in the vicinity of observation sites. Predictions of ground motion intensity are thereby calibrated to individual observation sites and earthquake locations.
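Independently of the thesis' actual architecture and data, the core idea of interpreting gridded inputs in their spatial context can be illustrated with a single hand-written 2D convolution, the building block of U-Net-style networks. The grid values and kernel here are invented for demonstration:

```python
# Not the thesis' U-Net: a minimal 2D convolution over a grid of site
# parameters (e.g. basin depth), showing how convolutional architectures
# incorporate the spatial neighbourhood of each observation site.

def conv2d(grid, kernel):
    """Valid-mode 2D convolution (no kernel flipping, i.e. cross-correlation)."""
    gh, gw = len(grid), len(grid[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(gh - kh + 1):
        row = []
        for j in range(gw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += grid[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 averaging kernel: each output value reflects a whole neighbourhood
# of sites rather than a single site, which is how geological context in
# the vicinity of a site can enter the prediction.
site_grid = [[0, 0, 0, 0],
             [9, 0, 0, 0],
             [0, 0, 0, 0]]
mean_kernel = [[1 / 9] * 3] * 3
result = conv2d(site_grid, mean_kernel)
print(result)
```

A U-Net stacks many such (learned) convolutions with down- and upsampling, but the neighbourhood-aggregation principle is the same.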
The second study addresses the explicit incorporation of rupture forward directivity into ground motion modeling. Incorporation of this phenomenon, causing strong, pulse-like ground shaking in the vicinity of earthquake sources, is usually associated with an intolerable increase in computational demand during probabilistic seismic hazard analysis (PSHA) calculations. I suggest an approach in which I utilize an artificial neural network to efficiently approximate the average, directivity-related adjustment to ground motion predictions for earthquake ruptures from the 2022 New Zealand National Seismic Hazard Model. The practical implementation in an actual PSHA calculation demonstrates the efficiency and operational readiness of my model. In a follow-up study, I present a proof of concept for an alternative strategy in which I target generalizing applicability to ruptures other than those from the New Zealand National Seismic Hazard Model.
In the third study, I address the usability of pseudo-intensity reports obtained from macroseismic observations by non-expert citizens for rapid impact assessment. I demonstrate that the statistical properties of pseudo-intensity collections describing the intensity of shaking are correlated with the societal impact of earthquakes. In a second step, I develop a probabilistic model that, within minutes of an event, quantifies the probability of an earthquake to cause considerable societal impact. Under certain conditions, such a quick and preliminary method might be useful to support decision makers in their efforts to organize auxiliary measures for earthquake disaster response while results from more elaborate impact assessment frameworks are not yet available.
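One way to picture such a model, purely for illustration, is a logistic regression mapping summary statistics of the pseudo-intensity collection to an impact probability. The coefficients below are invented and are not those estimated in the thesis:

```python
# Illustrative sketch only: logistic model on summary statistics of
# citizen-reported pseudo-intensities. Coefficients are made up.
import math
import statistics

def impact_probability(intensities, report_count,
                       b0=-6.0, b_mean=0.8, b_count=0.5):
    """P(considerable impact) via logistic regression on the mean reported
    intensity and the (log-scaled) number of reports."""
    z = (b0
         + b_mean * statistics.mean(intensities)
         + b_count * math.log10(report_count + 1))
    return 1.0 / (1.0 + math.exp(-z))

# A weakly felt event with few reports vs. a strongly felt, widely
# reported event (synthetic numbers).
weak = impact_probability([2.1, 2.5, 3.0], report_count=40)
strong = impact_probability([6.5, 7.2, 8.0], report_count=5000)
print(round(weak, 3), round(strong, 3))
```

The appeal of such a formulation for rapid response is that both inputs are available within minutes of an event.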
The application of machine learning methods to datasets that only partially reveal the characteristics of Big Data qualifies the majority of the results obtained in this thesis as explorative insights rather than ready-to-use solutions to real-world problems. The practical usefulness of this work will be better assessed in the future by applying the approaches developed here to growing and increasingly complex datasets.
The urban heat island (UHI) effect, describing the elevated temperature of urban areas compared with their natural surroundings, can expose urban dwellers to additional heat stress, especially during hot summer days. A comprehensive understanding of UHI dynamics along with urbanization is of great importance to efficient heat stress mitigation strategies towards sustainable urban development. This is, however, still challenging due to the difficulty of isolating the influences of various contributing factors that interact with each other. In this work, I present a systematic and quantitative analysis of how urban intrinsic properties (e.g., urban size, density, and morphology) influence UHI intensity.
To this end, we innovatively combine urban growth modelling and urban climate simulation to separate the influence of urban intrinsic factors from that of background climate, so as to focus on the impact of urbanization on the UHI effect. The urban climate model can create a laboratory environment which makes it possible to conduct controlled experiments to separate the influences from different driving factors, while the urban growth model provides detailed 3D structures that can be then parameterized into different urban development scenarios tailored for these experiments. The novelty in the methodology and experiment design leads to the following achievements of our work.
First, we develop a stochastic gravitational urban growth model that can generate 3D structures varying in size, morphology, compactness, and density gradient. We compare various characteristics, like fractal dimensions (box-counting, area-perimeter scaling, area-population scaling, etc.), and radial gradient profiles of land use share and population density, against those of real-world cities from empirical studies. The model shows the capability of creating 3D structures resembling real-world cities. This model can generate 3D structure samples for controlled experiments to assess the influence of some urban intrinsic properties in question. [Chapter 2]
With the generated 3D structures, we run several series of simulations with urban structures varying in properties like size, density, and morphology under the same weather conditions. Analyzing how the canopy-layer urban heat island (CUHI) intensity, based on 2 m air temperature, varies in response to changes in the considered urban factors, we find that the CUHI intensity of a city is directly related to its built-up density and to an amplifying effect that urban sites have on each other. We propose a Gravitational Urban Morphology (GUM) indicator to capture this neighbourhood warming effect. We build a regression model to estimate the CUHI intensity based on urban size, urban gross building volume, and the GUM indicator. Taking the Berlin area as an example, we show that the regression model is capable of predicting the CUHI intensity under various urban development scenarios. [Chapter 3]
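The abstract names the three predictors (urban size, gross building volume, GUM indicator) but not the regression form; a minimal stdlib-only ordinary-least-squares sketch of such a model follows, with synthetic data standing in for the thesis's simulation outputs.

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations, solved with
    Gaussian elimination (stdlib only). X: list of feature rows, e.g.
    (urban size, building volume, GUM); y: CUHI intensities. Returns
    coefficients with an intercept in position 0. Illustrative sketch,
    not the thesis's calibrated model."""
    rows = [[1.0] + list(x) for x in X]
    k = len(rows[0])
    # normal equations A w = b
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # forward elimination with partial pivoting
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    # back substitution
    w = [0.0] * k
    for c in reversed(range(k)):
        w[c] = (b[c] - sum(A[c][j] * w[j] for j in range(c + 1, k))) / A[c][c]
    return w
```

Fitted on simulated scenarios, such a model predicts CUHI intensity for new urban development scenarios directly from their aggregate descriptors, which is how the abstract describes its use for the Berlin area.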
Based on the multi-annual average summer surface urban heat island (SUHI) intensity derived from land surface temperature, we further study how urban intrinsic factors influence the SUHI effect of the 5,000 largest urban clusters in Europe. We find a similar 3D GUM indicator to be an effective predictor of the SUHI intensity of these European cities. Together with other urban factors (vegetation condition, elevation, water coverage), we build different multivariate linear regression models and a climate-space-based Geographically Weighted Regression (GWR) model that better predict SUHI intensity. By investigating the roles that background climate factors play in modulating the coefficients of the GWR model, we extend the multivariate linear model to a nonlinear one by integrating climate parameters such as the average daily maximum temperature and latitude. This makes it applicable across a range of background climates. The nonlinear model outperforms the linear models in SUHI assessment, as it captures the interaction of urban factors with the background climate. [Chapter 4]
Our work reiterates the essential roles of urban density and morphology in shaping the urban thermal environment. In contrast to many previous studies that link bigger cities with higher UHI intensity, we show that cities larger in area do not necessarily experience a stronger UHI effect. In addition, the results extend our knowledge by demonstrating the influence of urban 3D morphology on the UHI effect. This underlines the importance of inspecting cities as a whole from a 3D perspective. While urban 3D morphology is an aggregated feature of small-scale urban elements, its influence on city-scale UHI intensity cannot simply be scaled up from that of its neighbourhood-scale components. Both the spatial composition and the configuration of urban elements need to be captured when quantifying urban 3D morphology, as nearby neighbourhoods also influence each other. Our model serves as a useful UHI assessment tool for the quantitative comparison of urban intervention and development scenarios. It can support harnessing the capacity for UHI mitigation through optimizing urban morphology, with the potential of integrating climate change into heat mitigation strategies.
Gewerblicher Rechtsschutz
(2024)
Industrial property law, comprising patent law, utility model law, design law, and trademark law, is an important component of the specialization areas in commercial law and industrial property protection. The subject is closely linked to the problems of the information society and is currently undergoing several changes. This outline (Grundriss) presents the field compactly and up to date; in particular, the most recent court decisions are incorporated. The work primarily presents the German legal situation, always taking into account the requirements of European Union law. The outline follows the structure of the statutes and concentrates on what is relevant for study and examinations. Its emphasis lies on the substantive law that is regularly so important for examination papers and term papers. For patent law, utility model law, design law, and trademark law, the following are presented in each case: the subject matter of protection, the holder of the right, the formal requirements for the creation of the right, the scope of protection, the term of protection, the transferability of the right, and the legal consequences of infringement. The subject is presented vividly with many examples, cases, and graphics. Advantages at a glance: a compact presentation of the current law by the author of the volumes on competition law, antitrust law, and copyright law in the Grundriss series; numerous examples, cases, and overviews for illustration.
The second edition brings the work up to date. In particular, it takes into account the Second Act on the Simplification and Modernization of Patent Law (2. PatMoG), the Act Adapting Patent Law Provisions on the Basis of the European Patent Reform, the Act Implementing the Directive on Representative Actions, and the Act to Strengthen Fair Competition. Also incorporated are the landmark decisions EuGH WRP 2020, 438 - Constantin Film Produktion/EUIPO ("Fack Ju Göthe"); EuGH WRP 2020, 707 - Coty Germany/Amazon Services Europe et al.; EuGH WRP 2020, 707 - Gömböc Kutató, Szolgáltató és Kereskedelmi/Szellemi Tulajdon Nemzeti Hivatala; EuGH WRP 2020, 1007 - mk advokaten/MBK Rechtsanwälte; BGH GRUR 2019, 496 - Spannungsversorgungsvorrichtung; BGH WRP 2019, 1311 - Ortlieb II; BGH WRP 2020, 1311 - Quadratische Tafelschokoladenverpackung; and BGH GRUR 2022, 893 - Aminosäureproduktion.
Urheberrecht
(2024)
Copyright law is an important component of the electives in commercial law and industrial property protection. The field concerns the rights in works of literature, science, and art. The subject is closely linked to the problems of the information society and is currently undergoing numerous changes. This outline (Grundriss) presents the field compactly and up to date; in particular, the most recent court decisions are incorporated. The work covers in depth not only copyright law in the narrow sense but also parts of the Art Copyright Act (Kunsturhebergesetz), especially the right to one's own image. The outline follows the structure of the statute and concentrates on what is relevant for study and examinations. Its emphases are: the concept of a work, the transfer of usage and exploitation rights, the legal consequences of copyright infringement, the Copyright Service Provider Act (UrhDaG), and the right to one's own image under §§ 22, 23 KunstUrhG. The subject is presented vividly with many examples, schemata, and graphics. Advantages at a glance: a compact presentation of the current law by the author of the volumes on industrial property law, unfair competition law, antitrust law, and copyright law in the Grundriss series; numerous examples and overviews for illustration.
The fifth edition brings the work up to date. In particular, it incorporates the Act Adapting Copyright Law to the Requirements of the Digital Single Market, the Act to Strengthen Fair Competition, and numerous recent landmark decisions of the BGH: on the waiver of the right of attribution under § 13 sentence 2 UrhG (WRP 2023, 1469 - Microstock-Portal), on platform liability (WRP 2022, 1106 - YouTube II and 1120 - uploaded II), on free use (WRP 2022, 729 - Porsche 911), and on image rights (GRUR 2021, 1222 - Die Auserwählten; WRP 2022, 601 - Tina Turner).
We analyze how conventional emissions trading schemes (ETS) can be modified by introducing “clean-up certificates” to allow for a phase of net-negative emissions. Clean-up certificates bundle the permission to emit CO2 with the obligation for its removal. We show that demand for such certificates is determined by cost-saving technological progress, the discount rate and the length of the compliance period. Introducing extra clean-up certificates into an existing ETS reduces near-term carbon prices and mitigation efforts. In contrast, substituting ETS allowances with clean-up certificates reduces cumulative emissions without depressing carbon prices or mitigation in the near term. We calibrate our model to the EU ETS and identify reforms where simultaneously (i) ambition levels rise, (ii) climate damages fall, (iii) revenues from carbon prices rise and (iv) carbon prices and aggregate mitigation cost fall. For reducing climate damages, roughly half of the issued clean-up certificates should replace conventional ETS allowances. In the context of the EU ETS, a European Carbon Central Bank could manage the implementation of clean-up certificates and could serve as an enforcement mechanism.
This chapter provides an overview of methods to capture developments and changes in motivational beliefs. Motivational research has recently begun to venture beyond just examining average developmental trends in motivational variables by starting to investigate how developmental changes in motivational variables differ between and within individuals in different learning situations and across contexts. Although studies have started to uncover differences in motivational changes, a systematic overview of suitable methods for capturing motivational differences in developmental processes is still missing. In this chapter, we review key methods of change modelling, bringing together variable-centred approaches, such as growth modelling and true intraindividual change (TIC) models, and person-centred approaches, such as latent transition and growth mixture models. We illustrate the value of the reviewed statistical methods for the analysis of context-specific motivational changes by reviewing recent empirical studies that identify different patterns and trajectories of such motivational beliefs across time. Our focus is thereby on research grounded in situated expectancy-value theory as a core theory in motivational research.
Prognose, Planung, Sicherung
(2024)
Homomorphisms are a fundamental concept in mathematics expressing the similarity of structures. They provide a framework that captures many of the central problems of computer science with close ties to various other fields of science. Thus, many studies over the last four decades have been devoted to the algorithmic complexity of homomorphism problems. Despite their generality, it has been found that non-uniform homomorphism problems, where the target structure is fixed, frequently feature complexity dichotomies. Exploring the limits of these dichotomies represents the common goal of this line of research.
We investigate the problem of counting homomorphisms to a fixed structure over a finite field of prime order and its algorithmic complexity. Our emphasis is on graph homomorphisms and the resulting problem #_{p}Hom[H] for a graph H and a prime p. The main research question is how counting over a finite field of prime order affects the complexity.
In the first part of this thesis, we tackle the research question in its generality and develop a framework for studying the complexity of counting problems based on category theory. In the absence of problem-specific details, results in the language of category theory provide a clear picture of the properties needed and highlight common ground between different branches of science. The proposed problem #Mor^{C}[B] of counting the number of morphisms to a fixed object B of C is abstract in nature and encompasses important problems like constraint satisfaction problems, which serve as a leading example for all our results. We find explanations and generalizations for a plethora of results in counting complexity. Our main technical result is that specific matrices of morphism counts are non-singular. The strength of this result lies in its algebraic nature. First, our proofs rely on carefully constructed systems of linear equations, which we know to be uniquely solvable. Second, by exchanging the field over which the matrix is defined for a finite field of order p, we obtain analogous results for modular counting. For the latter, cancellations are implied by automorphisms of order p, but intriguingly we find that these present the only obstacle to translating our results from exact counting to modular counting. If we restrict our attention to reduced objects without automorphisms of order p, we obtain results analogous to those for exact counting. This is underscored by a confluent reduction that allows this restriction by constructing a reduced object for any given object. We emphasize the strength of the categorical perspective by applying the duality principle, which yields immediate consequences for the dual problem of counting the number of morphisms from a fixed object.
In the second part of this thesis, we focus on graphs and the problem #_{p}Hom[H]. We conjecture that automorphisms of order p capture all possible cancellations and that, for a reduced graph H, the problem #_{p}Hom[H] features a complexity dichotomy analogous to the one given for exact counting by Dyer and Greenhill. This generalizes the conjecture by Faben and Jerrum for the modulus 2. The criterion for tractability is that H is a collection of complete bipartite and reflexive complete graphs. From the findings of part one, we show that the conjectured dichotomy implies dichotomies for all quantum homomorphism problems, in particular counting vertex-surjective homomorphisms and compactions modulo p. Since the tractable cases in the dichotomy are solved by trivial computations, the study of the intractable cases remains. As the initial problem in a series of reductions capable of implying hardness, we employ the problem of counting weighted independent sets in a bipartite graph modulo a prime p. We show a dichotomy for this problem, stating that the trivial cases occurring when a weight is congruent to 0 modulo p are the only tractable cases. We reduce the possible structure of H to the bipartite case by a reduction to the restricted homomorphism problem #_{p}Hom^{bip}[H] of counting, modulo p, the homomorphisms between bipartite graphs that maintain a given order of bipartition. This reduction does not affect the accessibility of the technical results, thanks to the generality of the findings of part one. In order to prove the conjecture, it suffices to show that #_{p}Hom^{bip}[H] is #_{p}P-hard for every connected bipartite graph H that is not complete. Through a rigorous structural study of bipartite graphs, we establish this result for the rich class of bipartite graphs that are (K_{3,3}\{e}, domino)-free.
This overcomes in particular the substantial hurdle imposed by squares, which leads us to explore the global structure of H and prove the existence of explicit structures that imply hardness.
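To make the counting problem concrete: #_{p}Hom[H] asks for the number of homomorphisms (edge-preserving vertex maps) from an input graph G to the fixed graph H, taken modulo the prime p. The brute-force sketch below is exponential in |V(G)| and purely illustrative; the thesis is about when this count is tractable or #_{p}P-hard, not about this naive algorithm.

```python
from itertools import product

def count_hom_mod_p(G_edges, G_vertices, H_edges, H_vertices, p):
    """Count homomorphisms from graph G to graph H modulo prime p by
    brute-force enumeration of all vertex maps. A map f is a homomorphism
    iff every edge (u, v) of G is sent to an edge (f(u), f(v)) of H."""
    H_adj = set()
    for u, v in H_edges:
        H_adj.add((u, v))
        H_adj.add((v, u))  # treat H as undirected
    count = 0
    for assignment in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, assignment))
        if all((f[u], f[v]) in H_adj for u, v in G_edges):
            count += 1
    return count % p
```

For example, a single edge maps into the triangle K_3 in 6 ways (ordered pairs of adjacent vertices), so #_{5}Hom[K_3] on that input is 1.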
“Ick bin een Berlina”
(2024)
Background: Robots are increasingly used as interaction partners with humans. Social robots are designed to follow expected behavioral norms when engaging with humans and are available with different voices and even accents. Some studies suggest that people prefer robots to speak in the user’s dialect, while others indicate a preference for different dialects.
Methods: Our study examined the impact of the Berlin dialect on the perceived trustworthiness and competence of a robot. One hundred and twenty German native speakers (mean age = 32 years, SD = 12 years) watched an online video featuring a NAO robot speaking either in the Berlin dialect or standard German and assessed its trustworthiness and competence.
Results: We found a positive relationship between participants’ self-reported Berlin dialect proficiency and trust in the dialect-speaking robot. Only when demographic factors were controlled for did we find a positive association between participants’ dialect proficiency and dialect performance and their assessment of the robot’s competence for the standard German-speaking robot. Participants’ age, gender, length of residency in Berlin, and the device used to respond also influenced assessments. Finally, the robot’s competence positively predicted its trustworthiness.
Discussion: Our results inform the design of social robots and emphasize the importance of device control in online experiments.
‘Modern talking’
(2024)
Despite growing interest, we lack a clear understanding of how the arguably ambiguous phenomenon of agile is perceived in government practice. This study aims to alleviate this puzzle by investigating how managers and employees in German public sector organisations make sense of agile as a spreading management fashion in the form of narratives. This is important because narratives function as innovation carriers that ultimately influence the manifestations of the concept in organisations. Based on a multi-case study of 31 interviews and 24 responses to a qualitative online survey conducted in 2021 and 2022, we provide insights into what public sector managers, employees and consultants understand (and, more importantly, do not understand) as agile and how they weave it into their existing reality of bureaucratic organisations. We uncover three meta-narratives of agile government, which we label ‘renew’, ‘complement’ and ‘integrate’. In particular, the meta-narratives differ in their positioning of how agile interacts with the characteristics of bureaucratic organisations. Importantly, we also show that agile as a management fad serves as a projection surface for what actors want from a modern and digital organisation. Thus, the vocabulary of agile government within the narratives is inherently linked to other diffusing phenomena such as new work or digitalisation.
Advancing digitalization is changing society and has far-reaching effects on people and companies. Fundamental to these changes are the new technological possibilities for processing data on an ever-increasing scale and for various purposes. The availability of large and high-quality data sets, especially those based on personal data, is crucial. They are used either to improve the productivity, quality, and individuality of products and services or to develop new types of services. Today, user behavior is tracked more actively and comprehensively than ever, despite increasing legal requirements for protecting personal data worldwide. This increasingly raises ethical, moral, and social questions, which have moved to the forefront of the political debate, not least due to prominent cases of data misuse. Given this discourse and the legal requirements, today's data management must fulfill three conditions: legality, i.e. legal conformity of use; ethical legitimacy; and, thirdly, the addition of value from a business perspective. Within the framework of these conditions, this cumulative dissertation pursues four research objectives, focusing on gaining a better understanding of
(1) the challenges of implementing privacy laws,
(2) the factors that influence customers' willingness to share personal data,
(3) the role of data protection for digital entrepreneurship, and
(4) the interdisciplinary scientific significance, its development, and its interrelationships.
Kaschrut
(2024)
We examine how the gender of business owners is related to the wages paid to female relative to male employees working in their firms. Using Finnish register data and employing firm fixed effects, we find that the gender pay gap is—starting from a gender pay gap of 11 to 12%—two to three percentage points lower for hourly wages in female-owned firms than in male-owned firms. Results are robust to how the wage is measured, as well as to various further robustness checks. More importantly, we find substantial differences between industries. While, for instance, in the manufacturing sector, the gender of the owner plays no role in the gender pay gap, in several service sector industries, like ICT or business services, no or a negligible gender pay gap can be found, but only when firms are led by female business owners. Businesses with male ownership maintain a gender pay gap of around 10% also in the latter industries. With increasing firm size, the influence of the gender of the owner, however, fades. In large firms, it seems that others—firm managers—determine wages and no differences in the pay gap are observed between male- and female-owned firms.
Werner Krause and Christina Gahn argue that we need to pay more attention to how the media communicate the results of opinion polls to the public. Reporting methodological details, such as margins of error, can alter citizens’ vote choices on election day. This has important implications for elections around the world.
Today, near-surface investigations are frequently conducted using non-destructive or minimally invasive methods of applied geophysics, particularly in the fields of civil engineering, archaeology, geology, and hydrology. One field that plays an increasingly central role in research and engineering is the examination of sedimentary environments, for example, for characterizing near-surface groundwater systems. A commonly employed method in this context is ground-penetrating radar (GPR). In this technique, an antenna emits short electromagnetic pulses into the subsurface, where they are reflected, refracted, or scattered at contrasts in electromagnetic properties (such as the water table). A receiving antenna records these signals in terms of their amplitudes and travel times. Analysis of the recorded signals allows for inferences about the subsurface, such as the depth of the groundwater table or the composition and characteristics of near-surface sediment layers. Due to the high resolution of the GPR method and continuous technological advancements, GPR data acquisition is increasingly performed in a three-dimensional (3D) fashion today.
Despite the considerable temporal and technical efforts involved in data acquisition and processing, the resulting 3D data sets (providing high-resolution images of the subsurface) are typically interpreted manually. This is generally an extremely time-consuming analysis step. Therefore, representative 2D sections highlighting distinctive reflection structures are often selected from the 3D data set. Regions showing similar structures are then grouped into so-called radar facies. The results obtained from 2D sections are considered representative of the entire investigated area. Interpretations conducted in this manner are often incomplete and highly dependent on the expertise of the interpreters, making them generally non-reproducible.
A promising alternative or complement to manual interpretation is the use of GPR attributes. Instead of using the recorded data directly, derived quantities characterizing distinctive reflection structures in 3D are applied for interpretation. Using various field and synthetic data sets, this thesis investigates which attributes are particularly suitable for this purpose. Additionally, the study demonstrates how selected attributes can be utilized through specific processing and classification methods to create 3D facies models. The ability to generate attribute-based 3D GPR facies models allows for partially automated and more efficient interpretations in the future. Furthermore, the results obtained in this manner describe the subsurface in a reproducible and more comprehensive manner than what has typically been achievable through manual interpretation methods.
To achieve the Paris climate target, deep emissions reductions have to be complemented with carbon dioxide removal (CDR). However, a portfolio of CDR options is necessary to reduce risks and potential negative side effects. Despite a large theoretical potential, ocean-based CDR such as ocean alkalinity enhancement (OAE) has so far been omitted from climate change mitigation scenarios. In this study, we provide a techno-economic assessment of large-scale OAE using hydrated lime ('ocean liming'). We address key uncertainties that determine the overall cost of ocean liming (OL), such as the CO2 uptake efficiency per unit of material, distribution strategies that avoid carbonate precipitation (which would compromise efficiency), and technology availability (e.g., solar calciners). We find that at economic costs of 130–295 $/tCO2 net-removed, ocean liming could be a competitive CDR option that could make a significant contribution towards the Paris climate target. As the techno-economic assessment identified no showstoppers, we argue for more research on ecosystem impacts; governance; monitoring, reporting, and verification; and technology development and assessment, to determine whether ocean liming and other OAE approaches should be considered as part of a broader CDR portfolio.
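As a back-of-the-envelope illustration of how such a per-tonne cost figure arises (the default numbers below are placeholders, not the study's calibrated values): hydrated lime Ca(OH)2 (74 g/mol) can in principle bind up to 2 mol of CO2 (44 g/mol) as dissolved bicarbonate, and the net removal cost divides the per-tonne cost of producing and distributing lime by the net CO2 uptake per tonne of lime.

```python
def net_removal_cost(lime_cost_per_t, uptake_mol_co2_per_mol_lime=1.7,
                     process_emissions_t_per_t=0.1):
    """Hypothetical back-of-envelope cost per tonne of net CO2 removed
    by ocean liming. Molar masses: Ca(OH)2 = 74 g/mol, CO2 = 44 g/mol.
    The uptake efficiency (< 2 mol/mol because of incomplete dissolution
    or precipitation) and process emissions are illustrative assumptions."""
    gross_uptake_t = uptake_mol_co2_per_mol_lime * 44.0 / 74.0  # tCO2 per t lime
    net_uptake_t = gross_uptake_t - process_emissions_t_per_t
    return lime_cost_per_t / net_uptake_t
```

With plausible placeholder inputs, results land in the low hundreds of dollars per tonne, the same order of magnitude as the 130–295 $/tCO2 range reported above; the real spread comes precisely from the uncertainties the study addresses (uptake efficiency, distribution strategy, calciner technology).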
של מי הנקמה?
(2024)
Du sollst nicht essen
(2024)
Although humans are biologically omnivores, there is no community that makes full use of all the foods available to it. Something is always left uneaten. Why we do not eat what we do not eat is what this edited volume illuminates from the perspectives of neuroscience, nutrition science, social science, and religious studies. A “religious Nutri-Score” provides information on the most important rules of abstention in Judaism, Christianity, and Islam. A photo series illustrates how certain dishes become sacred food on feasts and holidays. Not least, the volume shows ways in which people who follow different dietary rules can nevertheless eat together, including a practical test in the university cafeteria.
Moss-microbe associations are often characterised by syntrophic interactions between the microorganisms and their hosts, but the structure of the microbial consortia and their role in peatland development remain unknown.
In order to study microbial communities of dominant peatland mosses, Sphagnum and brown mosses, and the respective environmental drivers, four study sites representing different successional stages of natural northern peatlands were chosen on a large geographical scale: two brown moss-dominated, circumneutral peatlands from the Arctic and two Sphagnum-dominated, acidic peat bogs from subarctic and temperate zones.
The family Acetobacteraceae represented the dominant bacterial taxon of Sphagnum mosses from various geographical origins and formed an integral part of the moss core community. This core community was shared among all investigated bryophytes and consisted of few but highly abundant prokaryotes, many of which appear as endophytes of Sphagnum mosses. Moreover, brown mosses and Sphagnum mosses represent habitats for archaea, which had not previously been studied in association with peatland mosses. Euryarchaeota capable of methane production (methanogens) made up the majority of the moss-associated archaeal communities. Moss-associated methanogenesis was detected for the first time, but it was mostly negligible under laboratory conditions. In contrast, substantial moss-associated methane oxidation was measured on both brown mosses and Sphagnum mosses, supporting the idea that methanotrophic bacteria, as part of the moss microbiome, may contribute to the reduction of methane emissions from pristine and rewetted peatlands of the northern hemisphere.
Among the investigated abiotic and biotic environmental parameters, the peatland type and the host moss taxon were identified as having a major impact on the structure of moss-associated bacterial communities, in contrast to archaeal communities, whose structures were similar among the investigated bryophytes. For the first time, it was shown that different bog development stages harbour distinct bacterial communities, while at the same time a small core community is shared among all investigated bryophytes independent of geography and peatland type.
This thesis presents the first large-scale, systematic assessment of bacterial and archaeal communities associated with both brown mosses and Sphagnum mosses. It suggests that some host-specific microbial taxa have the potential to play a key role in host moss establishment and peatland development.
The increasing number of known exoplanets raises questions about their demographics and the mechanisms that shape planets into how we observe them today. Young planets in close-in orbits are exposed to harsh environments due to the host star being magnetically highly active, which results in high X-ray and extreme UV fluxes impinging on the planet. Prolonged exposure to this intense photoionizing radiation can cause planetary atmospheres to heat up, expand and escape into space via a hydrodynamic escape process known as photoevaporation. For super-Earth and sub-Neptune-type planets, this can even lead to the complete erosion of their primordial gaseous atmospheres. A factor of interest for this particular mass-loss process is the activity evolution of the host star. Stellar rotation, which drives the dynamo and with it the magnetic activity of a star, changes significantly over the stellar lifetime. This strongly affects the amount of high-energy radiation received by a planet as stars age. At a young age, planets still host warm and extended envelopes, making them particularly susceptible to atmospheric evaporation. Especially in the first gigayear, when X-ray and UV levels can be 100–10,000 times higher than for the present-day sun, the characteristics of the host star and the detailed evolution of its high-energy emission are of importance.
In this thesis, I study the impact of stellar activity evolution on the high-energy-induced atmospheric mass loss of young exoplanets. The PLATYPOS code was developed as part of this thesis to calculate photoevaporative mass-loss rates over time. The code, which couples parameterized planetary mass-radius relations with an analytical hydrodynamic escape model, was used, together with Chandra and eROSITA X-ray observations, to investigate the future mass loss of the two young multiplanet systems V1298 Tau and K2-198. Further, in a numerical ensemble study, the effect of a realistic spread of activity tracks on the small-planet radius gap was investigated for the first time. The work in this thesis shows that for individual systems, in particular if planetary masses are unconstrained, the difference between a young host star following a low-activity track versus a high-activity one can have major implications: the exact shape of the activity evolution can determine whether a planet can hold on to some of its atmosphere, or completely loses its envelope, leaving only the bare rocky core behind. For an ensemble of simulated planets, an observationally motivated distribution of activity tracks does not substantially change the final radius distribution at ages of several gigayears. My simulations indicate that the overall shape and slope of the resulting small-planet radius gap are not significantly affected by the spread in stellar activity tracks. However, it can account for a certain scattering or fuzziness observed in and around the radius gap of the observed exoplanet population.
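The photoevaporative mass-loss calculation at the heart of such models can be illustrated with the standard energy-limited approximation. The sketch below is not the PLATYPOS implementation; it is a minimal, textbook-style estimate (all planet and flux values are hypothetical), showing how the XUV flux, planet radius, and planet mass enter the mass-loss rate.

```python
import math

G = 6.674e-8  # gravitational constant [cm^3 g^-1 s^-2] (cgs units)

def energy_limited_mdot(f_xuv, r_p, m_p, eps=0.1):
    """Energy-limited photoevaporative mass-loss rate [g/s]:
        Mdot = eps * pi * F_XUV * R_p^3 / (G * M_p)
    f_xuv : XUV flux at the planet's orbit [erg cm^-2 s^-1]
    r_p   : planetary radius [cm]
    m_p   : planetary mass [g]
    eps   : heating efficiency (a value of ~0.1 is a common choice)
    """
    return eps * math.pi * f_xuv * r_p**3 / (G * m_p)

# Hypothetical young sub-Neptune: 2.5 Earth radii, 5 Earth masses,
# irradiated by an XUV flux typical of an active young star.
r_earth, m_earth = 6.371e8, 5.972e27  # cm, g
mdot = energy_limited_mdot(5e4, 2.5 * r_earth, 5.0 * m_earth)
print(f"mass-loss rate: {mdot:.2e} g/s")
```

Because the rate scales as R_p^3 / M_p, young planets with warm, inflated envelopes lose mass far faster than the same planets after contraction, which is why the host star's activity track during the first gigayear matters so much.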
Development of a CRISPR/Cas gene editing technique for the coccolithophore Chrysotila carterae
(2024)
Organizational commitments to equality change how people view women’s and men’s professional success
(2024)
To address women’s underrepresentation in high-status positions, many organizations have committed to gender equality. But is women’s professional success viewed less positively when organizations commit to women’s advancement? Do equality commitments have positive effects on evaluations of successful men? We fielded a survey experiment with a national probability sample in Germany (N = 3229) that varied employees’ gender and their organization’s commitment to equality. Respondents read about a recently promoted employee and rated how decisive of a role they thought intelligence and effort played in getting the employee promoted from 1 “Not at all decisive” to 7 “Very decisive” and the fairness of the promotion from 1 “Very unfair” to 7 “Very fair.” When organizations committed to women’s advancement rather than uniform performance standards, people believed intelligence and effort were less decisive in women’s promotions, but that intelligence was more decisive in men’s promotions. People viewed women’s promotions as least fair and men’s as most fair in organizations committed to women’s advancement. However, women’s promotions were still viewed more positively than men’s in all conditions and on all outcomes, suggesting people believed that organizations had double standards for success that required women to be smarter and work harder to be promoted, especially in organizations that did not make equality commitments.
Captive Red Army soldiers made up the majority of victims of Nazi Germany’s starvation policy against Soviet civilians and other non-combatants and thus constituted the largest single victim group of the German war of annihilation against the Soviet Union. Indeed, Soviet prisoners of war were the largest victim group of all National Socialist annihilation policies after the European Jews. Before the launch of Operation Barbarossa, it was clear to the Wehrmacht planning departments on exactly what scale they could expect to capture Soviet troops. Yet, they neglected to make the necessary preparations for feeding and sheltering the captured soldiers, who were viewed by the economic staffs and the military leadership alike as direct competitors of German troops and the German home front for precious food supplies. The number of extra mouths to feed was incompatible with German war aims. The obvious limitations on their freedom of movement and the relative ease with which large numbers could be segregated and their rations controlled were crucial factors in the death of over 3 million Soviet POWs, the vast majority directly or indirectly as a result of deliberate policies of neglect, undernourishment, and starvation while in the ‘care’ of the Wehrmacht. The most reliable figures for the mortality of Soviet POWs in German captivity reveal that up to 3.3 million died from a total of just over 5.7 million captured between June 1941 and February 1945 — a proportion of almost 58 percent. Of these, 2 million were already dead by the beginning of February 1942. In English, there is still neither a single monograph nor a single edited volume dedicated to the subject. This article now provides the first detailed stand-alone synthesis in that language addressing the whole period from 1941 to 1945.
This work shows how an inequality between Black and white people has grown historically and legally in Germany, and it examines what requirements constitutional law, legal practice, and politics must meet in order to redress it.
It begins by outlining the development of the prohibition of racial discrimination in international and national law. The author then traces the history of discrimination against Black people. To overcome the structural discrimination that persists to this day, she proposes a positive right, grounded in human rights standards and in approaches drawn from comparative law, intended to bring about the equal status of Black people.
The case of T. Annius Milo offers great didactic potential for Latin instruction: through his example, the reading of a Latin text can be combined superbly with aspects of Roman daily life and institutions, and plausible connections to the present can be drawn. This master's thesis shows what a rich spectrum of topics lies in Cicero's speech Pro Milone, including the historical context of the case, the offence of murder, and the course of court proceedings at the time. Beyond this, Roman law is compared with the criminal law in force in Germany today. Finally, the credibility of various written sources is examined, in particular the question of whether the transmitted speech authentically reflects what happened at the original trial.
Diglossic translanguaging
(2024)
This book examines how German-speaking Jews living in Berlin make sense of and make use of their multilingual repertoire. With a focus on lexical variation, the book demonstrates how speakers integrate Yiddish and Hebrew elements into German to index belonging and to position themselves within the Jewish community. Linguistic choices are shaped by language ideologies (e.g., authenticity, prescriptivism, nostalgia). Speakers translanguage when using their multilingual repertoire, but do so in a diglossic way, using elements from different languages for specific domains.
The present contribution, intended less as a scholarly article than as a report on practical experience, describes various attempts to bring the human-animal relationship into the school context and thereby to counteract the insufficient attention paid to the topic. After an overview establishing the relevance of the human-animal theme, and thus the need for classroom engagement with the relationship between humans and other animals, the article first reports on an initial attempt to sensitize (prospective) teachers to the relevance of the topic in a workshop at the Studienseminar Potsdam, and to inform them about possible ways of implementing it in the various school subjects. Subsequently, two lessons that incorporate the human-animal relationship into civics teaching in different ways are presented, as examples for civics instruction, together with the experience gathered in carrying them out.
Who is believed and who is not? Whose knowledge is passed on and whose is not? Who has a voice and who does not? Theories of epistemic injustice address the broad field of unjust or unfair treatment connected to questions of knowing, understanding, and communicating: for example, being excluded from knowledge or from communicative practices, or being silenced, but also contexts in which the meanings of some are systematically distorted, misheard, or misrepresented, in which some are distrusted or lack epistemic agency. This book offers an overview of the broad debate on epistemic injustice, epistemic oppression, and epistemic violence, systematically and critically discussing the various theories situated at the intersection of theories of justice and epistemological questions, and examining the theoretical precursors of these theories.
In the debate on epistemic injustice, it is generally assumed that testimonial injustice as one form of epistemic injustice cannot be committed (fully) deliberately or intentionally because it involves unconscious identity prejudices. Drawing on the case of sexual violence against refugees in European refugee camps, this paper argues that there is a form of testimonial injustice—willful testimonial injustice—that is deliberate. To do so, the paper argues (a) that the hearer intentionally utilizes negative identity prejudices for a particular purpose and (b) that the hearer is aware of the fact that the intentionally used prejudices are in fact prejudices. Furthermore, the paper shows how testimonial injustice relates to recognition failures both in terms of a causal as well as a constitutive claim. In fact, introducing willful testimonial injustice can support the constitutive claim of such a relation that has so far received little attention. Besides arguing for a novel form of testimonial injustice and contributing to the recent debate on the relation between epistemic injustice and recognition failures, this paper is also motivated by the attempt to draw attention to the inhumane conditions for refugees at the border of Europe as well as elsewhere.
Global warming, driven primarily by the excessive emission of greenhouse gases such as carbon dioxide into the atmosphere, has led to severe and detrimental environmental impacts. Rising global temperatures have triggered a cascade of adverse effects, including melting glaciers and polar ice caps, more frequent and intense heat waves, disrupted weather patterns, and the acidification of oceans. These changes adversely affect ecosystems, biodiversity, and human societies, threatening food security, water availability, and livelihoods. One promising solution to mitigate the harmful effects of global warming is the widespread adoption of solar cells, also known as photovoltaic cells. Solar cells harness sunlight to generate electricity without emitting greenhouse gases or other pollutants. By replacing fossil fuel-based energy sources, solar cells can significantly reduce CO2 emissions, a major contributor to global warming. This transition to clean, renewable energy can help curb the increasing concentration of greenhouse gases in the atmosphere, thereby slowing down the rate of global temperature rise.
Solar energy’s positive impact extends beyond emission reduction. As solar panels become more efficient and affordable, they empower individuals, communities, and even entire nations to generate electricity and become less dependent on fossil fuels. This decentralized energy generation can enhance resilience in the face of climate-related challenges. Moreover, implementing solar cells creates green jobs and stimulates technological innovation, further promoting sustainable economic growth. As solar technology advances, its integration with energy storage systems and smart grids can ensure a stable and reliable energy supply, reducing the need for backup fossil fuel power plants that exacerbate environmental degradation.
The market-dominant solar cell technology is silicon-based: a highly mature technology with a highly systematic production procedure. However, it suffers from several drawbacks: 1) Cost: prices remain relatively high, owing to the energy-intensive melting and purification of silicon and the use of silver as an electrode material, which hinders widespread availability, especially in low-income countries. 2) Efficiency: the theoretical limit is around 29%, yet most commercially available silicon-based solar cells reach only 18 - 22%. 3) Temperature sensitivity: efficiency decreases as temperature rises, reducing output. 4) Resource constraints: silicon as a raw material is not available in all countries, creating supply chain challenges.
Perovskite solar cells emerged in 2011 and have matured very rapidly over the last decade into a highly efficient and versatile solar cell technology. With an efficiency of 26%, high absorption coefficients, solution processability, and a tunable band gap, they have attracted the attention of the solar cell community and represent a hope for cheap, efficient, and easily processable next-generation solar cells. However, lead toxicity may be the stumbling block that keeps perovskite solar cells from reaching the market: lead is a heavy, bioavailable element that makes them an environmentally unfriendly technology. As a result, researchers have tried to replace lead with a more environmentally friendly element. Among several possible alternatives, tin is the most suitable, owing to the similarity of its electronic and atomic structure to that of lead.
Tin perovskites were developed to address the challenge of lead toxicity. Theoretically, they show very high absorption coefficients, an optimal band gap of 1.35 eV for FASnI3, and a very high short-circuit current, which makes them candidates for the highest possible single-junction solar cell efficiency, around 30.1% according to the Shockley-Queisser limit. In practice, however, the efficiency of tin perovskites still lags below 15% and is poorly reproducible, especially from lab to lab. This modest performance can be attributed to three causes: 1) Oxidation of tin(II) to tin(IV), caused by oxygen, water, or, as was recently discovered, even the solvent itself. 2) Fast crystallization dynamics, which arise from the lateral exposure of the p-orbitals of the tin atom, enhancing its reactivity and accelerating crystallization. 3) Energy band misalignment: the energy bands at the interfaces between the perovskite absorber and the charge-selective layers are not aligned, leading to high interfacial charge recombination that degrades photovoltaic performance. To address these issues, we implemented several techniques and approaches that enhanced the efficiency of tin halide perovskites, providing new, chemically safe solvents and antisolvents. In addition, we studied the energy band alignment between the charge transport layers and the tin perovskite absorber.
Recent research has shown that a principal source of tin oxidation is the solvent dimethyl sulfoxide (DMSO), which also happens to be one of the most effective solvents for processing perovskites. Finding a stable solvent could therefore make all the difference for the stability of tin-based perovskites. Starting from a database of over 2,000 solvents, we narrowed the field to a series of 12 new solvents suitable for processing FASnI3 experimentally. This was accomplished by examining 1) the solubility of the precursor chemicals FAI and SnI2, 2) the thermal stability of the precursor solution, and 3) the ability to form the perovskite phase. Finally, we show that solar cells can be manufactured with a novel solvent system that outperforms devices produced with DMSO. Our results offer guidance for the search for new solvents, or solvent mixtures, for manufacturing stable tin-based perovskites.
Precise control of perovskite precursor crystallization within a thin film is essential for optimizing the efficiency and manufacturing of solar cells. Because tin crystallizes much faster than the more commonly used lead, depositing tin-based perovskite films from solution is considerably more difficult. The established route to high efficiencies is deposition from dimethyl sulfoxide (DMSO), which slows the rapid assembly of the tin-iodine network underlying perovskite formation; the drawback of this approach is that DMSO itself oxidizes tin during processing. This thesis presents a promising alternative in which 4-(tert-butyl)pyridine replaces DMSO to regulate crystallization without causing tin oxidation. Perovskite films deposited from pyridine show a markedly reduced defect density, resulting in higher charge mobility and improved photovoltaic performance, making pyridine an attractive choice for the deposition of tin perovskite films.
Tin perovskites suffer from an apparent energy band misalignment, yet the band diagrams published to date are contradictory, and there is little consensus; detailed information about the dynamics of charge extraction is also lacking. This thesis therefore determines the energy band positions of tin perovskites using Kelvin probe (KP) and photoelectron yield spectroscopy measurements, and constructs a precise band diagram for the commonly used device stack. A comprehensive analysis quantifies the energy deficits inherent in the current energetic structure of tin halide perovskites. In addition, we investigate the influence of BCP on the improvement of electron extraction in C60/BCP systems, with particular emphasis on the energetics involved. Furthermore, transient surface photovoltage was utilized to investigate the charge extraction kinetics of frequently studied charge transport layers: NiOx and PEDOT as hole transport layers, and C60, ICBA, and PCBM as electron transport layers. The Hall effect, KP, and TRPL approaches were used to ascertain the p-doping concentration in FASnI3, consistently yielding a value of 1.5 × 10^17 cm^-3. Our findings highlight the need to design the charge extraction layers for tin halide perovskites independently, rather than adopting those used for lead perovskites.
The crystallization of perovskite precursors relies mainly on two solvents. The first, usually simply called the solvent, dissolves the perovskite powder to form the precursor solution. The second, the antisolvent, precipitates the perovskite precursor, forming the wet film: a supersaturated solution of the perovskite precursor in the remaining solvent and antisolvent. Upon annealing, this wet film crystallizes into a fully crystallized perovskite film. In our work, we proposed new solvents to dissolve FASnI3, but when we tried to form films, most of them did not crystallize. This is attributed to the high coordination strength between the metal halide and the solvent molecules, which cannot be broken by traditionally used antisolvents such as toluene and chlorobenzene. To solve this issue, we introduced a high-throughput antisolvent screening in which around 73 selected antisolvents were screened against the 15 solvents capable of forming a 1 M FASnI3 solution. For the first time in tin perovskite research, we used a machine-learning algorithm to understand and predict the effect of an antisolvent on the crystallization of a precursor solution in a particular solvent, relying on film darkness as the primary criterion for judging the efficacy of a solvent-antisolvent pair. We found that the relative polarity between solvent and antisolvent is the primary factor governing the solvent-antisolvent interaction. Based on these findings, we prepared several high-quality, DMSO-free tin perovskite films and achieved an efficiency of 9%, the highest reported so far for a DMSO-free tin perovskite device.
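To make the screening logic concrete, the toy sketch below learns a single decision threshold on the absolute polarity difference between solvent and antisolvent. Everything here is illustrative: the data points are invented (the real screening covered roughly 73 antisolvents against 15 solvents and used film darkness as the label), and the assumption that larger polarity differences favor crystallization is made purely for the example.

```python
pairs = [  # (|polarity difference|, crystallized?) - HYPOTHETICAL points
    (0.05, False), (0.10, False), (0.15, False),
    (0.30, True), (0.40, True), (0.55, True),
]

def best_threshold(samples):
    """Pick the cut on |delta polarity| that best separates the labels
    (a one-dimensional decision stump)."""
    best_t, best_acc = None, -1.0
    for t, _ in samples:  # candidate cuts are the observed values
        acc = sum((d >= t) == y for d, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

t, acc = best_threshold(pairs)
print(f"learned threshold: {t}, training accuracy: {acc:.2f}")
```

A one-dimensional stump is of course far simpler than the machine-learning model used in the thesis; it only illustrates how a single dominant feature such as relative polarity can separate successful from failed solvent-antisolvent pairs.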
Knowledge about causal structures is crucial for decision support in various domains. For example, in discrete manufacturing, identifying the root causes of failures and quality deviations that interrupt the highly automated production process requires causal structural knowledge. However, in practice, root cause analysis is usually built upon individual expert knowledge about associative relationships. But, "correlation does not imply causation", and misinterpreting associations often leads to incorrect conclusions. Recent developments in methods for causal discovery from observational data have opened the opportunity for a data-driven examination. Despite its potential for data-driven decision support, omnipresent challenges impede causal discovery in real-world scenarios. In this thesis, we make a threefold contribution to improving causal discovery in practice.
(1) The growing interest in causal discovery has led to a broad spectrum of methods with specific assumptions on the data and various implementations. Hence, application in practice requires careful consideration of existing methods, which becomes laborious when dealing with various parameters, assumptions, and implementations in different programming languages. Additionally, evaluation is challenging due to the lack of ground truth in practice and limited benchmark data that reflect real-world data characteristics.
To address these issues, we present a platform-independent modular pipeline for causal discovery and a ground truth framework for synthetic data generation that provides comprehensive evaluation opportunities, e.g., to examine the accuracy of causal discovery methods in case of inappropriate assumptions.
(2) Applying constraint-based methods for causal discovery requires selecting a conditional independence (CI) test, which is particularly challenging in mixed discrete-continuous data omnipresent in many real-world scenarios. In this context, inappropriate assumptions on the data or the commonly applied discretization of continuous variables reduce the accuracy of CI decisions, leading to incorrect causal structures.
Therefore, we contribute a non-parametric CI test leveraging k-nearest neighbors methods and prove its statistical validity and power in mixed discrete-continuous data, as well as the asymptotic consistency when used in constraint-based causal discovery. An extensive evaluation of synthetic and real-world data shows that the proposed CI test outperforms state-of-the-art approaches in the accuracy of CI testing and causal discovery, particularly in settings with low sample sizes.
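A constraint-based method of the kind described here repeatedly queries a CI test while pruning a graph. The sketch below shows the skeleton phase of a PC-style algorithm; the thesis's kNN-based CI test is replaced by a hand-coded independence oracle for a toy three-variable chain X -> Y -> Z (the oracle and variable names are illustrative, not from the thesis), purely to show where the CI test plugs in.

```python
from itertools import combinations

def ci_oracle(a, b, cond):
    # Oracle answers for the chain X -> Y -> Z:
    # the only conditional independence is X _||_ Z | Y.
    return {a, b} == {"X", "Z"} and "Y" in cond

def pc_skeleton(nodes, ci_test, max_cond=1):
    """Skeleton phase of a PC-style algorithm: start fully connected,
    remove an edge as soon as some conditioning set renders its
    endpoints conditionally independent."""
    edges = {frozenset(p) for p in combinations(nodes, 2)}
    for size in range(max_cond + 1):
        for edge in list(edges):
            a, b = tuple(edge)
            others = [n for n in nodes if n not in edge]
            for cond in combinations(others, size):
                if ci_test(a, b, set(cond)):
                    edges.discard(edge)
                    break
    return edges

skeleton = pc_skeleton(["X", "Y", "Z"], ci_oracle)
print(sorted(sorted(e) for e in skeleton))
```

Running this removes the X-Z edge because X is independent of Z given Y, leaving the chain's true skeleton; in practice the oracle is replaced by a statistical CI test on data, which is exactly where the accuracy of the test determines the accuracy of the recovered structure.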
(3) To show the applicability and opportunities of causal discovery in practice, we examine our contributions in real-world discrete manufacturing use cases. For example, we showcase how causal structural knowledge helps to understand unforeseen production downtimes or adds decision support in case of failures and quality deviations in automotive body shop assembly lines.
Portal Transfer 2024
(2024)
Dear readers, leaving one's own "bubble", changing perspectives, overcoming silo mentality: what science manages, and indeed must manage, internally in order to succeed still poses challenges for its outward impact. Yet it is by now part of the self-image of modern universities to explain publicly what is being researched within their walls, to contribute to societal debates, and to transfer their findings swiftly into practice.
The University of Potsdam has installed these transfer tasks as a third pillar alongside teaching and research, giving its edifice even more stability. For years it has ranked, in national comparison, among the most successful universities at fostering start-ups and founding companies out of research. In this magazine we report on Potassco Solutions GmbH, founded by computer scientist Torsten Schaub, whose AI system Clingo solves complex optimization problems in companies, and on SEQSTANT GmbH, whose innovative diagnostics can identify the pathogens of respiratory diseases in real time. We also show how research teams cooperate with industry, for example with K-UTEC in Sondershausen, Thuringia, contributing scientific know-how to ensure that no valuable lithium is lost there in production waste.
While technology transfer is aimed primarily at business, knowledge transfer benefits society as a whole. The University of Potsdam is particularly strong here in education: with its teacher-training graduates, it sends the current state of research on teaching directly into school practice. Digitalization is entering classrooms ever more often; how this can succeed is described in this magazine. We also explain what sport science can contribute to the treatment of depression, and how environmental research seeks to improve risk management in regions threatened by flooding. Whether in public administration or in political institutions, scientific expertise is in demand everywhere. We illustrate this with the example of Frauke Brosius-Gersdorf, a legal scholar who advises the Federal Government on the regulation of abortion.
The shortest path for knowledge to travel from the university into practice undoubtedly runs through the alumni, who make their mark as professionals and leaders in the state and beyond. That this path can begin during one's studies is demonstrated by the many student initiatives that have their say here. None of them shies away from the limelight: at science slams on stages across the state of Brandenburg, at TEDx talks in the Hans Otto Theater, at the art walk in Potsdam's Waschhaus arena, or in English-language theatre at the university. Appearing in public and finding new forms to carry knowledge to the broad population: that, too, is transfer. Just like this magazine.
In many churches nowadays, there is a standardized approach to premarital counseling for couples, involving social, pastoral, and psychological perspectives. In contrast, many rabbis and other Jewish officials still concentrate on legal aspects alone, and the resolution of important issues on the verge of wedlock is too often left to secular experts in law, psychology, or counseling. In recent years, however, the Jewish clergy has acknowledged this lack of formal training for marriage preparation and begun to incorporate such preparation into the period before the bond is tied. This case study focuses on Jewish and Roman Catholic conceptions of marriage, past and present. We undertake a comparative analysis of the prerequisites of religious marriage, based on the assumption that both Judaism and the Roman Catholic Church have a distinct legal framework for assessing marriage preparation.
Long-term bacteria-fungi-plant associations in permafrost soils inferred from palaeometagenomics
(2024)
The Arctic is warming two to four times faster than the global average, resulting in a strong feedback on northern ecosystems such as boreal forests, which cover a vast area of the high northern latitudes. With ongoing global warming, the treeline migrates northwards into tundra areas. The consequences of these ecosystem shifts are complex: on the one hand, boreal forests store large amounts of the global terrestrial carbon and act as a carbon sink, removing carbon dioxide from the global carbon cycle and suggesting enhanced carbon uptake with increased tree cover. On the other hand, with the establishment of trees, the albedo of the tundra decreases, leading to enhanced soil warming; meanwhile, permafrost thaws, releasing large amounts of previously stored carbon into the atmosphere. So far, mainly vegetation dynamics have been assessed when studying the impact of warming on ecosystems. Yet most land plants live in close symbiosis with bacterial and fungal communities that sustain their growth in nutrient-poor habitats, and the impact of climate change on these subsoil communities alongside changing vegetation cover remains poorly understood. A better understanding of soil community dynamics on multi-millennial timescales is therefore indispensable when addressing the development of entire ecosystems. Unravelling long-term cross-kingdom dependencies between plants, fungi, and bacteria is not only a milestone for assessing the effect of warming on boreal ecosystems; it is also the basis for agricultural strategies to provide society with sufficient food in a future, warming world.
The first objective of this thesis was to assess ancient DNA as a proxy for reconstructing the soil microbiome (Manuscripts I, II, III, IV). The findings across these projects provide comprehensive new insight into the relationships of soil microorganisms to the surrounding vegetation. This was achieved by establishing (Manuscript I) and applying (Manuscript II) a primer pair for the selective amplification of ancient fungal DNA from lake sediment samples with the metabarcoding approach. To assess fungal and plant co-variation, the selected primer combination (ITS67, 5.8S), amplifying the ITS1 region, was applied to samples from five boreal and arctic lakes. The data showed that the establishment of fungal communities is affected by warming, as the functional ecological groups shift: the dominance of yeasts and saprotrophs during the Late Glacial declined with warming, while the abundance of mycorrhizae and parasites increased. Overall species richness also changed over time. The results were compared to shotgun sequencing data reconstructing fungi and bacteria (Manuscripts III, IV), yielding results overall comparable to the metabarcoding approach. Nonetheless, the comparison also pointed to a bias in the metabarcoding, potentially due to varying ITS lengths or copy numbers per genome.
The second objective was to trace changes in fungus-plant interactions over time (Manuscripts II, III). To address this, metabarcoding targeting the ITS1 region for fungi and the chloroplast P6 loop for plants was applied for selective DNA amplification (Manuscript II). Further, shotgun sequencing data were compared to the metabarcoding results (Manuscript III). Overall, the results of the metabarcoding and shotgun approaches were comparable, although a bias in the metabarcoding was assumed. We demonstrated that fungal shifts coincided with changes in the vegetation. Yeasts and lichens were dominant mainly during the Late Glacial with tundra vegetation, while warming in the Holocene led to the expansion of boreal forests with increasing mycorrhizae and parasite abundance. In addition, we highlighted that the establishment of Pinaceae depends on mycorrhizal fungi such as Suillineae, Inocybaceae, or Hyaloscypha species on long-term scales as well.
The third objective of the thesis was to assess soil community development along a temporal gradient (Manuscripts III, IV). Shotgun sequencing was applied to sediment samples from the northern Siberian lake Lama, and soil microbial community dynamics were compared to ecosystem turnover. In parallel, podzolization processes from basaltic bedrock were recovered (Manuscript III). Additionally, the recovered soil microbiome was compared to shotgun data from granite and sandstone catchments (Manuscript IV, Appendix). We assessed whether the establishment of the soil microbiome depends on the plant taxon and is thus comparable between multiple geographic locations, or whether community establishment is driven by abiotic soil properties and thus by the bedrock area. We showed that the development of soil communities is to a great extent driven by vegetation changes and temperature variation, while time plays only a minor role. The analyses showed general ecological similarities, especially between the granite and basalt locations, while the microbiome at species level was rather site-specific. A greater number of correlated soil taxa was detected for deep-rooting boreal taxa than for grasses with shallower roots. Additionally, differences between herbaceous taxa of the Late Glacial and taxa of the Holocene were revealed.
With this thesis, I demonstrate the necessity of investigating subsoil community dynamics on millennial timescales, as it enables a further understanding of long-term ecosystem and soil development processes and thus of plant establishment. Further, I trace long-term processes leading to podzolization, which supports the development of applied carbon capture strategies under future global warming.
Does working in a gender-atypical occupation reduce individuals’ likelihood of finding a different-sex romantic partner, and do such occupational partnership penalties contribute to occupational gender segregation? To answer this question, we theorized partnership penalties for working in gender-atypical occupations by drawing on insights from evolutionary psychology, social constructivism, and rational choice theory, and exploited the stability of occupational pathways in Germany. In Study 1, we analyzed observational data from a national probability sample (N = 1,634,944) to assess whether individuals in gender-atypical occupations were less likely to be partnered than individuals who worked in gender-typical occupations. To assess whether the partnership gaps found in Study 1 were causally related to the gender typicality of men’s and women’s occupations, we conducted a field experiment on a dating app (Study 2; N = 6,778). Because the findings from Study 2 suggested that young women and men indeed experienced penalties for working in a gender-atypical occupation (at least when they were not highly attractive), we employed a choice-experimental design in Study 3 (N = 1,250) to assess whether women and men were aware of occupational partnership penalties, and showed that anticipating such penalties may keep young and highly educated women from working in gender-atypical occupations. Our main conclusion is therefore that the observed penalties and their anticipation seem to be driven by unconscious rather than conscious processes.
The research project “Workflow Management Systems for Open Access University Presses (OA-WFMS)” is a collaboration between HTWK Leipzig and the University of Potsdam. Its goal is to analyze the needs of university presses and the requirements for a workflow management system (WFMS) in order to derive a generic requirements specification. The WFMS is intended to simplify and accelerate the publication process in OA presses and to promote the dissemination of Open Access and sustainable, digital scholarly publishing.
The project builds on the results of the projects “Open-Access-Hochschulverlag (OA-HVerlag)” and “Open-Access-Strukturierte-Kommunikation (OA-STRUKTKOMM)”. The kick-off workshop on which this report is based took place in Leipzig in 2024 with representatives of ten institutions. The workshop served to identify challenges and requirements for a WFMS and to discuss existing approaches and tools.
The workshop addressed the following questions:
a. How can the organization and monitoring of publication processes in scholarly presses be made efficient by means of a WFMS?
b. Which requirements must a WFMS fulfill in order to optimally support publication processes?
c. Which interfaces must be taken into account to guarantee the interoperability of the systems?
d. Which existing approaches and tools are already in use, and what are their advantages and disadvantages?
The workshop was divided into two parts: Part 1 dealt with challenges and requirements (questions a to c), Part 2 with existing solutions and tools (question d). The results of the workshop feed into the needs analysis of the research project.
The results documented in this report show the multitude of challenges that existing approaches to OA publication management face, in particular regarding system heterogeneity, individual customization needs, and the necessity of systematic documentation. The support systems and tools currently in use, such as file repositories and project management and communication tools, cannot meet the requirements as a whole, although they are usable for partial solutions. Therefore, the integration of existing systems into an OA-WFMS yet to be developed must be considered, and the interoperability of the interacting systems must be ensured. The workshop participants agreed that the OA-WFMS should be designed in a flexible and modular fashion. Preference was given to consortial software development and joint operation within a network.
The workshop provided valuable insights into the work of university presses and thus forms a solid basis for the subsequent, more detailed needs analysis and the preparation of the generic requirements specification.
Discursive perspectives on international politics have gained relevance and popularity in recent years. This article first provides an overview of the different varieties of discursive approaches in International Relations and then focuses on discourse studies inspired by poststructuralism. Poststructuralist approaches, we argue, are particularly interesting for the discipline of IR because they offer four specific benefits: first, they allow a critical perspective on questions of international politics; second, a poststructuralist perspective helps to highlight the often overlooked political character of social reality; third, they encourage researchers to reflect on their own point of view; and fourth, with its focus on “how-possible” questions, a poststructuralist approach makes it possible to adopt an analytical perspective that offers an alternative to dominant explanatory approaches.
Cross-sectional associations of dietary biomarker patterns with health and nutritional status
(2024)
Der Data Act
(2024)
The Data Act forms the provisional capstone of EU data regulation. The various instruments of the regulation rebalance the relationships within the data economy, above all through data access rights, far-reaching rules on data contracts and cloud services, and specific interoperability requirements. This article provides an overview of the new rules, with a focus on data economy law, points out overarching policy choices, and identifies structural challenges.
This thesis addresses the synthesis and polymerization of monomers based on renewable raw materials, such as the commercially available phenylpropanoids contained in spices and essential oils (eugenol, isoeugenol, cinnamyl alcohol, anethole, and estragole) and the terpenoid myrtenol, as well as starting materials obtained from the bark of the birch (Betula pendula) and the cork oak (Quercus suber). Selected phenylpropanoids (eugenol, isoeugenol, and cinnamyl alcohol) and the terpenoid myrtenol were first converted into the corresponding lauryl esters, and the olefinic structural element was subsequently epoxidized, yielding four new monofunctional epoxides (2-methoxy-4-(oxiran-2-ylmethyl)phenyl dodecanoate, 2-methoxy-4-(3-methyloxiran-2-yl)phenyl dodecanoate, (3-phenyloxiran-2-yl)methyl dodecanoate, and (7,7-dimethyl-3-oxatricyclo[4.1.1.02,4]octan-2-yl)methyl dodecanoate) and two already known ones (2-(4-methoxybenzyl)oxirane and 2-(4-methoxyphenyl)-3-methyloxirane), which were characterized by 1H NMR, 13C NMR, and FT-IR spectroscopy as well as DSC. Photo-DSC investigation of the epoxide monomers in a cationic photopolymerization at 40 °C yielded the maximum polymerization rate (Rpmax: 0.005 s-1 to 0.038 s-1) and the time to reach it (tmax: 13 s to 26 s) and led to liquid oligomers whose number-average degree of polymerization of 3 to 6 was determined by GPC. The reaction of 2-methoxy-4-(oxiran-2-ylmethyl)phenyl dodecanoate with methacrylic acid gave a mixture of isomers (2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate and 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate), which was investigated by photo-DSC in a free-radical photopolymerization (Rpmax: 0.105 s-1 and tmax: 5 s) that led to solid polymers insoluble in chloroform.
Two crystalline ω-hydroxy fatty acids (9,10-epoxy-18-hydroxyoctadecanoic acid and 22-hydroxydocosanoic acid) were selectively isolated from cork powder and ground birch bark. Cationic photopolymerization of 9,10-epoxy-18-hydroxyoctadecanoic acid gave a nearly colorless, transparent film that is elastic at room temperature and has application potential for surface coatings. The reaction of 9,10-epoxy-18-hydroxyoctadecanoic acid with methacrylic acid yielded a mixture of two constitutional isomers (9,18-dihydroxy-10-(methacryloyloxy)octadecanoic acid and 9-(methacryloyloxy)-10,18-dihydroxyoctadecanoic acid) that is liquid at room temperature (Tg: -60 °C). The free-radical photopolymerization of these constitutional isomers was likewise investigated by photo-DSC (Rpmax: 0.098 s-1 and tmax: 3.8 s). The reaction of 22-hydroxydocosanoic acid with methacryloyl chloride gave crystalline 22-(methacryloyloxy)docosanoic acid, which was also investigated in a free-radical photopolymerization by photo-DSC (Rpmax: 0.023 s-1 and tmax: 9.6 s).
The AIBN-initiated homopolymerization in dimethyl sulfoxide of 22-(methacryloyloxy)docosanoic acid and of the isomer mixtures consisting of 2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate and 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate as well as of 9,18-dihydroxy-10-(methacryloyloxy)octadecanoic acid and 9-(methacryloyloxy)-10,18-dihydroxyoctadecanoic acid gave solid, soluble polymers, which were characterized by 1H NMR and FT-IR spectroscopy, GPC (poly(2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate / 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate): Pn = 94), and DSC (poly(2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate / 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate): Tg: 52 °C; poly(9,18-dihydroxy-10-(methacryloyloxy)octadecanoic acid / 9-(methacryloyloxy)-10,18-dihydroxyoctadecanoic acid): Tg: 10 °C; poly(22-(methacryloyloxy)docosanoic acid): Tm: 74.1 °C, the melting point being comparable to that of the photopolymer (Tm = 76.8 °C)).
The known monomer 4-(4-methacryloyloxyphenyl)butan-2-one was prepared from 4-(4-hydroxyphenyl)butan-2-one, which can be obtained from birch bark, and was polymerized under identical conditions for comparison with the new monomers. Free-radical polymerization gave poly(4-(4-methacryloyloxyphenyl)butan-2-one) (Pn: 214 and Tg: 83 °C). In addition to the homopolymerization, a statistical copolymerization of the isomer mixture 2-methoxy-4-(2-hydroxy-3-(methacryloyloxy)propyl)phenyl dodecanoate / 2-methoxy-4-(2-(methacryloyloxy)-3-hydroxypropyl)phenyl dodecanoate with 4-(4-methacryloyloxyphenyl)butan-2-one was investigated; an equimolar ratio of the starting monomers led to an increase in the yield, molar mass distribution, and dispersity of the copolymer (Tg: 44 °C). The AIBN-initiated free-radical homopolymerizations of 4-(4-methacryloyloxyphenyl)butan-2-one and of lauryl methacrylate in diethyl carbonate as a “green” solvent gave homopolymers with comparable degrees of polymerization (Pn: 150), which, owing to their structural differences, had markedly different glass transition temperatures (poly(4-(4-methacryloyloxyphenyl)butan-2-one): Tg: 70 °C; poly(lauryl methacrylate): Tg: -49 °C). A statistical copolymerization of equimolar amounts of the two monomers in diethyl carbonate with a polymerization time of 60 minutes led to a slightly preferred incorporation of 4-(4-methacryloyloxyphenyl)butan-2-one into the copolymer (Tg: 17 °C).
Copolymerization diagrams for the free-radical copolymerizations of 4-(4-methacryloyloxyphenyl)butan-2-one with n-butyl methacrylate and with 2-(dimethylamino)ethyl methacrylate (t: 20 min to 60 min; mole fractions (X) of 4-(4-methacryloyloxyphenyl)butan-2-one: 0.2, 0.4, 0.6, and 0.8) showed nearly ideal azeotropic copolymerization behavior, although a slightly preferred incorporation of 4-(4-methacryloyloxyphenyl)butan-2-one into the respective copolymer was observed. An increase in the yield and glass transition temperature of the obtained copolymers correlated with an increasing content of 4-(4-methacryloyloxyphenyl)butan-2-one in the reaction mixture. The glass transition temperatures of the copolymers calculated with the modified Gibbs-DiMarzio equation agreed well with the measured values, which provides a good basis for estimating the glass transition temperature of a copolymer of arbitrary composition.
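For orientation, a commonly cited form of the Gibbs-DiMarzio relation with an adjustable parameter is sketched below; this is a generic textbook form, not necessarily the exact modification used in the thesis:

\[
T_g \;=\; \frac{x_1\,T_{g,1} \;+\; K\,x_2\,T_{g,2}}{x_1 \;+\; K\,x_2}
\]

where $x_1$ and $x_2$ are the mole fractions of the two comonomer units, $T_{g,1}$ and $T_{g,2}$ are the glass transition temperatures of the corresponding homopolymers, and $K$ is a fitting parameter; for $K = 1$ the relation reduces to a linear mole-fraction average of the homopolymer values.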
Sigmund Freud, the founder of psychoanalysis, began and ended his intellectual life with the Jewish Bible. He began by reading the Philippson Bible, especially together with his father Jacob Freud, and ended by studying the figure of Moses. This study systematically traces this preoccupation and shows that the Jewish Bible was a constant reference point for Freud and shaped his Jewish identity. This is demonstrated by analysing family documents, religious instruction, and references to the Bible in Freud's writings and correspondence.
The automotive industry is a prime example of digital technologies reshaping mobility. Connected, autonomous, shared, and electric (CASE) trends lead to new emerging players that threaten existing industrial-aged companies. To respond, incumbents need to bridge the gap between contrasting product architecture and organizational principles in the physical and digital realms. Over-the-air (OTA) technology, which enables seamless software updates and on-demand feature additions for customers, is an example of CASE-driven digital product innovation. Through an extensive longitudinal case study of an OTA initiative by an industrial-aged automaker, this dissertation explores how incumbents accomplish digital product innovation. Building on modularity, liminality, and the mirroring hypothesis, it presents a process model that explains the triggers, mechanisms, and outcomes of this process. In contrast to the literature, the findings emphasize the primacy of addressing product architecture challenges over organizational ones and highlight the managerial implications for success.
Assessing the impact of global change on hydrological systems is one of the greatest hydrological challenges of our time. Changes in land cover, land use, and climate have an impact on water quantity, quality, and temporal availability. There is a widespread consensus that, given the far-reaching effects of global change, hydrological systems can no longer be viewed as static in their structure; instead, they must be regarded as entire ecosystems, wherein hydrological processes interact and coevolve with biological, geomorphological, and pedological processes. To accurately predict the hydrological response under the impact of global change, it is essential to understand this complex coevolution. The knowledge of how hydrological processes, in particular the formation of subsurface (preferential) flow paths, evolve within this coevolution and how they feed back to the other processes is still very limited due to a lack of observational data.
At the hillslope scale, this intertwined system of interactions is known as the hillslope feedback cycle. This thesis aims to enhance our understanding of the hillslope feedback cycle by studying the coevolution of hillslope structure and hillslope hydrological response. Using chronosequences of moraines in two glacial forefields developed from siliceous and calcareous glacial till, the four studies shed light on the complex coevolution of hydrological, biological, and structural hillslope properties, as well as subsurface hydrological flow paths, over an evolutionary period of 10 millennia in these two contrasting geologies. The findings indicate that the contrasting properties of siliceous and calcareous parent materials lead to variations in soil structure, permeability, and water storage. As a result, different plant species and vegetation types are favored on siliceous versus calcareous parent material, leading to diverse ecosystems with distinct hydrological dynamics. The siliceous parent material was found to be more active in driving the coevolution. The soil pH resulting from parent material weathering emerges as a crucial factor, influencing vegetation development, soil formation, and, consequently, hydrology. The acidic weathering of the siliceous parent material favored the accumulation of organic matter, increasing the soils’ water storage capacity and attracting acid-loving shrubs, which further promoted organic matter accumulation and ultimately led to podzolization after 10,000 years. Tracer experiments revealed that the subsurface flow path evolution was influenced by soil and vegetation development, and vice versa. Subsurface flow paths changed from vertical, heterogeneous matrix flow to finger-like flow paths over a few hundred years, evolving into macropore flow, water storage, and lateral subsurface flow after several thousand years. The changes in flow paths among younger age classes were driven by weathering processes altering soil structure, as well as by vegetation development and root activity. In the older age class, the transition to more water storage and lateral flow was attributed to substantial organic matter accumulation and ongoing podzolization. The rapid vertical water transport in the finger-like flow paths, along with the conductive sandy material, contributed to podzolization and thus to the shift in the hillslope hydrological response.
In contrast, the calcareous site possesses a high pH buffering capacity, creating a neutral to basic environment with relatively low accumulation of dead organic matter, resulting in a lower water storage capacity and the establishment of predominantly grass vegetation. The coevolution was found to be less dynamic over the millennia. Similar to the siliceous site, significant changes in subsurface flow paths occurred between the young age classes. However, unlike the siliceous site, the subsurface flow paths at the calcareous site only altered in shape and not in direction. Tracer experiments showed that flow paths changed from vertical, heterogeneous matrix flow to vertical, finger-like flow paths after a few hundred to thousands of years, which was driven by root activities and weathering processes. Despite having a finer soil texture, water storage at the calcareous site was significantly lower than at the siliceous site, and water transport remained primarily rapid and vertical, contributing to the flourishing of grass vegetation.
The studies elucidated that changes in flow paths are predominantly shaped by the characteristics of the parent material and its weathering products, along with their complex interactions with initial water flow paths and vegetation development. Time, on the other hand, was not found to be a primary factor in describing the evolution of the hydrological response. This thesis makes a valuable contribution to closing the gap in the observations of the coevolution of hydrological processes within the hillslope feedback cycle, which is important to improve predictions of hydrological processes in changing landscapes. Furthermore, it emphasizes the importance of interdisciplinary studies in addressing the hydrological challenges arising from global change.
Protected cultivation in greenhouses or polytunnels offers the potential for sustainable production of high-yield, high-quality vegetables. This is related to the ability to produce more on less land and to use resources responsibly and efficiently. Crop yield has long been considered the most important factor. However, as plant-based diets have been proposed for a sustainable food system, the targeted enrichment of health-promoting plant secondary metabolites should be addressed. These metabolites include carotenoids and flavonoids, which are associated with several health benefits, such as cardiovascular health and cancer protection.
Cover materials generally have an influence on the climatic conditions, which in turn can affect the levels of secondary metabolites in vegetables grown underneath. Plastic materials are cost-effective and their properties can be modified by incorporating additives, making them the first choice. However, these additives can migrate and leach from the material, resulting in reduced service life, increased waste and possible environmental release. Antifogging additives are used in agricultural films to prevent the formation of droplets on the film surface, thereby increasing light transmission and preventing microbiological contamination.
This thesis focuses on LDPE/EVA covers and incorporated antifogging additives for sustainable protected cultivation, following two different approaches. The first addressed the direct effects of leached antifogging additives using simulation studies on lettuce leaves (Lactuca sativa var. capitata L.). The second determined the effect of antifog polytunnel covers on lettuce quality. Lettuce is usually grown under protective cover and can provide high nutritional value due to its carotenoid and flavonoid content, depending on the cultivar.
To study the influence of simulated leached antifogging additives on lettuce leaves, a GC-MS method was first developed to analyze these additives based on their fatty acid moieties. Three structurally different antifogging additives (reference material) were characterized outside of a polymer matrix for the first time. All of them contained fatty acids beyond the main fatty acid specified by the manufacturer. Furthermore, they were found to adhere to the leaf surface and could not be removed by water, and only partially by hexane.
The incorporation of these additives into polytunnel covers affects carotenoid levels in lettuce, but not flavonoid, caffeic acid derivative, or chlorophyll levels. Specifically, carotenoid levels were higher in lettuce grown under polytunnels without antifogging additives than with them. This was linked to the additives' effect on the light regime and suggested to be related to the function of carotenoids in photosynthesis.
In terms of protected cultivation, the use of LDPE/EVA polytunnels affected light and temperature, which are closely related. Carotenoid and flavonoid contents of lettuce grown under polytunnels responded in opposite directions, with higher carotenoid and lower flavonoid levels. At the level of individual compounds, the flavonoids detected in lettuce did not differ; lettuce carotenoids, however, adapted specifically depending on the cultivation period. The flavonoid reduction was shown to be transcriptionally regulated (CHS) in response to UV light (UVR8). In contrast, carotenoids are thought to be regulated post-transcriptionally, as indicated by the lack of correlation between carotenoid levels and transcripts of the first enzyme of carotenoid biosynthesis (PSY) and of a carotenoid-degrading enzyme (CCD4), as well as by the increased carotenoid metabolic flux. Understanding these regulatory mechanisms and metabolite adaptation strategies could further advance the strategic development and selection of cover materials.
Microalgae have been recognized as a promising green production platform for recombinant proteins. The majority of studies on recombinant protein expression have been conducted in the green microalga C. reinhardtii. While promising improvements regarding nuclear transgene expression in this alga have been made, expression is still inefficient due to epigenetic silencing, often resulting in low yields that are not competitive with other production organisms. Other microalgal species might be better suited for high-level protein expression but are limited in the availability of molecular tools.
The red microalga Porphyridium purpureum recently emerged as a candidate for the production of recombinant proteins. It is promising in that transformation vectors are episomally maintained as autonomously replicating plasmids in the nucleus at high copy number, leading to high expression levels in this red alga.
In this work, we expand the genetic tools for P. purpureum and investigate parameters that govern efficient transgene expression. We provide an improved transformation protocol to streamline the generation of transgenic lines in this organism. After being able to efficiently generate transgenic lines, we showed that codon usage is a main determinant of high-level transgene expression, not only at the protein level but also at the level of mRNA accumulation. The optimized expression constructs resulted in YFP accumulation of up to an unprecedented 5% of the total soluble protein. Furthermore, we designed new constructs conferring efficient secretion of the expressed proteins into the culture medium, simplifying the purification and harvest of recombinant proteins. To further improve transgene expression, we tested endogenous promoters driving the most highly transcribed genes in P. purpureum and found only a minor increase in YFP accumulation.
We employed the previous findings to express complex viral antigens from the hepatitis B virus and the hepatitis C virus in P. purpureum to demonstrate its feasibility as a producer of biopharmaceuticals. The viral glycoproteins were successfully produced at high levels and could attain their native conformation, indicating a functional glycosylation machinery and an appropriate folding environment in this red alga. We successfully upscaled the biomass production of transgenic lines and thereby provided enough material for immunization trials in mice, which were performed in collaboration. These trials showed no toxicity of either the biomass or the purified antigens, and, additionally, the algal-produced antigens were able to elicit a strong and specific immune response.
The results presented in this work pave the way for P. purpureum as a new promising producer organism for biopharmaceuticals in the microalgal field.
Column-oriented database systems can efficiently process transactional and analytical queries on a single node. However, increasing or peak analytical loads can quickly saturate single-node database systems. Then, a common scale-out option is using a database cluster with a single primary node for transaction processing and read-only replicas. Using (the naive) full replication, queries are distributed among nodes independently of the accessed data. This approach is relatively expensive because all nodes must store all data and apply all data modifications caused by inserts, deletes, or updates.
In contrast to full replication, partial replication is a more cost-efficient implementation: Instead of duplicating all data to all replica nodes, partial replicas store only a subset of the data while being able to process a large workload share. Besides lower storage costs, partial replicas enable (i) better scaling because replicas must potentially synchronize only subsets of the data modifications and thus have more capacity for read-only queries and (ii) better elasticity because replicas have to load less data and can be set up faster. However, splitting the overall workload evenly among the replica nodes while optimizing the data allocation is a challenging assignment problem.
The calculation of optimized data allocations in a partially replicated database cluster can be modeled using integer linear programming (ILP). ILP is a common approach for solving assignment problems, also in the context of database systems. Because ILP is not scalable, existing approaches (also for calculating partial allocations) often fall back to simple (e.g., greedy) heuristics for larger problem instances. Simple heuristics may work well but can lose optimization potential.
In this thesis, we present optimal and ILP-based heuristic programming models for calculating data fragment allocations for partially replicated database clusters. Using ILP, we are flexible to extend our models to (i) consider data modifications and reallocations and (ii) increase the robustness of allocations to compensate for node failures and workload uncertainty. We evaluate our approaches for TPC-H, TPC-DS, and a real-world accounting workload and compare the results to state-of-the-art allocation approaches. Our evaluations show significant improvements for various allocation properties: Compared to existing approaches, we can, for example, (i) almost halve the amount of allocated data, (ii) improve the throughput in case of node failures and workload uncertainty while using even less memory, (iii) halve the costs of data modifications, and (iv) reallocate less than 90% of data when adding a node to the cluster. Importantly, we can calculate the corresponding ILP-based heuristic solutions within a few seconds. Finally, we demonstrate that the ideas of our ILP-based heuristics are also applicable to the index selection problem.
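To make the underlying assignment problem concrete, the following toy sketch (not the thesis's ILP models; fragment sizes, query loads, and the load bound are invented) enumerates all assignments of queries to two replicas and picks the one that minimizes total stored data while keeping each replica's load within a bound:

```python
# Toy fragment-allocation sketch: each query is assigned to one replica,
# and a replica must store every fragment its queries access. We minimize
# the total stored data subject to a per-replica load bound. A real system
# would express this as an ILP instead of exhaustive search.
from itertools import product

fragment_size = {"A": 4, "B": 2, "C": 3, "D": 1}   # invented fragment sizes
queries = [                                        # (accessed fragments, load)
    ({"A", "B"}, 3), ({"B", "C"}, 2), ({"C", "D"}, 2), ({"A", "D"}, 3),
]
replicas, max_load = 2, 6                          # invented cluster parameters

def allocate(queries, replicas, max_load):
    best = None
    for assign in product(range(replicas), repeat=len(queries)):
        load = [0] * replicas
        stored = [set() for _ in range(replicas)]
        for (frags, cost), r in zip(queries, assign):
            load[r] += cost
            stored[r] |= frags                     # replica must hold these fragments
        if max(load) > max_load:
            continue                               # violates the load-balance bound
        total = sum(fragment_size[f] for s in stored for f in s)
        if best is None or total < best[0]:
            best = (total, assign)
    return best

total, assign = allocate(queries, replicas, max_load)
# Full replication would store 2 * 10 = 20 units; the optimal partial
# allocation here stores only 13.
print(total, assign)  # → 13 (0, 1, 1, 0)
```

The exhaustive loop only illustrates the objective (minimize allocated data) and the constraint (even load distribution); the thesis's contribution lies in ILP formulations and heuristics that make this tractable for realistic workloads.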
In an exploratory development, the present study designed and subsequently evaluated a science communication concept for a research training group investigating photochemical processes. The motivation is the ever-growing demand for science communication on the part of policymakers, who further demand that communicating one's own research become an integral part of scientific work in the future. To prepare young scientists for this task at an early stage, science communication is also implemented in research consortia.
For this reason, a preliminary study examined the requirements for a science communication concept within a research consortium by evaluating the doctoral researchers' attitudes toward science communication and their communication skills using a closed questionnaire. In addition, science communication types were derived from the data. Based on the results, different science communication measures were developed that differ in their conception, their target audiences, the form of communication, and the content.
As part of this development, a learning unit related to the content of the research training group was designed, consisting of a teaching-learning experiment and the accompanying materials. The learning unit was then integrated into one of the science communication measures. Depending on the demands placed on the doctoral researchers, the measures were supplemented by preparatory workshops.
A semi-open pre-post questionnaire was used to evaluate the influence of the science communication measures and the accompanying workshops on the doctoral researchers' self-efficacy, in order to draw conclusions about how the interventions change the perception of one's own communication skills. The results indicate that the individual science communication measures influence the different types in different ways. It can be assumed that, depending on one's own assessment of one's communication skills, there are different support needs that can be addressed by dedicated science communication measures.
On this basis, first approaches are proposed for a generally applicable strategy that fosters individual science communication skills in a scientific research consortium.
This thesis focuses on the molecular evolution of Macroscelidea, commonly referred to as sengis. Sengis are a mammalian order belonging to Afrotheria, one of the four major clades of placental mammals. There are currently twenty extant sengi species, all of which are endemic to the African continent. They can be separated into two families, the soft-furred sengis (Macroscelididae) and the giant sengis (Rhynchocyonidae). While giant sengis are found exclusively in forest habitats, the various soft-furred sengi species dwell in a broad range of habitats, from tropical rainforests to rocky deserts.
Our knowledge of the evolutionary history of sengis is largely incomplete. The high level of superficial morphological resemblance among different sengi species (especially the soft-furred sengis) has, for example, led to misinterpretations of phylogenetic relationships based on morphological characters. With the rise of DNA-based taxonomic inference, multiple new genera were defined and new species described. Yet no full-taxon molecular phylogeny exists, hampering the answering of basic taxonomic questions. This lack of knowledge can to some extent be attributed to the limited availability of fresh tissue samples for DNA extraction. The broad African distribution, partly in politically unstable regions, and low population densities complicate contemporary sampling approaches. Furthermore, the available DNA information usually covers only short stretches of the mitochondrial genome and thus a single genetic locus with limited informational content.
Developments in DNA extraction and library preparation protocols nowadays offer the opportunity to access DNA from museum specimens collected over the past centuries and stored in natural history museums throughout the world. Thus, the difficulties of fresh-sample acquisition for molecular biological studies can be overcome by applying museomics, the research field that emerged from these laboratory developments.
This thesis uses fresh tissue samples as well as a vast collection of museum specimens to investigate multiple aspects of macroscelidean evolutionary history. Chapter 4 focuses on the phylogenetic relationships of all currently known sengi species. By accessing DNA information from museum specimens in combination with fresh tissue samples and publicly available genetic resources, it produces the first full-taxon molecular phylogeny of sengis. It confirms the monophyly of the genus Elephantulus and discovers multiple deeply divergent lineages within different species, highlighting the need for species-specific approaches. The study furthermore examines the evolutionary time frame of sengis by evaluating the impact of commonly varied parameters on tree dating. The results show that the mitochondrial information used in previous studies to temporally calibrate the macroscelidean phylogeny led to an overestimation of node ages within sengis. Soft-furred sengis in particular are thus much younger than previously assumed. The refined knowledge of node ages within sengis offers the opportunity to link, for example, speciation events to environmental changes.
Chapter 5 focuses on the genus Petrodromus with its single representative, Petrodromus tetradactylus. It again exploits the opportunities of museomics and gathers a comprehensive multi-locus genetic dataset of P. tetradactylus individuals distributed across most of the known range of this species. It reveals multiple deeply divergent lineages within Petrodromus; some could possibly be associated with previously described subspecies, and at least one was formerly unknown. It underscores the necessity of a revision of the genus Petrodromus through the integration of both molecular and morphological evidence. The study furthermore identifies changing forest distributions driven by climatic oscillations as the main factor shaping the genetic structure of Petrodromus.
Chapter 6 uses fresh tissue samples to extend the genomic resources of sengis by thirteen new nuclear genomes, two of which were assembled de novo. An extensive dataset of more than 8,000 protein-coding one-to-one orthologs allows us to further refine and confirm the temporal framework of sengi evolution established in Chapter 4. This study moreover investigates the role of gene flow and incomplete lineage sorting (ILS) in sengi evolution. In addition, it identifies clade-specific genes of potentially outstanding evolutionary importance and links them to the phenotypic traits they may affect. A closer investigation of olfactory receptor proteins reveals clade-specific differences. A comparison of the demographic past of sengis with that of other small African mammals does not reveal a sengi-specific pattern.
Kochbücher à la religion
(2024)
Zur Auswertung der Umfrage
(2024)
Religions are important actors in civil society. How they interact with one another is of crucial importance for the future of plural, open societies. For this book, four actors in interreligious understanding have joined forces for the first time: the nationally and internationally active organizations Religions for Peace and Stiftung Weltethos, the Bundeskongress der Räte der Religionen as an association of municipal interreligious initiatives, and the research interface Forum Religionen im Kontext at the University of Potsdam. The book contains profiles of seventy interreligious organizations and initiatives. It is complemented by reflections on the history and future of interreligious dialogue in Germany and on the role of religion in civil society.
Federalism, whose roots reach back to the Middle Ages, is one of the fundamental facts of German history. This historical legacy is reflected in today's German statehood as anchored in the Basic Law and brought to life by the federal government, the Länder, and the municipalities. In this volume, renowned historians, political scientists, and legal scholars trace the fundamental developments in the history of federalism in Germany since the founding of the German nation state (1871). In doing so, they highlight the continuities and systemic ruptures of German statehood, from the Empire through the Weimar Republic and the Nazi state to the present day in the Federal Republic of Germany.
About the work
This volume on the owner-possessor relationship (Eigentümer-Besitzer-Verhältnis) now appears in its 10th edition and, in its proven format, covers the most important contested questions from this area of property law that are posed in coursework and in the state examination.
The individual problems are explained using example cases; the opinions held in scholarship and case law are juxtaposed, the arguments supporting them are briefly worked out and tested on further cases. In this way, students develop the legal problem awareness relevant for case solving as well as the ability to weigh the various views and arguments against one another, to form their own opinion, and to justify it convincingly.
Advantages at a glance: all relevant problems concerning the owner-possessor relationship in one volume; a comprehensive collection of references for the opinions held in scholarship and case law.
The new edition
For the new edition, the latest literature and the most recent case law have been incorporated, and a section on structuring disputes of opinion in examinations has been added.
Target audience
For law students, legal trainees (Rechtsreferendare), and tutorial (AG) group leaders.
Jahresbericht 2023
(2024)
This annual report covers the 2023 reporting period, in which research and teaching could again take place in person. Encounters and exchange in lecture halls and seminar rooms, on conference panels, and during coffee breaks are possible again, but experience shows that the options of working from home and online communication remain in place.
As an interdisciplinary central academic institution of the University of Potsdam, the MenschenRechtsZentrum once again endeavored during the reporting period to combine legal, philosophical, historical, cultural, and political science perspectives on human rights in research and teaching.
The researchers of the MenschenRechtsZentrum teach at the faculties to which they belong. Therefore, only those activities are listed here that relate to the work of the MenschenRechtsZentrum and to human rights issues; further information can be found on the homepages of the respective individuals.
Interventional treatment of atrial fibrillation damages neighboring tissues and organs more frequently than was perceived in the past. This work focuses on injuries to the esophagus, which are particularly relevant because of their poor predictability, their delayed onset, and the fatal prognosis once an atrio-esophageal fistula has formed.
Atrial fibrillation itself is not an immediate vital threat, but it is prognostically relevant through its complications (e.g., heart failure, stroke). Antiarrhythmic drugs do not achieve improved rhythm control (freedom from arrhythmia); catheter-based interventional treatment is superior to drug therapy. Early and successful treatment of atrial fibrillation has been shown to improve clinical endpoints and prognosis. However, the risk of invasive treatment (particularly with regard to prognostically relevant complications) must be considered when establishing the indication and performing the procedure, and must be weighed against the beneficial effects of treatment.
Studies on the prevention of the very rare atrio-esophageal fistulas rely on surrogate parameters, so far exclusively ablation-induced mucosal lesions of the esophagus. The investigations in this work show a more complex picture of (peri-)esophageal injury after atrial fibrillation ablation with thermal energy sources.
(1) New definition of esophageal injury: Esophageal and periesophageal damage occurs very frequently (in two thirds of patients according to the extended definition used here) and is independent of the ablation energy used. The manifestations of esophageal injury differ between the various energy protocols, although the mechanism for this has not been elucidated. This work describes the different manifestations of thermal esophageal injury, their determinants, and their pathophysiological relevance.
(2) The detection of (sometimes subtle) esophageal injury depends largely on the intensity of follow-up. Relying on subjective reports (e.g., pain on swallowing, heartburn) is misleading: the majority of changes remain asymptomatic, and the symptoms of a fully formed atrio-esophageal fistula (usually appearing after several weeks) already carry a very poor prognosis. In most electrophysiology centers, endoscopy of the esophagus is not performed, or only in the case of persistent symptoms, and it can only detect mucosal lesions. The extent of esophageal and periesophageal damage is thus vastly underestimated. Changes in the periesophageal space, whose clinical relevance is (still) unclear, are not recorded; wall edema and damage to the tissue between the left atrium and the esophagus (including nerves and vessels) are thereby ignored.
The studies also contribute to a reassessment of established measures and risk factors of esophageal injury.
(3) Temperature monitoring in the esophagus based on maximum deviations is informative only for extreme values and is therefore not helpful in avoiding esophageal lesions. Complex analysis of the raw temperature data (so far only possible offline) yields, in the AUC for RF ablations, a predictive parameter for esophageal injury that allows the subsequent endoscopic workup to be structured. No comparable value could be found for cryoablations in the analyses.
(4) Chronic inflammation of the lower third of the esophagus not only impedes the healing of a thermal esophageal lesion but can also promote the occurrence of such lesions through ablation. The large number of pre-existing esophageal changes, which indicate increased vulnerability, and their significance for the development of thermal lesions may serve as a starting point for preventive measures.
In addition, manifestations of esophageal injury that may be relevant for pathophysiological reasons are recorded and described by means of extensive diagnostics.
(5) The systematic extension of diagnostic imaging to the periesophageal space by endosonography showed that mucosal lesions alone represent only a small part of esophageal injury. Mucosal lesions resulting from instrumental injury are not associated with the risk of developing an atrio-esophageal fistula and underscore the pathophysiological relevance of the periesophageal changes.
(6) Functional diagnostics of thermal injury to the periesophageal vagal plexus identifies patients with esophageal damage that was not captured by imaging but whose effects (food retention and gastro-esophageal reflux) can contribute to lesion progression.
Nils-Hendrik Grohmann examines the still ongoing process of strengthening the UN human rights treaty bodies. He analyzes which legal powers the committees have, whether they can put forward proposals on their own initiative, and to what extent they have so far aligned their working methods with one another. A further focus lies on the cooperation between the various committees and on the question of what role the meeting of chairpersons can play in the strengthening process.
Organic-inorganic hybrids based on P3HT and mesoporous silicon for thermoelectric applications
(2024)
This thesis presents a comprehensive study of the synthesis, structure, and thermoelectric transport properties of organic-inorganic hybrids based on P3HT and porous silicon. The effect of embedding the polymer in silicon pores on electrical and thermal transport is studied. Morphological studies confirm successful polymer infiltration and diffusion doping, with roughly 50% of the pore space occupied by the conjugated polymer. Synchrotron diffraction experiments reveal no specific ordering of the polymer inside the pores. P3HT-pSi hybrids show electrical transport improved by five orders of magnitude compared to porous silicon, and power factor values comparable to or exceeding those of other P3HT-inorganic hybrids. The analysis suggests different transport mechanisms in the two materials. In pSi, the transport mechanism follows a Meyer-Neldel compensation rule. Analysis of the hybrids' data using the power law of the Kang-Snyder model suggests that the doped polymer mainly provides charge carriers to the pSi matrix, similar to the behavior of a doped semiconductor. The heavily suppressed thermal transport in porous silicon is treated with a modified Landauer/Lundstrom model and effective medium theories, which reveal that pSi agrees well with the Kirkpatrick model with a 68% percolation threshold. The thermal conductivities of the hybrids increase compared to empty pSi, but the overall thermoelectric figure of merit ZT of the P3HT-pSi hybrid exceeds that of pSi and P3HT as well as bulk Si.
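For reference, the power factor and figure of merit compared throughout this abstract are the standard thermoelectric quantities (conventional notation, not taken from the thesis itself):

```latex
\mathrm{PF} = S^{2}\sigma, \qquad ZT = \frac{S^{2}\sigma}{\kappa}\,T
```

where S is the Seebeck coefficient, σ the electrical conductivity, κ the total thermal conductivity, and T the absolute temperature. This is why suppressing thermal transport (small κ) while improving electrical transport (large σ) raises ZT.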
This paper provides novel evidence on the impact of public transport subsidies on air pollution. We obtain causal estimates by leveraging a unique policy intervention in Germany that temporarily reduced nationwide prices for regional public transport to a flat rate of 9 Euros per month. Using difference-in-differences (DiD) estimation strategies on air pollutant data, we show that this intervention causally reduced a benchmark air pollution index by more than eight percent, and that pollution increased again after the intervention's termination. Our results illustrate that public transport subsidies, especially in the context of spatially constrained cities, offer a viable alternative for policymakers and city planners to improve air quality, which has been shown to crucially affect health outcomes.
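The DiD logic behind such an estimate can be shown in a few lines: the change in the treated group minus the change in a control group nets out trends common to both. The pollution-index values below are invented for illustration and are not the paper's data.

```python
# Mean air pollution index, hypothetical numbers: a treated group exposed to
# the fare cut vs. a comparison group, before and during the intervention.
treated = {"pre": 52.0, "post": 44.0}
control = {"pre": 50.0, "post": 48.0}

# DiD estimate: differencing out the common time trend isolates the
# effect attributable to the intervention.
did = (treated["post"] - treated["pre"]) - (control["post"] - control["pre"])
print(did)  # -6.0 index points
```

In the paper this comparison is embedded in a regression framework, which additionally allows for covariates and standard errors; the arithmetic above is only the core identification idea.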
The landscape of software self-adaptation is shaped by the need to cost-effectively achieve and maintain (software) quality at runtime in the face of dynamic operating conditions. Optimization-based solutions perform an exhaustive search of the adaptation space and may thus provide quality guarantees. However, these solutions make the attainment of optimal adaptation plans time-intensive, thereby hindering scalability. Conversely, deterministic rule-based solutions yield only sub-optimal adaptation decisions, as they are typically bound by design-time assumptions, yet they offer efficient processing and implementation, readability, and the expressivity of individual rules, which supports early verification. Addressing the quality-cost trade-off requires solutions that simultaneously exhibit the scalability and cost-efficiency of rule-based policy formalisms and the optimality of optimization-based policy formalisms as explicit artifacts for adaptation. Utility functions, i.e., high-level specifications that capture system objectives, support the explicit treatment of the quality-cost trade-off. Nevertheless, non-linearities, complex dynamic architectures, black-box models, and runtime uncertainty that renders prior knowledge obsolete are a few of the sources of uncertainty and subjectivity that make the elicitation of utility non-trivial.
This thesis proposes a twofold solution for incremental self-adaptation of dynamic architectures. First, we introduce Venus, a solution that combines in its design a rule- and an optimization-based formalism, enabling optimal and scalable adaptation of dynamic architectures. Venus incorporates rule-like constructs and relies on utility theory for decision-making. Using a graph-based representation of the architecture, Venus captures rules as graph patterns that represent architectural fragments, thus enabling runtime extensibility and, in turn, support for dynamic architectures. The architecture is evaluated by assigning utility values to fragments; the pattern-based definition of rules and utility enables incremental computation of the utility changes that result from rule executions, rather than evaluation of the complete architecture, which supports scalability. Second, we introduce HypeZon, a hybrid solution for runtime coordination of multiple off-the-shelf adaptation policies, which typically offer only partial satisfaction of the quality and cost requirements. Realized on the basis of meta-self-aware architectures, HypeZon complements Venus by reusing existing policies at runtime to balance the quality-cost trade-off.
The twofold solution of this thesis is integrated into an adaptation engine that leverages state- and event-based principles for incremental execution and is therefore scalable to large and dynamic software architectures of growing size and complexity. The utility elicitation challenge is resolved by defining a methodology to train utility-change prediction models. The thesis thus addresses the quality-cost trade-off in the adaptation of dynamic software architectures via design-time combination (Venus) and runtime coordination (HypeZon) of rule- and optimization-based policy formalisms, while offering supporting mechanisms for optimal, cost-effective, scalable, and robust adaptation. The solutions are evaluated according to a methodology derived from our systematic literature review of evaluation in self-healing systems; the applicability and effectiveness of the contributions are demonstrated to go beyond the state of the art in their coverage of a wide spectrum of the problem space of software self-adaptation.
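The incremental utility computation described for Venus can be illustrated with a toy sketch: each matched architectural fragment carries a utility value, and a rule execution updates only the delta it causes instead of re-evaluating the whole architecture. All component names and utility values here are invented, not taken from the thesis.

```python
# Toy sketch of incremental utility bookkeeping. Each matched pattern
# (architectural fragment) contributes a utility value; the system-level
# utility is their sum, maintained incrementally.
utilities = {"db-replica": 0.6, "cache": 0.8, "web-frontend": 0.9}
total = sum(utilities.values())  # evaluated once for the initial architecture

def apply_rule(fragment, new_utility):
    """Re-score one fragment and adjust the total by the delta only."""
    global total
    total += new_utility - utilities[fragment]
    utilities[fragment] = new_utility

apply_rule("cache", 0.3)   # e.g., a failure degrades the cache fragment
print(round(total, 2))     # 1.8, without re-summing all fragments
```

The point of the sketch is the cost model: a rule execution touches O(1) fragments, so the utility update is O(1) as well, whereas a full re-evaluation would grow with the architecture's size.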
This Master's thesis addresses the question of the extent to which the most recent textbooks for French instruction at the Gymnasium, Découvertes 1 (Klett) and À plus 1 (Cornelsen), both published in 2020, use cross-linguistic content to point to, or draw on, previously learned languages and earlier language acquisition processes. The focus is on German as the language of schooling and/or first language and on English as the first foreign language, although other languages that appear are also included in the analysis.
The thesis contributes to the didactic discourse on multilingual content in foreign language textbooks. In addition, it can show teachers how these current textbooks can support their multilingualism-oriented teaching.
The introduction emphasizes the relevance of cross-linguistic networking for foreign language teaching, particularly with regard to the students' individual multilingualism. It points to the potential of interlingual transfer, which includes, among other things, easier learning as well as the promotion of language awareness and language learning awareness.
Chapter 2 lays the theoretical foundations for the analysis by examining multilingualism and multilingual didactics, cross-linguistic networking, and its potential. Using German and English, it also shows what linguistic transfer potential could be brought into beginning French instruction. The conditions under which students use interlingual transfer in their language acquisition are also discussed.
Chapter 3 provides an overview of the state of research on cross-linguistic networking and multilingualism in foreign language textbooks and identifies the research gap that this thesis attempts to close.
Chapter 4 formulates the research question and its sub-questions, describes the textbooks examined, and justifies the selection of the textbooks and of the textbook components analyzed. The methodology of the comparative textbook analysis is also explained.
The results of the analysis are presented in detail in Chapter 5. It shows which cross-linguistic content appears in the respective textbooks, in what form, and which languages and linguistic levels are involved.
Chapter 6 discusses and analyzes the results, addressing the textbooks' concepts of multilingualism and the trends in cross-linguistic content.
The concluding Chapter 7 emphasizes in summary that both textbooks offer a great deal of cross-linguistic content with the potential to support multilingual didactic work. At the production level in particular, however, too few transfer processes are initiated so far. It also outlines which further studies could complement this work, for example regarding the use of cross-linguistic content in the classroom.
Unternehmenssteuerrecht
(2024)
Business taxation law is considered a difficult specialist subject. Anyone who wants to keep track of this practically important area of law must understand the relevant interconnections. The stringent structure and numerous examples of the "Hüttemann/Schön", mostly modeled on the case law of the Federal Fiscal Court (BFH), together with many cross-references, facilitate an overview and a thorough grasp of the interconnections.
The work not only opens up a systematic approach to business taxation law for interested students of law, economics, and tax studies. It is also aimed at practitioners who depend on sufficient knowledge and a deeper understanding of this area of law in their advisory and structuring work.
Virtual reality promises high potential as an immersive, hands-on learning tool for training 21st-century skills. However, previous research revealed that the mere use of digital tools in higher education does not automatically translate into learning outcomes. Instead, information systems studies emphasized the importance of effective use behavior to achieve technology usage goals. Applying the affordance network approach, we investigated what constitutes effective usage behavior regarding a virtual reality collaboration system in digital education. Therefore, we conducted 18 interviews with students and observations of six course sessions. The results uncover how affordance actualization contributed to the achievement of learning goals. A comparison with findings of previous studies on other information systems (i.e., electronic medical record systems, big data analytics, fitness wearables) allowed us to highlight system-specific differences in effective use behavior. We also demonstrated a clear distinction between concepts surrounding effective use theory facilitating the application of the affordance network approach in information systems research.
Money matters!
(2024)
This paper examines the context dependency of attitudes toward maternal employment. We test three sets of factors that may affect these attitudes—economic benefits, normative obligations, and child-related consequences—by analyzing data from a unique survey experimental design implemented in a large-scale household panel survey in Germany (17,388 observations from 3,494 respondents). Our results show that the economic benefits associated with maternal employment are the most important predictor of attitudes supporting maternal employment. Moreover, we find that attitudes toward maternal employment vary by individual, household, and contextual characteristics (in particular, childcare quality). We interpret this variation as an indication that negative attitudes toward maternal employment do not necessarily reflect gender essentialism; rather, gender role attitudes are contingent upon the frames individuals have in mind.
Article 15 of the Basic Law as a socialist utopia? By no means. The socialization clause gives the legislature an instrument for exercising the state's responsibility to guarantee public services with the help of public-interest forms of economic organization. Socialization measures interfere with the fundamental right to property. They also encounter fundamental-rights guarantees of a market economy order and the EU-law systemic guarantee of free competition. The thesis therefore examines the constitutional requirements for socialization legislation at the federal and state levels, including judicial review. It also shows how socialization laws can be made to comply with EU law.
The public health insurance system in Germany will face huge economic challenges in the coming years. New diagnostic and therapeutic methods as well as demographic change contribute to constantly rising expenditure. Although incentives for health-promoting behaviour and financial sanctions for an unhealthy lifestyle have already been discussed in the past, there has been a general reluctance to legally establish corresponding mechanisms for fear of eroding solidarity and increasing state control. In the course of the Coronavirus pandemic, however, a stronger awareness arose of the fact that personal health-related life choices can have a huge impact on the stability of the healthcare system, including public health insurance. Not only in Germany but throughout much of Europe, the pandemic led to a new and more fundamental debate about the relationship between individual responsibility for personal health and the wider responsibility for public health assumed by the community of solidarity.
This work analyzed functional and regulatory aspects of the so far little-characterized EPSIN N-terminal Homology (ENTH) domain-containing protein EPSINOID2 in Arabidopsis thaliana. ENTH domain proteins play accessory roles in the formation of clathrin-coated vesicles (CCVs) (Zouhar and Sauer 2014). Their ENTH domain interacts with membranes, and their typically long, unstructured C-terminus contains binding motifs for adaptor protein complexes and for clathrin itself. There are seven ENTH domain proteins in Arabidopsis. Four of them possess the canonical long C-terminus and participate in various, presumably CCV-related intracellular transport processes (Song et al. 2006; Lee et al. 2007; Sauer et al. 2013; Collins et al. 2020; Heinze et al. 2020; Mason et al. 2023). The remaining three ENTH domain proteins, however, have severely truncated C-termini and were termed EPSINOIDs (Zouhar and Sauer 2014; Freimuth 2015). Their functions are currently unclear. Preceding studies focusing on EPSINOID2 indicated a role in root hair formation: epsinoid2 T-DNA mutants exhibited an increased root hair density, and EPSINOID2-GFP was specifically localized to non-hair cell files in the Arabidopsis root epidermis (Freimuth 2015, 2019).
In this work, analyses of three independent mutant alleles, including a newly generated CRISPR/Cas9 full-deletion mutant, clearly showed that loss of EPSINOID2 leads to an increase in root hair density. The ectopic root hairs emerging from non-hair positions in all epsinoid2 mutant alleles are most likely not a consequence of altered cell fate, because extensive genetic analyses placed EPSINOID2 downstream of the established epidermal patterning network. Thus, EPSINOID2 seems to act as a cell-autonomous inhibitor of root hair formation. Attempts to confirm this hypothesis by ectopically overexpressing EPSINOID2 led to the discovery of post-transcriptional and post-translational regulation through different mechanisms. One involves the little-characterized miRNA844-3p: interference with this pathway resulted in ectopic EPSINOID2 overexpression and decreased root hair density, confirming EPSINOID2 as a negative factor in root hair formation. A second mechanism likely involves proteasomal degradation: treatment with the proteasomal inhibitor MG132 led to EPSINOID2-GFP accumulation, and a KEN-box degron motif associated with degradation through a ubiquitin/proteasome-dependent pathway was identified in the EPSINOID2 sequence. In line with tight dose regulation, genetic analyses of all three mutant alleles indicate that EPSINOID2 is haploinsufficient. Lastly, it was revealed that, although EPSINOID2 promoter activity was found in all epidermal cells, protein accumulation was observed in N-cells only, hinting at yet another layer of regulation.
Der russische Krimi
(2024)
The first comprehensive account of the crime genre in Russia. It covers books and films and takes into account the debates in literary criticism, since Soviet-era cultural policy long struggled to grant the crime genre any right to exist at all. Generating sympathy for the militia eventually became the official purpose of this genre, which politics had pushed into a niche existence. Accordingly, one emphasis of the study lies on ideology, particularly in the portrayal of the heroes and their adversaries, and of the everyday world that readers were meant to recognize as their own. In the process, readers learn a great deal about that society, above all about its otherwise rather concealed dark sides.
Not least because of the long deprivation of suspenseful reading, the crime novel became the bestseller genre par excellence after the end of socialism. Using the examples of women's crime fiction (Marinina and her successors) and the postmodern crime novel (Akunin), the post-Soviet development is traced into the 2010s.
The dissertation examines the development of Verantwortungseigentum (steward-ownership), in particular through the example of the Carl-Zeiss-Stiftung under Ernst Abbe.
The concept of Verantwortungseigentum has for several years been part of the legal-policy debate on alternative forms of enterprise and ownership, with calls for the introduction of a dedicated corporate form.
The dissertation addresses these demands and the development of Verantwortungseigentum through the Carl-Zeiss-Stiftung and its foundation-owned enterprises Zeiss and Schott.
There, as early as the end of the 19th century, a form of what lawyers today understand as Verantwortungseigentum was introduced and shaped through careful contract drafting.
The aim and purpose of the study was to examine the overlaps, parallels, and differences between these legal entities and to get to the bottom of whether Verantwortungseigentum follows a longer legal tradition or is a purely contemporary idea.
This thesis presents an attempt to use source code synthesised from Coq formalisations of device drivers in existing (micro)kernel operating systems, with a particular focus on the Linux kernel.
In the first part, the technical background and related work are described. The focus here is on possible approaches to synthesising certified software with Coq, namely extraction to functional languages using the Coq extraction plugin and extraction to Clight code using the CertiCoq plugin. The implementation of CertiCoq is itself verified, whereas that of the Coq extraction plugin is not; consequently, the generated Clight code carries a correctness guarantee that code generated by the Coq extraction plugin does not. Furthermore, the differences between user-space and kernel-space software are discussed in relation to Linux device drivers. It is shown that working Linux kernel module components cannot be generated with the Coq extraction plugin without significant modifications, whereas working user-space drivers can be produced both with the Coq extraction plugin and with CertiCoq. The subsequent parts describe the main contributions of the thesis.
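As a minimal illustrative sketch (not taken from the thesis; the function and file names are hypothetical), the unverified extraction path amounts to formalising a computation in Coq and asking the extraction plugin to emit OCaml source:

```coq
Require Import Coq.extraction.Extraction.

(* A trivially formalised computation, standing in for driver logic:
   the divisor for the 1.193182 MHz programmable interval timer that
   clocks the PC speaker. *)
Definition pit_divisor (freq : nat) : nat := 1193182 / freq.

(* Emit OCaml source. Unlike the CertiCoq pipeline discussed above,
   the correctness of this translation step is itself unverified. *)
Extraction Language OCaml.
Extraction "pit_divisor.ml" pit_divisor.
```

The resulting `pit_divisor.ml` is ordinary OCaml, which is why this route yields user-space programs readily but kernel modules only with substantial extra work.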
In the second part, it is demonstrated how to extend the Coq extraction plugin to synthesise foreign function calls between the functional language OCaml and the imperative language C, an approach with the potential to improve the type safety of user-space drivers. It is further shown that the code synthesised by CertiCoq cannot be used in kernel space without modifications to the required runtime. The necessary modifications to the runtimes of CertiCoq and VeriFFI are therefore introduced, making both runtimes compatible components of a Linux kernel module. Justifications for these transformations are provided, and possible further extensions to both plugins as well as solutions to failing garbage-collection calls in kernel space are discussed.
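The kind of Coq-to-OCaml-to-C boundary involved here can be hinted at with the extraction plugin's standard `Extract Constant` directive (a hedged sketch: the axiom and the `Speaker_stubs.outb` wrapper are hypothetical, and the mechanism developed in the thesis extends the plugin itself rather than relying on this directive alone):

```coq
Require Import Coq.extraction.Extraction.

(* Axiomatize an effectful primitive that must ultimately be a C call,
   here an I/O-port write taking a port and a value. *)
Axiom outb : nat -> nat -> unit.

(* Map the axiom onto an OCaml function assumed to wrap a C stub, so
   extracted code calls across the OCaml/C foreign-function boundary. *)
Extract Constant outb => "Speaker_stubs.outb".
```

Everything on the Coq side remains typed, while the trust boundary is pushed into the single, explicitly named stub.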
The third part presents a proof-of-concept device driver for the Linux kernel. To this end, the event handler of the original PC Speaker driver is partially formalised in Coq, and some relevant formal properties of the formalised functionality are discussed. A kernel module is then defined that uses the modified variants of CertiCoq and VeriFFI to compile a working device driver. It is also shown that the synthesised code can be compiled with CompCert, thereby extending the correctness guarantee down to the assembly layer. A performance evaluation follows, comparing a naive formalisation of the PC Speaker functionality with the original PC Speaker driver and pointing out weaknesses in the formalisation and possible improvements. The part closes with a summary of the results, their implications, and the open questions they raise.
The last part lists all sources used, separated into scientific literature, documentation and reference manuals, and artifacts, i.e. source code.
Governments engage in corporatization by creating corporate entities or reorganizing existing ones. These corporatization activities reflect an interplay between political agency and environmental pressures, including (changing) notions of state-market relations. This paper discusses two ideal-typical organizational models of corporatization: the state as a marketizer and the marketization of the state. Whereas the first emphasizes the role of political design and agency in corporatization, the second emphasizes the role of (actors in) the environment. Both models are assessed across five corporatization episodes in Norway and Sweden, which also demonstrate the interplay between political agency and environmental pressure.