Refine
Has Fulltext
- yes (67)
Year of publication
- 2024 (67)
Document Type
- Doctoral Thesis (47)
- Working Paper (6)
- Master's Thesis (5)
- Article (2)
- Other (2)
- Part of Periodical (2)
- Postprint (2)
- Monograph/Edited Volume (1)
Keywords
- Satzverarbeitung (3)
- sentence processing (3)
- Arctic (2)
- Arktis (2)
- Carotinoide (2)
- Deep Learning (2)
- Jüdische Studien (2)
- Klimawandel (2)
- Kohlenstoff (2)
- Konstruktivismus (2)
- carbon (2)
- carotenoids (2)
- climate change (2)
- communication (2)
- constructivism (2)
- deep learning (2)
- experiment (2)
- machine learning (2)
- (latente) Mehrebenen-(Kovariaten-)Modelle (1)
- 1848/49 revolution (1)
- 3D-Einbettung (1)
- 3D-embedding (1)
- ALOX15B (1)
- Acetobacteraceae (1)
- Aktin (1)
- Ambivalenz (1)
- Amblystegiaceae (1)
- Amnestien (1)
- Anode (1)
- Antibeschlag-Additive (1)
- Antifouling (1)
- Aphasie (1)
- Atmosphäre (1)
- Atmosphärenforschung (1)
- Ausbreitung der kosmischen Strahlung (1)
- Austausch zwischen zwei Spezies (1)
- Bachdenitrifikation (1)
- Banken (1)
- Bekämpfungskonventionen (1)
- Beschriftung (1)
- Betäubungsmittelkriminalität (1)
- Beweidung (1)
- Biklausalität (1)
- Bildkontextanalyse (1)
- Bilingualismus (1)
- Biotechnologie (1)
- Blazar (1)
- Blickbewegungen (1)
- Bodenbewegungsmodellierung (1)
- Bodenfeuchtigkeit (1)
- Bodenhydrologie (1)
- Braunmoose (1)
- Bryophyten (1)
- CN (1)
- Central Andes (1)
- Central Europe (1)
- Chemie (1)
- Christentum (1)
- Chronosequenzstudie (1)
- Cicero (1)
- Cognitive Apprenticeship (1)
- Copolymere (1)
- Curriculare Innovation (1)
- DSS-Colitis (1)
- Dateistruktur (1)
- Datenaufbereitung (1)
- Datenbank (1)
- Datenmonetarisierung (1)
- Datenschutz (1)
- Datenschutz-Grundverordnung (DSGVO) (1)
- Datenschutzmanagement (1)
- Datenverwaltung (1)
- Debugging (1)
- Designparameter (1)
- Diamantstempelzelle (1)
- Dichte (1)
- Diffraktion (1)
- Digitale Bildung (1)
- Digitalisierung (1)
- Drohnen-Fernerkundung (1)
- Dynamische kognitive Modellierung (1)
- Dürre (1)
- Eigenspannung (1)
- Einzugsgebietshydrologie Wasserqualitätsmodell (1)
- Elektrolumineszenz (1)
- Elektrolumineszenz-Folie (1)
- Elektronenrückstreubeugung (1)
- Endophyten (1)
- Energie (1)
- Englisch (1)
- English (1)
- Epiphyten (1)
- Erdmantel (1)
- Ernährungsgewohnheit (1)
- Essigsäurebakterien (1)
- Exhaustivität (1)
- Exoplaneten (1)
- Exoplanetenatmosphären (1)
- Exziton-Dissoziation (1)
- Eye-Tracking-Verfahren (1)
- Eye-tracking (1)
- FATF (1)
- Familiarität (1)
- Faulkner studies (1)
- Faulknerforschung (1)
- Fernerkundung (1)
- Fernerkundung an Vulkanen (1)
- Ferroperiklas (1)
- Flavonoide (1)
- Fokus (1)
- Folientunnel (1)
- Französisch (1)
- Frauen (1)
- Frauenbewegung (1)
- Fremdsprachendidaktik (1)
- French (1)
- Freud-Forschung (1)
- Freud-research (1)
- Frieden (1)
- GNSS (1)
- GPS (1)
- Gammastrahlen: allgemein (1)
- Geldwäsche (1)
- General Data Protection Regulation (GDPR) (1)
- Genomik (1)
- Geomorphologie (1)
- Geophysik (1)
- German women's movement (1)
- Geschichte (1)
- Geschichtswissenschaft (1)
- Gletscherschmelze (1)
- Grenzschicht (1)
- Habsburg Empire (1)
- Habsburg Studies (1)
- Habsburgisches Reich (1)
- Habsburgstudien (1)
- Halophyten (1)
- Hitzeaktionsplan (1)
- Hochdruck (1)
- Hochschuldidaktik (1)
- Hohlraumeffekte (1)
- Hydrogele (1)
- Hydrologie (1)
- In-situ Experimente (1)
- Inconel 718 (1)
- Individual Participant Data Metaanalyse (1)
- Indoor farming (1)
- Informationsstruktur (1)
- Inhaltsanalyse (1)
- Instabilitäten (1)
- Interessengrad-Techniken (1)
- Intersections (1)
- Intraklassenkorrelation (1)
- Islam (1)
- Jewish Studies (1)
- Jewish studies (1)
- Judentum (1)
- Kalibrierung an mehreren Standorten (1)
- Kasus (1)
- Kinder (1)
- Klimaanpassung (1)
- Klimaresilienz (1)
- Knock in Mäuse (1)
- Koartikulation (1)
- Kognitionspsychologie (1)
- Kohlenstoffnitrid (CN) (1)
- Komplexität (1)
- Kopfsalat (1)
- Korruption (1)
- Kovariatenwahl (1)
- Kreativitätstest (1)
- Kunststoff-Additive (1)
- Kursdesign (1)
- Ladungsgenerierung (1)
- Landschaftsentwicklung (1)
- Large-Scale Assessment (1)
- Laserstrahlschmelzen (1)
- Lateinunterricht (1)
- Legitimität (1)
- Lehrwerk (1)
- Lerneinheit (1)
- Lerntagebuch (1)
- Lernumgebung (1)
- Lesen (1)
- Lipoxygenase (1)
- Louise Otto-Peters (1)
- Lösungsmittel (1)
- MOOC (1)
- Massenspektrometrie (1)
- Mehrsprachigkeit (1)
- Mehrsprachigkeitsdidaktik (1)
- Mensch-Technik-Interaktion (1)
- Menschenrechte (1)
- Meta-Selbstanpassung (1)
- Methodik (1)
- Micro Degree (1)
- Mikroalgen (1)
- Moorsukzession (1)
- Moos-Mikroben-Interaktion (1)
- Moos-assoziierte Methanoxidation (1)
- Moos-assoziierte Methanproduktion (1)
- Morphologie (1)
- Nahrung der Zukunft (1)
- Nationale Aktionspläne (1)
- Natrium-Ionen-Batterie (1)
- Naturgefahren (1)
- Nicht-Fulleren-Akzeptoren (1)
- Numerus (1)
- Nutzer-Engagement (1)
- Online-Lehre (1)
- Onlinekurs (1)
- Onlinekurs-Produktion (1)
- Organisationen (1)
- Permafrost (1)
- Perowskit-Solarzellen (1)
- Pflanzenwachstum (1)
- Pfotenödem Mausmodell (1)
- Phonetik (1)
- Phonologie (1)
- Physik (1)
- Politikunterricht (1)
- Potenziale (1)
- Poweranalyse (1)
- Pro Milone (1)
- Probleme (1)
- Produktionssteuerung (1)
- Projektarbeit (1)
- Prototyp (1)
- Psychoanalyse (1)
- Psycholinguistik (1)
- Reinforcement Learning (1)
- Relativized Minimality (1)
- Relativsätze (1)
- Religionskunde (1)
- Religiöses Leben (1)
- Restaurierung von Flüssen (1)
- Revolution 1848/49 (1)
- Russian (1)
- Russisch (1)
- Saline Landwirtschaft (1)
- Schuld (1)
- Schulleistung (1)
- Schwefel (1)
- Schülermaterial (1)
- Seismologie (1)
- Selection-Linked Integration (1)
- Sequenzielle Likelihood (1)
- Simulation (1)
- Simulation, Größe (1)
- Social Bots erkennen (1)
- Softwareanalytik (1)
- Softwareentwicklung (1)
- Softwarevisualisierung (1)
- Solarzellen (1)
- Soziologie (1)
- Spaltsätze (1)
- Speisegebot (1)
- Spektroskopie (1)
- Sphagnum (1)
- Sprachbewusstheit (1)
- Spracherwerb (1)
- Sprachlernbewusstheit (1)
- Sprachvernetzung (1)
- Sprachverständnis (1)
- Staatsanleihen (1)
- Staatsverschuldung (1)
- Stadtplanung (1)
- Starkregen (1)
- Stereotype (1)
- Stern-Planeten-Wechselwirkung (1)
- Sternphysik (1)
- Strömungsneigung (1)
- Synchrotronstrahlung (1)
- Synthese (1)
- Systemtheorie (1)
- Talmudic Judaism (1)
- Talmudisches Judentum (1)
- Tanz (1)
- Terrorismus (1)
- Test (1)
- Testmanual (1)
- Textur (1)
- Thermoelektrizität (1)
- Torfmoose (1)
- Transfer (1)
- Transkriptomik (1)
- Treemaps (1)
- Tupaia belangeri (1)
- Ultraschall (1)
- Ungewissheit (1)
- Untereinheitenimpfstoff (1)
- Utility-Funktionen (1)
- Vorhersagemodelle (1)
- Vorurteile (1)
- Vulkanüberwachung (1)
- Völkerrecht (1)
- WPS Agenda (1)
- WPS agenda (1)
- Wasserdampf (1)
- Web-basiertes Rendering (1)
- Wellen (1)
- William Faulkner (1)
- Wirtschaftsinformatik Projekte (1)
- Wirtsspezifität (1)
- Wissenschaftskommunikation (1)
- Wissenschaftskommunikationstypen (1)
- Wärmefluss (1)
- Wärmekapazität (1)
- Zellmotilität (1)
- Zentraleuropa (1)
- Zustandsverwaltung (1)
- Zuweisung thematischer Rollen (1)
- Zwitterionen (1)
- actin (1)
- additive Fertigung (1)
- additive manufacturing (1)
- alternative Strafverfolgungsmechanismen (1)
- alternative criminal prosecution mechanisms (1)
- ambivalence (1)
- amnesties (1)
- anode (1)
- antifogging additives (1)
- antifouling (1)
- aphasia (1)
- architecture-based software adaptation (1)
- architekturbasierte Softwareanpassung (1)
- assimilatorische Aufnahme (1)
- assimilatory uptake (1)
- atmosphere (1)
- atmospheric science (1)
- banking (1)
- beliefs (1)
- biclausality (1)
- bilingualism (1)
- biotechnology (1)
- blazar (1)
- bleifreie Perowskit-Solarzellen (1)
- boundary layer (1)
- bourgeoisie (1)
- brown mosses (1)
- bryophytes (1)
- bürgerliches Frauenbild (1)
- case (1)
- catchment hydrology Water quality model (1)
- cavity effects (1)
- cell motility (1)
- charge generation (1)
- chemically induced dislocation (1)
- chemisch-induzierte Dislokation (1)
- chemistry (1)
- children (1)
- chronosequence study (1)
- clefts (1)
- climate mitigation (1)
- climate resilience (1)
- coarticulation (1)
- cognitive apprenticeship (1)
- cognitive psychology (1)
- complexity (1)
- conceptual change (1)
- cooperation (1)
- copolymers (1)
- corruption (1)
- cosmic ray propagation (1)
- covariate selection (1)
- creativity test (1)
- curriculum innovation (1)
- dance (1)
- data management (1)
- data monetization (1)
- data preparation (1)
- data privacy (1)
- database (1)
- debugging (1)
- degree-of-interest techniques (1)
- density (1)
- design parameters (1)
- diamond anvil cell (1)
- diffraction (1)
- diffraction elastic constants (1)
- diffraktionselastische Konstanten (1)
- digital education (1)
- digital fashion (1)
- digital product development (1)
- digital transformations (1)
- digital twin (1)
- digitale Mode (1)
- digitale Produktentwicklung (1)
- digitale Transformation (1)
- digitalization (1)
- drought (1)
- drug trafficking (1)
- dynamical cognitive modeling (1)
- e-learning (1)
- earth mantle (1)
- electroluminescence (1)
- electroluminescent foil (1)
- electron backscatter diffraction (1)
- emotional cognitive dynamics (1)
- emotional kognitive Dynamiken (1)
- endophytes (1)
- energy (1)
- entrepreneurship (1)
- enzymatische Reaktionsspezifität (1)
- epiphytes (1)
- equity crowdfunding (1)
- erklärte Varianz (1)
- escalation of commitment (1)
- eskalierendes Commitment (1)
- exciton dissociation (1)
- executive functions (1)
- exekutive Funktionen (1)
- exhaustivity (1)
- exoplanet atmospheres (1)
- exoplanets (1)
- experimental studies (1)
- experimentelle Studien (1)
- explained variance (1)
- eye movements (1)
- eye tracking (1)
- eye-tracking (1)
- familiarity (1)
- feminist foreign policy (1)
- feministische Außenpolitik (1)
- ferropericlase (1)
- file structure (1)
- finance (1)
- financial access and inclusion (1)
- fiscal capacity (1)
- fiskalische Kapazität (1)
- flavonoids (1)
- focus (1)
- foreign language teaching (1)
- fragile Staaten (1)
- fragile states (1)
- future food (1)
- galactic magnetic fields (1)
- galaktische Magnetfelder (1)
- gamma rays: general (1)
- genomics (1)
- geographische Großstudie (1)
- geomorphology (1)
- geophysics (1)
- geschützter Anbau (1)
- glacier melt (1)
- globales Navigationssatellitensystem (1)
- globales Positionsbestimmungssystem (1)
- graph neural networks (1)
- graphische neuronale Netze (1)
- grazing (1)
- ground motion modeling (1)
- guilt (1)
- halophytes (1)
- heat action plan (1)
- heat capacity (1)
- heat flux (1)
- hierarchical data (1)
- hierarchische Daten (1)
- high pressure (1)
- high resolution (1)
- history (1)
- hohe Auflösung (1)
- host-specificity (1)
- human diet (1)
- human rights (1)
- human-technology interaction (1)
- hybrid Bayesian-classical precision simulations (1)
- hybrid fashion (1)
- hybride Bayesianisch-klassische Simulationen der Schätzgenauigkeit (1)
- hydrogels (1)
- hydrology (1)
- hydrothermale Alteration (1)
- immaterielle Mode (1)
- in-operando SAXS (1)
- in-situ testing (1)
- individual participant data meta-analysis (1)
- individually, multisite, and cluster randomized trials (1)
- individuell-, block- und cluster-randomisierte Studien (1)
- indoor farming (1)
- infinitely repeated game (1)
- information structure (1)
- information systems projects (1)
- instabilities (1)
- interactive visualization (1)
- interaktive Visualisierung (1)
- international criminal law (1)
- international law (1)
- international mutual legal assistance (1)
- internationale Rechtshilfe (1)
- internationales Strafrecht (1)
- interspecies interchange (1)
- intraclass correlation (1)
- labeling (1)
- landscape evolution (1)
- language acquisition (1)
- language awareness (1)
- language learning awareness (1)
- large-scale assessment (1)
- large-scale study (1)
- laser powder bed fusion (1)
- lautes Denken (1)
- lead-free perovskites (1)
- leadership (1)
- learning environment (1)
- learning unit (1)
- legitimacy (1)
- lettuce (1)
- logical errors (1)
- logische Fehler (1)
- mHM-Nitrat-Modell (1)
- mHM-Nitrate model (1)
- mammalian ALOX15 orthologs (1)
- maschinelles Lernen (1)
- mass spectrometry (1)
- mechanical behavior (1)
- mechanisches Verhalten (1)
- menschliche Ernährung (1)
- meta self-adaptation (1)
- methanogenic archaea (1)
- methanotrophic bacteria (1)
- methanoxidierende Bakterien (1)
- methanproduzierende Archaeen (1)
- methodology (1)
- micro degree (1)
- micro-credential (1)
- microalgae (1)
- mikrobielle Moor-Kerngemeinschaft (1)
- model-driven engineering (1)
- modellgesteuerte Entwicklung (1)
- modular production (1)
- modulare Produktion (1)
- money laundering (1)
- morpho-syntactic features (1)
- morpho-syntaktische Merkmale (1)
- morphology (1)
- moss-associated archaea (1)
- moss-associated bacteria (1)
- moss-associated methanogenesis (1)
- moss-associated methanotrophy (1)
- moss-microbe-interactions (1)
- multi-site calibration (1)
- multilevel (latent covariate) models (1)
- multilingual didactics (1)
- multilingualism (1)
- national action plans (1)
- natural hazards (1)
- nicht-thermische Strahlung (1)
- non-fullerene acceptors (1)
- non-thermal radiation (1)
- northern peatlands (1)
- number (1)
- nördliche Moore (1)
- online course creation (1)
- online course design (1)
- online teaching (1)
- optical properties (1)
- optische Eigenschaften (1)
- organic solar cell (1)
- organic-inorganic hybrids (1)
- organisch-anorganische Hybride (1)
- organische Solarzelle (1)
- organisierte Kriminalität (1)
- organizations (1)
- organized crime (1)
- parameter transferability (1)
- peace (1)
- peatland core microbiome (1)
- peatland development (1)
- permafrost (1)
- perovskite (1)
- perovskite solar cells (1)
- persistent memory (1)
- persistenter Speicher (1)
- pflanzliche Sekundär Metabolite (1)
- phonetics (1)
- phonology (1)
- photovoltaische Materialien (1)
- physics (1)
- plant growth (1)
- plant secondary metabolites (1)
- plastic additives (1)
- plurilingualism (1)
- pluvial flooding (1)
- pmem (1)
- political science (1)
- polytunnel (1)
- potentials (1)
- power analysis (1)
- prediction models (1)
- prisoner’s dilemma (1)
- privacy management (1)
- problems (1)
- production control (1)
- progressive rendering (1)
- progressives Rendering (1)
- promises (1)
- protected cultivation (1)
- prototype (1)
- psychoanalysis (1)
- psycholinguistics (1)
- public debt (1)
- public good (1)
- rapid earthquake impact assessment (1)
- reading (1)
- reinforcement learning (1)
- relative clauses (1)
- relativierte Minimalitätstheorie (1)
- relativistic processes (1)
- relativistische Prozesse (1)
- remote sensing (1)
- residual stress (1)
- river restoration (1)
- räumlich-zeitliche Validierung (1)
- römisches Recht (1)
- saline agriculture (1)
- scalable (1)
- schnelle Einschätzung von Erdbebenauswirkungen (1)
- schwach elektrischer Fisch (1)
- science communication (1)
- science communication types (1)
- secondary plant metabolites (1)
- seismic hazard (1)
- seismische Gefährdung (1)
- seismology (1)
- sekundäre Pflanzenstoffe (1)
- selbstanpassende Systeme (1)
- selbstheilende Systeme (1)
- selection-linked integration (1)
- self-adaptive systems (1)
- self-healing (1)
- sequential likelihood (1)
- simulation (1)
- simulation, size (1)
- skalierbar (1)
- social bot detection (1)
- socio-technical system (1)
- sociology (1)
- sodium-ion battery (1)
- soft information (1)
- software analytics (1)
- software development (1)
- software visualization (1)
- soil hydrology (1)
- soil moisture (1)
- solar cells (1)
- sovereign exposure (1)
- sozio-technisches System (1)
- spatiotemporal validation (1)
- spectroscopy (1)
- spoken sentence comprehension (1)
- stabile Schichtung (1)
- stable stratification (1)
- star-planet interaction (1)
- state management (1)
- stellar physics (1)
- strategic uncertainty (1)
- stream denitrification (1)
- stream sinuosity (1)
- student achievement (1)
- städtisch (1)
- städtischer Wärmeinseleffekt (1)
- subunit vaccine (1)
- sulfur (1)
- suppression conventions (1)
- suspended sediment (1)
- suspendiertes Sediment (1)
- synchrotron radiation (1)
- synthesis (1)
- systematic (1)
- systematisch (1)
- systemic risk (1)
- systemisches Risiko (1)
- systems theory (1)
- tabellarische Dateien (1)
- tabular data (1)
- teaching material (1)
- terrorism (1)
- test (1)
- test manual (1)
- textbook (1)
- texture (1)
- thematic-role assignment (1)
- thermoelectricity (1)
- think aloud (1)
- tin perovskites (1)
- transcriptomics (1)
- transfer (1)
- transnational crime (1)
- transnational criminal law (1)
- transnationale Kriminalität (1)
- transnationales Strafrecht (1)
- treemaps (1)
- tropical freshwater fish (1)
- tropische Süßwasserfische (1)
- ultra-high energy cosmic rays (1)
- ultrahochenergetische kosmische Strahlung (1)
- ultrasound tongue imaging (1)
- uncertainty (1)
- unreal fashion (1)
- urban heat island (1)
- user engagement (1)
- utility functions (1)
- virtual fashion (1)
- virtuelle Mode (1)
- visuell-linguistische Integration (1)
- visuo-linguistic integration (1)
- volcanic hydrothermal systems (1)
- volcano remote sensing (1)
- voting (1)
- vulkanische Entgasungs-und Hydrothermalsysteme (1)
- vulkanische Entgasungssysteme (1)
- water vapour (1)
- waves (1)
- weakly electric fish (1)
- web-based rendering (1)
- women (1)
- zentrale Anden (1)
- zwitterions (1)
- Überschneidungen (1)
- Übertragbarkeit der Parameter (1)
Institute
- Extern (11)
- Institut für Physik und Astronomie (8)
- Hasso-Plattner-Institut für Digital Engineering GmbH (7)
- Institut für Biochemie und Biologie (5)
- Institut für Geowissenschaften (5)
- Institut für Umweltwissenschaften und Geographie (5)
- Center for Economic Policy Analysis (CEPA) (4)
- Department Linguistik (4)
- Fachgruppe Volkswirtschaftslehre (4)
- Institut für Chemie (4)
The project-seminar concept presented in this contribution responds to a perceived distance and uncertainty among students in the subject Lebensgestaltung-Ethik-Religionskunde toward religion-related topics. Drawing on conceptual-change research, various strategies were used to encourage students to perceive and reflect on their own cultural standpoint and their own concepts of religion(s). The students documented their learning process in work-journal entries, which were in turn examined by means of a qualitative content analysis. After presenting the students' religion- and teaching-related conceptions elicited in this way, the contribution offers suggestions as to how the analyzed findings can serve as a basis for improving university teaching in this field.
This study focuses on William Faulkner, whose works explore the demise of the slavery-based Old South during the Civil War in a highly experimental narrative style. Central to this investigation is the analysis of the temporal dimensions of both individual and collective guilt, thus offering a new approach to the often-discussed problem of Faulkner’s portrayal of social decay. The thesis examines how Faulkner re-narrates the legacy of the Old South as a guilt narrative and argues that Faulkner uses guilt to corroborate his concept of time and the idea of the continuity of the past. The analysis focuses on three of Faulkner’s arguably most important novels: The Sound and the Fury, Absalom, Absalom!, and Go Down, Moses. Each of these novels features a main character deeply overwhelmed by the crimes of the past, whether private, familial, or societal. As a result, guilt is explored from both a domestic and a social perspective. To show how Faulkner blends past and present by means of guilt, this work examines several methods and motifs borrowed from different fields and genres with which Faulkner narratively negotiates guilt. These include religious notions of original sin, the motif of the ancestral curse prevalent in the Southern Gothic genre, and the psychological concept of trauma. Each of these motifs emphasizes the temporal dimensions of guilt, which are the core of this study, and makes clear that guilt in Faulkner’s work is primarily to be understood as a temporal rather than a moral problem.
In the present study, a science-communication concept for a research training group investigating photochemical processes was developed exploratively and subsequently evaluated. The motivation is the steadily growing political demand for science communication; it is further demanded that communicating one's own research become an integral part of scientific work in the future. To prepare young scientists for this task at an early stage, science communication is also being implemented within research consortia.
For this reason, a preliminary study examined which requirements a science-communication concept within a research consortium must meet, by evaluating the doctoral researchers' attitudes toward science communication and their communication skills using a closed questionnaire. In addition, science-communication types were derived from the data. Based on the results, different science-communication measures were developed that differ in their conception, their audiences, the form of communication, and the content.
As part of this development, a learning unit related to the topics of the research training group was designed, consisting of a teaching-learning experiment and the accompanying materials. The learning unit was then integrated into one of the science-communication measures. Depending on the demands placed on the doctoral researchers, the measures were supplemented with preparatory workshops.
A semi-open pre-post questionnaire was used to evaluate the influence of the science-communication measures and the associated workshops on the doctoral researchers' self-efficacy, in order to draw conclusions about how the interventions change the perception of one's own communication skills. The results indicate that the individual science-communication measures influence the different types in different ways. It can be assumed that, depending on one's own assessment of one's communication skills, there are different support needs that can be addressed by dedicated science-communication measures.
On this basis, first approaches toward a generally applicable strategy are proposed that fosters individual science-communication skills within a research consortium in the natural sciences.
This dissertation examines the integration of incongruent visual-scene and morphological-case information (“cues”) in building thematic-role representations of spoken relative clauses in German.
Addressing the mutual influence of visual and linguistic processing, the Coordinated Interplay Account (CIA) describes a two-step mechanism supporting visuo-linguistic integration (Knoeferle & Crocker, 2006, Cog Sci). However, the outcomes and dynamics of integrating incongruent thematic-role representations from distinct sources have scarcely been investigated. Further, there is evidence that both second-language (L2) and older speakers may rely relatively more on non-syntactic cues than first-language (L1) and young speakers. Yet the role of visual information in thematic-role comprehension has not been measured in L2 speakers, and only to a limited extent across the adult lifespan.
Thematically unambiguous canonically ordered (subject-extracted) and noncanonically ordered (object-extracted) spoken relative clauses in German (see 1a-b) were presented in isolation and alongside visual scenes conveying either the same (congruent) or the opposite (incongruent) thematic relations as the sentence did.
1 a Das ist der Koch, der die Braut verfolgt.
This is the.NOM cook who.NOM the.ACC bride follows
This is the cook who is following the bride.
b Das ist der Koch, den die Braut verfolgt.
This is the.NOM cook whom.ACC the.NOM bride follows
This is the cook whom the bride is following.
The relative contribution of each cue to thematic-role representations was assessed with agent identification. Accuracy and latency data were collected post-sentence from a sample of L1 and L2 speakers (Zona & Felser, 2023), and from a sample of L1 speakers from across the adult lifespan (Zona & Reifegerste, under review). In addition, the moment-by-moment dynamics of thematic-role assignment were investigated with mouse tracking in a young L1 sample (Zona, under review).
The following questions were addressed: (1) How do visual scenes influence thematic-role representations of canonical and noncanonical sentences? (2) How does reliance on visual-scene, case, and word-order cues vary in L1 and L2 speakers? (3) How does reliance on visual-scene, case, and word-order cues change across the lifespan?
The results showed reliable effects of incongruence between visually and linguistically conveyed thematic relations on thematic-role representations. Incongruent (vs. congruent) scenes yielded slower and less accurate responses to agent-identification probes presented post-sentence. The recently inspected agent was considered the most likely agent ~300 ms after trial onset, and the convergence of visual scenes and word order enabled comprehenders to assign thematic roles predictively.
L2 (vs. L1) participants relied more on word order overall. In response to noncanonical clauses presented with incongruent visual scenes, sensitivity to case predicted the size of incongruence effects better than L1-L2 grouping. These results suggest that the individual’s ability to exploit specific cues might predict their weighting.
Sensitivity to case was stable throughout the lifespan, while visual effects increased with increasing age and were modulated by individual interference-inhibition levels. Thus, age-related changes in comprehension may stem from stronger reliance on visually (vs. linguistically) conveyed meaning.
These patterns represent evidence for a recent-role preference – i.e., a tendency to re-assign visually conveyed thematic roles to the same referents in temporally coordinated utterances. The findings (i) extend the generalizability of CIA predictions across stimuli, tasks, populations, and measures of interest, (ii) contribute to specifying the outcomes and mechanisms of detecting and indexing incongruent representations within the CIA, and (iii) speak to current efforts to understand the sources of variability in sentence comprehension.
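Mouse-tracking analyses of the kind mentioned above typically derive indices from trajectory geometry. As a hedged illustration only (this is not the dissertation's actual analysis code, and the function name is made up), the following sketch computes one standard index, the maximum deviation of a trajectory from the direct start-to-end path, which is often read as attraction toward the competitor response:

```python
def max_deviation(xs, ys):
    """Maximum perpendicular deviation of a mouse trajectory from the
    straight line between its start and end points. xs/ys are the
    sampled cursor coordinates of one trial (hypothetical data shape)."""
    x0, y0, x1, y1 = xs[0], ys[0], xs[-1], ys[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0:
        # degenerate trajectory: start and end coincide
        return 0.0
    # perpendicular point-to-line distance via the 2D cross product
    return max(abs((x - x0) * dy - (y - y0) * dx) / norm
               for x, y in zip(xs, ys))
```

A perfectly direct movement yields 0; larger values indicate a trajectory that was pulled toward the alternative response before settling on the final choice.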
Arachidonic acid lipoxygenases (ALOX isoforms) are lipid-peroxidizing enzymes that play a role in cell differentiation and in the pathogenesis of various diseases. The human genome contains six functional ALOX genes, each present as a single-copy gene, and for each human ALOX gene there is an orthologous mouse gene. Although the six human ALOX isoforms are structurally very similar, their functional properties differ markedly. The present work addressed four different questions concerning the occurrence, the biological role, and the evolution-dependent enzymatic properties of mammalian ALOX isoforms:
1) Tree shrews (Tupaiidae) are evolutionarily more closely related to humans than rodents and have therefore been proposed as alternative models for studying human diseases. In this work, the arachidonic acid metabolism of tree shrews was investigated for the first time. The genome of Tupaia belangeri was found to contain four distinct ALOX15 genes whose enzymes resemble one another in their catalytic properties. This genomic diversity, which exists in neither humans nor mice, complicates functional studies of the biological role of the ALOX15 pathway. Tupaia belangeri therefore does not appear to be a more suitable animal model for studying the human ALOX15 pathway.
2) According to the evolutionary hypothesis, mammalian ALOX15 orthologs can be divided into arachidonic acid 12-lipoxygenating and arachidonic acid 15-lipoxygenating enzymes: mammalian species more highly evolved than gibbons express arachidonic acid 15-lipoxygenating ALOX15 orthologs, whereas evolutionarily less developed mammals possess arachidonic acid 12-lipoxygenating enzymes. In this work, eleven new ALOX15 orthologs were expressed as recombinant proteins and functionally characterized. The results fit the evolutionary hypothesis without contradiction and broaden its experimental basis. The experimental data also confirm the triad concept.
3) Since human and murine ALOX15B orthologs exhibit different functional properties, results from murine disease models on the biological role of ALOX15B cannot be transferred directly to humans. To functionally align the mouse and human ALOX15B orthologs, knock-in mice were generated in this work by in vivo mutagenesis using CRISPR/Cas9 technology. These mice express a humanized mutant (double mutation Tyr603Asp+His604Val) of murine Alox15b. The mice were viable and fertile but showed sex-specific differences from outcrossed wild-type controls during their individual development.
4) Previous studies on the role of ALOX15B in inflammation postulated an anti-inflammatory effect of the enzyme. The present work examined whether humanization of murine Alox15b affects the inflammatory response in two different murine inflammation models. Humanizing murine Alox15b led to more pronounced inflammatory symptoms in the dextran sodium sulfate-induced colitis model. In contrast, it attenuated the inflammatory symptoms in the Freund's adjuvant paw-edema model. These data suggest that the role of ALOX15B differs between inflammation models.
The dance creativity test is a valid instrument based on dance-specific tasks, designed for the differentiated and standardized assessment of dance creativity in children aged 8 to 12. It can be used not only to address questions about the status and development of dance-creative abilities in childhood, but also provides valuable information for optimizing training, support, and teaching measures. The following dance-creative abilities are assessed: 1) variety and originality in locomotion and body positions, and 2) richness of ideas, variety, and originality in composing movement patterns and compositions. The test can be administered to larger groups with minimal material effort, is not time-limited, and makes it possible to identify different performance levels. The dance creativity test offers researchers and teachers a valuable way to analyze and foster children's dance-creative abilities.
Open edX is an incredible platform for delivering MOOCs and SPOCs, designed to be robust and to support hundreds of thousands of students at the same time. Nevertheless, it lacks much of the fine-grained functionality needed to handle students individually in an on-campus course. This short session will present the ongoing project undertaken by the six public universities of the Region of Madrid plus the Universitat Politècnica de València, in the framework of a national initiative called UniDigital, funded by the Ministry of Universities of Spain within the Plan de Recuperación, Transformación y Resiliencia of the European Union. The project, led by three of these Spanish universities (UC3M, UPV, UAM), is investing more than half a million euros to bring the Open edX platform closer to the functionality an LMS needs to support on-campus teaching. The aim is to coordinate the work with the Open edX development community, so that these developments are incorporated into the core of the Open edX platform in its next releases. The functionalities to be developed include a complete redesign of platform analytics to make them real-time; dashboards based on these analytics; the integration of a system for customized automatic feedback; improved exams and tasks and extended grading capabilities; improvements in the graphical interfaces for both students and teachers; extended emailing capabilities; a redesign of the file management system; integration of H5P content; a tool to create mind maps; a system to detect students at risk; and an advanced voice assistant and a gamification mobile app, among others. The idea is to transform a first-class MOOC platform into the next on-campus LMS.
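To give a flavor of the at-risk detection feature described above, here is a minimal, purely hypothetical sketch; the field names, thresholds, and function are assumptions for illustration and are not part of the actual Open edX project:

```python
def flag_at_risk(students, grade_threshold=0.5, inactivity_days=14):
    """Return the ids of students whose current grade falls below a
    threshold or who have been inactive for too long. The record
    layout (id, grade, days_since_login) is hypothetical."""
    return [s["id"] for s in students
            if s["grade"] < grade_threshold
            or s["days_since_login"] > inactivity_days]
```

A production system would of course combine many more signals (submissions, forum activity, video engagement) and likely a trained model rather than fixed thresholds; this only illustrates the rule-based baseline such a dashboard could start from.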
Assessing the impact of global change on hydrological systems is one of the greatest hydrological challenges of our time. Changes in land cover, land use, and climate have an impact on water quantity, quality, and temporal availability. There is a widespread consensus that, given the far-reaching effects of global change, hydrological systems can no longer be viewed as static in their structure; instead, they must be regarded as entire ecosystems, wherein hydrological processes interact and coevolve with biological, geomorphological, and pedological processes. To accurately predict the hydrological response under the impact of global change, it is essential to understand this complex coevolution. The knowledge of how hydrological processes, in particular the formation of subsurface (preferential) flow paths, evolve within this coevolution and how they feed back to the other processes is still very limited due to a lack of observational data.
At the hillslope scale, this intertwined system of interactions is known as the hillslope feedback cycle. This thesis aims to enhance our understanding of the hillslope feedback cycle by studying the coevolution of hillslope structure and hillslope hydrological response. Using chronosequences of moraines in two glacial forefields developed from siliceous and calcareous glacial till, the four studies shed light on the complex coevolution of hydrological, biological, and structural hillslope properties, as well as subsurface hydrological flow paths over an evolutionary period of 10 millennia in these two contrasting geologies. The findings indicate that the contrasting properties of siliceous and calcareous parent materials lead
to variations in soil structure, permeability, and water storage. As a result, different plant species and vegetation types are favored on siliceous versus calcareous parent material, leading to diverse ecosystems with distinct hydrological dynamics. The siliceous parent material was found to be the more active driver of the coevolution. The soil pH resulting from parent material weathering emerges as a crucial factor, influencing vegetation development, soil formation, and consequently, hydrology. The acidic weathering of the siliceous parent material favored the accumulation of organic matter, increasing the soils’ water storage capacity and attracting acid-loving shrubs, which further promoted organic matter accumulation and ultimately led to podsolization after 10 000 years. Tracer experiments revealed that the subsurface flow path evolution was influenced by soil and vegetation development, and vice versa. Subsurface flow paths changed from vertical, heterogeneous matrix flow to finger-like flow paths over a few hundred years, evolving into macropore flow, water storage, and lateral subsurface flow after several thousand years. The changes in flow paths among younger age classes were driven by weathering processes altering soil structure, as well as by vegetation development and root activity. In the older age
class, the transition to more water storage and lateral flow was attributed to substantial organic matter accumulation and ongoing podsolization. The rapid vertical water transport in the finger-like flow paths, along with the conductive sandy material, contributed to podsolization and thus to the shift in the hillslope hydrological response.
In contrast, the calcareous site possesses a high pH buffering capacity, creating a neutral to basic environment with relatively low accumulation of dead organic matter, resulting in a lower water storage capacity and the establishment of predominantly grass vegetation. The coevolution was found to be less dynamic over the millennia. Similar to the siliceous site, significant changes in subsurface flow paths occurred between the young age classes. However, unlike the siliceous site, the subsurface flow paths at the calcareous site only altered in shape and not in direction. Tracer experiments showed that flow paths changed from vertical, heterogeneous matrix flow to vertical, finger-like flow paths after a few hundred to thousands of years, which was driven by root activities and weathering processes. Despite having a finer soil texture, water storage at the calcareous site was significantly lower than at the siliceous site, and water transport remained primarily rapid and vertical, contributing to the flourishing of grass vegetation.
The studies elucidated that changes in flow paths are predominantly shaped by the characteristics of the parent material and its weathering products, along with their complex interactions with initial water flow paths and vegetation development. Time, on the other hand, was not found to be a primary factor in describing the evolution of the hydrological response. This thesis makes a valuable contribution to closing the gap in the observations of the coevolution of hydrological processes within the hillslope feedback cycle, which is important to improve predictions of hydrological processes in changing landscapes. Furthermore, it emphasizes the importance of interdisciplinary studies in addressing the hydrological challenges arising from global change.
It is a well-attested finding in head-initial languages that individuals with aphasia (IWA) have greater difficulties in comprehending object-extracted relative clauses (ORCs) as compared to subject-extracted relative clauses (SRCs). Adopting the linguistically based approach of Relativized Minimality (RM; Rizzi, 1990, 2004), the subject-object asymmetry is attributed to the occurrence of a Minimality effect in ORCs due to reduced processing capacities in IWA (Garraffa & Grillo, 2008; Grillo, 2008, 2009). For ORCs, it is claimed that the embedded subject intervenes in the syntactic dependency between the moved object and its trace, resulting in greater processing demands. In contrast, no such intervener is present in SRCs. Based on the theoretical framework of RM and findings from language acquisition (Belletti et al., 2012; Friedmann et al., 2009), it is assumed that Minimality effects are alleviated when the moved object and the intervening subject differ in terms of relevant syntactic features. For German, the language under investigation, the RM approach predicts that number (i.e., singular vs. plural) and the lexical restriction [+NP] feature (i.e., lexically restricted determiner phrases vs. lexically unrestricted pronouns) are considered relevant in the computation of Minimality. Greater degrees of featural distinctiveness are predicted to result in more facilitated processing of ORCs, because IWA can more easily distinguish between the moved object and the intervener.
This cumulative dissertation aims to provide empirical evidence on the validity of the RM approach in accounting for comprehension patterns during relative clause (RC) processing in German-speaking IWA. For that purpose, I conducted two studies including visual-world eye-tracking experiments embedded within an auditory referent-identification task to study the offline and online processing of German RCs. More specifically, target sentences were created to evaluate (a) whether IWA demonstrate a subject-object asymmetry, (b) whether dissimilarity in the number and/or the [+NP] features facilitates ORC processing, and (c) whether sentence processing in IWA benefits from greater degrees of featural distinctiveness. Furthermore, by comparing RCs disambiguated through case marking (at the relative pronoun or the following noun phrase) and number marking (inflection of the sentence-final verb), it was possible to consider the role of the relative position of the disambiguation point. The RM approach predicts that dissimilarity in case should not affect the occurrence of Minimality effects. However, the case cue to sentence interpretation appears earlier within RCs than the number cue, which may result in lower processing costs in case-disambiguated RCs compared to number-disambiguated RCs.
In study I, target sentences varied with respect to word order (SRC vs. ORC) and dissimilarity in the [+NP] feature (lexically restricted determiner phrase vs. pronouns as embedded element). Moreover, by comparing the impact of these manipulations in case- and number-disambiguated RCs, the effect of dissimilarity in the number feature was explored. IWA demonstrated a subject-object asymmetry, indicating the occurrence of a Minimality effect in ORCs. However, dissimilarity neither in the number feature nor in the [+NP] feature alone facilitated ORC processing. Instead, only ORCs involving distinct specifications of both the number and the [+NP] features were well comprehended by IWA. In study II, only temporarily ambiguous ORCs disambiguated through case or number marking were investigated, while controlling for varying points of disambiguation. There was a slight processing advantage of case marking as cue to sentence interpretation as compared to number marking.
Taken together, these findings suggest that the RM approach can only partially capture empirical data from German IWA. In processing complex syntactic structures, IWA are susceptible to the occurrence of the intervening subject in ORCs. The new findings reported in the thesis show that structural dissimilarity can modulate sentence comprehension in aphasia. Interestingly, IWA can override Minimality effects in ORCs and derive correct sentence meaning if the featural specifications of the constituents are maximally different, because they can more easily distinguish the moved object and the intervening subject given their reduced processing capacities. This dissertation presents new scientific knowledge that highlights how the syntactic theory of RM helps to uncover selective effects of morpho-syntactic features on sentence comprehension in aphasia, emphasizing the close link between assumptions from theoretical syntax and empirical research.
It is a common finding that preschoolers have difficulties in identifying who is doing what to whom in non-canonical sentences, such as object-verb-subject (OVS) and passive sentences in German. This dissertation investigates how German monolingual and German-Italian simultaneous bilingual children process German OVS sentences (Study 1) and German passives (Study 2). Offline data (i.e., accuracy data) and online data (i.e., eye-gaze and pupillometry data) were analyzed to explore whether children can assign thematic roles during sentence comprehension and processing. Executive functions as well as language-internal and language-external factors were investigated as potential predictors of children’s sentence comprehension and processing.
Throughout the literature, there are contradictory findings on the relation between language and executive functions. While some results show a bilingual cognitive advantage over monolingual speakers, others suggest there is no relationship between bilingualism and executive functions. If bilingual children possess more advanced executive function abilities than monolingual children, this might also be reflected in a better performance on linguistic tasks. In the current studies, monolingual and bilingual children were tested by means of two executive function tasks: the Flanker task and the task-switching paradigm. However, the findings showed no bilingual cognitive advantage and no better performance by bilingual children in the linguistic tasks. Performance was rather comparable between bilingual and monolingual children, or even better for the monolingual group. This may be due to cross-linguistic influences and language experience (i.e., language input and output). Italian was used because it does not syntactically overlap with the structure of German OVS sentences, and it overlapped with only one of the two sentence conditions used in the passive study, considering the subject-(finite-)verb alignment. The findings showed a better performance of bilingual children on the passive sentence structure that syntactically overlapped in the two languages, providing evidence for cross-linguistic influences.
Further factors in children’s sentence comprehension were considered. The parents’ education, the number of older siblings, and language experience variables were derived from a language background questionnaire completed by parents. Scores of receptive vocabulary and grammar, visual and short-term memory, and reasoning ability were measured by means of standardized tests. Higher German language experience in the bilingual group was shown to correlate with better accuracy on German OVS sentences but not on passive sentences. Memory capacity had a positive effect on the comprehension of OVS and passive sentences in the bilingual group. Additionally, executive function abilities played a role in the comprehension of OVS sentences but not of passive sentences. It is suggested that executive function abilities might help children in the sentence comprehension task when the linguistic structures are not yet fully mastered.
Altogether, these findings show that bilinguals’ poorer performance in the comprehension and processing of German OVS is mainly due to reduced language experience in German, and that the different performance of bilingual children with the two types of passives is mainly due to cross-linguistic influences.
The increasing number of known exoplanets raises questions about their demographics and the mechanisms that shape planets into how we observe them today. Young planets in close-in orbits are exposed to harsh environments due to the host star being magnetically highly active, which results in high X-ray and extreme UV fluxes impinging on the planet. Prolonged exposure to this intense photoionizing radiation can cause planetary atmospheres to heat up, expand and escape into space via a hydrodynamic escape process known as photoevaporation. For super-Earth and sub-Neptune-type planets, this can even lead to the complete erosion of their primordial gaseous atmospheres. A factor of interest for this particular mass-loss process is the activity evolution of the host star. Stellar rotation, which drives the dynamo and with it the magnetic activity of a star, changes significantly over the stellar lifetime. This strongly affects the amount of high-energy radiation received by a planet as stars age. At a young age, planets still host warm and extended envelopes, making them particularly susceptible to atmospheric evaporation. Especially in the first gigayear, when X-ray and UV levels can be 100 - 10,000 times higher than for the present-day sun, the characteristics of the host star and the detailed evolution of its high-energy emission are of importance.
In this thesis, I study the impact of stellar activity evolution on the high-energy-induced atmospheric mass loss of young exoplanets. The PLATYPOS code was developed as part of this thesis to calculate photoevaporative mass-loss rates over time. The code, which couples parameterized planetary mass-radius relations with an analytical hydrodynamic escape model, was used, together with Chandra and eROSITA X-ray observations, to investigate the future mass loss of the two young multiplanet systems V1298 Tau and K2-198. Further, in a numerical ensemble study, the effect of a realistic spread of activity tracks on the small-planet radius gap was investigated for the first time. The works in this thesis show that for individual systems, in particular if planetary masses are unconstrained, the difference between a young host star following a low-activity track vs. a high-activity one can have major implications: the exact shape of the activity evolution can determine whether a planet can hold on to some of its atmosphere, or completely loses its envelope, leaving only the bare rocky core behind. For an ensemble of simulated planets, an observationally-motivated distribution of activity tracks does not substantially change the final radius distribution at ages of several gigayears. My simulations indicate that the overall shape and slope of the resulting small-planet radius gap is not significantly affected by the spread in stellar activity tracks. However, it can account for a certain scattering or fuzziness observed in and around the radius gap of the observed exoplanet population.
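The photoevaporative mass loss described above is commonly estimated with the energy-limited escape approximation (Erkaev-type formula). The sketch below is a generic illustration of that approximation, not the actual PLATYPOS implementation; the planet parameters and XUV flux are invented for the example.

```python
# Hedged sketch of energy-limited XUV-driven atmospheric escape.
# NOT the PLATYPOS code; all parameter values are illustrative assumptions.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def energy_limited_mdot(f_xuv, r_planet, m_planet, eta=0.1, k_tide=1.0):
    """Mass-loss rate Mdot = eta * pi * F_XUV * R^3 / (G * M * K), in kg/s.

    eta    -- heating efficiency (assumed 10%)
    k_tide -- Roche-lobe correction factor (assumed 1, i.e. neglected)
    """
    return eta * math.pi * f_xuv * r_planet**3 / (G * m_planet * k_tide)

# Toy young sub-Neptune: 5 Earth masses, 3 Earth radii, F_XUV = 100 W/m^2
M_EARTH, R_EARTH = 5.972e24, 6.371e6
mdot = energy_limited_mdot(100.0, 3 * R_EARTH, 5 * M_EARTH)
print(f"mass-loss rate: {mdot:.2e} kg/s")
```

Because the rate scales with R^3/M, the warm, inflated envelopes of young planets make them far more vulnerable than their older, contracted counterparts, which is the regime the thesis focuses on.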
Protected cultivation in greenhouses or polytunnels offers the potential for sustainable production of high-yield, high-quality vegetables. This is related to the ability to produce more on less land and to use resources responsibly and efficiently. Crop yield has long been considered the most important factor. However, as plant-based diets have been proposed for a sustainable food system, the targeted enrichment of health-promoting plant secondary metabolites should be addressed. These metabolites include carotenoids and flavonoids, which are associated with several health benefits, such as cardiovascular health and cancer protection.
Cover materials generally have an influence on the climatic conditions, which in turn can affect the levels of secondary metabolites in vegetables grown underneath. Plastic materials are cost-effective and their properties can be modified by incorporating additives, making them the first choice. However, these additives can migrate and leach from the material, resulting in reduced service life, increased waste and possible environmental release. Antifogging additives are used in agricultural films to prevent the formation of droplets on the film surface, thereby increasing light transmission and preventing microbiological contamination.
This thesis focuses on LDPE/EVA covers and incorporated antifogging additives for sustainable protected cultivation, following two different approaches. The first addressed the direct effects of leached antifogging additives using simulation studies on lettuce leaves (Lactuca sativa var. capitata L.). The second determined the effect of antifog polytunnel covers on lettuce quality. Lettuce is usually grown under protective cover and can provide high nutritional value due to its carotenoid and flavonoid content, depending on the cultivar.
To study the influence of simulated leached antifogging additives on lettuce leaves, a GC-MS method was first developed to analyze these additives based on their fatty acid moieties. Three structurally different antifogging additives (reference material) were characterized outside of a polymer matrix for the first time. All of them contained additional fatty acids beyond the main one specified by the manufacturer. Furthermore, they were found to adhere to the leaf surface and could not be removed by water, and only partially by hexane.
The incorporation of these additives into polytunnel covers affected carotenoid levels in lettuce, but not flavonoids, caffeic acid derivatives, or chlorophylls. Specifically, carotenoid levels were higher in lettuce grown under polytunnels without antifogging additives than under those with them. This was linked to the additives’ effect on the light regime and was suggested to be related to the function of carotenoids in photosynthesis.
In terms of protected cultivation, the use of LDPE/EVA polytunnels affected light and temperature, which are closely related. The carotenoid and flavonoid contents of lettuce grown under polytunnels showed an inverse pattern, with higher carotenoid and lower flavonoid levels. At the level of individual compounds, the flavonoids detected in lettuce did not differ; lettuce carotenoids, however, adapted specifically depending on the time of cultivation. The flavonoid reduction was shown to be transcriptionally regulated (CHS) in response to UV light (UVR8). In contrast, carotenoids are thought to be regulated post-transcriptionally, as indicated by the lack of correlation between carotenoid levels and transcripts of the first enzyme of carotenoid biosynthesis (PSY) and of a carotenoid-degrading enzyme (CCD4), as well as by the increased carotenoid metabolic flux. Understanding these regulatory mechanisms and metabolite adaptation strategies could further advance the strategic development and selection of cover materials.
Nils-Hendrik Grohmann examines the ongoing process of strengthening the UN human rights treaty bodies. He analyzes the legal powers of the committees, whether they can put forward proposals on their own initiative, and to what extent they have so far harmonized their working methods. A further focus lies on the cooperation between the various committees and on the question of what role the meeting of chairpersons can play in the strengthening process.
This master's thesis examines the extent to which the latest textbooks for French instruction at German grammar schools, Découvertes 1 (Klett) and À plus 1 (Cornelsen), both published in 2020, use cross-linguistic content to point to, or draw on, previously learned languages and earlier language acquisition processes. The focus is on German as the language of schooling and/or first language and on English as the first foreign language, although other languages that appear are also included in the analysis.
The thesis contributes to the didactic discourse on content informed by the didactics of multilingualism in foreign language textbooks. It can also show teachers how these current textbooks can support multilingualism-oriented instruction.
The introduction emphasizes the relevance of cross-linguistic networking for foreign language teaching, particularly with regard to students' individual multilingualism. It points to the potential of interlingual transfer, which includes, among other things, easier learning as well as the promotion of language awareness and language learning awareness.
Chapter 2 lays the theoretical groundwork for the analysis by examining multilingualism and the didactics of multilingualism, cross-linguistic networking, and its potential. Drawing on German and English, it also shows what linguistic transfer potential could be brought into beginning French instruction. The conditions under which students employ interlingual transfer in their language acquisition are discussed as well.
Chapter 3 provides an overview of the state of research on cross-linguistic networking and multilingualism in foreign language textbooks and identifies the research gap that this thesis seeks to close.
Chapter 4 formulates the research question and its subquestions, describes the textbooks under investigation, and justifies the selection of the textbooks and of the textbook components examined. The methodology of the comparative textbook analysis is also explained.
The results of the analysis are presented in detail in Chapter 5. It shows which cross-linguistic content appears in each textbook, in what form, and which languages and linguistic levels are involved.
Chapter 6 discusses and analyzes the results, addressing the multilingualism concepts of the textbooks and the trends in the cross-linguistic content.
The concluding Chapter 7 emphasizes that both textbooks offer a wealth of cross-linguistic content with the potential to support teaching informed by the didactics of multilingualism. At the production level in particular, however, too few transfer processes are initiated so far. Finally, possible complementary studies are outlined, e.g. on the use of the cross-linguistic content in the classroom.
Global warming, driven primarily by the excessive emission of greenhouse gases such as carbon dioxide into the atmosphere, has led to severe and detrimental environmental impacts. Rising global temperatures have triggered a cascade of adverse effects, including melting glaciers and polar ice caps, more frequent and intense heat waves, disrupted weather patterns, and the acidification of oceans. These changes adversely affect ecosystems, biodiversity, and human societies, threatening food security, water availability, and livelihoods. One promising way to mitigate the harmful effects of global warming is the widespread adoption of solar cells, also known as photovoltaic cells. Solar cells harness sunlight to generate electricity without emitting greenhouse gases or other pollutants. By replacing fossil fuel-based energy sources, solar cells can significantly reduce CO2 emissions, a major contributor to global warming. This transition to clean, renewable energy can help curb the increasing concentration of greenhouse gases in the atmosphere, thereby slowing down the rate of global temperature rise.
Solar energy’s positive impact extends beyond emission reduction. As solar panels become more efficient and affordable, they empower individuals, communities, and even entire nations to generate electricity and become less dependent on fossil fuels. This decentralized energy generation can enhance resilience in the face of climate-related challenges. Moreover, implementing solar cells creates green jobs and stimulates technological innovation, further promoting sustainable economic growth. As solar technology advances, its integration with energy storage systems and smart grids can ensure a stable and reliable energy supply, reducing the need for backup fossil fuel power plants that exacerbate environmental degradation.
The market-dominant solar cell technology is silicon-based: a highly mature technology with a highly systematic production procedure. However, it suffers from several drawbacks. 1) Cost: still relatively high, owing to the energy needed to melt and purify silicon and to the use of silver for electrodes, which hinders widespread availability, especially in low-income countries. 2) Efficiency: theoretically around 29% should be possible, yet most commercially available silicon solar cells reach only 18-22%. 3) Temperature sensitivity: efficiency decreases as temperature rises, reducing output. 4) Resource constraints: silicon as a raw material is not available in all countries, creating supply chain challenges.
Perovskite solar cells arose in 2011 and matured very rapidly over the last decade into a highly efficient and versatile solar cell technology. With an efficiency of 26%, high absorption coefficients, solution processability, and a tunable band gap, the technology attracted the attention of the solar cell community and raised hopes for cheap, efficient, and easily processable next-generation solar cells. However, lead toxicity may be the stumbling block preventing perovskite solar cells from reaching the market: lead is a heavy, bioavailable element that makes perovskite solar cells an environmentally unfriendly technology. As a result, scientists have tried to replace lead with a more environmentally friendly element. Among several possible alternatives, tin is the most suitable, owing to the similarity of its electronic and atomic structure to that of lead.
Tin perovskites were developed to alleviate the challenge of lead toxicity. Theoretically, they show very high absorption coefficients, an optimal band gap of 1.35 eV for FASnI3, and a very high short-circuit current, making them candidates for the highest possible single-junction efficiency of around 30.1% according to the Shockley-Queisser limit. However, the efficiency of tin perovskites still lags below 15% and is poorly reproducible, especially from lab to lab. This modest performance can be attributed to three causes: 1) oxidation of tin(II) to tin(IV), triggered by oxygen, water, or even, as recently discovered, by the solvent itself; 2) fast crystallization dynamics, caused by the lateral exposure of the p-orbitals of the tin atom, which enhances its reactivity and accelerates crystallization; 3) energy band misalignment: the energy bands at the interfaces between the perovskite absorber and the charge-selective layers are not aligned, leading to high interfacial charge recombination, which devastates photovoltaic performance. To address these issues, we implemented several techniques and approaches that enhanced the efficiency of tin halide perovskites, providing new chemically safe solvents and antisolvents. In addition, we studied the energy band alignment between the charge transport layers and the tin perovskite absorber.
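As a rough illustration of the detailed-balance (Shockley-Queisser) figure cited above, the following minimal sketch computes the single-junction limit for a band gap of 1.35 eV under the assumption of a 6000 K blackbody sun; the exact value in the thesis may rest on a different reference spectrum (e.g. AM1.5G), so this is an order-of-magnitude check, not a reproduction.

```python
# Hedged sketch: detailed-balance efficiency limit, 6000 K blackbody sun.
# Assumptions: Lambertian absorption/emission above the gap, radiative
# recombination only, Boltzmann approximation for the cell's own emission.
import numpy as np

K_B = 8.617333e-5          # Boltzmann constant, eV/K
T_SUN, T_CELL = 6000.0, 300.0
F_SUN = 2.1646e-5          # solid-angle dilution factor of the Sun from Earth

def _integrate(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def sq_efficiency(eg):
    kt_s, kt_c = K_B * T_SUN, K_B * T_CELL
    e = np.linspace(1e-4, 40 * kt_s, 200_000)        # photon energies, eV
    bose = 1.0 / np.expm1(e / kt_s)
    p_in = F_SUN * _integrate(e**3 * bose, e)        # incident solar power
    m = e >= eg
    phi_sun = F_SUN * _integrate(e[m]**2 * bose[m], e[m])  # absorbed photons
    # radiative emission of the cell at V = 0 (Boltzmann approximation)
    phi_cell = kt_c * np.exp(-eg / kt_c) * (eg**2 + 2 * eg * kt_c + 2 * kt_c**2)
    v = np.linspace(0.0, eg, 20_000)                 # scan the J-V curve
    out = v * (phi_sun - phi_cell * np.exp(v / kt_c))
    return float(out.max() / p_in)

print(f"detailed-balance limit at Eg = 1.35 eV: {sq_efficiency(1.35):.1%}")
```

Under these blackbody assumptions the result lands near 30%, consistent with the value quoted for FASnI3's 1.35 eV band gap.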
Recent research has shown that the principal source of tin oxidation is the solvent dimethyl sulfoxide, which also happens to be one of the most effective solvents for processing perovskite. Finding a stable solvent might prove to be the factor that makes all the difference for the stability of tin-based perovskites. We started with a database of over 2,000 solvents and narrowed it down to a series of 12 new solvents suitable for processing FASnI3 experimentally. This was accomplished by examining 1) the solubility of the precursor chemicals FAI and SnI2, 2) the thermal stability of the precursor solution, and 3) the potential to form perovskite. Finally, we show that it is possible to manufacture solar cells using a novel solvent system and that they outperform those produced using DMSO. Our results offer guidance for the search for novel solvents, or solvent mixtures, from which stable tin-based perovskites can be manufactured.
Because tin crystallizes quickly, depositing tin-based perovskite films from solution is more difficult than manufacturing the more commonly used lead-based perovskite films. The most successful route to high efficiencies is to deposit the perovskite from dimethyl sulfoxide (DMSO), which slows the rapid assembly of the tin-iodine network responsible for perovskite formation. The drawback of this method is that the DMSO used in processing oxidizes the tin. This research presents a promising alternative in which 4-(tert-butyl) pyridine substitutes for dimethyl sulfoxide, regulating crystallization without causing tin oxidation. Perovskite films deposited from pyridine show a much-reduced defect density, resulting in increased charge mobility and better photovoltaic performance, making pyridine an attractive alternative for the deposition of tin perovskite films.
The precise control of perovskite precursor crystallization inside a thin film is of utmost importance for optimizing the efficiency and manufacturing of solar cells. The deposition process of tin-based perovskite films from a solution presents difficulties due to the quick crystallization of tin compared to the more often employed lead perovskite. The optimal approach for attaining elevated efficiencies entails using dimethyl sulfoxide (DMSO) as a medium for depositing perovskite. This choice of solvent impedes the tin-iodine network’s fast aggregation, which plays a crucial role in the production of perovskite. Nevertheless, this methodology is limited since the utilization of dimethyl sulfoxide leads to the oxidation of tin throughout the processing stage. In this thesis, we present a potentially advantageous alternative approach wherein 4-(tert-butyl) pyridine is proposed as a substitute for dimethyl sulfoxide in regulating crystallization processes while avoiding the undesired consequence of tin oxidation. Films of perovskite formed using pyridine as a solvent have a notably reduced density of defects, resulting in higher mobility of charges and improved performance in solar applications. Consequently, the utilization of pyridine for the deposition of tin perovskite films is considered advantageous.
Tin perovskites suffer from an apparent energy band misalignment, yet the band diagrams published in the current body of research are contradictory, and no consensus has been reached. Moreover, comprehensive information about the dynamics of charge extraction is lacking. This thesis aims to ascertain the energy band positions of tin perovskites by means of Kelvin probe (KP) and photoelectron yield spectroscopy measurements and to construct a precise band diagram for the commonly used device stack. A comprehensive analysis is then performed to assess the energy deficits inherent in the current energetic structure of tin halide perovskites. In addition, we investigate the influence of BCP on the improvement of electron extraction in C60/BCP systems, with a specific emphasis on the energetics involved. Furthermore, transient surface photovoltage was used to investigate the charge extraction kinetics of frequently studied charge transport layers, namely NiOx and PEDOT as hole transport layers and C60, ICBA, and PCBM as electron transport layers. The Hall effect, KP, and TRPL approaches were used to accurately determine the p-doping concentration in FASnI3; the results consistently gave a value of 1.5 × 10^17 cm^-3. Our findings highlight the need to design the charge extraction layers for tin halide perovskites independently, rather than adopting those used for lead perovskites.
The crystallization of perovskite precursors relies mainly on two solvents. The first, usually called the solvent, dissolves the perovskite powder to form the precursor solution. The second, the antisolvent, precipitates the perovskite precursor, forming a wet film: a supersaturated solution of the precursor in the remains of the solvent and the antisolvent. Upon annealing, this wet film crystallizes into a fully crystallized perovskite film. In our work, we proposed new solvents to dissolve FASnI3, but most of the resulting films did not crystallize. We attribute this to the high coordination strength between the metal halide and the solvent molecules, which cannot be broken by traditionally used antisolvents such as toluene and chlorobenzene. To solve this issue, we introduce a high-throughput antisolvent screening in which around 73 selected antisolvents were tested against 15 solvents capable of forming a 1 M FASnI3 solution. For the first time in tin perovskite research, we used a machine-learning algorithm to understand and predict the effect of an antisolvent on the crystallization of a precursor solution in a particular solvent, relying on film darkness as the primary criterion for the efficacy of a solvent-antisolvent pair. We found that the relative polarity between solvent and antisolvent is the primary factor governing their interaction. Based on these findings, we prepared several high-quality, DMSO-free tin perovskite films and achieved an efficiency of 9%, the highest reported so far for a DMSO-free tin perovskite device.
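To make the screening idea concrete, here is a deliberately minimal sketch of a polarity-based rule of the kind such a model might learn. All solvent polarities, labels, and the threshold are invented for illustration; the thesis dataset and its actual machine-learning model are not reproduced here.

```python
# Hypothetical sketch: the abstract reports that the relative polarity between
# solvent and antisolvent is the main predictor of whether a dark
# (well-crystallized) film forms. Polarity values and labels below are
# illustrative stand-ins, not the thesis dataset.

pairs = [  # (solvent polarity, antisolvent polarity, observed dark film?)
    (0.44, 0.10, True),   # large polarity gap -> precipitation
    (0.44, 0.40, False),  # similar polarity  -> no precipitation
    (0.35, 0.05, True),
    (0.30, 0.28, False),
    (0.50, 0.12, True),
]

def predict_dark(solvent_p, antisolvent_p, threshold=0.2):
    """Single-feature rule: crystallization occurs if the polarity gap is large."""
    return abs(solvent_p - antisolvent_p) >= threshold

accuracy = sum(predict_dark(s, a) == y for s, a, y in pairs) / len(pairs)
print(f"accuracy on toy screen: {accuracy:.0%}")
```

A real screen over 73 × 15 pairs would fit the threshold (or a richer model) from data rather than fixing it by hand.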
The case of T. Annius Milo offers great didactic potential for Latin instruction: it allows the reading of a Latin text to be combined superbly with aspects of Roman material culture while establishing plausible connections to the present. This master's thesis shows the rich range of topics contained in Cicero's speech Pro Milone, including the historical context of the case, the charge of murder, and the course of a Roman trial at the time. In addition, Roman law is compared with the criminal law in force in Germany today. Finally, the credibility of various written testimonies is examined, in particular the question of whether the transmitted speech authentically reflects the original court proceedings.
Limiting systemic risk is an essential component of the new international financial market order. This requires breaking not only the interconnections among banks themselves but also the link between public finances and the solvency of national banking systems (the so-called sovereign-bank nexus). This paper traces the evolution of sovereign exposures on the balance sheets of banks in the euro countries and the Eurosystem over time, as well as the resulting risks to financial stability. To this end, the determinants of the sovereign-bank nexus are analyzed both theoretically and empirically. The fiscal capacity of the euro states is illustrated using factors such as the debt ratio, the current account balance, and the credit-to-GDP gap; the structures of the euro-area banking systems are then examined. In particular, total private and public debt, consolidated bank balance-sheet totals and the liabilities they contain, and the banking sector's share of gross value added are considered relative to economic output. Stocks of non-performing exposures (NPEs) on bank balance sheets, the yields on issued government bonds, and the associated CDS spreads are also examined. In addition, concentration, leverage, liquidity ratios, and country-specific differences in the type and maturity of banking-sector refinancing are presented. Based on the empirical findings, implications for financial market regulation are discussed with regard to mutual contagion effects between banks and sovereigns.
Leadership plays an important role in the efficient and fair solution of social dilemmas, but the effectiveness of a leader can vary substantially. Two main factors of leadership impact are the ability to induce high contributions by all group members and the (expected) fair use of power. Participants in our experiment decide about contributions to a public good. After all contributions are made, the leader chooses how much of the joint earnings to assign to herself; the remainder is distributed equally among the followers. Using machine-learning techniques, we study whether the content of initial open statements by group members predicts their behavior as a leader, and whether groups are able to identify such clues and endogenously appoint a "good" leader to solve the dilemma. We find that leaders who promise fairness are more likely to behave fairly, and that followers appoint as leaders those who write more explicitly about fairness and efficiency. However, in their contribution decision, followers focus on the leader's first-move contribution and place less importance on the content of the leader's statements.
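For readers unfamiliar with text-based prediction, the following toy sketch illustrates the general idea of scoring statements for fairness-related wording. The statements, keyword list, and labels are invented; the experiment itself used proper machine-learning techniques on real participant text.

```python
# Toy illustration of predicting fair leader behavior from statement content.
# Keyword list, statements, and "behaved fairly" labels are all invented.

FAIRNESS_WORDS = {"fair", "fairness", "equal", "equally", "share"}

def fairness_score(statement):
    """Fraction of tokens that are fairness-related."""
    tokens = statement.lower().split()
    return sum(t.strip(".,!") in FAIRNESS_WORDS for t in tokens) / len(tokens)

statements = [
    ("I will split the earnings equally and keep only a fair share.", True),
    ("Follow me and we will win big.", False),
]
for text, behaved_fairly in statements:
    predicted = fairness_score(text) > 0
    print(predicted == behaved_fairly)
```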
Relativistic pair beams produced in cosmic voids by TeV gamma rays from blazars are expected to generate a detectable GeV-scale cascade emission, which is missing from observations. The suppression of this secondary cascade implies either deflection of the pair beam by intergalactic magnetic fields (IGMFs) or an energy loss of the beam due to the electrostatic beam-plasma instability. An IGMF of femto-Gauss strength is sufficient to deflect the pair beams significantly, reducing the flux of the secondary cascade below observational limits. In the absence of an IGMF, a similar flux reduction may result from the beam losing energy to the instability before inverse-Compton cooling. This dissertation consists of two studies on the role of the instability in the evolution of blazar-induced beams.
First, we investigated the effect of sub-fG IGMFs on beam energy loss by the instability. Considering IGMFs with correlation lengths smaller than a few kpc, we found that such fields increase the transverse momentum of the pair-beam particles, dramatically reducing the linear growth rate of the electrostatic instability and hence the energy-loss rate of the pair beam. Our results show that the IGMF eliminates the beam-plasma instability as an effective energy-loss agent at a field strength three orders of magnitude below that needed to suppress the secondary cascade emission by magnetic deflection. For intermediate-strength IGMFs, we know of no viable process that would explain the observed absence of GeV-scale cascade emission, and hence this range of field strengths can be excluded.
Second, we probed how the beam-plasma instability feeds back on the beam, using a realistic two-dimensional beam distribution. We found that the instability broadens the beam opening angle significantly without any significant energy loss, confirming a recent feedback study based on a simplified one-dimensional beam distribution. However, narrowing diffusive feedback on beam particles with Lorentz factors below 10^6, although initially negligible, might become relevant. Finally, when considering the continuous creation of TeV pairs, we found that the beam distribution and the wave spectrum reach a new quasi-steady state, in which the scattering of beam particles persists and the beam opening angle may increase by a factor of hundreds. This intrinsic scattering of the cascade can produce time delays of around ten years, potentially mimicking IGMF deflection. Understanding the implications for the GeV cascade emission requires accounting for inverse-Compton cooling and simulating the beam-plasma system at different points in the IGM.
With Arctic ground as a huge and temperature-sensitive carbon reservoir, maintaining low ground temperatures and frozen conditions to prevent further carbon emissions that contribute to global climate warming is a key element in humankind's fight to maintain habitable conditions on Earth. Previous studies showed that during the late Pleistocene, Arctic ground conditions were generally colder and more stable as the result of an ecosystem dominated by large herbivorous mammals and vast extents of graminoid vegetation – the mammoth steppe. Characterised by high plant productivity (grassland) and low ground insulation due to animal-caused compression and removal of snow, this ecosystem enabled deep permafrost aggradation. Now, with tundra and shrub vegetation common in the terrestrial Arctic, these effects are no longer in place. However, it appears possible to recreate this ecosystem locally by artificially increasing animal numbers, and hence keep Arctic ground cold to reduce organic matter decomposition and carbon release into the atmosphere.
By measuring thaw depth, total organic carbon (TOC) and total nitrogen content, stable carbon isotope ratios, radiocarbon age, and n-alkane and alcohol characteristics, and by assessing dominant vegetation types along grazing-intensity transects in two contrasting Arctic areas, it was found that locally recreating conditions similar to the mammoth steppe seems to be possible. For permafrost-affected soil, it was shown that intensive grazing, in direct comparison to non-grazed areas, reduces active-layer depth and leads to higher TOC contents in the active-layer soil. For soil only frozen on top in winter, an increase of TOC with grazing intensity could not be found, most likely because of confounding factors such as vertical water and carbon movement, which is not possible where an impermeable permafrost layer is present. In both areas, high animal activity led to a vegetation transformation towards species-poor, graminoid-dominated landscapes with fewer shrubs. Lipid biomarker analysis revealed that, even though the available organic material differs between the study areas, in both permafrost-affected and seasonally frozen soils the organic material at sites affected by high animal activity was less decomposed than under less intensive grazing pressure. In conclusion, high animal activity affects decomposition processes in Arctic soils and the ground thermal regime, visible in the reduced active-layer depth in permafrost areas. Therefore, grazing management might be used to locally stabilise permafrost and reduce Arctic carbon emissions in the future, but it is likely not scalable to the entire permafrost region.
Organic-inorganic hybrids based on P3HT and mesoporous silicon for thermoelectric applications
(2024)
This thesis presents a comprehensive study of the synthesis, structure, and thermoelectric transport properties of organic-inorganic hybrids based on P3HT and porous silicon. The effect of embedding the polymer in silicon pores on the electrical and thermal transport is studied. Morphological studies confirm successful polymer infiltration and diffusion doping, with roughly 50% of the pore space occupied by the conjugated polymer. Synchrotron diffraction experiments reveal no specific ordering of the polymer inside the pores. P3HT-pSi hybrids show electrical transport improved by five orders of magnitude compared to porous silicon and power-factor values comparable to or exceeding those of other P3HT-inorganic hybrids. The analysis suggests different transport mechanisms in the two materials. In pSi, the transport mechanism follows a Meyer-Neldel compensation rule. Analysis of the hybrids' data using the power law of the Kang-Snyder model suggests that the doped polymer mainly provides charge carriers to the pSi matrix, similar to the behavior of a doped semiconductor. The heavily suppressed thermal transport in porous silicon is treated with a modified Landauer/Lundstrom model and effective-medium theories, which reveal that pSi agrees well with the Kirkpatrick model with a percolation threshold of 68%. The thermal conductivities of the hybrids increase compared to empty pSi, but the overall thermoelectric figure of merit ZT of the P3HT-pSi hybrid exceeds that of pSi, P3HT, and bulk Si.
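For orientation, the two quantities compared in this abstract follow the standard thermoelectric definitions: power factor PF = S²σ and figure of merit ZT = S²σT/κ. The sketch below encodes these textbook relations; the numerical values are placeholders, not measurements from the thesis.

```python
# Standard thermoelectric definitions: power factor PF = S^2 * sigma and
# figure of merit ZT = S^2 * sigma * T / kappa.
# Numbers below are placeholders, not measured values from the thesis.

def power_factor(seebeck_V_per_K, sigma_S_per_m):
    """Power factor in W m^-1 K^-2."""
    return seebeck_V_per_K ** 2 * sigma_S_per_m

def zt(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_K):
    """Dimensionless thermoelectric figure of merit."""
    return power_factor(seebeck_V_per_K, sigma_S_per_m) * T_K / kappa_W_per_mK

S, sigma, kappa, T = 300e-6, 1.0e3, 1.0, 300.0  # hypothetical hybrid values
print(f"PF = {power_factor(S, sigma):.2e} W/m/K^2, ZT = {zt(S, sigma, kappa, T):.3f}")
```

The definitions make the trade-off in the abstract explicit: infiltrating the polymer raises κ slightly, but the gain in S²σ dominates, so ZT still improves.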
Optimizing power analysis for randomized experiments: Design parameters for student achievement
(2024)
Randomized trials (RTs) are promising methodological tools to inform evidence-based reform to enhance schooling. Establishing a robust knowledge base on how to promote student achievement requires sensitive RT designs demonstrating sufficient statistical power and precision to draw conclusive and correct inferences on the effectiveness of educational programs and innovations. Proper power analysis is therefore an integral component of any informative RT on student achievement. This venture critically hinges on the availability of reasonable input variance design parameters (and their inherent uncertainties) that optimally reflect the realities of the prospective RT: its target population and outcome, the covariates applied (if any), the concrete design, and the planned analysis. However, existing compilations in this vein show far-reaching shortcomings.
The overarching endeavor of the present doctoral thesis was to substantively expand available resources devoted to tweak the planning of RTs evaluating educational interventions. At the core of this thesis is a systematic analysis of design parameters for student achievement, generating reliable and versatile compendia and developing thorough guidance to support apt power analysis to design strong RTs. To this end, the thesis at hand bundles two complementary studies which capitalize on rich data of several national probability samples from major German longitudinal large-scale assessments.
Study I applied two- and three-level latent (covariate) modeling to analyze design parameters for a wide spectrum of mathematical-scientific, verbal, and domain-general achievement outcomes. Three vital covariate sets were covered comprising (a) pretests, (b) sociodemographic characteristics, and (c) their combination. The accumulated estimates were additionally summarized in terms of normative distributions.
Study II specified (manifest) single-, two-, and three-level models and referred to influential psychometric heuristics to analyze design parameters and develop concise selection guidelines for covariate (a) types of varying bandwidth-fidelity (domain-identical, cross-domain, fluid intelligence pretests; sociodemographic characteristics), (b) combinations quantifying incremental validities, and (c) time lags of 1- to 7-year-lagged pretests scrutinizing validity degradation. The estimates for various mathematical-scientific and verbal achievement outcomes were meta-analytically integrated and employed in precision simulations.
In doing so, Studies I and II addressed essential gaps identified in previous repertoires along six major dimensions. Taken together, this thesis accumulated novel design parameters and deliberate guidance for RT power analysis (1) tailored to four German student (sub)populations across the entire school career from Grade 1 to 12, (2) matched to 21 achievement (sub)domains, (3) adjusted for 11 covariate sets enriched by empirically supported guidelines, (4) adapted to six RT designs, (5) suitable for latent and manifest analysis models, and (6) cataloged along with quantifications of their associated uncertainties. These resources are complemented by a plethora of illustrative application examples to gently direct psychological and educational researchers through pivotal steps in the process of RT design.
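To illustrate how the compiled design parameters enter an actual power analysis, here is a sketch of the standard minimum detectable effect size (MDES) calculation for a two-level cluster-randomized trial (PowerUp!-style). The intraclass correlation ρ and the covariate R² values at each level are exactly the kinds of parameters cataloged in this thesis; the inputs below are illustrative placeholders, and the multiplier M ≈ 2.8 is the usual large-sample value for α = .05 and power = .80.

```python
import math

# Standard two-level cluster-randomized trial MDES (treatment at level 2),
# showing how the design parameters compiled in the thesis -- intraclass
# correlation rho and covariate R^2 at each level -- feed into power analysis.
# Input values are illustrative placeholders.

def mdes_crt2(J, n, rho, R2_level2=0.0, R2_level1=0.0, P=0.5, M=2.8):
    """MDES for a 2-level cluster-randomized trial.

    J: number of clusters, n: students per cluster, rho: intraclass
    correlation, R2_*: variance explained by covariates at each level,
    P: fraction of clusters treated, M ~ 2.8 for alpha=.05, power=.80.
    """
    var = (rho * (1 - R2_level2) / (P * (1 - P) * J)
           + (1 - rho) * (1 - R2_level1) / (P * (1 - P) * J * n))
    return M * math.sqrt(var)

# A strong pretest covariate substantially shrinks the MDES:
print(round(mdes_crt2(J=40, n=25, rho=0.20), 3))
print(round(mdes_crt2(J=40, n=25, rho=0.20, R2_level2=0.75, R2_level1=0.50), 3))
```

The comparison makes the thesis's point tangible: with well-chosen covariate R² values, the same 40-cluster trial detects a considerably smaller effect.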
The striking heterogeneity of the design parameter estimates across all these dimensions constitutes the overall, joint key result of Studies I and II. Hence, this work convincingly reinforces calls for a close match between design parameters and the specific peculiarities of the target RT’s research context.
All in all, the present doctoral thesis offers a so far unique, nuanced, and extensive toolkit to optimize power analysis for sound RTs on student achievement in the German (and similar) school context. It is of utmost importance that research does not tire of generating robust evidence on what actually works to improve schooling. With this in mind, I hope that the emerging compendia and guidance contribute to the quality and rigor of our randomized experiments in psychology and education.
Large parts of the Earth's interior are inaccessible to direct observation, yet global geodynamic processes are governed by the physical material properties under extreme pressure and temperature conditions. It is therefore essential to investigate the deep Earth's physical properties through in-situ laboratory experiments. With this goal in mind, the optical properties of mantle minerals at high pressure offer a unique way to determine a variety of physical properties in a straightforward, reproducible, and time-effective manner, thus providing valuable insights into the physical processes of the deep Earth. This thesis focuses on the system Mg-Fe-O, specifically on the optical properties of periclase (MgO) and its iron-bearing variant ferropericlase ((Mg,Fe)O), a major planetary building block. The primary objective is to establish links between physical material properties and optical properties. In particular, the spin transition in ferropericlase, the second-most abundant phase of the lower mantle, is known to change the physical material properties. Although the spin-transition region likely extends down to the core-mantle boundary, the effects of the mixed-spin state, in which both high- and low-spin states are present, remain poorly constrained.
In the studies presented herein, we show how optical properties are linked to physical properties such as electrical conductivity, radiative thermal conductivity, and viscosity. We also show how the optical properties reveal changes in the chemical bonding. Furthermore, we unveil how the chemical bonding and the optical and other physical properties are affected by the iron spin transition. We find opposing trends in the pressure dependence of the refractive index of MgO and (Mg,Fe)O. From 1 atm to ~140 GPa, the refractive index of MgO decreases by ~2.4% from 1.737 to 1.696 (±0.017). In contrast, the refractive index of (Mg0.87Fe0.13)O (Fp13) and (Mg0.76Fe0.24)O (Fp24) ferropericlase increases with pressure, likely because Fe–Fe interactions between adjacent iron sites hinder a strong decrease of polarizability, as observed with increasing density in the case of pure MgO. An analysis of the index dispersion in MgO (decreasing by ~23% from 1 atm to ~103 GPa) reflects a widening of the band gap from ~7.4 eV at 1 atm to ~8.5 (±0.6) eV at ~103 GPa. The index dispersion (between 550 and 870 nm) of Fp13 decreases by a factor of ~3 over the spin-transition range (~44–100 GPa). We show that the electrical band gap of ferropericlase widens significantly, up to ~4.7 eV in the mixed-spin region, equivalent to an increase by a factor of ~1.7. We propose that this is due to a lower electron mobility between adjacent Fe2+ sites of opposite spin, explaining the previously observed low electrical conductivity in the mixed-spin region. From the study of absorbance spectra of Fp13, we show an increasing covalency of the Fe-O bond with pressure for high-spin ferropericlase, whereas in the low-spin state a trend towards a more ionic Fe-O bond is observed, indicating a bond-weakening effect of the spin transition.
We found that the spin transition is ultimately caused by both an increase of the ligand field-splitting energy and a decreasing spin-pairing energy of high-spin Fe2+.
The urban heat island (UHI) effect, describing the elevated temperature of urban areas compared with their natural surroundings, can expose urban dwellers to additional heat stress, especially during hot summer days. A comprehensive understanding of UHI dynamics along with urbanization is of great importance for efficient heat-stress mitigation strategies towards sustainable urban development. This is, however, still challenging due to the difficulty of isolating the influences of various contributing factors that interact with each other. In this work, I present a systematic and quantitative analysis of how urban intrinsic properties (e.g., urban size, density, and morphology) influence UHI intensity.
To this end, we innovatively combine urban growth modelling and urban climate simulation to separate the influence of urban intrinsic factors from that of the background climate, so as to focus on the impact of urbanization on the UHI effect. The urban climate model creates a laboratory environment that makes it possible to conduct controlled experiments separating the influences of different driving factors, while the urban growth model provides detailed 3D structures that can be parameterized into different urban development scenarios tailored for these experiments. This novel methodology and experiment design lead to the following achievements.
First, we develop a stochastic gravitational urban growth model that can generate 3D structures varying in size, morphology, compactness, and density gradient. We compare various characteristics, such as fractal dimensions (box-counting, area-perimeter scaling, area-population scaling, etc.) and radial gradient profiles of land-use share and population density, against those of real-world cities from empirical studies. The model is capable of creating 3D structures resembling real-world cities and can generate samples for controlled experiments to assess the influence of the urban intrinsic properties in question. [Chapter 2]
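The box-counting check mentioned above can be sketched in a few lines: cover the urban footprint with boxes of side ε, count occupied boxes N(ε), and fit the slope of log N against log(1/ε). The toy "city" below is a solid block, for which the estimate is exactly 2; real urban clusters typically fall between 1 and 2.

```python
import math

# Box-counting fractal dimension: count eps x eps boxes touching the
# footprint at several scales, then take the slope of log N vs log(1/eps).

def box_count(cells, eps):
    """Number of eps x eps boxes containing at least one occupied cell."""
    return len({(x // eps, y // eps) for x, y in cells})

def box_dimension(cells, scales):
    xs = [math.log(1.0 / eps) for eps in scales]
    ys = [math.log(box_count(cells, eps)) for eps in scales]
    n = len(scales)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

footprint = {(x, y) for x in range(64) for y in range(64)}  # toy "city": solid block
print(round(box_dimension(footprint, [1, 2, 4, 8]), 2))     # -> 2.0
```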
With the generated 3D structures, we run several series of simulations with urban structures varying in properties such as size, density, and morphology, under the same weather conditions. Analyzing how the 2-m air-temperature-based canopy-layer urban heat island (CUHI) intensity varies in response to changes in the considered urban factors, we find that the CUHI intensity of a city is directly related to the built-up density and to an amplifying effect that urban sites have on each other. We propose a Gravitational Urban Morphology (GUM) indicator to capture this neighbourhood warming effect. We build a regression model to estimate the CUHI intensity based on urban size, urban gross building volume, and the GUM indicator. Taking the Berlin area as an example, we show that the regression model is capable of predicting the CUHI intensity under various urban development scenarios. [Chapter 3]
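The following sketch conveys the spirit of a gravity-type morphology aggregate: every pair of urban sites contributes proportionally to the product of their building volumes and inversely to their distance, so dense, compact layouts score higher. The exact functional form and normalization of the GUM indicator in the thesis may differ; the sites below are invented.

```python
import math
from itertools import combinations

# Gravity-type morphology aggregate in the spirit of the GUM indicator:
# sum of v_i * v_j / d_ij over all pairs of urban sites. Illustrative only;
# the thesis's exact definition and normalization may differ.

def gravity_indicator(sites):
    """sites: list of (x_km, y_km, building_volume) tuples."""
    total = 0.0
    for (x1, y1, v1), (x2, y2, v2) in combinations(sites, 2):
        d = math.hypot(x1 - x2, y1 - y2)
        total += v1 * v2 / d
    return total

compact = [(0, 0, 1.0), (0, 1, 1.0), (1, 0, 1.0), (1, 1, 1.0)]
sprawl  = [(0, 0, 1.0), (0, 5, 1.0), (5, 0, 1.0), (5, 5, 1.0)]
print(gravity_indicator(compact) > gravity_indicator(sprawl))  # -> True
```

The comparison captures the neighbourhood-warming intuition: the same total building volume scores higher when sites sit close together.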
Based on the multi-annual average summer surface urban heat island (SUHI) intensity derived from land surface temperature, we further study how urban intrinsic factors influence the SUHI effect of the 5,000 largest urban clusters in Europe. We find a similar 3D GUM indicator to be an effective predictor of the SUHI intensity of these European cities. Together with other urban factors (vegetation condition, elevation, water coverage), we build different multivariate linear regression models and a climate-space-based Geographically Weighted Regression (GWR) model that better predicts SUHI intensity. By investigating the roles that background climate factors play in modulating the coefficients of the GWR model, we extend the multivariate linear model to a nonlinear one by integrating climate parameters such as the average daily maximum temperature and latitude. This makes it applicable across a range of background climates. The nonlinear model outperforms linear models in SUHI assessment as it captures the interaction between urban factors and the background climate. [Chapter 4]
Our work reiterates the essential roles of urban density and morphology in shaping the urban thermal environment. In contrast to many previous studies that link bigger cities with higher UHI intensity, we show that cities larger in area do not necessarily experience a stronger UHI effect. In addition, the results extend our knowledge by demonstrating the influence of urban 3D morphology on the UHI effect. This underlines the importance of inspecting cities as a whole from a 3D perspective. While urban 3D morphology is an aggregated feature of small-scale urban elements, its influence on city-scale UHI intensity cannot simply be scaled up from that of its neighbourhood-scale components. Both the spatial composition and the configuration of urban elements need to be captured when quantifying urban 3D morphology, as nearby neighbourhoods also influence each other. Our model serves as a useful UHI assessment tool for the quantitative comparison of urban intervention and development scenarios. It can support harnessing the capacity for UHI mitigation through optimizing urban morphology, with the potential of integrating climate change into heat mitigation strategies.
The remarkable antifouling properties of zwitterionic polymers in controlled environments are often counteracted by their delicate mechanical stability. To improve the mechanical stability of zwitterionic hydrogels, the effect of increased crosslinker densities was explored. In a first approach, terpolymers of the zwitterionic monomer 3-[N-2-(methacryloyloxy)ethyl-N,N-dimethyl]ammonio propane-1-sulfonate (SPE), the hydrophobic monomer butyl methacrylate (BMA), and the photo-crosslinker 2-(4-benzoylphenoxy)ethyl methacrylate (BPEMA) were synthesized. Thin hydrogel coatings of the copolymers were then produced and photo-crosslinked. Studies of the swollen hydrogel films showed that not only the mechanical stability but also, unexpectedly, the antifouling properties were improved by the presence of hydrophobic BMA units in the terpolymers.
Based on the positive results shown by the amphiphilic terpolymers, and to further test the impact of hydrophobicity on both the antifouling properties of zwitterionic hydrogels and their mechanical stability, a new amphiphilic zwitterionic methacrylic monomer, 3-((2-(methacryloyloxy)hexyl)dimethylammonio)propane-1-sulfonate (M1), was synthesized in good yield via a multistep synthesis. Homopolymers of M1 were obtained by free-radical polymerization. Similarly, terpolymers of M1, the zwitterionic monomer SPE, and the photo-crosslinker BPEMA were synthesized by free-radical copolymerization and thoroughly characterized, including their solubility in selected solvents.
Also, a new family of vinyl amide zwitterionic monomers, namely 3-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)propane-1-sulfonate (M2), 4-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)butane-1-sulfonate (M3), and 3-(dimethyl(2-(N-vinylacetamido)ethyl)ammonio)propyl sulfate (M4), together with the new photo-crosslinker 4-benzoyl-N-vinylbenzamide (M5), which is well-suited for copolymerization with vinyl amides, is introduced within the scope of the present work. The monomers were synthesized in good yields via a multistep synthesis. Homopolymers of the new vinyl amide zwitterionic monomers were obtained by free-radical polymerization and thoroughly characterized. From the solubility tests, it is remarkable that the homopolymers produced are fully soluble in water, evidence of their high hydrophilicity. Copolymerization of the vinyl amide zwitterionic monomers M2, M3, and M4 with the vinyl amide photo-crosslinker M5 proved to require very specific polymerization conditions. Nevertheless, copolymers were successfully obtained by free-radical copolymerization under appropriate conditions.
Moreover, in an attempt to mitigate the intrinsic hydrophobicity introduced in the copolymers by the photo-crosslinkers, and based on the proven affinity of quaternized diallylamines to copolymerize with vinyl amides, a new quaternized diallylamine sulfobetaine photo-crosslinker 3-(diallyl(2-(4-benzoylphenoxy)ethyl)ammonio)propane-1-sulfonate (M6) is synthesized. However, despite a priori promising copolymerization suitability, copolymerization with the vinyl amide zwitterionic monomers could not be achieved.
Moss-microbe associations are often characterised by syntrophic interactions between the microorganisms and their hosts, but the structure of the microbial consortia and their role in peatland development remain unknown.
In order to study microbial communities of dominant peatland mosses, Sphagnum and brown mosses, and the respective environmental drivers, four study sites representing different successional stages of natural northern peatlands were chosen on a large geographical scale: two brown moss-dominated, circumneutral peatlands from the Arctic and two Sphagnum-dominated, acidic peat bogs from subarctic and temperate zones.
The family Acetobacteraceae represented the dominant bacterial taxon of Sphagnum mosses from various geographical origins and formed an integral part of the moss core community. This core community was shared among all investigated bryophytes and consisted of few but highly abundant prokaryotes, many of which appear as endophytes of Sphagnum mosses. Moreover, brown mosses and Sphagnum mosses represent habitats for archaea, which had not previously been studied in association with peatland mosses. Euryarchaeota capable of methane production (methanogens) made up the majority of the moss-associated archaeal communities. Moss-associated methanogenesis was detected for the first time, but it was mostly negligible under laboratory conditions. In contrast, substantial moss-associated methane oxidation was measured on both brown mosses and Sphagnum mosses, supporting the view that methanotrophic bacteria, as part of the moss microbiome, may contribute to the reduction of methane emissions from pristine and rewetted peatlands of the northern hemisphere.
Among the investigated abiotic and biotic environmental parameters, the peatland type and the host moss taxon were identified as having a major impact on the structure of moss-associated bacterial communities, in contrast to archaeal communities, whose structures were similar among the investigated bryophytes. For the first time it was shown that different bog development stages harbour distinct bacterial communities, while at the same time a small core community is shared among all investigated bryophytes independent of geography and peatland type.
This thesis presents the first large-scale, systematic assessment of bacterial and archaeal communities associated with both brown mosses and Sphagnum mosses. It suggests that some host-specific taxa have the potential to play a key role in host moss establishment and peatland development.
To manage tabular data files and leverage their content in a given downstream task, practitioners often design and execute complex transformation pipelines to prepare them. The complexity of such pipelines stems from different factors, including the nature of the preparation tasks, often exploratory or ad-hoc to specific datasets; the large repertory of tools, algorithms, and frameworks that practitioners need to master; and the volume, variety, and velocity of the files to be prepared. Metadata plays a fundamental role in reducing this complexity: characterizing a file assists end users in the design of data preprocessing pipelines, and furthermore paves the way for suggestion, automation, and optimization of data preparation tasks.
Previous research in the areas of data profiling, data integration, and data cleaning has focused on extracting and characterizing metadata regarding the content of tabular data files, i.e., about the records and attributes of tables. Content metadata are useful for the later stages of a preprocessing pipeline, e.g., error correction, duplicate detection, or value normalization, but they require a properly formed tabular input. Therefore, these metadata are not relevant for the early stages of a preparation pipeline, i.e., for correctly parsing tables out of files. In this dissertation, we turn our focus to what we call the structure of a tabular data file, i.e., the set of characters within a file that do not represent data values but are required to parse and understand the content of the file. We provide three different approaches to represent file structure: an explicit representation based on context-free grammars, an implicit representation based on file-wise similarity, and a learned representation based on machine learning.
In our first contribution, we use the grammar-based representation to characterize a set of over 3000 real-world csv files and identify multiple structural issues that cause files to deviate from the csv standard, e.g., by having inconsistent delimiters or containing multiple tables. We leverage our learnings about real-world files and propose Pollock, a benchmark to test how well systems parse csv files that have a non-standard structure, without any previous preparation. We report on experiments using Pollock to evaluate the performance of 16 real-world data management systems.
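The kind of structural issue that Pollock exercises can be conveyed with a minimal sketch (the helper names below are hypothetical and are not part of Pollock or of the evaluated systems): choose the delimiter whose per-line occurrence count is most consistent, then flag rows whose resulting field count deviates from the file's dominant width.

```python
# Illustrative sketch, not Pollock: detect a file's delimiter by count
# consistency, then report rows that break the dominant table width.
from collections import Counter

CANDIDATES = [",", ";", "\t", "|"]

def sniff_delimiter(lines):
    """Pick the candidate delimiter with the most consistent per-line count."""
    best, best_score = ",", -1.0
    for d in CANDIDATES:
        counts = [line.count(d) for line in lines]
        if not counts or max(counts) == 0:
            continue
        # consistency score: fraction of lines sharing the most common count
        common = Counter(counts).most_common(1)[0][1]
        score = common / len(counts)
        if score > best_score:
            best, best_score = d, score
    return best

def structural_issues(lines):
    """Indices of rows whose field count deviates from the dominant width."""
    d = sniff_delimiter(lines)
    widths = [len(line.split(d)) for line in lines]
    expected = Counter(widths).most_common(1)[0][0]
    return [i for i, w in enumerate(widths) if w != expected]

rows = ["a;b;c", "1;2;3", "only one cell", "4;5;6"]
print(sniff_delimiter(rows))    # ";"
print(structural_issues(rows))  # [2]
```

Real csv files add further complications (quoted delimiters, multiple tables per file) that a count-based heuristic like this cannot handle, which is precisely what makes a benchmark such as Pollock necessary.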
Next, we characterize the structure of files implicitly, by defining a measure of structural similarity for file pairs. We design a novel algorithm to compute this measure, which is based on a graph representation of the files' content. We leverage this algorithm and propose Mondrian, a graphical system to assist users in identifying layout templates in a dataset, i.e., classes of files that share the same structure and can therefore be prepared by applying the same preparation pipeline.
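A much-simplified stand-in conveys the idea of a structural similarity measure (Mondrian itself operates on a graph representation; the abstraction below is only an illustration): map every row to a pattern of character classes, so that files with the same layout but different values compare as identical.

```python
# Simplified sketch of a structural similarity measure: rows are abstracted to
# character-class patterns, and the two pattern sequences are compared.
from difflib import SequenceMatcher

def row_pattern(line):
    """Abstract a row: digits -> D, letters -> A, delimiters kept, runs collapsed."""
    def cls(c):
        if c.isdigit():
            return "D"
        if c.isalpha():
            return "A"
        if c in ",;|\t":
            return c
        return "S"
    out = []
    for c in map(cls, line):
        if not out or out[-1] != c:
            out.append(c)   # collapse runs: "123,ab" -> "D,A"
    return "".join(out)

def structural_similarity(file_a, file_b):
    pa = [row_pattern(l) for l in file_a]
    pb = [row_pattern(l) for l in file_b]
    return SequenceMatcher(None, pa, pb).ratio()

a = ["id,name", "1,ann", "2,bob"]
b = ["id,name", "7,zoe", "8,max"]
print(structural_similarity(a, b))  # 1.0: same layout, different values
```

Files scoring 1.0 under such a measure would belong to the same layout template and could share one preparation pipeline, which is the grouping Mondrian surfaces to the user.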
Finally, we introduce MaGRiTTE, a novel architecture that uses self-supervised learning to automatically learn structural representations of files in the form of vectorial embeddings at three different levels: cell level, row level, and file level. We experiment with the application of structural embeddings for several tasks, namely dialect detection, row classification, and data preparation efforts estimation.
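The three embedding levels compose naturally. The hand-crafted baseline below is not the learned MaGRiTTE model, only a sketch of how cell, row, and file vectors can be built on top of one another (here from a simple character-class histogram):

```python
# Hand-crafted baseline mirroring the three embedding levels (cell, row, file).
# MaGRiTTE learns these representations with self-supervision; this sketch only
# illustrates how the levels compose.
def cell_embedding(cell):
    # 4-dim histogram over character classes: digits, letters, punctuation, other
    v = [0.0, 0.0, 0.0, 0.0]
    for c in cell:
        if c.isdigit():
            v[0] += 1
        elif c.isalpha():
            v[1] += 1
        elif c in ",;.:|\t\"'":
            v[2] += 1
        else:
            v[3] += 1
    n = max(len(cell), 1)
    return [x / n for x in v]

def mean(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def row_embedding(row):    # row: list of cells
    return mean([cell_embedding(c) for c in row])

def file_embedding(rows):  # rows: list of rows
    return mean([row_embedding(r) for r in rows])

rows = [["id", "name"], ["1", "ann"], ["2", "bob"]]
print(file_embedding(rows))
```

A learned model replaces the fixed histogram with task-agnostic features, which is what makes the embeddings transferable across dialect detection, row classification, and effort estimation.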
Our experimental results show that structural metadata, whether captured explicitly in parsing grammars, derived implicitly as file-wise similarity, or learned with the help of machine learning architectures, are fundamental for automating several tasks, scaling preparation up to large quantities of files, and providing repeatable preparation pipelines.
The evaluation of process-oriented cognitive theories through time-ordered observations is crucial for the advancement of cognitive science. The findings presented herein integrate insights from research on eye-movement control and sentence comprehension during reading, addressing challenges in modeling time-ordered data, statistical inference, and interindividual variability. Using kernel density estimation and a pseudo-marginal likelihood for fixation durations and locations, a likelihood implementation of the SWIFT model of eye-movement control during reading (Engbert et al., Psychological Review, 112, 2005, pp. 777–813) is proposed. Within the broader framework of data assimilation, Bayesian parameter inference with adaptive Markov Chain Monte Carlo techniques is facilitated for reliable model fitting. Across the different studies, this framework has been shown to enable reliable parameter recovery from simulated data and prediction of experimental summary statistics. Despite its complexity, SWIFT can be fitted within a principled Bayesian workflow, capturing interindividual differences and modeling experimental effects on reading across different geometrical alterations of text. Based on these advancements, the integrated dynamical model SEAM is proposed, which combines eye-movement control, a traditionally psychological research area, and post-lexical language processing in the form of cue-based memory retrieval (Lewis & Vasishth, Cognitive Science, 29, 2005, pp. 375–419), typically the purview of psycholinguistics. This proof-of-concept integration marks a significant step toward modeling natural language comprehension during reading and suggests that the presented methodology can be useful for developing complex cognitive dynamical models that integrate processes at the levels of perception, higher cognition, and (oculo-)motor control.
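The pseudo-marginal idea can be illustrated with a toy example (the Gamma-distributed simulator below is a stand-in, not SWIFT): the likelihood of observed fixation durations under a parameter theta has no closed form, so it is approximated by a kernel density estimate over model simulations, and that noisy estimate is plugged into a Metropolis-Hastings sampler.

```python
# Toy illustration of pseudo-marginal MCMC with a KDE-based likelihood.
# The duration simulator is a hypothetical stand-in for a process model.
import math
import random

random.seed(1)

def simulate_durations(theta, n=200):
    # stand-in process model: Gamma-distributed durations with mean theta (ms)
    return [random.gammavariate(2.0, theta / 2.0) for _ in range(n)]

def kde_log_likelihood(obs, sims, bw=15.0):
    """Gaussian-kernel density estimate of the simulated durations,
    evaluated at the observed durations (summed log density)."""
    const = -math.log(len(sims) * bw * math.sqrt(2 * math.pi))
    total = 0.0
    for x in obs:
        s = sum(math.exp(-0.5 * ((x - y) / bw) ** 2) for y in sims)
        total += const + math.log(max(s, 1e-300))
    return total

def metropolis(obs, theta0=150.0, steps=150):
    theta, ll = theta0, kde_log_likelihood(obs, simulate_durations(theta0))
    chain = []
    for _ in range(steps):
        prop = theta + random.gauss(0, 10.0)
        if prop > 0:
            ll_prop = kde_log_likelihood(obs, simulate_durations(prop))
            if random.random() < math.exp(min(0.0, ll_prop - ll)):
                theta, ll = prop, ll_prop
        chain.append(theta)
    return chain

obs = simulate_durations(200.0, n=80)  # "observed" data, true mean 200 ms
chain = metropolis(obs)
print(sum(chain) / len(chain))         # posterior mean estimate of theta
```

Fitting SWIFT additionally requires adaptive proposals and fixation locations alongside durations; the sketch only shows why a simulation-based likelihood estimate suffices for valid Bayesian inference.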
These findings collectively advance process-oriented cognitive modeling and highlight the importance of Bayesian inference, individual differences, and interdisciplinary integration for a holistic understanding of reading processes. Implications for theory and methodology, including proposals for model comparison and hierarchical parameter inference, are briefly discussed.
The European Water Framework Directive (WFD) has identified river morphological alteration and diffuse pollution as the two main pressures affecting water bodies in Europe at the catchment scale. Consequently, river restoration has become a priority to achieve the WFD's objective of good ecological status. However, little is known about the effects of stream morphological changes, such as re-meandering, on in-stream nitrate retention at the river network scale. Therefore, catchment nitrate modeling is necessary to guide the implementation of spatially targeted and cost-effective mitigation measures. Meanwhile, Germany, like many other regions in central Europe, experienced consecutive summer droughts from 2015 to 2018, resulting in significant changes in river nitrate concentrations in various catchments. However, a mechanistic exploration of catchment nitrate responses to changing weather conditions is still lacking.
Firstly, a fully distributed, process-based catchment nitrate model (mHM-Nitrate) was used, which was properly calibrated and comprehensively evaluated at numerous spatially distributed nitrate sampling locations. Three calibration schemes were designed, taking into account land use, stream order, and mean nitrate concentrations; they varied in spatial coverage but used data from the same period (2011–2019). The model performance for discharge was similar among the three schemes, with Nash-Sutcliffe Efficiency (NSE) scores ranging from 0.88 to 0.92. However, for nitrate concentrations, scheme 2 outperformed schemes 1 and 3 when compared to observed data from eight gauging stations. This was likely because scheme 2 incorporated a diverse range of data, including low discharge values and nitrate concentrations, and thus provided a better representation of within-catchment heterogeneity. Therefore, the study suggests that strategically selecting gauging stations that reflect the full range of within-catchment heterogeneity is more important for calibration than simply increasing the number of stations.
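The Nash-Sutcliffe Efficiency used to score the discharge simulations is a standard skill measure: NSE = 1 indicates a perfect fit, while NSE = 0 means the model is no better than predicting the observed mean. A minimal implementation:

```python
# Nash-Sutcliffe Efficiency: 1 - SSE(model) / SSE(mean-of-observations model)
def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# illustrative discharge series (m^3/s), not data from the thesis
obs = [3.0, 4.0, 6.0, 5.0, 2.0]
sim = [2.8, 4.2, 5.9, 5.3, 2.1]
print(round(nse(obs, sim), 3))  # 0.981
```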
Secondly, the mHM-Nitrate model was used to reveal the causal relations between sequential droughts and nitrate concentration in the Bode catchment (3200 km²) in central Germany, where stream nitrate concentrations exhibited contrasting trends from upstream to downstream reaches. The model was evaluated using data from six gauging stations, reflecting different levels of runoff components and their associated nitrate mixing from upstream to downstream. Results indicated that the mHM-Nitrate model reproduced the dynamics of daily discharge and nitrate concentration well, with Nash-Sutcliffe Efficiency ≥ 0.73 for discharge and Kling-Gupta Efficiency ≥ 0.50 for nitrate concentration at most stations. In particular, the spatially contrasting trends of nitrate concentration were successfully captured by the model. The decrease of nitrate concentration in the lowland area in the drought years (2015-2018) was presumably due to (1) limited terrestrial export loading (ca. 40% lower than in the normal years 2004-2014), and (2) increased in-stream retention efficiency (20% higher in summer within the whole river network). From a mechanistic modelling perspective, this study provided insights into spatially heterogeneous flow and nitrate dynamics and the effects of sequential droughts, which shed light on water-quality responses to future climate change, as droughts are projected to become more frequent.
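The Kling-Gupta Efficiency used for the nitrate simulations decomposes model skill into three components, correlation r, variability ratio alpha, and bias ratio beta, with KGE = 1 for a perfect fit:

```python
# Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)
import math

def kge(observed, simulated):
    n = len(observed)
    mo = sum(observed) / n
    ms = sum(simulated) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in observed) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in simulated) / n)
    r = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated)) / (n * so * ss)
    alpha = ss / so  # variability ratio
    beta = ms / mo   # bias ratio
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# illustrative nitrate series (mg/l), not data from the thesis
obs = [1.0, 2.0, 3.0, 4.0]
sim = [1.1, 2.0, 2.9, 4.2]
print(round(kge(obs, sim), 3))
```

Unlike NSE, a low KGE pinpoints whether timing (r), spread (alpha), or bias (beta) is at fault, which is useful when diagnosing contrasting upstream and downstream trends.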
Thirdly, this study investigated the effects of stream restoration via re-meandering on in-stream nitrate retention at the network scale in the well-monitored Bode catchment. The mHM-Nitrate model showed good performance in reproducing daily discharge and nitrate concentrations, with median Kling-Gupta Efficiency values of 0.78 and 0.74, respectively. The mean and standard deviation of gross nitrate retention efficiency, which accounted for both denitrification and assimilatory uptake, were 5.1 ± 0.61% in winter and 74.7 ± 23.2% in summer within the stream network. The study found that in summer, denitrification rates were about two times higher in lowland sub-catchments dominated by agricultural lands than in mountainous sub-catchments dominated by forested areas, with median ± SD of 204 ± 22.6 and 102 ± 22.1 mg N m⁻² d⁻¹, respectively. Similarly, assimilatory uptake rates were approximately five times higher in streams surrounded by lowland agricultural areas than in those in higher-elevation, forested areas, with median ± SD of 200 ± 27.1 and 39.1 ± 8.7 mg N m⁻² d⁻¹, respectively. Therefore, restoration strategies targeting lowland agricultural areas may have greater potential for increasing nitrate retention. The study also found that restoring stream sinuosity could increase net nitrate retention efficiency by up to 25.4 ± 5.3%, with greater effects seen in small streams. These results suggest that restoration efforts should consider augmenting stream sinuosity to increase nitrate retention and decrease nitrate concentrations at the catchment scale.
Leitfaden für die Erstellung von kommunalen Aktionsplänen zur Steigerung der urbanen Klimaresilienz
(2024)
The impacts of climate change on people and the environment are becoming ever more apparent: in addition to the health hazards posed by heat waves, which for several years have caused rising numbers of deaths and illnesses across Germany, recent years have seen a growing number of heavy precipitation events and the resulting floods and flash floods. These entail in part immense economic damage, but also harm to human health, both physical and mental, and even fatalities. It must be assumed that such extreme weather events will occur even more frequently in the future.
To better protect the population from the consequences of these weather extremes, preventive and adaptation measures that increase municipal climate resilience are urgently needed alongside climate mitigation measures. This requires, on the one hand, an engagement with a municipality's own risks and the resulting needs for action, and on the other hand, interdisciplinary, cross-sectoral, and process-oriented planning and action. Action plans are intended to bring these two aspects together.
In recent years, a number of municipal and inter-municipal (heat) action plans have been drawn up. However, they differ considerably in content and scope. The present guide is intended to provide effective assistance to municipalities and municipal administrations on the way to their own action plan. It focuses on the challenges arising from increasingly frequent heat and heavy-rainfall events. It builds on existing working aids, recommendations, guides, and other resources, and refers to them in many places, so as to yield a practical guide that can be applied flexibly. With its help, municipalities can focus their activities on heat or heavy rain, or draw up a comprehensive action plan covering both topics.
The icosahedral non-hydrostatic large eddy model (ICON-LEM) was applied around the drift track of the Multidisciplinary Observatory Study of the Arctic (MOSAiC) in 2019 and 2020. The model was set up with horizontal grid-scales between 100 m and 800 m on areas with radii of 17.5 km and 140 km. At its lateral boundaries, the model was driven by analysis data from the German Weather Service (DWD), downscaled by ICON in limited area mode (ICON-LAM) with a horizontal grid-scale of 3 km.
The aim of this thesis was the investigation of the atmospheric boundary layer near the surface in the central Arctic during polar winter with a high-resolution mesoscale model. The default settings in ICON-LEM prevent the model from representing the exchange processes in the Arctic boundary layer in accordance with the MOSAiC observations. The implemented sea-ice scheme in ICON does not include a snow layer on sea-ice, which causes the sea-ice surface temperature to respond too slowly to atmospheric changes. To allow the sea-ice surface to respond faster to changes in the atmosphere, the implemented sea-ice parameterization in ICON was extended with an adapted heat capacity term.
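The role of the heat capacity term can be illustrated with a minimal energy-balance sketch (this is not the ICON implementation; the coefficients are illustrative): the surface temperature of a slab relaxes toward a new atmospheric forcing faster when its effective heat capacity is smaller.

```python
# Minimal energy-balance sketch (not the ICON scheme): one explicit time step of
#   C_eff * dT/dt = F_atm - (k/h) * (T_s - T_ice)
# where F_atm is the net atmospheric flux and k/h the conductive coupling
# to the ice below. All numbers are illustrative assumptions.
def step_surface_temp(T_s, F_atm, T_ice, c_eff, k=2.0, h=1.0, dt=600.0):
    flux = F_atm - (k / h) * (T_s - T_ice)
    return T_s + dt * flux / c_eff

def spin_up(c_eff, steps=50):
    T_s = 250.0  # initial surface temperature (K)
    for _ in range(steps):
        T_s = step_surface_temp(T_s, F_atm=20.0, T_ice=250.0, c_eff=c_eff)
    return T_s

print(spin_up(c_eff=2.0e6))  # large heat capacity: slow warming
print(spin_up(c_eff=2.0e5))  # reduced heat capacity: faster approach to equilibrium
```

Both runs relax toward the same equilibrium (here 260 K), but the low-capacity surface gets there much faster, which is the behavior an adapted heat capacity term restores when no explicit snow layer is modeled.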
The adapted sea-ice parameterization resulted in better agreement with the MOSAiC observations. However, the sea-ice surface temperature in the model is generally lower than observed due to biases in the downwelling long-wave radiation and the lack of complex surface structures, like leads. The large eddy resolving turbulence closure yielded a better representation of the lower boundary layer under strongly stable stratification than the non-eddy-resolving turbulence closure. Furthermore, the integration of leads into the sea-ice surface reduced the overestimation of the sensible heat flux for different weather conditions.
The results of this work help to better understand boundary layer processes in the central Arctic during the polar night. High-resolution mesoscale simulations are able to represent small-scale interactions in time and space and help to further develop parameterizations, also for application in regional and global models.
The past is past; history is made. This construction process involves not only the historical actors and their sources but also, to a considerable degree, the historians who engage with them. It is they who first make the sources speak. What comes to light is thus highly dependent on the researchers themselves: on their assumptions and methods, but also on their social, cultural, and biographical backgrounds. The process model presented here attempts to capture these influences and make them visible, in order to enable an extended scholarly (self-)reflection on this basis.
In the aftermath of the Shoah and the ostensible triumph of nationalism, it became common in historiography to relegate Jews to the position of the “eternal other” in a series of binaries: Christian/Jewish, Gentile/Jewish, European/Jewish, non-Jewish/Jewish, and so forth. For the longest time, these binaries remained characteristic of Jewish historiography, including in the Central European context. Assuming instead, as the more recent approaches in Habsburg studies do, that pluriculturalism was the basis of common experience in formerly Habsburg Central Europe, and accepting that no single “majority culture” existed, but rather hegemonies were imposed in certain contexts, then the often used binaries are misleading and conceal the complex and sometimes even paradoxical conditions that shaped Jewish life in the region before the Shoah.
The very complexity of Habsburg Central Europe both in synchronic and diachronic perspective precludes any singular historical narrative of “Habsburg Jewry,” and it is not the intention of this volume to offer an overview of “Habsburg Jewish history.” The selected articles in this volume illustrate instead how important it is to reevaluate categories, deconstruct historical narratives, and reconceptualize implemented approaches in specific geographic, temporal, and cultural contexts in order to gain a better understanding of the complex and pluricultural history of the Habsburg Empire and the region as a whole.
The landscape of software self-adaptation is shaped by the need to cost-effectively achieve and maintain (software) quality at runtime and in the face of dynamic operation conditions. Optimization-based solutions perform an exhaustive search in the adaptation space, thus they may provide quality guarantees. However, these solutions render the attainment of optimal adaptation plans time-intensive, thereby hindering scalability. Conversely, deterministic rule-based solutions yield only sub-optimal adaptation decisions, as they are typically bound by design-time assumptions, yet they offer efficient processing and implementation, readability, and an expressivity of individual rules that supports early verification. Addressing the quality-cost trade-off requires solutions that simultaneously exhibit the scalability and cost-efficiency of rule-based policy formalisms and the optimality of optimization-based policy formalisms as explicit artifacts for adaptation. Utility functions, i.e., high-level specifications that capture system objectives, support the explicit treatment of the quality-cost trade-off. Nevertheless, non-linearities, complex dynamic architectures, black-box models, and runtime uncertainty that renders prior knowledge obsolete are a few of the sources of uncertainty and subjectivity that make the elicitation of utility non-trivial.
This thesis proposes a twofold solution for incremental self-adaptation of dynamic architectures. First, we introduce Venus, a solution that combines in its design a rule- and an optimization-based formalism, enabling optimal and scalable adaptation of dynamic architectures. Venus incorporates rule-like constructs and relies on utility theory for decision-making. Using a graph-based representation of the architecture, Venus captures rules as graph patterns that represent architectural fragments, thus enabling runtime extensibility and, in turn, support for dynamic architectures; the architecture is evaluated by assigning utility values to fragments; the pattern-based definition of rules and utility enables incremental computation of the utility changes that result from rule executions, rather than evaluating the complete architecture, which supports scalability. Second, we introduce HypeZon, a hybrid solution for runtime coordination of multiple off-the-shelf adaptation policies, which typically offer only partial satisfaction of the quality and cost requirements. Realized based on meta-self-aware architectures, HypeZon complements Venus by re-using existing policies at runtime for balancing the quality-cost trade-off.
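The incremental-utility idea can be sketched as follows (the API below is hypothetical and not the Venus implementation): each matched architectural fragment carries a utility contribution, and when a rule rewrites one fragment, only that fragment's contribution is updated instead of re-evaluating the whole architecture.

```python
# Sketch of incremental utility bookkeeping over architectural fragments
# (hypothetical API, not the Venus code).
class Architecture:
    def __init__(self):
        self.fragment_utility = {}  # fragment id -> utility contribution
        self.total = 0.0            # running overall utility

    def match(self, fragment_id, utility):
        """Register or update a matched fragment; apply only the delta."""
        self.total += utility - self.fragment_utility.get(fragment_id, 0.0)
        self.fragment_utility[fragment_id] = utility

    def unmatch(self, fragment_id):
        """A pattern no longer matches: subtract its contribution."""
        self.total -= self.fragment_utility.pop(fragment_id, 0.0)

arch = Architecture()
arch.match("server-overloaded", -5.0)  # pattern match lowers utility
arch.match("replica-healthy", 3.0)
arch.match("server-overloaded", 0.0)   # a rule fixed the overload: only a delta
print(arch.total)  # 3.0
```

Because each rule execution touches a bounded number of fragments, the cost of a utility update is independent of the total architecture size, which is what makes the approach scale to large dynamic architectures.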
The twofold solution of this thesis is integrated in an adaptation engine that leverages state- and event-based principles for incremental execution and is therefore scalable for large and dynamic software architectures with growing size and complexity. The utility elicitation challenge is resolved by defining a methodology to train utility-change prediction models. The thesis addresses the quality-cost trade-off in the adaptation of dynamic software architectures via design-time combination (Venus) and runtime coordination (HypeZon) of rule- and optimization-based policy formalisms, while offering supporting mechanisms for optimal, cost-effective, scalable, and robust adaptation. The solutions are evaluated according to a methodology that is derived from our systematic literature review of evaluation in self-healing systems; the applicability and effectiveness of the contributions are demonstrated to go beyond the state of the art in covering a wide spectrum of the problem space of software self-adaptation.
Sigmund Freud, the founder of psychoanalysis, began his intellectual life with the Jewish Bible and ended it with it: he began by reading the Philippson Bible, above all together with his father Jacob Freud, and ended by studying the figure of Moses. This study systematically traces this preoccupation and shows that the Jewish Bible was a constant reference point for Freud and shaped his Jewish identity. This is demonstrated by analysing family documents, his religious instruction, and references to the Bible in Freud's writings and correspondence.
Volcanic hydrothermal systems are an integral part of most volcanoes and typically involve a heat source, adequate fluid supply, and fracture or pore systems through which the fluids can circulate within the volcanic edifice. Associated with this are subtle but powerful processes that can significantly influence the evolution of volcanic activity or the stability of the near-surface volcanic system through mechanical weakening, permeability reduction, and sealing of the affected volcanic rock. These processes are well constrained for rock samples by laboratory analyses but are still difficult to extrapolate and evaluate at the scale of an entire volcano. Advances in unmanned aircraft systems (UAS), sensor technology, and photogrammetric processing routines now allow us to image volcanic surfaces at the centimeter scale and thus study volcanic hydrothermal systems in great detail. This thesis aims to explore the potential of UAS approaches for studying the structures, processes, and dynamics of volcanic hydrothermal systems but also to develop methodological approaches to uncover secondary information hidden in the data, capable of indicating spatiotemporal dynamics or potentially critical developments associated with hydrothermal alteration. To accomplish this, the thesis describes the investigation of two near-surface volcanic hydrothermal systems, the El Tatio geyser field in Chile and the fumarole field of La Fossa di Vulcano (Italy), both of which are among the best-studied sites of their kind. Through image analysis, statistical, and spatial analyses we have been able to provide the most detailed structural images of both study sites to date, with new insights into the driving forces of such systems but also revealing new potential controls, which are summarized in conceptual site-specific models. 
Furthermore, the thesis explores methodological remote sensing approaches to detect, classify, and constrain hydrothermal alteration and surface degassing from UAS-derived data, evaluates them via mineralogical and chemical ground-truthing, and compares the alteration pattern with the present-day degassing activity. A significant contribution of the often neglected diffuse degassing activity to the total amount of degassing is revealed, and secondary processes and dynamics associated with hydrothermal alteration that lead to potentially critical developments, such as surface sealing, are constrained. The results and methods provide new approaches for alteration research, for the monitoring of degassing and alteration effects, and for thermal monitoring of fumarole fields, with the potential to be incorporated into volcano monitoring routines.
The Central Andean region is characterized by diverse climate zones with sharp transitions between them. In this work, the area of interest is the South-Central Andes in northwestern Argentina, bordering Bolivia and Chile. The focus is the observation of soil moisture and water vapour with Global Navigation Satellite System (GNSS) remote-sensing methodologies. Because of the rapid temporal and spatial variations of water vapour and moisture circulations, monitoring this part of the hydrological cycle is crucial for understanding the mechanisms that control the local climate. Moreover, GNSS-based techniques have previously shown high potential and are appropriate for further investigation. This study comprises both logistical-organizational work and data analysis. As for the former, three GNSS ground stations were installed in remote locations in northwestern Argentina to acquire observations where no third-party data were available.
The methodological developments for the observation of the climate variables soil moisture and water vapour are independent and rely on different approaches. Soil-moisture estimation with GNSS reflectometry is an emerging approach that has demonstrated promising results but has yet to be employed operationally. Thus, a more advanced algorithm that exploits more observations from multiple satellite constellations was developed using data from two pilot stations in Germany. Additionally, this algorithm was slightly modified and used in a sea-level measurement campaign. Although the objective of this application is not related to monitoring hydrological parameters, its methodology is based on the same principles and helps to evaluate the core algorithm. Water-vapour monitoring with GNSS observations, on the other hand, is a well-established technique that is utilized operationally. Hence, the scope of this study is a meteorological analysis that examines the along-the-zenith air-moisture levels and introduces indices related to the azimuthal gradient.
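The principle behind GNSS-reflectometry retrievals can be sketched briefly (synthetic data and a plain grid search stand in for the Lomb-Scargle analysis typically used): the SNR of a satellite arc oscillates as cos(4πh/λ · sin e) with elevation e, so the dominant frequency in the sin(e) domain yields the effective reflector height h, which varies with soil moisture (or sea level).

```python
# Sketch of SNR-based reflector-height retrieval with synthetic data.
# A grid search over candidate heights replaces the usual Lomb-Scargle step.
import math

L1_WAVELENGTH = 0.1903  # m, GPS L1

def snr_arc(h, elevations_deg):
    """Synthetic interference pattern for a reflector at height h (meters)."""
    return [math.cos(4 * math.pi * h / L1_WAVELENGTH * math.sin(math.radians(e)))
            for e in elevations_deg]

def estimate_height(snr, elevations_deg, h_grid):
    """Return the candidate height whose oscillation best matches the SNR."""
    x = [math.sin(math.radians(e)) for e in elevations_deg]
    best_h, best_power = None, -1.0
    for h in h_grid:
        w = 4 * math.pi * h / L1_WAVELENGTH
        c = sum(s * math.cos(w * xi) for s, xi in zip(snr, x))
        s_ = sum(s * math.sin(w * xi) for s, xi in zip(snr, x))
        power = c * c + s_ * s_
        if power > best_power:
            best_h, best_power = h, power
    return best_h

elev = [5 + 0.05 * i for i in range(500)]    # rising arc, 5 to 30 degrees
snr = snr_arc(1.8, elev)                     # synthetic arc, true height 1.8 m
grid = [0.5 + 0.01 * k for k in range(300)]  # candidates from 0.5 m to 3.5 m
print(estimate_height(snr, elev, grid))
```

Real retrievals must additionally remove the direct-signal trend, combine constellations, and map height changes to soil-moisture changes; the sketch only shows the spectral core of the method.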
The results of the experiments indicate higher-quality soil moisture observations with the new algorithm. Furthermore, the analysis using the stations in northwestern Argentina illustrates the limits of this technology because of varying soil conditions and shows future research directions. The water-vapour analysis points out the strong influence of the topography on atmospheric moisture circulation and rainfall generation. Moreover, the GNSS time series allows for the identification of seasonal signatures, and the azimuthal-gradient indices permit the detection of main circulation pathways.
The Women, Peace and Security Agenda (WPSA) is an international framework addressing the disproportionate impact of armed conflict on women and girls and promoting their meaningful participation in peacebuilding efforts. The Security Council called on Member States to develop National Action Plans (NAPs) to operationalize the four pillars of the Agenda. This study looks at the relevant steps undertaken by both Germany and the European Union. The author calls for improvements on either level and makes four recommendations.
Additive manufacturing (AM) processes enable the production of metal structures with exceptional design freedom, of which laser powder bed fusion (PBF-LB) is one of the most common. In this process, a laser melts a bed of loose feedstock powder particles layer-by-layer to build a structure with the desired geometry. During fabrication, the repeated melting and rapid, directional solidification create large temperature gradients that generate large thermal stress. This thermal stress can itself lead to cracking or delamination during fabrication. More often, large residual stresses remain in the final part as a footprint of the thermal stress. This residual stress can cause premature distortion or even failure of the part in service. Hence, knowledge of the residual stress field is critical for both process optimization and structural integrity.
Diffraction-based techniques allow the non-destructive characterization of residual stress fields. However, such methods require good knowledge of the material of interest, as certain assumptions must be made to accurately determine residual stress. First, the measured lattice plane spacings must be converted to lattice strains using knowledge of a strain-free material state. Second, the measured lattice strains must be related to the macroscopic stress using Hooke's law, which requires knowledge of the stiffness of the material. Since most crystal structures exhibit anisotropic material behavior, the elastic behavior is specific to each lattice plane of the single crystal. Thus, the use of individual lattice planes in monochromatic diffraction residual stress analysis requires knowledge of the lattice plane-specific elastic properties. In addition, knowledge of the microstructure of the material is required for a reliable assessment of residual stress.
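The two assumptions can be made concrete in a short sketch (the numerical values below are illustrative, not measured Inconel 718 data): a measured lattice spacing d is converted to lattice strain via a strain-free reference d0, and strain is converted to stress with a plane-specific diffraction elastic constant.

```python
# Sketch of the two conversion steps in diffraction-based stress analysis.
# All numbers are illustrative assumptions, not values from the thesis.
def lattice_strain(d, d0):
    """Lattice strain from measured spacing d and strain-free reference d0."""
    return (d - d0) / d0

def uniaxial_stress(strain, e_hkl):
    """Hooke's law for a single stress component, using the diffraction
    elastic modulus E_hkl of the chosen lattice plane (hypothetical value)."""
    return e_hkl * strain

d0_311 = 1.0800  # strain-free 311 spacing, angstrom (illustrative)
d_311 = 1.0812   # measured spacing under load (illustrative)
eps = lattice_strain(d_311, d0_311)
sigma = uniaxial_stress(eps, e_hkl=180e9)  # Pa; E_311 value is an assumption
print(f"strain = {eps:.2e}, stress = {sigma / 1e6:.0f} MPa")
```

The sketch makes the error sources explicit: a bias in d0 shifts the strain directly, and an inappropriate E_hkl (e.g., ignoring texture) scales the stress, which is why the thesis devotes separate studies to both inputs.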
This work presents a toolbox for reliable diffraction-based residual stress analysis. This is presented for a nickel-based superalloy produced by PBF-LB. First, this work reviews the existing literature in the field of residual stress analysis of laser-based AM using diffraction-based techniques. Second, the elastic and plastic anisotropy of the nickel-based superalloy Inconel 718 produced by PBF-LB is studied using in situ energy dispersive synchrotron X-ray and neutron diffraction techniques. These experiments are complemented by ex situ material characterization techniques. These methods establish the relationship between the microstructure and texture of the material and its elastic and plastic anisotropy. Finally, surface, sub-surface, and bulk residual stress are determined using a texture-based approach. Uncertainties of different methods for obtaining stress-free reference values are discussed.
The tensile behavior in the as-built condition is shown to be controlled by texture and the cellular sub-grain structure, while in the heat-treated condition the precipitation of strengthening phases and the grain morphology dictate the behavior. In fact, the results of this thesis show that the diffraction elastic constants depend on the underlying microstructure, including texture and grain morphology. For columnar microstructures in both as-built and heat-treated conditions, the diffraction elastic constants are best described by the Reuss iso-stress model. Furthermore, the low accumulation of intergranular strains during deformation demonstrates the robustness of using the 311 reflection for diffraction-based residual stress analysis of columnar textured microstructures. The differences between texture-based and quasi-isotropic approaches for the residual stress analysis are shown to be insignificant in the observed case. However, the analysis of the sub-surface residual stress distributions shows that different scanning strategies result in a change in the orientation of the residual stress tensor. Furthermore, the location of the critical sub-surface tensile residual stress is related to the surface roughness and the microstructure. Finally, recommendations are given for the diffraction-based determination and evaluation of residual stress in textured additively manufactured alloys.
Èto-clefts are Russian focus constructions with the demonstrative pronoun èto ‘this’ at the beginning: “Èto Mark vyigral gonku” (“It was Mark who won the race”). They are often compared with English it-clefts, German es-clefts, as well as the corresponding focus-background structures in other languages.
In terms of semantics, èto-clefts have two important properties which are cross-linguistically typical for clefts: existence presupposition (“Someone won the race”) and exhaustivity (“Nobody except Mark won the race”). However, the exhaustivity effects are not as strong as exhaustivity effects in structures with the exclusive only and require more research.
At the same time, the question of whether the syntactic structure of èto-clefts matches the biclausal structure of English and German clefts remains open: there are arguments in favor of both biclausality and monoclausality. Moreover, there is no consensus regarding the status of èto itself.
Finally, the information structure of èto-clefts has remained underexplored in the existing literature.
This research investigates the information-structural, syntactic, and semantic properties of Russian clefts, both theoretically (supported by examples from Russian text corpora and judgments from native speakers) and experimentally. It is determined which desired changes in the information structure motivate native speakers to choose an èto-cleft and not the canonical structure or other focus realization tools. Novel syntactic tests are conducted to find evidence for bi-/monoclausality of èto-clefts, as well as for base-generation or movement of the cleft pivot. It is hypothesized that èto has a certain important function in clefts, and its status is investigated. Finally, new experiments on the nature of exhaustivity in èto-clefts are conducted. They allow for direct cross-linguistic comparison, using an incremental-information paradigm with truth-value judgments.
In terms of information structure, this research makes a new proposal that presents èto-clefts as structures with an inherent focus-background bipartitioning. Even though èto-clefts are used in typical focus contexts, evidence was found that èto-clefts (as well as Russian thetic clefts) allow for both new-information focus and contrastive focus. Èto-clefts are pragmatically acceptable when a singleton answer to the implied question is expected (e.g. “It was Mark who won the race” but not “It was Mark who came to the party”). Importantly, èto in Russian clefts is neither a dummy element nor redundant: it is a topic expression; it conveys familiarity, which triggers the existence presupposition; it refers to an instantiated event or a known/perceivable situation; and it plays an important role in spoken language as a tool for speech coherence and as a focus marker.
In terms of syntax, this research makes a new monoclausal proposal and presents evidence that the cleft pivot undergoes movement to the left-peripheral position. Èto is proposed to be a TopP.
Finally, in terms of semantics, a novel cross-linguistic evaluation of Russian clefts is made. Experiments show that the exhaustivity inference in èto-clefts is not robust. Participants used different strategies in resolving exhaustivity, falling into two groups: one group considered èto-clefts exhaustive, while the other considered them non-exhaustive. Hence, there is evidence for the pragmatic nature of exhaustivity in èto-clefts. The experimental results for èto-clefts are similar to the experimental results for clefts in German, French, and Akan. It is concluded that speakers use the different tools available in their languages to produce structures with similar interpretive properties.
The African weakly electric fishes (Mormyridae) exhibit a remarkable adaptive radiation, possibly due to their species-specific electric organ discharges (EODs). These are produced by a muscle-derived electric organ located in the caudal peduncle. Divergence in EODs acts as a pre-zygotic isolation mechanism driving species radiations. However, the mechanisms behind EOD diversification are only partially understood.
The aim of this study is to explore the genetic basis of EOD diversification at the gene expression level across Campylomormyrus species/hybrids and across ontogeny. I first produced a high-quality genome of the species C. compressirostris as a valuable resource for understanding electric fish evolution.
The next study compared gene expression patterns between electric organs and skeletal muscles in Campylomormyrus species/hybrids with different EOD durations. I identified several candidate genes with electric organ-specific expression, e.g. KCNA7a, KLF5, KCNJ2, SCN4aa, NDRG3, and MEF2. The overall gene expression pattern exhibited a significant association with EOD duration in all analyzed species/hybrids. The expression of several candidate genes, e.g. KCNJ2, KLF5, KCNK6, and KCNQ5, possibly contributes to the regulation of EOD duration in Campylomormyrus through their increasing or decreasing expression. Several potassium channel genes, e.g. KCNJ2, showed differential expression during ontogeny in species and hybrids with EOD alteration.
I next explored allele-specific expression in intragenus hybrids by crossing C. compressirostris with the medium-duration EOD species C. tshokwe and the elongated-duration EOD species C. rhynchophorus. The hybrids exhibited global expression dominance of the C. compressirostris allele in adult skeletal muscle and electric organ, as well as in the juvenile electric organ. Only the gene KCNJ2 showed dominant expression of the allele from C. rhynchophorus, and this dominance increased during ontogeny. This supports our hypothesis that KCNJ2 is a key gene regulating EOD duration. Our results help us to understand, from a genetic perspective, how gene expression affects EOD diversification in the African weakly electric fish.
Microalgae have been recognized as a promising green production platform for recombinant proteins. The majority of studies on recombinant protein expression have been conducted in the green microalga C. reinhardtii. While promising improvements regarding nuclear transgene expression in this alga have been made, it remains inefficient due to epigenetic silencing, often resulting in low yields that are not competitive with other expression organisms. Other microalgal species might be better suited for high-level protein expression, but are limited in the availability of molecular tools.
The red microalga Porphyridium purpureum recently emerged as a candidate for the production of recombinant proteins. It is promising in that transformation vectors are maintained episomally as autonomously replicating plasmids in the nucleus at high copy number, leading to high expression levels in this red alga.
In this work, we expand the genetic tools for P. purpureum and investigate parameters that govern efficient transgene expression. We provide an improved transformation protocol to streamline the generation of transgenic lines in this organism. Once transgenic lines could be generated efficiently, we showed that codon usage is a main determinant of high-level transgene expression, not only at the protein level but also at the level of mRNA accumulation. The optimized expression constructs resulted in YFP accumulating to an unprecedented 5% of the total soluble protein. Furthermore, we designed new constructs conferring efficient secretion of the transgene product into the culture medium, simplifying purification and harvesting of recombinant proteins. To further improve transgene expression, we tested endogenous promoters driving the most highly transcribed genes in P. purpureum, but found only a minor increase in YFP accumulation.
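The codon-usage effect described above can be illustrated with a simple back-translation that recodes a protein sequence using one preferred codon per residue. This is a hedged sketch only: the `PREFERRED` table below contains placeholder codon choices, not measured P. purpureum codon-usage data, and `codon_optimize` is an illustrative helper, not a tool from this work.

```python
# Hypothetical table of preferred codons per amino acid (illustrative only;
# a real optimization would be derived from P. purpureum codon-usage frequencies).
PREFERRED = {
    "M": "ATG", "Y": "TAC", "F": "TTC", "P": "CCC",
    "G": "GGC", "K": "AAG", "L": "CTC", "*": "TAA",
}

def codon_optimize(protein: str) -> str:
    """Back-translate a protein sequence, emitting one preferred codon per residue."""
    return "".join(PREFERRED[aa] for aa in protein)

print(codon_optimize("MYFP*"))  # ATGTACTTCCCCTAA
```

A real pipeline would additionally avoid unwanted restriction sites and extreme GC content, but the core idea is exactly this residue-by-residue recoding.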
We employed these findings to express complex viral antigens from the hepatitis B virus and the hepatitis C virus in P. purpureum to demonstrate its feasibility as a producer of biopharmaceuticals. The viral glycoproteins were successfully produced at high levels and reached their native conformation, indicating a functional glycosylation machinery and an appropriate folding environment in this red alga. We successfully scaled up the biomass production of transgenic lines and thereby provided enough material for immunization trials in mice, which were performed in collaboration. These trials showed no toxicity of either the biomass or the purified antigens; additionally, the algal-produced antigens were able to elicit a strong and specific immune response.
The results presented in this work pave the way for P. purpureum as a new promising producer organism for biopharmaceuticals in the microalgal field.
Escalation of Commitment in Information Systems Projects: A Cognitive-Affective Perspective
(2024)
Information systems (IS) projects are central to steering corporate strategy and maintaining competitive advantage, yet they frequently run over budget, exceed their schedules, and exhibit a high failure rate. This dissertation examines the psychological foundations of human behavior, in particular cognition and emotion, in connection with a widespread problem in IS project management: the tendency to persist with failing courses of action, known as escalation of commitment (EoC).
Using a mixed-methods research design (combining qualitative and quantitative methods), this dissertation investigates the emotional and cognitive foundations of the decision-making behind escalating commitment to failing IS projects and its development over time. The results of a psychophysiological laboratory experiment provide evidence for the predictions of cognitive dissonance theory, as opposed to coping theory, regarding the role of negative and complex situational emotions, and contribute to a better understanding of how escalation tendencies change during sequential decision-making as a result of cognitive learning effects. Using psychophysiological measurements, including data triangulation between electrodermal and cardiovascular activity as well as AI-based analysis of facial micro-expressions, this research reveals physiological markers of escalating commitment. Complementing the experiment, a qualitative analysis of text-based reflections during escalation situations shows that decision-makers use various cognitive reasoning patterns to justify escalating behaviors, suggesting a sequence of four distinct cognitive phases.
By integrating qualitative and quantitative findings, this dissertation develops a comprehensive theoretical model of how cognition and emotion influence escalating commitment over time. I propose that escalating commitment is a cyclical adaptation of mental models, characterized by changes in cognitive reasoning patterns, variations in temporal cognition mode, and interactions with situational emotions and their anticipation. The main contribution of this work lies in disentangling the emotional and cognitive mechanisms that drive escalating commitment in the context of IS projects. The findings help improve the quality of decisions under uncertainty and provide a foundation for developing de-escalation strategies. Stakeholders in IS projects that are going off course should be aware of the tendency to persist with failing actions and of the significance of the underlying emotional and cognitive dynamics.
Advancing digitalization pervades ever more areas of life and gives rise to increasingly complex socio-technical systems. Although these systems are developed to make life easier, undesirable side effects can also emerge. One such side effect could be, for example, the use of data from fitness apps for disadvantageous insurance decisions. These side effects manifest at all levels between the individual and society. Systems with previously unanticipated side effects can lead to declining acceptance or a loss of trust. Since such side effects often only become apparent in use, they require special consideration already during the design process. This work aims to contribute a suitable aid for systematic reflection to the design process.
In the present work, an analysis tool was developed for identifying and analyzing complex interaction situations in software development projects. Complex interaction situations are characterized by high dynamics, from which an unpredictability of cause-effect relationships follows. As a result, the actors can no longer oversee the consequences of their own actions, but can only reconstruct them in retrospect. This can give rise to faulty interaction sequences on many levels, and the side effects mentioned above can emerge. The analysis tool supports designers in every phase of development through guided reflection, helping them to anticipate potentially complex interaction situations and to counter them by analyzing the possible causes of the perceived complexity.
Based on the definition of interaction complexity, item indicators for capturing complex interaction situations were developed, which are then analyzed using suitable criteria for complexity. The analysis tool is structured as a do-it-yourself questionnaire with self-administered evaluation. The genesis of the questionnaire and the results of an evaluation conducted with five software developers are presented. The analysis tool was perceived by the respondents as applicable, effective, and helpful, and thus enjoys high acceptance among the target group. This finding supports the smooth integration of the analysis tool into the software development process.
The present paper proposes a novel approach to equilibrium selection in the infinitely repeated prisoner's dilemma where players can communicate before choosing their strategies. This approach yields a critical discount factor that makes different predictions for cooperation than the usually considered subgame-perfect or risk-dominance critical discount factors. In laboratory experiments, we find that our factor is useful for predicting cooperation: for payoff changes where the usually considered factors and our factor make different predictions, the observed cooperation is consistent with the predictions based on our factor.
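The two benchmark thresholds mentioned above can be computed directly from the stage-game payoffs T > R > P > S. As a hedged illustration (this sketches the standard textbook thresholds, not the paper's new selection criterion): the subgame-perfect threshold makes grim-trigger cooperation an equilibrium, and the Blonski-Ockenfels-Spagnolo risk-dominance threshold makes grim trigger risk-dominate always-defect.

```python
def spe_threshold(T, R, P, S):
    """Minimal discount factor for grim-trigger cooperation to be
    subgame perfect: R/(1-d) >= T + d*P/(1-d)  <=>  d >= (T-R)/(T-P)."""
    return (T - R) / (T - P)

def risk_dominance_threshold(T, R, P, S):
    """Minimal discount factor for grim trigger to risk-dominate
    always-defect: d >= (T + P - S - R) / (T - S)."""
    return (T + P - S - R) / (T - S)

# Standard textbook payoffs: T=5, R=3, P=1, S=0.
print(spe_threshold(5, 3, 1, 0))             # 0.5
print(risk_dominance_threshold(5, 3, 1, 0))  # 0.6
```

Since the risk-dominance threshold exceeds the subgame-perfect one whenever P > S, there is an intermediate range of discount factors where the two benchmarks predict differently, which is exactly the kind of region the experiments exploit.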
Where there is programming, there are errors. To address debugging, i.e. the search for and correction of errors in source code, more explicitly, this work aims to use a prototypical learning environment both to teach a systematic approach to debugging and to identify design implications for such learning environments. To this end, the following research question is posed: How do learners behave during short-term use of a learning environment based on the cognitive apprenticeship approach, designed to explicitly teach a systematic debugging procedure, and what impressions arise while working through it?
To answer this research question, a prototypical learning environment was developed based on implications from the literature for teaching debugging and on (media-)didactic design aspects, and was tested in a qualitative user study with bachelor's students of computer science programs. On the one hand, potential for improvement at the application level was identified. On the other hand, the systematization of the debugging process within the task work in particular met with a positive response. An investigation of the extent to which using the learning environment affects people's behavior and their debugging procedures in the longer term could be the subject of future work.
Rapidly growing seismic and macroseismic databases and simplified access to advanced machine learning methods have in recent years opened up vast opportunities to address challenges in engineering and strong motion seismology from novel, data-centric perspectives. In this thesis, I explore the opportunities of such perspectives for the tasks of ground motion modeling and rapid earthquake impact assessment, tasks with major implications for long-term earthquake disaster mitigation.
In my first study, I utilize the rich strong motion database from the Kanto basin, Japan, and apply the U-Net artificial neural network architecture to develop a deep learning based ground motion model. The operational prototype provides statistical estimates of expected ground shaking, given descriptions of a specific earthquake source, wave propagation paths, and geophysical site conditions. The U-Net interprets ground motion data in its spatial context, potentially taking into account, for example, the geological properties in the vicinity of observation sites. Predictions of ground motion intensity are thereby calibrated to individual observation sites and earthquake locations.
The second study addresses the explicit incorporation of rupture forward directivity into ground motion modeling. Incorporation of this phenomenon, causing strong, pulse like ground shaking in the vicinity of earthquake sources, is usually associated with an intolerable increase in computational demand during probabilistic seismic hazard analysis (PSHA) calculations. I suggest an approach in which I utilize an artificial neural network to efficiently approximate the average, directivity-related adjustment to ground motion predictions for earthquake ruptures from the 2022 New Zealand National Seismic Hazard Model. The practical implementation in an actual PSHA calculation demonstrates the efficiency and operational readiness of my model. In a follow-up study, I present a proof of concept for an alternative strategy in which I target the generalizing applicability to ruptures other than those from the New Zealand National Seismic Hazard Model.
In the third study, I address the usability of pseudo-intensity reports obtained from macroseismic observations by non-expert citizens for rapid impact assessment. I demonstrate that the statistical properties of pseudo-intensity collections describing the intensity of shaking are correlated with the societal impact of earthquakes. In a second step, I develop a probabilistic model that, within minutes of an event, quantifies the probability of an earthquake to cause considerable societal impact. Under certain conditions, such a quick and preliminary method might be useful to support decision makers in their efforts to organize auxiliary measures for earthquake disaster response while results from more elaborate impact assessment frameworks are not yet available.
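The kind of rapid classifier sketched in this study can be illustrated as a logistic model that maps summary statistics of a pseudo-intensity collection (here: report count and mean reported intensity) to an impact probability. The weights `w0`, `w1`, `w2` below are arbitrary placeholders for illustration, not the fitted parameters of the thesis model, and the two chosen features are assumptions of this sketch.

```python
import math

def impact_probability(n_reports: int, mean_intensity: float,
                       w0=-8.0, w1=0.5, w2=1.2) -> float:
    """Logistic score from two summary features of a felt-report collection.
    Weights are hypothetical; a real model would be fit to historical impact data."""
    z = w0 + w1 * math.log(1 + n_reports) + w2 * mean_intensity
    return 1.0 / (1.0 + math.exp(-z))

# More reports and stronger reported shaking raise the impact probability.
print(impact_probability(10, 3.0))    # small probability
print(impact_probability(5000, 7.0))  # probability close to 1
```

The appeal of such a formulation for rapid response is that it yields a calibrated probability within seconds of the reports arriving, rather than a full loss estimate.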
The application of machine learning methods to datasets that only partially reveal characteristics of Big Data qualifies the majority of results obtained in this thesis as explorative insights rather than ready-to-use solutions to real-world problems. The practical usefulness of this work will be better assessed in the future by applying the developed approaches to growing and increasingly complex datasets.
Organic solar cells (OSCs) represent a new generation of solar cells with a range of captivating attributes, including low cost, light weight, aesthetically pleasing appearance, and flexibility. Unlike traditional silicon solar cells, the photon-electron conversion in OSCs is usually accomplished in an active layer formed by blending two kinds of organic molecules (donor and acceptor) with different energy levels.
The first part of this thesis focuses on better understanding the role of the energetic offset between charge-transfer (CT) states and excitons, and of each recombination channel, in the performance of low-offset OSCs. By combining advanced experimental techniques with optical and electrical simulations, several important insights were achieved: 1. The short-circuit current density and fill factor of low-offset systems are largely determined by field-dependent charge generation. Interestingly, there is strong evidence that this field-dependent charge generation originates from a field-dependent exciton dissociation yield. 2. The reduced energetic offset was found to be accompanied by a strongly enhanced bimolecular recombination coefficient, which cannot be explained solely by exciton repopulation from CT states. This implies the existence of another dark decay channel apart from the CT state.
The second focus of the thesis is technical. The influence of optical artifacts in differential absorption spectroscopy upon changes in sample configuration and active layer thickness was studied. It is exemplified and discussed thoroughly and systematically, in terms of optical simulations and experiments, how optical artifacts originating from non-uniform carrier profiles and interference can distort not only the measured spectra but also the decay dynamics under various measurement conditions. At the end of this study, a generalized methodology based on an inverse optical transfer matrix formalism is provided to correct spectra and decay dynamics distorted by optical artifacts.
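The forward transfer-matrix formalism on which such corrections build can be illustrated for the simplest case: normal-incidence reflectance of a single non-absorbing layer on a substrate. This is a generic textbook sketch (2x2 interface and propagation matrices), not the inverse formalism developed in the thesis.

```python
import cmath
import math

def reflectance(n0, n1, ns, d, wavelength):
    """Reflectance of the stack n0 | layer n1 (thickness d) | substrate ns
    at normal incidence, via 2x2 transfer matrices."""
    def interface(na, nb):
        r = (na - nb) / (na + nb)   # Fresnel amplitude reflection coefficient
        t = 2 * na / (na + nb)      # Fresnel amplitude transmission coefficient
        return [[1 / t, r / t], [r / t, 1 / t]]

    delta = 2 * math.pi * n1 * d / wavelength   # phase accumulated in the layer
    prop = [[cmath.exp(-1j * delta), 0], [0, cmath.exp(1j * delta)]]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    M = matmul(matmul(interface(n0, n1), prop), interface(n1, ns))
    r = M[1][0] / M[0][0]           # total amplitude reflection coefficient
    return abs(r) ** 2

# A quarter-wave layer with n1 = sqrt(ns) acts as an antireflection coating.
print(reflectance(1.0, 1.5 ** 0.5, 1.5, 550 / (4 * 1.5 ** 0.5), 550))  # ~0.0
```

The interference encoded in the propagation matrix is precisely what produces the thickness-dependent artifacts discussed above, which is why a matrix-based inversion is a natural correction strategy.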
Overall, this thesis paves the way for a deeper understanding of the keys toward higher PCEs in low-offset OSC devices, from the perspectives of both device physics and characterization techniques.
Efficiently managing large state is a key challenge for data management systems. Traditionally, state is split into fast but volatile state in memory for processing and persistent but slow state on secondary storage for durability. Persistent memory (PMem), as a new technology in the storage hierarchy, blurs the lines between these states by offering both byte-addressability and low latency like DRAM as well as persistence like secondary storage. These characteristics have the potential to cause a major performance shift in database systems.
Driven by the potential impact that PMem has on data management systems, in this thesis we explore the use of PMem in such systems. We first evaluate the performance of real PMem hardware in the form of Intel Optane in a wide range of setups. To this end, we propose PerMA-Bench, a configurable benchmark framework that allows users to evaluate the performance of customizable database-related PMem access. Based on experimental results obtained with PerMA-Bench, we discuss findings and identify general and implementation-specific aspects that influence PMem performance and should be considered in future work to improve PMem-aware designs. We then propose Viper, a hybrid PMem-DRAM key-value store. Based on PMem-aware access patterns, we show how to leverage PMem and DRAM efficiently to design a key database component. Our evaluation shows that Viper outperforms existing key-value stores by 4–18x for inserts while offering full data persistence and achieving similar or better lookup performance. Next, we show which changes must be made to integrate PMem components into larger systems. Using the example of stream processing engines, we highlight limitations of current designs and propose a prototype engine that overcomes these limitations, allowing it to fully leverage PMem's performance for its internal state management. Finally, in light of Optane's discontinuation, we discuss how insights from PMem research can be transferred to future multi-tier memory setups, using the example of Compute Express Link (CXL).
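The hybrid design principle behind a PMem-DRAM key-value store (a fast volatile index pointing into persistent record storage) can be sketched in ordinary Python, using a dict as a stand-in for the DRAM index and an append-only file as a stand-in for the persistent tier. This is a conceptual illustration of the split only, not Viper's actual implementation or API; the class and method names are ours.

```python
import os

class HybridKV:
    """Volatile in-memory index (dict, standing in for DRAM) mapping keys to
    offsets in an append-only log file (standing in for PMem). Keys and values
    must not contain tabs or newlines in this simplified record format."""

    def __init__(self, path):
        self.path = path
        self.index = {}                   # key -> (offset, length); volatile
        self.log = open(path, "ab+")      # durable record storage

    def put(self, key: str, value: str):
        data = f"{key}\t{value}\n".encode()
        offset = self.log.seek(0, os.SEEK_END)
        self.log.write(data)
        self.log.flush()                  # real PMem designs flush/fence here
        self.index[key] = (offset, len(data))

    def get(self, key: str) -> str:
        offset, length = self.index[key]
        self.log.seek(offset)
        record = self.log.read(length).decode()
        return record.rstrip("\n").split("\t", 1)[1]

    def recover(self):
        """Rebuild the volatile index by scanning the persistent log."""
        self.index.clear()
        self.log.seek(0)
        offset = 0
        for line in self.log:
            key = line.decode().split("\t", 1)[0]
            self.index[key] = (offset, len(line))
            offset += len(line)
```

After a restart only the log survives; `recover()` rebuilds the DRAM-side index from it, mirroring how PMem-resident data outlives the volatile structures built on top of it.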
Overall, we show that PMem offers high performance for state management, bridging the gap between fast but volatile DRAM and persistent but slow secondary storage. Although Optane was discontinued, new memory technologies are continuously emerging in various forms and we outline how novel designs for them can build on insights from existing PMem research.
Du sollst nicht essen
(2024)
Although humans are, biologically speaking, omnivores, there is no community that makes full use of all the foods available to it. Something is always left uneaten. Why we do not eat what we do not eat: this edited volume examines the question from the perspectives of neuroscience, nutrition science, social science, and religious studies. A "religious Nutri-Score" provides information on the most important rules of abstention in Judaism, Christianity, and Islam. A photo series illustrates how certain dishes become sacred food at festivals and holidays. Finally, ways are shown in which people who follow different dietary rules can nevertheless eat together, including a practical test in the university cafeteria.