With the rising complexity of today's software and hardware systems and the hypothesized increase in autonomous, intelligent, and self-* systems, developing correct systems remains an important challenge. Testing, although an important part of the development and maintenance process, usually cannot establish the definite correctness of a software or hardware system - especially when systems have arbitrarily large or infinite state spaces or an infinite number of initial states. This is where formal verification comes in: given a representation of the system in question in a formal framework, verification approaches and tools can be used to establish the system's adherence to its similarly formalized specification, and to complement testing.
One such formal framework is the field of graphs and graph transformation systems. Both are powerful formalisms with well-established foundations and ongoing research that can be used to describe complex hardware or software systems with varying degrees of abstraction. Since their inception in the 1970s, graph transformation systems have continuously evolved; related research spans extensions of their expressive power, graph algorithms and their implementation, application scenarios, and verification approaches, to name just a few topics.
This thesis focuses on a verification approach for graph transformation systems called k-inductive invariant checking, which is an extension of previous work on 1-inductive invariant checking. Instead of exhaustively computing a system's state space, which is a common approach in model checking, 1-inductive invariant checking symbolically analyzes graph transformation rules - i.e. system behavior - in order to draw conclusions with respect to the validity of graph constraints in the system's state space. The approach is based on an inductive argument: if a system's initial state satisfies a graph constraint and if all rules preserve that constraint's validity, we can conclude the constraint's validity in the system's entire state space - without having to compute it.
However, inductive invariant checking also comes with a specific drawback: the locality of graph transformation rules leads to a lack of context information during the symbolic analysis of potential rule applications. This thesis argues that this lack of context can be partly addressed by using k-induction instead of 1-induction. A k-inductive invariant is a graph constraint whose validity in a path of k-1 rule applications implies its validity after any subsequent rule application - as opposed to a 1-inductive invariant where only one rule application is taken into account. Considering a path of transformations then accumulates more context of the graph rules' applications.
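The inductive step can be illustrated on a toy explicit-state transition system (the thesis works symbolically on graph transformation rules and nested graph constraints; this enumerative Python sketch, including the transition relation and all names, is invented purely for illustration):

```python
def is_k_inductive(inv, k, succ, states):
    """Inductive step of k-induction: along every path of k transitions,
    if the invariant holds in the first k states, it must also hold in
    the (k+1)-th. The base case (the first k states of every run from an
    initial state satisfy inv) is checked separately."""
    paths = [(s,) for s in states]
    for _ in range(k):  # extend every path by one transition, k times
        paths = [p + (t,) for p in paths for t in succ[p[-1]]]
    return all(inv(p[-1]) for p in paths if all(inv(s) for s in p[:-1]))

# Toy transition relation: state 1 is unreachable from the initial state 0,
# yet it reaches the "bad" state 3 in two steps and so breaks 1-induction.
succ = {0: [0], 1: [2], 2: [3], 3: [3]}
inv = lambda s: s != 3  # graph-constraint analogue: "state 3 never occurs"
for k in (1, 2, 3):
    print(k, is_k_inductive(inv, k, succ, list(succ)))
# prints: 1 False / 2 False / 3 True
```

State 1 satisfies the constraint but leads to the forbidden state 3; since no state leads to state 1, paths of three transitions accumulate enough context to rule it out, so the constraint is 3-inductive even though it is neither 1- nor 2-inductive. The base case over initial states (here, state 0) is checked separately, mirroring the symbolic base case described above.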
As such, this thesis extends existing research and implementation on 1-inductive invariant checking for graph transformation systems to k-induction. In addition, it proposes a technique to perform the base case of the inductive argument in a symbolic fashion, which allows verification of systems with an infinite set of initial states. Both k-inductive invariant checking and its base case are described in formal terms. Based on that, this thesis formulates theorems and constructions to apply this general verification approach for typed graph transformation systems and nested graph constraints - and to formally prove the approach's correctness.
Since unrestricted graph constraints may lead to non-termination or impracticably high execution times given a hypothetical implementation, this thesis also presents a restricted verification approach, which limits the form of graph transformation systems and graph constraints. It is formalized, proven correct, and its procedures terminate by construction. This restricted approach has been implemented in an automated tool and has been evaluated with respect to its applicability to test cases, its performance, and its degree of completeness.
"How Wenzel and Cassie were wrong" – this was the eye-catching title of an article published by Lichao Gao and Thomas McCarthy in 2007, in which fundamental interpretations of wetting behavior were called into question. The authors initiated a discussion on a subject that had long been considered settled, and they showed that wetting phenomena were not as fully understood as imagined. Similarly, this thesis focuses on certain aspects of liquid wetting which have so far been widely neglected in terms of interpretation and experimental proof. While the effect of surface roughness on the macroscopically observed wetting behavior is commonly and reliably interpreted according to the well-known models of Wenzel and Cassie/Baxter, the size scale of the structures responsible for the surface's rough texture has attracted little further interest. Likewise, the limits of these models have not been described and exploited. Thus, the question arises: what happens when the size of surface structures is reduced to the size of the contacting liquid molecules themselves? Are the common models still valid, or can deviations from macroscopic behavior be observed?
This thesis aims to provide a starting point for these questions. In order to investigate the effect of smallest-scale surface structures on liquid wetting, a suitable model system is developed by means of self-assembled monolayer (SAM) formation from (fluoro)organic thiols with alkyl chains of differing lengths. Surface topographies are created that rely on size differences of several Ångströms and that exhibit surprising wetting behavior depending on the choice of the individual precursor system. Contact angles are measured that deviate considerably from theoretical calculations based on the Wenzel and Cassie/Baxter models, confirming that sub-nm surface topographies affect wetting. Moreover, the experimentally determined wetting properties are found to correlate well with an assumed scale-dependent surface tension of the contacting liquid. Such behavior has previously been described for scattering experiments that take thermally induced capillary waves on the liquid surface into account, and had been predicted earlier by theoretical calculations.
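For reference, the two macroscopic models against which such measured contact angles are compared can each be stated in one line. The following sketch evaluates them for hypothetical input values (a Young angle of 110°, a roughness ratio of 1.3 and a solid fraction of 0.4 are invented for illustration and are not taken from the thesis):

```python
import math

def wenzel(theta_y_deg, r):
    """Wenzel model: cos(theta*) = r * cos(theta_Y); r >= 1 is the ratio
    of actual to projected surface area."""
    c = max(-1.0, min(1.0, r * math.cos(math.radians(theta_y_deg))))
    return math.degrees(math.acos(c))

def cassie_baxter(theta_y_deg, f_solid):
    """Cassie-Baxter model for a composite solid/air interface:
    cos(theta*) = f * (cos(theta_Y) + 1) - 1; f is the wetted solid fraction."""
    c = f_solid * (math.cos(math.radians(theta_y_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(c))

# Hypothetical inputs: Young angle of 110 deg on the flat material.
print(round(wenzel(110, 1.3), 1))        # roughness amplifies the intrinsic angle
print(round(cassie_baxter(110, 0.4), 1)) # trapped air raises the apparent angle further
```

In the Wenzel regime roughness amplifies the intrinsic behavior (a hydrophobic surface appears more hydrophobic), while in the Cassie/Baxter regime air trapped beneath the drop raises the apparent angle further; the thesis reports sub-nanometre topographies for which measured angles deviate from both predictions.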
However, the investigation of model surfaces requires suitable precursor molecules that are not commercially available, and thus opens a door to the exotic chemistry of fluoro-organic materials. In the course of this work, the synthesis of long-chain precursors is examined, with a particular focus on oligomerically pure semi-fluorinated n-alkyl thiols and n-alkyl trichlorosilanes. General protocols for the syntheses of the desired compounds are developed, and the resulting product mixtures are shown to be separable into fractions of individual chain lengths by fluorous-phase high-performance liquid chromatography (F-HPLC).
The transition from model systems to technically more relevant surfaces and applications is initiated through the deposition of SAMs from long-chain fluorinated n-alkyl trichlorosilanes. Depositions are accomplished by a vapor-phase process conducted on a pilot-scale set-up, which enables exact control of the relevant process parameters. The influence of varying deposition conditions on the properties of the final coating is thus examined and analyzed for the most important parameters. The strongest effect is observed for the partial pressure of reactive water vapor, which directly controls the extent of precursor hydrolysis during the deposition process. The experimental results suggest that the formation of ordered monolayers relies on the amount of hydrolyzed silanol species present in the deposition system, irrespective of the exact degree of hydrolysis. However, at increased amounts of species able to cross-link via condensation reactions, film quality deteriorates. This effect is assumed to be caused by the introduction of defects within the film and the adsorption of cross-linked agglomerates. Deposition conditions are also investigated for chain-extended precursor species and reveal distinct differences caused by chain elongation.
This thesis is concerned with Data Assimilation, the process of combining model predictions with observations. So-called filters are of special interest: the aim is to compute the probability distribution of the future state of a physical process, given (possibly) imperfect measurements. This is done using Bayes' rule. The first part focuses on hybrid filters, which bridge between the two main groups of filters: ensemble Kalman filters (EnKF) and particle filters. The former are a group of very stable and computationally cheap algorithms, but they rely on certain strong assumptions. Particle filters, on the other hand, are more generally applicable but computationally expensive, and as such not always suitable for high-dimensional systems. There is therefore a need to combine both groups in order to benefit from the advantages of each. This can be achieved by splitting the likelihood function when assimilating a new observation, treating one part of it with an EnKF and the other part with a particle filter.
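The likelihood-splitting idea can be sketched for a scalar linear-Gaussian toy problem. This naive sketch omits the proposal-density corrections that published hybrids such as the ensemble Kalman particle filter employ, and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_update(ens, y, R, alpha):
    """Split the Gaussian likelihood N(y; x, R) into the two factors
    N(y; x, R/alpha) * N(y; x, R/(1-alpha)); assimilate the first with a
    stochastic (perturbed-observation) EnKF and the second by particle
    reweighting with multinomial resampling."""
    # EnKF step with the tempered likelihood (inflated error R/alpha)
    R1 = R / alpha
    P = np.var(ens, ddof=1)            # forecast ensemble variance (H = identity)
    K = P / (P + R1)                   # scalar Kalman gain
    ens = ens + K * (y + rng.normal(0.0, np.sqrt(R1), ens.size) - ens)
    # Particle step with the remaining likelihood factor
    w = np.exp(-0.5 * (y - ens) ** 2 * (1.0 - alpha) / R)
    w /= w.sum()
    return ens[rng.choice(ens.size, ens.size, p=w)]

prior = rng.normal(0.0, 2.0, 2000)     # forecast ensemble, roughly N(0, 4)
post = hybrid_update(prior, y=1.5, R=0.5, alpha=0.7)
print(post.mean(), post.var())
```

In this linear-Gaussian setting the exact posterior is N(4/3, 4/9), which the hybrid ensemble reproduces approximately; the splitting parameter alpha interpolates between a pure particle filter (alpha → 0) and a pure EnKF (alpha → 1).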
The second part of this thesis deals with the application of Data Assimilation to multi-scale models and the problems that arise from it. One of the main areas of application for Data Assimilation techniques is predicting the development of the oceans and the atmosphere. These processes involve several scales and often balance relations between the state variables. The use of Data Assimilation procedures most often violates relations of this kind, which eventually leads to unrealistic and non-physical predictions of the future development of the process. This work discusses the inclusion of a post-processing step after each assimilation step, in which a minimisation problem is solved that penalises the imbalance. The method is tested on four different models: two Hamiltonian systems and two spatially extended models, which add further difficulties.
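The post-processing step can be sketched for a hypothetical linear balance relation B x = 0, for which the penalised minimisation has a closed-form solution (the relation and all values are invented; realistic balance relations, e.g. geostrophic balance, are generally more involved):

```python
import numpy as np

def rebalance(x_a, B, lam):
    """Minimise ||x - x_a||^2 + lam * ||B x||^2 over x.
    Setting the gradient to zero gives (I + lam * B^T B) x = x_a."""
    n = x_a.size
    return np.linalg.solve(np.eye(n) + lam * B.T @ B, x_a)

# Toy 2-variable state with the (hypothetical) balance relation x0 - x1 = 0.
B = np.array([[1.0, -1.0]])
x_a = np.array([3.0, 1.0])   # unbalanced analysis produced by the assimilation step
for lam in (0.0, 1.0, 100.0):
    print(lam, rebalance(x_a, B, lam))
# lam = 0 returns the analysis unchanged; increasing lam pulls the state
# toward the balanced manifold x0 = x1 while staying close to x_a.
```

The penalty weight lam controls the trade-off between staying close to the analysis and restoring balance; a hard constraint corresponds to the limit lam → ∞.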
Orogenic peridotites represent portions of upper subcontinental mantle now incorporated in mountain belts. They often contain layers, lenses and irregular bodies of pyroxenite and eclogite. The origin of this heterogeneity and the nature of these layers are still debated, but they are likely to involve processes such as transient melts coming from the crust or the mantle and segregating in magma conduits, crust-mantle interaction, upwelling of the asthenosphere, and metasomatism. All these processes occur in the lithospheric mantle and are often related to the subduction of crustal rocks to mantle depths. Indeed, during subduction, fluids and melts are released from the slab and can interact with the overlying mantle, making the study of deep melts in this environment crucial for understanding mantle heterogeneity and crust-mantle interaction. The aim of this thesis is precisely to better constrain how such processes take place by directly studying the melt trapped as primary inclusions in pyroxenites and eclogites. The Bohemian Massif, the crystalline core of the Variscan belt, is targeted for this purpose because it contains orogenic peridotites with layers of pyroxenite and eclogite, and other mafic rocks enclosed in felsic high-pressure and ultra-high-pressure crustal rocks. Within this massif, mafic rocks from two areas were selected: the garnet clinopyroxenite in orogenic peridotite of the Granulitgebirge and the ultra-high-pressure eclogite in the diamond-bearing gneisses of the Erzgebirge. In both areas primary melt inclusions were recognized in the garnet, ranging in size from 2 to 25 µm and showing different degrees of crystallization, from glassy to polycrystalline.
They have been investigated with micro-Raman spectroscopy and EDS mapping; the mineral assemblage consists of kumdykolite, phlogopite, quartz, kokchetavite, a phase with a main Raman peak at 430 cm⁻¹, a phase with a main Raman peak at 412 cm⁻¹, white mica and calcite, with some variability in relative abundance depending on the case study. In the Granulitgebirge osumilite and pyroxene are also present, whereas calcite is one of the main phases in the Erzgebirge. The presence of glass and the mineral assemblage in the nanogranitoids suggest that they were former droplets of melt trapped in the garnet while it was growing. Glassy inclusions and re-homogenized nanogranitoids show a silicate melt that is granitic, hydrous, high in alkalis and weakly peraluminous. In both case studies the melt is also enriched in Cs, Pb, Rb, U, Th, Li and B, suggesting the involvement of a crustal component, i.e. white mica (the main carrier of Cs, Pb, Rb, Li and B), and of a fluid (Cs, Th and U) in the melt-producing reaction. The whole rock in both cases consists mainly of garnet and clinopyroxene with, in the Erzgebirge samples, the additional presence of quartz both in the matrix and as a polycrystalline inclusion in the garnet. The latter is interpreted as a quartz pseudomorph after coesite and occurs in the same microstructural position as the melt inclusions. Both rock types show a crustal and subduction-zone signature, with garnet and clinopyroxene in equilibrium. Melt was likely present during the metamorphic peak of the rock, as it occurs in garnet.
Our data suggest that the process most likely responsible for the formation of the investigated rocks in both areas is a metasomatic reaction between a melt produced in the crust and mafic layers formerly located in the mantle wedge, in the case of the Granulitgebirge, and in the subducted continental crust itself, in the case of the Erzgebirge. Metasomatism in the first case thus took place in the mantle overlying the slab, whereas in the second case it took place in continental crust that already contained mafic layers before subduction. Moreover, the presence of former coesite in the same microstructural position as the melt inclusions in the Erzgebirge garnets suggests that metasomatism took place at ultra-high-pressure conditions.
In summary, this thesis provides new insights into the geodynamic evolution of the Bohemian Massif based on the study of melt inclusions in garnet in two different mafic rock types, combining the direct microstructural and geochemical investigation of the inclusions with whole-rock and mineral geochemistry. We report, for the first time, data directly extracted from natural rocks on the melt responsible for the metasomatism of several areas of the Bohemian Massif. Besides the two locations investigated here, both belonging to the Saxothuringian Zone, a signature similar to that of the investigated melt is clearly visible in pyroxenite and peridotite of the T-7 borehole (again in the Saxothuringian Zone) and in the durbachite suite of the Moldanubian Zone.
Abstract

On the eve of Independence Day 1967, a few weeks before the Six-Day War, Rabbi Tzvi Yehuda Kook (1891-1982) cried out: "Where is our Hebron - are you forgetting it?! And where is our Shechem - are we forgetting it?! And where is our Transjordan - where is every single clod of earth, every part and parcel, every four cubits of the Land of God? Is it in our power to give up even a millimeter of them? God forbid!"

The rabbi's emphatic words, together with the conquest of the West Bank - Hebron, Shechem and Jerusalem - as well as the Sinai Peninsula and the Gaza Strip during the six days of fighting in June 1967, brought about the most powerful eruption of a sense of the Messiah's approaching footsteps within the religious Zionist public. As Rabbi Yisrael Ariel put it, the coming of the Messiah was a matter of mere hours.

Yet about ten years later, in 1978, the government of Israel signed a peace agreement with Egypt. Under this agreement the entire Sinai Peninsula was returned to the Egyptians and the settlement enterprise in the Yamit region was dismantled. The residents of the Yamit region were uprooted from the homes in which they had lived from 1971 until April 1982. The religious Zionist public, too, had to find explanations for the fact that the State of Israel was acting contrary to the expectations placed upon it as a milestone on the road to redemption.

The students of Rabbi Tzvi Yehuda Kook, most of whom by then held rabbinical posts, experienced the uprooting at the height of their careers. Rabbi Tzvi Yisrael Tau, one of Rabbi Tzvi Yehuda Kook's closest students, forbade his students to take part in the demonstrations. He ruled that the decision of a government elected by a Jewish majority must be respected, holding fast to the words of Rabbi Tzvi Yehuda Kook, who had said "the people are not with us" and "dina de-malkhuta dina" ("the law of the kingdom is the law"), especially where a Jewish government elected by the people is concerned.

The great rupture, however - what Motti Inbari called "the profound theological crisis" - was the disengagement from the Gaza Strip in 2005. The question that arose was whether a state that uproots Jewish settlement and hands over parts of the Land of Israel to its enemies can still be called a holy state.
Cognitive dissonance in the context of the disengagement from the Gaza Strip

To understand the disengagement crisis it must be divided in two. The first part is the crisis of faith toward the state: the religious Zionist public held the belief that this is the state that is "the foundation of God's throne in the world" and the state "which the prophets foresaw" (in the words of Rabbi Tzvi Yehuda Kook, following his disappointment at the absence of the holy places from the territory of the State of Israel before the Six-Day War).

The second is the crisis of faith toward the Divine, which "allowed" this plan to be carried out.

There is, then, the conflict between the decisions of the actually existing secular state and the faith-based conception that insists on seeing it as a holy state; and there is the conflict concerning the belief they hold, according to which the people of Israel are living in an hour of redemption that is steadily intensifying toward the coming of the Messiah. According to this faith-based perception of reality, the disengagement plan was not supposed to come about at all.

And yet it did come about. How, then, did this public overcome the crisis and restore its faith-based conception of the Divine - that is one question. The second question is how it rehabilitated its attitude toward the state, and whether it still sees in it "the state which the prophets foresaw".

I used the cognitive dissonance theory of Leon Festinger and his colleagues to analyze and assess the crisis of faith this public underwent. The articles of Inbari and his analysis of the crisis of faith that took hold of the religious Zionist public at the conclusion of the disengagement plan also served as tools in this work, in order to assess the further developments within this public ten years after the implementation of the disengagement plan, as well as the emergence of the violent and terrorist incidents known as "price tag" attacks.

According to the five conditions formulated by Festinger and his colleagues, the members of a group can be expected to intensify the fervor of their belief after its disconfirmation - a "prophetic failure" - provided the following five conditions hold: 1. The belief must be held with deep conviction and must be relevant to what the believer does or how he behaves. 2. The believer must have committed himself to the belief; he must have taken important actions that cannot be undone. The more important these actions are, and the harder they are to undo, the greater the individual's commitment to the belief - for example, quitting a job or moving house. 3. The belief must be sufficiently specific and sufficiently concerned with the real world that real events can unequivocally refute it. 4. An event contradicting the belief must occur and be recognized by the believing individual. 5. The believer must have social support; it is almost impossible for an isolated believer to withstand disconfirming evidence of the kind emphasized above.

Given these five conditions, one can expect the believing individual - a member of a group of individuals who are convinced of their belief and can support one another - to go on holding his belief steadily, and, moreover, that he and his fellows will continue to recruit new believers.

Unlike the cases examined by Festinger and his colleagues, however, the Gaza disengagement plan was characterized by the fact that the date of the evacuation was set not by the leaders of the religious Zionist public but by the then prime minister of Israel, Ariel Sharon. Together with the Israeli government he decided on the evacuation, on 15 August 2005 (10 Av 5765), of all the Jewish residents of the Gaza Strip, the dismantling of the military bases, and the removal of every mark of Israeli sovereignty there. This is, in effect, a mirror image: an extreme event with a defined target date that the religious Zionist public fought to cancel.
The rabbis' responses before and after the withdrawal, examined in light of the theory of the settlers' cognitive dissonance

In this dissertation I examined the theologies of the religious Zionist rabbis who were most active in this period: Rabbi Dov Lior, Rabbi Zalman Melamed, Rabbi Tzvi Yisrael Tau, Rabbi Chanan Porat of blessed memory, and Rabbi Shlomo Aviner. Their theologies were examined before and after the disengagement, as well as ten years after it. Among the questions examined: how did their attitude toward mamlakhtiyut (loyalty to the state and its institutions) change, is there an obligation to refuse orders, and how did they explain the decree to their students. In addition, the "price tag" phenomenon was examined: I asked whether it is a consequence of the crisis of the disengagement from the Gaza Strip or a direct continuation of the development of the Gush Emunim movement and of the day-to-day confrontations between the actual secular state and the utopian religious state to which they aspire. I also examined the rabbis' attitude toward those active in this terrorism, toward the Palestinians, and toward the army.
Rabbi Dov Lior

Rabbi Dov Lior was the rabbi of Kiryat Arba - Hebron and head of the local Nir hesder yeshiva. He was born in 1933 in Jarosław, Galicia, to a family of Belz Hasidim; in 1939, when German forces invaded Poland, his family fled to Russia. In 1948 he immigrated to the Land of Israel aboard the clandestine immigration ship "Negba", and in 1949 he moved to the "Merkaz HaRav" yeshiva in Jerusalem, where he studied with Rabbi Tzvi Yehuda Kook. Rabbi Lior was the first rabbi sent by Rabbi Tzvi Yehuda to serve as community rabbi of Kfar Haroeh; he later moved to Kiryat Arba, where he served as the community's rabbi until his retirement.

His response to the decision on the disengagement from the Gaza Strip was that "after decades of a lack of education" the result is "disengagement from the Land of Israel". The religious Zionist public, however, must "halt this wickedness. [...] If this is carried out it may harm our existence as a people and our existence as a state". The way to struggle, in his view, is "by passive resistance". Although the religious Zionist public is mamlakhti - "dina de-malkhuta dina" - this does not extend to anything that contradicts the Torah of Israel: if the government of Israel decides to destroy a community and hand territory over to the enemy, this must be presented as a decree and must be resisted with all that this implies. [...] They may decide on matters of taxation or speed limits.

The vision, according to Rabbi Lior, is that "the day is not far off when our public will decide in all spheres of public life, in law and in the economy". In his view, "the legal system is the greatest desecration there is in the state. We must prepare for the leadership of the people of Israel, and if our public takes up the leadership there will be true peace; the dread of Israel will fall upon all the terrorists, and the Torah of Israel, in its spirit and its influence, will be felt in all areas of our public life. We shall pass through this difficult period, we shall not fall into despair, and we shall merit to see the salvation of the Lord." On the one hand Rabbi Lior called for a struggle with non-standard tools, demanding that people not go like sheep and not pronounce Knesset decisions "holy"; on the other hand he called for a passive struggle. From the closing words of the article cited above, in its last paragraph, it is evident that he came to terms with the decree of the disengagement plan, and that all that remained was to call on the religious Zionist public to take the leadership into its own hands.

After the disengagement

About a month after the disengagement, at a conference held on 3 Elul 5765, Rabbi Lior expressed his view. He gave a platform to the questions and anxieties circulating in the religious Zionist public, such as how to relate to the state, whether to disengage from it, and whether there was a need to examine our deeds and repent: "And from here arise weighty questions: did we sin, did the people of the Gush sin, did the people of Israel sin? We are searching for the precise point of weakness, not in order to cast blame on anyone, but in order to know what to mend."

He added: "Whoever senses the footsteps of redemption will not fall into despair and faint-heartedness even in situations of seeming retreat. We shall not, because of this, lose our trust in the great divine process of the return of the people of Israel to its land." Rabbi Lior also wrote, in issue 10 of the monthly "Kumi Ori", following the Second Lebanon War of 2006: "I have no doubt that the harsh afflictions visited upon our people in this war, which lasted more than a month, came in the wake of the grave crime of about a year ago - the exile of the Jews from Gush Katif, the destruction of their homes, and the handing over of parts of the land to an enemy."

According to Rabbi Lior there is no need for repentance, purification, or a return to abandoned values, but rather for a renewed, purposive explanation of the event of the 'prophetic failure'. From Rabbi Lior's words before the disengagement it appears he assumed that the disengagement plan could be annulled, and that this depended on the struggle of those implementing the plan, namely the soldiers and commanders: if they refused orders, the plan would be annulled.
Rabbi Zalman Baruch Melamed

Rabbi Zalman Baruch Melamed was born in Tel Aviv in 1937. He is the head of the Beit El yeshiva and one of the founders of Arutz 7 and of the Yeshiva website, the first website for disseminating Torah lessons by religious Zionist rabbis. He studied at the "Merkaz HaRav" yeshiva under Rabbi Tzvi Yehuda Kook from 1954; after about a decade as his close student he was appointed a ra"m (rabbi and teacher) at the yeshiva. In 1978 he was sent, with the support of Rabbi Tzvi Yehuda Kook, to establish a yeshiva at the military base near Beit El; the settlement of Beit El was later founded at the site.

In his view, the struggle over Gush Katif was a hidden struggle between two parts of the people: "There are no political differences here over policy assessments of whether this step is right or wrong. That is not the dispute; the dispute is over what character the state will have - a state of all its citizens, devoid of identity, or a Jewish state with Jewish content." He noted that this was a "crisis of Zionist identity", a crisis affecting part of the public, and that a struggle was now under way between these two parts of the public.

His attitude toward refusing orders was unambiguous: "And I say to the army and the police: if you fail and cannot uproot the communities, you have succeeded. You can say already now that this is an impossible mission, and thereby succeed already now; the IDF does not need to defeat the Jews, the IDF needs to defeat the enemy." Rabbi Melamed called on police officers and soldiers to refuse orders, so as to do everything possible to thwart the disengagement plan.

According to his theological outlook: "These blows we are receiving come because the worldview of the secular left is collapsing; the people of Israel cannot exist without faith, and in its death throes the left is striking out with its last strength. Afterwards a believing Jewish leadership will arise that will lead the state toward redemption."
Rabbi Chanan Porat (1943-2011)

Chanan Porat was one of the founders of the Gush Emunim movement. He came to Kfar Etzion with his family in 1943, at the age of six months. The family was evacuated from Kfar Etzion when he was about six years old, at the start of the War of Independence in 1948. Chanan Porat later studied for several years at the "Kerem B'Yavneh" yeshiva before moving, together with the Gachelet group, to the "Merkaz HaRav" yeshiva, where he studied with Rabbi Tzvi Yehuda Kook. In 2000 Rabbi Porat founded the weekly Torah-portion leaflet "Me'at Min HaOr" ("A Little of the Light"), in which he published Torah insights. The leaflet was distributed free of charge in synagogues at the onset of the Sabbath and appeared until 2014 (about three years after his death).

His response before the disengagement: "We call on all who cleave to the Land of Israel, and on all who believe in the Life of the Worlds and who sow: do not be dragged along by this media campaign, whose whole purpose is to magnify the anticipation of the 'day of reckoning' and to fix it as an accomplished fact already now. [...] We continue, day after day, to plant and to sow, and by their merit the day of reckoning will, with the help of God, blessed be He, be transformed from a day of 'Job's tidings' into a day of 'tidings of redemption'."

Rabbi Porat called on people to carry on as usual and to disregard the evacuation directives in all their provisions (applying to the disengagement administration to arrange alternative housing, or having one's property assessed in order to receive compensation payments). He opposed violence, explaining that violent actions spring from an anger that has no place: "The rule of anger over a person is, heaven forbid, akin to the foreign rule of an alien god."

In the first issue of "Me'at Min HaOr" to appear shortly after the disengagement, a sharp expression of mourning over the loss of the Gush Katif settlements was accompanied by a prophecy of consolation: "We have done what You decreed upon us; do You what is Yours to do!" "By the grace of God, blessed be He, we shall yet return to Gush Katif, to build and be built in it, despite the sufferings inflicted upon us at the hands of man, and when we return once more... 'and I will turn their mourning into dancing, and will make them rejoice from their sorrow'" (Jeremiah 31:7-12).

His disappointment with the institutions of the state in the wake of the expulsion found expression in an issue published about a month after the disengagement: "This rot - which has spread through all the tissues of government - obliges us to a deep and piercing soul-searching regarding our relations with the entire leadership of the state, and not only with the one who stands at its head."

Marking five years since the disengagement, he addressed the failure of the struggle in a conversation held in December 2009 at "Machon Meir" between himself and Hagai Londin. In the conversation he returned repeatedly to the verse "and your children shall return to their own border", part of the prophet Jeremiah's prophecy of the return of the people of Israel to its land: "the realization of the dream of the children of Kfar Etzion, who knew that the day would come and we would return". "The uprooting of the communities is an open, bleeding wound. It should not be forgotten that beyond the violation of human rights stands the fact that it was, as it were, a slap in the face of Rachel, to whom God promised in prophecy 'and your children shall return to their own border'." For Porat, secular Zionism showed a kind of insolence toward Heaven in saying, in effect, "we shall uproot children from their border".
Rabbi Shlomo Chaim Aviner

Head of the Ateret Yerushalayim yeshiva in the Jewish Quarter and also rabbi of the community of Beit El A, he was born in France in 1943 and immigrated to Israel in 1966. After his military service he joined the "Merkaz HaRav" yeshiva. After the Six-Day War he joined the group that stayed at the Park Hotel with the aim of renewing the Jewish community in Hebron (April 1968). He later served as rabbi of the community of Keshet in the Golan Heights, and since 1981 he has been the rabbi of Beit El A.

For Rabbi Aviner, the realization of earthly redemption takes place through the founding of the State of Israel and its institutions on the soil of the Land of Israel. He emphasized, and continues to emphasize, the "sanctity of mamlakhtiyut", which finds expression in his attitude toward the army, government decisions and the state's laws: for him these belong to the realm of the sacred, which must not be challenged. On the other hand, however, he called for civil refusal - for no civilian cooperation with the disengagement administration at any stage of the evacuation - and even called for a consumer boycott of those citizens who cooperated with it. In the chapter on Rabbi Aviner in this doctoral dissertation I also addressed the subject of mesirut nefesh (self-sacrifice), a concept that was filled with new content, first by Rabbi Tzvi Yehuda Kook and his students.

Although Rabbi Aviner saw himself as a rabbi with a mamlakhti approach and did not rule out the call for refusal, he was among the forces actively helping to calm the evacuated public, his role being to prevent resistance to the evacuation. In addition, Rabbi Aviner tore the garments of the evacuated public as a sign of mourning.
Rabbi Tzvi Yisrael Tau

According to Rabbi Tau's theology, the root cause of the decree of disengagement is the disengagement from the secular Jews, whom he calls "those cut off from the sacred" - people who in his view belong to the people of Israel but are empty of content, so that the obsession they displayed toward the disengagement plan served them as a substitute for a religious obsession: "The exaltation of spirit that followed the Six-Day War quickly subsided, and its place was taken by skepticism and uncertainty in everything concerning our national cause. [...] How did we fall into so deep an ideological crisis, to the point of declared 'post-Zionism' and to the point of the 'disengagement plan' of our days?" According to Rabbi Tau it is of great importance to understand that the world is engaged in a perpetual face-to-face dialogue: the moment the people is cut off from its true purpose, an external disengagement plan threatens. This is meant to rouse the entire nation to renewed reflection, to a return to first principles, and from there to action.

"Therefore it is not by blocking roads and by violent, forceful demonstrations that we shall become God's partners in the salvation of our people; such action addresses the symptoms and outward phenomena and not their roots and causes, and is like the dog biting the stick that strikes it." He advised his students to keep away from rage (demonstrations and road blockings) and from despair: "Being filled with rage and despair harms the general situation as a whole, and adds fall upon fall."

His attitude toward refusing orders:

Rabbi Tau opposed refusing orders, since in his view one must not bite the striking stick but rather fight the root of the problem, which is the disconnection within the people. Yet as the disengagement plan approached its scheduled date, his outlook shifted. The researcher Yair Sheleg addressed this change in an article published in "Haaretz", "Rabbi Tau's grey refusal", according to which "explicit refusal is indeed forbidden, but the students must make clear to their commanders that they are 'incapable' of carrying out such an order."

Rabbi Tau called on his believers, students and supporters not to engage in outright refusal of orders, but rather to avoid carrying out the order. From remarks published about ten years after the disengagement, in which the rabbi said "we should not have been in the disengagement", it emerges that Rabbi Tau, too, believed the disengagement stood a good chance of being annulled if the root and basis of its existence were annulled. That is, he believed in the ability of the individual within the group to act toward the failure of the disengagement plan. Rabbi Tau's call for "face-to-face" activity in order to draw the people of Israel closer to his ideology can be seen as a call to recruit believers, but this call remained within the walls of the beit midrash.
About ten years after the disengagement from the Gaza Strip

Summarizing the findings of this dissertation, it should be noted that both the rabbis who called for refusing orders and the rabbis who called for respecting the government's decision ultimately fell into line, and they regard the disengagement plan as a local crisis. Almost all the rabbis, apart from Rabbi Tau, argue that one must look at the picture as a whole, which is on the whole positive: Jewish settlement continues to expand, as do the world of Torah and the world of the yeshivot.

Rabbi Chanan Porat of blessed memory believed that the people of Israel would yet return to the places from which it had been expelled, as had happened in the past with his own family, which was expelled from Kfar Etzion.

Rabbi Zalman Melamed said, in several formulations: "There is trouble here and there, but our bond with the Land of Israel has not stopped; it keeps growing stronger. The land is being built in the Galilee and the Negev, in the Sharon and in Samaria and throughout the Land of Israel. There are places over which there is a struggle, but on the whole the expansion and the progress have not stopped."

Rabbi Dov Lior: "We shall not, because of this, lose our trust in the great divine process of the return of the people of Israel to its land."

Rabbi Aviner: "But as stated, we do not despair; we do not engage in self-blame or in blaming others with respect to the past, but pull forward toward the future with even greater longing."

Rabbi Tau: "Continuing to build: but we understand that Jerusalem and the redemption of Israel cannot be built in a single day. That is why your fathers went to continue the settlement enterprise on the very night of the evacuation, and by morning they had already settled the land in the Halutza region. They did not break. Certainly they grieved over the loss."
אירועי „תג מחיר“
אולם יחד עם תמונה אופטימית זאת, קיימת התפתחות נוספת שאינה המשך ישיר של תלמידי הרב צבי קוק וישיבת "מרכז הרב". התפתחות אירועי האלימות והטרור המכונות „תג מחיר“.
כשלוש שנים לקח לציבור הציוני דתי להחלים ממשבר ההתנתקות ולתכנן תגובה נאותה כלפי אכזבתו העמוקה מהמדינה. אולם כבר בפינוי עמונה ב1 לפברואר 2006, מספר חודשים לאחר ההתנתקות, חלה תפנית חדה ביחס לקדושת החלטות המדינה. כשצעירי הציבור הציוני דתי פעלו בכל כוחם כדי למנוע את פינוי ההתנחלות עמונה. פינוי שהתבצע באלימות כוחות הביטחון. זאת בניגוד בוטה למאבק נגד ההתנתקות שהיה לרוב, נקי מאלימות .
פעולות „תג מחיר“ התחילו במהלך 2008 בעקבות פינוי מבנים במאחז יצהר. בכתבה שסיקרה את האירוע נכתב כי השיטה היא: "לגבות „תג מחיר“ גבוה על כל פעולה מסוג זה של הצבא או המשטרה". בפעולה הראשונה זו של „תג מחיר“: "נחסמו לתנועה צומת שילה, צומת הטי, צומת רחלים, חווארה, צומת חטיבת שומרון ועוד. ביצהר יודעים לספר שצה"ל הודיע שאין ביכולתו לשלוח כוחות לכל המקומות על מנת לעצור את המחאה, כך שהשיטה הוכיחה את עצמה. במקום התפתחו עימותים קשים בין יהודים לערבים, ושטחים גדולים של מרעה וזיתים נשרפו. גם בעסירה אל קבלייה התפתח עימות גדול ובמהלכו נשרף בית, תוך כדי שהצבא מודיע בקשר ש"אין לו כוחות לשלוח למקום."
אירועים אלו ניתנים לבחינה הן כתוצאה ישירה של תוכנית ההתנתקות והן כתופעה בפני עצמה:
• "Price tag" as the disengagement plan's effect on religious Zionism: the activists in these actions belong to the religious-Zionist public and to the very families expelled from the Gush Katif region, who experienced the crisis personally and not only politically and religiously. Hence the possible link: the "price tag" incidents are, in effect, a direct product of the crisis of faith this public underwent as a result of the disengagement from the Gaza Strip. Through them the activists express their distrust of the state and of its ability to take responsibility for their future. In this way they also express their disappointment with the statist (mamlachti) rabbis of religious Zionism, who in their view fell into line with the state's decision and no longer protest against it, as they did during the disengagement period; this lack of authenticity on the rabbis' part is discussed in the concluding chapter of this study.
Supporting this, see Anat Roth's article, "Religious Zionism and the Test of Mamlachtiyut: From Kfar Maimon to Amona", in which she describes the change that took place within the religious-Zionist public that had supported the state, a public that refrained from violence out of ideology, and its awakening to reality on the day after, an awakening that led to the violence that was then witnessed: "Once it had supposedly been proven that the settlers could be defeated and communities evacuated easily, quickly and without violence, the appetite of those [who favored withdrawals] would grow and the road to the next disengagement would be paved."
• "Price tag" incidents as the outgrowth of political radicalism in Israel: the "price tag" incidents might have developed regardless of the disengagement from the Gaza Strip, as suggested in Eliezer Don-Yehiya's 2003 article. There he noted that opposition to the secular Jewish state is in essence the "cognitive dissonance" that the religious-Zionist public tries, without much success, to resolve within itself. He details how the network of national yeshivot contributed greatly to the political radicalization of the religious-Zionist public, well before the disengagement plan or the evacuation of outposts was on the agenda: "The fundamentalist expansionism in the manner of Rabbi Kook carried from the outset the potential for national-political radicalism, since the broadening of the domain of religious sanctity in this approach, and its application to the values of modern nationalism, may lead to an uncompromising struggle for the national goals, which are perceived as an inseparable part of the sanctified whole of the religious value system." Don-Yehiya thus described the radicalism to be expected where the national-religious value system collides with the state's decisions. On his account, these actions result from the difficulty of accommodating the secularism of the state's institutions, a difficulty hard to resolve and one that therefore erupts in the form of terrorist acts; the activists try, as it were, to resolve the dissonance from within through "price tag" actions. His words were written before the decision on the disengagement from the Gaza Strip.
• "Price tag" as the result of a shift from an immigrant consciousness to a colonialist consciousness: another way to explain the "price tag" phenomenon is the categorical shift from an immigrant consciousness to a colonialist one. In November 1995, Yigal Amir assassinated the then prime minister, Yitzhak Rabin. The entire religious-Zionist public became, in the street as in the media, persecuted and blamed for the murder. The non-violent struggle against the disengagement plan was an opportunity to show the secular public and the shapers of public opinion that precisely the public considered murderous and violent, and pushed aside in disgrace, refrained from violence entirely. The assumption was that if they passed the test, they would be admitted into the elite. In other words, abstaining from violence served a higher goal than foiling the disengagement plan.
A few years later, the sons of the families evacuated from Gush Katif and the Gaza Strip feel confident in the justice of their path and set an agenda that matches their principles; above all, they do not try to curry favor. Among the "hilltop youth", that is, a process of behavioral convergence with the secular-Zionist tribe has taken place: neither apologizes, and both project confidence and swagger, in the image of Ben-Gurion's "new Jew". These two groups, the secular Zionists and the hilltop youth, both call for establishing a Zionist entity on land taken from the local inhabitants.
According to the New Historians, the Zionist enterprise is a colonialist enterprise whose principal resource is land; beyond that, there was no desire to assimilate into the native local society, precisely like the French and British colonies across Africa, which were isolated and fed on the cheap labor of the native inhabitants while exploiting local resources.
The group of white immigrants, the Boers, who settled the natural space of South Africa and became rulers over its indigenous population while clinging to the Bible and to the land, offers a way to understand the possible future of the hilltop youth and the "price tag" actions. Examining this group can also explain the hilltop youth's devotion to the teaching of Rabbi Ginsburgh, for Rabbi Ginsburgh is by definition an immigrant: he was born in the United States and immigrated to Israel. Yet this did not prevent him from feeling that he belonged to a place where he was not born, and moreover from seeing that place as naturally his rather than belonging to the population that was born there and had lived its life there for many generations.
Thus, just as the Boers in South Africa trekked into the interior of the continent to escape confrontation with the British conquerors and ultimately decided to cooperate with them in order to advance their principles, so the people of religious Zionism moved to the occupied territories in order to live according to the laws of their faith. The disengagement crisis made clear to them that they must take part in Israel's secular government and influence its decisions.
The religious-Zionist rabbis' responses to the "price tag" actions
Rabbi Zvi Yisrael Tau
A response to the "price tag" actions is found in a Haaretz article by Dr. Gadi Gvaryahu, chairman of the "Tag Meir" forum, a forum that acts against "price tag" attacks: "We must repeat and rehearse what Rabbi Zvi Tau, the leader of the 'Kav' yeshivot, said about the members of the first Jewish Underground (the Jewish Underground operated in the early 1980s and was caught on 27 April 1984): 'We are dealing with a messianic cult that wants to bring redemption to the people of Israel with a weapon in hand; [...] this is the conception of shallow, petty students of Kabbalah, and by it they bring about destruction and ruin'" (Haggai Segal, "Dear Brothers"). By this source, Rabbi Tau, assuming he has remained faithful to his views since then, saw in the "price tag" activists a messianic cult seeking to bring redemption to the people of Israel through violence, through attacks on Islam or on innocents; it is they who damage the process of redemption and bring about its ruin.
Rabbi Shlomo Aviner
On the religious-Zionist website "Kipa", Uri Polak published Rabbi Shlomo Aviner's response to the "price tag" actions under the headline "It is forbidden to harm Arab property": "Our dispute with the Arabs is over the question of whose land this is, but that does not permit us to insult them, to steal from them, or to harass them."
Rabbi Zalman Melamed
After several "price tag" actions, he addressed them in his weekly lesson and answered that first of all, before condemning an action, one must check whether the perpetrator is a Jew, and if so, clarify where he comes from, that is, from which circles the perpetrators come. He added that the attacks do not help the religious-Zionist public achieve its goal, and that "there is a chance that it does harm". The rabbi's response was not resolute, and even sought to relegate these actions to the margins of the religious-Zionist public.
Rabbi Dov Lior
Rabbi Lior and his attitude to the "price tag" actions: no precise statement by Rabbi Lior on these actions can be found. It is important to note, however, that he was arrested on suspicion of incitement after giving his approbation and recommendation to the book Torat HaMelech, which sets out the halakhic grounds said to permit harming non-Jews.
The rise of Rabbi Yitzchak Ginsburgh
Rabbi Ginsburgh holds two main tenets: the whole Land of Israel and messianism. He sees the idea of the "whole Land of Israel" not as an end in itself but as a path, the only path that enables a Jew to reach essence. In his words: "But why do we need the whole Land? Whoever asks this does not understand what the Land of Israel truly is, 'a land which the Lord your God cares for; the eyes of the Lord your God are always upon it', which was given in its entirety to the people of Israel alone, and we are not permitted to give the gentiles even a small part of it."
Rabbi Ginsburgh does not belong to the religious-Zionist ideology of Rabbi Kook's teaching. From the beginning of his religious path he was considered extreme in his views, which he did not hesitate to voice, even publishing a pamphlet, "Baruch HaGever" ("Blessed is the man"), praising an act of extreme violence. Rabbi Ginsburgh's call is for a return to nature, a return to the pure original feeling, the same feeling the hilltop youth embraced in choosing to engage in farming, herding animals, and immediate closeness to the land.
Rabbi Ginsburgh's response to the "price tag" actions: "It happened in Yitzhar during the intermediate days of Passover. Rabbi Yitzchak Ginsburgh convened a gathering under the heading 'In the Land of Israel we are free men'." Yehuda Yifrach added: "It seems this is the first time that a thinker of Rabbi Yitzchak Ginsburgh's stature relates directly to the specific phenomenon of 'price tag' acts and embraces it. So there really is no vacuum. Behind the events that are driving the state mad stands a coherent worldview with a direction and a goal." Rabbi Ginsburgh sees these actions as a necessary part of the development of the Jewish soul and its liberation from foreign rule.
The crisis of faith according to Inbari
Inbari examined the crisis of faith among religious Zionists after the disengagement from the Gaza Strip and wrote in his article: "The disengagement process constitutes a test case for examining how the religious-Zionist public as a whole coped with the crisis of faith, and in particular how the halakhic authorities of this public, those who seek to shape its patterns of religious conduct, coped with it." He further emphasized "that examining the positions of the activist wing of the Gush Emunim rabbis, identified with the Mercaz HaRav school, cannot reflect the positions of the settlement movement as a whole." According to Inbari, these are two schools of thought that split off from religious-Zionist ideology, and there is no essential ideological difference between the various wings of religious Zionism, only differences over the mode of action: "even though in retrospect the gaps between the two streams have already narrowed. From the responses of both sides it appears that the two opposing camps essentially aspire to establish a Torah state that will replace the secular state, which lacks a sacred purpose; their polemic is chiefly over the correct mode of action."
Inbari concluded his article with the question of what the future holds, of how and in what direction religious-Zionist ideology will develop: "The question arises where the system of religious Zionism and Gush Emunim is tending, and which of the trends I estimate will prevail. It should first be said that an honest examination of the conduct of the Gush Emunim public on the eve of the disengagement plan shows that only a minority of it took part in the demonstrations against the plan." Yet in 2012, in an interview with Tomer Persico published on the "Lulat HaEl" website, seven years after the disengagement, Inbari emphasized the radicalization that had followed it: "I see the strengthening of the minority positions, and wonder where this movement is heading. I see two potential trends; one is that the disappointment and failure will lead to a violent and forceful eruption."
Summary
I have presented the rabbis' responses to the disengagement from the Gaza Strip and analyzed them in terms of cognitive dissonance theory. The "price tag" incidents were included in this study for the reasons given above; however, there is not necessarily a connection between the "price tag" incidents and the rabbis whose views were examined in this study. They do not support these actions, nor did the actions emerge from their study halls. This development, the "price tag" incidents, reflects the younger generation's drift away from the path of the religious-Zionist rabbis. The study thus offers a panoramic view of a changing of the guard, that is, a change of views and a replacement of leaders, a process driven from the grassroots.
It may therefore be assumed that the role of these rabbis, the students of Rabbi Zvi Yehuda Kook, has come to an end, and that the era of the covenant between the secular State of Israel and the religious-Zionist public is over. We may now be living in an era in which the new leaders of religious Zionism, like its young public, have demands that can no longer be suppressed for the sake of the great goal (the establishment of the State of Israel), as in the past. Today the young generation demands its place in the leadership of the state and in shaping its near and distant future. As Naftali Bennett, leader of the "Jewish Home" party, said at a conference marking a decade since the disengagement: "The purpose of the operation: the death of the settlement enterprise and the breaking of the spirit of the right. The purpose of the disengagement was to halt the rise of elites that had for years accumulated influence that was legitimate but, in the eyes of the left, dangerous. They felt that someone had changed the rules without bothering to tell them." In his view, the disengagement was a way to prevent the Judaization of Israeli society, as well as a struggle between old and new elites.
Naftali Bennett presented matters aptly: the dispute is indeed not over returning territories or peace with the Palestinians, but over the essence of the state and where it is headed. The strengthening of the religious-Zionist camp under Naftali Bennett's leadership shows that he was right.
The question relevant to this study, however, was whether this radicalization is a further step whose origin lies in the roots of Gush Emunim, or whether it stemmed from the crisis of faith that followed the disengagement.
On the growth of violence, Inbari remarks that since the leaders of religious Zionism offer no resolute condemnation, it appears the violence will intensify: "In a situation where there is no condemnation of violence by the central authorities, where the political rival is described in demonological terms (erev rav, klipa, etc.), and where an ideology justifying spontaneous violence is gaining momentum in radical circles, it is no surprise that violence becomes an inseparable part of the settlement enterprise."
What was the main purpose of avoiding violence during the disengagement? Anat Roth's article gives the impression that it was important to the religious-Zionist public to be part of the democratic culture and to distance itself from the bad name religious Zionism had acquired after the Rabin assassination: "The 'settlers' repeatedly argued that although they regard the disengagement as an 'anti-Zionist, immoral and undemocratic' move, they are committed to the rules of democracy and do not intend to resist it with violence."
It appears that these arguments, Inbari's on the one hand and Roth's on the other, are players on the overall map of the transformation that religious Zionism has undergone on its way toward positions of power in which its religious and political conceptions will find expression. These lines are being written on the eve of the elections to the twenty-first Knesset, and they will serve as evidence for some of the assumptions made in this summary.
Single-column data profiling
(2020)
The research area of data profiling consists of a large set of methods and processes to examine a given dataset and determine metadata about it. Typically, different data profiling tasks address different kinds of metadata, comprising either various statistics about individual columns (Single-column Analysis) or relationships among them (Dependency Discovery). Among the basic statistics about a column are data type, header, the number of unique values (the column's cardinality), maximum and minimum values, the number of null values, and the value distribution. Dependencies involve, for instance, functional dependencies (FDs), inclusion dependencies (INDs), and their approximate versions.
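The basic statistics listed above are simple to compute exactly in a single pass over a column. The following Python sketch is purely illustrative (it is not the implementation studied in the thesis) and covers the null count, cardinality, min/max, and value distribution:

```python
from collections import Counter

def profile_column(values):
    """Compute basic single-column profiling statistics.

    Illustrative toy profiler; real systems additionally infer
    data types, headers, and much more.
    """
    non_null = [v for v in values if v is not None]
    return {
        "num_nulls": len(values) - len(non_null),
        "cardinality": len(set(non_null)),   # number of distinct values
        "min": min(non_null) if non_null else None,
        "max": max(non_null) if non_null else None,
        "distribution": Counter(non_null),   # value -> frequency
    }
```

For very large columns, exact cardinality needs memory proportional to the number of distinct values, which is what motivates the probabilistic estimators discussed next.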
Data profiling has a wide range of conventional use cases, namely data exploration, cleansing, and integration. The produced metadata is also useful for database management and schema reverse engineering. Data profiling also has more novel use cases, such as big data analytics. The generated metadata describes the structure of the data at hand, how to import it, what it is about, and how much of it there is. Thus, data profiling can be considered an important preparatory task for many data analysis and mining scenarios, helping to assess which data might be useful and to reveal and understand a new dataset's characteristics.
In this thesis, the main focus is on the single-column analysis class of data profiling tasks. We study the impact and the extraction of three of the most important pieces of column metadata: the cardinality, the header, and the number of null values.
First, we present a detailed experimental study of twelve cardinality estimation algorithms. We classify the algorithms and analyze their efficiency, scaling far beyond the original experiments and testing theoretical guarantees. Our results highlight their trade-offs and point out the possibility to create a parallel or a distributed version of these algorithms to cope with the growing size of modern datasets.
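As a concrete illustration of the kind of algorithm compared, here is a minimal K-Minimum-Values (KMV) estimator, one classic probabilistic approach. The sketch is not taken from the thesis, and the twelve evaluated algorithms are not reproduced here:

```python
import hashlib

def kmv_estimate(values, k=256):
    """K-Minimum-Values cardinality estimate.

    Hash every value into [0, 1), keep the k smallest distinct
    hashes; the k-th smallest, h_k, yields the distinct-count
    estimate (k - 1) / h_k.
    """
    hashes = set()
    for v in values:
        h = int.from_bytes(hashlib.md5(str(v).encode()).digest()[:8], "big")
        hashes.add(h / 2**64)  # normalize into [0, 1)
    mins = sorted(hashes)[:k]
    if len(mins) < k:          # fewer than k distinct values: exact count
        return len(mins)
    return int((k - 1) / mins[-1])
```

Keeping only k hash values bounds the memory footprint regardless of the true cardinality, with a relative error on the order of 1/sqrt(k), which is why such sketches scale to the dataset sizes mentioned above.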
Then, we present a fully automated, multi-phase system to discover human-understandable, representative, and consistent headers for a target table in cases where headers are missing, meaningless, or unrepresentative for the column values. Our evaluation on Wikipedia tables shows that 60% of the automatically discovered schemata are exact and complete. Considering more schema candidates, top-5 for example, increases this percentage to 72%.
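The top-1 versus top-5 numbers correspond to an exact-match rate at k over ranked schema candidates. A minimal sketch of that metric (the candidate lists and gold schemata below are hypothetical):

```python
def exact_at_k(candidates_per_table, gold_schemata, k):
    """Fraction of tables whose gold schema appears among the
    top-k discovered candidates (candidates assumed ranked)."""
    hits = sum(
        1 for cands, gold in zip(candidates_per_table, gold_schemata)
        if gold in cands[:k]
    )
    return hits / len(gold_schemata)
```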
Finally, we formally and experimentally show the ghost and fake FDs phenomenon caused by FD discovery over datasets with missing values. We propose two efficient scores, probabilistic and likelihood-based, for estimating the genuineness of a discovered FD. Our extensive set of experiments on real-world and semi-synthetic datasets shows the effectiveness and efficiency of these scores.
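One simple way to see how missing values destabilize FD discovery is to check a candidate FD under the two extreme null semantics; candidates whose status flips between the two are exactly the ones whose genuineness needs scoring. The sketch below is an illustration, not the probabilistic or likelihood-based scores proposed in the thesis:

```python
def fd_holds(rows, lhs, rhs, null_eq=True):
    """Check whether the FD lhs -> rhs holds on a list of dicts.

    `null_eq` chooses between treating two NULLs as equal or as
    pairwise distinct, the two extreme semantics under which a
    discovered FD can flip between holding and failing.
    """
    seen = {}
    unknown = object()
    for i, row in enumerate(rows):
        # Under null_eq=False, tag each NULL with its row index so
        # that no two NULLs ever compare equal.
        key = tuple(row[a] if (row[a] is not None or null_eq) else (unknown, i)
                    for a in lhs)
        val = row[rhs] if (row[rhs] is not None or null_eq) else (unknown, i)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True
```

For a column pair where every right-hand-side value under a duplicated left-hand side is NULL, the FD holds under "NULL = NULL" semantics but fails under "NULL != NULL" semantics, so a naive discovery algorithm reports different dependency sets depending on an arbitrary modelling choice.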
Lately, the integration of upconverting nanoparticles (UCNP) into industrial, biomedical, and scientific applications has been accelerating, owing to the exceptional photophysical properties that UCNP offer. Some of the most promising applications lie in the field of medicine and bioimaging, due to advantages such as deeper tissue penetration, reduced optical background, the possibility of multicolor imaging, and lower toxicity compared to many known luminophores. However, some questions remain unanswered, regarding not only the fundamental photophysical processes but also the interaction of UCNP with other luminescent reporters frequently used for bioimaging and with biological media. These issues were the primary motivation for the presented work.
This PhD thesis investigated several aspects of various properties and possibilities for bioapplications of Yb3+,Tm3+-doped NaYF4 upconverting nanoparticles. First, the effect of Gd3+ doping on the structure and upconverting behaviour of the nanocrystals was assessed. The ageing process of the UCNP in cyclohexane was studied over 24 months on the samples with different Gd3+ doping concentrations. Structural information was gathered by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), dynamic light scattering (DLS), and discussed in relation to spectroscopic results, obtained through multiparameter upconversion luminescence studies at various temperatures (from 4 K to 295 K). Time-resolved and steady-state emission spectra recorded over this ample temperature range allowed for a deeper understanding of photophysical processes and their dependence on structural changes of UCNP.
A new protocol using a commercially available high boiling solvent allowed for faster and more controlled production of very small and homogeneous UCNP with better photophysical properties, and the advantages of a passivating NaYF4 shell were shown.
Förster resonance energy transfer (FRET) between four different species of NaYF4: Yb3+, Tm3+ UCNP (synthesized using the improved protocol) and a small organic dye was studied. The influence of UCNP composition and of the proximity of Tm3+ ions (the donors in the FRET process) to acceptor dye molecules has been assessed. The brightest upconversion luminescence was observed in the UCNP with a protective inert shell. UCNP with Tm3+ ions only in the shell were the least bright, but showed the most efficient energy transfer.
In the final part, two surface modification strategies were applied to make the UCNP water-soluble, which simultaneously allowed them to be linked via a non-toxic, copper-free click reaction to liposomes serving as models for further cell experiments. The results were assessed on a confocal microscope system, which was made possible by the lesser-known downshifting properties of Yb3+, Tm3+-doped UCNP. Preliminary antibody-staining tests using two primary antibodies and one dye-labelled secondary antibody were performed on MDCK-II cells.
Over the last decades, the Arctic regions of the earth have warmed at a rate 2–3 times faster than the global average, a phenomenon called Arctic Amplification. A complex, non-linear interplay of physical processes and unique peculiarities in the Arctic climate system is responsible for this, but the relative role of individual processes remains debated. This thesis focuses on climate change and related processes on Svalbard, an archipelago in the North Atlantic sector of the Arctic, which is shown to be a "hotspot" for the amplified recent warming during winter. In this highly dynamical region, both oceanic and atmospheric large-scale transports of heat and moisture interact with spatially inhomogeneous surface conditions, and the corresponding energy exchange strongly shapes the atmospheric boundary layer. In the first part, Pan-Svalbard gradients in the surface air temperature (SAT) and sea ice extent (SIE) in the fjords are quantified and characterized. This analysis is based on observational data from meteorological stations, operational sea ice charts, and hydrographic observations from the adjacent ocean, which cover the 1980–2016 period. It is revealed that typical estimates of SIE during late winter range from 40–50% (80–90%) in the western (eastern) parts of Svalbard. However, strong SAT warming during winter of the order of 2–3 K per decade drives excessive ice loss, leaving fjords in the western parts essentially ice-free in recent winters. It is further demonstrated that warm water currents on the west coast of Svalbard, as well as meridional winds, contribute to regional differences in the SIE evolution. In particular, the proximity to warm water masses of the West Spitsbergen Current can explain 20–37% of SIE variability in fjords on west Svalbard, while meridional winds and associated ice drift may regionally explain 20–50% of SIE variability in the north and northeast. Strong SAT warming has overruled these impacts in recent years, though.
In the next part of the analysis, the contribution of large-scale atmospheric circulation changes to the Svalbard temperature development over the last 20 years is investigated. A study employing kinematic backward air trajectories for Ny-Ålesund reveals a shift over time in the source regions of lower-tropospheric air for both the winter and the summer season. In winter, air in the recent decade is more often of lower-latitude Atlantic origin and less often of Arctic origin. This affects heat and moisture advection towards Svalbard, potentially modulating clouds and downward longwave radiation in that region. A closer investigation indicates that this shift during winter is associated with a strengthened Ural blocking high and Icelandic low, and contributes about 25% to the observed winter warming on Svalbard over the last 20 years. Conversely, circulation changes during summer include a strengthened Greenland blocking high, which leads to more frequent cold air advection from the central Arctic towards Svalbard and less frequent air mass origins in the lower latitudes of the North Atlantic. Hence, circulation changes during winter are shown to have an amplifying effect on the recent warming on Svalbard, while summer circulation changes tend to mask warming.
An observational case study using upper air soundings from the AWIPEV research station in Ny-Ålesund during May–June 2017 underlines that such circulation changes during summer are associated with tropospheric anomalies in temperature, humidity and boundary layer height.
In the last part of the analysis, the regional representativeness of the changes described above around Svalbard for the broader Arctic is investigated. To this end, the terms of the diagnostic temperature equation in the Arctic-wide lower troposphere are examined in the ERA-Interim atmospheric reanalysis product. Significant positive trends in diabatic heating rates, consistent with latent heat transfer to the atmosphere over regions of increasing ice melt, are found for all seasons over the Barents/Kara Seas, and in individual months in the vicinity of Svalbard. The warm (cold) advection trends during winter (summer) on Svalbard introduced above are successfully reproduced. Regarding winter, they are regionally confined to the Barents Sea and Fram Strait, between 70°–80°N, a unique feature in the whole Arctic. Summer cold advection trends are confined to the area between eastern Greenland and Franz Josef Land, enclosing Svalbard.
Cleft exhaustivity
(2020)
In this dissertation a series of experimental studies are presented which demonstrate that the exhaustive inference of focus-background it-clefts in English and their cross-linguistic counterparts in Akan, French, and German is neither robust nor systematic. The inter-speaker and cross-linguistic variability is accounted for with a discourse-pragmatic approach to cleft exhaustivity, in which -- following Pollard & Yasavul 2016 -- the exhaustive inference is derived from an interaction with another layer of meaning, namely, the existence presupposition encoded in clefts.
To investigate the reliability and stability of spherical harmonic models based on archeo-/paleomagnetic data, 2,000 geomagnetic models were calculated. All models are based on the same data set, but with randomized uncertainties. Comparison of these models to the geomagnetic field model gufm1 showed that large-scale magnetic field structures up to spherical harmonic degree 4 are stable throughout all models. By ranking all models according to how well their dipole coefficients match gufm1, uncertainty estimates were derived that are more realistic than those provided by the authors of the data.
The derived uncertainty estimates were used in further modelling, which combines archeo-/paleomagnetic and historical data. The huge difference in data count, accuracy, and coverage between these two very different data sources made it necessary to introduce a time-dependent spatial damping, constructed to constrain the spatial complexity of the model. Finally, 501 models were calculated by considering each data point as a Gaussian random variable whose mean is the original value and whose standard deviation is its uncertainty. The final model, arhimag1k, is calculated by taking the mean of the 501 sets of Gauss coefficients. arhimag1k fits different dependent and independent data sets well. It shows an early reverse flux patch at the core-mantle boundary between 1000 AD and 1200 AD at the location of today's South Atlantic Anomaly. Another interesting feature is a high-latitude flux patch over Greenland between 1200 and 1400 AD. The dipole moment shows constant behaviour between 1600 and 1840 AD.
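The ensemble construction (perturb each datum according to its stated uncertainty, fit one model per noisy realization, and average the resulting coefficients) can be sketched with a toy straight-line model standing in for the spherical harmonic expansion:

```python
import random
import statistics

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def ensemble_fit(xs, ys, sigmas, n_models=501, seed=0):
    """Treat each datum as a Gaussian random variable (mean = datum,
    std = its uncertainty), fit one model per noisy realization, and
    return the mean coefficients: the same ensemble idea as the
    501-model arhimag1k construction, with a toy linear model in
    place of spherical harmonics."""
    rng = random.Random(seed)
    fits = [fit_line(xs, [rng.gauss(y, s) for y, s in zip(ys, sigmas)])
            for _ in range(n_models)]
    a = statistics.mean(f[0] for f in fits)
    b = statistics.mean(f[1] for f in fits)
    return a, b
```

A side benefit of the ensemble is that the spread of the 501 coefficient sets yields an uncertainty estimate for each coefficient, which a single best-fit model cannot provide.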
In the second part of the thesis, four new paleointensities from four different lava flows on the island of Fogo (Cape Verde) are presented. The data are fitted well by arhimag1k, with the exception of the value of 28.3 microtesla at 1663, which is approximately 10 microtesla lower than the model suggests.
The goal of this thesis was to thoroughly investigate the behavior of multimode fibres to aid the development of modern and forthcoming fibre-fed spectrograph systems. Based on the Eigenmode Expansion Method, a field propagation model was created that can emulate effects in fibres relevant for astronomical spectroscopy, such as modal noise, scrambling, and focal ratio degradation. These effects are of major concern for any fibre-coupled spectrograph used in astronomical research. Changes in the focal ratio, modal distribution of light or non-perfect scrambling limit the accuracy of measurements, e.g. the flux determination of the astronomical object, the sky-background subtraction and detection limit for faint galaxies, or the spectral line position accuracy used for the detection of extra-solar planets.
Usually, fibres used for astronomical instrumentation are characterized empirically through tests. The results of this work make it possible to predict the fibre behaviour under various conditions, using sophisticated software tools to simulate the waveguide behaviour and mode transport of fibres.
The simulation environment works with two software interfaces. The first is the mode solver module FemSIM from Rsoft. It is used to calculate all the propagation modes and effective refractive indexes of a given system. The second interface consists of Python scripts which enable the simulation of the near- and far-field outputs of a given fibre. The characteristics of the input field can be manipulated to emulate real conditions. Focus variations, spatial translation, angular fluctuations, and disturbances through the mode coupling factor can also be simulated.
Currently, either fully coherent or fully incoherent propagation can be simulated; partial coherence was not addressed in this work. Further limitations are that the simulations work exclusively for the monochromatic case and that the loss coefficient of the fibres is not considered. Nevertheless, the simulations were able to match the results of realistic measurements.
To test the validity of the simulations, real fibre measurements were used for comparison. Two fibres with different cross-sections were characterized. The first fibre had a circular cross-section, and the second one had an octagonal cross-section. The utilized test-bench was originally developed for the prototype fibres of the 4MOST fibre feed characterization. It allowed for parallel laser beam measurements, light cone measurements, and scrambling measurements. Through the appropriate configuration, the acquisition of the near- and/or far-field was feasible.
By means of modal noise analysis, it was possible to compare the near-field speckle patterns of simulations and measurements as a function of the input angle. The spatial frequencies that originate from the modal interference could be analyzed by using the power spectral density analysis. Measurements and simulations yielded similar results. Measurements with induced modal scrambling were compared to simulations using incoherent propagation and once again similar results were achieved. Through both measurements and simulations, the enlargement of the near-field distribution could be observed and analyzed. The simulations made it possible to explain incoherent intensity fluctuations that appear in real measurements due to the field distribution of the active propagation modes.
By using the Voigt analysis in the far-field distribution, it was possible to separate the modal diffusion component in order to compare it with the simulations. Through an appropriate assessment, the modal diffusion component as a function of the input angle could be translated into angular divergence. The simulations gave the minimal angular divergence of the system. From the mean difference between simulations and measurements, a figure of merit is derived that can be used to characterize the angular divergence of real fibres using the simulations. Furthermore, it was possible to simulate light cone measurements. Given the overall consistent results, it can be stated that the simulations represent a good tool to assist the fibre characterization process for fibre-fed spectrograph systems.
This work was possible through the BMBF Grant 05A14BA1 which was part of the phase A study of the fibre system for MOSAIC, a multi-object spectrograph for the Extremely Large Telescope (ELT-MOS).
This work presents a new design for programming environments that promote the exploration of domain-specific software artifacts and the construction of graphical tools for such program comprehension tasks. In complex software projects, tool building is essential because domain- or task-specific tools can support decision making by representing concerns concisely with low cognitive effort. In contrast, generic tools can only support anticipated scenarios, which usually align with programming language concepts or well-known project domains.
However, the creation and modification of interactive tools is expensive because the glue that connects data to graphics is hard to find, change, and test. Even if valuable data is available in a common format and even if promising visualizations could be populated, programmers have to invest many resources to make changes in the programming environment. Consequently, only ideas of predictably high value will be implemented. In the non-graphical, command-line world, the situation looks different and inspiring: programmers can easily build their own tools as shell scripts by configuring and combining filter programs to process data.
We propose a new perspective on graphical tools and provide a concept to build and modify such tools with a focus on high quality, low effort, and continuous adaptability. That is, (1) we propose an object-oriented, data-driven, declarative scripting language that reduces the amount of and governs the effects of glue code for view-model specifications, and (2) we propose a scalable UI-design language that promotes short feedback loops in an interactive, graphical environment such as Morphic known from Self or Squeak/Smalltalk systems.
We implemented our concept as a tool building environment, which we call VIVIDE, on top of Squeak/Smalltalk and Morphic. We replaced existing code browsing and debugging tools to iterate within our solution more quickly. In several case studies with undergraduate and graduate students, we observed that VIVIDE can be applied to many domains such as live language development, source-code versioning, modular code browsing, and multi-language debugging. Then, we designed a controlled experiment to measure the effect on the time to build tools. Several pilot runs showed that training is crucial and, presumably, takes days or weeks, which implies a need for further research.
As a result, programmers as users can directly work with tangible representations of their software artifacts in the VIVIDE environment. Tool builders can write domain-specific scripts to populate views to approach comprehension tasks from different angles. Our novel perspective on graphical tools can inspire the creation of new trade-offs in modularity for both data providers and view designers.
This dissertation aims to deliver a transcendental interpretation of Immanuel Kant's Kritik der Urteilskraft, considering both its coherence with other critical works as well as the internal coherence of the work itself. This interpretation is called transcendental insofar as special emphasis is placed on the newly introduced cognitive power, namely the reflective power of judgement, guided by the a priori principle of purposiveness. In this way, the seeming manifold of themes, ranging from judgements of taste through culture to teleological judgements about natural purposes, is discussed exclusively in regard to its dependence on this faculty and its transcendental principle. In contrast, in contemporary scholarship the book is often treated as a fragmented work consisting of different independent parts, while my focus lies on the continuity constituted primarily by the activity of the power of judgement.
Going back to certain central yet silently presupposed concepts adopted from previous critical works, the main contribution of this study is to integrate the KU within the overarching critical project. More specifically, I argue that the need for the presupposition made by the reflective power of judgement follows from the peculiar character of our sense-dependent discursive mind. Because we are sense-dependent discursive minds, we do not and cannot have immediate insight into all of nature's features. The particular constitution of our mind rather demands conceptually informed representations which refer to objects mediately.
That said, the principle of purposiveness, namely the presupposition that nature is organized in concert with the particular constitution of our mind, is a necessary condition for the possibility of reflection on nature's empirical features. Reflection, on my account, refers to a process of selecting features in order to allow a classification, including reflection on the method, means and selection criteria. Rather than directly contributing to cognition, like the categories, reflective judgements thus express our ignorance when it comes to the motivation behind nature's design, and this is most forcefully expressed by judgements of taste and teleological judgements about organized matter. In this way, reflection, regardless of whether it is manifested in concept acquisition, scientific systematization, judgements of taste or judgements about organized matter, relies on a principle of the power of judgement which is revealed and justified in this transcendental inquiry.
The development of bioinspired self-assembling materials, such as hydrogels, with promising applications in cell culture, tissue engineering and drug delivery is a current focus in material science. Biogenic or bioinspired proteins and peptides are frequently used as versatile building blocks for extracellular matrix (ECM) mimicking hydrogels. However, precisely controlling and reversibly tuning the properties of these building blocks and the resulting hydrogels remains challenging. Precise control over the viscoelastic properties and self-healing abilities of hydrogels is a key factor for developing intelligent materials to investigate cell-matrix interactions. Thus, there is a need to develop building blocks that are self-healing, tunable and self-reporting. This thesis aims at the development of α-helical peptide building blocks, called coiled coils (CCs), which integrate these desired properties. Self-healing is a direct result of the fast self-assembly of these building blocks when used as material cross-links. Tunability is realized by means of reversible histidine (His)-metal coordination bonds. Lastly, by implementing a fluorescent readout that indicates the CC assembly state, self-reporting hydrogels are obtained.
Coiled coils are abundant protein folding motifs in nature that often have mechanical functions, such as in myosin or fibrin. Coiled coils are superhelices made up of two or more α-helices wound around each other. The assembly of CCs is based on their repetitive sequence of seven amino acids, so-called heptads (abcdefg). Hydrophobic amino acids in the a and d positions of each heptad form the core of the CC, while charged amino acids in the e and g positions form ionic interactions. The solvent-exposed positions b, c and f are excellent targets for modifications since they are more variable. His-metal coordination bonds are strong yet reversible interactions formed between the amino acid histidine and transition metal ions (e.g. Ni2+, Cu2+ or Zn2+). His-metal coordination bonds contribute essentially to the mechanical stability of various high-performance proteinaceous materials, such as spider fangs, Nereis worm jaws and mussel byssal threads. Therefore, I bioengineered reversible His-metal coordination sites into a well-characterized heterodimeric CC that served as a tunable material cross-link. Specifically, I took two distinct approaches, facilitating intramolecular (Chapter 4.2) and/or intermolecular (Chapter 4.3) His-metal coordination.
Previous research suggested that force-induced CC unfolding in shear geometry starts from the points of force application. In order to tune the stability of a heterodimeric CC in shear geometry, I inserted His in the b and f position at the termini of force application (Chapter 4.2). The spacing of His is such that intra-CC His-metal coordination bonds can form to bridge one helical turn within the same helix, but also inter-CC coordination bonds are not generally excluded. Starting with Ni2+ ions, Raman spectroscopy showed that the CC maintained its helical structure and the His residues were able to coordinate Ni2+. Circular dichroism (CD) spectroscopy revealed that the melting temperature of the CC increased by 4 °C in the presence of Ni2+. Using atomic force microscope (AFM)-based single molecule force spectroscopy, the energy landscape parameters of the CC were characterized in the absence and the presence of Ni2+. His-Ni2+ coordination increased the rupture force by ~10 pN, accompanied by a decrease of the dissociation rate constant. To test if this stabilizing effect can be transferred from the single molecule level to the bulk viscoelastic material properties, the CC building block was used as a non-covalent cross-link for star-shaped poly(ethylene glycol) (star-PEG) hydrogels. Shear rheology revealed a 3-fold higher relaxation time in His-Ni2+ coordinating hydrogels compared to the hydrogel without metal ions. This stabilizing effect was fully reversible when using an excess of the metal chelator ethylenediaminetetraacetate (EDTA). The hydrogel properties were further investigated using different metal ions, i.e. Cu2+, Co2+ and Zn2+. Overall, these results suggest that Ni2+, Cu2+ and Co2+ primarily form intra-CC coordination bonds while Zn2+ also participates in inter-CC coordination bonds. This may be a direct result of its different coordination geometry.
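The link between a ~10 pN higher rupture force and a lower dissociation rate constant can be illustrated with the standard Bell model used in single-molecule force spectroscopy. The parameter values below (zero-force off-rate, distance to the transition state, loading rate) are generic assumptions for illustration, not the fitted energy landscape parameters of the thesis.

```python
import math

# Bell model: force-dependent dissociation rate k(F) = k0 * exp(F * x_b / kBT).
# All parameter values are assumed for illustration only.
kBT = 4.11    # thermal energy at ~298 K, in pN*nm
x_b = 1.0     # distance to the transition state, in nm (assumed)

def k_off(force_pN, k0):
    return k0 * math.exp(force_pN * x_b / kBT)

# At a constant loading rate r, the most probable rupture force is
# F* = (kBT / x_b) * ln(r * x_b / (k0 * kBT)); lowering k0 (as observed upon
# His-Ni2+ coordination) therefore shifts F* upward.
def most_probable_rupture_force(loading_rate, k0):
    return (kBT / x_b) * math.log(loading_rate * x_b / (k0 * kBT))

f_no_metal = most_probable_rupture_force(1000.0, 1e-4)    # pN, assumed k0
f_with_metal = most_probable_rupture_force(1000.0, 1e-5)  # 10x slower off-rate
delta = f_with_metal - f_no_metal  # a ~10 pN shift, on the order reported above
```

With these assumed numbers, a ten-fold reduction of the off-rate shifts the most probable rupture force by (kBT/x_b)·ln(10) ≈ 9.5 pN, which shows how a modest off-rate change and a ~10 pN force shift are two views of the same stabilization.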
Intermolecular His-metal coordination bonds in the terminal regions of the protein building blocks of mussel byssal threads are primarily formed by Zn2+ and were found to be intimately linked to higher-order assembly and self-healing of the thread. In the above example, the contribution of intra-CC and inter-CC His-Zn2+ cannot be disentangled. In Chapter 4.3, I redesigned the CC to prohibit the formation of intra-CC His-Zn2+ coordination bonds, focusing only on inter-CC interactions. Specifically, I inserted His in the solvent-exposed f positions of the CC to focus on the effect of metal-induced higher-order assembly of CC cross-links. Raman and CD spectroscopy revealed that this CC building block forms α-helical Zn2+ cross-linked aggregates. Using this CC as a cross-link for star-PEG hydrogels, I showed that the material properties can be switched from viscoelastic in the absence of Zn2+ to elastic-like in the presence of Zn2+. Moreover, the relaxation time of the hydrogel was tunable over three orders of magnitude when using different Zn2+:His ratios. This tunability is attributed to a progressive transformation of single CC cross-links into His-Zn2+ cross-linked aggregates, with inter-CC His-Zn2+ coordination bonds serving as an additional, cross-linking mode.
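The switch from viscoelastic to elastic-like behaviour as the cross-link relaxation time grows can be sketched with a single-mode Maxwell element, a textbook rheology model; the modulus, frequency and relaxation times below are generic assumptions, not values fitted to these hydrogels.

```python
# Single-mode Maxwell element in oscillatory shear:
#   G'(w)  = G0 * (w*tau)^2 / (1 + (w*tau)^2)   (storage modulus)
#   G''(w) = G0 *  w*tau    / (1 + (w*tau)^2)   (loss modulus)
# All values are assumed for illustration.

def moduli(omega, G0, tau):
    wt = omega * tau
    Gp = G0 * wt ** 2 / (1 + wt ** 2)
    Gpp = G0 * wt / (1 + wt ** 2)
    return Gp, Gpp

G0 = 1000.0   # plateau modulus in Pa (assumed)
omega = 1.0   # probe frequency in rad/s (assumed)

# Short tau (fast-exchanging CC cross-links): loss dominates -> viscoelastic flow.
Gp_fast, Gpp_fast = moduli(omega, G0, tau=0.01)

# Tau shifted upward by orders of magnitude (Zn2+-stabilized cross-links):
# storage dominates at the same frequency -> elastic-like response.
Gp_slow, Gpp_slow = moduli(omega, G0, tau=100.0)
```

Tuning tau over three orders of magnitude, as reported above for different Zn2+:His ratios, thus moves the crossover frequency G' = G'' across the experimentally probed window.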
Rheological characterization of the hydrogels with inter-CC His-Zn2+ coordination raised the question of whether, under shear strain, only the His-Zn2+ coordination bonds between CCs rupture or whether the CCs themselves rupture as well. In general, the number of CC cross-links initially formed in the hydrogel, as well as the number of CC cross-links breaking under force, remains to be elucidated. In order to probe these questions more deeply and monitor the state of the CC cross-links when force is applied, a fluorescent reporter system based on Förster resonance energy transfer (FRET) was introduced into the CC (Chapter 4.4). For this purpose, the donor-acceptor pair carboxyfluorescein and tetramethylrhodamine was used. The resulting self-reporting CC showed a FRET efficiency of 77 % in solution. Using this fluorescently labeled CC as a self-reporting, reversible cross-link in an otherwise covalently cross-linked star-PEG hydrogel enabled the detection of the FRET efficiency change under compression force. This proof-of-principle result sets the stage for implementing the fluorescently labeled CCs as molecular force sensors in non-covalently cross-linked hydrogels.
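The readout principle rests on the standard distance dependence of FRET, E = 1 / (1 + (r/R0)^6): an intact CC holds the dyes close together (high E), while cross-link rupture separates them and drives E toward zero. The Förster radius below is an assumed, order-of-magnitude value for illustration, not a number from the thesis.

```python
# FRET efficiency vs. donor-acceptor distance: E = 1 / (1 + (r/R0)**6).
# R0 for a carboxyfluorescein/tetramethylrhodamine-type pair is assumed here
# (a few nm is the typical order of magnitude).

R0 = 5.0  # Foerster radius in nm (assumed)

def fret_efficiency(r_nm):
    return 1.0 / (1.0 + (r_nm / R0) ** 6)

def distance_from_efficiency(E):
    # Invert E = 1/(1 + (r/R0)^6)  ->  r = R0 * ((1 - E) / E)**(1/6)
    return R0 * ((1.0 - E) / E) ** (1.0 / 6.0)

# An intact cross-link reporting E = 0.77 corresponds to a dye separation
# below R0; rupture increases r and collapses E toward zero.
r_intact = distance_from_efficiency(0.77)
```

Because E falls off with the sixth power of r, even a small increase in dye separation upon cross-link failure produces a large, easily detectable drop in efficiency, which is what makes the CC a sensitive molecular force sensor.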
In summary, this thesis highlights that rationally designed CCs are excellent reversibly tunable, self-healing and self-reporting hydrogel cross-links with high application potential in bioengineering and biomedicine. For the first time, I demonstrated that His-metal coordination-based stabilization can be transferred from the single-CC level to the bulk material with clear viscoelastic consequences. Insertion of His in specific sequence positions was used to implement a second non-covalent cross-linking mode via intermolecular His-metal coordination. This His-metal-binding-induced aggregation of the CCs enabled reversible tuning of the hydrogel properties from viscoelastic to elastic-like. As a proof of principle for establishing self-reporting CCs as material cross-links, I labeled a CC with a FRET pair. The fluorescently labelled CC acts as a molecular force sensor, and preliminary results suggest that it enables the detection of hydrogel cross-link failure under compression force. In the future, fluorescently labeled CC force sensors will likely be used not only as intelligent cross-links to study the failure of hydrogels but also to investigate cell-matrix interactions in 3D down to the single-molecule level.
The present study analyzed the direct relationship between an occupation-oriented social group work program and the outcome of vocational reintegration for rehabilitation patients facing particular occupational problem situations. It was funded by the Deutsche Rentenversicherung Bund as a research project from 1 January 2013 to 31 December 2015 and carried out at the Chair of Rehabilitation Sciences at the University of Potsdam.
The research question was: Can an intensive social-work group intervention within inpatient medical rehabilitation strengthen patients' social competencies and social support to such an extent that it yields long-term improvements in vocational reintegration compared with conventional treatment?
The study comprised a qualitative and a quantitative survey with an intervention in between. It included 352 patients aged between 18 and 65 years with cardiovascular diagnoses, whose clinical pictures are frequently accompanied by complex problem situations and a poor socio-medical prognosis.
The group intervention was evaluated in a cluster-randomized controlled study design in order to provide empirical evidence of whether the intervention achieves greater effects than regular social-work treatment. The intervention groups took part in the group program; the control groups received regular social-work treatment.
In this sample, no evidence was found that participation in the social-work group program improved vocational reintegration, health-related work ability, quality of life, or social support. The return-to-work rate was 43.7%; one quarter of the study group was unemployed after one year. The group intervention is therefore to be regarded as equivalent to the conventional social-work setting.
In conclusion, the findings point to the need for social-work support of vocational reintegration over a longer period after a cardiovascular illness, in particular through services close to patients' homes at a later point in time, when health is more stable. The surveys suggested that closer cooperation between social work and psychology could yield benefits. There were also indications of the influential role of relatives, who could support the reintegration process if involved in social counseling. The fit of the investigated social-work group interventions should be improved through targeted social diagnostics.
Chloroplasts are the photosynthetic organelles in plant and algal cells that enable photoautotrophic growth. Due to their prokaryotic origin, modern-day chloroplast genomes harbor 100 to 200 genes. These genes encode core components of the photosynthetic complexes and the chloroplast gene expression machinery, making most of them essential for the viability of the organism. These genes are regulated predominantly through translational adjustments. The powerful technique of ribosome profiling has been used successfully to generate highly resolved pictures of the translational landscape of the Arabidopsis thaliana cytosol, identifying translation of upstream open reading frames and long non-coding transcripts. In addition, differences in plastidial translation and ribosomal pausing sites were addressed with this method. However, a highly resolved picture of the chloroplast translatome is missing. Here, using chloroplast isolation and targeted ribosome affinity purification, I generated highly enriched ribosome profiling datasets of the chloroplast translatome of Nicotiana tabacum in the dark and light. Chloroplast isolation was found unsuitable for the unbiased analysis of translation in the chloroplast but adequate to identify potential co-translational import. Affinity purification was performed for the small and large ribosomal subunits independently. The enriched datasets mirrored the results obtained from whole-cell ribosome profiling. Enhanced translational activity was detected for psbA in the light. An alternative translation initiation mechanism was not identified by selective enrichment of small ribosomal subunit footprints. In sum, this is the first study to use enrichment strategies to obtain high-depth ribosome profiling datasets of chloroplasts for studying ribosome subunit distribution and chloroplast-associated translation.
Ever-changing light intensities challenge the photosynthetic capacity of photosynthetic organisms. Increased light intensities may lead to over-excitation of photosynthetic reaction centers, resulting in damage to the photosystem core subunits. In addition to an expensive repair mechanism for the photosystem II core protein D1, photosynthetic organisms have developed various features to reduce or prevent photodamage. In the long term, photosynthetic complex contents are adjusted for the efficient use of the experienced irradiation. However, the contribution of chloroplast gene expression to the acclimation process remained largely unknown. Here, comparative transcriptome and ribosome profiling was performed for the early time points of high-light acclimation in Nicotiana tabacum chloroplasts on a genome-wide scale. The time-course data revealed stable transcript levels and only minor changes in the translational activity of specific chloroplast genes during high-light acclimation. Yet, psbA translation was increased two-fold in high light from shortly after the shift until the end of the experiment. A stress-inducing shift from low to high light increased the translation only of psbA. This study indicates that acclimation does not begin within the observed time frame and that only short-term responses reducing photoinhibition were observed.
Remembering the dismembered
(2020)
This thesis – written in co-authorship with Tanzanian activist Mnyaka Sururu Mboro – examines different cases of repatriation of ancestral remains to African countries and communities through the prism of postcolonial memory studies. It follows the theft and displacement of prominent ancestors from East and Southern Africa (Sarah Baartman, Dawid Stuurman, Mtwa Mkwawa, Songea Mbano, King Hintsa and the victims of the Ovaherero and Nama genocides) and argues that efforts made for the repatriation of their remains have contributed to a transnational remembrance of colonial violence.
Drawing on cultural studies theories such as "multidirectional memory", "rehumanisation" and "necropolitics", the thesis argues for a new conceptualisation, or "re-membrance", in repatriation, through processes of reunion, empowerment, story-telling and belonging. Moreover, the afterlives of the dead ancestors, who stand at the centre of political debates on justice and reparations, remind us of their past struggles against colonial oppression. They are therefore "memento vita", fostering counter-discourses that recognize them as people and stories.
This manuscript is accompanied by a “(web)site of memory” where some of the research findings are made available to a wider audience. This blog also hosts important sound material which appears in the thesis as interventions by external contributors. Through QR codes, both the written and the digital version are linked with each other to problematize the idea of a written monograph and bring a polyphonic perspective to those diverse, yet connected, histories.
Glycosylphosphatidylinositols (GPIs) are highly complex glycolipids that serve as membrane anchors for a large variety of eukaryotic proteins. They are covalently attached to a group of peripheral proteins called GPI-anchored proteins (GPI-APs) through a post-translational modification in the endoplasmic reticulum. The GPI anchor is a unique structure composed of a glycan, with a phospholipid tail at one end and a phosphoethanolamine linker at the other, where the protein attaches. The glycan part of the GPI comprises a conserved pseudopentasaccharide core that can branch out to carry additional glycosyl or phosphoethanolamine units. GPI-APs are involved in a diverse range of cellular processes, among them signal transduction, protein trafficking, and pathogenesis by protozoan parasites such as the malaria-causing parasite Plasmodium falciparum. GPIs can also exist freely on the membrane surface without an attached protein, such as those found in parasites like Toxoplasma gondii, the causative agent of toxoplasmosis. These molecules are both structurally and functionally diverse; however, their structure-function relationship is still poorly understood. This is mainly because no clear picture exists of how the protein and the glycan arrange with respect to the lipid layer. Direct experimental evidence is rather scarce, which has led to inconclusive pictures, especially regarding the orientation of GPIs and GPI-APs on membrane surfaces and the role of GPIs in membrane organization. It appears that computational modelling through molecular dynamics simulations is a useful method to make progress. In this thesis, we attempt to explore characteristics of GPI anchors and GPI-APs embedded in lipid bilayers by constructing molecular models at two different resolutions: all-atom and coarse-grained.
First, we show how to construct a modular molecular model of GPIs and GPI-anchored proteins that can be readily extended to a broad variety of systems, addressing the micro-heterogeneity of GPIs. We do so by creating a hybrid link to which GPIs of diverse branching and lipid tails of varying saturation, with their respective optimized force fields GLYCAM06 and Lipid14, can be attached. Using microsecond simulations, we demonstrate that the GPI prefers to “flop down” on the membrane, thereby strongly interacting with the lipid heads, over standing upright like a “lollipop”. Secondly, we extend the model of the GPI core to carry out a systematic study of the structural aspects of GPIs carrying different side chains (parasitic and human GPI variants) inserted in lipid bilayers. Our results demonstrate the importance of the side branch residues, as these are the most accessible, and thereby recognizable, epitopes. This finding qualitatively agrees with experimental observations that highlight the role of the side branches in the immunogenicity of GPIs and the specificity thereof. The overall flop-down orientation of the GPIs with respect to the bilayer surface presents the side chain residues to the solvent. Upon attaching the green fluorescent protein (GFP) to the GPI, the protein is seen to lie in close proximity to the bilayer, interacting both with the lipid heads and the glycan part of the GPI. However, the orientation of GFP is sensitive to the type of GPI it is attached to. Finally, we construct a coarse-grained model of the GPI and GPI-anchored GFP using a modified version of the MARTINI force field, with which the accessible timescale is extended by at least an order of magnitude compared to the atomistic system.
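The distinction between the “flop-down” and “lollipop” orientations is typically quantified as the tilt angle between the glycan's principal axis and the membrane normal. The sketch below illustrates that calculation on toy axis vectors; the vectors are invented, not simulation output, and the angle thresholds are merely indicative.

```python
import math

# Quantifying "flop-down" vs. "lollipop": the angle between the glycan axis
# and the membrane normal (taken as the z axis). Toy vectors for illustration.

def tilt_angle_deg(axis):
    norm = math.sqrt(sum(c * c for c in axis))
    cos_t = axis[2] / norm          # projection onto the membrane normal
    return math.degrees(math.acos(cos_t))

upright = tilt_angle_deg((0.1, 0.0, 1.0))   # "lollipop": small tilt
flopped = tilt_angle_deg((1.0, 0.3, 0.1))   # "flop-down": axis nearly in-plane
```

Averaging such an angle over a trajectory gives the orientation distribution from which a flop-down preference, as reported above, can be read off.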
This study provides a theoretical perspective on the conformational behavior of the GPI core and some of its branched variations in the presence of lipid bilayers, and draws comparisons with experimental observations. Our modular atomistic model of the GPI can be further employed to study GPIs of variable branching and thereby aid in designing future experiments, especially in the area of vaccines and drug therapies. Our coarse-grained model can be used to study dynamic aspects of GPIs and GPI-APs with respect to plasma membrane organization. Furthermore, the backmapping technique of converting a coarse-grained trajectory back to the atomistic model would enable in-depth structural analysis with ample conformational sampling.
Organizations incorporate the institutional demands from their environment in order to be deemed legitimate and survive. Yet, complexifying societies promulgate multiple and sometimes inconsistent institutional prescriptions. When these prescriptions collide, organizations are said to face “institutional complexity”. How does an organization then incorporate incompatible demands? What are the consequences of institutional complexity for an organization? The literature provides contradictory conceptual and empirical insights on the matter. A central assumption, however, remains that internal incompatibilities generate tensions that, under certain conditions, can escalate into intractable conflicts, resulting in dysfunctionality and loss of legitimacy. The present research is an inquiry into what happens inside an organization when it incorporates complex institutional demands.
To answer this question, I focus on how individuals inside an organization interpret a complex institutional prescription. I examine how members of the French Development Agency interpret ‘results-based management’, a central but complex concept of organizing in the field of development aid. I use an inductive mixed methods design to systematically explore how different interpretations of results-based management relate to one another and to the organizational context in which they are embedded.
The results reveal that results-based management is a contested concept in the French Development Agency. I find multiple interpretations of the concept, which are attached to partly incompatible rationales about “who we are” and “what we do as an organization”. These rationales nevertheless coexist as balanced forces, without escalating into open conflict. The analysis points to four reasons for this peaceful coexistence of diverging rationales inside one and the same organization: 1) individuals’ capacity to manipulate different interpretations of a complex institutional demand, 2) the nature of interpretations, which makes them more or less prone to conflict, 3) the balanced distribution of rationales across the organizational sub-contexts and 4) the shared rules of interpretation provided by the larger socio-cultural context.
This research shows that an organization that incorporates institutional complexity comes to represent different, partly incompatible things to its members without being at war with itself. In doing so, it contributes to our knowledge of institutional complexity and organizational hybridity. It also advances our understanding of internal organizational legitimacy and of the translation of managerial concepts in organizations.
Towards seasonal prediction: stratosphere-troposphere coupling in the atmospheric model ICON-NWP
(2020)
Stratospheric variability is one of the main potential sources for sub-seasonal to seasonal predictability in mid-latitudes in winter. Stratospheric pathways play an important role for long-range teleconnections between tropical phenomena, such as the quasi-biennial oscillation (QBO) and El Niño-Southern Oscillation (ENSO), and the mid-latitudes on the one hand, and linkages between Arctic climate change and the mid-latitudes on the other hand. In order to move forward in the field of extratropical seasonal predictions, it is essential that an atmospheric model is able to realistically simulate the stratospheric circulation and variability. The numerical weather prediction (NWP) configuration of the ICOsahedral Non-hydrostatic atmosphere model ICON is currently being used by the German Meteorological Service for the regular weather forecast, and is intended to produce seasonal predictions in future. This thesis represents the first extensive evaluation of Northern Hemisphere stratospheric winter circulation in ICON-NWP by analysing a large set of seasonal ensemble experiments.
An ICON control climatology simulated with a default setup is able to reproduce the basic behaviour of the stratospheric polar vortex. However, stratospheric westerlies are significantly too weak and major stratospheric warmings too frequent, especially in January. The weak stratospheric polar vortex in ICON is furthermore connected to a mean sea level pressure (MSLP) bias pattern resembling the negative phase of the Arctic Oscillation (AO). Since a good representation of the drag exerted by gravity waves is crucial for a realistic simulation of the stratosphere, three sensitivity experiments with reduced gravity wave drag are performed. Reducing either the non-orographic or the orographic gravity wave drag leads to a strengthening of the stratospheric vortex and thus a bias reduction in winter, in particular in January. However, the effect of the non-orographic gravity wave drag on the stratosphere is stronger. A third experiment, combining reduced orographic and non-orographic drag, exhibits the largest stratospheric bias reductions. The analysis of stratosphere-troposphere coupling based on an index of the Northern Annular Mode demonstrates that ICON realistically represents downward coupling. This coupling is intensified and more realistic in experiments with a reduced gravity wave drag, in particular with reduced non-orographic drag. Tropospheric circulation is also affected by the reduced gravity wave drag, especially in January, when the strongly improved stratospheric circulation reduces biases in the MSLP patterns. Moreover, a retuning of the subgrid-scale orography parameterisations leads to a significant error reduction in the MSLP in all months. In conclusion, the combination of these adjusted parameterisations is recommended as the current optimal setup for seasonal simulations with ICON.
Additionally, this thesis discusses further possible influences on the stratospheric polar vortex, including the influence of tropical phenomena, such as the QBO and ENSO, as well as the influence of a rapidly warming Arctic. ICON does not simulate the quasi-oscillatory behaviour of the QBO and favours weak easterlies in the tropical stratosphere. A comparison with a reanalysis composite of the easterly QBO phase reveals that the shift towards the easterly QBO in ICON further weakens the stratospheric polar vortex. On the other hand, the stratospheric reaction to ENSO events in ICON is realistic. ICON and the reanalysis exhibit a weakened stratospheric vortex in warm ENSO years. Furthermore, in particular in winter, warm ENSO events favour the negative phase of the Arctic Oscillation, whereas cold events favour the positive phase. The ICON simulations also suggest a significant effect of ENSO on the Atlantic-European sector in late winter. To investigate the influence of Arctic climate change on mid-latitude circulation changes, two differing approaches with transient and fixed sea ice conditions are chosen. Neither approach reproduces the negative Arctic Oscillation response of the mid-latitude troposphere to amplified Arctic warming that has been discussed on the basis of observational evidence. Nevertheless, adding a new model to the current and active discussion on Arctic-midlatitude linkages further contributes to the understanding of divergent conclusions between model and observational studies.
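The composite analyses mentioned above (e.g. warm minus cold ENSO years) boil down to differencing the ensemble means of a circulation index between two groups of years. The sketch below shows that operation on invented toy wind values; the numbers are not from ICON or any reanalysis.

```python
# Sketch of a warm-minus-cold ENSO composite, the kind of analysis described
# above, on toy seasonal-mean stratospheric zonal winds (m/s, invented values).

def mean(values):
    return sum(values) / len(values)

u_warm_years = [18.0, 16.5, 17.2]   # vortex winds in warm ENSO winters (toy)
u_cold_years = [24.0, 22.5, 23.1]   # vortex winds in cold ENSO winters (toy)

# A negative composite difference indicates a weakened vortex in warm years,
# consistent with the behaviour described above.
composite_diff = mean(u_warm_years) - mean(u_cold_years)
```

In practice, such composites are computed per grid point and tested for significance, but the per-index arithmetic is exactly this difference of group means.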
Successfully completing any data science project demands careful consideration across its whole process. Although the focus is often put on later phases of the process, in practice experts spend more time in earlier phases, preparing data to make them consistent with the systems' requirements or to improve their models' accuracies. Duplicate detection is typically applied during the data cleaning phase, which is dedicated to removing data inconsistencies and improving the overall quality and usability of data. While data cleaning involves a plethora of approaches to perform specific operations, such as schema alignment and data normalization, the task of detecting and removing duplicate records is particularly challenging. Duplicates arise when multiple records representing the same entities exist in a database, for numerous reasons ranging from simple typographical errors to the differing schemas and formats of integrated databases. Keeping a database free of duplicates is crucial for most use cases, as their existence causes false negatives and false positives when matching queries against it. These two data quality issues have negative implications for tasks such as hotel booking, where users may erroneously select a wrong hotel, or parcel delivery, where a parcel can get delivered to the wrong address. Identifying the variety of possible data issues in order to eliminate duplicates demands sophisticated approaches.
While research in duplicate detection is well-established and covers different aspects of both efficiency and effectiveness, our work in this thesis focuses on the latter. We propose novel approaches to improve data quality before duplicate detection takes place and apply the latter in datasets even when prior labeling is not available. Our experiments show that improving data quality upfront can increase duplicate classification results by up to 19%. To this end, we propose two novel pipelines that select and apply generic as well as address-specific data preparation steps with the purpose of maximizing the success of duplicate detection. Generic data preparation, such as the removal of special characters, can be applied to any relation with alphanumeric attributes. When applied, data preparation steps are selected only for attributes where there are positive effects on pair similarities, which indirectly affect classification, or on classification directly. Our work on addresses is twofold; first, we consider more domain-specific approaches to improve the quality of values, and, second, we experiment with known and modified versions of similarity measures to select the most appropriate per address attribute, e.g., city or country.
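The core idea of selecting a preparation step only where it helps can be sketched as follows: apply a candidate step to known duplicate pairs and keep it only if it raises their mean similarity. The step, similarity measure and data below are illustrative assumptions, not the thesis's actual pipeline.

```python
from difflib import SequenceMatcher

# Sketch of similarity-guided selection of a data-preparation step: a candidate
# step is kept for an attribute only if it increases the mean similarity of
# known duplicate pairs. Step and data are invented for illustration.

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def remove_special_characters(value):
    # A generic preparation step, as mentioned above.
    return "".join(ch for ch in value if ch.isalnum() or ch.isspace()).strip()

def keep_step(step, duplicate_pairs):
    before = sum(similarity(a, b) for a, b in duplicate_pairs) / len(duplicate_pairs)
    after = sum(similarity(step(a), step(b))
                for a, b in duplicate_pairs) / len(duplicate_pairs)
    return after > before

pairs = [("Main St. 5", "Main St 5"), ("Berlin!", "Berlin")]
use_step = keep_step(remove_special_characters, pairs)
```

Here the step lifts both pairs to identical strings, so it would be selected for this attribute; a step that lowered pair similarities would be rejected.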
To facilitate duplicate detection in applications where gold standard annotations are not available and obtaining them is not possible or too expensive, we propose MDedup. MDedup is a novel, rule-based, and fully automatic duplicate detection approach that is based on matching dependencies. These dependencies can be used to detect duplicates and can be discovered using state-of-the-art algorithms efficiently and without any prior labeling. MDedup uses two pipelines to first train on datasets with known labels, learning to identify useful matching dependencies, and then be applied on unseen datasets, regardless of any existing gold standard. Finally, our work is accompanied by open source code to enable repeatability of our research results and application of our approaches to other datasets.
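A matching dependency can be read as a rule of the form "if two records are sufficiently similar on these attributes, declare them a match". The toy rule, thresholds and records below are invented to illustrate that reading; they are not MDedup's learned dependencies.

```python
from difflib import SequenceMatcher

# Toy illustration of a matching dependency (MD) used as a duplicate-detection
# rule. The attributes, thresholds and records are invented for illustration.

def sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def satisfies_md(r1, r2, md):
    """md maps each attribute to its minimum similarity threshold."""
    return all(sim(r1[attr], r2[attr]) >= t for attr, t in md.items())

# "If names are >= 0.9 similar and cities are equal, the records are duplicates."
md = {"name": 0.9, "city": 1.0}

r1 = {"name": "Jon Smith", "city": "Berlin"}
r2 = {"name": "John Smith", "city": "Berlin"}
r3 = {"name": "Jane Doe", "city": "Berlin"}

dup_12 = satisfies_md(r1, r2, md)   # typo-level name difference: rule fires
dup_13 = satisfies_md(r1, r3, md)   # different person: rule does not fire
```

Since such dependencies can be discovered from the data itself, rules of this shape can classify duplicates on unseen datasets without any prior labeling, which is the premise of the approach described above.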
Unlike its prevailing terrestrial conditions today, Central Asia also witnessed marine environments in its geologic past. A vast, shallow sea, known as the proto-Paratethys, extended across Eurasia from the Mediterranean Tethys to the Tarim Basin in western China during Cretaceous to Paleogene times. This sea formed about 160 million years ago (during Jurassic times) when the waters of the Tethys Ocean flooded into Eurasia. It drastically retreated to the west and became isolated as the Paratethys during the Late Eocene-Oligocene (ca. 34 Ma).
Well-constrained timing and paleogeography for the Cretaceous-Paleogene proto-Paratethys sea incursions in Central Asia are essential to properly understand and distinguish the controlling mechanisms and their link to Asian paleoenvironmental and paleoclimatic change. The Cretaceous-Paleogene tectonic evolution of the Pamir and Tibet and their far-field effects play a significant role in the sedimentological and structural evolution of the Central Asian basins, as well as in the fluctuations of the proto-Paratethys Sea. Comparing the records of the sea incursions to the tectonic and eustatic events is of paramount importance to reveal the controlling mechanisms behind the sea incursions. However, due to inaccuracies in the dating of rocks (mostly continental rocks and marine rocks with benthic microfossils providing low-resolution biostratigraphic constraints) and conflicting results, there has been no consensus on the timing of the sea incursions, and the interpretation of their records has been in question. Here, we present a new chronostratigraphic framework based on biostratigraphy and magnetostratigraphy, as well as a detailed paleoenvironmental analysis, for the Cretaceous and Paleogene proto-Paratethys Sea incursions in the Tajik and Tarim basins in Central Asia. This enables us to identify the major drivers of marine fluctuations and their potential consequences for regional and global climate, particularly Asian aridification and global carbon cycle perturbations such as the Paleocene-Eocene Thermal Maximum (PETM). To estimate the paleogeographic evolution of the proto-Paratethys Sea, the refined age constraints and detailed paleoenvironmental interpretations are combined with successive paleogeographic maps. Regional coastlines and depositional environments during the Cretaceous-Paleogene sea advances and retreats were drawn based on the results of this thesis and integrated with the existing literature to generate new paleogeographic maps.
Before its final westward retreat in the Eocene, a total of six Cretaceous and Paleogene major sea incursions have been distinguished from the sedimentary records of the Tajik and Tarim basins in Central Asia. All have been studied and documented here.
We identify the presence of marine conditions already in the Early Cretaceous in the western Tajik Basin, followed by the Cenomanian (ca. 100 Ma) and Santonian (ca. 86 Ma) major marine incursions far into the eastern Tajik and Tarim basins, separated by a Turonian-Coniacian (ca. 92-86 Ma) regression. Basin-wide tectonic subsidence analyses imply that the Early Cretaceous invasion of the sea into the Tajik Basin is related to increased Pamir tectonism (at ca. 130-90 Ma) in a retro-arc basin setting inferred to be linked to collision and subduction. This tectonic event mainly governed the Cenomanian (ca. 100 Ma) sea incursion, in conjunction with a coeval global eustatic high resulting in the maximum geographic extent of the sea. The following Turonian-Coniacian (ca. 92-86 Ma) major regression, driven by eustasy, coincides with a sharp slowdown in tectonic subsidence related to a regime change in Pamir tectonism from compression to extension. The Santonian (ca. 86 Ma) major sea incursion was more likely dominantly controlled by eustasy, as also evidenced by the coeval fluctuations in the West Siberian Basin. During the early Maastrichtian, the global Late Cretaceous cooling is inferred from the disappearance of mollusk-rich limestones and the dominance of bryozoan-rich and echinoderm-rich limestones in the Tajik Basin, documenting the first evidence for the Late Cretaceous cooling event in Central Asia.
Following the last Cretaceous sea incursion, a major regional restriction event, marked by exceptionally thick (≤ 400 m) shelf evaporites, is assigned a Danian-Selandian age (ca. 63-59 Ma). This is followed by the largest recorded proto-Paratethys sea incursion, with a transgression estimated as early Thanetian (ca. 59-57 Ma) and a regression within the Ypresian (ca. 53-52 Ma). The transgression of the next incursion is now constrained as early Lutetian (ca. 47-46 Ma), whereas its regression is constrained as late Lutetian (ca. 41 Ma) and is associated with a drastic increase in both tectonic subsidence and basin infilling. The age of the final and least pronounced sea incursion, restricted to the westernmost margin of the Tarim Basin, is assigned as Bartonian–Priabonian (ca. 39.7-36.7 Ma). We interpret the long-term westward retreat of the proto-Paratethys Sea starting at ca. 41 Ma to be associated with far-field tectonic effects of the Indo-Asia collision and Pamir/Tibetan plateau uplift. Short-term eustatic sea-level transgressions are superimposed on this long-term regression and seem coeval with the transgression events in the other northern Peri-Tethyan sedimentary provinces for the 1st and 2nd Paleogene sea incursions. However, the last Paleogene sea incursion is interpreted as related to tectonism. The transgressive and regressive intervals of the proto-Paratethys Sea correlate well with the reported humid and arid phases, respectively, in the Qaidam and Xining basins, thus demonstrating the role of the proto-Paratethys Sea as an important moisture source for the Asian interior and its regression as a contributor to Asian aridification.
We lastly study the mechanics, relative contribution, and preservation efficiency of ancient epicontinental seas as carbon sinks with new and existing data, using organic-rich (sapropel) deposits dated to the PETM from the extensive epicontinental proto-Paratethys and West Siberian seas. We estimate ca. 1390±230 Gt of organic C burial, meaning that a substantial share of the previously estimated global total excess organic C burial (ca. 1700-2900 Gt) is focused in the proto-Paratethys and West Siberian seas alone. We also speculate that enhanced organic carbon burial over much of the proto-Paratethys (and later Paratethys) basin (during the deposition of the Kuma Formation and the Maikop series, respectively) may have contributed majorly to the drawdown of atmospheric carbon dioxide before and during the EOT cooling and the glaciation of Antarctica. For past periods with smaller epicontinental seas, the effectiveness of this negative carbon-cycle feedback was arguably diminished, and the same likely applies to the present day.
With his September 2015 speech “Breaking the tragedy of the horizon”, Mark Carney, then Governor of the Bank of England, put climate change on the agenda of financial market regulators. Until then, climate change had been framed mainly as a problem of negative externalities leading to long-term economic costs, which resulted in countries trying to keep the short-term costs of climate action to a minimum. Carney argued that climate change, as well as climate policy, can also lead to short-term financial risks, potentially causing strong adjustments in asset prices. Analysing the effect of a sustainability transition on the financial sector challenges traditional economic and financial analysis and requires a much deeper understanding of the interrelations between climate policy and financial markets.
This dissertation thus investigates the implications of climate policy for financial markets as well as the role of financial markets in a transition to a sustainable economy. The approach combines insights from macroeconomic and financial risk analysis. Following an introduction and classification in Chapter 1, Chapter 2 shows a macroeconomic analysis that combines ambitious climate targets (negative externality) with technological innovation (positive externality), adaptive expectations and an investment program, resulting in overall positive macroeconomic outcomes. The analysis also reveals the limitations of climate economic models in their representation of financial markets. Therefore, the subsequent part of this dissertation is concerned with the link between climate policies and financial markets. In Chapter 3, an empirical analysis of stock-market responses to the announcement of climate policy targets is performed to investigate impacts of climate policy on financial markets. Results show that 1) international climate negotiations have an effect on asset prices and 2) investors increasingly recognize transition risks in carbon-intensive investments. In Chapter 4, an analysis of equity markets and the interbank market shows that transition risks can potentially affect a large part of the equity market and that financial interconnections can amplify negative shocks. In Chapter 5, an analysis of mortgage loans shows how information on climate policy and the energy performance of buildings can be integrated into risk management and reflected in interest rates.
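The abstract does not name the estimation method used in Chapter 3; a common way to measure stock-market responses to announcements is an event study on abnormal returns under a market model. A purely illustrative sketch on synthetic data (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns: 100 estimation days, then a 10-day event window.
market = rng.normal(0.0, 0.01, 110)
stock = 0.0002 + 1.2 * market + rng.normal(0.0, 0.005, 110)
stock[100:105] -= 0.01  # injected negative reaction to the "announcement"

# Market model (alpha, beta) fitted on the estimation window only.
beta, alpha = np.polyfit(market[:100], stock[:100], 1)

# Abnormal returns and cumulative abnormal return (CAR) in the event window.
ar = stock[100:] - (alpha + beta * market[100:])
car = ar.sum()
```

A significantly negative CAR around climate policy announcements for carbon-intensive firms would be the kind of evidence consistent with investors pricing in transition risks.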
While the costs of climate action have been explored in great depth, this dissertation offers two main contributions. First, it highlights the importance of a green investment program to strengthen the macroeconomic benefits of climate action. Second, it shows different approaches to integrating transition risks and opportunities into financial market analysis. Anticipating potential losses and gains in the value of financial assets as early as possible can make the financial system more resilient to transition risks and can stimulate investments in the decarbonization of the economy.
Gold at the nanoscale
(2020)
In this cumulative dissertation, I want to present my contributions to the field of plasmonic nanoparticle science. Plasmonic nanoparticles are characterised by resonances of the free electron gas around the spectral range of visible light. In recent years, they have evolved as promising components for light-based nanocircuits, light harvesting, nanosensors, cancer therapies, and many more applications.
This work exhibits the articles I authored or co-authored in my time as PhD student at the University of Potsdam. The main focus lies on the coupling between localised plasmons and excitons in organic dyes. Plasmon–exciton coupling brings light–matter coupling to the nanoscale. This size reduction is accompanied by strong enhancements of the light field which can, among others, be utilised to enhance the spectroscopic footprint of molecules down to single molecule detection, improve the efficiency of solar cells, or establish lasing on the nanoscale. When the coupling exceeds all decay channels, the system enters the strong coupling regime. In this case, hybrid light–matter modes emerge utilisable as optical switches, in quantum networks, or as thresholdless lasers. The present work investigates plasmon–exciton coupling in gold–dye core–shell geometries and contains both fundamental insights and technical novelties. It presents a technique which reveals the anticrossing in coupled systems without manipulating the particles themselves. The method is used to investigate the relation between coupling strength and particle size. Additionally, the work demonstrates that pure extinction measurements can be insufficient when trying to assess the coupling regime. Moreover, the fundamental quantum electrodynamic effect of vacuum induced saturation is introduced. This effect causes the vacuum fluctuations to diminish the polarisability of molecules and has not yet been considered in the plasmonic context.
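The anticrossing referred to above is commonly described by a coupled-oscillator model; as a generic illustration (a textbook form, not a formula quoted from the articles), the hybrid mode energies read

```latex
E_{\pm} = \frac{E_{\mathrm{pl}} + E_{\mathrm{ex}}}{2}
          \pm \sqrt{g^{2} + \frac{\left(E_{\mathrm{pl}} - E_{\mathrm{ex}}\right)^{2}}{4}},
```

where $E_{\mathrm{pl}}$ and $E_{\mathrm{ex}}$ are the plasmon and exciton energies and $g$ is the coupling strength. At zero detuning the splitting of the two branches is $2g$ (the Rabi splitting); strong coupling roughly requires this splitting to exceed the mean linewidth of the two resonances.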
The work additionally discusses the reaction of gold nanoparticles to optical heating. Such knowledge is of great importance for all potential optical applications utilising plasmonic nanoparticles, since optical excitation always generates heat. This heat can induce a change in the optical properties, but mechanical changes up to melting can also occur. Here, the change of spectra in coupled plasmon–exciton particles is discussed and explained with a precise model. Moreover, the work discusses the behaviour of gold nanotriangles exposed to optical heating. In a pump–probe measurement, X-ray probe pulses directly monitored the particles’ breathing modes. In another experiment, the triangles were exposed to cw laser radiation with varying intensities and illumination areas. X-ray diffraction directly measured the particles’ temperature. Particle melting was investigated with surface-enhanced Raman spectroscopy and SEM imaging, demonstrating that larger illumination areas can cause melting at lower intensities. An elaborate methodological and theoretical introduction precedes the articles. This way, readers without specialist knowledge also get a concise and detailed overview of the theory and methods used in the articles. I introduce localised plasmons in metal nanoparticles of different shapes. For this work, the plasmons were mostly coupled to excitons in J-aggregates. Therefore, I discuss these aggregates of organic dyes with sharp and intense resonances and establish an understanding of the coupling between the two systems. For ab initio simulations of the coupled systems, models for the systems’ permittivities are presented, too. Moreover, the route to the sample fabrication – the dye coating of gold nanoparticles, their subsequent deposition on substrates, and the covering with polyelectrolytes – is presented together with the measurement methods that were used for the articles.
Perovskite solar cells have become one of the most studied systems in the quest for new, cheap, and efficient solar cell materials. Within a decade, device efficiencies have risen to >25% in single-junction and >29% in tandem devices on top of silicon. This rapid improvement was in many ways fortunate, as, e.g., the energy levels of commonly used halide perovskites are compatible with already existing materials from other photovoltaic technologies such as dye-sensitized or organic solar cells. Despite this rapid success, fundamental working principles must be understood to allow concerted further improvements. This thesis focuses on a comprehensive understanding of recombination processes in functioning devices.
First, the impact of the energy level alignment between the perovskite and the fullerene-based electron transport layer is investigated. This controversial topic is comprehensively addressed, and recombination is mitigated by reducing the energy difference between the perovskite conduction band minimum and the LUMO of the fullerene. Additionally, an insulating blocking layer is introduced, which is even more effective in reducing this recombination, without compromising carrier collection and thus efficiency. With the rapid efficiency development (certified efficiencies have broken through the 20% ceiling) and thousands of researchers working on perovskite-based optoelectronic devices, reliable protocols on how to reach these efficiencies are lacking. Having established robust methods for >20% devices, while keeping track of possible pitfalls, a detailed description of the fabrication of perovskite solar cells at the highest efficiency level (>20%) is provided. The fabrication of low-temperature p-i-n structured devices is described, commenting on important factors such as practical experience, processing atmosphere and temperature, material purity, and solution age. Analogous to reliable fabrication methods, a method to identify recombination losses is needed to further improve efficiencies. Thus, absolute photoluminescence is identified as a direct way to quantify the quasi-Fermi level splitting of the perovskite absorber (1.21 eV) and the interfacial recombination losses imposed by the transport layers, which reduce the latter to ~1.1 eV. By implementing very thin interlayers at both the p- and n-interface (PFN-P2 and LiF, respectively), these losses are suppressed, enabling a VOC of up to 1.17 V. By optimizing the device dimensions and the bandgap, 20% devices with 1 cm² active area are demonstrated. Another important consideration is the solar cells’ stability when subjected to field-relevant stressors during operation.
In particular, these are heat, light, bias, or a combination thereof. Perovskite layers – especially those incorporating organic cations – have been shown to degrade when subjected to these stressors. Keeping in mind that several interlayers have been successfully used to mitigate recombination losses, a family of perfluorinated self-assembled monolayers (X-PFCn, where X denotes I/Br and n = 7-12) is introduced as interlayers at the n-interface. Indeed, they reduce interfacial recombination losses, enabling device efficiencies up to 21.3%. Even more importantly, they improve the stability of the devices. The solar cells with IPFC10 are stable over 3000 h stored in ambient conditions and withstand a harsh 250 h of MPP tracking at 85 °C without appreciable efficiency losses. To advance further and improve device efficiencies, a sound understanding of the photophysics of a device is imperative. Many experimental observations in recent years have, however, drawn an inconclusive picture, often suffering from technical or physical impediments, disguising, e.g., capacitive discharge as recombination dynamics. To circumvent these obstacles, fully operational, highly efficient perovskite solar cells are investigated by a combination of multiple optical and optoelectronic probes, allowing a conclusive picture of the recombination dynamics in operation to be drawn. Supported by drift-diffusion simulations, the device recombination dynamics can be fully described by a combination of first-, second-, and third-order recombination, and JV curves as well as luminescence efficiencies over multiple illumination intensities are well described within the model. On this basis, steady-state carrier densities, effective recombination constants, densities of states, and effective masses are calculated, putting the devices at the brink of the radiative regime. Moreover, a comprehensive review of recombination in state-of-the-art devices is given, highlighting the importance of interfaces in nonradiative recombination.
Different strategies to assess these are discussed, before emphasizing successful strategies to reduce interfacial recombination and pointing towards the necessary steps to further improve device efficiency and stability. Overall, the main findings represent an advancement in understanding loss mechanisms in highly efficient solar cells. Different reliable optoelectronic techniques are used and interfacial losses are found to be of grave importance for both efficiency and stability. Addressing the interfaces, several interlayers are introduced, which mitigate recombination losses and degradation.
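The first-, second-, and third-order recombination channels discussed above are conventionally summarized in a single rate equation for the carrier density $n$ (a textbook form, not quoted from the thesis):

```latex
\frac{\mathrm{d}n}{\mathrm{d}t} = G - k_{1} n - k_{2} n^{2} - k_{3} n^{3},
```

where $G$ is the generation rate, $k_{1}$ the first-order (trap-assisted) rate, $k_{2}$ the second-order (radiative) coefficient, and $k_{3}$ the third-order (Auger) coefficient. Fitting intensity-dependent JV curves and luminescence efficiencies constrains these coefficients jointly.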
The field of gamma-ray astronomy opened a new window into the non-thermal universe that allows studying the acceleration sites of cosmic rays and the role of cosmic rays in evolutionary processes in galaxies. The detection of almost one hundred Galactic very-high-energy (VHE: 0.1-100 TeV) gamma-ray sources in the Milky Way demonstrates that particle acceleration up to tens of TeV energies is a common phenomenon. Furthermore, the detection of VHE gamma rays from other galaxies has confirmed that cosmic rays are not exclusively accelerated in the Milky Way. The rapid development of gamma-ray astronomy in the past two decades has led to a transition from the detection and study of individual sources to source population studies. To answer the question of whether the VHE gamma-ray source population of the Milky Way is unique, observations of galaxies for which individual sources can be resolved are required. Such galaxies are the Magellanic Clouds, two satellite galaxies of the Milky Way, which have been surveyed by the H.E.S.S. experiment in the last decade. In this thesis, data from a total of 450 hours of H.E.S.S. observations towards the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC) are presented. During the analysis of the data sets, special emphasis is put on the evaluation of systematic uncertainties of the experiment in order to assure an unbiased flux estimation of the potential VHE gamma-ray sources of the Magellanic Clouds. A detailed analysis of the survey data revealed the detection of the gamma-ray binary LMC P3, the most powerful gamma-ray binary known so far, which is located in the LMC and thus increases the number of known VHE gamma-ray sources in the LMC to four. No other VHE gamma-ray source is detected in the Magellanic Clouds, and integral flux upper limits are estimated. These flux upper limits are used to perform a source population study based on known VHE source classes and existing multi-wavelength catalogues.
A comparison of the source populations of the Magellanic Clouds and the Milky Way revealed that no other source in the Magellanic Clouds is as bright as the most luminous VHE gamma-ray source in the LMC, the pulsar wind nebula N 157B, and that one-third of the source population of the Magellanic Clouds is less luminous than the other known VHE gamma-ray sources in the LMC. Only for a couple of sources do the limits constrain luminosity levels typical of Galactic VHE sources, which are more than one order of magnitude fainter than the detected sources in the LMC. Based on the flux upper limits, differences between the TeV source populations of the Magellanic Clouds and the Milky Way, as well as the importance of the source environments, are discussed.
Large-scale patterns of global land use change are very frequently accompanied by the loss of natural habitat. To assess the consequences of habitat loss for the remaining natural and semi-natural biotopes, the inclusion of cumulative effects at the landscape level is required. The interdisciplinary concept of vulnerability constitutes an appropriate assessment framework at the landscape level, though with few examples of its application to ecological assessments. A comprehensive biotope vulnerability analysis allows the identification of areas most affected by landscape change and, at the same time, with the lowest chances of regeneration.
To this end, a series of ecological indicators were reviewed and developed. They measured spatial attributes of individual biotopes as well as some ecological and conservation characteristics of the respective resident species community. The final vulnerability index combined seven largely independent indicators, which covered exposure, sensitivity and adaptive capacity of biotopes to landscape changes. Results for biotope vulnerability were provided at the regional level. This seems to be an appropriate extent with relevance for spatial planning and designing the distribution of nature reserves.
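The aggregation step described above can be sketched generically: normalise each indicator to a common scale and combine the values into one score per biotope. The indicator values, their orientation (higher = more vulnerable), and the equal weighting below are illustrative assumptions, not the seven indicators actually used in the thesis:

```python
import numpy as np

# Rows: biotopes; columns: indicators (e.g., exposure, sensitivity,
# adaptive capacity), all oriented so that higher means more vulnerable.
indicators = np.array([
    [0.2, 30.0, 5.0],
    [0.8, 10.0, 2.0],
    [0.5, 20.0, 8.0],
])

def vulnerability_index(x: np.ndarray) -> np.ndarray:
    """Min-max normalise each indicator, then average per biotope."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    z = (x - lo) / (hi - lo)          # each column scaled to [0, 1]
    return z.mean(axis=1)             # equal-weight aggregation

scores = vulnerability_index(indicators)  # one score in [0, 1] per biotope
```

In practice, indicators measuring adaptive capacity would be inverted before aggregation, and hot-spot analysis would then be run on the resulting scores.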
Using the vulnerability scores calculated for the German federal state of Brandenburg, hot spots and clusters within and across the distinguished types of biotopes were analysed. Biotope types with high dependence on water availability, as well as biotopes of the open landscape containing woody plants (e.g., orchard meadows) are particularly vulnerable to landscape changes. In contrast, the majority of forest biotopes appear to be less vulnerable. Despite the appeal of such generalised statements for some biotope types, the distribution of values suggests that conservation measures for the majority of biotopes should be designed specifically for individual sites. Taken together, size, shape and spatial context of individual biotopes often had a dominant influence on the vulnerability score.
The implementation of the biotope vulnerability analysis at the regional level indicated that large biotope datasets can be evaluated with a high level of detail using geoinformatics. Drawing on previous work in landscape spatial analysis, the reproducible approach relies on transparent calculations of quantitative and qualitative indicators. At the same time, it provides a synoptic overview and information on the individual biotopes. It is expected to be most useful for nature conservation in combination with an understanding of population, species, and community attributes known for specific sites. The biotope vulnerability analysis facilitates a foresighted assessment of different land uses, aiding in identifying options to slow habitat loss to sustainable levels. It can also be incorporated into the planning of restoration measures, guiding efforts to remedy ecological damage. Restoration of any specific site could yield synergies with the conservation objectives of other sites, through enhancing the habitat network or buffering against future landscape change.
Biotope vulnerability analysis could be developed in line with other important ecological concepts, such as resilience and adaptability, further extending the broad thematic scope of the vulnerability concept. Vulnerability can increasingly serve as a common framework for the interdisciplinary research necessary to solve major societal challenges.
Cardiac valves are essential for the continuous and unidirectional flow of blood throughout the body. During embryonic development, their formation is strictly connected to the mechanical forces exerted by blood flow. The endocardium that lines the interior of the heart is a specialized endothelial tissue and is highly sensitive to fluid shear stress. Endocardial cells harbor a signal transduction machinery required for the translation of these forces into biochemical signaling, which strongly impacts cardiac morphogenesis and physiology. To date, we lack a solid understanding of the mechanisms by which endocardial cells sense the dynamic mechanical stimuli and how they trigger different cellular responses. In the zebrafish embryo, endocardial cells at the atrioventricular canal respond to blood flow by rearranging from a monolayer to a double layer, composed of a luminal cell population subjected to blood flow and an abluminal one that is not exposed to it. These early morphological changes lead to the formation of an immature valve leaflet. While previous studies mainly focused on genes that are positively regulated by shear stress, the mechanisms regulating cell behaviors and fates in cells that lack the stimulus of blood flow are largely unknown. One key discovery of my work is that the flow-sensitive Notch receptor and Krüppel-like factor (Klf) 2, one of the best-characterized flow-regulated transcription factors, are activated by shear stress but function in two parallel signal transduction pathways. Each of these two pathways is essential for the rearrangement of atrioventricular cells into an immature double-layered valve leaflet. A second key discovery of my study is the finding that both Notch and Klf2 signaling negatively regulate the expression of the angiogenesis receptor Vegfr3/Flt4, which becomes restricted to abluminal endocardial cells of the valve leaflet.
Within these cells, Flt4 downregulates the expression of the cell adhesion proteins Alcam and VE-cadherin. A loss of Flt4 causes abluminal endocardial cells to ectopically express Notch, which is normally restricted to luminal cells, and impairs valve morphology. My study suggests that abluminal endocardial cells that do not experience mechanical stimuli lose Notch expression, and this triggers expression of Flt4. In turn, Flt4 negatively regulates Notch on the abluminal side of the valve leaflet. These antagonistic signaling activities and fine-tuned gene regulatory mechanisms ultimately shape cardiac valve leaflets by inducing unique differences in the fates of endocardial cells.
The hepatokine FGF21 and the adipokine chemerin have been implicated as metabolic regulators and mediators of inter-tissue crosstalk. While FGF21 is associated with beneficial metabolic effects and is currently being tested as an emerging therapeutic for obesity and diabetes, chemerin is linked to inflammation-mediated insulin resistance. However, dietary regulation of both organokines and their role in tissue interaction needs further investigation.
The LEMBAS nutritional intervention study investigated the effects of two diets differing in their protein content in obese human subjects with non-alcoholic fatty liver disease (NAFLD). The study participants consumed hypocaloric diets containing either low (LP: 10 EN%, n = 10) or high (HP: 30 EN%, n = 9) dietary protein 3 weeks prior to bariatric surgery. Before and after the intervention the participants were anthropometrically assessed, blood samples were drawn, and hepatic fat content was determined by MRS. During bariatric surgery, paired subcutaneous and visceral adipose tissue biopsies as well as liver biopsies were collected. The aim of this thesis was to investigate circulating levels and tissue-specific regulation of (1) FGF21 and (2) chemerin in the LEMBAS cohort. The results were compared to data obtained in 92 metabolically healthy subjects with normal glucose tolerance and normal liver fat content.
(1) Serum FGF21 concentrations were elevated in the obese subjects, and strongly associated with intrahepatic lipids (IHL). In accordance, FGF21 serum concentrations increased with severity of NAFLD as determined histologically in the liver biopsies. Though both diets were successful in reducing IHL, the effect was more pronounced in the HP group. FGF21 serum concentrations and mRNA expression were bi-directionally regulated by dietary protein, independent from metabolic improvements. In accordance, in the healthy study subjects, serum FGF21 concentrations dropped by more than 60% in response to the HP diet. A short-term HP intervention confirmed the acute downregulation of FGF21 within 24 hours. Lastly, experiments in HepG2 cell cultures and primary murine hepatocytes identified nitrogen metabolites (NH4Cl and glutamine) to dose-dependently suppress FGF21 expression.
(2) Circulating chemerin concentrations were considerably elevated in the obese versus the lean study participants and were differently associated with markers of obesity and NAFLD in the two cohorts. The adipokine decreased in response to the hypocaloric interventions, while an unhealthy high-fat diet induced a rise in chemerin serum levels. In the lean subjects, mRNA expression of RARRES2, encoding chemerin, was strongly and positively correlated with the expression of several cytokines, including MCP1, TNFα, and IL6, as well as with markers of macrophage infiltration in the subcutaneous fat depot. However, RARRES2 was not associated with any cytokine assessed in the obese subjects, and the data indicated an involvement of chemerin not only in the onset but also in the resolution of inflammation. Analyses of the tissue biopsies and experiments in human primary adipocytes point towards a role of chemerin in adipogenesis, although discrepancies between the in vivo and in vitro data were detected.
Taken together, the results of this thesis demonstrate that circulating FGF21 and chemerin levels are considerably elevated in obesity and responsive to dietary interventions. FGF21 was acutely and bi-directionally regulated by dietary protein in a hepatocyte-autonomous manner. Given that both a lack of essential amino acids and excessive nitrogen intake exert metabolic stress, FGF21 may serve as an endocrine signal of dietary protein balance. Lastly, the data revealed that chemerin is derailed in obesity and associated with obesity-related inflammation. However, future studies on chemerin should consider functional and regulatory differences between secreted and tissue-specific isoforms.
The practical driving test serves to assess and evaluate the driving competence of driving licence applicants. The conclusions drawn from this test about the level of driving competence are intended, in particular, to support the applicant's further development. So far, however, only applicants who fail the practical driving test receive a list of the most important errors that led to the failure. For targeted further learning, it is necessary that the results of the performance assessment and evaluation be fed back to all novice drivers (regardless of the test result) in a pedagogically sophisticated manner, in accordance with didactic principles of examination.
The aim of the present work is to develop the design principles and an implementation proposal for a competence-related feedback system for the practical driving test that supports learning. This feedback system is to be tested in practice. In addition, a survey of applicants on user satisfaction is intended to yield insights for further development. The development and testing process of the optimized feedback system can be divided into three project phases:
1. In the course of the optimization work on the practical driving test, a new feedback system was developed in the first project phase, consisting of a competence-related oral debriefing and supplementary written feedback, including further learning guidance, for all applicants. On the one hand, this feedback system is intended to help novice drivers better understand the content of the performance evaluation and to enable targeted further learning. On the other hand, it is intended to motivate applicants to continue working on the identified competence deficits and thereby promote learning gains.
2. In the second project phase, the feedback system was tested in various model regions of Germany in approximately 9,000 real practical driving tests. The driving licence applicants who took part in an optimized practical driving test in the model regions, and who thus received written feedback in accordance with the optimized specifications or an individual access code to the download area, were invited to take part in a survey. The survey primarily captured aspects of acceptance and learning effectiveness from the applicants' perspective. The aim was to examine the quality of the traffic-pedagogical design of the feedback system and its usefulness, in order to further develop the tested feedback. The applicant survey was conducted as an online survey with a standardized questionnaire.
3. In the third project phase, the results of the field trial and the survey were used to derive conclusions for the further development of the feedback system. The available results of the field trial indicate that the provision of detailed written feedback on performance in the practical driving test is, overall, regarded as useful and beneficial. However, it also became clear that there is still room for improvement in the implementation. Following the trial, the written feedback was therefore comprehensively revised on the basis of the user experience gathered during the field trial, and a revised version was presented.
The result of this work is an empirically grounded and field-tested feedback system, developed in several steps, that enables differentiated competence feedback. In the future, this comprehensive feedback will, on the one hand, provide an improved starting point for a possible subsequent repeat test; on the other hand, the strengths and weaknesses it identifies will enable applicants to use the feedback for further learning even after passing the test.
Even though the majority of individuals know that exercising is healthy, a high percentage struggle to achieve the recommended amount of exercise. The (social-cognitive) theories commonly applied to explain exercise motivation rest on the assumption that people base their decisions mainly on rational reasoning. However, behavior is not only bound to reflection. In recent years, the role of automaticity and affect in exercise motivation has been increasingly discussed. In this dissertation, central assumptions of the affective–reflective theory of physical inactivity and exercise (ART; Brand & Ekkekakis, 2018), an exercise-specific dual-process theory that emphasizes the role of a momentary automatic affective reaction in exercise decisions, were examined. The central aim of this dissertation was to investigate exercisers' and non-exercisers' automatic affective reactions to exercise-related stimuli (i.e., the type-1 process). In particular, the two components of the ART's type-1 process, namely automatic associations with exercise and the automatic affective valuation of exercise, were under study.
In the first publication (Schinkoeth & Antoniewicz, 2017), research on automatic (evaluative) associations with exercise was summarized and evaluated in a systematic review. The results indicated that automatic associations with exercise are relevant predictors of exercise behavior and other exercise-related variables, providing evidence for a central assumption of the ART's type-1 process. Furthermore, indirect methods seem to be suitable for assessing automatic associations. The aim of the second publication (Schinkoeth, Weymar, & Brand, 2019) was to approach the somato-affective core of the automatic valuation of exercise by analyzing the reactivity of vagal HRV while participants viewed exercise-related pictures. Results revealed that differences in exercise volume could be regressed on HRV reactivity. In light of the ART, these findings were interpreted as evidence of an inter-individual affective reaction elicited at the thought of exercise and triggered by exercise stimuli. The third publication (Schinkoeth & Brand, 2019, subm.) sought to disentangle the two components of the ART's type-1 process, automatic associations and the affective valuation of exercise, and to relate them to each other. Automatic associations with exercise were assessed with a recoding-free variant of an implicit association test (IAT). Analysis of HRV reactivity was applied to approach a somatic component of the affective valuation, and facial reactions in a facial expression (FE) task served as indicators of the valence of the automatic affective reaction. Exercise behavior was assessed via self-report. The measurement of the affective valuation's valence with the FE task did not work well in this study. HRV reactivity was predicted by the IAT score and also statistically predicted exercise behavior. These results thus confirm and expand upon the results of the second publication and provide empirical evidence for the type-1 process as defined in the ART.
This dissertation advances the field of exercise psychology concerning the influence of automaticity and affect on exercise motivation. Moreover, both methodical implications and theoretical extensions for the ART can be derived from the results.
The current thesis is focused on the properties of graphene supported by metallic substrates and specifically on the behaviour of electrons in such systems. Methods of scanning tunneling microscopy, electron diffraction and photoemission spectroscopy were applied to study the structural and electronic properties of graphene. The purpose of the first part of this work is to introduce the most relevant aspects of graphene physics and the methodical background of experimental techniques used in the current thesis.
The scientific part of this work starts with an extensive scanning tunneling microscopy study of the nanostructures that appear in Au-intercalated graphene on Ni(111). This study aimed to explore possible structural explanations for the Rashba-type spin splitting of ~100 meV experimentally observed in this system, which is much larger than predicted by theory. It was demonstrated that gold can be intercalated under graphene not only as a dense monolayer, but also in the form of well-ordered periodic arrays of nanoclusters, a structure not previously reported. Such nanocluster arrays are able to decouple graphene from the strongly interacting Ni substrate and render it quasi-free-standing, as demonstrated by our DFT study. At the same time, the calculations confirm a strong enhancement of the proximity-induced SOI in graphene supported by such nanoclusters in comparison to a gold monolayer. This effect, attributed to the reduced graphene-Au distance in the case of clusters, provides a large Rashba-type spin splitting of ~60 meV.
The obtained results not only provide a possible mechanism of SOI enhancement in this particular system, but can also be generalized to graphene on other strongly interacting substrates intercalated with nanostructures of heavy noble d metals.
Even more intriguing is the proximity of graphene to heavy sp-metals, which have been predicted to induce an intrinsic SOI and realize a spin Hall effect in graphene. Bismuth is the heaviest stable sp-metal, and its compounds exhibit a plethora of exciting physical phenomena. This was the motivation behind the next part of the current thesis, in which the structural and electronic properties of a previously unreported phase of Bi-intercalated graphene on Ir(111) were studied by means of scanning tunneling microscopy, spin- and angle-resolved photoemission spectroscopy, and electron diffraction. Photoemission experiments revealed a remarkable, nearly ideal graphene band structure with strongly suppressed signatures of interaction between graphene and the Ir(111) substrate; moreover, the characteristic moiré pattern observed for graphene on Ir(111) by electron diffraction and scanning tunneling microscopy was strongly suppressed after intercalation. The whole set of experimental data indicates that Bi forms a dense intercalated layer that efficiently decouples graphene from the substrate. The interaction manifests itself only in n-type charge doping (~0.4 eV) and a relatively small band gap at the Dirac point (~190 meV). The origin of this minor band gap is quite intriguing, and in this work it was possible to exclude a wide range of mechanisms that could be responsible for it, such as an induced intrinsic spin-orbit interaction, hybridization with substrate states, and corrugation of the graphene lattice. The band gap was mainly attributed to A-B sublattice symmetry breaking, a conclusion supported by a careful analysis of interference effects in photoemission, which provided a band gap estimate of ~140 meV.
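The role of A-B sublattice symmetry breaking in opening a gap can be illustrated with the standard low-energy Dirac Hamiltonian of graphene; the mass term Δ below is a generic textbook parameter used for illustration, not a quantity derived in this thesis:

```latex
H(\mathbf{k}) = \hbar v_F \left( \sigma_x k_x + \sigma_y k_y \right) + \Delta\,\sigma_z ,
\qquad
E_\pm(\mathbf{k}) = \pm\sqrt{\left(\hbar v_F |\mathbf{k}|\right)^2 + \Delta^2} ,
\qquad
E_g = 2\Delta .
```

Here the Pauli matrices act on the A-B sublattice degree of freedom; a staggered on-site potential (Δ ≠ 0) gaps the otherwise linear Dirac cone, so a gap of ~190 meV would correspond to Δ ≈ 95 meV in this simple picture.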
While the previous chapters focused on adjusting the properties of graphene through proximity to heavy metals, graphene in its own right is a great object for studying various physical effects at crystal surfaces. The final part of this work is devoted to a study of surface scattering resonances by means of photoemission spectroscopy, where this effect manifests itself as a distinct modulation of the photoemission intensity. Though scattering resonances were widely studied in the past by means of electron diffraction, reports of their observation in photoemission experiments have only recently begun to appear and are still scarce.
For a comprehensive study of scattering resonances, graphene was selected as a versatile model system with adjustable properties. A theoretical and historical introduction to scattering resonances is followed by a detailed description of the unusual features observed in the photoemission spectra obtained in this work; finally, the equivalence between these features and scattering resonances is established. The obtained photoemission results are in good qualitative agreement with the existing theory, as verified by our calculations in the framework of the interference model. This simple model gives a suitable explanation for the general experimental observations.
The possibilities of engineering the scattering resonances were also explored. A systematic study of graphene on a wide range of substrates revealed that the energy position of the resonances is directly related to the magnitude of charge transfer between graphene and the substrate. Moreover, it was demonstrated that the scattering resonances in graphene on Ir(111) can be suppressed by nanopatterning, either with a superlattice of Ir nanoclusters or with atomic hydrogen. These effects were attributed to strong local variations of the work function and/or destruction of the long-range order of the graphene lattice. The tunability of scattering resonances could be exploited in graphene-based optoelectronic devices. Moreover, the results of this study expand the general understanding of the phenomenon of scattering resonances and are applicable to many other materials besides graphene.
What happens when distinct linguistic consciousnesses, in addition to being separated by era, geographical area of origin, social differentiation, or different linguistic dimensions, also belong to different semiotic domains? This is what happens every time we communicate online: digital interaction is the hybrid communicative setting par excellence, in which the mixing of different languages is overlaid by the mixing of different codes. Starting from the assumption that it is new expressive needs and new communicative situations that drive linguistic innovation, it therefore seems worthwhile to consider the prominence of the visual repertoire, and more generally the multimodal one, in the spontaneous use of new media, and to observe that the particular meaning-making strategies currently at work can no longer do without these additional dimensions. Awareness of their weight in the digital use of language is needed in order to approach all the related innovations without prejudice. A centrally important role in the approach to verbal language on the Internet is played by the indexical function of language which, combined with the presence of a shared reference archive of world knowledge, triggers a new type of inferential process in the receiver. Conversation via social networks in fact allows actions that are not necessarily present in face-to-face exchange but are instead peculiar to Facebook, Twitter, G+, Instagram, Flickr and social networks in general: the sharing of multimedia material of various kinds, the option of recalling messages related to a specific topic, and the possibility of glossing it. Multimedia material thus becomes, at the same time, an integral part of the communication and a mode of expression, the focus of the discourse and a shared metaphorical language.
This research investigates how different and apparently distant fields of research can interact productively with the scientific landscape of the language, image, and communication sciences, arriving at the formulation of an updated model of the linguistic hybridization that characterizes online communication.
In recent years, a substantial number of psycholinguistic studies and of studies on acquired language impairments have investigated the case of morphologically complex words. These have provided evidence for what is known as ‘morphological decomposition’, i.e. a mechanism that decomposes complex words into their constituent morphemes during online processing. This is believed to be a fundamental, possibly universal mechanism of morphological processing, operating irrespective of a word’s specific properties.
However, current accounts of morphological decomposition are mostly based on evidence from suffixed words and compound words, while prefixed words have been comparably neglected. At the same time, it has been consistently observed that, across languages, prefixed words are less widespread than suffixed words. This cross-linguistic preference for suffixing morphology has been claimed to be grounded in language processing and language learning mechanisms. This would predict differences in how prefixed words are processed and therefore also affected in language impairments, challenging the predictions of the major accounts of morphological decomposition.
Against this background, the present thesis aims at reducing the gap between the accounts of morphological decomposition and the accounts of the suffixing preference, by providing a thorough empirical investigation of prefixed words. Prefixed words are examined in three different domains: (i) visual word processing in native speakers; (ii) visual word processing in non-native speakers; (iii) acquired morphological impairments. The processing studies employ the masked priming paradigm, tapping into early stages of visual word recognition. The studies on morphological impairments, in contrast, investigate the errors produced in reading-aloud tasks.
As for native processing, the present work first focuses on derivation (Publication I), specifically investigating whether German prefixed derived words, both lexically restricted (e.g. inaktiv ‘inactive’) and unrestricted (e.g. unsauber ‘unclean’) can be efficiently decomposed. I then present a second study (Publication II) on a Bantu language, Setswana, which offers the unique opportunity of testing inflectional prefixes, and directly comparing priming with prefixed inflected primes (e.g. dikgeleke ‘experts’) to priming with prefixed derived primes (e.g. bokgeleke ‘talent’). With regard to non-native processing (Publication I), the priming effects obtained from the lexically restricted and unrestricted prefixed derivations in native speakers are additionally compared to the priming effects obtained in a group of non-native speakers of German. Finally, in the two studies on acquired morphological impairments, the thesis investigates whether prefixed derived words yield different error patterns than suffixed derived words (Publication III and IV).
For native speakers, the results show evidence for morphological decomposition of both types of prefixed words, i.e. lexically unrestricted and restricted derivations, as well as of prefixed inflected words. Furthermore, non-native speakers are also found to efficiently decompose prefixed derived words, with parallel results to the group of native speakers. I therefore conclude that, for the early stages of visual word recognition, the relative position of stem and affix in prefixed versus suffixed words does not affect how efficiently complex words are decomposed, either in native or in non-native processing. In the studies on acquired language impairments, instead, prefixes are consistently found to be more impaired than suffixes. This is explained in terms of a learnability disadvantage for prefixed words, which may cause weaker representations of the information encoded in affixes when these precede the stem (prefixes) as compared to when they follow it (suffixes). Based on the impairment profiles of the individual participants and on the nature of the task, this dissociation is assumed to emerge from later processing stages than those that are tapped into by masked priming. I therefore conclude that the different characteristics of prefixed and suffixed words do come into play at later processing stages, during which the lexical-semantic information contained in the different constituent morphemes is processed.
The findings presented in the four manuscripts significantly contribute to our current understanding of the mechanisms involved in processing prefixed words. Crucially, the thesis constrains the processing disadvantage for prefixed words to later processing stages, thereby suggesting that theories trying to establish links between language universals and processing mechanisms should more carefully consider the different stages involved in language processing and what factors are relevant for each specific stage.
Metal halide perovskites have emerged as an attractive class of materials for photovoltaic applications due to their excellent optoelectronic properties. However, long-term stability remains a roadblock on this material class's path to industrial application. Increasing evidence shows that intrinsic defects in perovskites promote material degradation. Consequently, understanding defect behaviour in perovskite materials is essential to further improve device stability and performance. This dissertation therefore focuses on the topic of defect chemistry in halide perovskites.
The first part of the dissertation gives a brief overview of the defect properties of halide perovskites. The second part then shows that doping methylammonium lead iodide with a small amount of alkaline earth metals (Sr and Mg) creates a higher-quality, less defective material, resulting in high open-circuit voltages in both the n-i-p and p-i-n architectures. The doping mechanism was found to have two distinct regimes: at low doping concentrations the dopants are incorporated into the lattice, whereas higher doping concentrations lead to phase segregation. The material is more n-doped in the low-doping regime and less n-doped in the high-doping regime. The threshold between these two regimes depends on the atomic size of the dopants.
The next part of the dissertation examines photo-induced degradation in methylammonium lead iodide. This degradation mechanism is closely linked to the formation and migration of ionic defects. Once formed, these ionic defects can migrate, though not freely; their mobility depends on the defect concentration and distribution. In fact, a region with a high defect concentration, such as a grain boundary, can inhibit the migration of ionic defects. This has implications for material design, as perovskite solar cells normally employ polycrystalline thin films with a high density of grain boundaries.
The final study presented in this PhD dissertation focuses on the stability of state-of-the-art triple-cation perovskite solar devices under external bias. Prolonged bias (more than three hours) is found to promote amorphization in the halide perovskite. The amorphous phase is suspected to accumulate at the interfaces, especially between the hole-selective layer and the perovskite. This amorphous phase inhibits charge collection and severely affects device performance. Nonetheless, the devices can recover after resting without bias in the dark. The amorphization is attributed to the migration of ionic defects, most likely halides. This provides a new understanding of potential degradation mechanisms in perovskite solar cells under operational conditions.
After endosymbiosis, chloroplasts lost most of their genome. Many former endosymbiotic genes are now nucleus-encoded and their products are re-imported post-translationally. Consequently, photosynthetic complexes are built of nucleus- and plastid-encoded subunits in a well-defined stoichiometry. In Chlamydomonas, the translation of chloroplast-encoded photosynthetic core subunits is feedback-regulated by the assembly state of the complexes they reside in. This process is called Control by Epistasy of Synthesis (CES) and enables the efficient production of photosynthetic core subunits in stoichiometric amounts. In chloroplasts of embryophytes, only Rubisco subunits have been shown to be feedback-regulated. This raises the question of whether there is additional CES regulation in embryophytes. I analyzed chloroplast gene expression in tobacco and Arabidopsis mutants with assembly defects for each photosynthetic complex to broadly answer this question. My results (i) confirmed CES within Rubisco and hint at potential translational feedback regulation in the synthesis of (ii) cytochrome b6f (Cyt b6f) and (iii) photosystem II (PSII) subunits. This work suggests a CES network in PSII that links psbD, psbA, psbB, psbE, and potentially psbH expression by a feedback mechanism that at least partially differs from that described in Chlamydomonas. Intriguingly, in the Cyt b6f complex, a positive feedback regulation that coordinates the synthesis of PetA and PetB was observed, which was not previously reported in Chlamydomonas. No evidence for CES interactions was found in the expression of NDH and ATP synthase subunits of embryophytes. Altogether, this work provides solid evidence for novel assembly-dependent feedback regulation mechanisms controlling the expression of photosynthetic genes in chloroplasts of embryophytes.
In order to obtain a comprehensive inventory of the rbcL and psbA RNA-binding proteomes (including factors that regulate their expression, especially factors involved in CES), an aptamer-based affinity purification method was adapted and refined for the specific purification of these transcripts from tobacco chloroplasts. To this end, three different aptamers (MS2, Sephadex, and streptavidin binding) were stably introduced into the 3' UTRs of psbA and rbcL by chloroplast transformation. RNA aptamer-based purification and subsequent chip analysis (RAP Chip) demonstrated a strong enrichment of psbA and rbcL transcripts, and ongoing mass spectrometry analyses are expected to reveal potential regulatory factors. Furthermore, the suborganellar localization of MS2-tagged psbA and rbcL transcripts was analyzed by a combined affinity, immunological, and electron microscopy approach, demonstrating the potential of aptamer tags for examining the spatial distribution of chloroplast transcripts.
Using biochemical and biotechnological approaches, the aim of this work was to understand the mechanism of protein-glucan interactions in the regulation and control of starch degradation. Although starch degradation starts with the phosphorylation process, the mechanisms by which this process controls and adjusts starch degradation are not yet fully understood. Phosphorylation is mainly performed by two dikinase enzymes, α-glucan, water dikinase (GWD) and phosphoglucan, water dikinase (PWD). GWD and PWD phosphorylate the starch granule surface and thereby stimulate starch degradation by hydrolytic enzymes. Despite these important roles of GWD and PWD, the biochemical processes by which these enzymes regulate and adjust the rate of phosphate incorporation into starch during degradation have so far not been understood. Recently, some proteins were found to be associated with the starch granule. Two of these proteins are Early Starvation Protein 1 (ESV1) and its homologue Like-Early Starvation Protein 1 (LESV). Both have been supposed to be involved in the control of starch degradation, but their function has remained unclear. To understand how ESV1- and LESV-glucan interactions are regulated and affect starch breakdown, the influence of ESV1 and LESV on the phosphorylating enzymes GWD and PWD and on the hydrolysing enzymes ISA, BAM, and AMY was analyzed. In addition, the analysis located LESV and ESV1 in the chloroplast stroma of Arabidopsis. Mass spectrometry data identified ESV1 and LESV as products of the At1g42430 and At3g55760 genes, with predicted masses of ~50 kDa and ~66 kDa, respectively. The ChloroP program predicted that ESV1 lacks a chloroplast transit peptide, whereas for LESV it predicted the first 56 N-terminal amino acids as a chloroplast transit peptide. Usually, the transit peptide is processed during transport of the protein into plastids.
Given that this processing is critical, two forms each of ESV1 and LESV were generated and purified: a full-length form and a truncated form lacking the transit peptide, namely (ESV1 and tESV1) and (LESV and tLESV), respectively. Both protein forms were included in the assays, but only slight differences in glucan binding and protein action were observed between ESV1 and tESV1, while no differences in glucan binding or in the effect on GWD and PWD action were observed between LESV and tLESV. The results revealed that the presence of the N-terminus does not massively alter the action of ESV1 or LESV. Therefore, only the data for the ESV1 and tLESV forms were used to explain the function of both proteins.
The analysis revealed that LESV and ESV1 bind strongly to the starch granule surface. Furthermore, not all of either protein was released when the granules were washed with 2% [w/v] SDS after incubation with starches, indicating binding to deeper layers of the granule surface. This finding is supported by the observation that both proteins still bound to starches after the free glucan chains had been removed from the surface by the action of ISA and BAM. Although both proteins are capable of binding to the starch structure, only LESV showed binding to amylose, while no such binding was observed for ESV1. The alteration of glucan structures at the starch granule surface is essential for the incorporation of phosphate into the starch granule, as the phosphorylation of starch by GWD and PWD increased after the free glucan chains had been removed by ISA. Furthermore, PWD was able to phosphorylate starch without prior phosphorylation by GWD.
Biochemical studies on the protein-glucan interactions of LESV and ESV1 with different types of starch revealed a potentially important mechanism for regulating and adjusting the phosphorylation process: the binding of LESV and ESV1 alters the glucan structures of starches and thereby modulates the action of the dikinase enzymes (GWD and PWD), making them better able to control the rate of starch degradation. Although ESV1 showed an antagonistic effect on PWD action, with PWD action decreasing without prior phosphorylation by GWD and increasing after prior phosphorylation by GWD (Chapter 4), PWD showed a significant reduction in its action, with or without prior phosphorylation by GWD, in the presence of ESV1, whether alone or together with LESV (Chapter 5). However, the presence of LESV and ESV1 together had the same effect on the phosphorylation process as each protein alone, making it difficult to distinguish their specific functions. Moreover, no interactions were detected between LESV and ESV1, between either of them and GWD or PWD, or between GWD and PWD, indicating that these proteins act independently. It was also observed that the alteration of the starch structure by LESV and ESV1 plays a role in adjusting starch degradation rates not only by affecting the dikinases but also by affecting some of the hydrolysing enzymes, since the presence of LESV and ESV1 was found to reduce the action of BAM without abolishing it.
In 1960, Yamabe claimed to have proven the following statement: On every compact Riemannian manifold (M,g) of dimension n ≥ 3 there exists a metric, conformally equivalent to g, with constant scalar curvature. This statement is equivalent to the existence of a solution of a certain semilinear elliptic differential equation, the Yamabe equation. In 1968, Trudinger found an error in the proof, and as a consequence many mathematicians worked on what became known as the Yamabe problem. In the 1980s, the work of Trudinger, Aubin, and Schoen showed that the statement is indeed true. This has many advantages; for example, when analyzing conformally invariant partial differential equations on compact Riemannian manifolds, the scalar curvature can be assumed to be constant.
The question now arises whether the corresponding statement also holds on Lorentzian manifolds. The Lorentzian Yamabe problem thus reads: Given a spatially compact, globally hyperbolic Lorentzian manifold (M,g), does there exist a metric, conformally equivalent to g, with constant scalar curvature? The goal of this thesis is to investigate this problem.
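For reference, in the Riemannian case the conformal change of metric turns the condition of constant scalar curvature into the classical Yamabe equation; the form below is the standard textbook version (sign conventions may differ from those used in the thesis):

```latex
-\frac{4(n-1)}{n-2}\,\Delta_g u \;+\; \operatorname{scal}_g\, u
\;=\; \operatorname{scal}_{\tilde g}\; u^{\frac{n+2}{n-2}},
\qquad
\tilde g = u^{\frac{4}{n-2}}\, g, \quad u > 0 ,
```

where Δ_g is the Laplace-Beltrami operator and scal_g̃ is the prescribed constant. In the Lorentzian setting the Laplacian is replaced by the wave operator □_g, which is why the resulting Yamabe equation becomes a semilinear wave equation.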
The Yamabe equation arising from this question is a semilinear wave equation whose solution is a positive smooth function, from which the conformal factor is obtained. In order to keep the foundations needed for the treatment of the Yamabe problem as general as possible, the first part of this thesis develops the local existence theory for arbitrary semilinear wave equations for sections of vector bundles in the framework of a Cauchy problem. For this purpose, the inverse function theorem for Banach spaces is applied in order to derive existence results for semilinear wave equations from existing existence results for linear wave equations. It is proven that, if the nonlinearity satisfies certain conditions, an almost global-in-time solution of the Cauchy problem exists for small initial data, as well as a local-in-time solution for arbitrary initial data.
The second part of the thesis deals with the Yamabe equation on globally hyperbolic Lorentzian manifolds. First it is shown that the nonlinearity of the Yamabe equation satisfies the conditions required in the first part, so that, if the scalar curvature of the given metric is close to a constant, small initial data exist for which the Yamabe equation has an almost global-in-time solution. Using energy estimates, it is then shown for 4-dimensional globally hyperbolic Lorentzian manifolds that, under the assumption that the constant scalar curvature of the conformally equivalent metric is nonpositive, a global-in-time solution of the Yamabe equation exists, which, however, is not necessarily positive. It is further shown that, if the H2-norm of the scalar curvature with respect to the given metric is bounded in a specific way on a compact time interval, the solution is positive on this time interval; here, too, the constant scalar curvature of the conformally equivalent metric is assumed to be nonpositive. If, in addition, the scalar curvature with respect to the given metric is negative and the metric satisfies certain conditions, the solution is positive for all times in a compact time interval on which the gradient of the scalar curvature is bounded in a specific way. In both cases, under the stated conditions, the existence of a global-in-time positive solution follows if M = I x Σ for a bounded open interval I. Finally, for M = R x Σ, an example of the nonexistence of a global positive solution is given.
Ferruginous conditions were a prominent feature of the oceans throughout the Precambrian eons and thus throughout much of Earth's history. Organic matter mineralization and diagenesis within the ferruginous sediments deposited from Earth's early oceans likely played a key role in global biogeochemical cycling. Knowledge of organic matter mineralization in ferruginous sediments, however, remains almost entirely conceptual, as modern analogue environments are extremely rare and, to date, largely unstudied. Lake Towuti on the island of Sulawesi, Indonesia, is such an analogue environment, and the purpose of this PhD project was to investigate the rates and pathways of organic matter mineralization in its ferruginous sediments.
Lake Towuti is the largest tectonic lake in Southeast Asia and is hosted in the mafic and ultramafic rocks of the East Sulawesi Ophiolite. It has a maximum water depth of 203 m and is weakly thermally stratified. A well-oxygenated surface layer extends to 70 m depth, while waters below 130 m are persistently anoxic. Intensive weathering of the ultramafic catchment feeds the lake with large amounts of iron (oxy)hydroxides, while the runoff contains little sulfate, leading to sulfate-poor (< 20 µM) lake water and anoxic ferruginous conditions below 130 m. Such conditions are analogous to the ferruginous water columns that persisted throughout much of the Archean and Proterozoic eons. Short (< 35 cm) sediment cores were collected from different water depths corresponding to different bottom-water redox conditions. In addition, a drilling campaign of the International Continental Scientific Drilling Program (ICDP) retrieved a 114 m long sediment core dedicated to geomicrobiological investigations from a water depth of 153 m, well below the depth of oxygen penetration at the time of sampling. Samples collected from these sediment cores form the foundation of this thesis and were used to perform a suite of biogeochemical and microbiological analyses.
Geomicrobiological investigations depend on uncontaminated samples. However, exploration of subsurface environments relies on drilling, which requires the use of a drilling fluid, and infiltration of drilling fluid during drilling cannot be avoided. Therefore, in order to trace contamination of the sediment core and to identify uncontaminated samples for further analyses, a simple and inexpensive technique for assessing contamination during drilling operations was developed and applied during the ICDP drilling campaign. This approach uses an aqueous fluorescent pigment dispersion, commonly used in the paint industry, as a particulate tracer. It has the same physical properties as conventionally used particulate tracers, but its price is nearly four orders of magnitude lower, solving the main problem of particulate tracer approaches. The approach requires only a minimum of equipment and allows rapid contamination assessment, potentially even directly on site, while its sensitivity is in the range of established approaches. Contaminated samples in the drill core were identified and excluded from further geomicrobiological investigations.
Biogeochemical analyses of the short sediment cores showed that Lake Towuti's sediments are strongly depleted in the electron acceptors commonly used in microbial organic matter mineralization (i.e., oxygen, nitrate, sulfate). Still, the sediments harbor high microbial cell densities, which are a function of the redox conditions of Lake Towuti's bottom water. At shallow water depths, bottom-water oxygenation leads to a higher input of labile organic matter and of electron acceptors such as sulfate and iron, which promotes a higher microbial abundance. Microbial analyses showed that Lake Towuti's surface sediments are inhabited by a versatile microbial community with the potential to perform metabolisms related to iron and sulfate reduction, fermentation, and methanogenesis.
Biogeochemical investigations of the upper 12 m of the 114 m sediment core showed that Lake Towuti's sediment is extremely rich in iron, with total concentrations of up to 2500 µmol cm⁻³ (20 wt%), making it the natural sedimentary environment with the highest total iron concentrations studied to date. In the complete or near absence of oxygen, nitrate and sulfate, organic matter mineralization in ferruginous sediments would be expected to proceed anaerobically via the energetically most favorable terminal electron acceptor available, in this case ferric iron. Astonishingly, however, methanogenesis is the dominant (> 85 %) organic matter mineralization process in Lake Towuti's sediment. Reactive ferric iron known to be available for microbial iron reduction is highly abundant throughout the upper 12 m and has thus remained stable for at least 60,000 years. The methane produced is not oxidized anaerobically and diffuses out of the sediment into the water column. The proclivity towards methanogenesis in these very iron-rich modern sediments implies that methanogenesis may have played a more important role in organic matter mineralization throughout the Precambrian than previously thought and could thus have been a key contributor to Earth's early climate dynamics.
Siderites were identified over the whole sequence of the 114 m long sediment core and characterized using high-resolution microscopic and spectroscopic imaging together with microchemical and geochemical analyses. The data show early diagenetic growth of siderite crystals in response to sedimentary organic matter mineralization. Microchemical zoning was identified in all siderite crystals. Siderite thus likely forms during diagenesis through growth on pre-existing primary phases, and the mineralogical and chemical features of these siderites are a function of changes in the redox conditions of the pore water and sediment over time. Identification of microchemical zoning in ancient siderites deposited in the Precambrian may thus also be used to infer siderite growth histories in ancient sedimentary rocks, including sedimentary iron formations.
The healthcare sector in Germany faces numerous changes in its environment. In particular, the hospital landscape lacks medical staff: physician hours are in short supply while the demand for medical services is rising. Demographic change, emigration abroad (e.g. to Switzerland), and extrapolated trends such as the feminization of the medical profession and the shift in values documented by generational research reinforce this development. Moreover, future employee cohorts increasingly regard the hospital as an unattractive workplace. Prospective physicians increasingly decide against curative medicine, or against clinical practice altogether, already during their studies. A pressing recruitment problem is emerging that needs to be solved.
The research goal is the development of a holistic strategic approach with a market-oriented accent for the hospital as an employer. The hospital is defined as a knowledge- and competence-intensive service organization. Employer branding provides the frame of reference for market-oriented human resource management. A research-theoretical foundation is integrated via strategic management approaches: employer branding bridges the market-based view and the competence-based view and constitutes brand management in the context of strategic human resource management. This thesis presents a holistic framework depicting the interdependencies of employer branding. Its centerpiece is the employer value proposition, which is based on the brand identity of the organization. One aim of the employer branding approach is to ensure a preference-generating effect in the process of job choice.
The objectives and research interests call for a broad research design combining qualitative and quantitative methods. The guided in-depth interviews (exploratory study design) aim to identify existing and yet-to-be-developed competence strengths of hospital organizations. The sample is a typical case sampling (N=12). Deficit findings regarding which values and attractiveness factors are pronounced among prospective physicians are confirmed. In the hospital landscape, a fragmentary and reactive approach is identified that hampers the successful recruitment of qualified hospital staff.
Through quantitative market research, the requirements of the market, i.e. prospective hospital physicians, are analyzed on a factor-analytic basis. The sample (N=475) is isomorphic. The process of attitude formation is placed within the neo-behaviorist explanatory model of buyer behavior. The compatibility of work, career, and family as well as workplace childcare are identified as key components.
The most important work values with respect to an attractive employer are reliability, responsibility, and respect. These components have communicative value and differentiating power. Finally, applicants evaluate a hospital more positively in the process of job choice the more the values of the prospective employee match the value profile of the organization (person-organization fit).
Hospital organizations that implement the employer branding approach and embrace it as an opportunity to define their strengths, their advantages, and their offerings as an employer equip themselves for the intensifying competition for young physicians. As a market-oriented strategic approach, employer branding releases forces internally and externally that yield differentiation advantages over other employers. Employer branding thus has positive effects on the human resources domain along the value chain and mitigates the overall economic problem.
In nature, bacteria are found to reside in multicellular communities encased in self-produced extracellular matrices. Indeed, biofilms are the default lifestyle of the bacteria that cause persistent infections in humans. The biofilm assembly protects bacterial cells from desiccation and limits the effectiveness of antimicrobial treatments. A myriad of biomolecules in the extracellular matrix, including proteins, exopolysaccharides, lipids, extracellular DNA and others, forms a dense and viscoelastic three-dimensional network. Many studies have emphasized that destabilizing the mechanical integrity of biofilm architectures can eliminate this protective shield and render bacteria more susceptible to the immune system and antibiotics. Pantoea stewartii is a plant pathogen which infects monocotyledons such as maize and sweet corn. These bacteria produce dense biofilms in the xylem of infected plants, causing wilting of plants and crops. Stewartan is an exopolysaccharide produced by Pantoea stewartii and secreted as the major component of the extracellular matrix. It consists of heptasaccharide repeating units with a high degree of polymerization (2-4 MDa). In this work, the physicochemical properties of stewartan were investigated to understand the contributions of this exopolysaccharide to the mechanical integrity and cohesiveness of Pantoea stewartii biofilms. To this end, a coarse-grained model of stewartan was developed with computational techniques to capture its three-dimensional structural features. Coarse-grained molecular dynamics simulations revealed that the exopolysaccharide forms a hydrogel in which the exopolysaccharide chains arrange into a three-dimensional mesh-like network. Simulations at different concentrations were used to investigate the influence of the water content on network formation.
Stewartan was further purified from 72 h grown Pantoea stewartii biofilms, and the diffusion of bacteriophages and differently sized nanoparticles (ranging from 1.1 to 193 nm in diameter) was analyzed in reconstituted stewartan solutions. Fluorescence correlation spectroscopy and single-particle tracking revealed that the stewartan network impeded the mobility of a set of differently sized fluorescent particles in a size-dependent manner. Diffusion of these particles became more anomalous with increasing stewartan concentration, as characterized by fitting the diffusion data to an anomalous diffusion model. Further bulk and microrheological experiments were used to analyze the transitions in stewartan fluid behavior, and stewartan chain entanglements were described. Moreover, it was noticed that a small fraction of bacteriophage particles was trapped in small pores, deviating from classical random walks, which highlighted the structural heterogeneity of the stewartan network. Additionally, the mobility of fluorescent particles
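The anomalous diffusion analysis mentioned above amounts to fitting the mean-squared displacement to a power law, MSD(τ) = 4 K_α τ^α for two-dimensional tracking, where an exponent α < 1 indicates subdiffusion in the polymer network. A minimal sketch of such a fit on synthetic data (the function name and the parameter values are illustrative assumptions, not taken from the thesis):

```python
import numpy as np
from scipy.optimize import curve_fit

def msd_model(tau, K, alpha):
    # 2-D anomalous diffusion: MSD(tau) = 4 * K * tau**alpha
    return 4.0 * K * tau**alpha

# Synthetic lag times and MSD values for a subdiffusive tracer;
# true_K and true_alpha are made-up illustration values.
rng = np.random.default_rng(0)
tau = np.linspace(0.01, 1.0, 50)       # lag times in seconds
true_K, true_alpha = 0.5, 0.7          # um^2/s^alpha, dimensionless
msd = msd_model(tau, true_K, true_alpha) * rng.normal(1.0, 0.02, tau.size)

# Fit K and alpha; alpha < 1 signals subdiffusion
(K_fit, alpha_fit), _ = curve_fit(msd_model, tau, msd, p0=(1.0, 1.0))
print(f"K = {K_fit:.2f}, alpha = {alpha_fit:.2f}")
```

In practice the fitted α is compared across stewartan concentrations: the lower α falls below 1, the more the network hinders the tracer.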
also depended on the charge of the stewartan exopolysaccharide, and a molecular-sieve model of the stewartan network was proposed. The structural features of the stewartan polymers reported here were used to provide a detailed description of the mechanical properties of typically glycan-based biofilms such as that of Pantoea stewartii.
In addition, the mechanical properties of the biofilm architecture are permanently sensed by the embedded bacteria, and enzymatic modifications of the extracellular matrix take place in response to environmental cues. Hence, in this work the influence of enzymatic degradation of the stewartan exopolysaccharide on the overall exopolysaccharide network structure was analyzed to describe relevant physiological processes in Pantoea stewartii biofilms. The stewartan hydrolysis kinetics of the tailspike protein from the ΦEa1h bacteriophage, which naturally infects Pantoea stewartii cells, was compared to that of WceF, a protein expressed from the Pantoea stewartii stewartan biosynthesis gene cluster wce I-III. The degradation of stewartan by the ΦEa1h tailspike protein was shown to be much faster than hydrolysis by WceF, although both enzymes cleave the β-D-GalIII-(1→3)-α-D-GalI glycosidic linkage in the stewartan backbone. Oligosaccharide fragments produced during stewartan cleavage were analyzed by size-exclusion chromatography and capillary electrophoresis. Bioinformatic studies and the analysis of a WceF crystal structure revealed a remarkably high structural similarity between the two proteins, unveiling WceF as a bacterial tailspike-like protein. As a consequence, WceF might play a role in stewartan chain-length control in Pantoea stewartii biofilms.
Over the past two decades, secondary plant metabolites and their health-promoting properties have been studied extensively from a nutritional perspective, and specific positive effects in the human organism have in part been described in great detail. The carotenoid lutein has moved into the focus of research, particularly for the prevention of ophthalmological diseases. This xanthophyll, synthesized exclusively by plants and some algae, enters the human organism through plant foods, especially green leafy vegetables. There it accumulates preferentially in the macular pigment of the retina of the human eye and plays an important role in maintaining the function of the photoreceptor cells. With aging, a decrease in macular pigment density and degradation of lutein can be observed. The resulting destabilization of the photoreceptor cells, in combination with an altered metabolic state in the aging organism, can lead to age-related macular degeneration (AMD). The pathological symptoms of this eye disease range from loss of visual acuity to irreversible blindness. Since therapeutic agents can only prevent progression, research efforts focus on finding preventive measures. The supplementation of lutein-containing preparations offers one starting point, and dietary supplements with lutein in various applications are already on the market. Limiting factors are the stability and bioavailability of lutein, which is in part expensive to procure and of unknown purity. For this reason, the use of lutein esters, the plant storage form of lutein, would be advantageous in a dietary supplement. In addition to their naturally higher stability, lutein esters are sustainable and inexpensive to use.
In this thesis, physicochemical and nutritionally relevant aspects of the product development process of a dietary supplement containing lutein esters in a colloidal formulation were investigated. The hitherto unique application of lutein esters in an oral spray was intended to facilitate and improve the uptake of the active ingredient, particularly for older people. Based on the results and their nutritional assessment, recommendations were to be derived, among other things, for the recipe composition of a miniemulsion (an emulsion with particle sizes < 1.0 µm). The bioavailability of lutein esters from the developed colloidal formulations was assessed by means of in vitro studies on resorption and absorption availability.
Physical investigations first specified the basic components of the formulations. In initial active-ingredient-free model emulsions, selected oils as the carrier phase as well as emulsifiers and solubilizers (peptizers) were physically tested for their suitability to provide a miniemulsion. The best stability and optimal miniemulsion properties were achieved using MCT oil (medium-chain triglycerides) or rapeseed oil in the carrier phase, together with the emulsifier Tween® 80 (Tween 80), alone or in combination with the whey protein hydrolysate Biozate® 1 (Biozate 1).
The physical investigations of the model emulsions yielded the pre-emulsions as prototypes. These contained the active ingredient lutein in various forms: pre-emulsions with lutein, with lutein esters, and with both lutein and lutein esters were designed, containing the emulsifier Tween 80 alone or in combination with Biozate 1. In preparing the pre-emulsions, ultrasonic emulsification followed by high-pressure homogenization produced the desired miniemulsions. Both emulsifiers provided optimal stabilization. The active ingredients were then characterized physicochemically. Lutein esters from oleoresin in particular proved stable under various storage conditions. Likewise, lutein and lutein esters were shown to be stable during short-term treatment under specific mechanical, thermal, acidic, and alkaline conditions; the addition of Biozate 1 provided additional protection only for lutein. Under prolonged physicochemical treatment, the active ingredients incorporated in the miniemulsions underwent moderate degradation, with marked sensitivity to alkaline conditions. For the recipe development of the dietary supplement, the recommendation was therefore to formulate a miniemulsion with a slightly acidic pH to protect the active ingredient through controlled addition of further ingredients.
In the further development process of the dietary supplement, final formulations with lutein esters as the active ingredient were set up. The sole use of the emulsifier Biozate 1 proved unsuitable. The remaining final formulations contained, in the oil phase, the active ingredient together with MCT oil or rapeseed oil as well as α-tocopherol for stabilization. The water phase consisted of the emulsifier Tween 80 or a combination of Tween 80 and Biozate 1. Additives were ascorbic acid and potassium sorbate for microbiological protection, and xylitol and orange flavor for sensory effects. The composition of the base recipe and the applied emulsification process yielded stable miniemulsions. Long-term storage trials with the final formulations at 4 °C further showed that the required lutein ester content in the product was maintained. Analogous investigations of a commercially available lutein-containing preparation, by contrast, revealed instability of lutein even during short-term storage.
Finally, the bioavailability of lutein esters was tested in vitro by resorption and absorption studies with the pre-emulsions and final formulations. After treatment in an established in vitro digestion model, only a slight resorption availability of the lutein esters could be determined; micellarization of the active ingredient from the designed formulations was limited. Enzymatic cleavage of lutein esters to free lutein was observed only to a limited extent; the specificity and activity of the corresponding hydrolytic lipases towards lutein esters must be rated as very low. In subsequent cell culture experiments with the Caco-2 cell line, no cytotoxic effects of the relevant ingredients in the pre-emulsions were observed. In contrast, a sensitivity towards the final formulations was observed, which should be considered with regard to irritation of the mucous membranes of the gastrointestinal tract; a less complex recipe might minimize these limitations. Concluding absorption studies showed that, in principle, a small uptake of primarily lutein, but also lutein monoesters, into enterocytes from miniemulsions can occur. Neither Tween 80 nor Biozate 1 had a beneficial influence on the absorption rate of lutein or lutein esters. Metabolization of the active ingredients by prior in vitro digestion increased the cellular uptake from formulations with lutein and with lutein esters alike. The observed uptake of lutein and lutein monoesters into enterocytes appears to occur via passive diffusion, although active transport cannot be ruled out. Lutein diesters, by contrast, cannot enter enterocytes via micellarization and simple diffusion because of their molecular size.
Their uptake into the epithelial cells of the small intestine requires prior hydrolytic cleavage by specific lipases. This step in turn limits the effective uptake of lutein esters into the cells and constitutes a limitation of their bioavailability compared with free lutein.
In summary, the physicochemically stable lutein esters showed low bioavailability from colloidal formulations. Nevertheless, their use as a source of the secondary plant metabolite lutein in a dietary supplement can be recommended. In combination with the intake of lutein-rich plant foods, a contribution to improving lutein status can be achieved despite the expected low bioavailability of lutein esters from the supplement. Corresponding publications have shown clear correlations between the intake of lutein-ester-containing preparations and an increase in serum lutein concentration and macular pigment density in vivo. The slightly better bioavailability of free lutein must be weighed critically against its instability and cost. As a result of this work, the commercial product Vita Culus® was designed. Looking ahead, human intervention studies with the dietary supplement should allow a final assessment of the bioavailability of lutein esters from the preparation.
Urbanization and agricultural land use are two of the main drivers of global change, with effects on ecosystem functions and human wellbeing. Green infrastructure is a new approach in spatial planning that contributes to sustainable urban development and addresses urban challenges such as biodiversity conservation, climate change adaptation, green economy development, and social cohesion. Because research has focused mainly on open green space structures such as parks, urban forests, green buildings, and street greenery, while neglecting the spatial and functional potentials of utilizable agricultural land, this thesis aims to fill this gap.
This cumulative thesis addresses how agricultural land in urban and peri-urban landscapes can contribute to the development of urban green infrastructure as a strategy to promote sustainable urban development. To this end, a number of different research approaches were applied. First, a quantitative, GIS-based modeling approach examined spatial potentials, addressing the heterogeneity of the peri-urban landscape that defines agricultural potentials and constraints. Second, a participatory approach was applied, involving stakeholder opinions to evaluate multiple urban functions and benefits. Finally, an evidence synthesis was conducted to assess the current state of the evidence available to support future policy making at different levels.
The results contribute to the conceptual understanding of urban green infrastructure as a strategic spatial planning approach that incorporates inner-urban utilizable agricultural land and the agriculturally dominated landscape at the outer urban fringe. They support the proposition that linking peri-urban farmland with the green infrastructure concept can contribute to a network of multifunctional green spaces that provides multiple benefits to the urban system and successfully addresses urban challenges. Four strategies are introduced for spatial planning with the contribution of peri-urban farmland to a strategically planned multifunctional network, namely the connecting, the productive, the integrated, and the adapted way. Finally, this thesis sheds light on the opportunities that arise from integrating peri-urban farmland into the green infrastructure concept to support the transformation towards more sustainable urban development. In particular, the inherent core planning principle of multifunctionality endorses the idea of co-benefits, which are considered crucial to trigger transformative processes.
This work concludes that linking peri-urban farmland with the green infrastructure concept is a promising field of action for developing new pathways of urban transformation towards sustainable urban development. Along with these outcomes, attention is drawn to limitations that remain to be addressed by future research, especially the identification of further mechanisms required to support policy integration at all levels.
The East Asian monsoons characterize the modern-day Asian climate, yet their geological history and driving mechanisms remain controversial. The southeasterly summer monsoon provides moisture, whereas the northwesterly winter monsoon sweeps up dust from the arid Asian interior to form the Chinese Loess Plateau. The onset of this loess accumulation, and therefore of the monsoons, was long thought to be 8 million years ago (Ma). In recent years, however, these loess records have been extended back to the Eocene (56-34 Ma), a period characterized by significant changes in both the regional geography and the global climate. Yet the extent to which these reconfigurations drove atmospheric circulation, and whether the loess-like deposits are monsoonal, remains debated. In this thesis, I study the terrestrial deposits of the Xining Basin, previously identified as Eocene loess, to reconstruct the paleoenvironmental evolution of the region and identify the geological processes that have shaped the Asian climate.
I review dust deposits in the geological record and conclude that these are commonly represented by a mix of both windblown and water-laid sediments, in contrast to the pure windblown material known as loess. Yet by using a combination of quartz surface morphologies, provenance characteristics and distinguishing grain-size distributions, windblown dust can be identified and quantified in a variety of settings. This has important implications for tracking aridification and dust-fluxes throughout the geological record.
Past reversals of Earth's magnetic field are recorded in the deposits of the Xining Basin, and I use these together with a dated volcanic ash layer to accurately constrain their age to the Eocene period. A combination of pollen assemblages, low dust abundances and other geochemical data indicates that the early Eocene was relatively humid, suggesting an intensified summer monsoon under the warmer greenhouse climate of this time. A subsequent shift from predominantly freshwater to salt lakes reflects a long-term aridification trend, possibly driven by global cooling and the continued uplift of the Tibetan Plateau. Superimposed on this aridification are wetter intervals, reflected in more abundant lake deposits, which correlate with highstands of the inland proto-Paratethys Sea. This sea covered the Eurasian continent and thereby provided additional moisture to the wintertime westerlies during the middle to late Eocene.
The long-term aridification culminated in an abrupt shift at 40 Ma, reflected by the onset of windblown dust deposition, an increase in steppe-desert pollen, the occurrence of high-latitude orbital cycles, and northwesterly winds identified from deflated salt deposits. Together, these indicate the onset of a Siberian high atmospheric pressure system that drove the East Asian winter monsoon and its dust storms, triggered by a major sea retreat from the Asian interior. These results therefore show that the proto-Paratethys Sea, though less well recognized than the Tibetan Plateau and global climate, has been a major driver in establishing the modern-day climate of Asia.
Redox signalling in plants
(2020)
Once proteins are synthesized, they can additionally be modified by post-translational modifications (PTMs). Proteins containing reactive cysteine thiols, stabilized in their deprotonated form as thiolates (RS-) by their local environment, serve as redox sensors by undergoing a multitude of oxidative PTMs (Ox-PTMs). Ox-PTMs such as S-nitrosylation or the formation of inter- or intramolecular disulfide bridges induce functional changes in these proteins. Proteins containing cysteines whose thiol oxidation state regulates their function belong to the so-called redoxome. Such Ox-PTMs are controlled by site-specific cellular events that play a crucial role in protein regulation, affecting enzyme catalytic sites, ligand binding affinity, protein-protein interactions, and protein stability. Reversible protein thiol oxidation is an essential regulatory mechanism of photosynthesis, metabolism, and gene expression in all photosynthetic organisms. Studying PTMs therefore remains crucial for understanding plant adaptation to external stimuli such as fluctuating light conditions, and optimizing methods suitable for studying plant Ox-PTMs is of high importance for elucidating the plant redoxome. This study focuses on thiol modifications occurring in plants and provides novel insight into the in vivo redoxome of Arabidopsis thaliana in response to light versus dark, achieved by utilizing a resin-assisted thiol enrichment approach. Furthermore, candidates were confirmed at the single-protein level by a differential labelling approach: thiols and disulfides were differentially labelled, and protein levels were detected using immunoblot analysis. Further analysis focused on light-reduced proteins. The enrichment approach identified many well-studied redox-regulated proteins.
Amongst those were fructose-1,6-bisphosphatase (FBPase) and sedoheptulose-1,7-bisphosphatase (SBPase), which have previously been described as targets of the thioredoxin system. The redox-regulated proteins identified in the current study were compared to several published, independent results on redox-regulated proteins in Arabidopsis leaves, roots and mitochondria, and specifically on S-nitrosylated proteins. These proteins were excluded as potential new candidates but serve as proof of concept that the enrichment experiments were effective. Additionally, the CSP41A and CSP41B proteins, which emerged from this study as potential targets of redox regulation, were analyzed by Ribo-Seq. The active-translatome study of the csp41a mutant vs. wild type showed most of the significant changes at the end of the night, as did csp41b. Yet in both mutants only a few chloroplast-encoded genes were altered. Further studies of the CSP41A and CSP41B proteins are needed to reveal their functions and elucidate the role of redox regulation of these proteins.
Water quality in river systems is of growing concern due to rising anthropogenic pressures and climate change. Mitigation efforts have been placed under the guidelines of different governance conventions during the last decades (e.g., the Water Framework Directive in Europe). Despite significant improvement through relatively straightforward measures, the environmental status has likely reached a plateau. Higher spatiotemporal accuracy of catchment nitrate modeling is therefore needed to identify critical source areas of diffuse nutrient pollution (especially nitrate) and to further guide the implementation of spatially differentiated, cost-effective mitigation measures. At the same time, emerging high-frequency sensor monitoring raises the monitoring resolution to the time scales of biogeochemical processes and enables more flexible monitoring deployments under varying conditions. The newly available information offers new prospects for understanding nitrate spatiotemporal dynamics. Formulating such advanced process understanding into catchment models is critical for further model development and environmental status evaluation. This dissertation targets a comprehensive analysis of catchment and in-stream nitrate dynamics and aims to derive new insights into their spatial and temporal variability through the development of a new fully distributed model and new high-frequency data.
Firstly, a new fully distributed, process-based catchment nitrate model (the mHM-Nitrate model) is developed on the basis of the mesoscale Hydrological Model (mHM) platform. Nitrate process descriptions are adopted from the Hydrological Predictions for the Environment (HYPE) model, with considerably improved implementations. With its multiscale grid-based discretization, mHM-Nitrate balances spatial representation and modeling complexity. The model has been thoroughly evaluated in the Selke catchment (456 km²), central Germany, which is characterized by heterogeneous physiographic conditions. Results show that the model captures the long-term discharge and nitrate dynamics at three nested gauging stations well. Using daily nitrate-N observations, the model is also validated in its ability to capture short-term fluctuations due to changes in runoff partitioning and spatial contributions during flood events. Comparing the model simulations with values reported in the literature shows that the model provides detailed and reliable spatial information on nitrate concentrations and fluxes. The model can therefore be taken as a promising tool for environmental scientists in advancing environmental modeling research, as well as for stakeholders in supporting their decision-making, especially for spatially differentiated mitigation measures.
Secondly, a parsimonious approach for regionalizing in-stream autotrophic nitrate uptake is proposed using high-frequency data and integrated into the new mHM-Nitrate model. The regionalization approach considers the potential uptake rate (as a general parameter) and the effects of above-canopy light and riparian shading (represented by global radiation and leaf area index data, respectively). Multi-parameter sensors were continuously deployed in a forested upstream reach and an agricultural downstream reach of the Selke River. Using the continuous high-frequency data from both streams, daily autotrophic uptake rates (2011-2015) are calculated and used to validate the regionalization approach. The performance and spatial transferability of the approach are validated by its capturing of the distinct seasonal patterns and value ranges in both the forested and the agricultural stream. Integrating the approach into the mHM-Nitrate model allows the spatiotemporal variability of in-stream nitrate transport and uptake to be investigated throughout the river network.
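The regionalization idea described above can be sketched in a few lines. The functional forms, parameter values and the function name `uptake_rate` below are illustrative assumptions for demonstration, not the thesis formulation:

```python
# Hypothetical sketch: an in-stream autotrophic uptake rate scaled by
# above-canopy light and riparian shading. All forms and values are
# illustrative, not taken from the dissertation.
import math

def uptake_rate(u_pot, global_radiation, lai, k_ext=0.6):
    """Potential uptake modulated by light and canopy shading."""
    light_factor = global_radiation / (global_radiation + 100.0)  # saturating light response (assumed)
    shading_factor = math.exp(-k_ext * lai)                       # Beer-Lambert-style riparian shading
    return u_pot * light_factor * shading_factor

# e.g., forested reach (high LAI) vs. agricultural reach (low LAI)
print(uptake_rate(1.0, 300.0, lai=4.0))  # strongly shaded
print(uptake_rate(1.0, 300.0, lai=0.5))  # little shading
```

With the same potential rate and radiation, the heavily shaded forested reach yields a much smaller uptake rate than the open agricultural reach, mirroring the seasonal contrast the approach is validated against.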
Thirdly, to further assess the spatial variability of catchment nitrate dynamics, the fully distributed parameterization is for the first time investigated through sensitivity analysis. Results show that parameters of soil denitrification, in-stream denitrification and in-stream uptake are the most sensitive throughout the Selke catchment; all show high spatial variability, and hot-spots of parameter sensitivity can be explicitly identified. Spearman rank correlations between sensitivity indices and multiple catchment factors show that the controlling factors vary spatially, reflecting heterogeneous catchment responses in the Selke catchment. These insights can therefore inform future parameter regionalization schemes for catchment water quality models. In addition, the spatial distributions of parameter sensitivity are also influenced by the gauging information used for sensitivity evaluation; an appropriate monitoring scheme is therefore highly recommended so that catchment responses are truly reflected.
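As a hedged illustration of the correlation step, the Spearman coefficient is simply the Pearson correlation of the ranks. The sensitivity values and the clay-fraction attribute below are invented for demonstration:

```python
# Illustrative sketch: Spearman rank correlation between a parameter-sensitivity
# index and a catchment attribute. Data are synthetic, not from the thesis.
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (no tie correction; assumes distinct values)."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# e.g., sensitivity of a soil-denitrification parameter per grid cell
sensitivity = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
clay_fraction = np.array([0.05, 0.12, 0.20, 0.33, 0.41])  # monotone with sensitivity
print(spearman_rho(sensitivity, clay_fraction))  # -> 1.0 (perfect rank agreement)
```

Because Spearman uses ranks, it captures any monotone relationship between sensitivity and a catchment factor, which suits heterogeneous, non-linear catchment responses.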
Error correction in coding theory is concerned with detecting and correcting errors in the transmission and storage of messages.
To this end, the message is encoded into a codeword by adding redundant information.
Coding schemes must meet various requirements, such as the maximum number of correctable errors and the speed of correction.
A common coding scheme is the BCH code, which is used industrially for codes correcting up to four errors. A drawback of these codes is that the hardware latency for computing the error positions grows with increasing code length.
This dissertation presents a new coding scheme in which a long code is produced by a special arrangement of shorter BCH codes. This arrangement is governed by a further special code, an LDPC code, which is designed for fast error detection.
For this purpose, a new construction method is presented that yields a code of arbitrary length with an arbitrary, prescribable number of correctable errors. In addition to fast error detection, the presented construction also provides a simple and fast way to derive an encoding procedure mapping messages to codewords. To date, this is unique in the literature on LDPC codes.
Building on the construction of an LDPC code, a procedure is presented for combining it with a BCH code, which arranges the BCH code in blocks. Besides a general description of this code, a concrete code for 2-bit error correction is described. It consists of two parts, which are described and compared in several variants. For certain BCH code lengths, a problem in the correction is identified that follows an algebraic rule.
The BCH code is described very generally, but under certain conditions there exists a BCH code in the narrow sense, which defines the standard. In this dissertation, this BCH code in the narrow sense is modified so that the algebraic problem in 2-bit error correction no longer arises when it is combined with the LDPC code. It is shown that after the modification the new code is still a BCH code in the general sense, able to correct 2-bit errors and detect 3-bit errors. Regarding the hardware implementation of the error correction, it is further shown that the latency of the modified code is lower than that of the BCH code, with further potential for improvement.
The final chapter shows that this modified code of arbitrary length is suitable for combination with the LDPC code, so that the scheme is not only applicable to a wider range of lengths but, thanks to faster decoding, also offers further advantages over a BCH code in the narrow sense.
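As a minimal, hypothetical illustration of the parity-check principle behind LDPC-style fast error detection (using a tiny Hamming(7,4) code for clarity, not the dissertation's construction):

```python
# Illustrative sketch: error detection via a parity-check matrix. A word is a
# codeword exactly when its syndrome H·c mod 2 is zero. Toy Hamming(7,4) example.
import numpy as np

# Parity-check matrix H; column j (1-based) is the binary representation of j.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(word):
    """All-zero syndrome <=> the word satisfies every parity check."""
    return H @ word % 2

codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # a valid codeword of this code
assert not syndrome(codeword).any()          # no error detected

corrupted = codeword.copy()
corrupted[4] ^= 1                            # flip the bit at position 5 (1-based)
s = syndrome(corrupted)
# For this H, the syndrome read as binary (row 1 = least significant bit)
# gives the 1-based error position: here 1 + 0*2 + 1*4 = 5.
print(s)  # -> [1 0 1]
```

Real LDPC codes use much larger, sparse H matrices so that each check touches few bits, which is what makes their error detection fast in hardware.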
The current thesis contains the results from two experimental and one modelling study focused on the topic of ductile strain localization in the presence of material heterogeneities. Localization of strain in the high temperature regime is a well known feature of rock deformation occurring in nature at different scales and in a variety of lithologies. Large scale shear zones at the roots of major crustal fault zones are considered responsible for the activity of plate tectonics on our planet. A large number of mechanisms are suggested to be associated with strain softening and nucleation of localization. Among these, the presence of material heterogeneities within homogeneous host rocks is frequently observed in field examples to trigger shear zone development. Despite a number of studies conducted on the topic, the mechanisms controlling initiation and evolution of localization are not fully understood yet. We investigated, experimentally and by means of numerical modelling, phenomenological and microphysical aspects of high temperature strain localization in a homogeneous body containing single and paired inclusions of weaker material. A monomineralic carbonate system composed of Carrara marble (homogeneous, strong matrix) and Solnhofen limestone (weak planar inclusions) is selected for our studies based on its versatility as an experimental material and on the frequent occurrence of carbonate rocks at the core of natural shear zones.
To explore the influence of different loading conditions on heterogeneity-induced high temperature shear zones, we conducted torsion experiments under constant twist (deformation) rate and constant torque (stress) conditions in a Paterson-type deformation apparatus on hollow cylinders of marble containing single planar inclusions of limestone. At the imposed experimental conditions (900 °C temperature and 400 MPa confining pressure) both materials deform plastically and the marble is ≈ 9 times stronger than the limestone. The viscosity contrast between the two materials induces a perturbation of the stress field within the marble matrix at the tip of the planar inclusion. Early along the deformation path (at bulk shear strains ≈ 0.3), a heterogeneous distribution of strain can be observed under both loading conditions and a small area of incipient strain localization forms at the tip of the weak limestone inclusion. Strongly deformed grains, incipient dynamic recrystallization and a weak crystallographic preferred orientation characterize the marble within an area a few mm in front of the inclusion. As the bulk strain is increased (up to γ ≈ 1), the area of microstructural modification expands along the inclusion plane, the texture strengthens and grain size refinement by dynamic recrystallization becomes pervasive. Locally, evidence of coexisting brittle deformation is also observed regardless of the imposed loading conditions. A shear zone is effectively formed within the deforming Carrara marble, its geometry controlled by the plane containing the thin plate of limestone. Thorough microstructural and textural analyses, however, do not reveal substantial differences in the mechanisms or magnitude of strain localization at the different loading conditions. 
We conclude that, in the presence of material heterogeneities capable of inducing strain softening, the imposed loading conditions do not affect ductile localization in its nucleating and transient stages.
As the ultimate goal of experimental rock deformation is the extrapolation of results to geologically relevant time and space scales, we developed 2D numerical models reproducing, and benchmarked against, our experimental results. Our cm-scale models were implemented with a first-order strain-dependent softening law to reproduce the effect of rheological weakening in the deforming material. We successfully reproduced the local stress concentration at the inclusion tips and the strain localization initiated in the marble matrix. The heterogeneous distribution of strain and its evolution with imposed bulk deformation (i.e. the shape and extent of the nucleating shear zone) are observed to depend on the degree of softening imposed on the deforming matrix. When a second (artificial) softening step is introduced at elevated bulk strains in the model, a secondary high strain layer forms at the core of the initial shear zone, analogous to the development of ultramylonite bands in high strain natural shear zones. Our results not only reproduce the nucleation and transient evolution of a heterogeneity-induced high temperature shear zone with high accuracy, but also confirm the importance of incorporating reliable softening laws capable of mimicking strain weakening into numerical models of crustal-scale ductile processes.
Material heterogeneities inducing strain localization in the field often consist of brittle precursors (joints and fractures). More generally, the interaction of brittle and ductile deformation mechanisms and its effect on the localization of strain have long been a key topic in the structural geology community. The positive feedback between (micro)fracturing and ductile strain localization is a well-recognized effect in a number of field examples. We experimentally investigated the influence of brittle deformation on the initiation and evolution of high temperature shear zones in a strong matrix containing pairs of weak material heterogeneities. Our Carrara marble-Solnhofen limestone inclusion system was tested in triaxial compression under constant strain rate and high temperature (900 °C) conditions in a Paterson deformation apparatus. The inclusion pairs were arranged in non-overlapping step-over geometries of either compressional or extensional nature. Experimental runs were conducted at different confining pressures (30, 50, 100 and 300 MPa) to induce various amounts of brittle deformation within the marble matrix. At low confinement (30 and 50 MPa) abundant brittle deformation is observed in all configurations, but the spatial distribution of cracks depends on the kinematics of the step-over region: concentrated along the shearing plane between the inclusions in the extensional samples, or broadly distributed around the inclusions but outside the step-over region in the compressional configuration. Accordingly, brittle-assisted ductile processes tend to localize deformation along the inclusion plane in the extensional geometry or to distribute it widely across large areas of the matrix in the compressional step-over. At pressures of 100 and 300 MPa fracturing is mostly suppressed in both configurations and strain is accommodated almost entirely by viscous creep. 
In extensional samples this leads to progressive de-localization with increasing confinement. Our results show that, while ductile strain localization is indeed more efficient where assisted by brittle processes, the latter are only effective if they are themselves heterogeneously distributed, which is ultimately a function of the local stress perturbations.
Geomechanical and petrological characterisation of exposed slip zones, Alpine Fault, New Zealand
(2020)
The Alpine Fault is a large, plate-bounding, strike-slip fault extending along the north-western edge of the Southern Alps, South Island, New Zealand. It regularly accommodates large (MW > 8) earthquakes and has a high statistical probability of failure in the near future, i.e., is late in its seismic cycle. This pending earthquake and associated co-seismic landslides are expected to cause severe infrastructural damage that would affect thousands of people, so it presents a substantial geohazard. The interdisciplinary study presented here aims to characterise the fault zone’s 4D (space and time) architecture, because this provides information about its rheological properties that will enable better assessment of the hazard the fault poses.
The studies undertaken include field investigations of principal slip zone fault gouges exposed along strike of the fault, and subsequent laboratory analyses of these outcrop and additional borehole samples. These observations have provided new information on (I) characteristic microstructures down to the nanoscale that indicate which deformation mechanisms operated within the rocks, (II) mineralogical information that constrains the fault’s geomechanical behaviour and (III) geochemical compositional information that allows the influence of fluid-related alteration processes on material properties to be unraveled.
Results show that along-strike variations of fault rock properties such as microstructures and mineralogical composition are minor and/or do not substantially influence fault zone architecture. They furthermore provide evidence that the architecture of the fault zone, particularly its fault core, is more complex than previously considered, and also more complex than expected for this sort of mature fault cutting quartzofeldspathic rocks. In particular, our results strongly suggest that the fault has more than one principal slip zone, and that these form an anastomosing network extending into the basement below the cover of Quaternary sediments.
The observations detailed in this thesis highlight that two major processes, (I) cataclasis and (II) authigenic mineral formation, are the major controls on the rheology of the Alpine Fault. The velocity-weakening behaviour of its fault gouge is favoured by abundant nanoparticles
promoting powder lubrication and grain rolling rather than frictional sliding. Wall-rock fragmentation is accompanied by co-seismic, fluid-assisted dilatancy that is recorded by calcite cementation. This mineralisation, along with authigenic formation of phyllosilicates, quickly alters the petrophysical fault zone properties after each rupture, restoring fault competency. Dense networks of anastomosing and mutually cross-cutting calcite veins and an intensively reworked gouge matrix demonstrate that strain repeatedly localised within the narrow fault gouge. Abundant undeformed euhedral chlorite crystallites, and calcite veins cross-cutting both fault gouge and gravels that overlie basement on the fault’s footwall, provide evidence that authigenic phyllosilicate growth, fluid-assisted dilatancy and associated fault healing are active particularly close to the Earth’s surface in this fault zone.
Exposed Alpine Fault rocks are subject to intense weathering as a direct consequence of the abundant orogenic rainfall associated with the fault’s location at the base of the Southern Alps. Furthermore, fault rock rheology is substantially affected by shallow-depth conditions such as the juxtaposition of competent hanging wall fault rocks on poorly consolidated footwall sediments. This means the microstructural, mineralogical and geochemical properties of the exposed fault rocks may differ substantially from those at deeper levels, and thus are not characteristic of the majority of the fault rocks’ history. Examples are (I) frictionally weak smectites found within the fault gouges, which are artefacts formed under near-surface temperature conditions and impart petrophysical properties that are not typical of most fault rocks of the Alpine Fault, (II) grain-scale dissolution resulting from subaerial weathering rather than from deformation by pressure-solution processes and (III) fault gouge geometries that are more complex than expected for their deeper counterparts.
The methodological approaches deployed in analyses of this and other fault zones, and the major results of this study, are finally discussed in order to contextualize slip zone investigations of fault zones and landslides. Like faults, landslides are major geohazards, which highlights the importance of characterising their geomechanical properties. Similarities between faults, especially those exposed to subaerial processes, and landslides include mineralogical composition and geomechanical behaviour. Together, this ensures failure occurs predominantly by cataclastic processes, although aseismic creep promoted by weak phyllosilicates is not uncommon. Consequently, the multidisciplinary approach commonly used to investigate fault zones may contribute to increasing the understanding of landslide faulting processes and the assessment of their hazard potential.
Lava domes are severely hazardous, mound-shaped extrusions of highly viscous lava and commonly erupt at many active stratovolcanoes around the world. Due to gradual growth and flank oversteepening, such lava domes regularly experience partial or full collapses, resulting in destructive and far-reaching pyroclastic density currents. They are also associated with cyclic explosive activity as the complex interplay of cooling, degassing, and solidification of dome lavas regularly causes gas pressurizations on the dome or the underlying volcano conduit. Lava dome extrusions can last from days to decades, further highlighting the need for accurate and reliable monitoring data.
This thesis aims to improve our understanding of lava dome processes and to contribute to the monitoring and prediction of hazards posed by these domes. The recent rise and sophistication of photogrammetric techniques allows for the extraction of observational data in unprecedented detail and creates ideal tools for accomplishing this purpose. Here, I study natural lava dome extrusions as well as laboratory-based analogue models of lava dome extrusions and employ photogrammetric monitoring by Structure-from-Motion (SfM) and Particle-Image-Velocimetry (PIV) techniques. I primarily use aerial photography data obtained by helicopter, airplanes, Unoccupied Aircraft Systems (UAS) or ground-based timelapse cameras. Firstly, by combining a long time-series of overflight data at Volcán de Colima, México, with seismic and satellite radar data, I construct a detailed timeline of lava dome and crater evolution. Using a numerical model, the impact of the extrusion on dome morphology and loading stress is further evaluated and an influence on the growth direction is identified, bearing important implications for the location of collapse hazards. Secondly, sequential overflight surveys at the Santiaguito lava dome, Guatemala, reveal surface motion data in high detail. I quantify the growth of the lava dome and the movement of a lava flow, showing complex motions that occur on different timescales, and I provide insight into rock properties relevant for hazard assessment inferred purely from photogrammetric processing of remote sensing data. Lastly, I recreate artificial lava dome and spine growth using analogue modelling under controlled conditions, providing new insights into lava extrusion processes and structures as well as the conditions in which they form.
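The core of PIV can be sketched as locating the peak of the cross-correlation between two successive frames. The synthetic data below are purely illustrative (real PIV tiles the images into interrogation windows and correlates each window):

```python
# Illustrative sketch of the PIV idea: recover the pixel displacement between
# two frames from the peak of their cross-correlation. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))                     # stand-in for a textured surface image
dy, dx = 3, 5                                     # known displacement, to verify recovery
frame2 = np.roll(frame1, (dy, dx), axis=(0, 1))   # second frame: shifted copy

# Cross-correlation via FFT (circular, which matches np.roll above)
corr = np.fft.ifft2(np.fft.fft2(frame2) * np.conj(np.fft.fft2(frame1))).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # -> (3, 5): the imposed displacement is recovered
```

Doing this per interrogation window yields a velocity field over the dome surface, which is the kind of surface-motion data described above.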
These findings demonstrate the capabilities of photogrammetric data analyses to successfully monitor lava dome growth and evolution while highlighting the advantages of complementary modelling methods to explain the observed phenomena. The results presented herein further bear important new insights and implications for the hazards posed by lava domes.
Carbonates play a key role in the chemistry and dynamics of our planet. They are directly connected to the CO2 budget of our atmosphere and have a great impact on the deep carbon cycle. Moreover, recent studies have shown that carbonates are stable along the geothermal gradient down to Earth's lower mantle conditions, changing their crystal structure and related properties. Subducted carbonates may also react with silicates to form new phases. These reactions will redistribute elements such as calcium (Ca), magnesium (Mg), iron (Fe) and carbon in the form of carbon dioxide (CO2), but also trace elements that are carried by the carbonates. The trace elements of most interest are strontium (Sr) and rare earth elements (REE), which have been found to be important constituents in the composition of the primitive lower mantle and in mineral inclusions found in super-deep diamonds. However, the stability of carbonates in the presence of mantle silicates at relevant temperatures is far from well understood. Related to this, very little is known about the distribution processes of trace elements between carbonates and mantle silicates. To shed light on these processes, we studied reactions between Sr- and REE-containing CaCO3 and Mg/Fe-bearing silicates of the system (Mg,Fe)2SiO4 - (Mg,Fe)SiO3 at high pressure and high temperature using synchrotron-radiation-based μ-X-ray diffraction (μ-XRD) and μ-X-ray fluorescence (μ-XRF) with μm-resolution in a laser-heated diamond anvil cell. X-ray diffraction is used to derive the structural changes of the phase reactions whereas X-ray fluorescence gives information on the chemical changes in the sample. In-situ experiments at high pressure and high temperature were performed at beamline P02.2 at PETRA III (Hamburg, Germany) and at beamline ID27 at ESRF (Grenoble, France). 
In addition to μ-XRD and μ-XRF, ex-situ measurements were made on the recovered sample material using transmission electron microscopy (TEM) and provided further insights into the reaction kinetics of carbonate-silicate reactions.
Our investigations show that CaCO3 is unstable in the presence of mantle silicates above 1700 K, reacting to form magnesite plus CaSiO3-perovskite. In addition, we observed that a high iron content in the carbonate-silicate system favours dolomite formation during the reaction. The subduction of natural carbonates with significant amounts of Sr motivated a comprehensive investigation of the stability not only of CaCO3 phases in contact with mantle silicates but also of SrCO3 (and of Sr-bearing CaCO3). We found that SrCO3 reacts with (Mg,Fe)SiO3-perovskite to form magnesite and obtained evidence for the formation of SrSiO3-perovskite.
To complement our study on the stability of SrCO3 at conditions of the Earth's lower mantle, we performed powder X-ray diffraction and single crystal X-ray diffraction experiments at ambient temperature and up to 49 GPa. We observed a transformation from SrCO3-I into a new high-pressure phase SrCO3-II at around 26 GPa with Pmmn crystal structure and a bulk modulus of 103(10) GPa. This information is essential to fully understand the phase behaviour and stability of carbonates in the Earth's lower mantle and to elucidate the possibility of introducing Sr into mantle silicates by carbonate-silicate reactions.
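Bulk moduli such as the 103(10) GPa reported above are commonly obtained by fitting pressure-volume data to an equation of state; one standard choice (stated here as general background, not necessarily the fit used in this work) is the third-order Birch-Murnaghan EOS:

```latex
P(V) = \frac{3K_0}{2}\left[\left(\frac{V_0}{V}\right)^{7/3} - \left(\frac{V_0}{V}\right)^{5/3}\right]
\left\{1 + \frac{3}{4}\left(K_0' - 4\right)\left[\left(\frac{V_0}{V}\right)^{2/3} - 1\right]\right\}
```

Here $K_0$ is the zero-pressure bulk modulus, $V_0$ the zero-pressure unit-cell volume, and $K_0'$ the pressure derivative of the bulk modulus; fitting $P$-$V$ data from compression experiments yields $K_0$ and its uncertainty.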
Simultaneous recording of μ-XRD and μ-XRF in the μm-range over the heated areas provides spatial information not only about phase reactions but also on the elemental redistribution during the reactions. A comparison of the spatial intensity distribution of the XRF signal before and after heating indicates a change in the elemental distribution of Sr and an increase in Sr-concentration was found around the newly formed SrSiO3-perovskite. With the help of additional TEM analyses on the quenched sample material the elemental redistribution was studied at a sub-micrometer scale. Contrary to expectations from combined μ-XRD and μ-XRF measurements, we found that La and Eu were not incorporated into the silicate phases, instead they tend to form either isolated oxide phases (e.g. Eu2O3, La2O3) or hydroxyl-bastnäsite (La(CO3)(OH)). In addition, we observed the transformation from (Mg,Fe)SiO3-perovskite to low-pressure clinoenstatite during pressure release. The monoclinic structure (P21/c) of this phase allows the incorporation of Ca as shown by additional EDX analyses and, to a minor extent, Sr too.
Based on our experiments, we can conclude that a detection of the trace elements in-situ at high pressure and high temperature remains challenging. However, our first findings imply that silicates may incorporate the trace elements provided by the carbonates and indicate that carbonates may have a major effect on the trace element contents of mantle phases.
In the present study, we employ the angle-resolved photoemission spectroscopy (ARPES) technique to study the electronic structure of topological states of matter. In particular, the so-called topological crystalline insulators (TCIs) Pb1-xSnxSe and Pb1-xSnxTe, and the Mn-doped Z2 topological insulators (TIs) Bi2Te3 and Bi2Se3. The Z2 class of strong topological insulators is protected by time-reversal symmetry and is characterized by an odd number of metallic Dirac type surface states in the surface Brillouin zone. The topological crystalline insulators on the other hand are protected by the individual crystal symmetries and exhibit an even number of Dirac cones.
The topological properties of the lead tin chalcogenide topological crystalline insulators can be tuned by temperature and composition. Here, we demonstrate that Bi-doping of the Pb1-xSnxSe(111) epilayers induces a quantum phase transition from a topological crystalline insulator to a Z2 topological insulator. This occurs because Bi-doping lifts the fourfold valley degeneracy in the bulk. As a consequence, a gap appears at the Γ̄ point, while the three Dirac cones at the M̅ points of the surface Brillouin zone remain intact. We interpret this new phase transition as being caused by lattice distortion. Our findings extend the topological phase diagram enormously and make strong topological insulators switchable by distortions or electric fields. In contrast, bulk Bi doping of epitaxial Pb1-xSnxTe(111) films induces a giant Rashba splitting at the surface that can be tuned by the doping level. Tight-binding calculations identify its origin as Fermi level pinning by trap states at the surface.
Magnetically doped topological insulators enable the quantum anomalous Hall effect (QAHE), which provides quantized edge states for lossless charge transport applications. The edge states are hosted by a magnetic energy gap at the Dirac point which had not previously been observed experimentally. Our low-temperature ARPES studies unambiguously reveal the magnetic gap of Mn-doped Bi2Te3. Our analysis shows a gap size below Tc five times larger than theoretically predicted. We assign this enhancement to a remarkable structure modification induced by Mn doping. Instead of a disordered impurity system, a self-organized alternating sequence of MnBi2Te4 septuple and Bi2Te3 quintuple layers is formed. This enhances the wave-function overlap and gives rise to a large magnetic gap. Mn-doped Bi2Se3 forms a similar heterostructure, but only a nonmagnetic gap is observed in this system. This correlates with the difference in magnetic anisotropy due to the much larger spin-orbit interaction in Bi2Te3 compared to Bi2Se3. These findings provide crucial insights for pushing lossless transport in topological insulators towards room-temperature applications.
The Willmore functional maps an immersed surface in a Riemannian manifold to its total squared mean curvature. Finding closed surfaces that minimize the Willmore energy, or more generally finding critical surfaces, is a classic problem of differential geometry.
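In one common normalization (the constant in front varies between authors), the Willmore energy of a closed immersed surface $\Sigma$ reads:

```latex
\mathcal{W}(\Sigma) = \frac{1}{4}\int_{\Sigma} H^{2}\, \mathrm{d}A,
```

where $H = \kappa_1 + \kappa_2$ is the sum of the principal curvatures and $\mathrm{d}A$ the area element. With this convention, for surfaces in $\mathbb{R}^3$ every round sphere attains the minimum value $4\pi$.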
In this thesis we will develop the concept of generalized Willmore functionals for surfaces in Riemannian manifolds. We are guided by models in mathematical physics, such as the Hawking energy of general relativity and the bending energies for thin membranes.
We prove the existence of minimizers under area constraint for these generalized Willmore functionals in a suitable class of generalized surfaces. In particular, we construct minimizers of the bending energy mentioned above for prescribed area and enclosed volume.
Furthermore, we prove that critical surfaces of generalized Willmore functionals with prescribed area are smooth away from finitely many points. These results, and those that follow, build on the existing theory for the Willmore functional.
This general discussion is succeeded by a detailed analysis of the Hawking energy. In the context of general relativity the surrounding manifold describes the space at a given time, hence we strive to understand the interplay between the Hawking energy and the ambient space. We characterize points in the surrounding manifold for which there are small critical spheres with prescribed area in any neighborhood. These points are interpreted as concentration points of the Hawking energy.
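For orientation, the Hawking energy of a closed surface \(\Sigma\) is usually written, in its Riemannian form (sign and constant conventions vary in the literature), as

```latex
m_{H}(\Sigma) \;=\; \sqrt{\frac{|\Sigma|}{16\pi}}
\left( 1 \;-\; \frac{1}{16\pi} \int_{\Sigma} H^{2}\,\mathrm{d}\mu \right),
```

so it involves the same curvature integral as the Willmore functional, which is what places it among the generalized Willmore functionals considered in this thesis.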
Additionally, we calculate an expansion of the Hawking energy on small, round spheres. This allows us to identify a kind of energy density of the Hawking energy.
It should be mentioned that our results stand in contrast to previous expansions of the Hawking energy. However, those expansions were obtained on spheres along the light cone at a given point, and at this point it is not clear how to explain the discrepancy.
Finally, we consider asymptotically Schwarzschild manifolds. They are a special case of asymptotically flat manifolds, which serve as models for isolated systems. The Schwarzschild spacetime itself is a classical solution to the Einstein equations and yields a simple description of a black hole.
In these asymptotically Schwarzschild manifolds we construct a foliation of the exterior region by critical spheres of the Hawking energy with prescribed large area. This foliation can be seen as a generalized notion of the center of mass of the isolated system. Additionally, the Hawking energy grows along the foliation as the area of the surfaces grows.
Since the golden era of antibiotics, natural products have been of ever-growing interest to both basic research and the applied sciences, as they are the main source of new bioactive compounds delivering lead structures for new pharmaceuticals with potent antibiotic, anti-inflammatory, or anti-cancer activities. Alongside the technological advances in high-throughput genome sequencing and the better understanding of the general organization of the modular biosynthetic assembly lines of secondary metabolites, there has also been a shift from wet-lab screening of active cell extracts towards algorithm-based in silico screening for new natural product biosynthetic gene clusters (BGCs). Although the increasing availability of full genome sequences has revealed that non-ribosomal peptide synthetases (NRPS), polyketide synthases (PKS), and ribosomally synthesized and post-translationally modified peptides (RiPPs) can be found in all three kingdoms of life, certain phyla such as actinobacteria and cyanobacteria show a very high density of these secondary metabolite BGCs.
The facultative symbiotic, N2-fixing model organism N. punctiforme PCC73102 is a terrestrial type IV cyanobacterium that not only dedicates a very large fraction of its genome to secondary metabolite production but is also amenable to genetic modification. AntiSMASH analysis of the genome showed that sixteen potential secondary metabolite BGCs are encoded in N. punctiforme, but until now only two compounds had been assigned to their respective BGCs, leaving the remaining fourteen orphan. This makes the organism a perfect subject for the establishment of a novel combinatorial genomic mining approach for the detection of new natural products.
In the course of this study, a combinatorial approach of genomic mining, independent monitoring techniques, and alteration of cultivation conditions led to new insights into cyanobacterial natural product biosynthesis and ultimately to the description of a novel compound produced by N. punctiforme. With the generation and investigation of a reporter strain library consisting of CFP-producing transcriptional reporter mutants for every predicted secondary metabolite BGC of N. punctiforme, it could be shown that natural product expression is in fact not silent for all those BGCs for which no compound can be detected. Instead, several distinct expression patterns could be described, highlighting that secondary metabolite production is under tight regulation and that only a minor fraction of these BGCs is in fact silent under standard laboratory conditions. Furthermore, increasing light intensity and carbon dioxide availability and cultivating N. punctiforme to very high cell densities had a tremendous impact on the overall metabolic activity of the organism. Investigation of extracts of cells cultivated at high density ultimately led to the detection of a so far undescribed set of microviridins with unusually extended peptide sequences, named Microviridin N3 – N9. Both cultivation of the transcriptional reporter mutants and RT-qPCR-based detection of secondary metabolite BGC transcription levels revealed that in fact 50% of N. punctiforme's natural product BGCs are upregulated under high-cell-density conditions. In contrast to this very broad response, co-cultivation of N. punctiforme in chemical or physical contact with an N-deprived host plant (Blasia pusilla) led to a very specific upregulation of two natural product BGCs, namely RIPP3 and RIPP4. Although this response was confirmed by various independent monitoring techniques and considerable analytical effort was spent, no compound could be assigned to either of these BGCs.
This study is the first in-depth systematic investigation of a cyanobacterial secondary metabolome by a combinatorial approach of genome mining and independent monitoring techniques, and it can serve as a new strategic approach to gain further insight into natural product synthesis in various organisms. Although there are individual well-described examples of regulated secondary metabolites, such as the cell differentiation factor PatS in Anabaena sp. strain PCC 7120, the level and extent of regulation observed in this study is unprecedented, and understanding these mechanisms might be the key to streamlining natural product discovery. However, the results of this study also highlight that induction of secondary metabolite BGCs is not the real challenge. Instead, the new insights point towards analytical issues being a severe hurdle, and finding reliable strategies to overcome these problems may equally drive natural product discovery.
Giant unilamellar vesicles are an important tool in today's experimental efforts to understand the structure and behaviour of biological cells. Their simple structure allows the isolation of the physical elastic properties of the lipid membrane. A central physical property is the bending energy of the membrane, since the many different shapes of giant vesicles can be obtained by minimizing the bending energy. In the spontaneous curvature model, the bending energy is a function of the bending rigidity as well as the mean curvature and an additional parameter called the spontaneous curvature, which describes an intrinsic preference of the lipid bilayer to bend towards one side or the other. The spontaneous and mean curvature are local properties of the membrane.
Additional constraints arise from the conservation of the membrane surface area and the enclosed volume, which are global properties.
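In a common notation for the spontaneous curvature model (prefactor conventions vary; this is a sketch of the standard form), the bending energy to be minimized reads

```latex
E_{\mathrm{be}} \;=\; 2\kappa \int_{A} \left( M - m \right)^{2} \mathrm{d}A ,
```

with bending rigidity \(\kappa\), local mean curvature \(M\), and spontaneous curvature \(m\), minimized subject to the global constraints of fixed membrane area \(A\) and fixed enclosed volume \(V\).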
In this thesis the spontaneous curvature model is used to explain the experimental observation of a periodic shape oscillation of a giant unilamellar vesicle that was filled with a protein complex that periodically binds to and unbinds from the membrane.
By assuming that the binding of the proteins to the membrane induces a change in the spontaneous curvature, the experimentally observed shapes could be explained successfully. This involves the numerical solution of the differential equations obtained from the minimization of the bending energy under the area and volume constraints, the so-called shape equations. Vice versa, this approach can be used to estimate the spontaneous curvature from experimentally measurable quantities.
The second topic of this thesis is the analysis of concentration gradients in rigid conic membrane compartments. Gradients of an ideal gas due to gravity and gradients generated by the directed stochastic motion of molecular motors along a microtubule were considered. For the ideal gas, the free energy and the bending energy could be calculated analytically. In the case of the non-equilibrium system with molecular motors, the characteristic length of the density profile, the jam length, and its dependence on the opening angle of the conic compartment were calculated in the mean-field limit.
The mean field results agree qualitatively with stochastic particle simulations.
Lattice dynamics (2020)
In this thesis I summarize my contribution to the research field of ultrafast structural dynamics in condensed matter. It consists of 17 publications that cover the complex interplay between electron, magnon, and phonon subsystems in solid materials and the resulting lattice dynamics after ultrafast photoexcitation. The investigation of such dynamics is necessary for the physical understanding of processes in materials that might become important in the future as functional materials for technological applications, for example in data storage, information processing, sensors, or energy harvesting.
In this work I present ultrafast x-ray diffraction (UXRD) experiments based on the optical pump – x-ray probe technique, revealing the time-resolved lattice strain. To study these dynamics, the samples (mainly thin-film heterostructures) are excited by femtosecond near-infrared or visible light pulses. The induced strain dynamics caused by the stresses of the excited subsystems are measured in a pump-probe scheme with x-ray diffraction (XRD) as a probe. The UXRD setups used during my thesis are a laser-driven table-top x-ray source and large-scale synchrotron facilities with dedicated time-resolved diffraction setups. The UXRD experiments provide quantitative access to heat reservoirs in nanometric layers and monitor the transient responses of these layers with coupled electron, magnon, and phonon subsystems. In contrast to optical probes, UXRD provides material-specific information that is inaccessible to optical light, which cannot distinguish between the multiple layers lying within its penetration depth.
In addition, UXRD provides a layer-specific probe for layers buried in opaque heterostructures, allowing the energy flow to be studied. I extended this UXRD technique to obtain the driving stress profile by measuring the strain dynamics in an unexcited buried layer after excitation of the adjacent absorbing layers with femtosecond laser pulses. This enables the study of negative thermal expansion (NTE) in magnetic materials, which occurs due to the loss of magnetic order. Part of this work is the investigation of stress profiles that are the source of coherent acoustic phonon wave packets (hypersound waves). The spatiotemporal shape of these stress profiles depends on the energy distribution profile and the ability of the involved subsystems to produce stress. The evaluation of the UXRD data for rare-earth metals yields a stress profile that closely matches the optical penetration profile: in the paramagnetic (PM) phase, photoexcitation results in a quasi-instantaneous expansive stress of the metallic layer, whereas in the antiferromagnetic (AFM) phase a quasi-instantaneous contractive stress and a second contractive stress contribution rising on a 10 ps time scale add to the PM contribution. These two time scales are characteristic of the magnetic contribution and agree with related studies of the magnetization dynamics of rare-earth materials.
Several publications in this thesis demonstrate the scientific progress in the field of active strain control to drive a second excitation or engineer an ultrafast switch. These applications of ultrafast dynamics are necessary to enable control of functional material properties via strain on ultrafast time scales.
For this thesis I implemented upgrades of the existing laser-driven table-top UXRD setup in order to achieve an enhancement of x-ray flux to resolve single digit nanometer thick layers. Furthermore, I developed and built a new in-situ time-resolved magneto-optic Kerr effect (MOKE) and optical reflectivity setup at the laser-driven table-top UXRD setup to measure the dynamics of lattice, electrons and magnons under the same excitation conditions.
Research on novel and advanced biomaterials is an indispensable step towards their applications in desirable fields such as tissue engineering, regenerative medicine, cell culture, or biotechnology. The work presented here focuses on such a promising material: polyelectrolyte multilayer (PEM) composed of hyaluronic acid (HA) and poly(L-lysine) (PLL). This gel-like polymer surface coating is able to accumulate (bio-)molecules such as proteins or drugs and release them in a controlled manner. It serves as a mimic of the extracellular matrix (ECM) in composition and intrinsic properties. These qualities make the HA/PLL multilayers a promising candidate for multiple bio-applications such as those mentioned above. The work presented aims at the development of a straightforward approach for assessment of multi-fractional diffusion in multilayers (first part) and at control of local molecular transport into or from the multilayers by laser light trigger (second part).
The mechanism of loading and release is governed by the interaction of the bioactives with the multilayer constituents and by diffusion overall. The diffusion of a molecule in HA/PLL multilayers shows multiple fractions with different diffusion rates. Approaches that can assess the mobility of molecules in such a complex system are limited. This shortcoming motivated the design of the novel evaluation tool presented here.
The tool employs a simulation-based approach for evaluation of the data acquired by fluorescence recovery after photobleaching (FRAP) method. In this approach, possible fluorescence recovery scenarios are primarily simulated and afterwards compared with the data acquired while optimizing parameters of a model until a sufficient match is achieved. Fluorescent latex particles of different sizes and fluorescein in an aqueous medium are utilized as test samples validating the analysis results. The diffusion of protein cytochrome c in HA/PLL multilayers is evaluated as well.
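The simulate-then-compare idea behind such FRAP evaluation can be illustrated with a minimal toy model (a single-fraction 1D diffusion sketch, not the thesis tool; all parameters, grid sizes, and the grid-search fit are invented for illustration):

```python
import numpy as np

def simulate_recovery(D, n_steps=400, nx=200, dx=0.1, dt=0.001):
    """Simulate 1D FRAP: bleach the central region, then let it diffuse.

    Returns the mean intensity in the bleached region over time.
    Explicit finite differences; stable while D*dt/dx**2 <= 0.5.
    """
    c = np.ones(nx)
    bleach = slice(nx // 2 - 10, nx // 2 + 10)
    c[bleach] = 0.0                                    # photobleach the centre
    r = D * dt / dx**2
    curve = []
    for _ in range(n_steps):
        c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])  # diffusion step
        curve.append(c[bleach].mean())
    return np.array(curve)

# "Measured" recovery curve, generated with a known diffusion coefficient
true_D = 2.0
observed = simulate_recovery(true_D)

# Simulation-based fit: pick the candidate whose simulated curve matches best
candidates = np.linspace(0.5, 4.0, 36)
errors = [np.sum((simulate_recovery(D) - observed) ** 2) for D in candidates]
best_D = candidates[int(np.argmin(errors))]
print(f"recovered D = {best_D:.2f}")   # close to the true value 2.0
```

A multi-fractional version would superpose several such curves with different rates and fit all of them jointly, which is the harder problem the thesis tool addresses.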
This tool significantly broadens the possibilities of analysis of spatiotemporal FRAP data, which originate from multi-fractional diffusion, while striving to be widely applicable. This tool has the potential to elucidate the mechanisms of molecular transport and empower rational engineering of the drug release systems.
The second part of the work focuses on the fabrication of such a spatiotemporally controlled drug release system employing the HA/PLL multilayer. This release system comprises different layers of various functionalities that together form a sandwich structure. The bottom layer, which serves as a reservoir, is formed by an HA/PLL PEM deposited on a planar glass substrate. On top of the PEM, a layer of so-called hybrids is deposited. The hybrids consist of thermoresponsive poly(N-isopropylacrylamide) (PNIPAM)-based hydrogel microparticles with surface-attached gold nanorods. The layer of hybrids is intended to serve as a gate that controls the local molecular transport through the PEM–solution interface. The possibility of stimulating the molecular transport by near-infrared (NIR) laser irradiation is explored.
From several tested approaches for the deposition of hybrids onto the PEM surface, a drying-based approach was identified as optimal. Experiments examining the functionality of the fabricated sandwich at elevated temperature document the reversible volume phase transition of the PEM-attached hybrids while the sandwich remains stable. Further, the gold nanorods were shown to absorb light effectively in the tissue- and cell-friendly NIR spectral region, transducing the energy of light into heat. A rapid and reversible shrinkage of the PEM-attached hybrids was thereby achieved. Finally, dextran was employed as a model transport molecule. It loads into the PEM reservoir within a few seconds with a partition constant of 2.4, while it is released spontaneously in a slower, sustained manner. Local laser irradiation of the sandwich containing fluorescein isothiocyanate-tagged dextran leads to a gradual reduction of fluorescence intensity in the irradiated region.
The fabricated release system employs the renowned photoresponsivity of the hybrids in an innovative setting. The results of this research are a step towards a spatially controlled, on-demand drug release system and pave the way to spatiotemporally controlled drug release.
The approaches developed in this work have the potential to elucidate the molecular dynamics in ECM and to foster engineering of multilayers with properties tuned to mimic the ECM. The work aims at spatiotemporal control over the diffusion of bioactives and their presentation to the cells.
In recent decades, the demand for low-cost, area-wide mapping methods supporting higher-yield and more environmentally friendly management of agricultural land has grown strongly. Spectroscopic methods such as X-ray fluorescence analysis (XRF), Raman and gamma spectroscopy, and laser-induced breakdown spectroscopy (LIBS) are well suited for this purpose. Depending on how the respective method works, information on a wide variety of soil properties such as nutrient content, texture, and pH is obtained.
The aim of this work is the development of an online LIBS method for nutrient determination and the mapping of arable land. LIBS is a fast, simultaneous multi-element analysis in which a high-energy laser pulse is focused onto the sample surface, ablating sample material and converting it into a plasma. As the plasma cools, it emits radiation that reveals the elemental composition of the sample. This work was carried out within the subproject I4S (Intelligence for Soils) of the research program BonaRes (Soil as a Sustainable Resource for the Bioeconomy) of the German Federal Ministry of Education and Research (BMBF). A total of 651 soil samples from test fields at a wide range of sites across Germany were measured and evaluated and, for validation purposes, characterized with reference analytical methods such as inductively coupled plasma optical emission spectroscopy (ICP-OES) and wavelength-dispersive X-ray fluorescence analysis (WDXRF).
For quantification, the measurement parameters of the LIBS system were first optimized for the soil matrix, suitable emission lines were selected for the elements, and their detection limits were determined. It was found that absolute quantification based on a univariate approach is only possible to a limited extent because of strong matrix effects and the poor reproducibility of the plasma. Using a multivariate approach such as partial least squares regression (PLSR) for calibration yielded results of higher quality and lower measurement uncertainty for the nutrient elements than the univariate variant. The investigations showed that the multivariate model can be improved further by calibrating with a large number of well-characterized soils from different sites and soil types covering a wide concentration range. Principal component analysis (PCA) was used to classify the soils by texture. Furthermore, a calibration with loose soil material was established: despite the signal decrease, calibration curves of sufficient analytical quality could be obtained for the various nutrient elements.
For use in the field, the influence of grain size and moisture on the LIBS signal was also investigated. The different grain sizes have only a minor influence on the LIBS signal, and the calibration model can easily be adapted with suitable samples. The influence of moisture, by contrast, is considerably stronger and depends strongly on the soil type, so a separate calibration model for different moisture contents must be created for each soil type. PCA can be used to roughly estimate the soil moisture content and select the appropriate calibration.
This work provides essential information for real-time analysis of nutrient elements in the field by LIBS and makes an important contribution to an advanced and sustainable use of arable land.
Interactions involving biological interfaces such as lipid-based membranes are of paramount importance for all life processes. The same also applies to artificial interfaces to which biological matter is exposed, for example the surfaces of drug delivery systems or implants. This thesis deals with the two main types of interface interactions, namely (i) interactions between a single interface and the molecular components of the surrounding aqueous medium and (ii) interactions between two interfaces. Each type is investigated with regard to an important scientific problem in the fields of biotechnology and biology:
1.) The adsorption of proteins to surfaces functionalized with hydrophilic polymer brushes; a process of great biomedical relevance in context with harmful foreign-body-response to implants and drug delivery systems.
2.) The influence of glycolipids on the interaction between lipid membranes; a hitherto largely unexplored phenomenon with potentially great biological relevance.
Both problems are addressed with the help of (quasi-)planar, lipid-based model surfaces in combination with x-ray and neutron scattering techniques which yield detailed structural insights into the interaction processes. Regarding the adsorption of proteins to brush-functionalized surfaces, the first scenario considered is the exposure of the surfaces to human blood serum containing a multitude of protein species. Significant blood protein adsorption was observed despite the functionalization, which is commonly believed to act as a protein repellent. The adsorption consists of two distinct modes, namely strong adsorption to the brush grafting surface and weak adsorption to the brush itself. The second aspect investigated was the fate of the brush-functionalized surfaces when exposed to aqueous media containing immune proteins (antibodies) against the brush polymer, an emerging problem in current biomedical applications. To this end, it was found that antibody binding cannot be prevented by variation of the brush grafting density or the polymer length. This result motivates the search for alternative, strictly non-antigenic brush chemistries. With respect to the influence of glycolipids on the interaction between lipid membranes, this thesis focused on the glycolipids’ ability to crosslink and thereby to tightly attract adjacent membranes. This adherence is due to preferential saccharide-saccharide interactions occurring among the glycolipid headgroups. This phenomenon had previously been described for lipids with special oligo-saccharide motifs. Here, it was investigated how common this phenomenon is among glycolipids with a variety of more abundant saccharide-headgroups. It was found that glycolipid-induced membrane crosslinking is equally observed for some of these abundant glycolipid types, strongly suggesting that this under-explored phenomenon is potentially of great biological relevance.
Catalysis is one of the most effective tools for the highly efficient assembly of complex molecular structures. Nevertheless, it is mainly represented by transition-metal-based catalysts and is typically an energy-consuming process. Therefore, photocatalysis utilizing solar energy is an appealing approach to overcome these problems. A great alternative to classic transition-metal-based photocatalysts, carbon nitrides, a group of organic polymeric semiconductors, have already shown their efficiency in water splitting, CO2 reduction, and the degradation of organic pollutants. However, as shown in recent years, these materials also have great potential for the functionalization of complex organic molecules for synthetic needs.
This work addresses the challenge of developing an efficient system for heterogeneous organic photocatalysis, employing cheap and environmentally benign photocatalysts: carbon nitrides. Herein, the fundamental properties of these semiconductors are studied from the organic chemistry standpoint; inherent properties of carbon nitrides, such as the ability to accumulate electrons, are investigated in depth and their effect on the reaction outcome is established. Understanding of the electron-charging processes thus allowed for the synthesis of otherwise hardly accessible 1,3-diazetidines by tetramerization of benzylamines. Furthermore, the high electron capacity of potassium poly(heptazine imide) (K-PHI) made possible a multi-electron reduction of aromatic nitro compounds to bare or formylated anilines. Additionally, two deep eutectic solvents (DES) were designed as a sustainable reaction medium and reducing reagent for this reaction. Moreover, the high oxidation ability of the carbon nitride K-PHI is employed in the challenging oxidation of halide anions (Cl−, Br−) to accomplish electrophilic substitution in the aromatic ring; the possibility of utilizing NaCl solution (a seawater mimic) for the chlorination of electron-rich arenes was shown. Finally, light itself is used as a tool in the chromoselective photocatalytic oxidation of aromatic thiols and thioacetates to three different compounds, using UV, blue, and red LEDs.
All in all, this work enhances the understanding of the mechanisms of heterogeneous photocatalysis in synthetic organic reactions and is therefore a step towards sustainable methods of synthesis in organic chemistry.
Completely water-based systems are of interest for the development of novel materials for various reasons: on the one hand, they provide a benign environment for biological systems; on the other hand, they facilitate effective molecular transport in a membrane-free environment. In order to investigate the general potential of aqueous two-phase systems (ATPSs) for biomaterials and compartmentalized systems, various solid particles were applied to stabilize all-aqueous emulsion droplets. The target ATPS is prepared by mixing two aqueous solutions of water-soluble polymers, which turn biphasic when a critical polymer concentration is exceeded. Hydrophilic polymers with a wide range of molar masses, such as dextran and poly(ethylene glycol) (PEG), can therefore be applied. Solid particles adsorbed at the interface can be exceptionally efficient stabilizers, forming so-called Pickering emulsions, and nanoparticles can bridge the correlation length of the polymer solutions, making them the best option for water-in-water emulsions.
The first approach towards the investigation of ATPSs was conducted with all-aqueous dextran-PEG emulsions in the presence of poly(dopamine) particles (PDP) in Chapter 4. The water-in-water emulsions were formed with a PEG/dextran system utilizing PDP as stabilizers. The formed emulsions were studied by confocal laser scanning microscopy (CLSM), optical microscopy (OM), cryo-scanning electron microscopy (cryo-SEM), and tensiometry. The emulsions, stable for at least 16 weeks, were easily demulsified via dilution or surfactant addition. Furthermore, the solid PDP at the water-water interface were crosslinked in order to inhibit demulsification of the Pickering emulsion. Transmission electron microscopy (TEM) and scanning electron microscopy (SEM) were used to visualize the morphology of the PDP before and after crosslinking. PDP-stabilized water-in-water emulsions were utilized in the following Chapter 5 to form supramolecular compartmentalized hydrogels. Here, hydrogels were prepared in pre-formed water-in-water emulsions and gelled via α-cyclodextrin-PEG (α-CD-PEG) inclusion complex formation. The formed complexes were studied via X-ray powder diffraction (XRD), and the mechanical properties of the hydrogels were measured by oscillatory shear rheology. In order to verify the compartmentalized state and its triggered decomposition, hydrogels and emulsions were assessed via OM, SEM, and CLSM. The last chapter broadens the investigations from the previous two systems by utilizing various carbon nitrides (CN) as alternative stabilizers in ATPSs. CN introduces another way to trigger demulsification, namely irradiation with visible light. Therefore, emulsification and demulsification with various triggers were probed. The investigated all-aqueous multi-phase systems will act as models for the future fabrication of biocompatible materials, cell micropatterning, and the separation of compartmentalized systems.
Bank filtration is an effective water treatment technique and is widely adopted in Europe along major rivers. It is the process whereby surface water penetrates the riverbed, flows through the aquifer, and is then extracted by near-bank production wells. Along this subsurface flow passage, the water quality is improved by a series of beneficial processes. Long-term riverbank filtration also produces colmation layers on the riverbed. The colmation layer may act as a bioactive zone governed by biochemical and physical processes owing to its enrichment in microbes and organic matter. Its low permeability, however, may strongly limit surface water infiltration and further lead to a decreasing recoverable ratio of the production wells. Removal of the colmation layer is therefore a trade-off between treatment capacity and treatment efficiency. The goal of this Ph.D. thesis is to examine the temporal and spatial changes of water quality and quantity along the flow path of a hydrogeologically heterogeneous riverbank filtration site adjacent to an artificially reconstructed (bottom excavation and bank reconstruction) canal in Potsdam, Germany.
To quantify the changes in infiltration rate, travel time distribution, and thermal field brought about by the canal reconstruction, a three-dimensional flow and heat transport model was created with two scenarios: 1) with canal reconstruction and 2) without canal reconstruction. Overall, the model calibration results for both water heads and temperatures matched the field observations. In comparison to the model without reconstruction, the reconstruction model led to more water being infiltrated into the aquifer in that section, on average 521 m3/d, corresponding to around 9% of the total pumping rate. The subsurface travel-time distribution shifted substantially towards shorter travel times: flow paths with travel times <200 days increased by ~10% and those with <300 days by 15%. Furthermore, the thermal distribution in the aquifer showed that the seasonal variation in the reconstruction scenario reaches deeper and propagates further laterally.
By scatter plotting δ18O versus δ2H, the infiltrated river water could be differentiated from water flowing in the deep aquifer, which may contain remnant landside groundwater from further north. The increase in the river water contribution due to decolmation, in turn, could be shown by a Piper plot. Geological heterogeneity caused substantial spatial differences in redox zonation among the flow paths, both horizontally and vertically. A Wilcoxon rank test showed that the reconstruction changed the redox potential differently across the observation wells; however, given the small absolute concentration levels, these changes are relatively minor. The treatment efficiency for both organic and inorganic matter remained consistent after the reconstruction, except for ammonium. The inconsistent results for ammonium could be explained by changes in the cation exchange capacity (CEC) of the newly paved riverbed. Because the bed is new, it was not yet capable of retaining the newly produced ammonium by sorption, which led to the breakthrough of an ammonium plume. It is estimated that the peak of the ammonium plume will reach the most distant observation well before February 2024, although the peak concentration could be further dampened by sorption and diluted by the subsequent low-ammonium flow. The consistent DOC and SUVA levels suggest that there was no clear preference in organic matter removal along the flow path.
Most reading theories assume that readers aim at word centers for optimal information processing. During reading, however, saccade targeting turns out to be imprecise: saccades' initial landing positions often miss the word centers and show high variance, with an additional systematic error that is modulated by the distance from the launch site to the center of the target word. The performance of the oculomotor system, as reflected in the statistics of within-word landing positions, is nonetheless very robust and mostly affected by the spatial information available during reading. Hence, it is assumed that saccade generation is highly automated.
The main goal of this thesis is to explore the performance of the oculomotor system under various reading conditions in which the orthographic information and the reading direction were manipulated. Additionally, the challenges of interpreting eye movement data as a reflection of the oculomotor processes during reading are addressed.
Two experimental studies and one simulation study were conducted for this thesis, which resulted in the following main findings:
(i) Reading texts with orthographic manipulations leads to specific changes in the eye movement patterns, both in temporal and spatial measures. The findings indicate that the oculomotor control of eye movements during reading is dependent on reading conditions (Chapter 2 & 3).
(ii) Saccades' accuracy and precision can be simultaneously modulated under a reversed reading condition, supporting the assumption that the random and systematic oculomotor errors are not independent. By assuming that readers increase the precision of the sensory observation while maintaining their learned prior knowledge when the reading direction is reversed, a process-oriented Bayesian model for saccade targeting can account for the simultaneous reduction of both oculomotor errors (Chapter 2).
(iii) Plausible parameter values serving as proxies for the intended within-word landing positions can be estimated by using the maximum a posteriori estimator from Bayesian inference. Using the mean value of all observations as proxies is insufficient for studies focusing on the launch-site effect because the method exhibits the strongest bias when estimating the size of the effect. Mislocated fixations remain a challenge for the currently known estimation methods, especially when the systematic oculomotor error is large (Chapter 4).
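As a hedged illustration of the estimation idea in (iii) — not the thesis's actual model, whose prior and likelihood are not specified in this abstract — the maximum a posteriori estimate under a conjugate Gaussian prior and Gaussian observation noise is simply a precision-weighted average of the prior mean and the observed landing positions (all names and numbers below are hypothetical):

```python
import numpy as np

def map_landing_position(prior_mean, prior_var, obs, obs_var):
    """MAP estimate for a Gaussian prior combined with Gaussian observations.

    For conjugate Gaussians the posterior mean (which equals the MAP
    estimate) is the precision-weighted average of the prior mean and
    the observations; the posterior variance is the inverse total precision.
    """
    obs = np.asarray(obs, dtype=float)
    n = obs.size
    post_precision = 1.0 / prior_var + n / obs_var
    post_mean = (prior_mean / prior_var + obs.sum() / obs_var) / post_precision
    return post_mean, 1.0 / post_precision

# Toy example: prior belief that saccades aim at the word center
# (position 0.0 in letter units relative to the center), plus noisy
# observed within-word landing positions.
obs = [0.8, 1.1, 0.5, 0.9]
mu, var = map_landing_position(prior_mean=0.0, prior_var=1.0, obs=obs, obs_var=2.0)
print(mu)  # shrunk toward the prior mean, between 0.0 and mean(obs)
```

The shrinkage toward the prior is exactly why a plain mean of the observations can over- or underestimate launch-site effects, as noted in (iii).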
The results reported in this thesis highlight the role of the oculomotor system, together with underlying cognitive processes, in eye movements during reading. The modulation of oculomotor control can be captured through a precise analysis of landing positions.
It has frequently been observed that single emotional events are not only processed more efficiently but are also better remembered, forming longer-lasting memory traces than neutral material. However, when emotional information is perceived as part of a complex event, such as in the context of or in relation to other events and/or source details, the modulatory effects of emotion are less clear. The present work investigates how emotional contextual source information modulates the initial encoding and subsequent long-term retrieval of associated neutral material (item memory) and contextual source details (contextual source memory). To do so, a two-task experiment was used, consisting of an incidental encoding task, in which neutral objects were displayed over different contextual background scenes varying in emotional content (unpleasant, pleasant, and neutral), and a delayed retrieval task (1 week later), in which previously encoded objects and new ones were presented. In a series of studies, behavioral indices (Studies 2, 3, and 5), event-related potentials (ERPs; Studies 1-4), and functional magnetic resonance imaging (Study 5) were used to investigate whether emotional contexts can rapidly tune the visual processing of associated neutral information (Study 1) and modulate long-term item memory (Study 2), how different recognition memory processes (familiarity vs. recollection) contribute to these emotion effects on item and contextual source memory (Study 3), whether the emotional effects on item memory can also be observed during spontaneous retrieval (Study 4), and which brain regions underpin the modulatory effects of emotional contexts on item and contextual source memory (Study 5). In Study 1, it was observed that emotional contexts, by means of emotional associative learning, can rapidly alter the processing of associated neutral information. Neutral items associated with emotional contexts (i.e., emotional associates), compared to those associated with neutral contexts, showed enhanced perceptual and more elaborate processing after a single pairing, as indexed by larger amplitudes in the P100 and LPP components, respectively. Study 2 showed that emotional contexts produce longer-lasting memory effects, as evidenced by better item memory performance and larger ERP old/new differences for emotional associates. In Study 3, a mnemonic differentiation between item and contextual source memory was observed, which was modulated by emotion: item memory was driven by familiarity, independently of the emotional context during encoding, whereas contextual source memory was driven by recollection and was better for emotional material. As in Study 2, enhancing effects of emotional contexts on item memory were observed in ERPs associated with recollection processes. Likewise, for contextual source memory, a pronounced recollection-related ERP enhancement was observed exclusively for emotional contexts. Study 4 showed that the long-term recollection enhancement by emotional contexts on item memory can be observed even when retrieval is not explicitly attempted, as measured with ERPs, suggesting that the emotion-enhancing effects on memory are not tied to the task used during recognition but to the motivational relevance of the triggering event. In Study 5, it was observed that the enhancing effects of emotional contexts on item and contextual source memory involve stronger engagement of brain regions associated with memory recollection, including areas of the medial temporal lobe, posterior parietal cortex, and prefrontal cortex.
Taken together, these findings suggest that emotional contexts rapidly modulate the initial processing of associated neutral information as well as subsequent long-term item and contextual source memory. The enhanced memory effects of emotional contexts are strongly supported by recollection rather than familiarity processes, and are triggered both when retrieval is explicitly attempted and when it occurs spontaneously. These results provide new insights into the modulatory role of emotional information in the visual processing and long-term recognition memory of complex events. The present findings are integrated into current theoretical models, and future research directions are discussed.
Studies on the unsustainable use of groundwater resources are still considered incipient, since groundwater is frequently a poorly understood and poorly managed, devalued, and inadequately protected natural resource. Groundwater Recharge (GWR) is one of the most challenging elements to estimate, since it can rarely be measured directly and cannot easily be derived from existing data. To overcome these limitations, many hydro(geo)logists have combined different approaches to estimate large-scale GWR, namely: remote sensing products, such as the IMERG product; the water budget equation, also in combination with hydrological models; and Geographic Information Systems (GIS), using estimation formulas. For intermediate-scale GWR estimation, available approaches include non-invasive Cosmic-Ray Neutron Sensing (CRNS), wireless networks of local soil probes, and soil hydrological models such as HYDRUS. Accordingly, this PhD thesis aims, on the one hand, to demonstrate a GIS-based model coupling for estimating the GWR distribution on a large scale in tropical wet basins. On the other hand, it aims to use the time series from CRNS and invasive soil moisture probes to inversely calibrate the soil hydraulic properties and, based on this, to estimate the intermediate-scale GWR using a soil hydrological model. For this purpose, two tropical wet basins located in a complex sedimentary aquifer in the coastal Northeast region of Brazil were selected: the João Pessoa Case Study Area and the Guaraíra Experimental Basin. In the first area, several satellite products were used as input to the GIS-based water budget model for estimating the water balance components and GWR in 2016 and 2017. In the second area, point-scale measurements and CRNS data were used to determine the soil hydraulic properties and to estimate the GWR in the 2017-2018 and 2018-2019 hydrological years.
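The water budget equation mentioned above, in its simplest textbook form, balances recharge against precipitation, evapotranspiration, runoff, and the change in storage; a minimal sketch with hypothetical values (the thesis's actual formulation, input products, and additional terms are not detailed in this abstract):

```python
def groundwater_recharge(P, ET, Q, dS):
    """Simple water budget over a period (all terms in mm):
    recharge = precipitation - evapotranspiration - runoff - storage change."""
    return P - ET - Q - dS

# Hypothetical annual values in mm for a tropical wet basin
gwr = groundwater_recharge(P=1700.0, ET=1100.0, Q=300.0, dS=50.0)
print(gwr)  # 250.0 mm of recharge for the year
```

In a GIS-based implementation, each term would be a raster layer and the subtraction would be applied cell by cell to obtain the spatial GWR distribution.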
The resulting large- and intermediate-scale GWR values were then compared and validated against estimates obtained from groundwater table fluctuations. The GWR rates for the IMERG- and rain-gauge-based scenarios showed similar coefficients between 68% and 89%, similar mean errors between 30% and 34%, and slightly different biases between -13% and 11%. The GWR rates for the soil probe and CRNS soil moisture scenarios ranged from -5.87 to -61.81 cm yr⁻¹, corresponding to 5% and 38% of the precipitation. The mean GWR rates calculated on the large scale, based on remote sensing data, and on the intermediate scale, based on CRNS data, yielded similar results for the Podzol soil type, namely 17.87% and 17% of the precipitation. It is concluded that the proposed methodologies allowed the GWR over the study areas to be estimated realistically, which can be a ground-breaking step towards improving water management and decision-making in the Northeast of Brazil.
This thesis investigates the extent to which physics experiments elicit flow experience in learners. Flow experience is regarded as a source of motivation and as a pathway to joy and happiness. Increasing motivation in science subjects is important, particularly in view of the frequently cited shortage of skilled workers in scientific and technical professions: despite performance gains in international comparative tests, considerably fewer students in Germany want to take up such a profession than in other industrialized countries. It is therefore important to get students enthusiastic about science and technology subjects as early as possible and to generate flow experience especially in physics lessons, a subject that is often outright disliked.
Within this thesis, the flow experience of university students is examined in classical laboratory experiments and in FELS (Forschend-Entdeckendes Lernen mit dem Smartphone; inquiry-based learning with the smartphone) as learning environments. FELS is a learning environment adapted to the students' everyday world, in which they use smartphones to investigate their own surroundings experimentally.
It turns out that both classical laboratory experiments and smartphone-based experiments carried out in everyday settings generate flow experience. However, the smartphone-based experiments cause hardly any feelings of stress.
The results obtained in this work provide a first approach that should be extended by follow-up studies.
In this thesis, I examine different A-bar movement dependencies in Igbo, a Benue-Congo language spoken in southern Nigeria. Movement dependencies are found in constructions where an element is moved to the left edge of the clause to express information-structural categories, such as in questions, relativization, and focus. I show that these constructions in Igbo are very uniform from a syntactic point of view: they are built on two basic fronting operations, relativization and focus movement, and are biclausal. I further investigate several morphophonological effects that are found in these A-bar constructions. I propose that these effects are reflexes of movement that are triggered when an element is moved overtly in relativization or focus. This proposal helps to explain the tone patterns that have previously been assumed to be a property of relative clauses. The thesis thereby adds to the growing body of work on tonal reflexes of A-bar movement reported for African languages. It also provides an insight into the complementizer domain (C-domain) of Igbo.
In this dissertation, I describe the mechanisms involved in the establishment and evolution of magmatic plumbing systems. Magmatic plumbing systems play a key role in determining the style of volcanic activity, and recognizing their complexities can help in forecasting eruptions, especially in hazardous volcanic systems such as calderas. I explore the mechanisms of dike emplacement and intrusion geometry that shape magmatic plumbing systems beneath caldera-like topographies, and how their characteristics relate to the precursory activity of a volcanic eruption. For this purpose, I use scaled laboratory models to study the effect of stress field reorientation on a propagating dike induced by caldera topography. I construct these models using solid gelatin to mimic the elastic properties of the earth's crust, with a caldera on the surface. I inject water as the magma analog and track the evolution of the experiments through qualitative (geometry and stress evolution) and quantitative (displacement and strain computation) descriptions. The results show that a vertical dike deviates towards and beyond the caldera-like margin due to stress field reorientation beneath the caldera-like topography. The propagating intrusion forms a circumferential eruptive dike when the caldera-like depression is small, whereas a cone sheet develops beneath a large caldera-like topography.
To corroborate the results obtained from the experimental models, this thesis also describes the results of a case study utilizing seismic monitoring data associated with the unrest period of the 2015 phreatic eruption of Lascar volcano. Lascar has a crater with a small-scale caldera-like topography and exhibited a long-lasting anomalous evolution of the number of long-period (LP) events preceding the 2015 eruption. I apply seismic techniques to constrain the hypocentral locations of LP events and characterize their spatial distribution, obtaining an image of Lascar's plumbing system. I observe agreement among the shallow hypocentral locations obtained with four different seismic techniques; the cross-correlation technique nevertheless provides the best results. These results depict a plumbing system with a narrow sub-vertical deep conduit and a shallow hydrothermal system, where most LP events are located. These two regions are connected through an intermediate region of path divergence, whose geometry and orientation are likely influenced by stress reorientation due to the topographic effects of the caldera-like crater.
Finally, in order to further enhance the interpretations of the previous case study, the seismic data were analyzed in tandem with a complementary multiparametric monitoring dataset. This complementary study confirms that the anomalous LP activity occurred as a sign of unrest in the preparatory phase of the phreatic eruption. In addition, I show how changes observed in other monitored parameters made it possible to detect further signs of unrest in the shallow hydrothermal system. Overall, this study demonstrates that resolving complex geometric regions within the plumbing systems beneath volcanoes is fundamental to effectively forecasting eruptions that at first view appear to occur without any precursory activity.
Furthermore, through this research I show that combining observational and modelling methods allows one to obtain a more precise interpretation of volcanic processes.
Hydrological models are important tools for the simulation and quantification of the water cycle.
They therefore aid in the understanding of hydrological processes, prediction of river discharge, assessment of the impacts of land use and climate changes, or the management of water resources.
However, uncertainties associated with hydrological modelling are still large.
While significant research has been done on the quantification and reduction of uncertainties, there are still fields which have gained little attention so far, such as model structural uncertainties that are related to the process implementations in the models.
This holds especially true for complex process-based models in contrast to simpler conceptual models.
Consequently, the aim of this thesis is to improve the understanding of structural uncertainties with focus on process-based hydrological modelling, including methods for their quantification.
To identify common deficits of frequently used hydrological models and develop further strategies on how to reduce them, a survey among modellers was conducted.
It was found that there is a certain degree of subjectivity in the perception of modellers, for instance with respect to the distinction of hydrological models into conceptual groups.
It was further found that there are ambiguities on how to apply a certain hydrological model, for instance how many parameters should be calibrated, together with a large diversity of opinion regarding the deficits of models.
Nevertheless, evapotranspiration processes are often represented in a more physically based manner, while processes of groundwater and soil water movement are often simplified, which many survey participants saw as a drawback.
A large flexibility, for instance with respect to different alternative process implementations, or a small number of parameters that need to be calibrated, was generally seen as a strength of a model.
Flexible and efficient software, which is straightforward to apply, has been increasingly acknowledged by the hydrological community.
This work further elaborated on this topic in a twofold way.
First, a software package for semi-automated landscape discretisation has been developed, which serves as a tool for model initialisation.
This was complemented by a sensitivity analysis of important and commonly used discretisation parameters, of which the size of hydrological sub-catchments as well as the size and number of hydrologically uniform computational units appeared to be more influential than information considered for the characterisation of hillslope profiles.
Second, a process-based hydrological model has been implemented into a flexible simulation environment with several alternative process representations and a number of numerical solvers.
It turned out that, even though computation times are still long, today's enhanced computational capabilities, in combination with innovative methods for statistical analysis, allow the exploration of structural uncertainties even of complex process-based models, which has so far often been neglected by the modelling community.
In a further study it could be shown that process-based models may even be employed as tools for seasonal operational forecasting.
In contrast to statistical models, which are faster to initialise and to apply, process-based models produce more information in addition to the target variable, even at finer spatial and temporal scales, and provide more insights into process behaviour and catchment functioning.
However, the process-based model was much more dependent on reliable rainfall forecasts.
It seems unlikely that there exists a single best formulation for hydrological processes, even for a specific catchment.
This supports the use of flexible model environments with alternative process representations instead of a single model structure.
However, correlation and compensation effects between process formulations, their parametrisation, and other aspects such as numerical solver and model resolution, may lead to surprising results and potentially misleading conclusions.
In future studies, such effects should be more explicitly addressed and quantified.
Moreover, model functioning appeared to be highly dependent on the meteorological conditions and rainfall input generally was the most important source of uncertainty.
It is still unclear how this could be addressed, especially in the light of the aforementioned correlations.
The use of innovative data products, e.g. remote sensing data in combination with station measurements, together with efficient processing methods for improving the rainfall input and explicitly accounting for the associated uncertainties, is advisable to gain more insight and make hydrological simulations and predictions more reliable.
Pre-service physics teachers often have difficulties seeing the relevance of the content of the content knowledge courses they attend in their study; they regularly do not see the connection with the physics they need in their later profession as secondary school teachers. Lower perceived relevance, however, is connected to motivational problems, which lead to both a qualitative and a quantitative problem: not only is student drop-out related to motivation, but the students' level of conceptual understanding also suffers under this lower motivation.
In order to increase the perceived relevance of the problems that pre-service physics teachers have to solve in the courses Experimentalphysik 1 and 2, an intervention study was designed and implemented. In these content knowledge courses, first- and second-semester students attend lectures, carry out experiments, and solve problems on weekly problem sets that are discussed in tutorial sessions. The problems on a typical problem set, however, are mainly quantitative problems with no connection to school. In the intervention study, regular quantitative problems were used alongside two newly designed conceptual (qualitative) problem types. One of these types consists of conceptual problems without implicit or explicit school relevance; the other consists of problems based on school-related content knowledge. This content knowledge category describes knowledge that leads to a deeper understanding of school knowledge relevant for teachers: a teacher-specific content knowledge. A new model for this category, SRCK, has been conceptualised and operationalised as a cross-disciplinary model comprising the conceptual knowledge and skills necessary for this deeper understanding of content relevant to teaching at a secondary school.
During two semesters in both the courses Experimentalphysik 1 and 2 (N = 75 and N = 43 respectively) students had to solve the problems on the problem sets. At the start of every tutorial session, they were asked to rate all the problems with respect to perceived relevance and difficulty. Analyses show that the problems based on SRCK were perceived as more relevant than the regular, quantitative problems. However, this difference is only statistically significant for the course Experimentalphysik 2.
The SRCK problems make the connection between the content of the problems and school physics explicit and are therefore seen as more relevant. In Experimentalphysik 1, the content is not that distant from school physics, which might be why the students perceive all problem types as equally relevant. When we look only at the final third of the first semester, however, where more advanced subjects are discussed that are not necessarily part of secondary school physics, the SRCK problems are again seen as more relevant than the regular problems. We can therefore conclude that when the content is distant from school physics, the SRCK problems are perceived as more relevant than the regular problems. We do not see a statistically significant difference between the (conceptual) problems based on SRCK and the conceptual problems not based on SRCK (which therefore have no school relevance). This means we do not know whether the conceptual problems based on SRCK are more relevant because they are based on SRCK or simply because they are conceptual.
In order to find out what problem properties have an influence on the perceived relevance of these problems by pre-service teachers, an interview study with N = 7 pre-service teachers was conducted.
The interviews used the repertory grid technique, based on the personal construct theory of Kelly (1955). This technique makes it possible to elicit students' personal constructs: how do students determine for themselves how relevant a problem is to them? It allows us to capture their intuition or gut feeling. These personal constructs could then provide information about the problem properties that have a positive influence on perceived relevance.
Six categories of personal constructs were found that have a high similarity to relevance. According to the personal constructs generated in the interviews, physics problems are more relevant when they are more conceptual (rather than calculational), are close to everyday life, have a lower level of mathematical requirements, have content that is more school-relevant, give the students the feeling that they have learned something, and contain a situation that has to be analysed.
Of the six problem properties described above, one can be connected to the facets of SRCK: many problems based on SRCK contain a situation (e.g. a textbook with a simplified explanation, a student solution with an error) that has to be analysed.
The expectation is that problems that are based on the six properties described above would be perceived as more relevant to pre-service physics teachers.
Living cells rely on transport and interaction of biomolecules to perform their diverse functions. A powerful toolbox to study these highly dynamic processes in the native environment is provided by fluorescence fluctuation spectroscopy (FFS) techniques. In more detail, FFS takes advantage of the inherent dynamics present in biological systems, such as diffusion, to infer molecular parameters from fluctuations of the signal emitted by an ensemble of fluorescently tagged molecules. In particular, two parameters are accessible: the concentration of molecules and their transit times through the observation volume. In addition, molecular interactions can be measured by analyzing the average signal emitted per molecule - the molecular brightness - and the cross-correlation of signals detected from differently tagged species.
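The core idea of FFS — inferring particle number and transit time from signal fluctuations — rests on the autocorrelation of the fluorescence trace. A minimal, hedged sketch (synthetic, uncorrelated data that illustrates only the amplitude relation G(0) = 1/⟨N⟩ for Poissonian number fluctuations; real FCS curves additionally decay on the timescale of the diffusion through the focal volume):

```python
import numpy as np

def autocorrelation(F, max_lag):
    """Normalized fluorescence autocorrelation
    G(tau) = <dF(t) dF(t+tau)> / <F>^2, with dF = F - <F>."""
    F = np.asarray(F, dtype=float)
    dF = F - F.mean()
    denom = F.mean() ** 2
    return np.array([np.mean(dF[:len(F) - k] * dF[k:]) / denom
                     for k in range(max_lag + 1)])

# Synthetic trace: uncorrelated Poissonian occupation numbers with a mean
# of 5 molecules in the focal volume, so G(0) = Var(N)/<N>^2 = 1/<N> = 0.2.
rng = np.random.default_rng(0)
trace = rng.poisson(lam=5.0, size=200_000)
G = autocorrelation(trace, max_lag=3)
print(G[0])  # close to 0.2, i.e. the inverse mean particle number
```

This is why the correlation amplitude reports the concentration: fewer molecules in the observation volume produce relatively larger fluctuations and a higher G(0).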
In the present work, several FFS techniques were implemented and applied in different biological contexts. In particular, scanning fluorescence correlation spectroscopy (sFCS) was performed to measure protein dynamics and interactions at the plasma membrane (PM) of cells, and number and brightness (N&B) analysis was used to spatially map molecular aggregation. To account for technical limitations and sample-related artifacts, e.g. detector noise, photobleaching, or background signal, several correction schemes were explored. In addition, sFCS was combined with spectral detection and higher-moment analysis of the photon count distribution to resolve multiple species at the PM.
Using scanning fluorescence cross-correlation spectroscopy and cross-correlation N&B, the interactions of amyloid precursor-like protein 1 (APLP1), a synaptic membrane protein, were investigated. It is shown, for the first time directly in living cells, that APLP1 undergoes specific interactions at cell-cell contacts. It is further demonstrated that zinc ions induce the formation of large APLP1 clusters that enrich at contact sites and bind to clusters on the opposing cell. Altogether, these results provide direct evidence that APLP1 is a zinc-ion-dependent neuronal adhesion protein.
In the context of APLP1, discrepancies of oligomeric state estimates were observed, which were attributed to non-fluorescent states of the chosen red fluorescent protein (FP) tag mCardinal (mCard). Therefore, multiple FPs and their performance in FFS based measurements of protein interactions were systematically evaluated. The study revealed superior properties of monomeric enhanced green fluorescent protein (mEGFP) and mCherry2. Furthermore, a simple correction scheme allowed unbiased in situ measurements of protein oligomerization by quantifying non-fluorescent state fractions of FP tags. The procedure was experimentally confirmed for biologically relevant protein complexes consisting of up to 12 monomers.
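The kind of correction involved can be illustrated with a generic binomial dark-state model (a sketch, not necessarily the exact scheme used in the thesis): if each subunit of an n-mer is fluorescent with probability p, moment-based brightness analysis yields an apparent brightness of 1 + (n-1)p relative to a monomer, so the measured brightness underestimates the true oligomeric state unless p is quantified.

```python
from math import comb

def apparent_brightness(n, p):
    """Relative molecular brightness of an n-mer whose subunits are each
    fluorescent with probability p (a fraction 1-p sits in a dark state).

    With k ~ Binomial(n, p) fluorescent subunits, moment-based brightness
    analysis measures eps_app/eps = <k^2>/<k>, which equals 1 + (n-1)*p.
    """
    mean_k = n * p
    mean_k2 = sum(k**2 * comb(n, k) * p**k * (1 - p)**(n - k)
                  for k in range(n + 1))
    return mean_k2 / mean_k

# A dimer with a 60% fluorescent-state fraction appears only 1.6x as
# bright as a monomer instead of 2x.
print(apparent_brightness(2, 0.6))   # 1.6, matching the closed form 1 + (n-1)p
```

Inverting this relation — measuring p for the FP tag and solving for n — is the essence of such a correction scheme.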
In the last part of this work, fluorescence correlation spectroscopy (FCS) and single particle tracking (SPT) were used to characterize diffusive transport dynamics in a bacterial biofilm model. Biofilms are surface adherent bacterial communities, whose structural organization is provided by extracellular polymeric substances (EPS) that form a viscous polymer hydrogel. The presented study revealed a probe size and polymer concentration dependent (anomalous) diffusion hindrance in a reconstituted EPS matrix system caused by polymer chain entanglement at physiological concentrations. This result indicates a meshwork-like organization of the biofilm matrix that allows free diffusion of small particles, but strongly hinders diffusion of larger particles such as bacteriophages. Finally, it is shown that depolymerization of the matrix by phage derived enzymes rapidly facilitated free diffusion. In the context of phage infections, such enzymes may provide a key to evade trapping in the biofilm matrix and promote efficient infection of bacteria. In combination with phage application, matrix depolymerizing enzymes may open up novel antimicrobial strategies against multiresistant bacterial strains, as a promising, more specific alternative to conventional antibiotics.
The percolation process, which is intrinsically a phase transition process near the critical point, is ubiquitous in nature. Its applications span a wide spectrum of natural phenomena, ranging from forest fires, the spread of contagious diseases, and social behaviour dynamics to mathematical finance, the formation of bedrocks, and biological systems. The topology generated by the percolation process near the critical point is a random (stochastic) fractal. It is fundamental to percolation theory that near the critical point a unique infinite fractal structure, the infinite cluster, emerges. As de Gennes suggested, the properties of the infinite cluster can be deduced by studying the dynamical behaviour of a random walk process taking place on it; he coined the term "the ant in the labyrinth" for this problem. The random walk process on such an infinite fractal cluster exhibits subdiffusive dynamics in the sense that the mean squared displacement grows as ~t^(2/d_w), where d_w, called the fractal dimension of the random walk path, is greater than 2. Thus, the random walk process on the infinite cluster is classified as a process exhibiting the properties of anomalous diffusion. Yet near the critical point the infinite cluster is not the sole emergent topology; it coexists with other clusters whose size is finite. Though finite, on specific length scales these clusters exhibit fractal properties as well. In this work, it is assumed that the random walk process can take place on these finite-size objects as well. This assumption requires one to address the non-equilibrium initial condition. Owing to the lack of knowledge of the propagator of the random walk process in stochastic random environments, a phenomenological correspondence between the renowned Ornstein-Uhlenbeck process and the random walk process on finite-size clusters is established.
It is elucidated that, when an ensemble of these finite-size clusters and the infinite cluster is considered, the anisotropy and size of the finite clusters cause the mean squared displacement and its time-averaged counterpart to grow in time as ~t^((d + df(τ - 2))/dw), where d is the embedding Euclidean dimension, df is the fractal dimension of the infinite cluster, and τ, the Fisher exponent, is a critical exponent governing the power-law distribution of the finite cluster sizes. Moreover, it is demonstrated that, even though the random walk process on a specific finite-size cluster is ergodic, it exhibits a persistent non-ergodic behaviour when an ensemble of finite-size clusters and the infinite cluster is considered.
This cumulative thesis is concerned with the evolution of geomagnetic activity since the beginning of the 20th century, that is, the time-dependent response of the geomagnetic field to solar forcing. The focus lies on the description of the magnetospheric response field at ground level, which is particularly sensitive to the ring current system, and an interpretation of its variability in terms of the solar wind driving. Thereby, this work contributes to a comprehensive understanding of long-term solar-terrestrial interactions.
The common basis of the presented publications is a reanalysis of vector magnetic field measurements from geomagnetic observatories located at low and middle geomagnetic latitudes. In the first two studies, new ring-current-targeting geomagnetic activity indices are derived, the Annual and Hourly Magnetospheric Currents indices (A/HMC). Compared to existing indices (e.g., the Dst index), they not only extend the covered period by at least three solar cycles but also constitute a qualitative improvement concerning the absolute index level and the ~11-year solar cycle variability. The analysis of A/HMC shows that (a) the annual geomagnetic activity experiences an interval-dependent trend with an overall linear decline during 1900–2010 of ~5 %; (b) the average trend-free activity level amounts to ~28 nT; (c) the solar-cycle-related variability shows amplitudes of ~15–45 nT; and (d) the activity level for geomagnetically quiet conditions (Kp < 2) lies slightly below 20 nT. The plausibility of the last three points is ensured by comparison to independent estimates based either on magnetic field measurements from LEO satellite missions (since the 1990s) or on the modeling of geomagnetic activity from solar wind input (since the 1960s). An independent validation of the long-term trend is problematic, mainly because the sensitivity of the locally measured geomagnetic activity depends on geomagnetic latitude. Consequently, A/HMC is neither directly comparable to global geomagnetic activity indices (e.g., the aa index) nor to the partly reconstructed open solar magnetic flux, which would require a homogeneous response of the ground-based measurements to the interplanetary magnetic field and the solar wind speed.
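At its core, a ring-current index of this kind is a latitude-normalized average of baseline-subtracted horizontal-field disturbances. The Dst-style sketch below uses entirely hypothetical station values and invented station names; it illustrates the averaging principle only, not the A/HMC derivation itself.

```python
import math

# Hypothetical hourly disturbances ΔH (nT, quiet-time baseline already
# subtracted) at four low/mid-latitude observatories, paired with their
# geomagnetic latitudes in degrees; station names are invented
disturbance = {
    "OBS1": (-45.0, 21.0),
    "OBS2": (-60.0, -9.5),
    "OBS3": (-38.0, 33.0),
    "OBS4": (-52.0, -20.0),
}

# Dst-style averaging: each station's ΔH is scaled by 1/cos(latitude) so
# that a uniform ring-current field contributes equally at every station
index = sum(dh / math.cos(math.radians(lat))
            for dh, lat in disturbance.values()) / len(disturbance)
print(round(index, 1), "nT")   # a moderately disturbed hour
```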
The last study combines a consistent, HMC-based identification of geomagnetic storms from 1930–2015 with an analysis of the corresponding spatial (magnetic-local-time-dependent) disturbance patterns. Among other findings, the disturbances at dawn and dusk, particularly their evolution during the storm recovery phases, are shown to be indicative of the solar wind driving structure (Interplanetary Coronal Mass Ejections vs. Stream or Co-rotating Interaction Regions), which enables a backward prediction of the storm driver classes. The results indicate that ICME-driven geomagnetic storms have decreased since 1930, consistent with the concurrent decrease of HMC. Of the compiled follow-up studies, the inclusion of measurements from high-latitude geomagnetic observatories into the third study’s framework seems most promising at this point.
A large body of research now supports the presence of both syntactic and lexical predictions in sentence processing. Lexical predictions, in particular, are considered to indicate a deep level of predictive processing that extends past the structural features of a necessary word (e.g. noun), right down to the phonological features of the lexical identity of a specific word (e.g. /kite/; DeLong et al., 2005). However, evidence for lexical predictions typically focuses on predictions in very local environments, such as the adjacent word or words (DeLong et al., 2005; Van Berkum et al., 2005; Wicha et al., 2004). Predictions in such local environments may be indistinguishable from lexical priming, which is transient and uncontrolled, and as such may prime lexical items that are not compatible with the context (e.g. Kukona et al., 2014). Predictive processing has been argued to be a controlled process, with top-down information guiding preactivation of plausible upcoming lexical items (Kuperberg & Jaeger, 2016). One way to distinguish lexical priming from prediction is to demonstrate that preactivated lexical content can be maintained over longer distances.
In this dissertation, separable German particle verbs are used to demonstrate that preactivation of lexical items can be maintained over multi-word distances. A self-paced reading time and an eye tracking experiment provide some support for the idea that particle preactivation triggered by a verb and its context can be observed by holding the sentence context constant and manipulating the predictability of the particle. Although evidence of an effect of particle predictability was only seen in eye tracking, this is consistent with previous evidence suggesting that predictive processing facilitates only some eye tracking measures, to which the self-paced reading modality may not be sensitive (Staub, 2015; Rayner, 1998). Interestingly, manipulating the distance between the verb and the particle did not affect reading times, suggesting that the surprisal-predicted faster reading times at long distance may only occur when the additional distance is created by material that adds information about the lexical identity of a distant element (Levy, 2008; Grodner & Gibson, 2005). Furthermore, the results provide support for models proposing that temporal decay is not a major influence on word processing (Lewandowsky et al., 2009; Vasishth et al., 2019).
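The surprisal measure referenced above (Levy, 2008) quantifies the processing cost of a word as the negative log probability of that word given its context. A toy bigram illustration (the counts are entirely hypothetical, chosen only to make the arithmetic transparent):

```python
import math

# Toy bigram counts (entirely hypothetical); in surprisal theory the cost
# of a word is -log2 P(word | context)
bigram_counts = {
    ("turned", "off"): 80,
    ("turned", "around"): 15,
    ("turned", "green"): 5,
}

def surprisal(prev, word):
    """Surprisal in bits of `word` given the preceding word `prev`."""
    total = sum(c for (p, _), c in bigram_counts.items() if p == prev)
    return -math.log2(bigram_counts[(prev, word)] / total)

# a highly predictable continuation carries little surprisal ...
print(surprisal("turned", "off"))     # -log2(0.80) ≈ 0.32 bits
# ... an unexpected one carries much more
print(surprisal("turned", "green"))   # -log2(0.05) ≈ 4.32 bits
```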
In the third and fourth experiments, event-related potentials were used as a method for detecting specific lexical predictions. In the initial ERP experiment, we found some support for the presence of lexical predictions when the sentence context constrained the number of plausible particles to a single particle. This was suggested by a frontal post-N400 positivity (PNP) that was elicited when a lexical prediction had been violated, but not by violations when more than one particle had been plausible. The results of this study were highly consistent with previous research suggesting that the PNP might be a much sought-after ERP marker of prediction failure (DeLong et al., 2011; DeLong et al., 2014; Van Petten & Luka, 2012; Thornhill & Van Petten, 2012; Kuperberg et al., 2019). However, a second experiment with a larger sample failed to replicate the effect, though it did suggest that the relationship of the PNP to predictive processing may not yet be fully understood. Evidence for long-distance lexical predictions was inconclusive.
The conclusion drawn from the four experiments is that preactivation of the lexical entries of plausible upcoming particles did occur and was maintained over long distances. The facilitatory effect of this preactivation at the particle site therefore did not appear to be the result of transient lexical priming. However, the question of whether this preactivation can also lead to lexical predictions of a specific particle remains unanswered. Of particular interest to future research on predictive processing is further characterisation of the PNP. Implications for models of sentence processing may be the inclusion of long-distance lexical predictions, or the possibility that preactivation of lexical material can facilitate reading times and ERP amplitude without commitment to a specific lexical item.
Molecular switches have constituted a growing field of research for decades. This dissertation focused on improving the thermal stability, the readout and the switchability of such molecular switches in complex environments by means of computational chemistry.
In the first project, the kinetics of the thermal E → Z isomerization, and thus the thermal stability, of an azobenzene derivative were investigated. For this purpose, density functional theory (DFT) was applied in combination with Eyring transition state theory (TST). The azobenzene derivative served as a simplified model for switching in a complex environment (here, metal-organic frameworks). Thermodynamic and kinetic quantities were computed under various influences, and good agreement with experiment was found. The method used here proved a suitable approach for predicting these quantities with reasonable accuracy.
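The Eyring TST rate expression underlying such thermal-stability estimates, k = (kB*T/h)*exp(-ΔG‡/(RT)), directly yields the half-life of the metastable isomer. A minimal sketch follows; the barrier heights and temperature are illustrative, not the values computed in the thesis.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

def half_life(dG_kJ_per_mol, T=298.15):
    """Eyring TST: k = (kB*T/h) * exp(-dG‡/(R*T)); returns t1/2 in seconds."""
    k = (KB * T / H) * math.exp(-dG_kJ_per_mol * 1e3 / (R * T))
    return math.log(2) / k

# illustrative barriers: around 100 kJ/mol the half-life falls in the
# hours-to-days regime relevant for thermally stable photoswitches
for dG in (90, 100, 110):
    print(dG, "kJ/mol ->", half_life(dG), "s")
```

The exponential dependence on the barrier is the practical point: a 10 kJ/mol change shifts the half-life by roughly two orders of magnitude at room temperature.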
In the second project, the readout of the switching states, in the form of the nonlinear optical (NLO) contrast, was investigated for the fulgimide class of molecules. The required dynamic hyperpolarizabilities, including electron correlation, were computed with an established scaling method. Several fulgimides were analysed, and many experimental findings could be confirmed. Moreover, the theoretical prediction for a further system suggested that extending the π-electron system in particular is a promising approach for improving NLO contrasts. The fulgimides thus possess useful properties and may in future find application as building blocks in photonic and optoelectronic devices.
In the third project, the E → Z isomerization of a quantum mechanically (QM) treated dimer in a molecular mechanics (MM) environment, and of two fluoroazobenzene monomers, was simulated by molecular dynamics. This allowed the switchability of azobenzene derivatives in a complex environment (here, self-assembled monolayers, SAMs) to be analysed. The QM/MM model accounted for both van der Waals interactions with the environment and electronic coupling (between the QM molecules only). Systematic investigations of the packing density were carried out. It was found that already at a molecular spacing of 4.5 Å the quantum yield (percentage of successful switching events) of the monomer is reached. The largest quantum yields were obtained for the two fluoroazobenzenes studied. The effects of molecular spacing and the influence of fluorine substituents on the dynamics were investigated in depth, paving the way for follow-up studies.
Within the research initiative "BonaRes – soil as a sustainable resource for the bioeconomy", funded by the German Federal Ministry of Education and Research, the subproject "I4S – integrated system for site-specific soil fertility management" is dedicated to developing an integrated system for site-specific management of soil fertility. To this end, a measuring platform for determining relevant soil properties and for the quantitative analysis of selected macro- and micronutrients is planned. In the first phase of the project, the main focus lies on calibrating and validating the various sensors for the soil matrix, on sampling in the field, and on planning and constructing the measuring platform. In the second phase, the various soil sensors will be installed on this platform, and models and decision algorithms will be developed to control fertilization and thereby improve soil functions.
The aim of the present work is the fundamental investigation and development of a robust online analysis based on energy-dispersive X-ray fluorescence spectroscopy (EDXRF) for quantifying selected macro- and micronutrients in soils, enabling cost-effective, large-area mapping of arable land. For the development of an online method, a state-of-the-art X-ray fluorescence measuring head was put into operation and its instrument parameters optimized for the soil matrix. The analytical figures of merit, such as precision and limits of detection, were determined for a selection of nutrient elements from aluminium to zinc. To obtain a calibration matched to the matrix as closely as possible, both certified reference materials (CRMs) and arable soils were used for calibration. Since one of the greatest drawbacks of X-ray fluorescence analysis is its susceptibility to matrix effects, the chemometric multivariate method of partial least squares regression (PLSR) was employed alongside classical univariate data evaluation. PLSR offers the advantage of compensating for matrix effects, yielding more robust calibration models. In addition, a principal component analysis (PCA) was carried out to identify similarities and outliers within the sample set. It was shown that the soils can be classified according to their texture as sand, silt, loam and clay.
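A PLSR calibration of this kind can be sketched on synthetic "spectra". The minimal NIPALS PLS1 implementation below (all data simulated; channel positions, noise levels and component count are invented, not the thesis's soil measurements) shows how a multivariate model predicts analyte content despite a drifting matrix background.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 60 samples x 50 channels; the analyte produces a
# Gaussian line at channel 25 on top of a drifting matrix background
n, m = 60, 50
c = rng.uniform(0, 10, n)                                 # concentrations
peak = np.exp(-0.5 * ((np.arange(m) - 25) / 3.0) ** 2)    # analyte line
background = rng.normal(0, 1, (n, m)).cumsum(axis=1) * 0.02
X = np.outer(c, peak) + background + rng.normal(0, 0.05, (n, m))

def pls1_fit(X, y, ncomp):
    """Minimal NIPALS PLS1; returns regression vector and centring offsets."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        tt = t @ t
        P.append(Xc.T @ t / tt)
        q.append(t @ yc / tt)
        Xc = Xc - np.outer(t, P[-1])        # deflate X
        yc = yc - t * q[-1]                 # deflate y
        W.append(w)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, X.mean(0), y.mean()

# calibrate on 40 samples, predict the remaining 20
B, xm, ym = pls1_fit(X[:40], c[:40], ncomp=3)
pred = (X[40:] - xm) @ B + ym
rmse = np.sqrt(np.mean((pred - c[40:]) ** 2))
print("test-set RMSE:", rmse)
```

The RMSE on the held-out samples is the same figure of merit used in the thesis to compare calibration models.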
Building on the results for ideal soil samples (air-dried samples with grain sizes < 0.5 mm pressed into pellets), sample preparation was progressively reduced in the course of this work and the influence of various parameters was investigated. These influencing factors include the density and homogeneity of the sample as well as grain-size effects and moisture. The calibration models were compared with one another on the basis of the RMSE (root mean square error) and with consideration of the residuals. To assess the quality of the models, they were validated with a test set. For this purpose, 662 soil samples from 15 different sites in Germany were available. Since the results for pressed pellets met the requirements for a later online analysis for the elements Al, Si, K, Ca, Ti, Mn, Fe and Zn, calibration models were subsequently built with loose soil samples. Here, too, good results were achieved, with sufficient limits of detection and a low mean deviation in the prediction of unknown test samples. The predictive ability of the multivariate PLSR proved better than that of the univariate data evaluation, especially for the elements Mn and Zn.
The investigated influence of moisture and grain size on the quantification of element contents was particularly evident for the lighter elements. Finally, a multivariate calibration accounting for these factors could be established for the elements Al to Zn, so that deployment on field soils should be possible, albeit with a higher measurement uncertainty to be taken into account. With a view to later sampling in the field, the difference between static and dynamic measurements was also examined; both variants proved usable. In conclusion, the quantification capability of the sensor used here was compared with that of a commercially available handheld instrument. Based on its results, the sensor shows great potential as an online sensor for the measuring platform. The results under laboratory conditions show that a robust analysis of arable soils is possible when the influencing factors are taken into account.
The impact of catalysis on the global economy and environment is substantial, since 85% of all industrial chemical processes are catalytic. Among those, 80% are heterogeneously catalyzed, 17% make use of homogeneous catalysts, and 3% are biocatalytic. Especially in the pharmaceutical and agrochemical industries, a significant part of these processes involves chiral compounds. Obtaining enantiomerically pure compounds is necessary and is usually accomplished by asymmetric synthesis and catalysis, as well as by chiral separation. The efficiency of these processes may be vastly improved if the chiral selectors are positioned on a porous solid support, thereby increasing the surface area available for chiral recognition. Similarly, the majority of commercial catalysts are also supported, usually comprising metal nanoparticles (NPs) dispersed on a highly porous oxide or nanoporous carbon material.
Porous carbons are materials with exceptional thermal and chemical stability and electrical conductivity. Their stability at extreme pH values and temperatures, the possibility of tailoring their pore architecture and chemical functionalization, and their electrical conductivity have already established these materials in the fields of separation and catalysis. However, their heterogeneous chemical structure with abundant defects makes it challenging to develop reliable models for the investigation of structure-performance relationships. There is therefore a need to expand the fundamental understanding of these robust materials under experimental conditions to allow their further optimization for particular applications. This thesis contributes to our knowledge of carbons through different aspects and in different applications.
On the one hand, a rather exotic novel application was investigated through attempts at synthesizing porous carbon materials with an enantioselective surface. Chapter 4.1 describes an approach for obtaining mesoporous carbons with an enantioselective surface by direct carbonization of a chiral precursor. Two enantiomers of chiral ionic liquids (CILs) based on the amino acid tyrosine were used as carbon precursors, and the ordered mesoporous silica SBA-15 served as a hard template for obtaining porosity. The chiral recognition of the prepared carbons was tested in solution by isothermal titration calorimetry with enantiomers of phenylalanine as probes, as well as by chiral vapor adsorption with 2-butanol enantiomers. Measurements in both solution and the gas phase revealed differences in the affinity of the carbons towards the two enantiomers.
The atomic efficiency of the CIL precursors was increased in Chapter 4.2, and the porosity was developed independently from the development of chiral carbons, through the formation of stable composites of pristine carbon and CIL-derived coating. After the same set of experiments for the investigation of chirality, the enantiomeric ratios of the composites reported herein were even higher than in the previous chapter.
On the other hand, the structure-activity relationship of carbons as supports for gold nanoparticles was studied in a rather traditional catalytic model reaction at the interface between gas, liquid, and solid. In Chapter 5.1 it was shown, using a series of catalysts with different porosities, that the kinetics of the ᴅ-glucose oxidation reaction can be enhanced by increasing the local concentration of the reactants around the active phase of the catalyst. A large number of uniform narrow mesopores connected to the surface of the Au catalyst supported on ordered mesoporous carbon led to water confinement, which increased the solubility of oxygen in the proximity of the catalyst and thereby increased its apparent catalytic activity.
After increasing the oxygen concentration in the internal pore space of the catalyst, in Chapter 5.2 the oxygen concentration was increased in the external environment of the catalyst by introducing less cohesive liquids that serve as efficient oxygen solvents, perfluorinated compounds, near the active phase. This was achieved by forming catalyst-particle-stabilized emulsions of perfluorocarbon in aqueous ᴅ-glucose solution, which further promoted the catalytic activity of the gold-on-carbon catalyst.
The findings reported within this thesis are an important step in the understanding of the structure-related properties of carbon materials.
Comment sections of online news platforms are an essential space to express opinions and discuss political topics. However, the misuse by spammers, haters, and trolls raises doubts about whether the benefits justify the costs of the time-consuming content moderation. As a consequence, many platforms limited or even shut down comment sections completely. In this thesis, we present deep learning approaches for comment classification, recommendation, and prediction to foster respectful and engaging online discussions. The main focus is on two kinds of comments: toxic comments, which make readers leave a discussion, and engaging comments, which make readers join a discussion. First, we discourage and remove toxic comments, e.g., insults or threats. To this end, we present a semi-automatic comment moderation process, which is based on fine-grained text classification models and supports moderators. Our experiments demonstrate that data augmentation, transfer learning, and ensemble learning allow training robust classifiers even on small datasets. To establish trust in the machine-learned models, we reveal which input features are decisive for their output with attribution-based explanation methods. Second, we encourage and highlight engaging comments, e.g., serious questions or factual statements. We automatically identify the most engaging comments, so that readers need not scroll through thousands of comments to find them. The model training process builds on upvotes and replies as a measure of reader engagement. We also identify comments that address the article authors or are otherwise relevant to them to support interactions between journalists and their readership. Taking into account the readers' interests, we further provide personalized recommendations of discussions that align with their favored topics or involve frequent co-commenters. Our models outperform multiple baselines and recent related work in experiments on comment datasets from different platforms.
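As a stand-in for the deep learning models described above, the ensemble idea can be illustrated with classical text classifiers. The comments and labels below are invented for illustration; the sketch only shows soft-voting over two simple models on tf-idf features, not the thesis's fine-grained, augmentation- and transfer-based pipeline.

```python
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented dataset: 1 = toxic, 0 = acceptable
texts = ["you are an idiot", "great article, thanks", "I will find you",
         "interesting point about taxes", "shut up you fool",
         "could you cite a source for this?", "nobody wants you here",
         "the data in figure 2 is compelling"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# soft-voting ensemble of two simple classifiers over tf-idf features
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier([("lr", LogisticRegression()),
                      ("nb", MultinomialNB())], voting="soft"),
)
clf.fit(texts, labels)

preds = clf.predict(["thanks, what a thoughtful piece",
                     "you absolute idiot"])
print(preds)
```

Averaging the predicted probabilities of heterogeneous models typically stabilizes decisions on small datasets, which is the motivation for ensembling in the moderation setting.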
The Milky Way is a spiral galaxy consisting of a disc of gas, dust and stars embedded in a halo of dark matter. Within this dark matter halo there is also a diffuse population of stars called the stellar halo, that has been accreting stars for billions of years from smaller galaxies that get pulled in and disrupted by the large gravitational potential of the Milky Way. As they are disrupted, these galaxies leave behind long streams of stars that can take billions of years to mix with the rest of the stars in the halo. Furthermore, the amount of heavy elements (metallicity) of the stars in these galaxies reflects the rate of chemical enrichment that occurred in them, since the Universe has been slowly enriched in heavy elements (e.g. iron) through successive generations of stars which produce them in their cores and supernovae explosions. Therefore, stars that contain small amounts of heavy elements (metal-poor stars) either formed at early times before the Universe was significantly enriched, or in isolated environments. The aim of this thesis is to develop a better understanding of the substructure content and chemistry of the Galactic stellar halo, in order to gain further insight into the formation and evolution of the Milky Way.
The Pristine survey uses a narrow-band filter which specifically targets the Ca II H & K spectral absorption lines to provide photometric metallicities for a large number of stars down to the extremely metal-poor (EMP) regime, making it a very powerful data set for Galactic archaeology studies. In Chapter 2, we quantify the efficiency of the survey using a preliminary spectroscopic follow-up sample of ~ 200 stars. We also use this sample to establish a set of selection criteria to improve the success rate of selecting EMP candidates for follow-up spectroscopy. In Chapter 3, we extend this work and present the full catalogue of ~ 1000 stars from a three year long medium resolution spectroscopic follow-up effort conducted as part of the Pristine survey. From this sample, we compute success rates of 56% and 23% for recovering stars with [Fe/H] < -2.5 and [Fe/H] < -3.0, respectively. This demonstrates a high efficiency for finding EMP stars as compared to previous searches with success rates of 3-4%.
In Chapter 4, we select a sample of ~ 80000 halo stars using colour and magnitude cuts to select a main sequence turnoff population in the distance range 6 < d☉ < 20 kpc. We then use the spectroscopic follow-up sample presented in Chapter 3 to statistically rescale the Pristine photometric metallicities of this sample, and present the resulting corrected metallicity distribution function (MDF) of the halo. The slope at the metal-poor end is significantly shallower than previous spectroscopic efforts have shown, suggesting that there may be more metal-poor stars with [Fe/H] < -2.5 in the halo than previously thought. This sample also shows evidence that the MDF of the halo may not be bimodal as was proposed by previous works, and that the lack of globular clusters in the Milky Way may be the result of a physical truncation of the MDF rather than just statistical under-sampling.
Chapter 5 showcases the unexpected capability of the Pristine filter for separating blue horizontal branch (BHB) stars from Blue Straggler (BS) stars. We demonstrate a purity of 93% and completeness of 91% for identifying BHB stars, a substantial improvement over previous works. We then use this highly pure and complete sample of BHB stars to trace the halo density profile out to d > 100 kpc, and the Sagittarius stream substructure out to ~ 130 kpc.
In Chapter 6 we use the photometric metallicities from the Pristine survey to perform a clustering analysis of the halo as a function of metallicity. Separating the Pristine sample into four metallicity bins of [Fe/H] < -2, -2 < [Fe/H] < -1.5, -1.5 < [Fe/H] < -1 and -0.9 < [Fe/H] < -0.8, we compute the two-point correlation function to measure the amount of clustering on scales of < 5 deg. For a smooth comparison sample we make a mock Pristine data set generated using the Galaxia code based on the Besançon model of the Galaxy. We find enhanced clustering on small scales (< 0.5 deg) for some regions of the Galaxy for the most metal-poor bin ([Fe/H] < -2), while in others we see large scale signals that correspond to known substructures in those directions. This confirms that the substructure content of the halo is highly anisotropic and diverse in different Galactic environments. We discuss the difficulties of removing systematic clustering signals from the data and the limitations of disentangling weak clustering signals from real substructures and residual systematic structure in the data.
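The small-scale clustering measurement can be illustrated with a toy two-point statistic on synthetic 2D fields. The simple DD/RR - 1 estimator below (coordinates, clump sizes and the scale threshold are invented; the survey itself compares against Galaxia-based mocks, not a uniform field) shows how substructure produces an excess of close pairs over a smooth comparison sample.

```python
import numpy as np

rng = np.random.default_rng(1)

def pair_fraction(pts, rmax):
    """Fraction of distinct point pairs closer than rmax."""
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(pts), k=1)
    return np.mean(d[iu] < rmax)

# smooth comparison field: 400 points uniform on the unit square
random_pts = rng.uniform(0, 1, (400, 2))

# "substructured" field: 20 clumps of 20 members each
centres = rng.uniform(0, 1, (20, 2))
clustered = centres.repeat(20, axis=0) + rng.normal(0, 0.01, (400, 2))

# simple small-scale estimator: w(<r) = DD/RR - 1
r = 0.05
w = pair_fraction(clustered, r) / pair_fraction(random_pts, r) - 1
print("clustering excess w(<0.05):", w)
```

In practice, edge corrections and estimators such as Landy-Szalay are used instead of this naive ratio, but the qualitative signal, an excess w at small separations, is the same.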
Taken together, the work presented in this thesis approaches the problem of better understanding the halo of our Galaxy from multiple angles. Firstly, presenting a sizeable sample of EMP stars and improving the selection efficiency of EMP stars for the Pristine survey, paving the way for the further discovery of metal-poor stars to be used as probes to early chemical evolution. Secondly, improving the selection of BHB distance tracers to map out the halo to large distances, and finally, using the large samples of metal-poor stars to derive the MDF of the inner halo and analyse the substructure content at different metallicities. The results of this thesis therefore expand our understanding of the physical and chemical properties of the Milky Way stellar halo, and provide insight into the processes involved in its formation and evolution.
During a dark night, it is possible to observe thousands of stars by eye. All these stars are located within the Milky Way, our home. Not all stars are the same, they can have different sizes, masses, temperatures and ages. Heavy stars do not live long (in astronomical terms), only a few million years, but stars less massive than the Sun can get more than ten billion years old. Such small stars that formed in the beginning of the Universe still shine today. These ancient stars are very helpful to learn more about the early Universe, the First Stars and the history of the Milky Way. But how do you recognise an ancient star? Using their chemical fingerprints! In the beginning of the Universe, there were only two chemical elements: hydrogen and helium (and a tiny bit of lithium). All the heavier elements like carbon, calcium and iron were only made later within stars and their explosions. The amount of chemical elements in the Universe increases with the number of stars that are born, evolve and explode. Stars that form later are born with more heavy elements, or a greater metallicity. In the field of astronomy that is called “Galactic Archaeology”, stars of various metallicities are used to study the history of the Milky Way. In this doctoral thesis, the focus is on metal-poor stars because these are expected to be the oldest and can therefore tell us a lot about the early history of our Galaxy.
Until today, we still have not discovered a metal-free star. The most metal-poor stars, however, give us important insights into the lives and deaths of the First Stars. Many of the oldest, most metal-poor stars have an unexpectedly large amount of carbon compared to, for example, iron. These carbon-enhanced metal-poor (CEMP) stars tell us something about the very first stars in the Universe: they somehow produced a lot of carbon. If we look at the precise chemical fingerprints of the CEMP stars, we can learn a lot more. But our interpretation depends on the assumption that the chemical fingerprint of a star does not change during its life. In this thesis, new data is presented that shows that this assumption may be too simple: many extremely metal-poor CEMP stars are members of binary systems. Interactions between two stars in a binary system can pollute the surface of the stars. Likely not all of the CEMP stars in binary systems were actually polluted, but we should be very careful in our interpretations of the fingerprints of these stars.
The CEMP stars and other metal-poor stars are also important for our understanding of the early history of the Milky Way. Most researchers who study metal-poor stars look for these stars in the halo of the Milky Way: a huge diffuse Galactic component containing about 1% of the stars in our Galaxy. However, models predict that the oldest metal-poor stars are located in the center of the Milky Way, in the bulge. The metal-poor inner Galaxy is unfortunately difficult to study due to large amounts of dust between us and the center and an overwhelming majority of metal-rich stars. This thesis presents results from the successful Pristine Inner Galaxy Survey (PIGS), a new survey looking for (and finding) the oldest stars in the bulge of the Milky Way. PIGS is using images with a specific color that is sensitive to the metallicity of stars, and can therefore efficiently select the metal-poor stars among millions of other, more metal-rich stars. The interesting candidates are followed up with spectroscopy, which is then analysed using two independent methods. With this strategy, PIGS has discovered the largest sample of metal-poor stars in the inner Galaxy to date. A new result from the PIGS data is that the metal-poor stars rotate more slowly around the Galactic center compared to the more metal-rich stars, and they show larger randomness in their motions as well. Another important contribution from PIGS is the discovery of tens of CEMP stars in the inner Galaxy, where previously only two such stars were known.
The new results from this thesis help us to understand the First Stars and the early history of the Milky Way. Ongoing and future large surveys will provide us with a lot of additional data in the coming years. It is an exciting time for the field of Galactic Archaeology.
Galaxies are gravitationally bound systems of stars, gas, dust and - probably - dark matter. They are the building blocks of the Universe. The morphology of galaxies is diverse: some galaxies have structures such as spirals, bulges, bars, rings, lenses or inner disks, among others. The main processes that characterise galaxy evolution can be separated into fast violent events that dominated evolution at earlier times and slower processes, which constitute a phase called secular evolution, that became dominant at later times. Internal processes of secular evolution include the gradual rearrangement of matter and angular momentum, the build-up and dissolution of substructures or the feeding of supermassive black holes and their feedback. Galaxy bulges – bright central components in disc galaxies – are, on the one hand, relics of galaxy formation and evolution. For instance, the presence of a classical bulge suggests a relatively violent history. In contrast, the presence of a disc-like bulge indicates the occurrence of secular evolution processes in the main disc. Galaxy bars – elongated central stellar structures – are, on the other hand, the engines of secular evolution. Studying internal properties of both bars and bulges is key to comprehending some of the processes through which secular evolution takes place. The main objectives of this thesis are (1) to improve the classification of bulges by combining photometric and spectroscopic approaches for a large sample of galaxies, (2) to quantify star formation in bars and verify dependencies on galaxy properties and (3) to analyse stellar populations in bars to aid in understanding the formation and evolution of bars. Integral field spectroscopy is fundamental to the work presented in this thesis, which consists of three different projects as part of three different galaxy surveys: the CALIFA survey, the CARS survey and the TIMER project.
The first part of this thesis constitutes an investigation of the nature of bulges in disc galaxies. We analyse 45 galaxies from the integral-field spectroscopic survey CALIFA by performing 2D image decompositions, growth curve measurements and spectral template fitting to derive stellar kinematics from CALIFA data cubes. From the obtained results, we present a recipe to classify bulges that combines four different parameters from photometry and kinematics: The bulge Sersic index nb, the concentration index C20;50, the Kormendy relation and the inner slope of the radial velocity dispersion profile ∇σ. The results of the different approaches are in good agreement and allow a safe classification for approximately 95% of the galaxies. We also find that our new ‘inner’ concentration index performs considerably better than the traditionally used C50;90 and, in combination with the Kormendy relation, provides a very robust indication of the physical nature of the bulge. In the second part, we study star formation within bars using VLT/MUSE observations for 16 nearby (0.01 < z < 0.06) barred active-galactic-nuclei (AGN)-host galaxies from the CARS survey. We derive spatially-resolved star formation rates (SFR) from Hα emission line fluxes and perform a detailed multi-component photometric decomposition on images derived from the data cubes. We find a clear separation into eight star-forming (SF) and eight non-SF bars, which we interpret as indication of a fast quenching process. We further report a correlation between the SFR in the bar and the shape of the bar surface brightness profile: only the flattest bars (nbar < 0.4) are SF. Both parameters are found to be uncorrelated with Hubble type. Additionally, owing to the high spatial resolution of the MUSE data cubes, for the first time, we are able to dissect the SFR within the bar and analyse trends parallel and perpendicular to the bar major axis. 
Star formation is 1.75 times stronger on the leading edge of a rotating bar than on the trailing edge and decreases radially. Moreover, from testing an AGN feeding scenario, we report that the SFR of the bar is uncorrelated with AGN luminosity. Lastly, we present a detailed analysis of star formation histories and chemical enrichment of stellar populations (SP) in galaxy bars. We use MUSE observations of nine very nearby barred galaxies from the TIMER project to derive spatially resolved maps of stellar ages and metallicities, [α/Fe] abundances, star formation histories, as well as Hα as a tracer of star formation. Using these maps, we explore in detail variations of SP perpendicular to the bar major axes. We find observational evidence for a separation of SP, supposedly caused by an evolving bar. Specifically, intermediate-age stars (∼ 2-6 Gyr) get trapped on more elongated orbits forming a thinner bar, while old stars (> 8 Gyr) form a rounder and thicker bar. This evidence is further strengthened by very similar results obtained from barred galaxies in the cosmological zoom-in simulations from the Auriga project. In addition, we find imprints of typical star formation patterns in barred galaxies on the youngest populations (< 2 Gyr), which continuously become more dominant from the major axis towards the sides of the bar. The effect is slightly stronger on the leading side. Furthermore, we find that bars are on average more metal-rich and less α-enhanced than the inner parts of the discs that surround them. We interpret this result as an indication of a more prolonged or continuous formation of stars that shape the bar as compared to shorter formation episodes in the disc within the bar region.
The political legacy of the Martinican poet, novelist and philosopher Édouard Glissant (1928–2011) is the subject of an ongoing debate among postcolonial literary scholars. Responding to an influential view shaping this debate, that Glissant’s work can be categorised into an early political and a late apolitical phase, this dissertation claims that this division is based on a narrow conception of 'engaged political writing' that prevents a more comprehensive view of the changing political strategies Glissant pursued throughout his life from emerging. Proceeding from this conceptual basis, the dissertation is concerned with re-reading the dimensions of Glissant's work that have hitherto been dismissed as apolitical, literary or poetic, with the aim of conceptualising the politics of relation as an integral part of his overall poetic project. In methodological terms, the dissertation therefore proposes a relational reading of Glissant’s life-work across literary genres and epochs, as well as across the conventional divisions between political thought, writing and activism. This perspective is informed by Glissant's philosophy of relation, and draws on a conception of political practice that includes both explicit engagements with established political systems and institutions, as well as literary and cultural interventions geared towards their transformation and the creation of alternatives to them. Theoretically the work thus combines a poststructuralist lens on the conceptual difference between 'politics' and 'the political' with arguments for an inherent political quality of literature, and perspectives from the Afro-Caribbean radical tradition, in which writers and intellectuals have historically sought to combine discursive interventions with organisational actions. 
Applying this theoretical angle to the analysis of Glissant's politics of relation results in an interdisciplinary research framework designed to explore the synergies between postcolonial political and literary studies.
In order to comprehensively describe Glissant's politics of relation without recourse to evolutionary or digressive models, the concept of an intellectual marronage is proposed as a framework to map the strategies making up Glissant's political archive. Drawing on a variety of historic, political-theoretical and literary sources, intellectual marronage is understood as a mode of radical resistance to the neocolonial subjugation for which the plantation system stands historically and metaphorically, as an inherently innovative political practice invested in the creation of communities marked by relational ontologies, and as a commitment to fostering an imagination of the world and the human that differs fundamentally from the Enlightenment paradigm. This specific conception of intellectual marronage forms the basis on which three key strategies that consistently shape Glissant's political practice are identified and mapped. They revolve around Glissant's engagement with history (chapter 2), his commitment to fostering an imagination of the Tout-Monde (whole-world) as a political point of reference (chapter 3), and the continuous exploration of alternative forms of community on the levels of the island, the archipelago and the Tout-Monde (chapter 4). Together these strategies constitute Glissant's personal politics of relation. Its abstract characteristics can be put in a productive conversation with related theoretical traditions invested in exploring the political potentials of fugitivity (chapter 5), as well as with the work of other postcolonial actors whose holistic practice warrants description as a politics of relation (chapter 6).
In the last decade, photovoltaic research has been profoundly transformed by the arrival of metal halide perovskites. The introduction of this class of materials into academic research on renewable energy literally shifted the focus of a large number of research groups and institutions. The attractiveness of halide perovskites lies particularly in their skyrocketing efficiencies and relatively simple and cheap fabrication methods. Specifically, the latter allowed for a quick development of this research in many universities and institutes around the world at the same time. The outcome has been a fast and beneficial increase in knowledge, with a consequent rapid improvement of this new technology. On the other hand, the enormous amount of research promoted an immense outgrowth of scientific literature, perpetually published. Halide perovskite solar cells are now effectively competing with other established photovoltaic technologies in terms of power conversion efficiencies and production costs. Despite the tremendous improvement, a thorough understanding of the energy losses in these systems is of imperative importance to unlock the full thermodynamic potential of this material. This thesis focuses on the understanding of the non-radiative recombination processes in the neat perovskite and in complete devices. Specifically, photoluminescence quantum yield (PLQY) measurements were applied to multilayer stacks and cells under different illumination conditions to accurately determine the quasi-Fermi level splitting (QFLS) in the absorber and compare it with the external open-circuit voltage of the device (V_OC). Combined with drift-diffusion simulations, this approach allowed us to pinpoint the sites of predominant recombination, but also to investigate the dynamics of the underlying processes. 
As such, the internal and external ideality factors, associated with the QFLS and V_OC respectively, are studied with the aim of understanding the type of recombination processes taking place in the multilayered architecture of the device. Our findings highlight the failure of the equality between QFLS and V_OC in the case of strong interface recombination, as well as the detrimental effect of all commonly used transport layers in terms of V_OC losses. In this regard, we show how, in most perovskite solar cells, different recombination processes can affect the internal QFLS and the external V_OC, and that interface recombination dictates the V_OC losses. This line of argument allowed us to rationalize that, in our devices, the external ideality factor is completely dominated by interface recombination, and that this process alone can be responsible for values of the ideality factor between 1 and 2, typically observed in perovskite solar cells. Importantly, our studies demonstrated how strong interface recombination can lower the ideality factor towards values of 1, often misinterpreted as pure radiative second-order recombination. As such, a comprehensive understanding of the recombination loss mechanisms currently limiting the device performance was achieved. In order to reach the full thermodynamic potential of the perovskite absorber, the interfaces with both the electron and hole transport layers (ETL/HTL) must be properly addressed and improved. From here, the second part of the research work is devoted to reducing the interfacial non-radiative energy losses by optimizing the structure and energetics of the relevant interfaces in our solar cell devices, with the aim of bringing their quasi-Fermi level splitting closer to its radiative limit. As such, the interfaces have been carefully addressed and optimized with different methodologies. 
First, a small amount of Sr is added to the perovskite precursor solution, effectively reducing surface and interface recombination. In this case, devices with V_OC up to 1.23 V were achieved and the energy losses were minimized to as low as 100 meV from the radiative limit of the material. Through a combination of different methods, we showed that these improvements are related to a strong n-type surface doping, which repels the holes in the perovskite from the surface and the interface with the ETL. Second, a more general device improvement was achieved by depositing a defect-passivating poly(ionic liquid) layer on top of the perovskite absorber. The resulting devices featured a concomitant improvement of the V_OC and fill factor, up to 1.17 V and 83% respectively, reaching efficiencies as high as 21.4%. Moreover, the protecting polymer layer helped to enhance the stability of the devices under prolonged maximum power point tracking measurements. Lastly, PLQY measurements are used to investigate the recombination mechanisms in halide-segregated large-bandgap perovskite materials. Here, our findings showed how a few iodide-rich low-energy domains act as highly efficient radiative recombination centers, capable of generating PLQY values up to 25%. Coupling these results with a detailed microscopic cathodoluminescence analysis and absorption profiles allowed us to demonstrate that the emission from these low-energy domains is due to the trapping of carriers photogenerated in the Br-rich high-energy domains. Thereby, the strong implications of this phenomenon are discussed in relation to the failure of the optical reciprocity between absorption and emission and the consequent applicability of the Shockley-Queisser theory for studying the energy losses in such systems. 
In conclusion, the identification and quantification of the non-radiative QFLS and V_OC losses provided a base knowledge of the fundamental limitations of perovskite solar cells and served as guidance for future optimization and development of this technology. Furthermore, by providing practical examples of solar cell improvements, we corroborated the correctness of our fundamental understanding and proposed new methodologies to be further explored by new generations of scientists.
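The connection between a measured PLQY and the non-radiative V_OC (or QFLS) loss invoked in this abstract follows the standard reciprocity relation ΔV_nr = (kT/q)·ln(1/PLQY). A minimal sketch with illustrative numbers, not values from the thesis:

```python
import math

K_B_T_EV = 0.02585  # thermal energy kT at T = 300 K, in eV


def nonradiative_voc_loss(plqy):
    """Non-radiative QFLS/V_OC loss in eV from the external PL quantum yield,
    via Delta_V_nr = -(kT/q) * ln(PLQY)."""
    return -K_B_T_EV * math.log(plqy)


# A PLQY of 1% corresponds to roughly 0.12 eV of non-radiative loss:
loss = nonradiative_voc_loss(0.01)
print(f"{loss * 1000:.0f} meV")  # prints "119 meV"
```

This makes the scale of the reported numbers plausible: raising the PLQY of the full stack by an order of magnitude recovers about 60 meV of V_OC.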
The Cheb Basin (CZ) is a shallow Neogene intracontinental basin located in the western Eger Rift. The Cheb Basin is characterized by active seismicity and diffuse degassing of mantle-derived CO2 in mofette fields. Within the Cheb Basin, the Hartoušov mofette field shows a daily CO2 flux of 23–97 tons, more than 99% of which is released over an area of 0.35 km2. Seismically active periods were observed in 2000 and 2014 in the Hartoušov mofette field. Due to the active geodynamic processes, the Cheb Basin is considered to be an ideal region for continental deep biosphere research focussing on the interaction of biological processes with geological processes.
To study the influence of CO2 degassing on microbial communities in surface and subsurface environments, two 3-m shallow drillings and a 108.5-m deep scientific drilling were conducted in 2015 and 2016, respectively. Additionally, fluids were recovered from the deep borehole. The different ecosystems were compared regarding their geochemical properties, microbial abundances, and microbial community structures. The geochemistry of the mofette is characterized by low pH and high TOC and sulfate contents, while the subsurface environment shows a neutral pH and varying TOC and sulfate contents in different lithological settings. Striking differences in the microbial communities highlight the substantial impact of elevated CO2 concentrations and highly saline groundwater on microbial processes. In general, microorganisms were less abundant in the deep subsurface sediment than in the shallow mofette. However, within the mofette and the deep subsurface sediment, the abundance of microbes does not show the typical decrease with depth, indicating that the uprising CO2-rich groundwater has a strong influence on the microbial communities by providing sufficient substrate for anaerobic chemolithoautotrophic microorganisms. Illumina MiSeq sequencing of the 16S rRNA genes and multivariate statistics reveal that pH strongly influences the microbial community composition in the mofette, while the subsurface microbial community is significantly influenced by the groundwater mobilized by the degassing CO2. Acidophilic microorganisms show a much higher relative abundance in the mofette, while the OTUs assigned to the family Comamonadaceae are the dominant taxa characterizing the subsurface communities. Additionally, taxa involved in sulfur cycling characterize the microbial communities in both the mofette and the CO2-dominated subsurface environments.
Another important geo-bio interaction investigated here is the influence of seismic activity. During seismic events, released H2 may serve as the electron donor for microbial hydrogenotrophic processes such as methanogenesis. To determine whether seismic events can potentially trigger methanogenesis through elevated geogenic H2 concentrations, we performed laboratory simulation experiments with sediments retrieved from the drillings. The results indicate that, after the addition of hydrogen, substantial amounts of methane were produced in incubated mofette sediments and deep subsurface sediments. The hydrogenotrophic methanogenic genus Methanobacterium was highly enriched during the incubation. Modeling of in-situ observations from the earthquake swarm period in 2000 at the Novy Kostel focal area (Czech Republic), together with our laboratory simulation experiments, reveals a close relation between seismic activity and microbial methane production via earthquake-induced H2 release. We thus conclude that H2 released during seismic activity can potentially trigger methanogenic activity in the deep subsurface. Based on this conclusion, we further hypothesize that hydrogenotrophic early life on Earth was boosted by seismic activity induced by the Late Heavy Bombardment approximately 4.2 to 3.8 Ga ago.
Hybrid organic-inorganic perovskites have attracted attention in recent years owing to an unparalleled increase in energy-conversion efficiency, which implies their application as absorber materials for solar cells. A disadvantage of these materials is, among others, their instability towards moisture and UV radiation. One possible solution to these problems is the reduction of size towards the nanoscale: nanosized perovskites show superior stability in comparison to, for example, perovskite layers. In addition, the nanoscale even enables stable perovskite structures that could not otherwise be achieved at room temperature.
This thesis is separated into two major parts. The division follows the composition and band gap of the material and, at the same time, the shape and size of the nanoparticles: methylammonium lead tribromide nanoplatelets on the one hand and caesium lead triiodide nanocubes on the other.
The first part focuses on the hybrid organic-inorganic perovskite (methylammonium lead tribromide) nanoplatelets with a band gap of 2.35 eV and their thermal behaviour. Due to the challenging character of this material, several analysis methods are used to investigate the sub-nano and nanostructures under the influence of temperature. As a result, a shift of phase-transition temperatures towards higher temperatures is observed. This unusual behaviour can be explained by the ligand, which is incorporated into the outer perovskite structure and adds phase stability to the system.
The second part of this thesis focuses on the inorganic caesium lead triiodide nanocubes with a band gap of 1.83 eV. These nanocrystals are first investigated and compared by TEM, XRD and other optical methods, revealing a cuboid shape and orthorhombic structure instead of the cubic shape and structure described in the literature. Furthermore, the self-assembly of these cuboids on a substrate is investigated, showing a high degree of self-assembly. As a next step, the ligands of the nanocuboids are exchanged for other ligands to increase the charge carrier mobility, which is further investigated by the above-mentioned methods. The last section deals with the enhancement of the CsPbI3 structure by incorporating potassium into the crystal structure; the results suggest an increase in stability.
In recent years, people have realised the non-renewability of our modern society, which relies on spending huge amounts of energy mostly produced from fossil fuels such as oil and coal, and the shift towards more sustainable energy sources has started. However, sustainable sources of energy, such as wind, solar and hydro energy, produce primarily electrical energy and cannot simply be poured into a canister like many fossil fuels, creating a need for rechargeable batteries. Moreover, modern Li-ion batteries are made from toxic heavy metals, and sustainable alternatives are needed. Here we show that naturally abundant catecholic and guaiacyl groups can be utilised to replace heavy metals in Li-ion batteries.
First, vanillin, a naturally occurring food additive that can be sustainably synthesised from the industrial biowaste lignin, was utilised to synthesise materials that showed extraordinary performance as cathodes in Li-ion batteries. Furthermore, the behaviour of catecholic and guaiacyl groups in the Li-ion system was compared, confirming the usability of guaiacyl-containing biopolymers as cathodes in Li-ion batteries. Lastly, the naturally occurring polyphenol tannic acid was incorporated into a fully bioderived hybrid material that shows performance comparable to commercial Li-ion batteries and good stability.
This thesis presents an important advancement in the understanding of biowaste-derived cathode materials for Li-ion batteries. Further research should be conducted to better understand the behaviour of guaiacyl groups during Li-ion battery cycling. Lastly, the challenges of incorporating lignin, an industrial biowaste, have to be addressed so that lignin itself can be used as a cathode material in Li-ion batteries.
Ammonia is a chemical of fundamental importance for nature's vital nitrogen cycle. It is crucial for the growth of living organisms and serves as a food and energy source. Traditionally, industrial ammonia production is dominated by the Haber-Bosch process (HBP), which is based on the direct conversion of N2 and H2 gas at high temperature and high pressure (~500 °C, 150-300 bar). However, it is not an ideal route because of its thermodynamic and kinetic limitations and the need for the energy-intense production of hydrogen gas by reforming processes. All these drawbacks of the HBP motivate the search for an alternative technique to perform efficient ammonia synthesis via electrochemical catalytic processes, in particular via water electrolysis, using water as the hydrogen source to free the process from gas reforming.
In this study, the interface effects between imidazolium-based ionic liquids and the surface of porous carbon materials were investigated, with a special interest in the nitrogen absorption capability. As a further step, the possibility of establishing this interface as the catalytically active area for the electrochemical N2 reduction to NH3 was evaluated. This particular combination was chosen because porous carbon materials and ionic liquids (IL) are of significant importance in many scientific fields, including catalysis and electrocatalysis, due to their special structural and physicochemical properties. Primarily, the effects of the confinement of the ionic liquid (EmimOAc, 1-Ethyl-3-methylimidazolium acetate) in carbon pores were investigated. Salt-templated porous carbons with different porosity (microporous and mesoporous) and nitrogen species were used as model structures for the comparison of the IL confinement at different loadings. The nitrogen uptake of EmimOAc can be increased by about 10 times by confinement in the pores of carbon materials compared to the bulk form. In addition, the most improved nitrogen absorption was observed for IL confinement in micropores and in nitrogen-doped carbon materials, as a consequence of the maximized structural changes of the IL. Furthermore, the possible use of such interfaces between EmimOAc and porous carbon for the catalytic activation of dinitrogen during the kinetically challenging NRR, which suffers from limited gas absorption in the electrolyte, was examined. An electrocatalytic NRR system based on the conversion of water and nitrogen gas to ammonia at ambient operation conditions (1 bar, 25 °C) was run under an applied electric potential in a single-chamber electrochemical cell, which combines the EmimOAc electrolyte with a porous-carbon working electrode and operates without a traditional electrocatalyst. Under a potential of -3 V vs. 
SCE for 45 minutes, an NH3 production rate of 498.37 μg h-1 cm-2 and a faradaic efficiency (FE) of 12.14% were achieved. The experimental observations show that an electric double layer, which serves as the catalytically active area, forms between the microporous carbon material and the ions of the EmimOAc electrolyte in the presence of a sufficiently high applied electric potential. Compared with the typical NRR systems reported in the literature, the presented electrochemical ammonia synthesis approach provides a significantly higher ammonia production rate with a chance to avoid the possible kinetic limitations of NRR. The operating conditions, ammonia production rate and faradaic efficiency achieved without the need for any synthetic electrocatalyst can be attributed to the electrocatalytic activation of nitrogen in the double layer formed between the carbon and the IL ions.
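A faradaic efficiency such as the one quoted above follows from the charge balance of the N2 reduction reaction, which transfers three electrons per NH3 molecule. A minimal sketch with purely illustrative numbers, not the reported values:

```python
F = 96485.0      # Faraday constant, C/mol
N_E = 3          # electrons per NH3 (N2 + 6 H+ + 6 e- -> 2 NH3)
M_NH3 = 17.031   # molar mass of NH3, g/mol


def faradaic_efficiency(mass_nh3_g, total_charge_c):
    """Fraction of the total passed charge that went into NH3 formation."""
    moles = mass_nh3_g / M_NH3
    return (N_E * F * moles) / total_charge_c


# Illustrative: 100 ug of NH3 produced while passing 17 C of charge
fe = faradaic_efficiency(100e-6, 17.0)
print(f"FE = {fe:.1%}")  # prints "FE = 10.0%"
```

The remainder of the charge in such an experiment is typically consumed by the competing hydrogen evolution reaction.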
Potato is the 4th most important food crop in the world. Especially in tropical and sub-tropical potato production, drought is a yield-limiting factor, and potato is sensitive to water stress. Potato yield loss under water stress could be reduced by using tolerant varieties and adjusted agronomic practices. Direct selection for yield under water-stressed conditions requires long selection cycles; thus, identification of markers for marker-assisted selection may speed up breeding. The objective of this thesis is to identify morphological markers for drought tolerance by continuously monitoring plant growth and canopy temperature with an automatic phenotyping system.
The phenotyping was performed in drought-stress experiments conducted in a screenhouse with population A (64 genotypes) in 2015 and 2016 and with population B (21 genotypes) in 2017 and 2018. Drought tolerance was quantified as the deviation of the relative tuber starch yield from the experimental median (DRYM) or from the parent median (DRYMp). Relative tuber starch yield is the starch yield under drought stress relative to the average starch yield of the respective cultivar under control conditions in the same experiment. The specific DRYM value was calculated based on the yield data of the same experiment, while the global DRYM was calculated from yield data combined over the years of the respective population or across multiple experiments, including the VALDIS and TROST experiments (2011-2016).
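The DRYM index as defined above can be sketched in a few lines; the yield numbers below are hypothetical:

```python
from statistics import median


def drym(stress_yields, control_yields):
    """Drought tolerance index DRYM: deviation of each genotype's relative
    starch yield (stress / control) from the experiment median."""
    relative = [s / c for s, c in zip(stress_yields, control_yields)]
    med = median(relative)
    return [r - med for r in relative]


# Hypothetical starch yields for three genotypes under stress and control:
values = drym([4.0, 6.0, 8.0], [10.0, 10.0, 10.0])
print([round(v, 3) for v in values])  # prints "[-0.2, 0.0, 0.2]"
```

A positive DRYM thus marks a genotype that retains more of its starch yield under stress than the median genotype of the experiment.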
Analysis of variance found a significant effect of genotype on DRYM, indicating that the tolerance variation required for marker identification was present in both populations.
Canopy growth was monitored continuously six times a day over five to ten weeks by a laser scanner system and yielded information on leaf area, plant height and leaf angle for population A and additionally on leaf inclination and light penetration depth for population B. Canopy temperature was measured 48 times a day over six to seven weeks by infrared thermometry in population B. From the continuous IRT surface temperature data set, the canopy temperature for each plant was selected by matching the time stamp of the IRT data with laser scanner data.
Mean, maximum, range and growth rate values were calculated from continuous laser scanner measurements of respective canopy parameters. Among the canopy parameters, the maximum and mean values in long-term stress conditions showed better correlation with DRYM values calculated in the same experiment than growth rate and diurnal range values. Therefore, drought tolerance index prediction was done from maximum and mean values of canopy parameters.
The tolerance index under specific experimental conditions was predicted linearly by simple regression models from different single canopy parameters under long-term stress in population A (2016) and population B (2017 and 2018). Among the canopy parameters, maximum light penetration depth (2017), mean leaf angle (2016, 2017 and 2018), mean leaf inclination or mean canopy temperature depression (2017 and 2018), and maximum plant height (2017) were selected as tolerance predictors. However, no single parameter was sufficient to predict DRYM. Therefore, several independent parameters were integrated into a multiple regression model.
In the multiple regression model, experiment-specific DRYM values in population A were predicted from mean leaf angle (2016). In population B, specific tolerance could be predicted from maximum light penetration depth and mean leaf inclination (2017), from mean leaf inclination (2018), or from mean canopy temperature depression and mean leaf angle (2018).
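The multiple regression step can be illustrated with ordinary least squares; the predictor values and coefficients below are hypothetical, not fitted thesis data:

```python
import numpy as np

# Hypothetical data: columns are light penetration depth and leaf inclination,
# rows are genotypes; y is the tolerance index (DRYM) to be predicted.
X = np.array([[0.2, 30.0], [0.4, 35.0], [0.6, 40.0], [0.8, 30.0], [0.5, 45.0]])
y = 0.05 + 0.10 * X[:, 0] - 0.002 * X[:, 1]  # synthetic linear relationship

# Ordinary least squares with an intercept column:
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 4))  # recovers intercept 0.05, slopes 0.10 and -0.002
```

In practice the fit would of course be noisy, and predictor subsets would be chosen by stepwise selection or a similar criterion, as done per year and population in the thesis.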
In data combined over seasons, the multiple linear regression model selected maximum plant height and mean leaf angle as tolerance predictors for population A, and mean leaf inclination for population B. However, in population A, the variation explained by the final model was too low.
Furthermore, the average tolerances relative to the parent median (2011-2018) across FGH plants or all plants (FGH and field) were predicted from maximum plant height (population A) and from maximum plant height and mean leaf inclination (population B). Altogether, canopy parameters could be used as markers for drought tolerance. Therefore, water-stress breeding in potato could be sped up by using leaf inclination, light penetration depth, plant height and canopy temperature depression as markers for drought tolerance, especially under long-term stress conditions.
The development of sustainable farming and production methods is one of the central questions of modern agriculture. This thesis addresses two research topics that involve the concept of sustainability. In both cases, analytical foundations are laid for the development of corresponding agricultural working methods.
The first topic relates to so-called precision farming, in which agricultural land is managed in a site-specific manner. That is, the application of seed, fertilizer, irrigation, and so on is adapted to the properties of the respective location rather than being distributed uniformly across an entire field. A prerequisite for this is precise knowledge of the soil properties. In this work, these parameters were to be determined by means of laser-induced breakdown spectroscopy (LIBS), a form of elemental analysis. The soil properties of interest were the contents of nutrients as well as several secondary parameters such as the humus content, the pH value, and the plant-available fraction of individual nutrients. These properties were determined by established reference analyses. Building on this, the results of the LIBS measurements were evaluated using various methods of multivariate data analysis (MVA) in order to derive models for predicting the soil parameters in future LIBS measurements. The results of this work showed that, with the combination of LIBS and MVA, all soil parameters could be successfully predicted. This included both the directly measurable elements and the secondary properties, which were related to the element contents by the MVA.
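Principal component analysis, one of the common multivariate data analysis (MVA) methods for such spectral data, can be sketched via a singular value decomposition of mean-centred spectra; the "spectra" below are synthetic, not LIBS measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 50 samples x 200 channels, dominated by two components
basis = rng.normal(size=(2, 200))
scores = rng.normal(size=(50, 2)) * [5.0, 2.0]
spectra = scores @ basis + 0.01 * rng.normal(size=(50, 200))

# PCA = SVD of the mean-centred data matrix
centred = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Two components capture nearly all of the variance in this synthetic set:
print(f"{explained[:2].sum():.3f}")
```

The sample scores (`U * s`) would then feed a regression model, for example PLSR, against the reference soil parameters.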
Das zweite Thema beschäftigt sich mit der Vermeidung von Verlusten durch Schädlingsbefall bei der Getreidelagerung. Hier sollten mittels der Ionenmobilitätsspektrometrie (IMS) Schimmelpilzkontaminationen detektiert werden. Dabei wurde nach den flüchtigen Stoffwechselprodukten der Pilze gesucht. Die durch Referenzmessungen mit Massenspektrometern identifizierten Substanzen konnten durch IMS im Gasvolumen über den Proben, dem sogenannten Headspace, nachgewiesen werden. Dabei wurde nicht nur die Anwesenheit einer Kontamination festgestellt, sondern diese auch charakterisiert. Die freigesetzten Substanzen bildeten spezifische Muster, anhand derer die Pilze identifiziert werden konnten. Hier wurden sowohl verschiedene Gattungen als auch einzelne Arten unterschieden. Die Messungen fanden auf verschiedenen Nährböden statt um den Einfluss dieser auf die Stoffwechselprodukte zu beobachten. Auch die sekundären Stoffwechselprodukte der Schimmelpilze, die Mykotoxine, konnten durch IMS detektiert werden.
Beide in dieser Arbeit vorgestellten Forschungsthemen konnten erfolgreich abgeschlossen werden. Sowohl LIBS als auch IMS erwiesen sich für den Nachweis der jeweiligen Analyten als geeignet, und der Einsatz moderner computergestützter Auswertemethoden ermöglichte die genaue Charakterisierung der gesuchten Parameter. Beide Techniken können in Form von mobilen Geräten verwendet werden und zeichnen sich durch eine schnelle und sichere Analyse aus. In Kombination mit entsprechenden Modellen der MVA sind damit alle Voraussetzungen für Vor-Ort-Untersuchungen und damit für den Einsatz in der Landwirtschaft erfüllt.
The plasmasphere is a dynamic region of cold, dense plasma surrounding the Earth. Its shape and size are highly susceptible to variations in solar and geomagnetic conditions. Having an accurate model of plasma density in the plasmasphere is important for GNSS navigation and for predicting hazardous effects of radiation in space on spacecraft. The distribution of cold plasma and its dynamic dependence on solar wind and geomagnetic conditions remain, however, poorly quantified. Existing empirical models of plasma density tend to be oversimplified as they are based on statistical averages over static parameters. Understanding the global dynamics of the plasmasphere using observations from space remains a challenge, as existing density measurements are sparse and limited to locations where satellites can provide in-situ observations. In this dissertation, we demonstrate how such sparse electron density measurements can be used to reconstruct the global electron density distribution in the plasmasphere and capture its dynamic dependence on solar wind and geomagnetic conditions.
First, we develop an automated algorithm to determine the electron density from in-situ measurements of the electric field on the Van Allen Probes spacecraft. In particular, we design a neural network to infer the upper hybrid resonance frequency from the dynamic spectrograms obtained with the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrumentation suite, which is then used to calculate the electron number density. The developed Neural-network-based Upper hybrid Resonance Determination (NURD) algorithm is applied to more than four years of EMFISIS measurements to produce the publicly available electron density data set.
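The last step of this pipeline rests on standard cold-plasma theory: the upper hybrid resonance frequency combines the electron plasma and cyclotron frequencies, f_uh² = f_pe² + f_ce², and the plasma frequency fixes the electron number density. A minimal sketch of that conversion (a generic illustration with CODATA constants, not the NURD code itself; the function name and arguments are assumptions):

```python
import math

# physical constants (SI units)
E_CHARGE = 1.602176634e-19    # elementary charge, C
M_E = 9.1093837015e-31        # electron mass, kg
EPS0 = 8.8541878128e-12       # vacuum permittivity, F/m

def electron_density_from_uhr(f_uhr_hz, b_tesla):
    """Electron number density (m^-3) from the upper hybrid resonance frequency
    and the local magnetic field, via f_uh^2 = f_pe^2 + f_ce^2."""
    f_ce = E_CHARGE * b_tesla / (2.0 * math.pi * M_E)   # electron gyrofrequency
    f_pe_sq = f_uhr_hz ** 2 - f_ce ** 2                 # plasma frequency squared
    if f_pe_sq <= 0.0:
        raise ValueError("f_uhr must exceed the electron gyrofrequency")
    # invert f_pe = (1/2pi) * sqrt(n e^2 / (eps0 m_e)) for n
    return EPS0 * M_E * (2.0 * math.pi) ** 2 * f_pe_sq / E_CHARGE ** 2
```

For example, a 500 kHz upper hybrid line in a 10 µT field corresponds to a density of a few thousand electrons per cm³, a typical plasmaspheric value.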
We utilize the obtained electron density data set to develop a new global model of plasma density by employing a neural network-based modeling approach. The model takes the location and the time history of geomagnetic indices as inputs, and produces the electron density in the equatorial plane as an output. It is extensively validated using in-situ density measurements from the Van Allen Probes mission, and also by comparing the predicted global evolution of the plasmasphere with the global IMAGE EUV images of He+ distribution. The model successfully reproduces erosion of the plasmasphere on the night side as well as plume formation and evolution, and agrees well with data.
The performance of neural networks strongly depends on the availability of training data, which is limited during intervals of high geomagnetic activity. In order to provide reliable density predictions during such intervals, we can employ physics-based modeling. We develop a new approach for optimally combining the neural network- and physics-based models of the plasmasphere by means of data assimilation. The developed approach utilizes advantages of both neural network- and physics-based modeling and produces reliable global plasma density reconstructions for quiet, disturbed, and extreme geomagnetic conditions.
Finally, we extend the developed machine learning-based tools and apply them to another important problem in the field of space weather, the prediction of the geomagnetic index Kp. The Kp index is one of the most widely used indicators for space weather alerts and serves as input to various models, such as for the thermosphere, the radiation belts and the plasmasphere. It is therefore crucial to predict the Kp index accurately. Previous work in this area has mostly employed artificial neural networks to nowcast and make short-term predictions of Kp, basing their inferences on the recent history of Kp and solar wind measurements at L1. We analyze how the performance of neural networks compares to other machine learning algorithms for nowcasting and forecasting Kp for up to 12 hours ahead. Additionally, we investigate several machine learning and information theory methods for selecting the optimal inputs to a predictive model of Kp. The developed tools for feature selection can also be applied to other problems in space physics in order to reduce the input dimensionality and identify the most important drivers.
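One concrete instance of the information-theoretic selection methods mentioned above is ranking candidate drivers by their mutual information with the target index: inputs that share little information with Kp can be discarded before training. A hedged, stdlib-only sketch using a simple histogram estimator (the binning scheme and names are illustrative, not the thesis implementation):

```python
import math
from collections import Counter

def mutual_information(x, y, bins=8):
    """Histogram-based mutual information estimate (in nats) between two
    equally long sequences of real values. Higher values indicate a more
    informative candidate input for a predictive model."""
    def discretize(values):
        lo, hi = min(values), max(values)
        width = (hi - lo) / bins or 1.0          # guard against constant input
        return [min(int((v - lo) / width), bins - 1) for v in values]

    xd, yd = discretize(x), discretize(y)
    n = len(x)
    px, py = Counter(xd), Counter(yd)
    pxy = Counter(zip(xd, yd))
    mi = 0.0
    for (a, b), c in pxy.items():
        p = c / n
        mi += p * math.log(p * n * n / (px[a] * py[b]))
    return mi
```

Ranking each solar wind or index time series by this score against future Kp, and keeping only the top-scoring inputs, is the kind of dimensionality reduction the paragraph describes.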
Research outlined in this dissertation clearly demonstrates that machine learning tools can be used to develop empirical models from sparse data and also can be used to understand the underlying physical processes. Combining machine learning, physics-based modeling and data assimilation allows us to develop novel methods benefiting from these different approaches.
The Earth's inner magnetosphere is a very dynamic system, mostly driven by the external solar wind forcing exerted upon the magnetic field of our planet. Disturbances in the solar wind, such as coronal mass ejections and co-rotating interaction regions, cause geomagnetic storms, which lead to prominent changes in charged particle populations of the inner magnetosphere - the plasmasphere, ring current, and radiation belts. Satellites operating in the regions of elevated energetic and relativistic electron fluxes can be damaged by deep dielectric or surface charging during severe space weather events. Predicting the dynamics of the charged particles and mitigating their effects on the infrastructure is of particular importance, due to our increasing reliance on space technologies.
The dynamics of particles in the plasmasphere, ring current, and radiation belts are strongly coupled by means of collisions and collisionless interactions with electromagnetic fields induced by the motion of charged particles. Multidimensional numerical models simplify the treatment of transport, acceleration, and loss processes of these particles, and allow us to predict how the near-Earth space environment responds to solar storms. The models inevitably rely on a number of simplifications and assumptions that affect model accuracy and complicate the interpretation of the results. In this dissertation, we quantify the processes that control electron dynamics in the inner magnetosphere, paying particular attention to the uncertainties of the employed numerical codes and tools.
We use a set of convenient analytical solutions for advection and diffusion equations to test the accuracy and stability of the four-dimensional Versatile Electron Radiation Belt (VERB-4D) code. We show that numerical schemes implemented in the code converge to the analytical solutions and that the VERB-4D code demonstrates stable behavior independent of the assumed time step. The order of the numerical scheme for the convection equation is demonstrated to affect results of ring current and radiation belt simulations, and it is crucially important to use high-order numerical schemes to decrease numerical errors in the model.
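The kind of verification described above can be illustrated with the simplest advection scheme: a first-order upwind update for u_t + c u_x = 0, whose numerical solution can be checked against the exact translated profile (a generic textbook sketch, not the VERB-4D schemes themselves):

```python
def upwind_step(u, c, dx, dt):
    """One first-order upwind step for u_t + c*u_x = 0 with c > 0 on a
    periodic grid. Stable for Courant number nu = c*dt/dx <= 1; the update
    reduces to an exact shift of the profile when nu == 1."""
    nu = c * dt / dx
    n = len(u)
    return [u[i] - nu * (u[i] - u[(i - 1) % n]) for i in range(n)]
```

At nu = 1 the scheme reproduces the analytical solution exactly; at nu < 1 the profile is damped by numerical diffusion. Quantifying exactly this kind of scheme-dependent error against known analytical solutions is what the convergence tests of the abstract refer to.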
Using the thoroughly tested VERB-4D code, we model the dynamics of the ring current electrons during the 17 March 2013 storm. The discrepancies between the model and observations above 4.5 Earth's radii can be explained by uncertainties in the outer boundary conditions. Simulation results indicate that the electrons were transported from the geostationary orbit towards the Earth by the global-scale electric and magnetic fields.
We investigate how simulation results depend on the input models and parameters. The model is shown to be particularly sensitive to the global electric field and electron lifetimes below 4.5 Earth's radii. The effects of radial diffusion and subauroral polarization streams are also quantified.
We develop a data-assimilative code that blends together a convection model of energetic electron transport and loss and Van Allen Probes satellite data by means of the Kalman filter. We show that the Kalman filter can correct model uncertainties in the convection electric field, electron lifetimes, and boundary conditions. It is also demonstrated how the innovation vector - the difference between observations and model prediction - can be used to identify physical processes missing in the model of energetic electron dynamics.
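The assimilation step can be sketched with a scalar Kalman filter: predict with the model, form the innovation (observation minus prediction), and blend the two according to the model and observation error covariances. This is a generic one-dimensional illustration with made-up noise parameters, not the thesis code:

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One scalar Kalman filter step.
    x, P : prior state estimate and its variance
    z    : new observation
    F, Q : model dynamics and process-noise variance
    H, R : observation operator and observation-noise variance
    Returns the updated state, its variance, and the innovation v."""
    # predict with the model
    x_pred = F * x
    P_pred = F * P * F + Q
    # innovation: misfit between observation and model prediction
    v = z - H * x_pred
    S = H * P_pred * H + R        # innovation variance
    K = P_pred * H / S            # Kalman gain
    # update: blend model and data according to their uncertainties
    return x_pred + K * v, (1.0 - K * H) * P_pred, v
```

Repeated observations pull the state toward the data at a rate set by the gain; a persistently biased innovation sequence is exactly the diagnostic the abstract describes for spotting physics missing from the model.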
We compute radial profiles of phase space density of ultrarelativistic electrons, using Van Allen Probes measurements. We analyze the shape of the profiles during geomagnetically quiet and disturbed times and show that the formation of new local minima in the radial profiles coincides with ground observations of electromagnetic ion-cyclotron (EMIC) waves. This correlation indicates that EMIC waves are responsible for the loss of ultrarelativistic electrons from the heart of the outer radiation belt into the Earth's atmosphere.
The two hallmark features of Brownian motion are the linear growth ⟨x²(t)⟩ = 2dDt of the mean squared displacement (MSD) with diffusion coefficient D in d spatial dimensions, and the Gaussian distribution of displacements. With the increasing complexity of the studied systems, deviations from these two central properties have been unveiled over the years. Recently, a large variety of systems have been reported in which the MSD exhibits the linear growth in time of Brownian (Fickian) transport while the distribution of displacements is pronouncedly non-Gaussian (Brownian yet non-Gaussian, BNG). A similar behaviour is also observed for viscoelastic-type motion, where an anomalous trend of the MSD, i.e., ⟨x²(t)⟩ ~ t^α, is combined with a priori unexpected non-Gaussian distributions (anomalous yet non-Gaussian, ANG). This kind of behaviour observed in BNG and ANG diffusion has been related to the presence of heterogeneities in the systems, and a common approach has been established to address it: the random diffusivity approach.
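The defining BNG signature can be reproduced in a few lines with the superstatistical version of the random diffusivity approach: drawing a fixed diffusivity per trajectory from an exponential distribution leaves the MSD linear, ⟨x²(t)⟩ = 2⟨D⟩t, but makes the one-point displacement distribution Laplace-like, with kurtosis 6 instead of the Gaussian value 3. A minimal Monte Carlo sketch (parameter values are illustrative):

```python
import random

def displacements(n, t=10.0, d_mean=0.5, random_diffusivity=False, seed=1):
    """1-D displacements x(t) ~ N(0, 2*D*t). With random_diffusivity=True,
    D is drawn per trajectory from an exponential distribution with mean
    d_mean (superstatistics), yielding Brownian yet non-Gaussian statistics."""
    rng = random.Random(seed)
    xs = []
    for _ in range(n):
        D = rng.expovariate(1.0 / d_mean) if random_diffusivity else d_mean
        xs.append(rng.gauss(0.0, (2.0 * D * t) ** 0.5))
    return xs

def kurtosis(xs):
    """Raw kurtosis m4/m2^2: 3 for a Gaussian, 6 for a Laplace distribution."""
    m2 = sum(x * x for x in xs) / len(xs)
    m4 = sum(x ** 4 for x in xs) / len(xs)
    return m4 / (m2 * m2)
```

Both ensembles share the same mean squared displacement; only the random-diffusivity ensemble departs from Gaussianity, which is precisely the BNG phenomenology described above.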
This dissertation explores extensively the field of random diffusivity models. Starting from a chronological description of all the main approaches used as an attempt of describing BNG and ANG diffusion, different mathematical methodologies are defined for the resolution and study of these models. The processes that are reported in this work can be classified in three subcategories, i) randomly-scaled Gaussian processes, ii) superstatistical models and iii) diffusing diffusivity models, all belonging to the more general class of random diffusivity models. Eventually, the study focuses more on BNG diffusion, which is by now well-established and relatively well-understood. Nevertheless, many examples are discussed for the description of ANG diffusion, in order to highlight the possible scenarios which are known so far for the study of this class of processes.
The second part of the dissertation deals with the statistical analysis of random diffusivity processes. A general description based on the concept of moment-generating function is initially provided to obtain standard statistical properties of the models. Then, the discussion moves to the study of the power spectral analysis and the first passage statistics for some particular random diffusivity models. A comparison between the results coming from the random diffusivity approach and the ones for standard Brownian motion is discussed. In this way, a deeper physical understanding of the systems described by random diffusivity models is also outlined.
To conclude, a discussion based on the possible origins of the heterogeneity is sketched, with the main goal of inferring which kind of systems can actually be described by the random diffusivity approach.
The Andean Plateau (Altiplano-Puna Plateau) of the southern Central Andes is the second-highest orogenic plateau on our planet after Tibet. The Andean Plateau and its foreland exhibit a pronounced north-to-south segmentation in the style and magnitude of deformation. In the Altiplano (northern segment), more than 300 km of tectonic shortening has been recorded, which started during the Eocene. A well-developed thin-skinned thrust wedge located at the eastern flank of the plateau (Subandes) indicates a simple-shear shortening mode. In contrast, the Puna (southern segment) records approximately half of the shortening of the Altiplano, and the shortening started later. The tectonic style in the Puna foreland switches to a thick-skinned mode, which is related to pure-shear shortening. In this study, carried out in the framework of the StRATEGy project, high-resolution 2D thermomechanical models were developed to systematically investigate the controls on deformation patterns in the orogen-foreland pair. The 2D and 3D models were subsequently applied to study the evolution of foreland deformation and surface topography in the Altiplano-Puna Plateau. The models demonstrate that three principal factors control the foreland-deformation patterns: (i) strength differences in the upper lithosphere between the orogen and its foreland, rather than a strength difference in the entire lithosphere; (ii) the gravitational potential energy (GPE) of the orogen, controlled by crustal and lithospheric thicknesses; and (iii) the strength and thickness of foreland-basin sediments. The high-resolution 2D models are constrained by observations and successfully reproduce the deformation structures and surface topography of different segments of the Altiplano-Puna Plateau and its foreland.
The developed 3D models confirm these results and suggest that the relatively high shortening rate in the Altiplano foreland (Subandean foreland fold-and-thrust belt) is due to simple-shear shortening facilitated by thick and mechanically weak sediments, a process which requires a much lower driving force than the pure-shear shortening deformation mode in the adjacent broken foreland of the Puna, where these thick sedimentary basin fills are absent. The lower shortening rate in the Puna foreland is likely accommodated in the forearc by slab retreat.
Seismic receiver arrays have a variety of applications in seismology, particularly when signal enhancement is a prerequisite for detecting seismic events, and in situations where installing and maintaining sparse networks is impractical. This thesis has mainly focused on the development of a new approach for seismological source and receiver array design. The proposed approach treats the array design task as an optimization problem: the criteria and prerequisite constraints of the design task are integrated into the definition and evaluation of the objective function of an optimization process. Three cases are covered in this thesis: (1) a 2-D receiver array geometry optimization, (2) a 3-D source array optimization, and (3) an array application to monitor microseismic data, in which the effects of different types of noise are evaluated.
A flexible receiver array design framework implements a customizable scenario modelling and optimization scheme by making use of synthetic seismograms. Using synthetic seismograms to evaluate array performance makes it possible to consider additional constraints, e.g. land ownership, site-specific noise levels, or characteristics of the seismic sources under investigation. The use of synthetic array beamforming as an array design criterion is suggested. The framework is customized by designing a 2-D small-scale receiver array to monitor earthquake swarm activity in northwest Bohemia/Vogtland in central Europe. Two sub-functions are defined to verify the accuracy of horizontal slowness estimation: one to suppress aliasing effects due to possible secondary lobes of synthetic array beamforming calculated in horizontal slowness space, and the other to reduce the event mislocation caused by miscalculation of the horizontal slowness vector. Subsequently, a weighting technique is applied to combine the sub-functions into one single scalar objective function to use in the optimization process.
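The weighting step can be sketched generically: normalize each sub-function across the candidate array geometries, then combine the normalized values with user-chosen weights into one scalar to minimize. An illustrative min-max scheme (the normalization and names are assumptions, not the exact method of the thesis):

```python
def scalarize(values, weights):
    """Weighted-sum scalarization of a multi-objective design problem.
    values  : list of per-candidate tuples of raw objective values (to minimize)
    weights : one weight per objective
    Each objective is min-max normalized across candidates to [0, 1] so that
    differently scaled sub-functions are comparable before weighting."""
    columns = list(zip(*values))                     # one column per objective
    normalized = []
    for col in columns:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0                      # guard: constant objective
        normalized.append([(v - lo) / span for v in col])
    return [sum(w * normalized[j][i] for j, w in enumerate(weights))
            for i in range(len(values))]
```

The candidate with the smallest combined score is the preferred geometry; changing the weights shifts the trade-off between, e.g., slowness accuracy and mislocation.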
The idea of an optimal array is employed to design a 3-D source array (SA), given a well-located catalog of events. The conditions for constructing source arrays are formulated as four objective functions, and a weighted-sum technique is used to combine them into one single scalar function. The criteria are: (1) accurate slowness vector estimation, (2) high waveform coherency, (3) low location error, and (4) high energy of coda phases. The method is evaluated by two experiments, (1) a synthetic test using realistic synthetic seismograms, and (2) a test using real seismograms; for each case, optimized SA elements are configured using the data from the Vogtland area.
The location of a possible scatterer in the velocity model, which generates the converted/reflected phases, e.g. sp-phases, is retrieved by a grid search method using the optimized SA. The accuracy of the approach and the obtained results demonstrate that the method is applicable to studying the crustal structure and locating crustal scatterers when strong converted phases are observed in the data and a well-located catalog is available.
Small aperture arrays are employed in seismology for a variety of applications, ranging from pure event detection to the monitoring and study of microseismic activity. The monitoring of microseismicity during temporary human activities is often difficult, as the signal-to-noise ratio is very low and noise is strongly increased during the operation. The combination of small aperture seismic arrays with shallow borehole sensors offers a solution. We tested this monitoring approach at two different sites, (1) accompanying a fracking experiment in sedimentary shale at 4 km depth, and (2) above a gas field under depletion. Array recordings are compared with recordings available from shallow borehole sensors, and examples of the detection and location performance of the array are given. The effects of different types of noise at array and borehole stations are compared and discussed.
Near-Earth space represents a significant scientific and technological challenge. Particularly at magnetic low latitudes, the horizontal magnetic field geometry at the dip equator and its closed field lines support the existence of a distinct electric current system, abrupt electric field variations, and the development of plasma irregularities. Of particular interest are small-scale irregularities associated with equatorial plasma depletions (EPDs). They are responsible for the disruption of trans-ionospheric radio waves used for navigation, communication, and Earth observation. The rapid increase in the number of satellite missions makes it imperative to study near-Earth space, especially the phenomena known to harm space technology or disrupt their signals. EPDs correspond to the large-scale structure (i.e., tens to hundreds of kilometers) of topside F region irregularities commonly known as Spread F. They are observed as depleted-plasma density channels aligned with the ambient magnetic field in the post-sunset low-latitude ionosphere. Although the climatological variability of their occurrence in terms of season, longitude, local time, and solar flux is well known, their day-to-day variability is not. The sparse observations from ground-based instruments like radars and the few simultaneous measurements of ionospheric parameters by space-based instruments have left gaps in the knowledge of EPDs essential to comprehend their variability.
In this dissertation, I profited from the unique observations of ESA’s Swarm constellation mission, launched in November 2013, to tackle three issues that revealed novel and significant results on the current knowledge of EPDs. I used Swarm’s measurements of the electron density, magnetic, and electric fields to answer: (1.) what is the direction of propagation of the electromagnetic energy associated with EPDs?, (2.) what are the spatial and temporal characteristics of the electric currents (field-aligned and diamagnetic currents) related to EPDs, i.e., seasonal/geographical and local time dependencies?, and (3.) under what conditions does the balance between magnetic and plasma pressure across EPDs occur?
The results indicate that: (1.) The electromagnetic energy associated with EPDs presents a preference for interhemispheric flows; that is, the related Poynting flux directs from one magnetic hemisphere to the other and varies with longitude and season. (2.) The field-aligned currents at the edges of EPDs are interhemispheric. They generally close in the hemisphere with the highest Pedersen conductance. Such hemispherical preference presents a seasonal/longitudinal dependence. The diamagnetic currents increase or decrease the magnetic pressure inside EPDs. These two effects rely on variations of the plasma temperature inside the EPDs that depend on longitude and local time. (3.) EPDs present lower or higher plasma pressure than the ambient. For low-pressure EPDs the plasma pressure gradients are mostly dominated by variations of the plasma density so that variations of the temperature are negligible. High-pressure EPDs suggest significant temperature variations with magnitudes of approximately twice the ambient. Since their occurrence is more frequent in the vicinity of the South Atlantic magnetic anomaly, such high temperatures are suggested to be due to particle precipitation.
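The pressure-balance argument in (2.) and (3.) rests on comparing the magnetic pressure B²/2μ₀ with the plasma pressure n k_B (T_e + T_i): the diamagnetic current adjusts the magnetic pressure inside a depletion so that the total pressure stays continuous across its edges. A small numerical sketch with representative topside-ionosphere values (the numbers are illustrative only, not measurements from the thesis):

```python
MU0 = 4e-7 * 3.141592653589793   # vacuum permeability, H/m
K_B = 1.380649e-23               # Boltzmann constant, J/K

def magnetic_pressure(b_tesla):
    """Magnetic pressure B^2 / (2*mu0), in Pa."""
    return b_tesla ** 2 / (2.0 * MU0)

def plasma_pressure(n_per_m3, t_electron_k, t_ion_k):
    """Thermal plasma pressure n * k_B * (T_e + T_i), in Pa."""
    return n_per_m3 * K_B * (t_electron_k + t_ion_k)
```

With B ≈ 3e-5 T, n ≈ 1e12 m⁻³, and temperatures of a few thousand kelvin, the magnetic pressure exceeds the plasma pressure by orders of magnitude, which is why even large relative changes in plasma pressure inside an EPD show up only as small diamagnetic perturbations of the magnetic field.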
In a broader context, this dissertation shows how dedicated satellite missions with high-resolution capabilities improve the specification of the low-latitude ionospheric electrodynamics and expand knowledge on EPDs which is valuable for current and future communication, navigation, and Earth-observing missions. The contributions of this investigation represent several ’firsts’ in the study of EPDs: (1.) The first observational evidence of interhemispheric electromagnetic energy flux and field-aligned currents. (2.) The first spatial and temporal characterization of EPDs based on their associated field-aligned and diamagnetic currents. (3.) The first evidence of high plasma pressure in regions of depleted plasma density in the ionosphere. These findings provide new insights that promise to advance our current knowledge of not only EPDs but the low-latitude post-sunset ionosphere environment.
Using individual-based modeling to understand grassland diversity and resilience in the Anthropocene
(2020)
The world’s grassland systems are increasingly threatened by anthropogenic change. Susceptible to a variety of different stressors, from land-use intensification to climate change, understanding the mechanisms driving the maintenance of these systems’ biodiversity and stability, and how these mechanisms may shift under human-mediated disturbance, is thus critical for successfully navigating the next century. Within this dissertation, I use an individual-based and spatially-explicit model of grassland community assembly (IBC-grass) to examine several processes, thought key to understanding their biodiversity and stability and how it changes under stress. In the first chapter of my thesis, I examine the conditions under which intraspecific trait variation influences the diversity of simulated grassland communities. In the second and third chapters of my thesis, I shift focus towards understanding how belowground herbivores influence the stability of these grassland systems to either a disturbance that results in increased, stochastic, plant mortality, or eutrophication.
Intraspecific trait variation (ITV), or variation in trait values between individuals of the same species, is fundamental to the structure of ecological communities. However, because it has historically been difficult to incorporate into theoretical and statistical models, it has remained largely overlooked in community-level analyses. This reality is quickly shifting, however, as a growing body of research suggests that it may compose a sizeable proportion of the total variation within an ecological community and that it may play a critical role in determining whether species coexist. Despite this increasing awareness that ITV matters, there is little consensus on the magnitude and direction of its influence. Therefore, to better understand how ITV changes the assembly of grassland communities, in the first chapter of my thesis, I incorporate it into an established, individual-based grassland community model, simulating both pairwise invasion experiments as well as the assembly of communities with varying initial diversities. By varying the amount of ITV in these species’ functional traits, I examine the magnitude and direction of ITV’s influence on pairwise invasibility and community coexistence. During pairwise invasion, ITV enables the weakest species to invade the competitively superior species more frequently; however, this influence does not generally scale to the community level. Indeed, unless the community has low alpha- and beta-diversity, there will be little effect of ITV in bolstering diversity. In these situations, since the trait axis is sparsely filled, the competitively inferior species may suffer less competition, and ITV may therefore buffer the persistence and abundance of these species for some time.
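The pairwise-invasion logic follows classical coexistence theory: each species must be able to grow when rare against the other at its equilibrium. Under standard Lotka-Volterra competition (a textbook benchmark used here only for illustration, not the IBC-grass rules) the criterion reduces to a sign check:

```python
def invasion_growth_rate(r_inv, k_inv, a, k_res):
    """Per-capita growth rate of a rare invader against a resident sitting at
    its carrying capacity k_res; a is the competition coefficient measuring
    the resident's effect on the invader. Positive means invasion succeeds."""
    return r_inv * (1.0 - a * k_res / k_inv)

def mutual_invasibility(k1, k2, a12, a21, r1=1.0, r2=1.0):
    """Classical Lotka-Volterra coexistence criterion:
    both species must invade when rare."""
    return (invasion_growth_rate(r1, k1, a12, k2) > 0.0 and
            invasion_growth_rate(r2, k2, a21, k1) > 0.0)
```

In the individual-based setting, ITV effectively blurs the competition coefficients across individuals, which is why it can flip the sign of an invasion that is deterministic in the mean-trait model.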
In the second and third chapters of my thesis, I model how one of the most ubiquitous trophic interactions within grasslands, herbivory belowground, influences their diversity and stability. Until recently, the fundamental difficulty in studying a process within the soil has left belowground herbivory “out of sight, out of mind.” This dilemma presents an opportunity for simulation models to explore how this understudied process may alter community dynamics. In the second chapter of my thesis, I implement belowground herbivory – represented by the weekly removal of plant biomass – into IBC-grass. Then, by introducing a pulse disturbance, modelled as the stochastic mortality of some percentage of the plant community, I observe how the presence of belowground herbivores influences the resistance and recovery of Shannon diversity in these communities. I find that high resource, low diversity, communities are significantly more destabilized by the presence of belowground herbivores after disturbance. Depending on the timing of the disturbance and whether the grassland’s seed bank persists for more than one season, the impact of the disturbance – and subsequently the influence of the herbivores – can be greatly reduced. However, because human-mediated eutrophication increases the amount of resources in the soil, thus pressuring grassland systems, our results suggest that the influence of these herbivores may become more important over time.
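The stability metrics in this chapter are computed on Shannon diversity, H = -Σ p_i ln p_i over relative abundances: resistance compares pre- and post-disturbance H, and recovery tracks its return toward the baseline. The index itself is a one-liner (the generic textbook definition, not IBC-grass output code):

```python
import math

def shannon_diversity(abundances):
    """Shannon index H = -sum(p_i * ln p_i) over nonzero relative abundances.
    H is maximal (ln of species richness) for a perfectly even community and
    zero for a monoculture."""
    total = float(sum(abundances))
    proportions = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in proportions)
```

Tracking this index through a simulated mortality pulse, with and without belowground herbivores, yields the resistance and recovery comparisons the paragraph describes.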
In the third chapter of my thesis, I delve further into understanding the mechanistic underpinnings of belowground herbivores on the diversity of grasslands by replicating an empirical mesocosm experiment that crosses the presence of herbivores above- and below-ground with eutrophication. I show that while aboveground herbivory, as predicted by theory and frequently observed in experiments, mitigates the impact of eutrophication on species diversity, belowground herbivores counterintuitively reduce biodiversity. Indeed, this influence positively interacts with the eutrophication process, amplifying its negative impact on diversity. I discovered the mechanism underlying this surprising pattern to be that, as the herbivores consume roots, they increase the proportion of root resources to root biomass. Because root competition is often symmetric, herbivory fails to mitigate any asymmetries in the plants’ competitive dynamics. However, since the remaining roots have more abundant access to resources, the plants’ competition shifts aboveground, towards asymmetric competition for light. This leads the community towards a low-diversity state, composed of mostly high-performance, large plant species. We further argue that this pattern will emerge unless the plants’ root competition is asymmetric, in which case, like its counterpart aboveground, belowground herbivory may buffer diversity by reducing this asymmetry between the competitively superior and inferior plants.
I conclude my dissertation by discussing the implications of my research on the state of the art in intraspecific trait variation and belowground herbivory, with emphasis on the necessity of more diverse theory development in the study of these fundamental interactions. My results suggest that the influence of these processes on the biodiversity and stability of grassland systems is underappreciated and multidimensional, and must be thoroughly explored if researchers wish to predict how the world’s grasslands will respond to anthropogenic change. Further, should researchers myopically focus on understanding central ecological interactions through only mathematically tractable analyses, they may miss entire suites of potential coexistence mechanisms that can increase the coviability of species, potentially leading to coexistence over ecologically-significant timespans. Individual-based modelling, therefore, with its focus on individual interactions, will prove a critical tool in the coming decades for understanding how local interactions scale to larger contexts, and how these interactions shape ecological communities and further predicting how these systems will change under human-mediated stress.
Seismological agencies play an important role in seismological research and seismic hazard mitigation by providing source parameters of seismic events (location, magnitude, mechanism), and keeping these data accessible in the long term. The availability of catalogues of seismic source parameters is an important requirement for the evaluation and mitigation of seismic hazards, and the catalogues are particularly valuable to the research community as they provide fundamental long-term data in the geophysical sciences. This work is well motivated by the rising demand for developing more robust and efficient methods for routine source parameter estimation, and ultimately generation of reliable catalogues of seismic source parameters. Specifically, the aim of this work is to develop some methods to determine hypocentre location and temporal evolution of seismic sources based on regional and teleseismic observations more accurately, and investigate the potential of these methods to be integrated in near real-time processing.
To achieve this, a location method that considers several events simultaneously and improves the relative location accuracy among nearby events has been provided. This method reduces the biasing effects of lateral velocity heterogeneities (or, equivalently, compensates for limitations and inaccuracies in the assumed velocity model) by calculating a set of timing corrections for each seismic station. The systematic errors introduced into the locations by inaccuracies in the assumed velocity structure can thus be corrected without explicitly solving for a velocity model. Applications to sets of multiple earthquakes in complex tectonic environments with strongly heterogeneous structure, such as subduction zones and plate boundary regions, demonstrate that this relocation process significantly improves the hypocentre locations compared to standard locations.
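The station-term idea can be sketched in a few lines: after locating all events with the assumed 1-D model, average the travel-time residuals per station, subtract these corrections from the observed arrivals, and relocate; iterating absorbs the signature of unmodelled velocity heterogeneity without inverting for structure. A schematic version of one such iteration (a generic station-term sketch, not the thesis code):

```python
from collections import defaultdict

def station_terms(residuals):
    """residuals: list of {station: travel-time residual in seconds}, one dict
    per event. Returns a per-station correction equal to the mean residual
    over all events recording that station; subtracting these corrections from
    the observed arrival times before the next relocation pass removes the
    common, structure-induced bias shared by nearby events."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for event in residuals:
        for station, r in event.items():
            sums[station] += r
            counts[station] += 1
    return {station: sums[station] / counts[station] for station in sums}
```

Because nearby events share nearly identical ray paths to each station, the common residual is attributable to the velocity model rather than the hypocentres, which is why removing it sharpens the relative locations.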
To meet the computational demands of this location process, a new open-source software package has been developed that allows for efficient relocation of large sets of multiple seismic events using arrival time data. Building on this, a flexible location framework is provided that can be tailored to various application cases on local, regional, and global scales. The latest version of the software distribution, including source code, a user guide, an example data set, and a change history, is freely available to the community.
The developed relocation algorithm has been slightly modified and its performance evaluated in simulated near real-time processing. It has been demonstrated that applying the proposed technique significantly reduces the bias in routine locations and enhances the ability to locate lower-magnitude events using only regional arrival data.
Finally, returning to the emphasis on global seismic monitoring, an inversion framework has been developed to determine the seismic source time function through direct waveform fitting of teleseismic recordings. The inversion technique can be systematically applied to moderate-size seismic events and has the potential to be used in near real-time applications. It is exemplified by application to an abnormal seismic event: the 2017 North Korean nuclear explosion.
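The principle of recovering a source time function by direct waveform fitting can be sketched as a linear inverse problem: the observed seismogram is modelled as the convolution of a Green's function with the source time function, so stacking shifted copies of the Green's function into a matrix turns the fit into a least-squares problem. The Green's function and source below are synthetic toy signals, not actual teleseismic data, and real inversions typically add regularisation and positivity constraints.

```python
import numpy as np

# Synthetic Green's function (path + instrument response) and a known
# triangular source time function -- toy signals for illustration.
g = np.array([0.0, 1.0, 0.5, 0.25, 0.1])
stf_true = np.array([0.0, 0.5, 1.0, 0.5, 0.0])

# "Observed" teleseismic waveform: source convolved with Green's function.
d = np.convolve(g, stf_true)

# Build the convolution matrix G so that G @ stf == d, i.e. each column
# holds the Green's function delayed by one additional sample.
n = len(stf_true)
G = np.zeros((len(d), n))
for j in range(n):
    G[j:j + len(g), j] = g

# Recover the source time function by least-squares waveform fitting.
stf_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

With noise-free synthetics the recovery is exact; for real data, the misfit and the stability of the solution under regularisation indicate how well the source history is resolved.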
The presented work and application case studies in this dissertation represent the first step in an effort to establish a framework for automatic, routine generation of reliable catalogues of seismic event locations and source time functions.
Geomorphology seeks to characterize the forms, rates, and magnitudes of sediment and water transport that sculpt landscapes. These are generally referred to as earth surface processes, which incorporate the influence of biologic (e.g., vegetation), climatic (e.g., rainfall), and tectonic (e.g., mountain uplift) factors in dictating the transport of water and eroded material. In mountains, high relief and steep slopes combine with strong gradients in rainfall and vegetation to create dynamic expressions of earth surface processes. This same rugged topography presents challenges in data collection and process measurement, where traditional techniques involving detailed observations or physical sampling are difficult to apply at the scale of entire catchments. Herein lies the utility of remote sensing. Remote sensing is defined as any measurement that does not disturb the natural environment, typically via acquisition of images in the visible- to radio-wavelength range of the electromagnetic spectrum. Remote sensing is an especially attractive option for measuring earth surface processes, because large areal measurements can be acquired at much lower cost and effort than with traditional methods. These measurements cover not only topographic form, but also climatic and environmental metrics, which are all intertwined in the study of earth surface processes. This dissertation uses remote sensing data ranging from handheld camera-based photo surveying to spaceborne satellite observations to measure the expressions, rates, and magnitudes of earth surface processes in high-mountain catchments of the Eastern Central Andes in Northwest Argentina. This work probes the limits and caveats of remote sensing data and techniques applied to geomorphic research questions, and presents important progress at this disciplinary intersection.
Salt pans, also termed playas, are common landscape features of hydrologically closed basins in arid and semiarid zones, where evaporation significantly exceeds the local precipitation. The analysis and monitoring of salt pan environments is important for evaluating the current and future impact of these landscape features. Locally, salt pans are important for ecosystems, wildlife, and human health, and through dust emissions they influence the climate on regional and global scales. Increasing economic exploitation of these environments in recent years, e.g. by brine extraction for raw materials, as well as climate change, severely affect the water, material, and energy balance of these systems. Optical remote sensing has the potential to characterise salt pan environments and to increase the understanding of processes in playa basins, as well as to assess the wider impacts and feedbacks that exist between climate forcing and human intervention in their regions. Compared to traditional field samples and ground observations, remote sensing techniques can provide information for extensive regions at high temporal resolution. This is especially relevant for salt pans, which are often challenging to study because of their large size, remote location, and limited accessibility due to missing infrastructure and ephemeral flooding. Furthermore, the availability of current and upcoming hyperspectral remote sensing data has opened the opportunity to analyse the complex reflectance signatures that relate to the mineralogical mixtures found in salt pan sediments. However, these advances in sensor technology, as well as the increased data availability, have not yet been fully explored for the study of salt pan environments.
The potential of new sensors needs to be assessed, and state-of-the-art methods need to be adapted and improved, to provide reliable information for in-depth analysis of processes and characterisation of the current condition, as well as to support long-term monitoring and to evaluate the environmental impacts of changing climate and anthropogenic activity.
This thesis assesses the capabilities of optical remote sensing for the study of salt pan environments, combining the information content of hyperspectral data with the increased temporal coverage of multispectral observations for a more complete understanding of the spatial and temporal complexity of these environments, using the Omongwa salt pan in the south-west Kalahari as a test site. In particular, hyperspectral data are used for unmixing the mineralogical surface composition and for spectral feature-based modelling to quantify the main crust components, while time-series-based classification of multispectral data serves to assess the long-term dynamics and to analyse the seasonal process regime. The results show that the surface of the Omongwa pan can be categorised into three major crust types based on diagnostic absorption features and mineralogical ground-truth data. The mineralogical crust types can be related to different zones of surface dynamics, as well as to the pan morphology that influences brine flow during the pan inundation and desiccation cycles. Using current hyperspectral imagery, as well as simulated data of upcoming sensors, a robust quantification of the gypsum component could be derived. For the test site, the results further indicate that the crust dynamics are mainly driven by flooding events in the wet season, but are also influenced by temperature and aeolian activity in the dry season. Overall, the scientific outcomes show that optical remote sensing can provide a wide range of information helpful for the study of salt pan environments. The thesis also highlights that remote sensing approaches are most relevant when they are adapted to the specific site conditions and research scenario, and that upcoming sensors will increase the potential for mineralogical, sedimentological, and geomorphological analysis and will improve monitoring capabilities through increased data availability.
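The spectral unmixing step can be sketched as a linear mixture model, in which each pixel spectrum is a weighted sum of endmember spectra and the weights are the surface abundances. The endmember matrix below is hypothetical; operational unmixing would use library or image-derived spectra and a constrained (e.g. non-negative, sum-to-one) solver rather than plain least squares.

```python
import numpy as np

# Hypothetical endmember spectra: rows = wavelength bands, columns =
# crust constituents (e.g. halite, gypsum, clay). Values are illustrative.
E = np.array([
    [0.90, 0.75, 0.30],
    [0.85, 0.60, 0.35],
    [0.80, 0.80, 0.25],
    [0.70, 0.55, 0.40],
])

# Synthesise a mixed pixel from known abundances to verify recovery.
true_fractions = np.array([0.5, 0.3, 0.2])
pixel = E @ true_fractions

# Linear unmixing via least squares; clipping and renormalising stand in
# for the non-negativity and sum-to-one constraints used in practice.
fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)
fractions = np.clip(fractions, 0.0, None)
fractions /= fractions.sum()
```

For a noise-free synthetic pixel the known abundances are recovered exactly; with real imagery, residual spectra and abundance maps indicate how well the chosen endmembers describe the crust.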
The significant environmental and socioeconomic consequences of hydrometeorological extreme events, such as extreme rainfall, constitute a major motivation for analyzing these events in the south-central Andes of NW Argentina. The steep topographic and climatic gradients and their interactions frequently lead to the formation of deep convective storms and consequently trigger extreme rainfall.
In this dissertation, I focus on identifying the dominant climatic variables and atmospheric conditions and their spatiotemporal variability leading to deep convection and extreme rainfall in the south-central Andes.
This dissertation first examines the contribution of temperature to atmospheric humidity (dew-point temperature, Td) and to convection (convective available potential energy, CAPE) in the generation of deep convective storms and, hence, extreme rainfall along the topographic and climatic gradients. Both climatic variables were found to play an important role in extreme rainfall generation; however, their contributions differ between topographic and climatic sub-regions, as well as across rainfall percentiles.
Second, this dissertation explores whether near real-time measurements of integrated water vapor (IWV) by the Global Navigation Satellite System (GNSS) provide reliable data for explaining atmospheric humidity. I argue that GNSS-IWV, in conjunction with other atmospheric stability parameters such as CAPE, can explain extreme rainfall in the eastern central Andes. In my work, I rely on a multivariable regression analysis described by a theoretical relationship and a fitting-function analysis.
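A multivariable regression of a rainfall index on GNSS-IWV and CAPE can be sketched as an ordinary least-squares fit. The data below are synthetic, generated from an assumed linear relationship, so the exercise only demonstrates the fitting machinery, not the observed Andean relationship.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Synthetic predictors with plausible ranges (not observed data):
iwv = rng.uniform(10.0, 60.0, n)    # GNSS integrated water vapor [mm]
cape = rng.uniform(0.0, 3000.0, n)  # convective available potential energy [J/kg]

# Assumed linear model for a rainfall index, plus observational noise.
rain = 0.8 * iwv + 0.004 * cape + rng.normal(0.0, 1.0, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), iwv, cape])
coef, *_ = np.linalg.lstsq(X, rain, rcond=None)
# coef ~ [intercept, IWV coefficient, CAPE coefficient]
```

Recovering the assumed coefficients from noisy synthetic data is a useful sanity check before fitting the regression to station observations.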
Third, this dissertation identifies the local impact of convection on extreme rainfall in the eastern Andes. Using a Principal Component Analysis (PCA), I found that in the presence of moist and warm air, extreme rainfall is observed more often during local night hours. The analysis also addresses the mechanisms behind this observation.
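A PCA of co-varying atmospheric variables can be sketched with standardised data and a singular value decomposition. The variables below are synthetic stand-ins for temperature, humidity, and CAPE, chosen only to show how a dominant mode emerges from correlated inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic, correlated stand-ins for atmospheric variables.
temp = rng.normal(20.0, 5.0, n)
humidity = 0.7 * temp + rng.normal(0.0, 2.0, n)
cape = 0.5 * temp + 0.5 * humidity + rng.normal(0.0, 3.0, n)

# Standardise each variable, then run PCA via SVD.
X = np.column_stack([temp, humidity, cape])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)

# Fraction of variance explained by each principal component; with
# strongly correlated inputs the first mode dominates.
explained = s**2 / (s**2).sum()
```

The rows of `Vt` are the component loadings, which in an atmospheric application indicate which variable combinations (e.g. warm and moist conditions) define the dominant mode.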
Identifying the atmospheric conditions and climatic variables that lead to extreme rainfall is one of the main contributions of this dissertation. These conditions and variables are a prerequisite for understanding the dynamics of extreme rainfall and for predicting such events in the eastern Andes.