Refine
Year of publication
- 2020 (234)
Document Type
- Doctoral Thesis (234)
Keywords
- Maschinelles Lernen (3)
- diffusion (3)
- Anden (2)
- Andes (2)
- Arktis (2)
- Boden (2)
- Chemometrie (2)
- Datenassimilation (2)
- Diffusion (2)
- Digitalisierung (2)
- EEG (2)
- Galaktische Archäologie (2)
- Genomik (2)
- Geomagnetic activity (2)
- Geomagnetische Aktivität (2)
- Geophysik (2)
- ICP-OES (2)
- Klima (2)
- Körper (2)
- Migration (2)
- Nanopartikel (2)
- Nährelemente (2)
- PCA (2)
- PLSR (2)
- Paleoclimatology (2)
- Paläoklimatologie (2)
- Perowskit (2)
- Photochemie (2)
- Porous carbon (2)
- RFA (2)
- Rashba effect (2)
- Rashba-Effekt (2)
- Solarzellen (2)
- Spaltsätze (2)
- Stratigraphy (2)
- Translation (2)
- Wissensmanagement (2)
- XRF (2)
- body (2)
- chloroplast (2)
- clefts (2)
- emulsion (2)
- knowledge management (2)
- machine learning (2)
- memory (2)
- methane (2)
- remote sensing (2)
- social media (2)
- solar cells (2)
- 1,6,7,12-Tetraazaperylen (1)
- 3D field calculations (1)
- 3D-Feldsimulationen (1)
- A-bar movement (1)
- A-quer-Bewegung (1)
- AIDS (1)
- AM1 (1)
- AM1/FOMO (1)
- AM1/FOMO-CI (1)
- AMP (1)
- ARPES (1)
- ATPS (1)
- Abgleich von Abhängigkeiten (1)
- Active Labor Market Programs (1)
- Adipositas (1)
- Adressnormalisierung (1)
- Adviescollege Toetsing Administratieve Lasten (1)
- Adviescollege Toetsing Regeldruk (1)
- Advisory Boards (1)
- Affekt (1)
- Aid Effectiveness (1)
- Akan (1)
- Aktinzytoskelett (1)
- Algorithmen (1)
- Algorithms (1)
- Alpine Fault (1)
- Altes Testament (1)
- Altitude (1)
- America (1)
- American studies (1)
- Amerika (1)
- Amerikastudien (1)
- Ammonia (1)
- Ammoniak (1)
- Analogmodell (1)
- Analogue Model (1)
- Anforderungsprofil Judo (1)
- Anregungsspektren (1)
- Anreicherungsmethoden (1)
- Antikörper-Färbung (1)
- Anwendungsbedingungen (1)
- Aptamer (1)
- Aptamers (1)
- Arbeitgeberattraktivität (1)
- Archaeolithoporella (1)
- Archäomagnetismus (1)
- Arctic (1)
- Arctic-midlatitude linkages (1)
- Argentina (1)
- Argentinien (1)
- Array-Entwurf (1)
- Arrhenius (1)
- Asia (1)
- Asien (1)
- Astronomical instrumentation (1)
- Astrophysik (1)
- Astroteilchenphysik (1)
- Atemgas (1)
- Atmosphäre (1)
- Atomic Force Microscope (1)
- Aufbau Ost (1)
- Automatizität (1)
- Autotropher Nitrat-Aufnahme in Gewässernetzen (1)
- Autotrophic nitrate uptake in river networks (1)
- Azobenzene (1)
- Azobenzene containing surfactant (1)
- Azobenzol (1)
- Azobenzol enthaltende Moleküle (1)
- Azobenzol enthaltendes Tensid (1)
- B3LYP (1)
- BCH code (1)
- BCH-Code (1)
- Bacteria (1)
- Bakterien (1)
- Bank filtration (1)
- Bayesian Inference (1)
- Bayesian inference (1)
- Beanspruchungsfolgen (1)
- Bending energy (1)
- Benetzung (1)
- Beratung (1)
- Beratungssituationen (1)
- Berny-Algorithmus (1)
- Berufliche Wiedereingliederung (1)
- Berufliches Lernen (1)
- Besatzungspolitik (1)
- Bessere Rechtsetzung (1)
- Bestandsparameter (1)
- Better Regulation (1)
- Better Regulation Board (1)
- Beugungseffizienz (1)
- Biegeenergie (1)
- Big Data (1)
- Bildanalyse (1)
- Bildlinguistik (1)
- Bildungsbeteiligung (1)
- Bildungsentscheidungen (1)
- Bildungssystem (1)
- Bildungstechnologien (1)
- Biochemie (1)
- Biofilme (1)
- Biogeochemie (1)
- Biogeochemistry (1)
- Biogeographie (1)
- Biohacking (1)
- Biokompatibilität (1)
- Biologie (1)
- Biomasse (1)
- Biomimetics (1)
- Biomimetik (1)
- Biopolitik (1)
- Biotechnology (1)
- Biotop (1)
- Bioverfügbarkeit (1)
- Blickbewegungen (1)
- Bodenanalytik (1)
- Broken Web (1)
- Bula Matadi (1)
- Bundeskanzleramt (1)
- Bürokratieabbau (1)
- Bürokratiekosten (1)
- CCSEM (1)
- CO2 degassing (1)
- CO2-Entgasung (1)
- Calculus of Variation (1)
- Caldera-ähnliche Topographie (1)
- Candidate Journey (1)
- Canopy parameters (1)
- Carbonate-Silicate reactions (1)
- Cardinality estimation (1)
- Caribbean (1)
- Carrara marble (1)
- Carrara-Marmor (1)
- Catchment and in-stream water quality (1)
- Causal Inference (1)
- Cell-cell adhesion (1)
- Central Andes (1)
- Change Data Capture (1)
- Chemerin (1)
- Chemometrics (1)
- Chemotaxis (1)
- Childhoods (1)
- China (1)
- Chloroplast gene expression (1)
- Chloroplasten-Genexpression (1)
- Clay Minerals (1)
- Climate (1)
- Cluster (1)
- Codierungstheorie (1)
- Coding theory (1)
- Cognitive linguistics (1)
- Coiled Coil (1)
- Cone sheet (1)
- Conic compartments (1)
- Consciousness-for-sustainable-consumption-Modell (1)
- Consciousness-for-sustainable-consumption model (1)
- Contamination Control (1)
- Corporate Entrepreneurship (1)
- Criminal law (1)
- Criminology (1)
- Crowd Resourcing (1)
- Cyanobacteria (1)
- Cyanobakterien (1)
- Cybercriminology (1)
- Cybergrooming (1)
- Cyberkriminologie (1)
- Cybersecurity e-Learning (1)
- Cybersicherheit E-Learning (1)
- D3 (1)
- DFT (1)
- DIY (1)
- DNS (1)
- DRYM (1)
- Data Assimilation (1)
- Data Integration (1)
- Data Mining (1)
- Data Profiling (1)
- Data assimilation (1)
- Data profiling (1)
- Data quality (1)
- Data-Driven Methods (1)
- Daten Assimilation (1)
- Datenabgleich (1)
- Datenaufbereitung (1)
- Datenbereinigung (1)
- Datengetriebene Methoden (1)
- Datenintegration (1)
- Datenqualität (1)
- Datensatzverknüpfung (1)
- Decolmation (1)
- Defektchemie (1)
- Defekte (1)
- Deformationsmechanismen (1)
- Dekolmation (1)
- Denkstile (1)
- Der Rückzug aus dem Gazastreifen (1)
- Designed Biointerfaces (1)
- Designte Biointerface (1)
- Deutsch (1)
- Deutschland (1)
- Development Aid (1)
- Diagnostik (1)
- Dichtefunktionaltheorie (1)
- Dielektrophorese (1)
- Differential Geometrie (1)
- Differential Geometry (1)
- Digital Multimodal Linguistics (1)
- Digital education (1)
- Digitale Bildung (1)
- Digitale multimodale Linguistik (1)
- Direkte Manipulation (1)
- Dispersionskorrektur (1)
- Djerba (1)
- Do-it-yourself (1)
- Doppelpuls (1)
- Dritte Phase (1)
- Drogenhandel (1)
- Drought tolerance (1)
- Duplikaterkennung (1)
- Dwarf galaxies (1)
- Dynamic Data (1)
- Düngeempfehlung (1)
- E-Government (1)
- E-Z Isomerisierung (1)
- E. coli (1)
- EKP (1)
- ERP (1)
- Early Earth (1)
- Early Starvation 1 (1)
- Early starvation protein (1)
- Earth's magnetic field (1)
- Earth's mantle (1)
- Economics of Convention (1)
- Education technologies (1)
- Effizienz- und Effektivitätssteigerung (1)
- Einschätzung der Diffusion (1)
- Einzel-Objekt-Nachweis (1)
- Einzelmolekül-Biosensor (1)
- Einzelmolekülkraftspektroskopie (1)
- Einzelzellbewegung (1)
- Ejina Basin (1)
- Ejina Becken (1)
- El Niño-Southern Oscillation (ENSO) (1)
- El Niño-Südliche Oszillation (1)
- Electrocatalysis (1)
- Elektret (1)
- Elektrochemie (1)
- Elektrokatalyse (1)
- Elektronenstrukturrechnung (1)
- Emotion (1)
- Employer Branding (1)
- Emulsion (1)
- Emulsionen (1)
- Energieumwandlung (1)
- Englisch (1)
- English (1)
- Entbürokratisierung (1)
- Entitätsauflösung (1)
- Entlastungsspannung (1)
- Entwicklungszusammenarbeit (1)
- Eocene (1)
- Eozän (1)
- Epistemologie (1)
- Erdbeben (1)
- Erdbebenkatalogdaten (1)
- Erdbebenquellen-Array (1)
- Erdmagnetfeld (1)
- Erdmantel (1)
- Erdrutsche (1)
- Erfüllungsaufwand (1)
- Erinnerung (1)
- Erinnerungskultur (1)
- Escherichia coli (1)
- Ethik (1)
- Eutrophierung (1)
- Evolution (1)
- Ex-ante assessment (1)
- Exhaustivitätsinferenz (1)
- Experimental Physics (1)
- Experimentalphysik (1)
- Experimente (1)
- Expert interviews (1)
- Expertengremien (1)
- Experteninterview (1)
- Exploration (1)
- Extracellular Matrix (1)
- Extrazelluläre Matrix (1)
- Extremniederschläge (1)
- Eye movements (1)
- Eyetracking (1)
- Eyring (1)
- FELS (1)
- FGF21 (1)
- FRAP (1)
- Fachkräftemangel (1)
- Fahrkompetenz (1)
- Fault Healing (1)
- Fehlende Werte (1)
- Feldmaus (1)
- Fernerkundung (1)
- Fernerkundungsprodukte (1)
- Fernsehen/TV (1)
- Fibre-fed spectroscopy (1)
- Flagella (1)
- Flagellen (1)
- Flow (1)
- Fluid-Gesteins-Wechselwirkung (1)
- Fluorchemie (1)
- Fluorescence fluctuation spectroscopy (1)
- Fluoreszenz-Mikroskopie (1)
- Fluoreszenzfluktuationsspektroskopie (1)
- Fluoreszenzproteine (1)
- Flüchtlingskrise (1)
- Fokus (1)
- Folgekosten (1)
- Forecasting (1)
- Foreign Keys (1)
- Foreign Keys Discovery (1)
- Formgleichungen von Vesikeln (1)
- Forschend Entdeckendes Lernen (1)
- Fortbildung (1)
- Fotografieforschung (1)
- Fotogrammetrie (1)
- Fractals (1)
- Fragebogen (1)
- Fraktale (1)
- Frame Analyse (1)
- Französische Entwicklungsagentur (1)
- Frame Analysis (1)
- Französisch (1)
- Frauen (1)
- French (1)
- French Development Agency (1)
- Frequenzverdopplung (1)
- Frontier Conviviality (1)
- Frühe Erdgeschichte (1)
- Fulgimide (1)
- Fully distributed nitrate modeling (1)
- Functional dependencies (1)
- Funktionale Abhängigkeiten (1)
- Funktionalisierung (1)
- Förster Resonanz Energie Transfer (FRET) (1)
- Förster resonance energy transfer (FRET) (1)
- Führung (1)
- GAUSSIAN (1)
- GPI (1)
- Galactic Archaeology (1)
- Galactic archaeology (1)
- Galaxien (1)
- Galaxienbalken (1)
- Galaxienbulges (1)
- Galaxienentwicklung (1)
- Galaxienstruktur (1)
- Gammastrahlung (1)
- Gammastrahlungsastronomie (1)
- Gaxun Nur (1)
- Gedächtnis (1)
- Gelatine-Analogmodellierung (1)
- Genomic Mining (1)
- Genomics (1)
- Geodynamik (1)
- Geological heterogeneity (1)
- Geologie (1)
- Geologische Heterogenität (1)
- Geology (1)
- Geomagnetic index (1)
- Geomagnetic observatory (1)
- Geomagnetischer Index (1)
- Geomagnetisches Observatorium (1)
- Geometric Analysis (1)
- Geometric Data Analysis (1)
- Geometrieoptimierung (1)
- Geometrische Analysis (1)
- Geometrische Datenanalyse (1)
- Geomicrobiology (1)
- Geomikrobiologie (1)
- Geomorphologie (1)
- Geophysics (1)
- Geosciences (1)
- Geowissenschaften (1)
- German (1)
- German Studies (1)
- Germanistica (1)
- Germanistik (1)
- Germany (1)
- Geschäftsbeziehungstypen (1)
- Geschäftsverhandlungen (1)
- Gesetzesfolgenabschätzung (1)
- Gesetzesqualität (1)
- Gilbert Imlay (1)
- Gitterdynamik (1)
- Glissant (1)
- Glucan water dikinase (1)
- Glucan-Wasser-Dikinase (1)
- Glukose Oxidation (1)
- Glycoproteins (1)
- Glykolipide (1)
- Glykoproteine (1)
- Grafen von Arnstein (1)
- Graphbedingungen (1)
- Graphen (1)
- Graphitic carbon nitride (1)
- Graphtransformationen (1)
- Graphtransformationssysteme (1)
- Grasland (1)
- Grenzflächen (1)
- Grenzflächenrekombination (1)
- Gressmann, Hugo (1)
- Griffithsin (1)
- Groundwater modelling (1)
- Großbritannien (1)
- Grundschule (1)
- Grundwassermodellierung (1)
- Grundwasserneubildung (1)
- Grüne Infrastruktur (1)
- HIV (1)
- Habitat (1)
- Halachic (1)
- Halo der Milchstraße (1)
- Halogenid-Perowskite (1)
- Hartree Fock (1)
- Hasserkennung (1)
- Heat transport (1)
- Heavy Minerals (1)
- Heihe (1)
- Herdzeit Parameter Abschätzung (1)
- Herkunftsanalyse (1)
- Herrschaft Ruppin (1)
- Herschel Island Qikiqtaruk (1)
- Herzentwicklung (1)
- Herzklappe (1)
- Heterogene Einzugsgebietsreaktionen (1)
- Heterogeneous catchment responses (1)
- Heterogenität (1)
- High-frequency data (1)
- Hill Youth (1)
- Himalaya (1)
- Histidin-Metall Koordination (1)
- Hochdurchsatzsequenzierung (1)
- Hochfrequenzdaten (1)
- Hochschulforschung (1)
- Hochtemperatur-Gesteinsdeformation (1)
- Hybrid materials synthesis (1)
- Hydrogel (1)
- Hydrologie (1)
- Hydrophobic and hydrophilic interactions (1)
- Hydrophobizität (1)
- Hyperpolarisierbarkeit (1)
- Hyperspektral (1)
- Höhe (1)
- ICON (1)
- IMAGE EUV (1)
- IR laser (1)
- Iconolinguistica (1)
- Identitätsorientierte Unternehmensführung (1)
- Igbo (1)
- Image studies (1)
- Immigration (1)
- Impact Assessment (1)
- Impact Assessment Board (1)
- Inclusion Dependency (1)
- Inclusion Dependency Discovery (1)
- Incremental Discovery (1)
- Incrementally Inclusion Dependencies Discovery (1)
- Index (1)
- India (1)
- Inklusion (1)
- Inklusionsabhängigkeiten (1)
- Inklusionsabhängigkeiten Entdeckung (1)
- Inner magnetosphere (1)
- Innere Magnetosphäre (1)
- Innovation Management (1)
- Institutional Complexity (1)
- Institutionelle Komplexität (1)
- Integrale Feldspektroskopie (1)
- Intentional communities (1)
- Intrapreneurship (1)
- Ionenmobilitätsspektrometrie (1)
- Ionic liquids (1)
- Ionische Flüssigkeiten (1)
- Ionosphäre (1)
- Isomerisierung Kinetik (1)
- Jahreszeitenvorhersage (1)
- Janus colloids (1)
- Janus-Kolloid (1)
- Kalfon, Mosheh, ha-Kohen (1)
- Kalman Filter (1)
- Kalman filter (1)
- Kampfsport (1)
- Kant (1)
- Karbonat-Silikat-Reaktionen (1)
- Kardinalitätsschätzung (1)
- Kardiologische Rehabilitation (1)
- Karibik (1)
- Kartierung (1)
- Kartographie (1)
- Kegelförmige Geometrien (1)
- Kern-Schale Aufkonvertierende Nanopartikel (1)
- Kindesmissbrauch (1)
- Kinetics of photoisomerization (1)
- Klimapolitik (1)
- Klimatologie (1)
- Kognitionswissenschaft (1)
- Kognitive Linguistik (1)
- Kohlenhydrate (1)
- Kohn Sham (1)
- Kolonialismus (1)
- Kommunen (1)
- Kontaminationskontrolle (1)
- Koordination (1)
- Kostenfolgen (1)
- Kp index (1)
- Kp-Index (1)
- Kreolische Sprachen (1)
- Kriminalpolitik (1)
- Kriminologie (1)
- Kristang (1)
- Kritik der Urteilskraft (1)
- Kugelflächenfunktionen (1)
- Kulturwissenschaft (1)
- L-edge spectroscopy (1)
- LDDO (1)
- LDPC code (1)
- LDPC-Code (1)
- LIBS (1)
- Ladungsspeicherung und -transport (1)
- Landschaft (1)
- Landschaftsökologie (1)
- Landwirtschaft (1)
- Langmuir-Schaefer method (1)
- Langmuir-Schäfer-Methode (1)
- Lascar Volcano (1)
- Laser induzierte Breakdown Spektroskopie (1)
- Lava dome (1)
- Lavadom (1)
- Learning Analytics (1)
- Learning experience (1)
- Lebendigkeit (1)
- Lehrkräftebildung (1)
- Leistungsrückmeldung (1)
- Lernerlebnis (1)
- Lernumgebung (1)
- Lernverlauf (1)
- Lesen (1)
- Licht-Materie-Wechselwirkung (1)
- Light-Matter Coupling (1)
- Like-Early Starvation 1 (1)
- Like-Early starvation protein (1)
- Linguistica cognitiva (1)
- Linguistica del testo (1)
- Linguistica digitale multimodale (1)
- Lipide (1)
- Local adaptation (1)
- Lokalisierte Deformation (1)
- Lokalisierung von Verformung (1)
- Luteinester (1)
- Luxusgüter (1)
- Luxuskonsum (1)
- MOOC (1)
- MOOCs (1)
- MP2 (1)
- Macau (1)
- Machine learning (1)
- Magellanic Clouds (1)
- Magellansche Wolken (1)
- Magnetotactic bacteria (1)
- Mapping (1)
- Marie Howland (1)
- Marronage (1)
- Maschinen- und Anlagenbau (1)
- Massenaussterben (1)
- Massive Open Online Courses (1)
- Mathematical Physics (1)
- Mathematik (1)
- Mathematische Physik (1)
- Mbembe (1)
- Meaning Structure (1)
- Mechanical engineering (1)
- Mechanosensation (1)
- Medien (1)
- Mediengebrauch (1)
- Medienpraxeologie (1)
- Medienwissenschaft (1)
- Medizin (1)
- Medizinisch-beruflich orientierte Rehabilitation (MBOR) (1)
- Melaka (1)
- Melt inclusions (1)
- Membran (1)
- Metabolic Engineering (1)
- Metadata Discovery (1)
- Metadaten Entdeckung (1)
- Metal-poor stars (1)
- Metallarme Sterne (1)
- Metamorphism (1)
- Metamorphose (1)
- Metanome (1)
- Metasomatism (1)
- Metasomatose (1)
- Meteorologie (1)
- Methane (1)
- Methanogene (1)
- Methanotrophe (1)
- Microbial communities (1)
- Microtus arvalis (1)
- Mikrobielle Gemeinschaften (1)
- Mikrobieller Abbau von organischem Material (1)
- Mikrogel (1)
- Mikrostrukturen (1)
- Mikroviskosität (1)
- Milchstraße (1)
- Milky Way (1)
- Milky Way Halo (1)
- Min-Proteine (1)
- Min-proteins (1)
- Mind2 (1)
- Mineralzusammensetzung (1)
- Ministerial bureaucracy (1)
- Ministerialbürokratie (1)
- Missing values (1)
- Modal expansion method (1)
- Modeling (1)
- Modell der Frame-Selektion (1)
- Modellieren (1)
- Modellierung (1)
- Molecular motors (1)
- Molekular-dynamik (1)
- Molekulare Motoren (1)
- Moleküle in äußeren Feldern (1)
- Monsoon (1)
- Monsun (1)
- Moonlets (1)
- Morphologie (1)
- Motivation (1)
- Motivation zur Wissensteilung (1)
- Multi-Issue-Verhandlungen (1)
- Multi-object spectroscopy (1)
- Multimode fibres (1)
- Multiple Correspondence Analysis (1)
- Multiple Korrespondenzanalyse (1)
- Myodes glareolus (1)
- Nachhaltige Entwicklung (1)
- Nachhaltiger Konsum (1)
- Nachhaltiges Finanzwesen (1)
- Nachhaltigkeitsbewusstsein (1)
- Nachhaltigkeitstransformation (1)
- Nachkriegszeit (1)
- Nahrungsergänzungsmittel (1)
- Namibia (1)
- Nano-Elektroden (1)
- Nanoparticle (1)
- Nanoparticles (1)
- Nanoplättchen (1)
- Narcocultura (1)
- Narrative (1)
- Nathaniel Hawthorne (1)
- National Regulatory Control Council (1)
- National narrative (1)
- Nationaler Normenkontrollrat (1)
- Natural Products (1)
- Naturschutz (1)
- Naturstoffe (1)
- Netflix (1)
- Netzwerke (1)
- Neural networks (1)
- Neuronale Netze (1)
- Neutronen Diffraktion (1)
- Neutronen Reflektometrie (1)
- Neutronen aus kosmischer Höhenstrahlung (1)
- Nicotiana tabacum (1)
- Niederlande (1)
- Nineteenth century (1)
- Nitrogen Physisorption (1)
- Nordostbrasilien (1)
- Normalmodenanalyse (1)
- Nostoc punctiforme (1)
- Nutrients (1)
- Nutzen (1)
- Nutzerinteraktion (1)
- Oberflächenelektromyographie (1)
- Oberflächengitter (1)
- Oberflächentopografie (1)
- Oberflächen (1)
- Obesity (1)
- Oculomotor control (1)
- Okulomotorik (1)
- On-Demand (1)
- One-in-one-out (1)
- Online-Lernen (1)
- Onlinepredator (1)
- Optical remote sensing (1)
- Optische Fernerkundung (1)
- Ordnung der Partikel auf der Oberfläche (1)
- Organic matter mineralization (1)
- Organisationssoziologie (1)
- Ornstein-Uhlenbeck Process (1)
- Ornstein-Uhlenbeck Prozess (1)
- Outcome (1)
- P2P (1)
- PEG brushes (1)
- PEG-Funktionalisierung (1)
- PNIPAM (1)
- Packungsdichte (1)
- Paleogeography (1)
- Paläogeographie (1)
- Paläomagnetismus (1)
- Pantoea stewartii (1)
- Partial Differential Equations (1)
- Partielle Differential Gleichungen (1)
- Partikelverben (1)
- Patientenzufriedenheit (1)
- Peer Assessment (1)
- Perceived Relevance (1)
- Percolation (1)
- Perkolation (1)
- Perm (1)
- Permafrost carbon feedback (1)
- Permafrostdegradation (1)
- Permian (1)
- Perovskite (1)
- Perowskit Solarzellen (1)
- Petrologie (1)
- Petrology (1)
- Pflanzen (1)
- Pflanzenzellen (1)
- Phasenmodulationsspektroskopie (1)
- Philosophie der Biologie (1)
- Phosphoglucan water dikinase (1)
- Phosphoglucan-Wasser-Dikinase (1)
- Phosphorylation process (1)
- Phosphorylierungsprozess (1)
- Photochemistry (1)
- Photogrammetry (1)
- Photokatalyse (1)
- Photopolymer (1)
- Photopolymerization (1)
- Photorespiration (1)
- Photostrukturierung von Polymerfilmen (1)
- Photovoltaik (1)
- Physics Education (1)
- Physics Problems (1)
- Physikaufgaben (1)
- Physikdidaktik (1)
- Planungsstrategien (1)
- Plasmasphere (1)
- Plasmasphäre (1)
- Plasmonen (1)
- Plasmons (1)
- Playa (1)
- Politics (1)
- Politik (1)
- Politikberatung (1)
- Politolinguistica (1)
- Politolinguistics (1)
- Politolinguistik (1)
- Polyelektrolyt-Multischichten (1)
- Polymere (1)
- Polymers (1)
- Polypropylen (1)
- Polystyrol Nano-Sphären (1)
- Populationsanalyse (1)
- Populationsgenetik (1)
- Porous silica particles (1)
- Portugiesisch (1)
- Poröser Kohlenstoff (1)
- Postbürokratie (1)
- Postcolonial (1)
- Postkolonial (1)
- Pragmatica (1)
- Pragmatics (1)
- Pragmatik (1)
- Praxisrelevanz (1)
- Preaktivierung (1)
- Precision Agriculture (1)
- Preisschild (1)
- Price Tag (1)
- Professionalisierung (1)
- Prognose (1)
- Programmieren (1)
- Propellers (1)
- Protein (1)
- Protein complex assembly (1)
- Proteinkomplexassemblierung (1)
- Provenance Analysis (1)
- Prädikatsinterpretation (distributiv vs. nicht-distributiv) (1)
- QM/MM (1)
- Qualitative content analysis (1)
- Qualität (1)
- Quantenausbeute (1)
- Quantenpunkte (1)
- Quantifizierung von Unsicherheit (1)
- Quantumdots (1)
- Quell-Array optimales Design (1)
- R-PE (1)
- REM (1)
- Random Environments (1)
- Random Walk (1)
- Rasterkraftmikroskopie (1)
- Raumplanung (1)
- Reading (1)
- Rechtsetzungskultur (1)
- Reconstruction of eastern Germany (1)
- Redox (1)
- Redoxchemie (1)
- Reeducation (1)
- RegWatchEurope (1)
- Regestensammlung (1)
- Regulatory Impact Assessment (1)
- Regulatory Oversight Bodies (1)
- Regulatory Policy Committee (1)
- Regulatory Scrutiny Board (1)
- Regulierungspolitik (1)
- Relation (1)
- Relativsätze (1)
- Religion (1)
- Religionsgeschichtliche Schule (1)
- Religious Zionism (1)
- Religiöser Zionismus (1)
- Repertory Grid (1)
- Resonances (1)
- Resonante Energie Transfer (1)
- Restitution (1)
- Results-Based Management (1)
- Return to work (1)
- Rhenium (1)
- Rheologie (1)
- Ribosome profiling (1)
- Riff (1)
- Rings (1)
- Ringstromelektronen (1)
- Romance Studies (1)
- Romanistica (1)
- Romanistik (1)
- Rothberg (1)
- Rumpfkraft (1)
- Ruthenium (1)
- Räumlich verteilte Nitratmodellierung (1)
- Räumliche und zeitliche Nitratvariabilität (1)
- Rötelmaus (1)
- Rückgabe (1)
- S-indd++ (1)
- SAM (1)
- SAXS (1)
- SEM (1)
- SFG (1)
- SHG (1)
- SUMO (1)
- Salt pan (1)
- Salzpfanne (1)
- Satellitenmission Swarm (1)
- Saturn (1)
- Satzverarbeitung (1)
- Sauerstofflöschung (1)
- Sauerstoffsensorik (1)
- Scalability (1)
- Schema discovery (1)
- Schema-Entdeckung (1)
- Scherzonen (1)
- Schimmelpilze (1)
- Schmelzeinschlüsse (1)
- Schule (1)
- Schwerminerale (1)
- Schädel (1)
- Seasonal prediction (1)
- Secondary Metabolites (1)
- Sedimente (1)
- Sedimentologie (1)
- Sedimentology (1)
- Seismizitätsmodellierung (1)
- Seismologie (1)
- Seismology (1)
- Seismotektonik (1)
- Sekundärmetabolite (1)
- Selbstwirksamkeitserwartungen (1)
- Semantica (1)
- Semantics (1)
- Semantik (1)
- Semiotic Testology (1)
- Semiotica (1)
- Semiotics (1)
- Semiotik (1)
- Semiotische Textologie (1)
- Sensorik (1)
- Serialität (1)
- Settlers (1)
- Sexual children abuse (1)
- Sexualdelikte (1)
- Shape equations of vesicles (1)
- Siberia (1)
- Siedler (1)
- Siegel (1)
- Signalübertragung (1)
- Simulation (1)
- Single-cell motility (1)
- Skalenentwicklung (1)
- Skalierbarkeit (1)
- Skalierungsmethode von Champagne (1)
- Skriptsprachen (1)
- Smartphone (1)
- Soil (1)
- South Africa (1)
- Soziale Arbeit (1)
- Soziale Medien (1)
- Space climate (1)
- Space weather (1)
- Spatial and temporal nitrate variability (1)
- Spin Textur (1)
- Sport (1)
- Sprachkontakt (1)
- Spread F (1)
- Sprungwahrscheinlichkeit (1)
- Spröde Vorläufer (1)
- Spätmittelalter (1)
- Squeak/Smalltalk (1)
- Standardkostenmodell (1)
- Standort des Streuers (1)
- Starch metabolism (1)
- Start-Up Subsidies (1)
- Sterne (1)
- Sternenpopulationen (1)
- Stickstoff Physisorption (1)
- Strahlungsgürtel (1)
- Stratigrafie (1)
- Stratigraphie (1)
- Stratosphere-troposphere coupling (1)
- Stratospheric polar vortex (1)
- Stratosphären-Troposphären-Kopplung (1)
- Stratosphärischer Polarwirbel (1)
- Streuresonanzen (1)
- Strukturgeologie (1)
- Studienabbruch (1)
- Studieneingangsphase (1)
- Stärkestoffwechsel (1)
- Störungszonenarchitektur (1)
- Subduktion (1)
- Subjektivitäten (1)
- Submarine permafrost (1)
- Submariner Permafrost (1)
- Subsea permafrost (1)
- Supernovaüberrest (1)
- Surface Relief Grating (SRG) (1)
- Survey-Experiment (1)
- Sustainability (1)
- Sustainability awareness (1)
- Sustainable consumption (1)
- Sutton E. Griggs (1)
- Svalbard (1)
- Synthetic Biology (1)
- Systemtheorie (1)
- Südafrika (1)
- TDDFT (1)
- Tansania (1)
- Tanzania (1)
- Teamarbeit (1)
- Teilchenbeschleunigung (1)
- Tektonik (1)
- Tensor (1)
- Testologia semiotica (1)
- Text Linguistics (1)
- Textklassifikation (1)
- Textlinguistik (1)
- The disengagement from Gaza Strip (1)
- Theorie (1)
- Theorie-Praxis-Problem (1)
- Tonminerale (1)
- Topological Crystalline Insulator (1)
- Topological Insulator (1)
- Topologischer Isolator (1)
- Topologischer kristalliner Isolator (1)
- Touch-and-feel-Faktor (1)
- Touchpoint Management (1)
- Trajektorien (1)
- Translation feedback regulation (1)
- Translationsfeedbackregulation (1)
- Translationstheorie (1)
- Trockentoleranz (1)
- Tully-Algorithmus (1)
- USA (1)
- Uferfiltration (1)
- Umsatzsteuer (1)
- Uncertainty Quantification (1)
- Unsicherheiten (1)
- Urban (1)
- Utopia (1)
- Utopian communities (1)
- Utopie (1)
- Validation (1)
- Validierung (1)
- Variabilität (1)
- Variationsrechnung (1)
- Verarbeitung natürlicher Sprache (1)
- Verbindungspfade zwischen der Arktis und den mittleren Breiten (1)
- Vereinigtes Königreich (1)
- Verifikation induktiver Invarianten (1)
- Versorgungsmodell (1)
- Verteilungsmuster (1)
- Verwaltungskultur (1)
- Verwaltungsmodernisierung (1)
- Verwaltungsreform (1)
- Verwaltungstradition (1)
- Verwerfungen (1)
- Virtual Laboratory (1)
- Virtuelles Labor (1)
- Volcano (1)
- Vorhersagen (1)
- Vorlanddeformation (1)
- Vulkan (1)
- Vulkan Lascar (1)
- Vulkan-Überwachung (1)
- Vulnerabilität (1)
- W.E.B. Du Bois (1)
- WAXS (1)
- War for Talents (1)
- Wasser-in-Wasser (1)
- Wellen-Teilchen Wechselwirkungen (1)
- Wellengleichung (1)
- Weltraumklima (1)
- Weltraumwetter (1)
- Werkzeugbau (1)
- Werteprofil (1)
- Wettbewerb (1)
- Wettbewerbsverzerrung (1)
- Wirksamkeit von Entwicklungszusammenarbeit (1)
- Wirkstoff-Freisetzung (1)
- Wirkungsorientiertes Management (1)
- Wismut (1)
- Wissensteilung (1)
- Wuchiapingian (1)
- Wuchiapingium (1)
- Wärmetransport (1)
- Yamabe problem (1)
- Yamabe-Problem (1)
- YouTube (1)
- Z-E Isomerisierung (1)
- Zahnwale (1)
- Zebrafisch (1)
- Zeitaufgelöste Lumineszenz (1)
- Zell-zell Adhäsion (1)
- Zellform (1)
- Zentralanden (1)
- Zielausrichtung (1)
- Zielsetzungsstrategien (1)
- Zielumfang (1)
- Zufällige Stochastische Irrfahrt (1)
- Zufällige Umgebungen (1)
- Zwei-Prozess (1)
- Zweikernkomplexe (1)
- Zwerg Galaxien (1)
- actin cytoskeleton machine (1)
- action processing (1)
- active layer (1)
- address normalization (1)
- aesthetics (1)
- affect (1)
- agriculture (1)
- algorithmic image recognition (1)
- algorithmische Bilderkennung (1)
- anaphoric existence presupposition (1)
- anaphorische Existenzpräsupposition (1)
- anomalous diffusion (1)
- anti bacterial (1)
- antibody staining (1)
- antimicrobial peptide (1)
- antiviral agent (1)
- application conditions (1)
- archeomagnetism (1)
- arctic (1)
- array design (1)
- artificial intelligence (1)
- assessment (1)
- assessment of the diffusion (1)
- astroparticle physics (1)
- astrophysics (1)
- athletic performance (1)
- atmosphere (1)
- attribution (1)
- authigene Mineralbildung (1)
- authigenic mineral formation (1)
- automatic (1)
- azobenzene containing molecules (1)
- bayessche Inferenz (1)
- belowground herbivory (1)
- beobachtende Seismologie (1)
- bioavailability (1)
- biochemistry (1)
- biocompatibility (1)
- biocultures (1)
- biofilm (1)
- biofilms (1)
- biogeography (1)
- biohacking (1)
- biological membranes (1)
- biologische Membranen (1)
- biology (1)
- biomarker (1)
- biomass (1)
- biophysics (1)
- biopolitics (1)
- biotechnology (1)
- biotope (1)
- bismuth (1)
- blended learning (1)
- brittle deformation (1)
- brittle precursors (1)
- business negotiations (1)
- business relationship types (1)
- caldera-like topography (1)
- carbohydrates (1)
- carbon density (1)
- carbon nitride (1)
- cardiac rehabilitation (1)
- cardiac valves (1)
- celestial mechanics (1)
- cell morphogenesis (1)
- cell shape (1)
- cells epidermis (1)
- charge storage and transport (1)
- charge-transfer excitations (1)
- chemische Oberflächen-Modifikationen (1)
- chiral separation (1)
- chirale Trennung (1)
- circumferential dike (1)
- cis-trans Isomerisierung (1)
- climate (1)
- climate policy (1)
- climatology (1)
- close packing (1)
- coarse grained Molekulardynamiken (1)
- coarse grained molecular dynamics (1)
- coarse-graining (1)
- cognitive science (1)
- coiled coil (1)
- collaborative learning (1)
- collaborative work (1)
- colonialism (1)
- combat sport (1)
- cone sheet (1)
- conservation (1)
- constraint-based modeling (1)
- coordination (1)
- core-shell UCNP (1)
- cosmic-ray neutron sensing (1)
- counseling at schools (1)
- countermeasures (1)
- critique of judgement (1)
- cuerpo (1)
- cultural narratives (1)
- cultural studies (1)
- culture (1)
- cybersecurity (1)
- data assimilation (1)
- data cleaning (1)
- data matching (1)
- data preparation (1)
- decolonial (1)
- decolonial feminism (1)
- deep biosphere (1)
- deep convection (1)
- defect chemistry (1)
- definite Pseudospaltsätze (1)
- definite pseudoclefts (1)
- deformation mechanisms (1)
- dekolonial (1)
- demands (1)
- detecção de nêutrons de raios cósmicos (1)
- dichteste Packung (1)
- dielectrophoresis (1)
- dietary supplements (1)
- diffraction efficiency (1)
- diffusioosmotic flow (1)
- diffusioosmotischer Fluss (1)
- digital learning (1)
- digital media (1)
- digitale Medien (1)
- digitales Lernen (1)
- direct manipulation (1)
- distribution pattern (1)
- driving competence (1)
- drug release (1)
- drug trafficking (1)
- dual-process (1)
- duplicate detection (1)
- dynamic hyperpolarizability (1)
- dynamische Hyperpolarisierbarkeit (1)
- eLearning (1)
- early numeracy (1)
- earthquake (1)
- earthquake bulletin data (1)
- earthquake source array (1)
- eastern south–central Andes (1)
- ecological modelling (1)
- electret (1)
- electric and magnetic fields (1)
- electrical resistivity (1)
- electrical switches (1)
- electrochemistry (1)
- electronic structure (1)
- elektrische und magnetische Felder (1)
- elektronische Schalter (1)
- emotion (1)
- emotions (1)
- employee driven innovation (1)
- employee involvement in innovation (1)
- energy conversion (1)
- energy metabolism (1)
- enrichment methods (1)
- entity resolution (1)
- epistemology (1)
- equatorial plasma depletions (1)
- eutrophication (1)
- evolution (1)
- exercise (1)
- exhaustive inference (1)
- exopolysaccharide (1)
- experiment (1)
- experimental studies (1)
- experimentelle Studien (1)
- exploration (1)
- extreme rainfall (1)
- eye tracking (1)
- fMRI (1)
- fMRT (1)
- fault healing (1)
- fault zone architecture (1)
- faults (1)
- first passage (1)
- fluid rock interaction (1)
- fluorescence microscopy (1)
- fluorescent proteins (1)
- fluorous chemistry (1)
- focus (1)
- foreland deformation (1)
- formal verification (1)
- formale Verifikation (1)
- formate assimilation (1)
- formate dehydrogenases (1)
- freie Aktivierungsenthalpie (1)
- functionalization (1)
- galaxies (1)
- galaxy bars (1)
- galaxy bulges (1)
- galaxy evolution (1)
- galaxy structure (1)
- gamma rays (1)
- gamma-ray astronomy (1)
- gelatin analogue modeling (1)
- genocide (1)
- genomics (1)
- geodynamic modeling (1)
- geodynamics (1)
- geodynamische Modellierung (1)
- geomorphology (1)
- geophysics (1)
- gestreute Phasen (1)
- glacial and interglacial permafrost (1)
- global-hyperbolisch (1)
- globally hyperbolic (1)
- glucose oxidation (1)
- glycolipids (1)
- goal orientation (1)
- goal scope (1)
- goal setting strategies (1)
- graph constraints (1)
- graph transformation systems (1)
- graph transformations (1)
- graphene (1)
- grassland (1)
- green infrastructure (1)
- green investments (1)
- grobkörnig (1)
- groundwater recharge (1)
- großflächige Liganden (1)
- grüne Investitionen (1)
- habitat (1)
- halide perovskite (1)
- hate speech detection (1)
- heart development (1)
- heterodinuklear (1)
- heterogene Katalyse (1)
- heterogeneous catalysis (1)
- high light (1)
- high temperature rock deformation (1)
- histidine-metal coordination (1)
- human remains (1)
- hydraulische Bodeneigenschaften (1)
- hydrodynamics (1)
- hydrological modelling (1)
- hydrologische Modellierung (1)
- hydrology (1)
- hydrophobe und hydrophile Wechselwirkungen (1)
- hydrophobicity (1)
- hyperpolarizability (1)
- identity politics (1)
- imitation (1)
- immediacy (1)
- immigration (1)
- index (1)
- inducible expression (1)
- inductive invariant checking (1)
- infancy (1)
- innovative behaviour (1)
- inquiry based learning (1)
- institutional crisis (1)
- institutionelle Krise (1)
- integral field spectroscopy (1)
- interfacial recombination (1)
- international law (1)
- intersectionality (1)
- intraspecific trait variation (1)
- intraspezifische Merkmalsvariation (1)
- ion mobility spectrometry (1)
- ionic defects (1)
- ionosphere (1)
- iron (1)
- isotopes (1)
- judo-specific pulling movement (1)
- judospezifische Anrissbewegung (1)
- k-Induktion (1)
- k-induction (1)
- k-inductive invariant (1)
- k-induktive Invariante (1)
- knowledge sharing (1)
- knowledge sharing motivation (1)
- kollaboratives Arbeiten (1)
- kollaboratives Lernen (1)
- künstliche Intelligenz (1)
- lagoons (1)
- landscape (1)
- landscape ecology (1)
- landslides (1)
- language (1)
- large deviation theory (1)
- laser induced breakdown spectroscopy (1)
- laser-geheizte Diamantstempelzelle (1)
- laser-heated Diamond Anvil Cell (1)
- lattice dynamics (1)
- learning environment (1)
- learning networks plant (1)
- learning progression (1)
- life sciences (1)
- lipids (1)
- liveness (1)
- localized deformation (1)
- lokale Anpassung (1)
- lutein esters (1)
- luxury consumption (1)
- luxury goods (1)
- magnetic beads (1)
- magnetism (1)
- market-based view (1)
- martini (1)
- mass extinction (1)
- massive open online courses (1)
- matching dependencies (1)
- mathematical modeling (1)
- mathematics (1)
- mathematische Modellierung (1)
- mechanical and acoustical properties (1)
- mechanische und akustische Eigenschaften (1)
- mechanobiology (1)
- mechanosensation (1)
- media (1)
- media praxeology (1)
- media studies (1)
- media use (1)
- medicine (1)
- membrane (1)
- menschliche Überreste (1)
- metabolic engineering (1)
- metabolic networks (1)
- metabolism (1)
- metalorganic frameworks (1)
- meteorology (1)
- methanogenic archaea (1)
- methanogens (1)
- methanol assimilation (1)
- methanotrophs (1)
- microbicide (1)
- microbial activity (1)
- microbial community (1)
- microfluidics (1)
- microgel (1)
- microscopy (1)
- microstructures (1)
- microviscosity (1)
- migration (1)
- mikrobielle Aktivität (1)
- mikrobielle Gemeinschaft (1)
- mineral composition (1)
- modelling (1)
- modulation (1)
- mold fungi (1)
- molecular dynamics (1)
- molecular farming (1)
- molekularer Abstand (1)
- morphological impairments (1)
- morphologische Störungen (1)
- morphology (1)
- motivation (1)
- mujeres (1)
- multi-fractional diffusion (1)
- multi-fraktionelle Diffusion (1)
- multi-issue negotiations (1)
- muscle activity (1)
- nano-electrodes (1)
- nanoscale heat transfer (1)
- nanoskaliger Wärmetransport (1)
- narcocultura (1)
- narcoculture (1)
- narcotráfico (1)
- natural language processing (1)
- needs analysis judo (1)
- neutron diffraction (1)
- neutron reflectometry (1)
- next generation sequencing (1)
- nicht-lineare Optik (1)
- nichtadiabatische Kopplung (1)
- nichtadibatische Dynamik (1)
- nichtlineare Optik (1)
- nichtstrahlende Verluste (1)
- non-adiabatic coupling (1)
- non-adiabatic dynamic (1)
- non-gaussianity (1)
- non-linear optics (1)
- nonradiative losses (1)
- nordeste do Brasil (1)
- northeast of Brazil (1)
- numerical modeling (1)
- numerische Basisfähigkeiten (1)
- numerische Modellierung (1)
- observational seismology (1)
- off-specular scattering (1)
- online learning (1)
- optimal array configuration (1)
- optimale Array-Konfiguration (1)
- ordering of particles on the surface (1)
- organic carbon cycle (1)
- organic chemistry (1)
- oxidative Proteinmodifikationen (1)
- oxidative protein modifications (1)
- paleomagnetism (1)
- parallel immobilization of biomolecules (1)
- parallele Immobilisierung von Biomolekülen (1)
- particle acceleration (1)
- particle verbs (1)
- pavement cells image analysis (1)
- peer assessment (1)
- peptide (1)
- performance feedback (1)
- permafrost (1)
- permafrost degradation (1)
- perovskite (1)
- perovskite solar cells (1)
- philosophy of biology (1)
- photo-chemical pathways (1)
- photo-structuring of polymer films (1)
- photocatalysis (1)
- photography research (1)
- photoredox catalysis (1)
- photosensitive Polymer (1)
- phreatic eruption (1)
- phreatische Eruption (1)
- physics education (1)
- planets and satellites: rings (1)
- planning strategies (1)
- plant (1)
- plants (1)
- plasmidome (1)
- plastid transformation (1)
- polyelectrolyte multilayers (1)
- polymers (1)
- polypeptide (1)
- polypropylene (1)
- polystyrene nano-spheres (1)
- population genetics (1)
- poröse Kohlenstoffmaterialien (1)
- poröse Siliciumdioxidpartikel (1)
- post-translational modifications (1)
- postcolonial (1)
- postkolonial (1)
- posttranslationale Modifikationen (1)
- power spectral analysis (1)
- practical driving (1)
- praktische Fahrerlaubnisprüfung (1)
- preactivation (1)
- predicate interpretation (distributive vs. non-distributive) (1)
- prediction (1)
- primary school (1)
- programming (1)
- propriedades hidráulicas do solo (1)
- protein (1)
- protein fusion (1)
- public management (1)
- qualitative Inhaltsanalyse (1)
- quantum yield (1)
- radiation belts (1)
- random diffusivity (1)
- randomisierte kontrollierte Studie (RCT) (1)
- randomized controlled trial (1)
- raumbezogene Ökologie (1)
- recarga de águas subterrâneas (1)
- recombinant production (1)
- record linkage (1)
- redox (1)
- redox chemistry (1)
- reef (1)
- refugee crisis (1)
- relativization (1)
- reliability (1)
- repatriation (1)
- representation (1)
- resonance energy transfer (1)
- rheology (1)
- ribosome profiling (1)
- ring current electrons (1)
- rivers (1)
- räumlich und zeitlich kontrollierte Wirkstoff-Freisetzung (1)
- salt diffusion (1)
- scale development (1)
- scattered phases (1)
- scatterer location (1)
- scattering (1)
- scattering resonances (1)
- scripting languages (1)
- second harmonic generation (1)
- sediments (1)
- seismic activity (1)
- seismic array (1)
- seismic event localization (1)
- seismic source-time function estimation (1)
- seismicity modelling (1)
- seismische Aktivität (1)
- seismische Ereignislokalisierung (1)
- seismisches Array (1)
- selbstassemblierende Monolagen (1)
- selbstbestimmtes Lesen (1)
- selbstorganisierte Einzelschichten (1)
- self-assembled monolayer (1)
- self-assembled monolayers (1)
- self-defence (1)
- self-efficacy (1)
- self-paced reading (1)
- sensoriamento remoto (1)
- sentence processing (1)
- serialism (1)
- shear zones (1)
- signalling (1)
- simulation (1)
- single-cell (1)
- single-molecule biosensor (1)
- single-molecule force spectroscopy (1)
- single-object detection (1)
- skull (1)
- social cognition (1)
- social uses (1)
- social work (1)
- soil analysis (1)
- soil hydraulic properties (1)
- source array optimal design (1)
- soziale Gebrauchsweisen (1)
- spatial ecology (1)
- spatial planning (1)
- spatially and temporally controlled drug release (1)
- spektrale Leistungsdichte (1)
- spezifisches Krafttraining (1)
- spherical harmonics (1)
- spin texture (1)
- sport-specific resistance training (1)
- sportliche Leistung (1)
- spread F (1)
- spröde Deformation (1)
- starker Konvektion (1)
- stars (1)
- static hyperpolarizability (1)
- statische Hyperpolarisierbarkeit (1)
- statistical physics (1)
- stellar populations (1)
- stewartan (1)
- strain localization (1)
- stress-tolerance genes (1)
- structural geology (1)
- sub-diffraction gratings (1)
- subduction (1)
- subjectivities (1)
- subjetividades (1)
- submarine (1)
- subsea (1)
- supernova remnant (1)
- surface chemical treatment (1)
- surface topography (1)
- surfaces and interfaces (1)
- survey experiment (1)
- sustainability transformation (1)
- sustainable development (1)
- sustainable finance (1)
- swarm mission (1)
- synthetic array beam power (1)
- synthetische Array-Strahlleistung (1)
- teamwork (1)
- tectonics (1)
- television/TV (1)
- text classification (1)
- theoretische Chemie (1)
- thermokarst (1)
- thinking styles (1)
- tiefe Biosphäre (1)
- time-resolved luminescence (1)
- tissue growth (1)
- tool building (1)
- toothed whales (1)
- trajectory (1)
- trajectory surface hopping (1)
- trans-cis Isomerisierung (1)
- transcrystalline polypropylene (1)
- transgenic (1)
- transition metal complexes (1)
- transition state (1)
- transkristallines Polypropylen (1)
- translation (1)
- trunk muscle strength (1)
- ultrafast magnetism (1)
- ultrafast x-ray diffraction (1)
- ultraschnelle Röntgendiffraktion (1)
- ultraschneller Magnetismus (1)
- umlaufender Deich (1)
- uncertainties (1)
- unloading stress (1)
- unterirdische Pflanzenfresser (1)
- upconverting nanoparticles (1)
- urban (1)
- user interaction (1)
- variability (1)
- visual word recognition (1)
- visuelle Worterkennung (1)
- volcanic plumbing system (1)
- volcano monitoring (1)
- volcano seismology (1)
- vulkanische Seismologie (1)
- vulkanisches System (1)
- vulnerability (1)
- water-in-water (1)
- wave equation (1)
- wave-particle interactions (1)
- western Eger Rift (1)
- westlichen Eger-Graben (1)
- wetting (1)
- whole genome (1)
- women (1)
- work-related medical rehabilitation (1)
- zebrafish (1)
- zufälligen Diffusivität (1)
- Ärztemangel (1)
- Ästhetik (1)
- Ökonomie der Konventionen (1)
- Übergangszustand (1)
- äquatorialen Plasma-Verarmungen (1)
- öffentliche BWL (1)
- öffentliche Hand (1)
- ökologische Modellierung (1)
- östlich-südzentrale Anden (1)
- ג'רבה (1)
- הכהן, משה כלפון בן שלום דניאל יהודה, (1)
- הלכתית (1)
- התנתקות מרצועת עזה (1)
- מתנחלים (1)
- נוער הגבעות (1)
- ציונות דתית (1)
- תג מחיר (1)
- wahrgenommene Relevanz (1)
Institute
- Institut für Biochemie und Biologie (32)
- Institut für Physik und Astronomie (31)
- Institut für Geowissenschaften (24)
- Institut für Chemie (22)
- Öffentliches Recht (11)
- Hasso-Plattner-Institut für Digital Engineering GmbH (9)
- Institut für Anglistik und Amerikanistik (9)
- Institut für Ernährungswissenschaft (8)
- Institut für Umweltwissenschaften und Geographie (8)
- Department Psychologie (7)
Research on novel and advanced biomaterials is an indispensable step towards their application in fields such as tissue engineering, regenerative medicine, cell culture, and biotechnology. The work presented here focuses on one such promising material: polyelectrolyte multilayers (PEMs) composed of hyaluronic acid (HA) and poly(L-lysine) (PLL). This gel-like polymer surface coating is able to accumulate (bio-)molecules such as proteins or drugs and release them in a controlled manner. It mimics the extracellular matrix (ECM) in composition and intrinsic properties. These qualities make the HA/PLL multilayers a promising candidate for multiple bio-applications such as those mentioned above. The work presented aims at the development of a straightforward approach for assessing multi-fractional diffusion in multilayers (first part) and at the control of local molecular transport into or from the multilayers by a laser-light trigger (second part).
The mechanism of loading and release is governed by the interaction of bioactives with the multilayer constituents and, more generally, by diffusion. The diffusion of a molecule in HA/PLL multilayers shows multiple fractions with different diffusion rates. Approaches that can assess the mobility of molecules in such a complex system are limited. This shortcoming motivated the design of the novel evaluation tool presented here.
The tool employs a simulation-based approach for evaluating data acquired by the fluorescence recovery after photobleaching (FRAP) method. In this approach, possible fluorescence recovery scenarios are first simulated and then compared with the acquired data while the model parameters are optimized until a sufficient match is achieved. Fluorescent latex particles of different sizes and fluorescein in an aqueous medium are used as test samples to validate the analysis results. The diffusion of the protein cytochrome c in HA/PLL multilayers is evaluated as well.
This tool significantly broadens the possibilities for analyzing spatiotemporal FRAP data originating from multi-fractional diffusion while remaining widely applicable. It has the potential to elucidate the mechanisms of molecular transport and to empower the rational engineering of drug release systems.
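The simulate-then-compare idea behind the FRAP tool can be illustrated with a minimal, purely illustrative sketch (not the actual tool): a two-fraction recovery model F(t) = Σᵢ Aᵢ(1 − e^(−t/τᵢ)) is simulated over a coarse parameter grid and the candidate with the smallest squared error against the measured curve is kept. All function names and grid values are hypothetical.

```python
import math

def recovery(t, fractions):
    """Normalized FRAP recovery curve: sum of exponential fractions,
    each with amplitude a and characteristic recovery time tau."""
    return sum(a * (1.0 - math.exp(-t / tau)) for a, tau in fractions)

def fit_two_fractions(times, data, taus=(0.5, 1, 2, 5, 10, 20, 50)):
    """Simulate candidate two-fraction recovery curves over a coarse
    parameter grid and keep the one with the smallest squared error."""
    best, best_err = None, float("inf")
    for tau_fast in taus:
        for tau_slow in taus:
            if tau_slow <= tau_fast:
                continue
            for a in [i / 10 for i in range(11)]:  # share of the fast fraction
                model = [(a, tau_fast), (1.0 - a, tau_slow)]
                err = sum((recovery(t, model) - y) ** 2
                          for t, y in zip(times, data))
                if err < best_err:
                    best, best_err = model, err
    return best, best_err

# synthetic "measurement": 70 % fast (tau = 2 s) and 30 % slow (tau = 20 s)
times = [0.5 * i for i in range(120)]
data = [recovery(t, [(0.7, 2), (0.3, 20)]) for t in times]
params, err = fit_two_fractions(times, data)  # recovers the generating model
```

In practice a gradient-based least-squares optimizer and a spatially resolved diffusion model replace the grid search, but the loop structure — simulate, compare, refine — is the same.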
The second part of the work focuses on the fabrication of such a spatiotemporally controlled drug release system employing the HA/PLL multilayer. The release system comprises layers of different functionalities that together form a sandwich structure. The bottom layer, which serves as a reservoir, is an HA/PLL PEM deposited on a planar glass substrate. On top of the PEM, a layer of so-called hybrids is deposited. The hybrids consist of thermoresponsive poly(N-isopropylacrylamide) (PNIPAM)-based hydrogel microparticles with surface-attached gold nanorods. The layer of hybrids is intended to serve as a gate that controls local molecular transport through the PEM–solution interface. The possibility of stimulating this transport by near-infrared (NIR) laser irradiation is explored.
Of the several approaches tested for depositing hybrids onto the PEM surface, a drying-based approach was identified as optimal. Experiments examining the functionality of the fabricated sandwich at elevated temperature document the reversible volume phase transition of the PEM-attached hybrids while the sandwich remains stable. Further, the gold nanorods were shown to effectively absorb light in the tissue- and cell-friendly NIR spectral region and to transduce the energy of light into heat. A rapid and reversible shrinkage of the PEM-attached hybrids was thereby achieved. Finally, dextran was employed as a model transport molecule. It loads into the PEM reservoir within a few seconds with a partition constant of 2.4, whereas it is released spontaneously in a slower, sustained manner. Local laser irradiation of a sandwich containing fluorescein isothiocyanate-tagged dextran leads to a gradual reduction of fluorescence intensity in the irradiated region.
The fabricated release system employs the well-known photoresponsivity of the hybrids in an innovative setting. The results of this research are a step towards a spatially controlled, on-demand drug release system and pave the way to spatiotemporally controlled drug release.
The approaches developed in this work have the potential to elucidate the molecular dynamics in ECM and to foster engineering of multilayers with properties tuned to mimic the ECM. The work aims at spatiotemporal control over the diffusion of bioactives and their presentation to the cells.
Lava domes are severely hazardous, mound-shaped extrusions of highly viscous lava that commonly form at active stratovolcanoes around the world. Due to gradual growth and flank oversteepening, lava domes regularly experience partial or full collapses, resulting in destructive and far-reaching pyroclastic density currents. They are also associated with cyclic explosive activity, as the complex interplay of cooling, degassing and solidification of dome lavas regularly causes gas pressurization within the dome or the underlying volcanic conduit. Lava dome extrusions can last from days to decades, further highlighting the need for accurate and reliable monitoring data.
This thesis aims to improve our understanding of lava dome processes and to contribute to the monitoring and prediction of the hazards posed by these domes. The recent rise and sophistication of photogrammetric techniques allows for the extraction of observational data in unprecedented detail and creates ideal tools for accomplishing this purpose. Here, I study natural lava dome extrusions as well as laboratory-based analogue models of lava dome extrusion, employing photogrammetric monitoring by Structure-from-Motion (SfM) and Particle Image Velocimetry (PIV) techniques. I primarily use aerial photography data obtained by helicopter, airplane, Unoccupied Aircraft Systems (UAS) or ground-based time-lapse cameras. Firstly, by combining a long time series of overflight data at Volcán de Colima, México, with seismic and satellite radar data, I construct a detailed timeline of lava dome and crater evolution. Using a numerical model, the impact of the extrusion on dome morphology and loading stress is evaluated, and an influence on the growth direction is identified, with important implications for the location of collapse hazards. Secondly, sequential overflight surveys at the Santiaguito lava dome, Guatemala, reveal surface motion in high detail. I quantify the growth of the lava dome and the movement of a lava flow, showing complex motions that occur on different timescales, and I derive rock properties relevant for hazard assessment purely from photogrammetric processing of remote sensing data. Lastly, I recreate artificial lava dome and spine growth using analogue modelling under controlled conditions, providing new insights into lava extrusion processes and structures as well as the conditions under which they form.
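The PIV step rests on a simple principle: an interrogation window from the first frame is compared against shifted positions in the second frame, and the shift that maximizes the cross-correlation gives the local surface displacement. A minimal integer-pixel sketch of that principle (illustrative only; real PIV pipelines add normalization, sub-pixel peak fitting and outlier filtering, and the function name here is hypothetical):

```python
import random

def best_shift(frame_a, frame_b, win, max_shift):
    """Integer-pixel PIV step: correlate an interrogation window of
    frame_a against shifted windows of frame_b and return the
    (dx, dy) shift with the highest cross-correlation score."""
    r0, c0, n = win                      # window origin (row, col) and size
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for r in range(n):
                for c in range(n):
                    score += (frame_a[r0 + r][c0 + c]
                              * frame_b[r0 + r + dy][c0 + c + dx])
            if score > best_score:
                best, best_score = (dx, dy), score
    return best

# synthetic speckle pattern; second frame shifted by 2 px right, 1 px down
random.seed(3)
N = 40
frame_a = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(N)]
frame_b = [[frame_a[(r - 1) % N][(c - 2) % N] for c in range(N)]
           for r in range(N)]
shift = best_shift(frame_a, frame_b, win=(10, 10, 12), max_shift=4)
```

Applied over a grid of windows, this yields the dense surface-motion fields used to quantify dome growth and lava-flow movement.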
These findings demonstrate the capabilities of photogrammetric data analyses to successfully monitor lava dome growth and evolution while highlighting the advantages of complementary modelling methods to explain the observed phenomena. The results presented herein further bear important new insights and implications for the hazards posed by lava domes.
The plasmasphere is a dynamic region of cold, dense plasma surrounding the Earth. Its shape and size are highly susceptible to variations in solar and geomagnetic conditions. Having an accurate model of plasma density in the plasmasphere is important for GNSS navigation and for predicting hazardous effects of radiation in space on spacecraft. The distribution of cold plasma and its dynamic dependence on solar wind and geomagnetic conditions remain, however, poorly quantified. Existing empirical models of plasma density tend to be oversimplified as they are based on statistical averages over static parameters. Understanding the global dynamics of the plasmasphere using observations from space remains a challenge, as existing density measurements are sparse and limited to locations where satellites can provide in-situ observations. In this dissertation, we demonstrate how such sparse electron density measurements can be used to reconstruct the global electron density distribution in the plasmasphere and capture its dynamic dependence on solar wind and geomagnetic conditions.
First, we develop an automated algorithm to determine the electron density from in-situ measurements of the electric field on the Van Allen Probes spacecraft. In particular, we design a neural network to infer the upper hybrid resonance frequency from the dynamic spectrograms obtained with the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrumentation suite, which is then used to calculate the electron number density. The developed Neural-network-based Upper hybrid Resonance Determination (NURD) algorithm is applied to more than four years of EMFISIS measurements to produce the publicly available electron density data set.
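The final step of the NURD pipeline, converting the inferred upper hybrid frequency to an electron density, follows from the standard cold-plasma relation f_uh² = f_pe² + f_ce². A sketch of that conversion (constants are CODATA values; the function name is illustrative, not the NURD API):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
E_MASS = 9.1093837015e-31    # electron mass, kg
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def electron_density_cm3(f_uh_hz, b_nt):
    """Electron density from the upper hybrid resonance frequency:
    f_uh^2 = f_pe^2 + f_ce^2, with the gyrofrequency
    f_ce = e*B/(2*pi*m_e) and n_e = f_pe^2*(2*pi)^2*eps0*m_e/e^2."""
    f_ce = E_CHARGE * (b_nt * 1e-9) / (2 * math.pi * E_MASS)  # Hz
    f_pe_sq = f_uh_hz ** 2 - f_ce ** 2                        # plasma freq squared
    if f_pe_sq <= 0:
        raise ValueError("f_uh must exceed the electron gyrofrequency")
    n_m3 = f_pe_sq * (2 * math.pi) ** 2 * EPS0 * E_MASS / E_CHARGE ** 2
    return n_m3 * 1e-6   # convert m^-3 to cm^-3

n_e = electron_density_cm3(89.8e3, 150.0)   # about 100 cm^-3
```

The hard part solved by the neural network is locating f_uh in noisy spectrograms; once it is known, the density follows directly from this relation and the measured magnetic field.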
We utilize the obtained electron density data set to develop a new global model of plasma density using a neural network-based modeling approach. The model takes the location and the time history of geomagnetic indices as inputs and produces the electron density in the equatorial plane as output. It is extensively validated with in-situ density measurements from the Van Allen Probes mission, and also by comparing the predicted global evolution of the plasmasphere with global IMAGE EUV images of the He+ distribution. The model successfully reproduces erosion of the plasmasphere on the night side as well as plume formation and evolution, and agrees well with the data.
The performance of neural networks strongly depends on the availability of training data, which is limited during intervals of high geomagnetic activity. In order to provide reliable density predictions during such intervals, we can employ physics-based modeling. We develop a new approach for optimally combining the neural network- and physics-based models of the plasmasphere by means of data assimilation. The developed approach utilizes advantages of both neural network- and physics-based modeling and produces reliable global plasma density reconstructions for quiet, disturbed, and extreme geomagnetic conditions.
Finally, we extend the developed machine learning-based tools and apply them to another important problem in the field of space weather: the prediction of the geomagnetic index Kp. The Kp index is one of the most widely used indicators for space weather alerts and serves as input to various models, such as those of the thermosphere, the radiation belts and the plasmasphere. It is therefore crucial to predict the Kp index accurately. Most previous work in this area has employed artificial neural networks to nowcast and make short-term predictions of Kp, basing the inferences on the recent history of Kp and solar wind measurements at L1. We analyze how the performance of neural networks compares to other machine learning algorithms for nowcasting and forecasting Kp up to 12 hours ahead. Additionally, we investigate several machine learning and information theory methods for selecting the optimal inputs to a predictive model of Kp. The developed tools for feature selection can also be applied to other problems in space physics in order to reduce the input dimensionality and identify the most important drivers.
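A simple instance of such information-theoretic input selection is ranking lagged copies of a solar-wind driver by their mutual information with the target index. The histogram-based estimator below is a generic sketch of the idea, not the specific methods used in the dissertation; all names are illustrative:

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys, bins=4):
    """Histogram-based mutual information (in nats) of two sequences."""
    def digitize(v, lo, hi):
        return min(bins - 1, int((v - lo) / (hi - lo + 1e-12) * bins))
    bx = [digitize(x, min(xs), max(xs)) for x in xs]
    by = [digitize(y, min(ys), max(ys)) for y in ys]
    n = len(xs)
    pxy, px, py = Counter(zip(bx, by)), Counter(bx), Counter(by)
    return sum(c / n * math.log((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

def rank_lags(driver, target, max_lag=6):
    """Rank lagged copies of a driver time series by their mutual
    information with the target, most informative lag first."""
    scores = {lag: mutual_information(driver[:-lag], target[lag:])
              for lag in range(1, max_lag + 1)}
    return sorted(scores, key=scores.get, reverse=True)

# synthetic check: the target is the driver delayed by exactly 3 steps
random.seed(0)
driver = [random.random() for _ in range(2000)]
target = [0.0] * 3 + driver[:-3]
best_lag = rank_lags(driver, target)[0]
```

Unlike linear correlation, mutual information also captures nonlinear driver–response couplings, which is why it is attractive for geomagnetic drivers.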
Research outlined in this dissertation clearly demonstrates that machine learning tools can be used to develop empirical models from sparse data and also can be used to understand the underlying physical processes. Combining machine learning, physics-based modeling and data assimilation allows us to develop novel methods benefiting from these different approaches.
Completely water-based systems are of interest for the development of novel materials for various reasons: on the one hand, they provide a benign environment for biological systems; on the other hand, they facilitate effective molecular transport in a membrane-free environment. In order to investigate the general potential of aqueous two-phase systems (ATPSs) for biomaterials and compartmentalized systems, various solid particles were applied to stabilize all-aqueous emulsion droplets. The target ATPS was prepared by mixing two aqueous solutions of water-soluble polymers, which turn biphasic when a critical polymer concentration is exceeded. Hydrophilic polymers with a wide range of molar masses, such as dextran and poly(ethylene glycol) (PEG), can therefore be applied. Solid particles adsorbed at the interface can be exceptionally efficient stabilizers, forming so-called Pickering emulsions, and nanoparticles can bridge the correlation length of the polymer solutions, making them the best option for water-in-water emulsions.
The first approach towards the investigation of ATPSs was conducted with all-aqueous dextran–PEG emulsions in the presence of poly(dopamine) particles (PDP) in Chapter 4. The water-in-water emulsions were formed in a PEG/dextran system utilizing PDP as stabilizers. The formed emulsions were studied by confocal laser scanning microscopy (CLSM), optical microscopy (OM), cryo-scanning electron microscopy (cryo-SEM) and tensiometry. The stable emulsions (at least 16 weeks) were easily demulsified by dilution or surfactant addition. Furthermore, the solid PDP at the water–water interface were crosslinked in order to inhibit demulsification of the Pickering emulsion. Transmission electron microscopy (TEM) and scanning electron microscopy (SEM) were used to visualize the morphology of the PDP before and after crosslinking. PDP-stabilized water-in-water emulsions were utilized in the following Chapter 5 to form supramolecular compartmentalized hydrogels. Here, hydrogels were prepared in pre-formed water-in-water emulsions and gelled via α-cyclodextrin–PEG (α-CD-PEG) inclusion complex formation. The formed complexes were studied by X-ray powder diffraction (XRD), and the mechanical properties of the hydrogels were measured with oscillatory shear rheology. In order to verify the compartmentalized state and its triggered decomposition, hydrogels and emulsions were assessed via OM, SEM and CLSM. The last chapter broadens the investigations of the previous two systems by utilizing various carbon nitrides (CN) as alternative stabilizers in ATPSs. CN introduces another way to trigger demulsification, namely irradiation with visible light. Therefore, emulsification and demulsification with various triggers were probed. The investigated all-aqueous multi-phase systems will act as models for the future fabrication of biocompatible materials, cell micropatterning, and the separation of compartmentalized systems.
Seismological and seismotectonic analysis of the northwestern Argentine Central Andean foreland
(2020)
After a severe Mw 5.7 earthquake on October 17, 2015 near El Galpón in the province of Salta, NW Argentina, I installed a local seismological network around the estimated epicenter. The network covered an area characterized by inherited Cretaceous normal faults and by neotectonic faults with unknown recurrence intervals, some of which may be reactivated normal faults. The 13 three-component seismic stations recorded data continuously for 15 months.
The 2015 earthquake took place in the Santa Bárbara System of the Andean foreland at about 17 km depth. This region is the easternmost morphostructural province of the central Andes. As part of the broken foreland, it is bounded by the Subandean fold-and-thrust belt to the north and the Sierras Pampeanas to the south; to the east lies the Chaco-Paraná basin.
A multi-stage morphotectonic evolution with thick-skinned basement uplift and coeval thin-skinned deformation in the intermontane basins has been suggested for the study area. The release of stresses associated with foreland deformation can result in strong earthquakes, and the study area is known for recurrent, historically destructive earthquakes. The historical record reaches back to 1692, when the strongest known event (magnitude 7, or intensity IX) destroyed the city of Esteco. Destructive earthquakes and surface deformation are thus a hallmark of this part of the Andean foreland.
Using state-of-the-art Python packages (e.g. Pyrocko, ObsPy), a semi-automatic approach is followed to analyze the continuous data collected by the seismological network. The resulting 1435 hypocenter locations fall into three groups: 1) local crustal earthquakes (nearly half of the events), 2) events at regional distances within the slab of the subducting Nazca plate, and 3) very deep earthquakes at about 600 km depth. My main interest focused on the first class. These crustal events are partly aftershocks of the El Galpón earthquake and of a second earthquake on the southern part of the same fault; further events can be considered background seismicity of other faults within the study area. Strikingly, the seismogenic zone encompasses the whole crust, with brittle deformation extending down close to the Moho.
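The detection stage of such a semi-automatic workflow is commonly based on an STA/LTA trigger, of which ObsPy ships several variants. A pure-Python sketch of the classic ratio and a simple on/off trigger logic (thresholds and window lengths here are arbitrary illustration values, not those used in the thesis):

```python
import random

def sta_lta(signal, nsta, nlta):
    """Classic STA/LTA: ratio of the short-term to the long-term mean
    of the squared amplitude; impulsive onsets push the ratio up."""
    energy = [s * s for s in signal]
    ratio = [0.0] * len(signal)
    for i in range(nlta, len(signal)):
        sta = sum(energy[i - nsta:i]) / nsta
        lta = sum(energy[i - nlta:i]) / nlta
        ratio[i] = sta / lta if lta > 0 else 0.0
    return ratio

def trigger_onsets(ratio, on=4.0, off=1.5):
    """Declare an event when the ratio exceeds 'on'; keep it active
    until the ratio falls back below 'off'."""
    onsets, active, start = [], False, 0
    for i, r in enumerate(ratio):
        if not active and r >= on:
            active, start = True, i
        elif active and r <= off:
            active = False
            onsets.append((start, i))
    if active:
        onsets.append((start, len(ratio) - 1))
    return onsets

# synthetic trace: Gaussian noise with one impulsive "event" at sample 1000
random.seed(1)
trace = [random.gauss(0.0, 1.0) for _ in range(2000)]
for i in range(1000, 1060):
    trace[i] += 10.0
events = trigger_onsets(sta_lta(trace, nsta=20, nlta=200))
```

Detections from many stations are then associated and located to build a hypocenter catalogue such as the one described above.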
From the collected seismological data, a local seismic velocity model is estimated using VELEST. After various stability tests, the robust minimum 1D velocity model provides guiding values for the composition of the local subsurface structure of the crust. A subsequent hypocenter relocation enables the assignment of individual earthquakes to aftershock clusters or to extended seismotectonic structures, allowing the mapping of previously unknown seismogenic faults.
Finally, focal mechanisms are modeled for events with accurately located hypocenters, using the newly derived local velocity model. A compressive regime is attested by the majority of focal mechanisms, while the strike directions of the individual seismogenic structures agree with the overall north–south orientation of the Central Andes, its mountain front, and individual mountain ranges in the southern Santa Bárbara System.
The Milky Way is a spiral galaxy consisting of a disc of gas, dust and stars embedded in a halo of dark matter. Within this dark matter halo there is also a diffuse population of stars called the stellar halo, that has been accreting stars for billions of years from smaller galaxies that get pulled in and disrupted by the large gravitational potential of the Milky Way. As they are disrupted, these galaxies leave behind long streams of stars that can take billions of years to mix with the rest of the stars in the halo. Furthermore, the amount of heavy elements (metallicity) of the stars in these galaxies reflects the rate of chemical enrichment that occurred in them, since the Universe has been slowly enriched in heavy elements (e.g. iron) through successive generations of stars which produce them in their cores and supernovae explosions. Therefore, stars that contain small amounts of heavy elements (metal-poor stars) either formed at early times before the Universe was significantly enriched, or in isolated environments. The aim of this thesis is to develop a better understanding of the substructure content and chemistry of the Galactic stellar halo, in order to gain further insight into the formation and evolution of the Milky Way.
The Pristine survey uses a narrow-band filter which specifically targets the Ca II H & K spectral absorption lines to provide photometric metallicities for a large number of stars down to the extremely metal-poor (EMP) regime, making it a very powerful data set for Galactic archaeology studies. In Chapter 2, we quantify the efficiency of the survey using a preliminary spectroscopic follow-up sample of ~ 200 stars. We also use this sample to establish a set of selection criteria to improve the success rate of selecting EMP candidates for follow-up spectroscopy. In Chapter 3, we extend this work and present the full catalogue of ~ 1000 stars from a three year long medium resolution spectroscopic follow-up effort conducted as part of the Pristine survey. From this sample, we compute success rates of 56% and 23% for recovering stars with [Fe/H] < -2.5 and [Fe/H] < -3.0, respectively. This demonstrates a high efficiency for finding EMP stars as compared to previous searches with success rates of 3-4%.
In Chapter 4, we select a sample of ~ 80000 halo stars using colour and magnitude cuts targeting a main sequence turnoff population in the distance range 6 < d⊙ < 20 kpc. We then use the spectroscopic follow-up sample presented in Chapter 3 to statistically rescale the Pristine photometric metallicities of this sample, and present the resulting corrected metallicity distribution function (MDF) of the halo. The slope at the metal-poor end is significantly shallower than previous spectroscopic efforts have shown, suggesting that there may be more metal-poor stars with [Fe/H] < -2.5 in the halo than previously thought. This sample also shows evidence that the MDF of the halo may not be bimodal as was proposed by previous works, and that the lack of globular clusters in the Milky Way may be the result of a physical truncation of the MDF rather than just statistical under-sampling.
Chapter 5 showcases the unexpected capability of the Pristine filter for separating blue horizontal branch (BHB) stars from Blue Straggler (BS) stars. We demonstrate a purity of 93% and completeness of 91% for identifying BHB stars, a substantial improvement over previous works. We then use this highly pure and complete sample of BHB stars to trace the halo density profile out to d > 100 kpc, and the Sagittarius stream substructure out to ~ 130 kpc.
In Chapter 6 we use the photometric metallicities from the Pristine survey to perform a clustering analysis of the halo as a function of metallicity. Separating the Pristine sample into four metallicity bins of [Fe/H] < -2, -2 < [Fe/H] < -1.5, -1.5 < [Fe/H] < -1 and -0.9 < [Fe/H] < -0.8, we compute the two-point correlation function to measure the amount of clustering on scales of < 5 deg. For a smooth comparison sample we make a mock Pristine data set generated using the Galaxia code based on the Besançon model of the Galaxy. We find enhanced clustering on small scales (< 0.5 deg) for some regions of the Galaxy for the most metal-poor bin ([Fe/H] < -2), while in others we see large scale signals that correspond to known substructures in those directions. This confirms that the substructure content of the halo is highly anisotropic and diverse in different Galactic environments. We discuss the difficulties of removing systematic clustering signals from the data and the limitations of disentangling weak clustering signals from real substructures and residual systematic structure in the data.
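The two-point correlation function compares pair counts in the data with those in a random (unclustered) catalogue of the same geometry. A minimal sketch of the natural estimator w(θ) = DD/RR − 1 on a small flat-sky patch; estimators commonly used in practice (e.g. Landy–Szalay) also include data–random cross pairs, and everything below is illustrative:

```python
import math
import random

def pair_counts(points, bins):
    """Count unordered point pairs per separation bin (flat-sky degrees)."""
    counts = [0] * (len(bins) - 1)
    for i, (x1, y1) in enumerate(points):
        for x2, y2 in points[i + 1:]:
            d = math.hypot(x1 - x2, y1 - y2)
            for k in range(len(bins) - 1):
                if bins[k] <= d < bins[k + 1]:
                    counts[k] += 1
                    break
    return counts

def w_theta(data, randoms, bins):
    """Natural estimator w = (DD/RR) * (nR*(nR-1))/(nD*(nD-1)) - 1."""
    dd, rr = pair_counts(data, bins), pair_counts(randoms, bins)
    nd, nr = len(data), len(randoms)
    norm = (nr * (nr - 1)) / (nd * (nd - 1))
    return [dd[k] / rr[k] * norm - 1 if rr[k] else 0.0
            for k in range(len(bins) - 1)]

# clustered mock: each parent point gets a close companion within ~0.07 deg
random.seed(2)
randoms = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(800)]
parents = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(200)]
data = parents + [(x + random.uniform(-0.05, 0.05),
                   y + random.uniform(-0.05, 0.05)) for x, y in parents]
w = w_theta(data, randoms, bins=[0.0, 0.1, 0.5, 2.0])  # excess in bin 0
```

A positive w(θ) in the smallest bin signals clustering above random on that scale, which is exactly the small-scale substructure signal probed in the most metal-poor bin.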
Taken together, the work presented in this thesis approaches the problem of better understanding the halo of our Galaxy from multiple angles. First, it presents a sizeable sample of EMP stars and improves the selection efficiency of EMP stars for the Pristine survey, paving the way for the further discovery of metal-poor stars to be used as probes of early chemical evolution. Second, it improves the selection of BHB distance tracers to map out the halo to large distances. Finally, it uses the large samples of metal-poor stars to derive the MDF of the inner halo and to analyse the substructure content at different metallicities. The results of this thesis therefore expand our understanding of the physical and chemical properties of the Milky Way stellar halo, and provide insight into the processes involved in its formation and evolution.
Water quality in river systems is of growing concern due to rising anthropogenic pressures and climate change. Over recent decades, mitigation efforts have been guided by various governance frameworks (e.g., the Water Framework Directive in Europe). Despite significant improvement through relatively straightforward measures, the environmental status has likely reached a plateau. Higher spatiotemporal accuracy in catchment nitrate modeling is therefore needed to identify critical source areas of diffuse nutrient pollution (especially nitrate) and to guide the implementation of spatially differentiated, cost-effective mitigation measures. At the same time, emerging high-frequency sensor monitoring brings the monitoring resolution down to the time scales of biogeochemical processes and enables more flexible monitoring deployments under varying conditions. The newly available information offers new prospects for understanding spatiotemporal nitrate dynamics. Incorporating such advanced process understanding into catchment models is critical for further model development and for environmental status evaluation. This dissertation targets a comprehensive analysis of catchment and in-stream nitrate dynamics and aims to derive new insights into their spatial and temporal variability through the development of a new fully distributed model and the use of new high-frequency data.
First, a new fully distributed, process-based catchment nitrate model (the mHM-Nitrate model) is developed on the mesoscale Hydrological Model (mHM) platform. Nitrate process descriptions are adopted from the Hydrological Predictions for the Environment (HYPE) model, with considerably improved implementations. With its multiscale grid-based discretization, mHM-Nitrate balances spatial representation and modeling complexity. The model has been thoroughly evaluated in the Selke catchment (456 km²), central Germany, which is characterized by heterogeneous physiographic conditions. Results show that the model captures the long-term discharge and nitrate dynamics well at three nested gauging stations. Using daily nitrate-N observations, the model is also validated with respect to capturing short-term fluctuations due to changes in runoff partitioning and spatial contributions during flooding events. Comparison of model simulations with values reported in the literature shows that the model provides detailed and reliable spatial information on nitrate concentrations and fluxes. The model can therefore serve as a promising tool for environmental scientists advancing environmental modeling research, as well as for stakeholders in supporting their decision-making, especially regarding spatially differentiated mitigation measures.
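How well a model "captures" observed discharge and nitrate dynamics is conventionally summarized with an efficiency metric. A minimal sketch of the widely used Nash-Sutcliffe efficiency is shown below as an illustration; whether the thesis uses this exact criterion, or additional ones, is an assumption here:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect fit; 0 = no better than
    predicting the observed mean; negative = worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))  # residual variance
    den = sum((o - mean_obs) ** 2 for o in obs)        # observed variance
    return 1 - num / den
```

A perfect simulation scores 1.0, while a constant simulation at the observed mean scores exactly 0.0, which makes the metric easy to interpret across stations.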
Second, a parsimonious approach for regionalizing in-stream autotrophic nitrate uptake is proposed using high-frequency data and integrated into the new mHM-Nitrate model. The regionalization approach considers the potential uptake rate (as a general parameter) and the effects of above-canopy light and riparian shading (represented by global radiation and leaf area index data, respectively). Multi-parameter sensors were continuously deployed in a forested upstream reach and an agricultural downstream reach of the Selke River. Using the continuous high-frequency data from both streams, daily autotrophic uptake rates (2011-2015) are calculated and used to validate the regionalization approach. The performance and spatial transferability of the approach are validated by its ability to capture the distinct seasonal patterns and value ranges in both the forest and agricultural streams. Integrating the approach into the mHM-Nitrate model allows the spatiotemporal variability of in-stream nitrate transport and uptake to be investigated throughout the river network.
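The structure of such a regionalization can be sketched as a potential rate modulated by light availability and canopy shading. The functional forms and constants below are illustrative assumptions (a saturating light response and Beer-Lambert canopy attenuation), not the equations or parameter values of the thesis:

```python
import math

def autotrophic_uptake(r_pot, global_radiation, lai, k_ext=0.5, rad_ref=200.0):
    """Hypothetical regionalization of daily autotrophic nitrate uptake:
    r_pot          -- potential uptake rate (the general parameter)
    global_radiation -- above-canopy light proxy (W/m^2)
    lai            -- riparian leaf area index (shading proxy)
    k_ext, rad_ref -- illustrative constants, not thesis values."""
    light = global_radiation / (global_radiation + rad_ref)  # saturating response
    shading = math.exp(-k_ext * lai)  # fraction of light passing the canopy
    return r_pot * light * shading
```

The sketch reproduces the qualitative behaviour described above: uptake rises with radiation and falls with riparian shading, so a forested reach with high LAI shows lower uptake than an open agricultural reach under the same light.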
Third, to further assess the spatial variability of catchment nitrate dynamics, the fully distributed parameterization is for the first time investigated through sensitivity analysis. The results show that the parameters of soil denitrification, in-stream denitrification and in-stream uptake are the most sensitive throughout the Selke catchment, while all exhibit high spatial variability, with hot-spots of parameter sensitivity that can be explicitly identified. Spearman rank correlations between sensitivity indices and multiple catchment factors are further analyzed. The correlations show that the controlling factors vary spatially, reflecting heterogeneous catchment responses in the Selke catchment. These insights are therefore valuable for designing future parameter regionalization schemes for catchment water quality models. In addition, the spatial distributions of parameter sensitivity are also influenced by the gauging information used for the sensitivity evaluation; an appropriate monitoring scheme is therefore highly recommended to truly reflect the catchment responses.
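The Spearman rank correlation used here is simply Pearson correlation applied to ranks, which makes it robust to nonlinear but monotonic relationships between sensitivity indices and catchment factors. A minimal self-contained sketch (average ranks for ties, as in standard implementations):

```python
import math

def ranks(values):
    """Ranks starting at 1, with tied values assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)
```

Any monotonically increasing relationship scores +1 and any monotonically decreasing one scores -1, regardless of its shape.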
Perovskite solar cells have become one of the most studied systems in the quest for new, cheap and efficient solar cell materials. Within a decade, device efficiencies have risen to >25% in single-junction devices and >29% in tandem devices on top of silicon. This rapid improvement was in many ways fortunate, as, e.g., the energy levels of commonly used halide perovskites are compatible with existing materials from other photovoltaic technologies such as dye-sensitized or organic solar cells. Despite this rapid success, the fundamental working principles must be understood to allow concerted further improvements. This thesis focuses on a comprehensive understanding of recombination processes in functioning devices.
First, the impact of the energy level alignment between the perovskite and fullerene-based electron transport layers is investigated. This controversial topic is comprehensively addressed, and recombination is mitigated by reducing the energy difference between the perovskite conduction band minimum and the LUMO of the fullerene. Additionally, an insulating blocking layer is introduced, which is even more effective in reducing this recombination without compromising carrier collection, and thus efficiency. Despite the rapid efficiency development (certified efficiencies have broken through the 20% ceiling) and the thousands of researchers working on perovskite-based optoelectronic devices, reliable protocols for reaching these efficiencies have been lacking. Having established robust methods for >20% devices, while keeping track of possible pitfalls, a detailed description of the fabrication of perovskite solar cells at the highest efficiency level (>20%) is provided. The fabrication of low-temperature p-i-n structured devices is described, commenting on important factors such as practical experience, processing atmosphere and temperature, material purity and solution age. Alongside reliable fabrication methods, a method to identify recombination losses is needed to further improve efficiencies. Absolute photoluminescence is thus identified as a direct way to quantify the quasi-Fermi level splitting of the perovskite absorber (1.21 eV) and the interfacial recombination losses imposed by the transport layers, which reduce it to ~1.1 eV. By implementing very thin interlayers at both the p- and n-interface (PFN-P2 and LiF, respectively), these losses are suppressed, enabling a VOC of up to 1.17 V. By optimizing the device dimensions and the bandgap, 20% devices with 1 cm² active area are demonstrated. Another important consideration is the solar cells' stability when subjected to field-relevant stressors during operation.
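The link between luminescence and voltage loss is quantitative: each decade of photoluminescence quantum yield (PLQY) lost to nonradiative recombination costs roughly kT·ln(10) ≈ 60 mV of open-circuit voltage. A minimal sketch of this standard relation (room temperature assumed; not the thesis code):

```python
import math

K_B_T_300K = 0.02585  # thermal energy kT at ~300 K, in eV

def nonradiative_voc_loss(plqy):
    """Voltage lost to nonradiative recombination (V), from the standard
    relation dV_nr = -(kT/q) * ln(PLQY); PLQY = 1 means the radiative limit."""
    return -K_B_T_300K * math.log(plqy)
```

For example, a PLQY of 1% corresponds to ~0.12 V of nonradiative loss, which is the scale of the QFLS reduction from 1.21 eV to ~1.1 eV described above.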
In particular, these stressors are heat, light, bias, or a combination thereof. Perovskite layers, especially those incorporating organic cations, have been shown to degrade when subjected to these stressors. Given that several interlayers have been successfully used to mitigate recombination losses, a family of perfluorinated self-assembled monolayers (X-PFCn, where X denotes I/Br and n = 7-12) is introduced as interlayers at the n-interface. Indeed, they reduce interfacial recombination losses, enabling device efficiencies up to 21.3%. Even more importantly, they improve the stability of the devices: solar cells with IPFC10 are stable over 3000 h of ambient storage and withstand a harsh 250 h of maximum power point (MPP) operation at 85 °C without appreciable efficiency losses. To advance further and improve device efficiencies, a sound understanding of the photophysics of a device is imperative. Many experimental observations in recent years have, however, drawn an inconclusive picture, often suffering from technical or physical impediments that disguise, e.g., capacitive discharge as recombination dynamics. To circumvent these obstacles, fully operational, highly efficient perovskite solar cells are investigated with a combination of multiple optical and optoelectronic probes, allowing a conclusive picture of the recombination dynamics in operation to be drawn. Supported by drift-diffusion simulations, the device recombination dynamics can be fully described by a combination of first-, second- and third-order recombination, and the JV curves as well as luminescence efficiencies over multiple illumination intensities are well described within the model. On this basis, steady-state carrier densities, effective recombination constants, densities of states and effective masses are calculated, putting the devices at the brink of the radiative regime. Moreover, a comprehensive review of recombination in state-of-the-art devices is given, highlighting the importance of interfaces in nonradiative recombination.
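The first-, second- and third-order description above corresponds to a total recombination rate R(n) = k1·n + k2·n² + k3·n³ (trap-assisted, radiative, and Auger-like terms), and the steady-state carrier density follows from balancing R(n) against the generation rate G. A minimal sketch of that balance (the rate constants below are arbitrary illustrative values, not the fitted thesis parameters):

```python
import math

def recombination_rate(n, k1, k2, k3):
    """Total recombination rate as a sum of first-, second- and
    third-order terms in the carrier density n."""
    return k1 * n + k2 * n**2 + k3 * n**3

def steady_state_density(g, k1, k2, k3, lo=1.0, hi=1e25):
    """Solve G = R(n) for n by bisection; R is monotonically increasing in n,
    so the root is unique. Geometric-mean midpoints suit the huge range."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if recombination_rate(mid, k1, k2, k3) < g:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)
```

Because each term dominates in a different density range, fitting JV curves and luminescence over multiple illumination intensities constrains k1, k2 and k3 separately.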
Different strategies to assess these interfacial losses are discussed, before emphasizing successful strategies to reduce interfacial recombination and pointing out the steps necessary to further improve device efficiency and stability. Overall, the main findings represent an advancement in understanding the loss mechanisms in highly efficient solar cells. Several reliable optoelectronic techniques are used, and interfacial losses are found to be of grave importance for both efficiency and stability. Addressing the interfaces, several interlayers are introduced that mitigate recombination losses and degradation.
Secondary plant compounds and their health-promoting properties have been studied extensively from a nutritional-physiology perspective over the last two decades, and specific positive effects in the human organism have in part been described in great detail. Among the carotenoids, the secondary plant compound lutein has moved into the focus of research, particularly for the prevention of ophthalmological diseases. This xanthophyll, synthesized exclusively by plants and some algae, enters the human organism through plant-based foods, especially green leafy vegetables. There it accumulates preferentially in the macular pigment of the retina of the human eye and plays an important role in maintaining the functionality of the photoreceptor cells. With aging, a decrease in macular pigment density and a degradation of lutein can be observed. The resulting destabilization of the photoreceptor cells, in combination with an altered metabolic state in the aging organism, can lead to age-related macular degeneration (AMD). The pathological symptoms of this eye disease range from loss of visual acuity to irreversible blindness. Since therapeutic agents can only prevent progression, research efforts focus on finding preventive measures. The supplementation of lutein-containing preparations offers one starting point. Dietary supplements containing lutein in various application forms are already on the market. Their limitation is the stability and bioavailability of lutein, which is in part expensive to purchase and of unknown purity. For this reason, the use of lutein esters, the plant storage form of lutein, in a dietary supplement would be advantageous. In addition to their natural, higher stability, lutein esters can be used sustainably and cost-effectively.
In this work, physicochemical and nutritionally relevant aspects of the product development process for a dietary supplement containing lutein esters in a colloidal formulation were investigated. The hitherto unique application of lutein esters in an oral spray was intended to facilitate and improve uptake of the active ingredient, especially for the elderly. Based on the results and their nutritional-physiological evaluation, recommendations were to be given for, among other things, the recipe composition of a miniemulsion (an emulsion with particle sizes <1.0 µm). The bioavailability of the lutein esters from the developed colloidal formulations was assessed through in vitro studies of resorptive and absorptive availability.
In physical investigations, the base components of the formulations were first specified. In initial active-ingredient-free model emulsions, selected oils as the carrier phase, as well as emulsifiers and solubilizers (peptizers), were physically tested for their suitability for producing a miniemulsion. The best stability and optimal miniemulsion properties were obtained using MCT (medium-chain triglyceride) oil or rapeseed oil as the carrier phase, together with the emulsifier Tween® 80 (Tween 80) alone or in combination with the whey protein hydrolysate Biozate® 1 (Biozate 1).
From the physical investigations of the model emulsions, the pre-emulsions emerged as prototypes. These contained the active ingredient lutein in various forms: pre-emulsions with lutein, with lutein esters, and with both lutein and lutein esters were designed, containing either the emulsifier Tween 80 or its combination with Biozate 1. In producing the pre-emulsions, applying ultrasound followed by high-pressure homogenization yielded the desired miniemulsions. Both emulsifiers provided optimal stabilization effects. The active ingredients were then characterized physicochemically. Lutein esters from oleoresin in particular proved stable under various storage conditions. Likewise, lutein and lutein esters remained stable during short-term treatment under specific mechanical, thermal, acidic and basic conditions; the addition of Biozate 1 provided additional protection only for lutein. Under prolonged physicochemical treatment, the active ingredients incorporated in the miniemulsions underwent moderate degradation, with a marked sensitivity to basic conditions. For the recipe development of the dietary supplement, the recommendation was therefore to design a miniemulsion with a slightly acidic pH, through controlled addition of further ingredients, to protect the active ingredient.
In the further development process of the dietary supplement, final formulations with lutein esters as the active ingredient were drawn up. Using the emulsifier Biozate 1 alone proved unsuitable. The remaining final formulations contained, in the oil phase, the active ingredient together with MCT oil or rapeseed oil and α-tocopherol for stabilization. The water phase consisted of the emulsifier Tween 80 or a combination of Tween 80 and Biozate 1. Additives included ascorbic acid and potassium sorbate for microbiological protection, and xylitol and orange flavor for sensory effects. The composition of the base recipe and the applied emulsification procedure yielded stable miniemulsions. Furthermore, long-term storage trials with the final formulations at 4 °C showed that the required amount of lutein esters in the product was maintained. Analogous investigations of a commercially available lutein-containing preparation, by contrast, revealed an instability of lutein that appeared even during short-term storage.
Finally, the bioavailability of lutein esters was tested in in vitro resorption and absorption studies with the pre-emulsions and final formulations. After treatment in an established in vitro digestion model, only a slight resorptive availability of the lutein esters could be established. Micellarization of the active ingredient from the designed formulations was limited, and enzymatic cleavage of the lutein esters to free lutein was observed only to a small extent; the specificity and activity of the corresponding hydrolytic lipases toward lutein esters must be rated as extremely low. In subsequent cell culture experiments with the Caco-2 cell line, no cytotoxic effects of the relevant ingredients in the pre-emulsions were observed. In contrast, a sensitivity toward the final formulations was observed, which should be considered in connection with irritation of the mucous membranes of the gastrointestinal tract; a less complex recipe might minimize these limitations. Concluding absorption studies showed that, in principle, a slight uptake of primarily lutein, but also of lutein monoesters, into enterocytes from miniemulsions can occur. Neither Tween 80 nor Biozate 1 had a beneficial influence on the absorption rate of lutein or lutein esters. Metabolization of the active ingredients by prior in vitro digestion increased the cellular uptake of active ingredients from formulations with lutein and with lutein esters alike. The observed uptake of lutein and lutein monoesters into enterocytes appears to occur via passive diffusion, although active transport cannot be excluded. Lutein diesters, by contrast, cannot enter the enterocytes via micellarization and simple diffusion because of their molecular size.
Their uptake into the epithelial cells of the small intestine requires prior hydrolytic cleavage by specific lipases. This step in turn limits the effective uptake of the lutein esters into the cells and constitutes a constraint on their bioavailability compared with free lutein.
In summary, a low bioavailability from colloidal formulations was demonstrated for the physicochemically stable lutein esters. Nevertheless, their use as a source of the secondary plant compound lutein in a dietary supplement can still be recommended. In combination with the intake of lutein-rich plant foods, the supplement can contribute to improving lutein status despite the expected low bioavailability of its lutein esters. Relevant publications have shown clear correlations between the intake of preparations containing lutein esters and an increase in serum lutein concentration and macular pigment density in vivo. The slightly better bioavailability of free lutein must be weighed critically against its instability and high cost. As a result of this work, the commercial product Vita Culus® was designed. As an outlook, human intervention studies with the supplement should enable a final assessment of the bioavailability of lutein esters from the preparation.
To meet the demands of a growing world population while reducing carbon dioxide (CO2) emissions, it is necessary to capture CO2 and convert it into value-added compounds. In recent years, metabolic engineering of microbes has gained strong momentum as a strategy for the production of valuable chemicals. As common microbial feedstocks like glucose directly compete with human consumption, the one-carbon (C1) compound formate was suggested as an alternative feedstock. Formate can be easily produced by various means, including electrochemical reduction of CO2, and could serve as a feedstock for microbial production, hence presenting a novel entry point for CO2 to the biosphere and a storage option for excess electricity. Compared to the gaseous molecule CO2, formate is a highly soluble compound that can be easily handled and stored. It can serve as a carbon and energy source for natural formatotrophs, but these microbes are difficult to cultivate and engineer. In this work, I present the results of several projects that aim to establish efficient formatotrophic growth of E. coli – which cannot naturally grow on formate – via synthetic formate assimilation pathways. In the first study, I establish a workflow for growth-coupled metabolic engineering of E. coli. I demonstrate this approach by presenting an engineering scheme for the PFL-threonine cycle, a synthetic pathway for anaerobic formate assimilation in E. coli. The described methods are intended to create a standardized toolbox for engineers who aim to establish novel metabolic routes in E. coli and related organisms. The second chapter presents a study on the catalytic efficiency of C1-oxidizing enzymes in vivo. As formatotrophic growth requires generation of both energy and biomass from formate, the engineered E. coli strains need to be equipped with a highly efficient formate dehydrogenase, which provides reduction equivalents and ATP for formate assimilation.
I engineered a strain that cannot generate reducing power and energy for cellular growth when fed on acetate. Under this condition, the strain depends on the introduction of an enzymatic system for NADH regeneration, which can further produce ATP via oxidative phosphorylation. I show that the strain presents a valuable testing platform for C1-oxidizing enzymes by testing different NAD-dependent formate and methanol dehydrogenases in this energy-auxotrophic strain. Using this platform, several candidate enzymes with high in vivo activity were identified and characterized as potential energy-generating systems for synthetic formatotrophic or methylotrophic growth in E. coli. In the third chapter, I present the establishment of the serine threonine cycle (STC) – a synthetic formate assimilation pathway – in E. coli. In this pathway, formate is assimilated via formate tetrahydrofolate ligase (FtfL) from Methylobacterium extorquens (M. extorquens). The carbon from formate is attached to glycine to produce serine, which is converted into pyruvate entering central metabolism. Via the natural threonine synthesis and cleavage route, glycine is regenerated and acetyl-CoA is produced as the pathway product. I engineered several selection strains that depend on different STC modules for growth and determined key enzymes that enable high flux through threonine synthesis and cleavage. I could show that expression of an auxiliary formate dehydrogenase was required to achieve growth via threonine synthesis and cleavage on pyruvate. By overexpressing most of the pathway enzymes from the genome and applying adaptive laboratory evolution, growth on glycine and formate was achieved, indicating activity of the complete cycle. The fourth chapter shows the establishment of the reductive glycine pathway (rGP) – a short, linear formate assimilation route – in E. coli. As in the STC, formate is assimilated via M. extorquens FtfL.
The C1 from formate is condensed with CO2 via the reverse reaction of the glycine cleavage system to produce glycine. Another carbon from formate is attached to glycine to form serine, which is assimilated into central metabolism via pyruvate. The engineered E. coli strain, expressing most of the pathway genes from the genome, can grow via the rGP with formate or methanol as a sole carbon and energy source.
Methane is an important greenhouse gas contributing to global climate change. Natural environments and restored wetlands contribute a large proportion of the global methane budget. Methanogenic archaea (methanogens) and methane-oxidizing bacteria (methanotrophs), the biogenic producers and consumers of methane, play key roles in the methane cycle in these environments. A large number of studies have revealed the distribution, diversity and composition of these microorganisms in individual habitats. However, uncertainties remain in predicting the response and feedback of methane-cycling microorganisms to future climate and related environmental changes, owing to the limited spatial scales considered so far and to poor knowledge of the biogeography of these important microorganisms across global and local scales.
With the aim of improving our understanding of whether and how methane-cycling microbial communities will be affected by dynamic environmental factors under climate change, this PhD thesis investigates the biogeographic patterns of methane-cycling communities and the factors that shape these patterns at different spatial scales. At the global scale, a meta-analysis was performed using 94 globally distributed public datasets together with environmental data from various natural environments, including soils, lake sediments, estuaries, marine sediments, hydrothermal sediments and mud volcanoes. In combination with a global biogeographic map of methanogenic archaea from multiple natural environments, this thesis reveals that biogeographic patterns of methanogens exist. Terrestrial habitats showed higher alpha diversities than marine environments. Methanoculleus and Methanosaeta (Methanothrix) are the most frequently detected taxa in marine habitats, while Methanoregula prevails in terrestrial habitats. Estuary ecosystems, the transition zones between marine and terrestrial/limnic ecosystems, have the highest methanogenic richness but comparably low methane emission rates. At the local scale, this study compared two rewetted fens with known high methane emissions in northeastern Germany: a coastal brackish fen (Hütelmoor) and a freshwater riparian fen (Polder Zarnekow). Consistent with their different geochemical conditions and land-use histories, the two rewetted fens exhibit dissimilar methanogenic and, especially, methanotrophic community compositions. The methanotrophic community was generally under-represented among the prokaryotic communities, and both fens show similarly low ratios of methanotrophic to methanogenic abundances.
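Alpha diversity comparisons like the one above are commonly based on indices such as the Shannon diversity; whether this thesis uses Shannon, inverse Simpson, or richness estimators is not stated here, so the following is only an illustrative sketch with hypothetical taxon counts:

```python
import math

def shannon_diversity(counts):
    """Shannon diversity H = -sum(p_i * ln(p_i)) over taxon abundances.
    Higher H means more taxa and/or a more even community."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)
```

A perfectly even community of k taxa gives H = ln(k), while a community dominated by a single taxon approaches H = 0, so the index captures both richness and evenness in one number.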
Since few studies have characterized methane-cycling microorganisms in rewetted fens, this study provides first evidence that the rapid and well re-established methanogenic community in combination with the low and incomplete re-establishment of the methanotrophic community after rewetting contributes to elevated sustained methane fluxes following rewetting.
Finally, this thesis demonstrates that dispersal limitation only slightly regulates the biogeographic distribution patterns of methanogenic microorganisms in natural environments and restored wetlands. Instead, their existence, adaptation and establishment are more strongly associated with the selective pressures of different environmental conditions. Salinity, pH and temperature are identified as the most important factors shaping microbial community structure at different spatial scales (global versus terrestrial environments). Predicted changes in climate, such as increasing temperature, changes in precipitation patterns and an increasing frequency of flooding events, are likely to induce a series of environmental alterations that will directly or indirectly affect the environmental drivers of methanogenic communities, leading to changes in their community composition and thus potentially also in future methane emission patterns.
Large-scale patterns of global land use change are frequently accompanied by natural habitat loss. Assessing the consequences of habitat loss for the remaining natural and semi-natural biotopes requires the inclusion of cumulative effects at the landscape level. The interdisciplinary concept of vulnerability constitutes an appropriate assessment framework at the landscape level, though with few examples of its application in ecological assessments. A comprehensive biotope vulnerability analysis allows identification of the areas most affected by landscape change that at the same time have the lowest chances of regeneration.
To this end, a series of ecological indicators were reviewed and developed. They measured spatial attributes of individual biotopes as well as some ecological and conservation characteristics of the respective resident species community. The final vulnerability index combined seven largely independent indicators, which covered exposure, sensitivity and adaptive capacity of biotopes to landscape changes. Results for biotope vulnerability were provided at the regional level. This seems to be an appropriate extent with relevance for spatial planning and designing the distribution of nature reserves.
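Combining seven heterogeneous indicators into one vulnerability score requires bringing them onto a common scale first. The sketch below shows one common approach (min-max normalization followed by an unweighted mean); the actual aggregation rule and weights of the thesis are not specified here, so this is an assumption for illustration:

```python
def min_max_normalize(values):
    """Rescale one indicator across all biotopes to the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)  # constant indicator carries no signal
    return [(v - lo) / (hi - lo) for v in values]

def vulnerability_index(indicator_table):
    """Combine indicator columns (one list per indicator, same biotope order)
    into a single 0-1 score per biotope: normalize, then average equally."""
    normalized = [min_max_normalize(col) for col in indicator_table]
    n = len(indicator_table)
    return [sum(col[i] for col in normalized) / n
            for i in range(len(indicator_table[0]))]
```

Equal weighting keeps the index transparent and reproducible; a weighted mean would be a natural extension if some indicators (e.g. exposure vs. adaptive capacity) should count more.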
Using the vulnerability scores calculated for the German federal state of Brandenburg, hot spots and clusters within and across the distinguished types of biotopes were analysed. Biotope types with high dependence on water availability, as well as biotopes of the open landscape containing woody plants (e.g., orchard meadows) are particularly vulnerable to landscape changes. In contrast, the majority of forest biotopes appear to be less vulnerable. Despite the appeal of such generalised statements for some biotope types, the distribution of values suggests that conservation measures for the majority of biotopes should be designed specifically for individual sites. Taken together, size, shape and spatial context of individual biotopes often had a dominant influence on the vulnerability score.
The implementation of the biotope vulnerability analysis at the regional level showed that large biotope datasets can be evaluated with a high level of detail using geoinformatics. Drawing on previous work in landscape spatial analysis, the reproducible approach relies on transparent calculations of quantitative and qualitative indicators. At the same time, it provides both a synoptic overview and information on individual biotopes. It is expected to be most useful for nature conservation in combination with an understanding of the population, species and community attributes known for specific sites. The biotope vulnerability analysis facilitates a foresighted assessment of different land uses, helping to identify options for slowing habitat loss to sustainable levels. It can also be incorporated into the planning of restoration measures, guiding efforts to remedy ecological damage. Restoration of any specific site could yield synergies with the conservation objectives of other sites, by enhancing the habitat network or buffering against future landscape change.
Biotope vulnerability analysis could be developed in line with other important ecological concepts, such as resilience and adaptability, further extending the broad thematic scope of the vulnerability concept. Vulnerability can increasingly serve as a common framework for the interdisciplinary research necessary to solve major societal challenges.
To anticipate the future turnover of present-day reef ecosystems under environmental stresses such as global warming and ocean acidification, analogue studies from the geologic past are needed. As a critical time of reef ecosystem innovation, the Permian-Triassic transition witnessed the most severe demise of Phanerozoic reef builders and the establishment of modern-style symbiotic relationships among reef-building organisms. At the initial stage of this transition, the Middle Permian (Capitanian) mass extinction caused a reef eclipse in the early Late Permian, which left a gap in our understanding of the post-extinction Wuchiapingian reef ecosystem, shortly before the radiation of Changhsingian reefs. This thesis presents detailed biostratigraphic, sedimentological and palaeoecological studies of Wuchiapingian reef recovery following the Middle Permian (Capitanian) mass extinction, based on the only recorded Wuchiapingian reef setting, which crops out at the Tieqiao section in South China.
Conodont biostratigraphic zonations at the Tieqiao section were revised from the Early Permian Artinskian to the Late Permian Wuchiapingian. Twenty main and seven subordinate conodont zones were determined, including two conodont zones below and above the Tieqiao reef complex. The age of the Tieqiao reef was thereby constrained to the early to middle Wuchiapingian.
After constraining the reef age, detailed two-dimensional outcrop mapping combined with lithofacies studies was carried out on the Wuchiapingian Tieqiao section to investigate the stratigraphic reef growth pattern as well as the lateral changes of reef geometry at the outcrop scale. Semi-quantitative studies of the reef-building organisms were used to trace their evolution during the reef recovery. Six reef growth cycles were determined within six transgressive-regressive cycles in the Tieqiao section. The reefs developed within the upper part of each regressive phase and were dominated by different biotas. The timing of initial reef recovery after the Middle Permian (Capitanian) mass extinction was revised to the Clarkina leveni conodont zone, earlier than previously thought. Metazoans such as sponges did not become major components of the Wuchiapingian reefs until the 5th and 6th cycles, so the recovery of the metazoan reef ecosystem after the Middle Permian (Capitanian) mass extinction was clearly delayed. In addition, although the importance of metazoan reef builders such as sponges did increase during the recovery, encrusting organisms such as Archaeolithoporella and Tubiphytes, combined with microbial carbonate precipitation, still played significant roles in reef building and reef recovery after the mass extinction.
Based on the results from outcrop mapping and sedimentological studies, a quantitative composition analysis of the Tieqiao reef complex was applied to selected thin sections to further investigate the functioning of reef-building components and the reef evolution after the Middle Permian (Capitanian) mass extinction. Data sets of skeletal grains and whole-rock components were analyzed. The results show eleven biocommunity clusters and eight rock composition clusters dominated by different skeletal grains and rock components. Sponges, Archaeolithoporella and Tubiphytes were the most ecologically important components of the Wuchiapingian Tieqiao reef, while clotted micrites and syndepositional cements were additional important rock components of the reef cores. Sponges were important throughout the reef recovery. Tubiphytes were broadly distributed across different environments and played a key role in the initial reef communities. Archaeolithoporella concentrated in the shallower parts of the reef cycles (i.e., the upper part of the reef core) and was functionally significant for the enlargement of reef volume.
In general, the reef recovery after the Middle Permian (Capitanian) mass extinction shows some similarities with the reef recovery following the end-Permian mass extinction: a delayed recovery of metazoan reefs and a stepwise recovery pattern controlled by both ecological and environmental factors. The importance of encrusting organisms and microbial carbonates is also similar to most other post-extinction reef ecosystems. These findings help extend our understanding of reef ecosystem evolution under environmental perturbations and stresses.
Bank filtration is an effective water treatment technique and is widely adopted in Europe along major rivers. It is the process in which surface water penetrates the riverbed, flows through the aquifer, and is then extracted by near-bank production wells. Along this subsurface flow passage, water quality is improved by a series of beneficial processes. Long-term riverbank filtration, however, also produces colmation layers on the riverbed. The colmation layer may act as a bioactive zone governed by biochemical and physical processes owing to its enrichment in microbes and organic matter, but its low permeability may strongly limit surface water infiltration and further decrease the recoverable ratio of the production wells. The removal of the colmation layer is therefore a trade-off between treatment capacity and treatment efficiency. The goal of this Ph.D. thesis is to characterize the temporal and spatial changes of water quality and quantity along the flow paths of a hydrogeologically heterogeneous riverbank filtration site adjacent to an artificially reconstructed (bottom excavation and bank reconstruction) canal in Potsdam, Germany.
To quantify the changes in infiltration rate, travel time distribution, and the thermal field brought about by the canal reconstruction, a three-dimensional flow and heat transport model was created with two scenarios: 1) with and 2) without canal reconstruction. Overall, the calibrated model reproduced both the hydraulic heads and the temperatures observed in the field. Compared to the scenario without reconstruction, the reconstruction scenario led to more water infiltrating into the aquifer along that section, on average 521 m³/d, corresponding to around 9% of the total pumping rate. The subsurface travel-time distribution shifted substantially towards shorter travel times: flow paths with travel times <200 days increased by ~10% and those with <300 days by 15%. Furthermore, the thermal distribution in the aquifer showed that the seasonal variation in the reconstruction scenario reaches deeper and propagates further laterally.
A scatter plot of δ18O versus δ2H differentiated the infiltrated river water from water flowing in the deep aquifer, which may contain remnant landside groundwater from further north. The increase of the river water contribution due to decolmation, in turn, could be shown by a Piper plot. Geological heterogeneity caused substantial spatial differences in redox zonation among flow paths, both horizontally and vertically. A Wilcoxon rank test showed that the reconstruction changed the redox potential differently across the observation wells; given the small absolute concentration levels, however, the changes are relatively minor. The treatment efficiency for both organic and inorganic matter was consistent after the reconstruction, except for ammonium. The inconsistent results for ammonium could be explained by changes in the cation exchange capacity (CEC) of the newly paved riverbed: because the bed is new, it was not yet capable of retaining the newly produced ammonium by sorption, which led to the breakthrough of an ammonium plume. By estimation, the peak of the ammonium plume will reach the most distant observation well before February 2024, while the peak concentration could be further dampened by sorption and diluted by the subsequent low-ammonium flow. The consistent DOC and SUVA levels suggest that there was no clear preference for organic matter removal along the flow path.
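The paired, non-parametric comparison mentioned above can be sketched with scipy; the redox readings below are invented placeholder values, not data from the study:

```python
import numpy as np
from scipy.stats import wilcoxon

# hypothetical paired redox-potential readings (mV) at one observation well,
# before and after the canal reconstruction -- illustrative values only
before = np.array([112, 98, 105, 120, 101, 95, 110, 99, 104, 108])
after = np.array([118, 103, 109, 131, 104, 97, 117, 100, 112, 117])

# paired, non-parametric test of whether the reconstruction shifted
# the redox potential at this well
stat, p = wilcoxon(before, after)
print(p < 0.05)  # a small p-value indicates a systematic shift
```

The test uses within-well pairs, which is why it suits a before/after comparison at individual observation wells without assuming normally distributed readings.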
As one of the most widely produced commodity polymers, polypropylene draws considerable scientific and commercial interest as an electret material. The present thesis discusses the influence of surface chemical modification and crystalline reconstruction on the electret properties of polypropylene thin films. Chemical treatment with orthophosphoric acid can significantly improve the surface charge stability of polypropylene electrets by introducing phosphorus- and oxygen-containing structures onto the modified surface. Thermally stimulated discharge measurements and charge profiling by means of piezoelectrically generated pressure steps are used to investigate the electret behaviour. It is concluded that deep traps of limited number density are created during the treatment with inorganic chemicals; hence, the improvement decreases dramatically when the surface-charge density substantially exceeds ±1.2×10^(-3) C·m^(-2). The newly formed traps also show a higher trapping energy for negative charges. The energetic distributions of the traps in the non-treated and chemically treated samples offer insight into the dominant roles of the surface and of foreign chemical groups in charge storage and transport in polypropylene electrets.
Additionally, different electret properties are observed for polypropylene films with spherulitic and transcrystalline structures, indicating that charge storage and transport depend on the crystallite and molecular orientations in the crystalline phase. In general, the more diverse crystalline growth in spherulitic samples can result in a more complex energetic trap distribution than in transcrystalline polypropylene. A double-layer transcrystalline polypropylene film with a crystalline interface in the middle can be obtained by crystallising the film in contact with rough moulding surfaces on both sides. After thermal poling, a layer of heterocharges appears on each side of the interface in the double-layer transcrystalline polypropylene electrets, whereas no charge is captured within the transcrystalline layers. This phenomenon reveals the importance of the crystalline interface in creating traps with higher activation energies in polypropylene. The present studies highlight the fact that even slight variations in a polypropylene film may lead to dramatic differences in its electret properties.
Unter Verschluss
(2020)
What happens when distinct linguistic consciousnesses, besides being separated by era, geographic area of origin, social differentiation, or the various dimensions of language, also belong to different semiotic domains? This is what occurs every time we communicate online: digital interaction is the hybrid communication setting par excellence, in which the mixing of different languages is overlaid by the mixing of different codes. Starting from the premise that it is new expressive needs and new communicative situations that drive linguistic innovation, it seems worthwhile to take into account the prominence assumed by the visual – and more generally multimodal – repertoire in the spontaneous use of new media, and to observe how the particular meaning-making strategies currently at work can no longer do without these additional dimensions. Their weight in the digital use of language should be clearly recognized in order to approach all the innovations connected with it without prejudice. A role of central importance in approaching verbal language on the Internet belongs to the indexical function of language which, combined with the presence of a shared reference archive of world knowledge, triggers a new type of inferentiality in the receiver. Conversation through social networks in fact allows actions that are not necessarily present in face-to-face exchange but are instead peculiar to Facebook, Twitter, G+, Instagram, Flickr and social networks in general: the sharing of multimedia material of various kinds, the option of recalling messages on a specific topic, and the possibility of glossing them. Multimedia material thus becomes at once an integral part of communication and an expressive modality, the focus of discourse and a shared metaphorical language.
This research investigates how different and apparently distant fields of research can interact productively with the scientific landscape of the sciences of language, image, and communication, arriving at the formulation of an updated model of the linguistic hybridization that characterizes online communication.
It has frequently been observed that single emotional events are not only more efficiently processed but also better remembered, forming longer-lasting memory traces than neutral material. However, when emotional information is perceived as part of a complex event, such as in the context of or in relation to other events and/or source details, the modulatory effects of emotion are less clear. The present work investigates how emotional contextual source information modulates the initial encoding and subsequent long-term retrieval of associated neutral material (item memory) and contextual source details (contextual source memory). To do so, a two-task experiment was used, consisting of an incidental encoding task, in which neutral objects were displayed over contextual background scenes that varied in emotional content (unpleasant, pleasant, and neutral), and a delayed retrieval task (1 week), in which previously encoded objects and new ones were presented. In a series of studies, behavioral indices (Studies 2, 3, and 5), event-related potentials (ERPs; Studies 1-4), and functional magnetic resonance imaging (Study 5) were used to investigate whether emotional contexts can rapidly tune the visual processing of associated neutral information (Study 1) and modulate long-term item memory (Study 2), how different recognition memory processes (familiarity vs. recollection) contribute to these emotion effects on item and contextual source memory (Study 3), whether the emotional effects on item memory can also be observed during spontaneous retrieval (Study 4), and which brain regions underpin the modulatory effects of emotional contexts on item and contextual source memory (Study 5). Study 1 showed that emotional contexts, by means of emotional associative learning, can rapidly alter the processing of associated neutral information. Neutral items associated with emotional contexts (i.e., emotional associates), compared to neutral ones, showed enhanced perceptual and more elaborate processing after a single pairing, as indexed by larger amplitudes in the P100 and LPP components, respectively. Study 2 showed that emotional contexts produce longer-lasting memory effects, as evidenced by better item memory performance and larger ERP Old/New differences for emotional associates. In Study 3, a mnemonic differentiation was observed between item and contextual source memory, which was modulated by emotion: item memory was driven by familiarity, independently of the emotional contexts during encoding, whereas contextual source memory was driven by recollection and was better for emotional material. As in Study 2, enhancing effects of emotional contexts on item memory were observed in ERPs associated with recollection processes. Likewise, for contextual source memory, a pronounced recollection-related ERP enhancement was observed exclusively for emotional contexts. Study 4 showed that the long-term recollection enhancement of emotional contexts on item memory can be observed even when retrieval is not explicitly attempted, as measured with ERPs, suggesting that the emotion-enhancing effects on memory are not related to the task embedded during recognition but to the motivational relevance of the triggering event. Study 5 showed that the enhancing effects of emotional contexts on item and contextual source memory involve stronger engagement of brain regions associated with memory recollection, including areas of the medial temporal lobe, posterior parietal cortex, and prefrontal cortex.
Taken together, these findings suggest that emotional contexts rapidly modulate the initial processing of associated neutral information and the subsequent, long-term item and contextual source memories. The enhanced memory effects of emotional contexts are strongly supported by recollection rather than familiarity processes, and are shown to be triggered when retrieval is both explicitly and spontaneously attempted. These results provide new insights into the modulatory role of emotional information on the visual processing and the long-term recognition memory of complex events. The present findings are integrated into the current theoretical models and future ventures are discussed.
Sie senden den Wandel
(2020)
It is well known what an important role the media play in the consolidation, or indeed the transformation, of a society. But what happens when media operate from below, and do so in large numbers, involving many societal actors and reaching a broad audience? In Argentina, a fascinating radio landscape has emerged that works collectively, participatorily, and progressively: the community radios. Viviana Uriona takes us on an ethnographic journey through the history of these radio stations, analyzes how they work, and searches for the reasons behind their success. By the end of the book, one question no longer remains open: could what happened there succeed here in the same way?
This thesis offers new insights into the effects of Start-Up Subsidies (SUS) for unemployed individuals as a special kind of active labor market program (ALMP) that aims to re-integrate individuals into the labor market via the route of self-employment. Moreover, this thesis contributes to the literature on methods for causal inference when the treatment variable is continuous rather than binary, for example when individuals differ in their degree of exposure to a common treatment.
The analysis of the effects of SUS focuses on the main current German program, the “Gründungszuschuss” (New Start-Up Subsidy, NSUS), after its reform in 2011. Average effects on participants' labor market outcomes - as measured by employment and earnings - as well as on subjective well-being are estimated mainly based on propensity score matching (PSM) techniques. PSM aims to achieve balance in terms of observed characteristics by matching each participant with at least one non-participant who is comparable in terms of the probability of receiving the treatment. This estimation strategy is valid as long as all relevant characteristics that explain selection into treatment are observed and included in the estimation of the propensity score. To make our analysis as credible as possible, we control for a large vector of characteristics observed through the combination of rich administrative data from the Federal Employment Agency and survey data.
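The PSM logic described above can be sketched on synthetic data; the data-generating process, covariates, and effect size below are invented for illustration (the thesis itself works with rich administrative and survey data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
# synthetic data: X covariates, d treatment indicator, y outcome
n = 1000
X = rng.normal(size=(n, 3))
d = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # selection on X
y = X[:, 0] + 2.0 * d + rng.normal(size=n)       # true effect on the treated = 2

# 1) estimate the propensity score P(D = 1 | X)
ps = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]

# 2) match each participant to the nearest non-participant on the score
treated = ps[d == 1].reshape(-1, 1)
control = ps[d == 0].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(control)
_, idx = nn.kneighbors(treated)

# 3) average treatment effect on the treated (ATT): mean outcome gap
#    between each participant and their matched non-participant
att = (y[d == 1] - y[d == 0][idx.ravel()]).mean()
print(round(att, 2))  # close to the true effect of 2
```

The key assumption mirrors the text: selection into treatment depends only on the observed covariates, so matching on the estimated score balances the groups.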
Chapters two to four of this thesis put special emphasis on aspects of (the evaluation of) SUS programs that have received no or only limited attention thus far. The first aspect relates to the interplay between the institutional details of the program and its effectiveness; so far, relatively little is known about the importance of SUS program features such as the duration of support. Second, no experimental benchmark evaluation of SUS is available, so the reliability of non-experimental estimation techniques such as PSM is of crucial importance, as estimates are biased when relevant confounders are omitted from the analysis. Third, transitioning into (relatively risky) self-employment may have detrimental effects on the subjective well-being of subsidized founders out of unemployment; these would remain undetected if the analysis focused exclusively on participants' labor market outcomes. The results indicate positive long-term effects of SUS participation on employment and earnings among participants. These effects are substantially larger than those estimated before the reform, indicating room for improvement in program design via changes in institutional details. Moreover, the non-experimental estimates of treatment effects are remarkably robust to hidden confounding. Regarding subjective well-being, this thesis finds a positive long-run impact on job satisfaction and a detrimental effect on satisfaction with social security; the latter appears to be driven by adverse effects on social insurance contributions.
In chapter five, a novel automated covariate balancing technique for the estimation of causal effects in the context of continuous treatments is derived and its performance assessed against other (automated) balancing techniques. Although binary research designs that only differentiate between participants and non-participants of some treatment remain the most common case in empirical practice, many applications can be adapted to include continuous treatments as well. Often, this allows for more meaningful estimates of causal effects that can further improve the design of programs. In the context of SUS, one may, for instance, investigate the effects of the size or duration of monetary support on participants' labor market outcomes. Both Monte Carlo investigations and analyses of two well-known datasets suggest superior performance of the proposed Entropy Balancing for continuous treatments (EBCT) compared to existing estimation strategies.
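The binary-treatment version of entropy balancing conveys the core idea behind EBCT: choose minimum-entropy weights for the comparison group so that its covariate moments match those of the treated. The sketch below solves the standard dual formulation on toy data (group size, dimensions, and target moments are made up, and this is not the continuous-treatment estimator the thesis proposes):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# toy comparison group: 200 units with 2 covariates
Xc = rng.normal(size=(200, 2))
target = np.array([0.3, -0.2])  # hypothetical treated covariate means

# dual of entropy balancing: weights w_i proportional to exp(Xc_i . lam);
# minimizing the log-partition function minus lam . target enforces the
# moment constraints at the optimum
def dual(lam):
    return np.log(np.exp(Xc @ lam).sum()) - lam @ target

lam = minimize(dual, x0=np.zeros(2), method="BFGS").x
w = np.exp(Xc @ lam)
w /= w.sum()

# the reweighted comparison-group means now match the treated means
print(np.allclose(w @ Xc, target, atol=1e-4))
```

Because the dual is smooth and convex, a quasi-Newton solver finds the balancing weights directly, which is what makes the approach "automated" compared to iterative propensity-score respecification.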
The development of bioinspired self-assembling materials, such as hydrogels, with promising applications in cell culture, tissue engineering and drug delivery is a current focus in material science. Biogenic or bioinspired proteins and peptides are frequently used as versatile building blocks for extracellular matrix (ECM) mimicking hydrogels. However, precisely controlling and reversibly tuning the properties of these building blocks and the resulting hydrogels remains challenging. Precise control over the viscoelastic properties and self-healing abilities of hydrogels are key factors for developing intelligent materials to investigate cell matrix interactions. Thus, there is a need to develop building blocks that are self-healing, tunable and self-reporting. This thesis aims at the development of α-helical peptide building blocks, called coiled coils (CCs), which integrate these desired properties. Self-healing is a direct result of the fast self-assembly of these building blocks when used as material cross-links. Tunability is realized by means of reversible histidine (His)-metal coordination bonds. Lastly, implementing a fluorescent readout, which indicates the CC assembly state, self-reporting hydrogels are obtained.
Coiled coils are abundant protein folding motifs in Nature, which often have mechanical function, such as in myosin or fibrin. Coiled coils are superhelices made up of two or more α-helices wound around each other. The assembly of CCs is based on their repetitive sequence of seven amino acids, so-called heptads (abcdefg). Hydrophobic amino acids in the a and d position of each heptad form the core of the CC, while charged amino acids in the e and g position form ionic interactions. The solvent-exposed positions b, c and f are excellent targets for modifications since they are more variable. His-metal coordination bonds are strong, yet reversible interactions formed between the amino acid histidine and transition metal ions (e.g. Ni2+, Cu2+ or Zn2+). His-metal coordination bonds essentially contribute to the mechanical stability of various high-performance proteinaceous materials, such as spider fangs, Nereis worm jaws and mussel byssal threads. Therefore, I bioengineered reversible His-metal coordination sites into a well-characterized heterodimeric CC that served as tunable material cross-link. Specifically, I took two distinct approaches facilitating either intramolecular (Chapter 4.2) and/or intermolecular (Chapter 4.3) His-metal coordination.
Previous research suggested that force-induced CC unfolding in shear geometry starts from the points of force application. In order to tune the stability of a heterodimeric CC in shear geometry, I inserted His in the b and f position at the termini of force application (Chapter 4.2). The spacing of His is such that intra-CC His-metal coordination bonds can form to bridge one helical turn within the same helix, but also inter-CC coordination bonds are not generally excluded. Starting with Ni2+ ions, Raman spectroscopy showed that the CC maintained its helical structure and the His residues were able to coordinate Ni2+. Circular dichroism (CD) spectroscopy revealed that the melting temperature of the CC increased by 4 °C in the presence of Ni2+. Using atomic force microscope (AFM)-based single molecule force spectroscopy, the energy landscape parameters of the CC were characterized in the absence and the presence of Ni2+. His-Ni2+ coordination increased the rupture force by ~10 pN, accompanied by a decrease of the dissociation rate constant. To test if this stabilizing effect can be transferred from the single molecule level to the bulk viscoelastic material properties, the CC building block was used as a non-covalent cross-link for star-shaped poly(ethylene glycol) (star-PEG) hydrogels. Shear rheology revealed a 3-fold higher relaxation time in His-Ni2+ coordinating hydrogels compared to the hydrogel without metal ions. This stabilizing effect was fully reversible when using an excess of the metal chelator ethylenediaminetetraacetate (EDTA). The hydrogel properties were further investigated using different metal ions, i.e. Cu2+, Co2+ and Zn2+. Overall, these results suggest that Ni2+, Cu2+ and Co2+ primarily form intra-CC coordination bonds while Zn2+ also participates in inter-CC coordination bonds. This may be a direct result of its different coordination geometry.
Intermolecular His-metal coordination bonds in the terminal regions of the protein building blocks of mussel byssal threads are primarily formed by Zn2+ and were found to be intimately linked to higher-order assembly and self-healing of the thread. In the above example, the contributions of intra-CC and inter-CC His-Zn2+ coordination cannot be disentangled. In Chapter 4.3, I redesigned the CC to prohibit the formation of intra-CC His-Zn2+ coordination bonds, focusing only on inter-CC interactions. Specifically, I inserted His in the solvent-exposed f positions of the CC to focus on the effect of metal-induced higher-order assembly of CC cross-links. Raman and CD spectroscopy revealed that this CC building block forms α-helical Zn2+ cross-linked aggregates. Using this CC as a cross-link for star-PEG hydrogels, I showed that the material properties can be switched from viscoelastic in the absence of Zn2+ to elastic-like in the presence of Zn2+. Moreover, the relaxation time of the hydrogel was tunable over three orders of magnitude when using different Zn2+:His ratios. This tunability is attributed to a progressive transformation of single CC cross-links into His-Zn2+ cross-linked aggregates, with inter-CC His-Zn2+ coordination bonds serving as an additional cross-linking mode.
Rheological characterization of the hydrogels with inter-CC His-Zn2+ coordination raised the question whether the His-Zn2+ coordination bonds between CCs or also the CCs themselves rupture when shear strain is applied. In general, the amount of CC cross-links initially formed in the hydrogel as well as the amount of CC cross-links breaking under force remains to be elucidated. In order to more deeply probe these questions and monitor the state of the CC cross-links when force is applied, a fluorescent reporter system based on Förster resonance energy transfer (FRET) was introduced into the CC (Chapter 4.4). For this purpose, the donor-acceptor pair carboxyfluorescein and tetramethylrhodamine was used. The resulting self-reporting CC showed a FRET efficiency of 77 % in solution. Using this fluorescently labeled CC as a self-reporting, reversible cross-link in an otherwise covalently cross-linked star-PEG hydrogel enabled the detection of the FRET efficiency change under compression force. This proof-of-principle result sets the stage for implementing the fluorescently labeled CCs as molecular force sensors in non-covalently cross-linked hydrogels.
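For orientation, the reported 77 % FRET efficiency translates into a donor-acceptor distance via the Förster relation E = 1/(1 + (r/R0)^6). The Förster radius used below (~5 nm for the carboxyfluorescein/tetramethylrhodamine pair) is an assumed, order-of-magnitude value for illustration, not one stated in the thesis:

```python
R0 = 5.0  # assumed Förster radius of the FRET pair in nm (illustrative)
E = 0.77  # FRET efficiency of the labeled CC in solution (from the text)

# invert E = 1 / (1 + (r/R0)**6) for the donor-acceptor distance r
r = R0 * ((1 - E) / E) ** (1 / 6)
print(round(r, 2))  # distance in nm, somewhat below R0 since E > 0.5
```

The sixth-power dependence is what makes the labeled CC a sensitive reporter: small changes in donor-acceptor separation around R0 produce large changes in efficiency.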
In summary, this thesis highlights that rationally designed CCs are excellent reversibly tunable, self-healing and self-reporting hydrogel cross-links with high application potential in bioengineering and biomedicine. For the first time, I demonstrated that His-metal coordination-based stabilization can be transferred from the single-CC level to the bulk material with clear viscoelastic consequences. Insertion of His at specific sequence positions was used to implement a second non-covalent cross-linking mode via intermolecular His-metal coordination. This His-metal-binding-induced aggregation of the CCs enabled the hydrogel properties to be tuned reversibly from viscoelastic to elastic-like. As a proof of principle for establishing self-reporting CCs as material cross-links, I labeled a CC with a FRET pair. The fluorescently labeled CC acts as a molecular force sensor, and preliminary results suggest that it enables the detection of hydrogel cross-link failure under compression force. In the future, fluorescently labeled CC force sensors will likely be used not only as intelligent cross-links to study the failure of hydrogels but also to investigate cell-matrix interactions in 3D down to the single-molecule level.
Philosophische Tugenden
(2020)
What constitutes good philosophizing? And why is John Stuart Mill in particular an exceptionally good philosopher? In this volume, Joachim Toenges-Hinn combines the metaphilosophical search for what makes good philosophy with a historical examination of the philosopher John Stuart Mill. Mill thereby serves both as originator of and embodiment of the striving for two philosophical virtues, which Toenges-Hinn derives from Mill's philosophical work and subsequently defends systematically. These virtues, termed the “Bentham ideal” and the “Coleridge ideal”, are at the focus of his investigation, as is the significance of experiments in living for philosophical biographies.
The development of methods such as super-resolution microscopy (Nobel prize in Chemistry, 2014) and multi-scale computer modelling (Nobel prize in Chemistry, 2013) have provided scientists with powerful tools to study microscopic systems. Sub-micron particles or even fluorescently labelled single molecules can now be tracked for long times in a variety of systems such as living cells, biological membranes, colloidal solutions etc. at spatial and temporal resolutions previously inaccessible. Parallel to such single-particle tracking experiments, super-computing techniques enable simulations of large atomistic or coarse-grained systems such as biologically relevant membranes or proteins from picoseconds to seconds, generating large volume of data. These have led to an unprecedented rise in the number of reported cases of anomalous diffusion wherein the characteristic features of Brownian motion—namely linear growth of the mean squared displacement with time and the Gaussian form of the probability density function (PDF) to find a particle at a given position at some fixed time—are routinely violated. This presents a big challenge in identifying the underlying stochastic process and also estimating the corresponding parameters of the process to completely describe the observed behaviour. Finding the correct physical mechanism which leads to the observed dynamics is of paramount importance, for example, to understand the first-arrival time of transcription factors which govern gene regulation, or the survival probability of a pathogen in a biological cell post drug administration. Statistical Physics provides useful methods that can be applied to extract such vital information. This cumulative dissertation, based on five publications, focuses on the development, implementation and application of such tools with special emphasis on Bayesian inference and large deviation theory. 
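As a point of reference for the Brownian hallmarks just mentioned, the short sketch below simulates an ordinary random walk and fits the scaling exponent of its time-averaged MSD; for Brownian motion the exponent should be close to 1 (the trajectory length and lag choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
# simulate a single 2D Brownian trajectory (unit diffusivity, unit time step)
N = 10_000
x = np.cumsum(rng.normal(size=(N, 2)), axis=0)

# time-averaged MSD at lag L: average of |x(t+L) - x(t)|^2 along the trajectory
def tamsd(traj, lag):
    disp = traj[lag:] - traj[:-lag]
    return (disp ** 2).sum(axis=1).mean()

lags = np.array([1, 2, 4, 8, 16, 32])
msd = np.array([tamsd(x, L) for L in lags])

# anomalous diffusion is diagnosed from the exponent alpha in MSD ~ lag**alpha;
# a log-log linear fit recovers alpha, here expected to be ~1
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
print(round(alpha, 1))
```

Deviations of the fitted exponent from 1, or a non-Gaussian displacement histogram, are exactly the kinds of violations the inference tools of this thesis are built to detect and attribute to a specific stochastic process.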
Together with the implementation of Bayesian model comparison and parameter estimation methods for models of diffusion, complementary tools are developed based on different observables and large deviation theory to classify stochastic processes and gather pivotal information. Bayesian analysis of the data of micron-sized particles traced in mucin hydrogels at different pH conditions unveiled several interesting features; we gained insights into, for example, how the hydrogel becomes more heterogeneous in going from basic to acidic pH and how phase separation can set in, leading to the observed non-ergodicity (non-equivalence of time and ensemble averages) and non-Gaussian PDF. With the analysis based on large deviation theory we could detect, for instance, non-Gaussianity in the seemingly Brownian diffusion of beads in aqueous solution, anisotropic motion of the beads in mucin at neutral pH conditions, and short-time correlations in climate data. Thus, through the application of the developed methods to biological and meteorological datasets, crucial information is garnered about the underlying stochastic processes, and significant insights are obtained into the physical nature of these systems.
Sections 299a and 299b of the German Criminal Code (StGB) are the result of a legitimate legislative concern; their implementation, however, was deficient.
The protection of legal interests is not consistent, which compels its further development.
Moreover, the uncritical adoption of the interpretations developed for Section 299 StGB, although intended by the legislator, is often misguided. This study questions that approach and presents solutions.
To close existing gaps in criminal liability, it is furthermore indispensable to introduce a constitutionally sound "breach of duty" variant ("Pflichtverletzungsvariante"). In addition, a clearing body would offer legal certainty, in particular with the possibility of approving exchange relationships in the healthcare sector. Proposals to this end are made.
‘The Territorialities of U.S. Imperialisms’ sets into relation U.S. imperial and Indigenous conceptions of territoriality as articulated in U.S. legal texts and Indigenous life writing in the 19th century. It analyzes the ways in which U.S. legal texts as “legal fictions” narratively press to affirm the United States’ territorial sovereignty and coherence in spite of its reliance on a variety of imperial practices that flexibly disconnect and (re)connect U.S. sovereignty, jurisdiction and territory.
At the same time, the book acknowledges Indigenous life writing as legal texts in their own right and with full juridical force, which aim to highlight the heterogeneity of U.S. national territory both from their individual perspectives and in conversation with these legal fictions. Through this, the book’s analysis contributes to a more nuanced understanding of the coloniality of U.S. legal fictions, while highlighting territoriality as a key concept in the fashioning of the narrative of U.S. imperialism.
This work presents a new design for programming environments that promote the exploration of domain-specific software artifacts and the construction of graphical tools for such program comprehension tasks. In complex software projects, tool building is essential because domain- or task-specific tools can support decision making by representing concerns concisely with low cognitive effort. In contrast, generic tools can only support anticipated scenarios, which usually align with programming language concepts or well-known project domains.
However, the creation and modification of interactive tools is expensive because the glue that connects data to graphics is hard to find, change, and test. Even if valuable data is available in a common format and even if promising visualizations could be populated, programmers have to invest many resources to make changes in the programming environment. Consequently, only ideas of predictably high value will be implemented. In the non-graphical, command-line world, the situation looks different and inspiring: programmers can easily build their own tools as shell scripts by configuring and combining filter programs to process data.
We propose a new perspective on graphical tools and provide a concept to build and modify such tools with a focus on high quality, low effort, and continuous adaptability. That is, (1) we propose an object-oriented, data-driven, declarative scripting language that reduces the amount of and governs the effects of glue code for view-model specifications, and (2) we propose a scalable UI-design language that promotes short feedback loops in an interactive, graphical environment such as Morphic known from Self or Squeak/Smalltalk systems.
We implemented our concept as a tool building environment, which we call VIVIDE, on top of Squeak/Smalltalk and Morphic. We replaced existing code browsing and debugging tools to iterate within our solution more quickly. In several case studies with undergraduate and graduate students, we observed that VIVIDE can be applied to many domains such as live language development, source-code versioning, modular code browsing, and multi-language debugging. Then, we designed a controlled experiment to measure the effect on the time to build tools. Several pilot runs showed that training is crucial and, presumably, takes days or weeks, which implies a need for further research.
As a result, programmers as users can directly work with tangible representations of their software artifacts in the VIVIDE environment. Tool builders can write domain-specific scripts to populate views to approach comprehension tasks from different angles. Our novel perspective on graphical tools can inspire the creation of new trade-offs in modularity for both data providers and view designers.
A large body of research now supports the presence of both syntactic and lexical predictions in sentence processing. Lexical predictions, in particular, are considered to indicate a deep level of predictive processing that extends past the structural features of a necessary word (e.g. noun), right down to the phonological features of the lexical identity of a specific word (e.g. /kite/; DeLong et al., 2005). However, evidence for lexical predictions typically focuses on predictions in very local environments, such as the adjacent word or words (DeLong et al., 2005; Van Berkum et al., 2005; Wicha et al., 2004). Predictions in such local environments may be indistinguishable from lexical priming, which is transient and uncontrolled, and as such may prime lexical items that are not compatible with the context (e.g. Kukona et al., 2014). Predictive processing has been argued to be a controlled process, with top-down information guiding preactivation of plausible upcoming lexical items (Kuperberg & Jaeger, 2016). One way to distinguish lexical priming from prediction is to demonstrate that preactivated lexical content can be maintained over longer distances.
In this dissertation, separable German particle verbs are used to demonstrate that preactivation of lexical items can be maintained over multi-word distances. A self-paced reading time and an eye tracking experiment provide some support for the idea that particle preactivation triggered by a verb and its context can be observed by holding the sentence context constant and manipulating the predictability of the particle. Although evidence of an effect of particle predictability was only seen in eye tracking, this is consistent with previous evidence suggesting that predictive processing facilitates only some eye tracking measures, to which the self-paced reading modality may not be sensitive (Staub, 2015; Rayner, 1998). Interestingly, manipulating the distance between the verb and the particle did not affect reading times, suggesting that the surprisal-predicted faster reading times at long distance may only occur when the additional distance is created by material that adds information about the lexical identity of a distant element (Levy, 2008; Grodner & Gibson, 2005). Furthermore, the results provide support for models proposing that temporal decay is not a major influence on word processing (Lewandowsky et al., 2009; Vasishth et al., 2019).
In the third and fourth experiments, event-related potentials were used as a method for detecting specific lexical predictions. In the initial ERP experiment, we found some support for the presence of lexical predictions when the sentence context constrained the number of plausible particles to a single particle. This was suggested by a frontal post-N400 positivity (PNP) that was elicited when a lexical prediction had been violated, but not by violations when more than one particle had been plausible. The results of this study were highly consistent with previous research suggesting that the PNP might be a much sought-after ERP marker of prediction failure (DeLong et al., 2011; DeLong et al., 2014; Van Petten & Luka, 2012; Thornhill & Van Petten, 2012; Kuperberg et al., 2019). However, a second experiment with a larger sample failed to replicate the effect, but did suggest that the relationship of the PNP to predictive processing may not yet be fully understood. Evidence for long-distance lexical predictions was inconclusive.
The conclusion drawn from the four experiments is that preactivation of the lexical entries of plausible upcoming particles did occur and was maintained over long distances. The facilitatory effect of this preactivation at the particle site therefore did not appear to be the result of transient lexical priming. However, the question of whether this preactivation can also lead to lexical predictions of a specific particle remains unanswered. Of particular interest to future research on predictive processing is further characterisation of the PNP. Implications for models of sentence processing may be the inclusion of long-distance lexical predictions, or the possibility that preactivation of lexical material can facilitate reading times and ERP amplitude without commitment to a specific lexical item.
Gold at the nanoscale
(2020)
In this cumulative dissertation, I want to present my contributions to the field of plasmonic nanoparticle science. Plasmonic nanoparticles are characterised by resonances of the free electron gas around the spectral range of visible light. In recent years, they have evolved as promising components for light-based nanocircuits, light harvesting, nanosensors, cancer therapies, and many more applications.
This work presents the articles I authored or co-authored during my time as a PhD student at the University of Potsdam. The main focus lies on the coupling between localised plasmons and excitons in organic dyes. Plasmon–exciton coupling brings light–matter coupling to the nanoscale. This size reduction is accompanied by strong enhancements of the light field which can, among others, be utilised to enhance the spectroscopic footprint of molecules down to single-molecule detection, improve the efficiency of solar cells, or establish lasing on the nanoscale. When the coupling exceeds all decay channels, the system enters the strong coupling regime. In this case, hybrid light–matter modes emerge, utilisable as optical switches, in quantum networks, or as thresholdless lasers. The present work investigates plasmon–exciton coupling in gold–dye core–shell geometries and contains both fundamental insights and technical novelties. It presents a technique which reveals the anticrossing in coupled systems without manipulating the particles themselves. The method is used to investigate the relation between coupling strength and particle size. Additionally, the work demonstrates that pure extinction measurements can be insufficient when trying to assess the coupling regime. Moreover, the fundamental quantum electrodynamic effect of vacuum-induced saturation is introduced. This effect causes the vacuum fluctuations to diminish the polarisability of molecules and has not yet been considered in the plasmonic context.
The work additionally discusses the reaction of gold nanoparticles to optical heating. Such knowledge is of great importance for all potential optical applications utilising plasmonic nanoparticles since optical excitation always generates heat. This heat can induce a change in the optical properties, but also mechanical changes up to melting can occur. Here, the change of spectra in coupled plasmon–exciton particles is discussed and explained with a precise model. Moreover, the work discusses the behaviour of gold nanotriangles exposed to optical heating. In a pump–probe measurement, X-ray probe pulses directly monitored the particles' breathing modes. In another experiment, the triangles were exposed to cw laser radiation with varying intensities and illumination areas. X-ray diffraction directly measured the particles' temperature. Particle melting was investigated with surface enhanced Raman spectroscopy and SEM imaging, demonstrating that larger illumination areas can cause melting at lower intensities. An elaborate methodological and theoretical introduction precedes the articles. In this way, readers without specialist knowledge also get a concise and detailed overview of the theory and methods used in the articles. I introduce localised plasmons in metal nanoparticles of different shapes. For this work, the plasmons were mostly coupled to excitons in J-aggregates. Therefore, I discuss these aggregates of organic dyes with sharp and intense resonances and establish an understanding of the coupling between the two systems. For ab initio simulations of the coupled systems, models for the systems' permittivities are presented, too. Moreover, the route to the sample fabrication – the dye coating of gold nanoparticles, their subsequent deposition on substrates, and the covering with polyelectrolytes – is presented together with the measurement methods that were used for the articles.
Although the lordship of Lindow-Ruppin occupied a central position in the late Middle Ages between the great territorial dominions in the east of the Empire, and although its two territories stretched across a considerable distance, it has largely fallen into oblivion. Only the question of how the Arnstein family, originating from the north-eastern Harz region, established their rule in the Land of Ruppin has given historians occasion to engage with the subject. A differentiated study, however, has so far been lacking.
Drawing on sources that are in part newly discovered, André Stellmacher examines several focal points, such as the marriage and acquisition policy, the distribution of property within the lordship, and the counts' relationship to their vassals. A collection of nearly 1,000 regesta forms the basis for this. Newly created maps and the richly illustrated catalogue of seals in the appendix vividly document the results.
Lifelong learning plays an increasingly important role in many societies. Technology is changing faster than ever, and what has been important to learn today may be obsolete tomorrow. The role of informal programs is becoming increasingly important. Particularly, Massive Open Online Courses have become popular among learners and instructors. In 2008, a group of Canadian education enthusiasts started the first Massive Open Online Courses, or MOOCs, to prove their cognitive theory of Connectivism. Around 2012, a variety of American start-ups redefined the concept of MOOCs. Instead of following the connectivist doctrine, they returned to a more traditional approach. They focussed on video lecturing and combined this with a course forum that allowed the participants to discuss with each other and the teaching team. While this new version of the concept was enormously successful in terms of massiveness, with hundreds of thousands of participants from all over the world joining the first of these courses, many educators criticized the relapse into the cognitivist model. In the early days, the evolving platforms often did not have more features than a video player, simple multiple-choice quizzes, and the course forum. It soon became a major interest of research to allow the scaling of more modern approaches of learning and teaching for the massiveness of these courses. Hands-on exercises, alternative forms of assessment, collaboration, and teamwork are some of the topics on the agenda. The insights provided by cognitive and pedagogical theories, however, do not necessarily always run in sync with the needs and the preferences of the majority of participants. While the former promote action learning, hands-on learning, competence-based learning, project-based learning, and team-based learning as the holy grail, many of the latter often rather prefer a more laid-back style of learning, sometimes referred to as edutainment.
Obviously, given the large numbers of participants in these courses, there is not just one type of learner. Participants are not a homogeneous mass but a potpourri of individuals with a wildly heterogeneous mix of backgrounds, previous knowledge, familial and professional circumstances, countries of origin, gender, age, and so on. For the majority of participants, a full-time job and/or a family often just does not leave enough room for more time-intensive tasks, such as practical exercises or teamwork. Others, however, particularly enjoy these hands-on or collaborative aspects of MOOCs. Furthermore, many subjects particularly require these possibilities and simply cannot be taught or learned in courses that lack collaborative or hands-on features. In this context, the thesis discusses how team assignments have been implemented on the HPI MOOC platform. During recent years, several experiments have been conducted and a great amount of experience has been gained by employing team assignments in courses in areas such as Object-Oriented Programming, Design Thinking, and Business Innovation on various instances of this platform: openHPI, openSAP, and mooc.house.
The two hallmark features of Brownian motion are the linear growth ⟨x²(t)⟩ = 2dDt of the mean squared displacement (MSD) with diffusion coefficient D in d spatial dimensions, and the Gaussian distribution of displacements. With the increasing complexity of the studied systems, deviations from these two central properties have been unveiled over the years. Recently, a large variety of systems have been reported in which the MSD exhibits the linear growth in time of Brownian (Fickian) transport, however, the distribution of displacements is pronouncedly non-Gaussian (Brownian yet non-Gaussian, BNG). A similar behaviour is also observed for viscoelastic-type motion where an anomalous trend of the MSD, i.e., ⟨x²(t)⟩ ~ t^α, is combined with a priori unexpected non-Gaussian distributions (anomalous yet non-Gaussian, ANG). This kind of behaviour observed in BNG and ANG diffusions has been related to the presence of heterogeneities in the systems, and a common approach has been established to address it, that is, the random diffusivity approach.
This dissertation explores extensively the field of random diffusivity models. Starting from a chronological description of all the main approaches used in attempts to describe BNG and ANG diffusion, different mathematical methodologies are defined for the resolution and study of these models. The processes that are reported in this work can be classified into three subcategories, i) randomly-scaled Gaussian processes, ii) superstatistical models and iii) diffusing diffusivity models, all belonging to the more general class of random diffusivity models. Ultimately, the study focuses more on BNG diffusion, which is by now well-established and relatively well-understood. Nevertheless, many examples are discussed for the description of ANG diffusion, in order to highlight the possible scenarios which are known so far for the study of this class of processes.
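The superstatistical branch of these models can be sketched in a few lines; this is a hypothetical minimal example, not code from the dissertation. Drawing each trajectory's diffusivity from an exponential distribution keeps the MSD linear in time while the displacement PDF acquires exponential (Laplace) tails, i.e., Brownian yet non-Gaussian:

```python
import numpy as np

rng = np.random.default_rng(42)

# Superstatistical sketch of BNG diffusion: every trajectory carries a
# fixed diffusivity D drawn from an exponential distribution; its
# displacement at time t is then Gaussian with variance 2*D*t.
n_traj = 100_000
t = 1.0
D = rng.exponential(scale=1.0, size=n_traj)              # random diffusivities
x = rng.normal(0.0, np.sqrt(2.0 * D * t), size=n_traj)   # displacements at time t

# The MSD is still Fickian: <x^2> = 2*<D>*t, i.e. 2.0 here ...
msd = np.mean(x**2)
# ... but the displacement PDF is Laplace, so the excess kurtosis is
# about 3, instead of 0 for a Gaussian.
excess_kurtosis = np.mean(x**4) / msd**2 - 3.0
```

The excess kurtosis of roughly 3 is the fingerprint of the exponential tails; repeating the draw of D at every step (a diffusing diffusivity) would restore Gaussianity at long times while keeping the short-time non-Gaussian shape.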
The second part of the dissertation deals with the statistical analysis of random diffusivity processes. A general description based on the concept of moment-generating function is initially provided to obtain standard statistical properties of the models. Then, the discussion moves to the study of the power spectral analysis and the first passage statistics for some particular random diffusivity models. A comparison between the results coming from the random diffusivity approach and the ones for standard Brownian motion is discussed. In this way, a deeper physical understanding of the systems described by random diffusivity models is also outlined.
To conclude, a discussion based on the possible origins of the heterogeneity is sketched, with the main goal of inferring which kind of systems can actually be described by the random diffusivity approach.
Redox signalling in plants
(2020)
Once proteins are synthesized, they can additionally be modified by post-translational modifications (PTMs). Proteins containing reactive cysteine thiols, stabilized in their deprotonated form as thiolates (RS-) due to their local environment, serve as redox sensors by undergoing a multitude of oxidative PTMs (Ox-PTMs). Ox-PTMs such as S-nitrosylation or the formation of inter- or intramolecular disulfide bridges induce functional changes in these proteins. Proteins containing cysteines whose thiol oxidation state regulates their functions belong to the so-called redoxome. Such Ox-PTMs are controlled by site-specific cellular events that play a crucial role in protein regulation, affecting enzyme catalytic sites, ligand binding affinity, protein-protein interactions or protein stability. Reversible protein thiol oxidation is an essential regulatory mechanism of photosynthesis, metabolism, and gene expression in all photosynthetic organisms. Therefore, studying PTMs will remain crucial for understanding plant adaptation to external stimuli like fluctuating light conditions. Optimizing methods suitable for studying plant Ox-PTMs is of high importance for elucidating the plant redoxome. This study focusses on thiol modifications occurring in plants and provides novel insight into the in vivo redoxome of Arabidopsis thaliana in response to light vs. dark. This was achieved by utilizing a resin-assisted thiol enrichment approach. Furthermore, confirmation of candidates on the single-protein level was carried out by a differential labelling approach. The thiols and disulfides were differentially labelled, and the protein levels were detected using immunoblot analysis. Further analysis focused on light-reduced proteins. By the enrichment approach many well-studied redox-regulated proteins were identified.
Amongst those were fructose 1,6-bisphosphatase (FBPase) and sedoheptulose-1,7-bisphosphatase (SBPase), which have previously been described as targets of the thioredoxin system. The redox-regulated proteins identified in the current study were compared to several published, independent results showing redox-regulated proteins in Arabidopsis leaves, roots, mitochondria and specifically S-nitrosylated proteins. These proteins were excluded as potential new candidates but serve as a proof of concept that the enrichment experiments were effective. Additionally, the CSP41A and CSP41B proteins, which emerged from this study as potential targets of redox regulation, were analyzed by Ribo-Seq. The active translatome study of the csp41a mutant vs. wild-type showed most of the significant changes at the end of the night, as did that of csp41b. Yet, in both mutants only several chloroplast-encoded genes were altered. Further studies of the CSP41A and CSP41B proteins are needed to reveal their functions and elucidate the role of redox regulation of these proteins.
A distance education or e-learning platform should be able to provide a virtual laboratory that lets participants gain hands-on exercise experience and practice their skills remotely. This is especially true in cybersecurity e-learning, where the participants need to be able to attack or defend an IT system. For hands-on exercises, the virtual laboratory environment must be similar to the real operational environment, where an attacker or a victim is represented by a node in the virtual laboratory environment. A node is usually represented by a Virtual Machine (VM). Scalability has become a primary issue in virtual laboratories for cybersecurity e-learning because a VM needs a significant and fixed allocation of resources. The available resources limit the number of simultaneous users. Scalability can be increased by increasing the efficiency of using the available resources and by providing more resources. Increasing scalability means increasing the number of simultaneous users.
In this thesis, we propose two approaches to increase the efficiency of using the available resources. The first approach to increasing efficiency is to replace virtual machines (VMs) with containers whenever possible. The second approach is to share the load with the user-on-premise machine, where the user-on-premise machine represents one of the nodes in a virtual laboratory scenario. We also propose two approaches to providing more resources. One way to provide more resources is to use public cloud services. Another way is to gather resources from the crowd, which is referred to as Crowdresourcing Virtual Laboratory (CRVL).
In CRVL, the crowd can contribute their unused resources in the form of a VM, a bare-metal system, an account in a public cloud, a private cloud, or an isolated group of VMs; in this thesis, we focus on a VM. The contributor must give the credentials of the VM admin or root user to the CRVL system. We propose an architecture and methods to integrate VMs into, or remove them from, the CRVL system automatically. A team placement algorithm must also be investigated to optimize the usage of resources while at the same time giving the best service to the user. Because the CRVL system does not manage the contributor host machine, the CRVL system must be able to make sure that the VM integration will not harm that system and that the training material will be stored securely on the contributor side, so that no one is able to take the training material away without permission. We investigate ways to handle these kinds of threats.
We propose three approaches to protect the VM from a malicious host admin. To verify the integrity of a VM before integration into the CRVL system, we propose a remote verification method that does not require additional hardware such as a Trusted Platform Module chip. As the owner of the host machine, the host admin could access the VM's data via Random Access Memory (RAM) by performing live memory dumping, Spectre, or Meltdown attacks. To make it harder for a malicious host admin to obtain sensitive data from RAM, we propose a method that continually moves sensitive data in RAM. We also propose a method to monitor the host machine by installing an agent on it. The agent monitors the hypervisor configurations and the host admin's activities.
To evaluate our approaches, we conduct extensive experiments with different settings. The use case in our approach is Tele-Lab, a virtual laboratory platform for cybersecurity e-learning. We use this platform as a basis for designing and developing our approaches. The results show that our approaches are practical and provide enhanced security.
In today's world, many applications produce large amounts of data at an enormous rate. Analyzing such datasets for metadata is indispensable for effectively understanding, storing, querying, manipulating, and mining them. Metadata summarizes technical properties of a dataset, which range from basic statistics to complex structures describing data dependencies. One type of dependency is the inclusion dependency (IND), which expresses subset relationships between attributes of datasets. Inclusion dependencies are therefore important for many data management applications in terms of data integration, query optimization, schema redesign, or integrity checking. Hence, the discovery of inclusion dependencies in unknown or legacy datasets is at the core of any data profiling effort.
For exhaustively detecting all INDs in large datasets, we developed S-indd++, a new algorithm that eliminates the shortcomings of existing IND-detection algorithms and significantly outperforms them. S-indd++ is based on a novel concept of attribute clustering for efficiently deriving INDs. Inferring INDs from our attribute clustering eliminates all redundant operations caused by other algorithms. S-indd++ is also based on a novel partitioning strategy that enables discarding a large number of candidates in early phases of the discovery process. Moreover, S-indd++ does not require a partition to fit into main memory -- a highly appreciable property in the face of ever-growing datasets. S-indd++ reduces the runtime of the state-of-the-art approach by up to 50%.
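The underlying idea of deriving unary INDs from value-based groupings of attributes can be sketched as follows; this is a simplified, hypothetical illustration of the principle, not the actual S-indd++ implementation, which adds clustering, partitioning, and out-of-core processing on top of it. An attribute A is included in an attribute B exactly when B occurs in every group of attributes that share one of A's values:

```python
from collections import defaultdict

def unary_inds(columns):
    """Derive all unary INDs A ⊆ B from a mapping of attribute name to
    its values, via an inverted index from each value to the set of
    attributes containing it (a simplified view of value-based
    attribute grouping)."""
    # Inverted index: value -> attributes in which it occurs.
    index = defaultdict(set)
    for attr, values in columns.items():
        for v in values:
            index[v].add(attr)
    # A ⊆ B iff B occurs in every attribute group of A's values,
    # so intersect the groups per attribute.
    candidates = {a: set(columns) - {a} for a in columns}
    for attrs in index.values():
        for a in attrs:
            candidates[a] &= attrs
    return {(a, b) for a, refs in candidates.items() for b in refs}

# Hypothetical columns to demonstrate the derivation.
cols = {
    "orders.customer_id": [1, 2, 3],
    "customers.id":       [1, 2, 3, 4, 5],
    "archive.id":         [4, 5],
}
inds = unary_inds(cols)
# Both orders.customer_id ⊆ customers.id and archive.id ⊆ customers.id hold.
```

Each pass over the inverted index only shrinks the candidate sets, which is why an incremental variant (as in the later chapter) can maintain the groups under data updates instead of recomputing everything.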
None of the existing approaches for discovering INDs is appropriate for application on dynamic datasets; they cannot update the INDs after an update of the dataset without reprocessing it entirely. To this end, we developed the first approach for incrementally updating INDs in frequently changing datasets. We achieved this by reducing the problem of incrementally updating INDs to that of incrementally updating the attribute clustering from which all INDs are efficiently derivable. We realized the update of the clusters by designing new operations to be applied to the clusters after every data update. The incremental update of INDs reduces the time of a complete rediscovery by up to 99.999%.
All existing algorithms for discovering n-ary INDs are based on the principle of candidate generation--they generate candidates and test their validity in the given data instance. The major disadvantage of this technique is the exponentially growing number of database accesses in terms of SQL queries required for validation. We devised Mind2, the first approach for discovering n-ary INDs without candidate generation. Mind2 is based on a new mathematical framework developed in this thesis for computing the maximum INDs from which all other n-ary INDs are derivable. The experiments showed that Mind2 is significantly more scalable and effective than hypergraph-based algorithms.
To investigate the reliability and stability of spherical harmonic models based on archeo-/paleomagnetic data, 2,000 geomagnetic models were calculated. All models are based on the same data set but with randomized uncertainties. Comparison of these models to the geomagnetic field model gufm1 showed that large-scale magnetic field structures up to spherical harmonic degree 4 are stable throughout all models. By ranking all models according to the agreement of their dipole coefficients with gufm1, more realistic uncertainty estimates were derived than those the authors of the data provide.
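The randomized-uncertainty strategy can be illustrated with a toy example in which a linear fit stands in for the spherical harmonic inversion; all numbers and the model itself are illustrative assumptions. Each datum is perturbed within its reported uncertainty, a model is fitted to every perturbed data set, and the spread of the fitted coefficients yields an empirical uncertainty estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observations" with reported 1-sigma uncertainties.
t = np.linspace(0.0, 10.0, 50)
true_slope, true_offset = 2.0, 5.0
data = true_offset + true_slope * t
sigma = np.full_like(t, 1.0)

# Fit an ensemble of models, each to a data set randomized within errors.
n_models = 2000
slopes = np.empty(n_models)
for i in range(n_models):
    perturbed = data + rng.normal(0.0, sigma)   # randomize within uncertainties
    slopes[i], _ = np.polyfit(t, perturbed, 1)  # one "model" per perturbed set

slope_mean = slopes.mean()  # ensemble-mean coefficient (cf. arhimag1k's mean)
slope_std = slopes.std()    # empirical coefficient uncertainty from the spread
```

The ensemble mean recovers the underlying coefficient, while the scatter of the ensemble directly quantifies how strongly the data uncertainties propagate into that coefficient.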
The derived uncertainty estimates were used in further modelling, which combines archeo-/paleomagnetic and historical data. The huge difference in data count, accuracy, and coverage of these two very different data sources made it necessary to introduce a time-dependent spatial damping, which was constructed to constrain the spatial complexity of the model. Finally, 501 models were calculated by treating each data point as a Gaussian random variable whose mean is the original value and whose standard deviation is its uncertainty. The final model arhimag1k is calculated by taking the mean of the 501 sets of Gauss coefficients. arhimag1k fits different dependent and independent data sets well. It shows an early reverse flux patch at the core-mantle boundary between 1000 AD and 1200 AD at the location of today's South Atlantic Anomaly. Another interesting feature is a high-latitude flux patch over Greenland between 1200 and 1400 AD. The dipole moment shows a constant behaviour between 1600 and 1840 AD.
In the second part of the thesis, four new paleointensities from four different flows of the island of Fogo, which is part of Cape Verde, are presented. The data are fitted well by arhimag1k, with the exception of the value of 28.3 microtesla at 1663, which is approximately 10 microtesla lower than the model suggests.
One of the tremendous discoveries by the Cassini spacecraft has been the detection of propeller structures in Saturn's A ring. Although the generating moonlet is too small to be resolved by the cameras aboard Cassini, the density structure its gravity produces within the rings can be well observed. The largest observed propeller is called Blériot and has an azimuthal extent of several thousand kilometers. Thanks to its large size, Blériot could be identified in different images over a time span of more than 10 years, allowing the reconstruction of its orbital evolution. It turns out that Blériot deviates considerably, by several thousand kilometers in the azimuthal direction, from its expected Keplerian orbit. This excess motion can be well reconstructed by a superposition of three harmonics and therefore resembles the typical fingerprint of a resonantly perturbed body. This PhD thesis is devoted to the excess motion of Blériot. Resonant perturbations are known for some of the outer satellites of Saturn. Thus, in the first part of this thesis, we search for suitable resonance candidates near the propeller which might explain the observed periods and amplitudes. In numerical simulations, we show that resonances with Prometheus, Pandora and Mimas can indeed explain the libration periods in good agreement, but not the amplitudes. The amplitude problem is solved by the introduction of a propeller-moonlet interaction model, in which we assume a broken symmetry of the propeller due to a small displacement of the moonlet. This results in a librating motion of the moonlet around the propeller's symmetry center due to the non-vanishing accelerations. The retardation of the reaction of the propeller structure to the motion of the moonlet causes the propeller to become asymmetric. Hydrodynamic simulations to test our analytical model confirm our predictions.
In the second part of this thesis, we consider stochastic migration of the moonlet as an alternative hypothesis to explain the observed excess motion of Blériot. The mean longitude is a time-integrated quantity and thus introduces a correlation between the independent kicks of a random walk, smoothing the noise and making the residual look similar to the one observed for Blériot. We apply a diagonalization test to decorrelate the observed residuals for the propellers Blériot and Earhart and the ring-moon Daphnis. It turns out that the decorrelated distributions do not strictly follow the expected Gaussian distribution. Because the decorrelation method fails to distinguish a correlated random walk from a noisy libration, we provide an alternative study. Assuming the three-harmonic fit to be a valid representation of the excess motion of Blériot, independently of its origin, we test the likelihood that this excess motion can be created by a random walk. It turns out that neither an uncorrelated nor a correlated random walk is likely to explain the observed excess motion.
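The smoothing effect of time integration described here can be illustrated numerically: independent kicks accumulate into a random walk in the mean motion, and integrating once more into the mean longitude yields a strongly correlated, libration-like residual. This is a sketch in arbitrary units, not the thesis data:

```python
import numpy as np

rng = np.random.default_rng(1)
kicks = rng.normal(0.0, 1.0, 300)            # independent mean-motion kicks (arbitrary units)
mean_motion = np.cumsum(kicks)               # random walk in the mean motion
longitude_residual = np.cumsum(mean_motion)  # mean longitude integrates the mean motion

def lag1_autocorr(x):
    # Lag-1 autocorrelation: near 0 for white noise, near 1 for a smooth series.
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))
```

The kicks themselves are uncorrelated (lag-1 autocorrelation near zero), while the integrated longitude residual is almost perfectly correlated from one epoch to the next, which is why it can visually mimic a libration.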
The hepatokine FGF21 and the adipokine chemerin have been implicated as metabolic regulators and mediators of inter-tissue crosstalk. While FGF21 is associated with beneficial metabolic effects and is currently being tested as an emerging therapeutic for obesity and diabetes, chemerin is linked to inflammation-mediated insulin resistance. However, the dietary regulation of both organokines and their role in tissue interaction need further investigation.
The LEMBAS nutritional intervention study investigated the effects of two diets differing in their protein content in obese human subjects with non-alcoholic fatty liver disease (NAFLD). The study participants consumed hypocaloric diets containing either low (LP: 10 EN%, n = 10) or high (HP: 30 EN%, n = 9) dietary protein for 3 weeks prior to bariatric surgery. Before and after the intervention, the participants were anthropometrically assessed, blood samples were drawn, and hepatic fat content was determined by MRS. During bariatric surgery, paired subcutaneous and visceral adipose tissue biopsies as well as liver biopsies were collected. The aim of this thesis was to investigate circulating levels and tissue-specific regulation of (1) FGF21 and (2) chemerin in the LEMBAS cohort. The results were compared to data obtained in 92 metabolically healthy subjects with normal glucose tolerance and normal liver fat content.
(1) Serum FGF21 concentrations were elevated in the obese subjects and strongly associated with intrahepatic lipids (IHL). In accordance, FGF21 serum concentrations increased with the severity of NAFLD as determined histologically in the liver biopsies. Though both diets were successful in reducing IHL, the effect was more pronounced in the HP group. FGF21 serum concentrations and mRNA expression were bi-directionally regulated by dietary protein, independently of metabolic improvements. In accordance, in the healthy study subjects, serum FGF21 concentrations dropped by more than 60% in response to the HP diet. A short-term HP intervention confirmed the acute downregulation of FGF21 within 24 hours. Lastly, experiments in HepG2 cell cultures and primary murine hepatocytes showed that the nitrogen metabolites NH4Cl and glutamine dose-dependently suppress FGF21 expression.
(2) Circulating chemerin concentrations were considerably elevated in the obese versus the lean study participants and were differently associated with markers of obesity and NAFLD in the two cohorts. The adipokine decreased in response to the hypocaloric interventions, while an unhealthy high-fat diet induced a rise in chemerin serum levels. In the lean subjects, mRNA expression of RARRES2, encoding chemerin, was strongly and positively correlated with the expression of several cytokines, including MCP1, TNFα, and IL6, as well as markers of macrophage infiltration in the subcutaneous fat depot. However, RARRES2 was not associated with any cytokine assessed in the obese subjects, and the data indicated an involvement of chemerin not only in the onset but also in the resolution of inflammation. Analyses of the tissue biopsies and experiments in human primary adipocytes point towards a role of chemerin in adipogenesis, although discrepancies between the in vivo and in vitro data were detected.
Taken together, the results of this thesis demonstrate that circulating FGF21 and chemerin levels are considerably elevated in obesity and responsive to dietary interventions. FGF21 was acutely and bi-directionally regulated by dietary protein in a hepatocyte-autonomous manner. Given that both a lack of essential amino acids and an excessive nitrogen intake exert metabolic stress, FGF21 may serve as an endocrine signal for dietary protein balance. Lastly, the data revealed that chemerin is dysregulated in obesity and associated with obesity-related inflammation. However, future studies on chemerin should consider functional and regulatory differences between secreted and tissue-specific isoforms.
With his September 2015 speech “Breaking the tragedy of the horizon”, the Governor of the Bank of England, Mark Carney, put climate change on the agenda of financial market regulators. Until then, climate change had been framed mainly as a problem of negative externalities leading to long-term economic costs, which resulted in countries trying to keep the short-term costs of climate action to a minimum. Carney argued that climate change, as well as climate policy, can also lead to short-term financial risks, potentially causing strong adjustments in asset prices. Analysing the effect of a sustainability transition on the financial sector challenges traditional economic and financial analysis and requires a much deeper understanding of the interrelations between climate policy and financial markets.
This dissertation thus investigates the implications of climate policy for financial markets as well as the role of financial markets in a transition to a sustainable economy. The approach combines insights from macroeconomic and financial risk analysis. Following an introduction and classification in Chapter 1, Chapter 2 presents a macroeconomic analysis that combines ambitious climate targets (negative externality) with technological innovation (positive externality), adaptive expectations, and an investment program, resulting in overall positive macroeconomic outcomes. The analysis also reveals the limitations of climate economic models in their representation of financial markets. Therefore, the subsequent part of this dissertation is concerned with the link between climate policies and financial markets. In Chapter 3, an empirical analysis of stock-market responses to the announcement of climate policy targets is performed to investigate the impacts of climate policy on financial markets. Results show that 1) international climate negotiations have an effect on asset prices and 2) investors increasingly recognize transition risks in carbon-intensive investments. In Chapter 4, an analysis of equity markets and the interbank market shows that transition risks can potentially affect a large part of the equity market and that financial interconnections can amplify negative shocks. In Chapter 5, an analysis of mortgage loans shows how information on climate policy and the energy performance of buildings can be integrated into risk management and reflected in interest rates.
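The announcement analysis in Chapter 3 is, in general terms, an event study. The following is a minimal sketch of that technique with an entirely synthetic stock series and an injected announcement shock; the market-model parameters, shock size, and window lengths are hypothetical, not the thesis data:

```python
import numpy as np

rng = np.random.default_rng(2)
T, event_day = 250, 200
market = rng.normal(0.0003, 0.01, T)                     # daily market returns (synthetic)
stock = 0.0001 + 1.2 * market + rng.normal(0, 0.008, T)  # stock follows the market
stock[event_day:event_day + 3] -= 0.02                   # injected negative reaction to an announcement

# Estimate the market model on a pre-event window, then cumulate abnormal returns.
beta, alpha = np.polyfit(market[:150], stock[:150], 1)
abnormal = stock - (alpha + beta * market)
car = abnormal[event_day - 1:event_day + 5].sum()        # cumulative abnormal return around the event
```

A significantly negative cumulative abnormal return around the announcement date is the kind of evidence that investors price transition risks into carbon-intensive assets.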
While the costs of climate action have been explored in great depth, this dissertation offers two main contributions. First, it highlights the importance of a green investment program to strengthen the macroeconomic benefits of climate action. Second, it shows different approaches for integrating transition risks and opportunities into financial market analysis. Anticipating potential losses and gains in the value of financial assets as early as possible can make the financial system more resilient to transition risks and can stimulate investments into the decarbonization of the economy.
The earth’s ecosystems undergo considerable changes characterized by human-induced alterations of environmental factors. In order to develop conservation goals for vulnerable ecosystems, research on ecosystem functioning is required. It is therefore crucial to explore organismal interactions, such as trophic interaction or competition, which are decisive for key processes in ecosystems. These interactions are determined by the performance responses of organisms to environmental changes, which, in turn, are shaped by the organisms’ functional traits. Exploring traits, their variation, and the environmental factors that act on them may provide insights into how ecological interactions affect populations, community structures and dynamics, and thus ecosystem functioning. In aquatic ecosystems, global warming intensifies phytoplankton blooms, which are more frequently dominated by cyanobacteria. As cyanobacteria are poor in polyunsaturated fatty acids (PUFA) and sterols, this compositional change alters the biochemical food quality of phytoplankton for consumer species, with potential effects on ecological interactions. Within this thesis, I studied the effects of biochemical food quality on consumer traits and performance responses at the phytoplankton-zooplankton interface, using different strains of the two closely related generalist rotifer species Brachionus calyciflorus and Brachionus fernandoi and three phytoplankton species that differ in their biochemical food quality, i.e., in their content and composition of PUFA and sterols. In a series of laboratory feeding experiments, I found that biochemical food quality affected the rotifers’ performance, i.e., fecundity, survival, and population growth, across a broad range of food quantities. Biochemical food quality constraints, which are often underestimated as influencing environmental factors, had strong impacts on performance responses. I further explored the potential of biochemical food quality to mediate consumer response variation between species and among strains of one species. Co-limitation by food quantity and biochemical food quality resulted in differences in performance responses, which were more pronounced within than between rotifer species. Furthermore, I demonstrated that the body PUFA compositions of rotifer species and strains were differently affected by the dietary PUFA supply, which indicates inter- and intraspecific differences in physiological traits, such as PUFA retention, allocation, and/or bioconversion capacity, within the genus Brachionus. This indicates that dietary PUFA are involved in shaping traits and performance responses of rotifers. This thesis reveals that biochemical food quality is an environmental factor with strong effects on individual traits and performance responses of consumers. Biochemical food quality constraints can further mediate trait and response variation among species or strains. Consequently, they have the potential to shape ecological interactions and evolutionary processes, with effects on community structures and dynamics. Trait-based approaches that include food quality research may thus provide further insights into the linkage between functional diversity and the maintenance of crucial ecosystem functions.
Chloroplasts are the photosynthetic organelles in plant and algae cells that enable photoautotrophic growth. Due to their prokaryotic origin, modern-day chloroplast genomes harbor 100 to 200 genes. These genes encode core components of the photosynthetic complexes and the chloroplast gene expression machinery, making most of them essential for the viability of the organism. The regulation of these genes is dominated by translational adjustments. The powerful technique of ribosome profiling has been used successfully to generate highly resolved pictures of the translational landscape of the Arabidopsis thaliana cytosol, identifying translation of upstream open reading frames and long non-coding transcripts. In addition, differences in plastidial translation and ribosomal pausing sites were addressed with this method. However, a highly resolved picture of the chloroplast translatome is missing. Here, using chloroplast isolation and targeted ribosome affinity purification, I generated highly enriched ribosome profiling datasets of the chloroplast translatome of Nicotiana tabacum in the dark and in the light. Chloroplast isolation was found unsuitable for the unbiased analysis of translation in the chloroplast but adequate to identify potential co-translational import. Affinity purification was performed independently for the small and the large ribosomal subunit. The enriched datasets mirrored the results obtained from whole-cell ribosome profiling. Enhanced translational activity was detected for psbA in the light. An alternative translation initiation mechanism was not identified by selective enrichment of small ribosomal subunit footprints. In sum, this is the first study to use enrichment strategies to obtain high-depth ribosome profiling datasets of chloroplasts in order to study ribosome subunit distribution and chloroplast-associated translation.
Ever-changing light intensities challenge the photosynthetic capacity of photosynthetic organisms. Increased light intensities may lead to over-excitation of photosynthetic reaction centers, resulting in damage to the photosystem core subunits. In addition to an expensive repair mechanism for the photosystem II core protein D1, photosynthetic organisms have developed various features to reduce or prevent photodamage. In the long term, photosynthetic complex contents are adjusted for the efficient use of the experienced irradiation. However, the contribution of chloroplast gene expression to the acclimation process has remained largely unknown. Here, comparative transcriptome and ribosome profiling was performed on a genome-wide scale for the early time points of high-light acclimation in Nicotiana tabacum chloroplasts. The time-course data revealed stable transcript levels and only minor changes in the translational activity of specific chloroplast genes during high-light acclimation. Yet, psbA translation was increased two-fold in the high light, from shortly after the shift until the end of the experiment. A stress-inducing shift from low to high light increased the translation of psbA only. This study indicates that acclimation does not start within the observed time frame and that only short-term responses reducing photoinhibition were observed.
To identify the unintended consequences of their policies as well, governments undertake comprehensive regulatory impact assessments. Increasingly, they have these assessments scrutinized by independent expert bodies. But how do these bodies achieve influence? And what role do they play as policy advisers for reducing bureaucracy and improving regulation? The book offers new insights into the historical development and working reality of the three most experienced regulatory scrutiny councils in Europe. Against the background of different administrative cultures, the council types “watchdog”, “gatekeeper” and “critical friend” are identified. The results sharpen the political and scholarly debate on the performance of regulatory scrutiny councils.
The present study analysed the direct relationship between a work-related social group work programme and the outcome of occupational reintegration among rehabilitation patients facing particular occupational problems. It was funded by the Deutsche Rentenversicherung Bund as a research project from 1 January 2013 to 31 December 2015 and carried out at the Chair of Rehabilitation Sciences at the University of Potsdam.
The research question was: Can an intensive social-work group intervention within inpatient medical rehabilitation strengthen patients' social skills and social support to such an extent that long-term improvements in occupational reintegration result in comparison with conventional treatment?
The study comprised a qualitative and a quantitative survey with an intervention in between. It included 352 patients aged between 18 and 65 years with cardiovascular diagnoses, whose conditions are frequently accompanied by complex problem situations and a poor socio-medical prognosis.
The group intervention was evaluated in a cluster-randomised controlled study design in order to provide empirical evidence of whether the intervention can achieve greater effects than regular social-work treatment. The intervention groups took part in the group programme; the control groups received the regular social-work treatment.
In this sample, no evidence could be found that participation in the social-work group programme improved occupational reintegration, health-related work ability, quality of life, or social support. The return-to-work rate was 43.7%, and one quarter of the study group was unemployed after one year. The group intervention examined must be regarded as equivalent to the conventional social-work setting.
In conclusion, the study points to social-work support of occupational reintegration over a longer period after a cardiovascular illness, in particular through services close to patients' homes at a later stage, when health is more stable. The surveys suggested possible gains from closer cooperation between the fields of social work and psychology. There were also indications of the influential role of relatives, who could support the reintegration process if involved in social counselling. The fit of the group interventions examined should be improved through targeted social diagnostics.
Crowdinvesting
(2020)
Financing by the crowd: crowdinvesting, also known as crowd financing (Schwarmfinanzierung), has gained considerably in relevance in recent years. Crowdinvesting offers start-up and growth companies, but also developers of real-estate projects, a genuine alternative to the classic bank loan. The companies issue a call for funding via the internet, and a large number of retail investors and business angels can participate in the financing with small or large amounts.
The book examines fundamental questions such as:
- Is crowdinvesting the same as crowdfunding? Are there other forms of crowd financing?
- What advantages and rights do investors obtain in crowdinvesting?
- What legal framework applies to crowdinvesting financings?
- Are there efforts towards a uniform EU-wide legal framework?
- How is income from crowdinvesting taxed? Does value-added tax have to be paid?
Clearly structured and comprehensibly written
The book offers the reader a comprehensive account of the concept of crowdinvesting and its distinction from similar forms of crowd financing. It then classifies the legal relationships between the various parties involved in crowdinvesting under civil law. The current supervisory framework for crowdinvesting is subsequently presented and critically examined in light of economic theories (in particular findings from behavioral finance), as well as reviewed for its constitutional permissibility. Current developments towards a European legal framework for crowdinvesting are also discussed. Finally, the book answers questions of income taxation and value-added taxation.
Advantages at a glance
- a comprehensive account of a topical and economically relevant subject
- a reference work for civil-law, supervisory-law and tax questions in crowdinvesting
- readily comprehensible even without prior knowledge
Target audience
For lawyers, companies, academics, business economists and economists, as well as anyone interested in crowd financing.
Geomechanical and petrological characterisation of exposed slip zones, Alpine Fault, New Zealand
(2020)
The Alpine Fault is a large, plate-bounding, strike-slip fault extending along the north-western edge of the Southern Alps, South Island, New Zealand. It regularly accommodates large (MW > 8) earthquakes and has a high statistical probability of failure in the near future, i.e., it is late in its seismic cycle. This pending earthquake and the associated co-seismic landslides are expected to cause severe infrastructural damage affecting thousands of people, so the fault presents a substantial geohazard. The interdisciplinary study presented here aims to characterise the fault zone's 4D (space and time) architecture, because this provides information about its rheological properties that will enable better assessment of the hazard the fault poses.
The studies undertaken include field investigations of principal slip zone fault gouges exposed along strike of the fault, and subsequent laboratory analyses of these outcrop samples and additional borehole samples. These observations have provided new information on (I) characteristic microstructures down to the nanoscale that indicate which deformation mechanisms operated within the rocks, (II) mineralogical information that constrains the fault's geomechanical behaviour, and (III) geochemical compositional information that allows the influence of fluid-related alteration processes on material properties to be unraveled.
Results show that along-strike variations of fault rock properties such as microstructures and mineralogical composition are minor and/or do not substantially influence fault zone architecture. They furthermore provide evidence that the architecture of the fault zone, particularly its fault core, is more complex than previously considered, and also more complex than expected for this sort of mature fault cutting quartzofeldspathic rocks. In particular, our results strongly suggest that the fault has more than one principal slip zone, and that these form an anastomosing network extending into the basement below the cover of Quaternary sediments.
The observations detailed in this thesis highlight that two major processes, (I) cataclasis and (II) authigenic mineral formation, are the major controls on the rheology of the Alpine Fault. The velocity-weakening behaviour of its fault gouge is favoured by abundant nanoparticles promoting powder lubrication and grain rolling rather than frictional sliding. Wall-rock fragmentation is accompanied by co-seismic, fluid-assisted dilatancy that is recorded by calcite cementation. This mineralisation, along with authigenic formation of phyllosilicates, quickly alters the petrophysical fault zone properties after each rupture, restoring fault competency. Dense networks of anastomosing and mutually cross-cutting calcite veins and an intensively reworked gouge matrix demonstrate that strain repeatedly localised within the narrow fault gouge. Abundant undeformed euhedral chlorite crystallites, and calcite veins cross-cutting both fault gouge and the gravels that overlie basement on the fault's footwall, provide evidence that authigenic phyllosilicate growth, fluid-assisted dilatancy and the associated fault healing are particularly active close to the Earth's surface in this fault zone.
Exposed Alpine Fault rocks are subject to intense weathering as a direct consequence of the abundant orogenic rainfall associated with the fault's location at the base of the Southern Alps. Furthermore, fault rock rheology is substantially affected by shallow-depth conditions such as the juxtaposition of competent hanging-wall fault rocks on poorly consolidated footwall sediments. This means the microstructural, mineralogical and geochemical properties of the exposed fault rocks may differ substantially from those at deeper levels, and thus are not characteristic of the majority of the fault rocks' history. Examples are (I) frictionally weak smectites found within the fault gouges, artefacts of near-surface temperature conditions whose petrophysical properties are not typical of most Alpine Fault rocks, (II) grain-scale dissolution resulting from subaerial weathering rather than from deformation by pressure-solution processes, and (III) fault gouge geometries that are more complex than those expected for their deeper counterparts.
The methodological approaches deployed in analyses of this and other fault zones, and the major results of this study, are finally discussed in order to contextualize slip zone investigations of fault zones and landslides. Like faults, landslides are major geohazards, which highlights the importance of characterising their geomechanical properties. Similarities between landslides and faults, especially faults exposed to subaerial processes, include mineralogical composition and geomechanical behaviour; in both, failure occurs predominantly by cataclastic processes, although aseismic creep promoted by weak phyllosilicates is not uncommon. Consequently, the multidisciplinary approach commonly used to investigate fault zones may contribute to a better understanding of landslide faulting processes and to the assessment of their hazard potential.
In Germany, the task of press self-regulation is carried out by the German Press Council (Deutscher Presserat), which has faced continual criticism since its foundation. This study examines whether the German Press Council's work to date meets the requirements of successful self-regulation. Eleven proposed solutions are then examined for their legal feasibility and their effects on press oversight in Germany. In addition, the British model of press self-regulation is compared with the German one in order to identify differences and similarities and to explore suggestions and proposals for improvement for the future. Owing to the similar structures at their inception and the largely parallel existence of press councils in both countries, the British model is particularly well suited for comparison with the German Press Council.
Even though the majority of individuals know that exercising is healthy, a high percentage struggle to achieve the recommended amount of exercise. The (social-cognitive) theories that are commonly applied to explain exercise motivation rest on the assumption that people base their decisions mainly on rational reasoning. However, behavior is not bound to reflection alone. In recent years, the role of automaticity and affect in exercise motivation has been increasingly discussed. In this dissertation, central assumptions of the affective–reflective theory of physical inactivity and exercise (ART; Brand & Ekkekakis, 2018), an exercise-specific dual-process theory that emphasizes the role of a momentary automatic affective reaction in exercise decisions, were examined. The central aim of this dissertation was to investigate exercisers' and non-exercisers' automatic affective reactions to exercise-related stimuli (i.e., the type-1 process). In particular, the two components of the ART's type-1 process, namely automatic associations with exercise and the automatic affective valuation of exercise, were under study.
In the first publication (Schinkoeth & Antoniewicz, 2017), research on automatic (evaluative) associations with exercise was summarized and evaluated in a systematic review. The results indicated that automatic associations with exercise are relevant predictors of exercise behavior and other exercise-related variables, providing evidence for a central assumption of the ART's type-1 process. Furthermore, indirect methods appear suitable for assessing automatic associations. The aim of the second publication (Schinkoeth, Weymar, & Brand, 2019) was to approach the somato-affective core of the automatic valuation of exercise by analyzing the reactivity of vagal heart rate variability (HRV) while participants viewed exercise-related pictures. Results revealed that differences in exercise volume could be regressed on HRV reactivity. In light of the ART, these findings were interpreted as evidence of an inter-individual affective reaction elicited at the thought of exercise and triggered by exercise stimuli. In the third publication (Schinkoeth & Brand, 2019, subm.), the aim was to disentangle the two components of the ART's type-1 process, automatic associations and the affective valuation of exercise, and to relate them to each other. Automatic associations with exercise were assessed with a recoding-free variant of an implicit association test (IAT). Analysis of HRV reactivity was applied to approach a somatic component of the affective valuation, and facial reactions in a facial expression (FE) task served as indicators of the valence of the automatic affective reaction. Exercise behavior was assessed via self-report. The measurement of the valuation's valence with the FE task did not work well in this study. HRV reactivity was predicted by the IAT score and did also statistically predict exercise behavior. These results thus confirm and expand upon the results of the second publication and provide empirical evidence for the type-1 process as defined in the ART.
This dissertation advances the field of exercise psychology with regard to the influence of automaticity and affect on exercise motivation. Moreover, both methodical implications and theoretical extensions of the ART can be derived from the results.
The natural environment of Central Asia as it presents itself today is the result of the interplay of many different factors over millions of years. In the current context of climate change, however, it becomes evident how strongly material fluxes can change even in the short term, transforming the face of the landscape. The Gobi Desert in Inner Mongolia (China), part of the arid regions of northwestern China of the same name, has moved into the focus of basic palaeoclimate research because of its landscape-forming elements and its landscape dynamics, in connection with its position relative to the Tibetan Plateau. As a large long-term archive of diverse fluvial, lacustrine and aeolian sediments, it is an important locality for the reconstruction of local and regional material fluxes. At the same time, the Gobi Desert is also a major source of supra-regional dust transport, since the climatic conditions expose it in particular to erosion by deflation. Against this background, several German-Chinese expeditions to the Ejina Basin (Inner Mongolia) and the Qilian Shan foreland took place between 2011 and 2014 within the BMBF joint programme WTZ Central Asia – Monsoon Dynamics & Geo-Ecosystems (grant number 03G0814). During these expeditions, numerous surface samples were collected for the first time from the entire catchment of the Heihe (Black River) in order to determine potential sediment sources. In addition, two drillings in the interior of the Ejina Basin recovered sediment cores complementing the existing core D100 (see Wünnemann (2005)), in order to obtain far-reaching additional information on the landscape history and on supra-regional sediment transfer.
The subject and aim of this doctoral thesis are the sedimentological-mineralogical characterisation of the study area with respect to potential sediment sources and material fluxes of the Ejina Basin, and the reconstruction of the depositional history of a 19 m long sediment core (GN100) drilled there. The focus is on clarifying the sediment provenance within the core and on identifying provenance signals, possible sediment sources, and sediment transport pathways. The methodological approach is based on a multi-proxy strategy for characterising the clastic sediment facies using field observations, lithological-granulometric and mineralogical-geochemical analyses, and statistical methods. For the mineralogical investigations of the sediments, a new scanning-electron-microscopy method for automated particle analysis was used and compared with the traditional methods. The synoptic consideration of the granulometric, geochemical and mineralogical findings of the surface sediments yields a consistent cascade model for the study area, with recurring process domains and similar process signals. The extensive granulometric analyses indicate decreasing grain sizes with increasing distance from the Qilian Shan and allow the identification of four textural signals: fluvial sands, dune sands, still-water sediments, and dusts. These results serve as a basis for interpreting the grain-size analyses of the core. It is thus possible to reconstruct the depositional history of the core sediments and, in combination with our own and literature-based datings, to place it in an overall context. Four depositional phases are thereby identified for the study area, reaching back to the time of the Last Glacial Maximum (LGM).
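The grouping of surface samples into the four textural signals can be illustrated with a simple clustering sketch on grain-size summary statistics; the sample values and cluster centres below are invented for illustration and are not the thesis data:

```python
import numpy as np

# Hypothetical grain-size summary statistics (mean size [phi], sorting)
# for surface samples, drawn around four invented textural groups.
rng = np.random.default_rng(3)
centres = np.array([[1.5, 0.6],   # fluvial sands
                    [2.2, 0.4],   # dune sands (well sorted)
                    [6.0, 1.8],   # still-water sediments (fine grained)
                    [4.5, 2.2]])  # dusts (poorly sorted)
samples = np.vstack([c + rng.normal(0, 0.15, (30, 2)) for c in centres])

def kmeans(X, k, iters=50):
    # Plain Lloyd's algorithm; a centroid stays in place if its cluster empties.
    cent = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - cent[None]) ** 2).sum(-1), axis=1)
        cent = np.array([X[labels == j].mean(0) if np.any(labels == j) else cent[j]
                         for j in range(k)])
    return labels, cent

labels, cent = kmeans(samples, 4)
```

With well-separated groups, such a clustering recovers textural classes that can then be matched against field observations and mineralogical evidence, which is the general logic behind deriving end-member signals from granulometric data.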
During these depositional phases, the alluvial fan prograded continuously and was reworked in the course of alternating phases of activity and stability. A particularly active phase can be identified between 8 ka and 4 ka BP, during which increasing fluvial activity appears to have led to markedly intensified alluvial-fan dynamics. In the periods before and after, it was mainly aeolian processes that reworked the fan. The mineralogical provenance signals show great variability, reflecting the enormous heterogeneity of the geology of the study area, so that spatial signals are not very pronounced. Nevertheless, three larger areas of the catchment can be designated as potential source regions. The eastern Qilian Shan foreland stands out through markedly higher and strongly divergent chlorite contents in the clay-mineral and bulk-mineral fractions, which marks it as the primary source of the sediments in the Ejina Basin. This is related to the greenschists, ophiolites, and serpentinites in this area. Geochemically, the Cr/Rb ratio in particular indicates great variability within the catchment. Here, too, it is the eastern foreland that, owing to its high proportion of mafic rocks, is rich in chromites and spinels and thus stands out from the rest of the study area. The temporal as well as the general variability of sediment provenance cannot be traced as clearly in the core sediments.
The mineralogical-sedimentological properties of the drilled clastic sediments do attest to intermittent changes in sediment provenance, but these are not as pronounced as the source signals in the surface sediments would suggest. One reason appears to be the strong mixing of highly diverse sediments during transport. The combination of the grain-size results with the findings of the bulk and heavy mineralogy indicates an intermittent phase dominated by aeolian processes, associated with sediment input from the western Bei Shan. Besides the increase in ultrastable heavy minerals such as zircon and garnet and the decrease in opaque heavy minerals, present-day conditions in particular point to this. The comparison of traditional heavy-mineral analysis with computer-controlled scanning electron microscopy (CCSEM), which allows automated particle evaluation of the samples, demonstrates the clear advantage of the modern analytical method. Besides the time saved through automated processing of the prepared samples, the considerably greater statistical significance of the result is the main benefit. Moreover, this method can also determine chemical varieties of some heavy minerals, permitting an even finer classification and more reliable statements on possible sediment provenance. This also yields improved statements on the composition and formation processes of the deposited sediments. The study demonstrates that sediment provenance within the study area, as well as the processes at work, depend in part strongly on local conditions. The heterogeneity of the geology and the size of the catchment, together with the resulting complexity of sediment genesis, make exact assignments to clearly defined sediment sources very difficult.
Nevertheless, the results show that sediment supply to the Ejina Basin must have occurred primarily through fluvial clastic sediments of the Heihe from the Qilian Shan. The findings also demonstrate, however, the need for complementary work on adjacent study areas, such as the Gobi Altai in the north or the Bei Shan in the west, and for a denser surface sampling to resolve local sediment sources more finely.
The healthcare sector in Germany faces numerous changes in its environment. In particular, the hospital landscape lacks medical staff: physician hours are in short supply while the demand for medical services is rising. Demographic change, emigration abroad (e.g., to Switzerland), and projected trends such as the feminization of the medical profession as well as changing values and generational shifts reinforce this development. In addition, the hospital as a workplace is increasingly viewed as unattractive by future employee cohorts. Medical students are increasingly deciding against curative medicine, or against clinical practice altogether, while still at university. A virulent recruitment problem is emerging that needs to be solved.
The research objective is the development of a holistic strategic approach with a market-oriented emphasis for the hospital as an employer. The hospital is defined as a knowledge- and competence-intensive service organization. Employer branding constitutes the frame of reference of market-oriented human resource management. A research-theoretical foundation is provided by integrating strategic management approaches: employer branding bridges the market-based view and the competence-based view and is brand management in the context of strategic human resource management. This thesis presents a holistic frame of reference that depicts the interdependencies of employer branding. Its centerpiece is the employer value proposition, which is based on the organization's brand identity. One aim of the employer branding approach is to ensure a preference-generating effect in the process of job choice.
The objectives and research interests call for a broad research design combining qualitative and quantitative methods. The aim of the guideline-based in-depth interviews (exploratory study design) is to identify existing competence strengths of hospital organizations as well as those still to be developed. The sample is a typical case sample (N=12). Deficient findings regarding which values and attractiveness factors are pronounced among prospective physicians are confirmed. A fragmentary and reactive approach is identified in the hospital landscape, which hampers the successful recruitment of qualified hospital staff.
The quantitative market research analyzes the demands of the market – that is, of future medical hospital staff – on a factor-analytic basis. The sample (N=475) is isomorphic. The process of attitude formation is placed within the neo-behaviorist explanatory model of buyer behavior. The compatibility of work, career, and family as well as workplace childcare are identified as key components.
The most important work values with respect to an attractive employer are reliability, responsibility, and respect. These components have communicative utility and differentiating power. Finally, applicants evaluate a hospital more positively in the process of job choice the more closely the organization's value profile matches the values of the potential employee (person–organization fit).
Hospital organizations that implement the employer branding approach and embrace it as an opportunity to define their strengths, advantages, and offerings as an employer equip themselves for the intensifying competition for young physicians. As a market-oriented strategic approach, employer branding unleashes forces internally and externally that yield differentiation advantages over other employers. Employer branding thus has positive effects on human resources along the value chain and mitigates the macroeconomic problem.
Socializing Development
(2020)
The present thesis consists of three subprojects: the realization of a multi-parameter sensor (temperature, pH, and oxygen concentration), the design and investigation of an optical breath-gas sensor, and investigations into applying the concept of oxygen quenching in immunotechnology. To realize the multi-parameter sensor, the individual sensor dyes were synthesized where necessary and then characterized individually under laboratory conditions. Subsequently, an experimental setup was designed that allows all sensor dyes used to be excited with a single excitation source. Temperature and oxygen concentration were detected by phase-modulation spectroscopy, and pH by steady-state fluorescence spectroscopy. In this way, a multi-parameter sensor was designed that can detect the three parameters simultaneously, in real time, and without external temperature measurement. In the course of developing an optical breath-gas sensor, a new sensor design was first developed. This new design, characterized by very short response times, makes it possible to record the oxygen content of exhaled air in great detail. Voluntary self-experiments with the breath-gas sensor established a correlation with an established examination method. In the investigations on applying the concept of oxygen quenching in immunotechnology, a model was first developed that describes the interaction between an antibody and a synthesized dye acting as the antigen. After an antibody–antigen interaction had been demonstrated in simple media such as PBS buffer solution, this was also achieved in complex media such as bovine serum, cow's milk, and saliva.
A system was thus developed that makes it possible to follow antibody–antigen interactions in complex biological media.
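The oxygen-quenching readout underlying these subprojects is conventionally described by the Stern–Volmer relation. The following minimal Python sketch illustrates that relation with assumed constants; the values are invented for the demonstration and are not calibration data from the thesis.

```python
# Stern-Volmer relation for collisional (oxygen) quenching of a luminophore:
#   I0 / I     = 1 + K_SV * [O2]   (intensity form)
#   tau0 / tau = 1 + K_SV * [O2]   (lifetime form, as read out by
#                                   phase-modulation spectroscopy)
# K_SV and the example numbers below are illustrative assumptions.

def quenched_intensity(i0: float, o2: float, k_sv: float) -> float:
    """Forward model: luminescence intensity at a given O2 concentration."""
    return i0 / (1.0 + k_sv * o2)

def oxygen_concentration(i0: float, i: float, k_sv: float) -> float:
    """Invert the Stern-Volmer equation to estimate the O2 concentration."""
    return (i0 / i - 1.0) / k_sv

# Round trip with assumed values: I0 = 100 a.u., K_SV = 0.05 per hPa O2
i = quenched_intensity(100.0, 200.0, 0.05)   # intensity at 200 hPa O2
o2 = oxygen_concentration(100.0, i, 0.05)    # recovers 200 hPa
```

The lifetime form is what makes phase-modulation readout attractive in practice: lifetimes, unlike absolute intensities, are insensitive to dye concentration and excitation power.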
Escaping the plant cell
(2020)
Feminist Solidarities after Modulation produces an intersectional analysis of transnational feminist movements and their contemporary digital frameworks of identity and solidarity. Engaging media theory, critical race theory, and Black feminist theory, as well as contemporary feminist movements, this book argues that digital feminist interventions map themselves onto and make use of the multiplicity and ambiguity of digital spaces to question presentist and fixed notions of the internet as a white space and technologies in general as objective or universal. Understanding these frameworks as colonial constructions of the human, identity is traced to a socio-material condition that emerges with the modernity/colonialism binary. In the colonial moment, race and gender become the reasons for, as well as the effects of, technologies of identification, and thus need to be understood as and through technologies. What Deleuze has called modulation is not a present modality of control, but is placed into a longer genealogy of imperial division, which stands in opposition to feminist, queer, and anti-racist activism that insists on non-modular solidarities across seeming difference. At its heart, Feminist Solidarities after Modulation provides an analysis of contemporary digital feminist solidarities, which not only work at revealing the material histories and affective "leakages" of modular governance, but also challenge them to concentrate on forms of political togetherness that exceed a reductive or essentialist understanding of identity, solidarity, and difference.
The Bavarian-League war commissariat controlled the largely autonomous mercenary army in the Thirty Years' War. It is therefore considered an exemplary object of research into princely power in this period. Whereas previous research was limited to the normative level, this thesis undertakes a multi-perspective approach to the topic by evaluating field records and private correspondence and by employing the methods of prosopography and network analysis. It brings to light the entanglements of various competencies and functions of the social network in the exercise of the war commissariat's office. The thesis thus contributes to capturing the diversity of rule in the early modern period.
Cybergrooming, the initiation of the sexual abuse of a child via social media and online games, is considered one of the most serious digital risks for children. For many children, growing up in a digitalized world and spending their time in social media and online games is simply normal. In these programs, children quite naturally play and communicate with unknown adults and other minors, which can give rise to a multitude of risks. Presumably one of the most relevant is the danger that the child becomes the victim of a sexual offense. But how effective are current societal and, above all, criminal-policy measures in protecting children from such risks in a globalized digital space? This book therefore engages fundamentally with the phenomenon of cybergrooming and society's strategies for combating it. In addition to an extensive account of the phenomenology, the offender and victim structures, and the causes of norm-deviant behavior in the digital space from the perspective of cybercriminology, one focus of the work is the criminal-law classification of cybergrooming in Germany. At the center of this legal analysis is the current debate over the effects of introducing attempt liability for § 176 Abs. 4 Nr. 3 StGB. The present publication understands itself as an intradisciplinary work that combines insights from jurisprudence, cybercriminology, and media studies in order to gain as holistic a view of cybergrooming as possible. As a result, criminal-policy recommendations are derived that, taken together, could form the nucleus of a digital general prevention.
In 1960, Yamabe claimed to have proven the following statement: On every compact Riemannian manifold (M,g) of dimension n ≥ 3 there exists a metric conformally equivalent to g with constant scalar curvature. This statement is equivalent to the existence of a solution of a certain semilinear elliptic differential equation, the Yamabe equation. In 1968, Trudinger found an error in Yamabe's proof, and as a consequence many mathematicians worked on what became known as the Yamabe problem. In the 1980s, the work of Trudinger, Aubin, and Schoen showed that the statement is in fact true. This yields many advantages; for example, when analyzing conformally invariant partial differential equations on compact Riemannian manifolds, the scalar curvature may be assumed to be constant.
The question now arises whether the corresponding statement also holds on Lorentzian manifolds. The Lorentzian Yamabe problem thus reads: Given a spatially compact globally hyperbolic Lorentzian manifold (M,g), does there exist a metric conformally equivalent to g with constant scalar curvature? The aim of this thesis is to investigate this problem.
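For orientation, the Yamabe equation can be stated explicitly in its standard textbook form (general notation, not necessarily that of the thesis). Writing the conformal metric as $\hat g = u^{4/(n-2)}\, g$ with a positive smooth function $u$, constant scalar curvature $c$ for $\hat g$ is equivalent to

$$-\frac{4(n-1)}{n-2}\,\Delta_g u + \operatorname{scal}_g\, u \;=\; c\, u^{\frac{n+2}{n-2}}.$$

On a Riemannian manifold $\Delta_g$ is the Laplace–Beltrami operator and the equation is semilinear elliptic; on a Lorentzian manifold it is replaced by the wave operator $\Box_g$, so the same ansatz leads to a semilinear wave equation.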
The Yamabe equation arising from this question is a semilinear wave equation whose solution is a positive smooth function from which the conformal factor is obtained. To keep the foundations needed for treating the Yamabe problem as general as possible, the first part of this thesis develops the local existence theory for arbitrary semilinear wave equations for sections of vector bundles in the setting of a Cauchy problem. Here, the inverse function theorem for Banach spaces is applied in order to derive existence results for semilinear wave equations from existing existence results for linear wave equations. It is proven that, if the nonlinearity satisfies certain conditions, an almost-global-in-time solution of the Cauchy problem exists for small initial data, as well as a local-in-time solution for arbitrary initial data.
The second part of the thesis deals with the Yamabe equation on globally hyperbolic Lorentzian manifolds. First it is shown that the nonlinearity of the Yamabe equation satisfies the conditions required in the first part, so that, if the scalar curvature of the given metric is close to a constant, small initial data exist for which the Yamabe equation has an almost-global-in-time solution. Using energy estimates, it is then shown for 4-dimensional globally hyperbolic Lorentzian manifolds that, under the assumption that the constant scalar curvature of the conformally equivalent metric is nonpositive, a global-in-time solution of the Yamabe equation exists, which, however, is not necessarily positive. Furthermore, it is shown that if the H2-norm of the scalar curvature with respect to the given metric is bounded in a certain way on a compact time interval, the solution is positive on this time interval; here, too, the constant scalar curvature of the conformally equivalent metric is assumed to be nonpositive. If, in addition, the scalar curvature with respect to the given metric is negative and the metric satisfies certain conditions, then the solution is positive for all times in a compact time interval on which the gradient of the scalar curvature is bounded in a certain way. In both cases, under the stated conditions, the existence of a global-in-time positive solution follows if M = I x Σ for a bounded open interval I. Finally, for M = R x Σ, an example of the nonexistence of a global positive solution is given.
Algorithmen in der Justiz
(2020)
Under what conditions may courts in Germany use digital applications in their decision-making? This work shows the narrow limits involved and a possible way forward. Besides limits set by legal theory and by the computer-specific mode of operation, the legal framework defined by the Basic Law and European law must be observed. At the center of the analysis is the guarantee of judicial independence, which must not be called into question by the use of technology. To resolve the resulting conflict, a certification procedure for deterministic programs is proposed. Finally, concrete application examples are examined.
Urbanization and agricultural land use are two of the main drivers of global change, with effects on ecosystem functions and human wellbeing. Green infrastructure is a new approach in spatial planning that contributes to sustainable urban development and addresses urban challenges such as biodiversity conservation, climate change adaptation, green economy development, and social cohesion. Because research has focused mainly on open green space structures, such as parks, urban forests, green buildings, and street greenery, while neglecting the spatial and functional potentials of utilizable agricultural land, this thesis aims to fill this gap.
This cumulative thesis addresses how agricultural land in urban and peri-urban landscapes can contribute to the development of urban green infrastructure as a strategy to promote sustainable urban development. To this end, a number of different research approaches were applied. First, a quantitative, GIS-based modeling approach examined spatial potentials, addressing the heterogeneity of the peri-urban landscape that defines agricultural potentials and constraints. Second, a participatory approach was applied, drawing on stakeholder opinions to evaluate multiple urban functions and benefits. Finally, an evidence synthesis was conducted to assess the current state of the evidence and to support future policy making at different levels.
The results contribute to the conceptual understanding of urban green infrastructure as a strategic spatial planning approach that incorporates inner-urban utilizable agricultural land and the agriculturally dominated landscape at the outer urban fringe. They support the proposition that linking peri-urban farmland with the green infrastructure concept can contribute to a network of multifunctional green spaces that provides multiple benefits to the urban system and successfully addresses urban challenges. Four strategies are introduced for spatial planning with the contribution of peri-urban farmland to a strategically planned multifunctional network, namely the connecting, the productive, the integrated, and the adapted way. Finally, this thesis sheds light on the opportunities that arise from integrating peri-urban farmland into the green infrastructure concept to support the transformation towards more sustainable urban development. In particular, the inherent core planning principle of multifunctionality endorses the idea of co-benefits, which are considered crucial to trigger transformative processes.
This work concludes that the linkage of peri-urban farmland with the green infrastructure concept is a promising action field for the development of new pathways for urban transformation towards sustainable urban development. Along with these outcomes, attention is drawn to limitations that remain to be addressed by future research, especially the identification of further mechanisms required to support policy integration at all levels.
Near-Earth space represents a significant scientific and technological challenge. Particularly at low magnetic latitudes, the horizontal magnetic field geometry at the dip equator and its closed field lines support the existence of a distinct electric current system, abrupt electric field variations, and the development of plasma irregularities. Of particular interest are small-scale irregularities associated with equatorial plasma depletions (EPDs). They are responsible for the disruption of trans-ionospheric radio waves used for navigation, communication, and Earth observation. The rapid increase in satellite missions makes it imperative to study near-Earth space, especially the phenomena known to harm space technology or disrupt its signals. EPDs correspond to the large-scale structure (i.e., tens to hundreds of kilometers) of topside F region irregularities commonly known as Spread F. They are observed as depleted-plasma density channels aligned with the ambient magnetic field in the post-sunset low-latitude ionosphere. Although the climatological variability of their occurrence in terms of season, longitude, local time, and solar flux is well known, their day-to-day variability is not. The sparse observations from ground-based instruments like radars and the few simultaneous measurements of ionospheric parameters by space-based instruments have left gaps in the knowledge of EPDs essential to comprehending their variability.
In this dissertation, I profited from the unique observations of ESA's Swarm constellation mission, launched in November 2013, to tackle three issues that yielded novel and significant results on the current knowledge of EPDs. I used Swarm's measurements of the electron density and of the magnetic and electric fields to answer: (1.) What is the direction of propagation of the electromagnetic energy associated with EPDs? (2.) What are the spatial and temporal characteristics of the electric currents (field-aligned and diamagnetic currents) related to EPDs, i.e., their seasonal/geographical and local time dependencies? (3.) Under what conditions does the balance between magnetic and plasma pressure across EPDs occur?
The results indicate that: (1.) The electromagnetic energy associated with EPDs presents a preference for interhemispheric flows; that is, the related Poynting flux directs from one magnetic hemisphere to the other and varies with longitude and season. (2.) The field-aligned currents at the edges of EPDs are interhemispheric. They generally close in the hemisphere with the highest Pedersen conductance. Such hemispherical preference presents a seasonal/longitudinal dependence. The diamagnetic currents increase or decrease the magnetic pressure inside EPDs. These two effects rely on variations of the plasma temperature inside the EPDs that depend on longitude and local time. (3.) EPDs present lower or higher plasma pressure than the ambient. For low-pressure EPDs the plasma pressure gradients are mostly dominated by variations of the plasma density so that variations of the temperature are negligible. High-pressure EPDs suggest significant temperature variations with magnitudes of approximately twice the ambient. Since their occurrence is more frequent in the vicinity of the South Atlantic magnetic anomaly, such high temperatures are suggested to be due to particle precipitation.
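The diamagnetic-current and pressure-balance results in (2.) and (3.) rest on standard magnetohydrostatic relations, stated here in their general plasma-physics form (the symbols are the conventional ones, not thesis-specific notation):

$$\mathbf{j}_\perp = \frac{\mathbf{B}\times\nabla p}{B^2}, \qquad p + \frac{B^2}{2\mu_0} \approx \text{const}, \qquad p = n\,k_B\,(T_e + T_i).$$

A density depletion at unchanged temperature lowers the plasma pressure $p$ and is compensated by increased magnetic pressure inside the channel, whereas a sufficiently elevated plasma temperature within the depletion can push $p$ above the ambient value despite the reduced density.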
In a broader context, this dissertation shows how dedicated satellite missions with high-resolution capabilities improve the specification of low-latitude ionospheric electrodynamics and expand knowledge on EPDs, which is valuable for current and future communication, navigation, and Earth-observing missions. The contributions of this investigation represent several 'firsts' in the study of EPDs: (1.) the first observational evidence of interhemispheric electromagnetic energy flux and field-aligned currents; (2.) the first spatial and temporal characterization of EPDs based on their associated field-aligned and diamagnetic currents; (3.) the first evidence of high plasma pressure in regions of depleted plasma density in the ionosphere. These findings provide new insights that promise to advance our current knowledge not only of EPDs but of the low-latitude post-sunset ionosphere as a whole.
Comment sections of online news platforms are an essential space to express opinions and discuss political topics. However, the misuse by spammers, haters, and trolls raises doubts about whether the benefits justify the costs of the time-consuming content moderation. As a consequence, many platforms limited or even shut down comment sections completely. In this thesis, we present deep learning approaches for comment classification, recommendation, and prediction to foster respectful and engaging online discussions. The main focus is on two kinds of comments: toxic comments, which make readers leave a discussion, and engaging comments, which make readers join a discussion. First, we discourage and remove toxic comments, e.g., insults or threats. To this end, we present a semi-automatic comment moderation process, which is based on fine-grained text classification models and supports moderators. Our experiments demonstrate that data augmentation, transfer learning, and ensemble learning allow training robust classifiers even on small datasets. To establish trust in the machine-learned models, we reveal which input features are decisive for their output with attribution-based explanation methods. Second, we encourage and highlight engaging comments, e.g., serious questions or factual statements. We automatically identify the most engaging comments, so that readers need not scroll through thousands of comments to find them. The model training process builds on upvotes and replies as a measure of reader engagement. We also identify comments that address the article authors or are otherwise relevant to them to support interactions between journalists and their readership. Taking into account the readers' interests, we further provide personalized recommendations of discussions that align with their favored topics or involve frequent co-commenters. Our models outperform multiple baselines and recent related work in experiments on comment datasets from different platforms.
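The ensemble idea for robust classification on small comment datasets can be sketched with a toy example. The following scikit-learn snippet is purely illustrative: the thesis's actual models are fine-tuned deep networks, and the comments, labels, and hyperparameters here are invented for the demonstration.

```python
# Illustrative soft-voting ensemble for toxic-comment classification.
# Toy data only; not the models or datasets used in the thesis.
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

comments = [
    "you are an idiot",
    "this is a stupid take, get lost",
    "interesting point, do you have a source?",
    "thanks for the thoughtful analysis",
]
labels = [1, 1, 0, 0]  # 1 = toxic, 0 = acceptable

# Soft voting averages the predicted class probabilities of diverse
# classifiers; on small datasets this tends to be more robust than
# relying on any single model.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("nb", MultinomialNB()),
        ],
        voting="soft",
    ),
)
model.fit(comments, labels)
print(model.predict(["what an idiot", "do you have a source?"]))
```

In the same spirit, the data-augmentation and transfer-learning steps described above would expand the training data and initialize the models from pretrained weights before such an ensemble is formed.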
For decades, molecular switches have been a growing field of research. This dissertation focused on improving the thermal stability, readout, and switchability of these molecular switches in complex environments with the aid of computational chemistry.
In the first project, the kinetics of the thermal E → Z isomerization and the associated thermal stability of an azobenzene derivative were investigated. For this purpose, density functional theory (DFT) was applied in combination with Eyring transition state theory (TST). The azobenzene derivative served as a simplified model for switching in a complex environment (here, metal-organic frameworks). Thermodynamic and kinetic quantities were computed under various influences, with good agreement with experiment. The method used here proved a suitable approach for predicting these quantities with reasonable accuracy.
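The Eyring relation at the heart of this kinetic analysis converts a computed free-energy barrier into a rate constant and, for a first-order process, a half-life. A minimal Python sketch with an assumed barrier (illustrative order of magnitude, not a value computed in the thesis):

```python
import math

# Physical constants (SI)
K_B = 1.380649e-23    # Boltzmann constant, J/K
H   = 6.62607015e-34  # Planck constant, J*s
R   = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(dg_act_kj_mol: float, temp_k: float) -> float:
    """Eyring TST rate constant k = (k_B*T/h) * exp(-dG_act/(R*T)), in 1/s."""
    return (K_B * temp_k / H) * math.exp(-dg_act_kj_mol * 1e3 / (R * temp_k))

def half_life_s(rate: float) -> float:
    """First-order half-life t_1/2 = ln(2)/k, in seconds."""
    return math.log(2.0) / rate

# Assumed barrier of 100 kJ/mol at 298.15 K (a typical order of magnitude
# for thermal Z -> E azobenzene isomerization; not the thesis's result):
k = eyring_rate(100.0, 298.15)
print(f"k = {k:.3e} 1/s, t_1/2 = {half_life_s(k) / 3600.0:.1f} h")
```

The exponential dependence on the barrier is why DFT-level accuracy in the activation free energy matters: an error of a few kJ/mol changes the predicted half-life by an order of magnitude.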
In the second project, the readout of the switching states in the form of the nonlinear optical (NLO) contrast was investigated for the fulgimide class of molecules. The required dynamic hyperpolarizabilities, accounting for electron correlation, were computed by means of an established scaling method. Various fulgimides were analyzed, confirming many experimental findings. Moreover, the theoretical prediction for a further system suggested that extending the π-electron system in particular is a promising approach for improving NLO contrasts. Fulgimides thus possess useful properties and could in the future find application as building blocks in photonics and optoelectronics.
In the third project, the E → Z isomerization was simulated by molecular dynamics for a quantum mechanically (QM) treated dimer in a molecular mechanics (MM) environment and for two fluoroazobenzene monomers. This served to analyze switchability in a complex environment (here, self-assembled monolayers, SAMs) and of azobenzene derivatives. The QM/MM model accounted for both van der Waals interactions with the environment and electronic coupling (between QM molecules only). Systematic investigations of the packing density were carried out. It was found that the quantum yield (percentage of successful switching events) of the monomer is already reached at a molecular spacing of 4.5 Å. The largest quantum yields were obtained for the two fluoroazobenzenes investigated. The effects of molecular spacing and the influence of fluorine substituents on the dynamics were examined in depth, paving the way for follow-up studies.
Wege zur Gesangskarriere
(2020)
In research, teacher professional development programs are regarded as a promising way of supporting teachers in meeting the demands placed on them (Darling-Hammond, Hyler & Gardner, 2017). Across various studies, it has been shown that a teacher's participation in professional development is positively related to the development of their students (Kalinowski, Egert, Gronostaj & Vock, 2020; Yoon, Duncan, Lee, Scarloss & Shapley, 2007). While part of the research focuses on the effectiveness of professional development offerings, other studies concentrate more on the use of these offerings. The present thesis builds on previous research on teacher professional development and attempts to consider aspects of the use and supply of teacher professional development in Germany more jointly, within the Comprehensive Lifelong Learning Participation Model (Boeren, Nicaise & Baert, 2010). This is a multilevel model for explaining continuing-education behavior that takes into account different groups of actors (e.g., participants, professional development institutes) on the demand and supply sides and considers their interdependence. Against this background, the present thesis examines, in four substudies, the characteristics of professional development participants and offerings, and investigates the predictors of teachers' participation in professional development on the demand and supply sides. Study 1 first focuses on the demand side of the Comprehensive Lifelong Learning Participation Model (Boeren et al., 2010) and considers the group of non-participants, which has received little attention in research.
An Befunde der allgemeinen Weiterbildungsforschung anknüpfend, beschäftigt sich diese Teilstudie mit den Teilnahmebarrieren von Lehrkräften an Fortbildungen und untersucht, wie diese mit der Fortbildungsaktivität von Lehrkräften zusammenhängen. Die Beantwortung der Forschungsfragen basiert auf einer Sekundärdatenanalyse des IQB-Ländervergleichs 2012 (Pant et al., 2013), auf dessen Grundlage ein Gruppenvergleich von Teilnehmer*innen und Nicht-Teilnehmer*innen an Fortbildung sowie eine faktoranalytische Betrachtung der berichteten Teilnahmebarrieren durchgeführt wurde. Studie 2 greift die Frage auf, welche Lehrkräfte intensiv von Angeboten der Fortbildung Gebrauch machen. Ausgangspunkt der Arbeit ist die Beobachtung eines Matthäus-Effektes durch die allgemeine Weiterbildungsforschung. Demnach beteiligen sich insbesondere jene Personen stärker an beruflichen Lerngelegenheiten, die bereits vor der Maßnahme über günstige Ausgangsvoraussetzungen, etwa in Form eines höheren Kompetenzniveaus im Vergleich zu Personen, die nicht oder weniger an Angeboten der Fort- und Weiterbildung partizipieren, verfügen. In Anlehnung an diese Befunde diskutiert Teilstudie 2 verschiedene Aspekte der Qualität von Lehrkräften und geht anhand bivariater Zusammenhangsanalysen der Frage nach, welche Zusammenhänge zwischen der Voraussetzung der Lehrkraft und der Nutzung von Fortbildung bestehen. Dabei berücksichtigt die Studie das von Boeren et al. (2010) eingeführte Comprehensive Lifelong Learning Participation Model, indem sie die Befunde der Wirksamkeits- und Angebotsforschung aufgreift und differentielle Effekte in Abhängigkeit der Merkmale der Fortbildungen (fachlich vs. nicht-fachlich) in den Blick nimmt. Die durchgeführten Analysen beruhen auf einer Sekundärdatenanalyse des COACTIV-Forschungsprojekts (Kunter, Baumert & Blum, 2011). Auch in Studie 3 steht die Interaktion von Nachfrage- und Angebotsseite im Mittelpunkt. 
Während in Studie 2 jedoch konzeptionelle Merkmale der Fortbildung fokussiert wurden, liegt der Fokus in Studie 3 auf dem strukturellen Merkmal Zeit. Zeiten zum (beruflichen) Lernen werden hierbei zunächst auf Basis empirischer Befunde als Grundvoraussetzung für das Zustandekommen von Fortbildungsbeteiligung herausgearbeitet. Die Studie geht dann der Frage nach, welche zeitlichen Merkmale das Fortbildungsangebot für Lehrkräfte aufweist und wie diese Merkmale des Angebots im Zusammenhang mit der Nutzung durch Lehrkräfte stehen. Zur Beantwortung der Forschungsfragen werden eine Programmanalyse sowie polynomiale Regressionen durchgeführt. Die der Analyse zugrundeliegenden Daten beruhen hierbei auf den in der elektronischen Datenbank hinterlegten Fortbildungsangebotsdaten für das Land Brandenburg aus dem akademischen Jahr 2016/2017. Studie 4 fokussiert schließlich die Gruppe der Lehrerfortbildner*innen und somit die Angebotsseite des Comprehensive Lifelong Learning Participation Models (Boeren et al., 2010). In Anlehnung an theoretische Arbeiten zur professionellen Identität wird dabei der Frage nachgegangen, wie Lehrerfortbildner*innen ihre Aufgaben wahrnehmen und wie diese Wahrnehmung mit weiteren Aspekten ihrer professionellen Identität und der Gestaltung ihrer Fortbildungsveranstaltungen zusammenhängt. Hierzu wurden selbsterhobene Daten einer schriftlichen Befragung von Lehrerfortbildner*innen im Jahr 2019 zunächst faktoranalytisch betrachtet und anschließend mithilfe bivariater Zusammenhangsanalysen untersucht. Die zentralen Ergebnisse der vorliegenden Arbeit werden abschließend zusammengefasst diskutiert. Sie deuten insgesamt darauf hin, dass das derzeitige Fortbildungssystem in Deutschland nicht dazu geeignet erscheint, alle Lehrkräfte mit qualitativ hochwertigen Fortbildungen so zu erreichen, dass sie an ihren Schwächen arbeiten können. 
Die Befunde zeigen weiter, dass Fortbildner*innen eine mögliche Stellschraube für die Veränderung von Teilen des Fortbildungsangebots darstellen. Die Befunde bieten somit die Grundlage für zukünftige Forschungsarbeiten und mögliche Implikationen in der Fortbildungspraxis und Bildungspolitik.
To uncover their aesthetic and structural similarities to television programming, Christian Richter analyzes in detail the media stagings of Netflix and YouTube. The key concepts of »flow«, »seriality«, »liveness«, and »address« serve as central guides. Answers are provided by established television theories as well as by multifaceted and trivial examples, ranging from the ZDF-Fernsehgarten and old horror films, through the Super Bowl and lonely train rides across Norway, to BibisBeautyPalace and House of Cards. In the end, a state of TELEVISION emerges that can be understood as a new version.
In our digitized world, learning is moving into the cloud: from classroom teaching and the blackboard to the tablet, and on to lifelong learning in the world of work and even beyond. How successful and attractive this contemporary form of learning is depends to no small extent on the technological capabilities that digital learning platforms around MOOCs and school clouds provide.
In their further development, learners and their learning experience should take precedence over economic metrics and KPIs.
To this end, an optimization framework was developed that identifies and prioritizes improvements for the development of learning platforms using various qualitative and quantitative methods, and steers their evaluation and implementation.
Data-driven decisions should be based on a sufficient data basis. However, modern web applications often consist of several microservices, each with its own data storage, so much of the data is no longer easily accessible. This thesis therefore introduces a learning analytics service that collects and processes these data. Building on this, metrics are introduced that make the collected data usable for various purposes.
In addition to visualizing the data in dashboards, the data are used for automated quality control, for example to detect when tests are too difficult or when the social interaction in a MOOC is too low.
The presented infrastructure can also be used to conduct A/B/n tests, in which several variants are tried out on different user groups in a controlled experiment. Thanks to the test infrastructure built into the HPI MOOC platform, on which openHPI and openSAP are also based, it can be determined whether statistically significant changes in usage arise for these groups. This was evaluated with five different improvements to the platform.
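The significance check behind such a controlled experiment can be sketched with a simple two-proportion z-test; the function name and the user counts below are hypothetical illustrations, not figures from the thesis:

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference of two proportions,
    e.g. activity rates of a treatment group and a control group."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p_value

# hypothetical counts: 120 of 1000 mailed users became active again vs. 90 of 1000 controls
z, p = two_proportion_z_test(120, 1000, 90, 1000)
print(z, p)  # the difference is significant at the 5% level if p < 0.05
```

For multiple variants (A/B/n), this pairwise test would be combined with a correction for multiple comparisons, such as Bonferroni.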
It was shown that learners can be brought back into a course with reactivating e-mails; it is primarily the communication of the user's unfinished learning content that has a reactivating effect.
Digest e-mails summarizing forum activity also achieved a positive effect.
Targeted onboarding can lead users to understand the platform better and thereby become more active.
The fourth test showed that attaching forum questions to a specific point in time in a video, and displaying this information graphically, leads to increased forum activity.
The experimental trial of different learning materials, as carried out in the fifth test, is also helpful in MOOCs for improving course materials.
Beyond these functional improvements, the thesis examines how MOOC platforms and school clouds can still provide value when users only have a weak or unreliable internet connection, as is the case in many German schools. It is shown that clever preloading of data can relieve the internet connection; thanks to these adaptations, parts of the learning applications work even without an internet connection.
Finally, it is shown how end devices can supply each other with data in a local peer-to-peer CDN, without the data having to be downloaded from the internet.
After wavering obiter dicta, the tax-claim theory (Steueranspruchstheorie) once again corresponds to the settled case law of the First Criminal Senate of the German Federal Court of Justice (most recently: BGH, Beschl. v. 1.4.2020 – 1 StR 5/20, BeckRS 2020, 23245). For the procedural proof of intent to evade taxes, it is thus required that the taxpayer also knew, in legal terms, the tax claim that was violated. The present analysis of criminal-court and fiscal-court case law on proving intent under § 370 AO shows that the judiciary handles this concept better than is often assumed.
It is shown that the case law operates with a sufficiently established canon of indicators that justifiably permit the inference of intentional conduct. At the same time, the study concludes that the case law also consistently assesses exculpatory circumstances and, where the evidence supports it, finds an intent-excluding error. As a further result, the study provides a guideline for trial judges in establishing intent in criminal tax proceedings.
This thesis is concerned with data assimilation, the process of combining model predictions with observations. So-called filters are of special interest: one is interested in computing the probability distribution of the future state of a physical process, given (possibly) imperfect measurements, which is done using Bayes' rule. The first part focuses on hybrid filters, which bridge the two main groups of filters: ensemble Kalman filters (EnKF) and particle filters. The former are a group of very stable and computationally cheap algorithms, but they rely on strong assumptions. Particle filters, on the other hand, are more generally applicable but computationally expensive, and as such not always suitable for high-dimensional systems. There is therefore a need to combine both groups so as to benefit from the advantages of each. This can be achieved by splitting the likelihood function when assimilating a new observation and treating one part of it with an EnKF and the other part with a particle filter.
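The EnKF half of such a hybrid scheme can be illustrated with a minimal stochastic analysis step; this is a generic textbook sketch on a made-up two-dimensional toy state, not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, y, H, R):
    """Stochastic ensemble Kalman filter analysis step.
    ensemble: (n_members, n_state); y: observation vector;
    H: linear observation operator; R: observation error covariance."""
    n = ensemble.shape[0]
    anomalies = ensemble - ensemble.mean(axis=0)
    P = anomalies.T @ anomalies / (n - 1)         # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    # perturbed observations keep the analysis ensemble spread consistent
    Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n)
    return ensemble + (Y - ensemble @ H.T) @ K.T

# toy example: two-dimensional state, only the first component is observed
ens = rng.normal(0.0, 1.0, size=(200, 2))
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
analysis = enkf_update(ens, np.array([1.0]), H, R)
print(analysis[:, 0].mean())  # pulled from ~0 toward the observation 1.0
```

In a likelihood-splitting hybrid, a step like this would handle one factor of the likelihood, with a particle-filter reweighting handling the remainder.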
The second part of this thesis deals with the application of data assimilation to multi-scale models and the problems that arise from it. One of the main areas of application for data assimilation techniques is predicting the development of the oceans and the atmosphere. These processes involve several scales and often balance relations between the state variables. The use of data assimilation procedures usually violates such relations, which eventually leads to unrealistic and non-physical predictions of the future development of the process. This work discusses the inclusion of a post-processing step after each assimilation step, in which a minimisation problem penalising the imbalance is solved. The method is tested on four different models: two Hamiltonian systems and two spatially extended models, the latter adding further difficulties.
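The post-processing idea can be sketched as a penalised minimisation; the balance relation `b` below is a hypothetical linear stand-in, and plain gradient descent is used in place of whatever solver the thesis employs:

```python
import numpy as np

def rebalance(x_analysis, imbalance, grad_imbalance, lam=10.0, steps=500, lr=0.01):
    """Post-processing step: minimise
        J(x) = ||x - x_analysis||^2 + lam * imbalance(x)^2
    by plain gradient descent, penalising departure from balance."""
    x = x_analysis.copy()
    for _ in range(steps):
        grad = 2.0 * (x - x_analysis) + 2.0 * lam * imbalance(x) * grad_imbalance(x)
        x -= lr * grad
    return x

# hypothetical linear balance relation b(x) = x[0] - x[1] (balanced iff zero)
b = lambda x: x[0] - x[1]
db = lambda x: np.array([1.0, -1.0])

xa = np.array([1.0, 0.0])   # imbalanced analysis state
xb = rebalance(xa, b, db)
print(xb, b(xb))            # the components move toward each other, |b| shrinks
```

The weight `lam` trades off fidelity to the analysis against how strictly the balance relation is enforced.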
The significant environmental and socioeconomic consequences of hydrometeorological extreme events, such as extreme rainfall, constitute a major motivation for analyzing these events in the south-central Andes of NW Argentina. The steep topographic and climatic gradients and their interactions frequently lead to the formation of deep convective storms and consequently trigger extreme rainfall.
In this dissertation, I focus on identifying the dominant climatic variables and atmospheric conditions and their spatiotemporal variability leading to deep convection and extreme rainfall in the south-central Andes.
This dissertation first examines the significant contribution of temperature to atmospheric humidity (dew-point temperature, Td) and to convection (convective available potential energy, CAPE) for deep convective storms and, hence, extreme rainfall along the topographic and climatic gradients. Both climatic variables were found to play an important role in extreme rainfall generation; however, their contributions differ depending on topographic and climatic sub-regions, as well as on rainfall percentiles.
Second, this dissertation explores whether (near real-time) Global Navigation Satellite System (GNSS) measurements of integrated water vapor (IWV) provide reliable data for explaining atmospheric humidity. I argue that GNSS-IWV, in conjunction with other atmospheric stability parameters such as CAPE, is able to decipher extreme rainfall in the eastern central Andes. In my work, I rely on a multivariable regression analysis described by a theoretical relationship and a fitting-function analysis.
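A multivariable regression of the kind mentioned can be sketched with ordinary least squares on synthetic stand-in data; the coefficients, units, and variable relationships below are illustrative, not results from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic stand-in data: an extreme-rainfall proxy as a function of IWV and CAPE
n = 300
iwv = rng.uniform(10, 60, n)       # integrated water vapor [mm]
cape = rng.uniform(0, 3000, n)     # convective available potential energy [J/kg]
rain = 0.4 * iwv + 0.01 * cape + rng.normal(0, 2.0, n)

# multivariable linear regression via ordinary least squares
X = np.column_stack([np.ones(n), iwv, cape])
coef, *_ = np.linalg.lstsq(X, rain, rcond=None)
print(coef)  # intercept and the two slopes, recovered close to the generating values
```

In practice, a theoretically motivated (e.g. exponential) fitting function could replace the linear terms while the least-squares machinery stays the same.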
Third, this dissertation identifies the local impact of convection on extreme rainfall in the eastern Andes. Relying on a Principal Component Analysis (PCA), it was found that in the presence of moist and warm air, extreme rainfall is observed more often during local night hours. The analysis includes the mechanisms underlying this observation.
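A minimal PCA of the kind used here can be sketched via the singular value decomposition; the synthetic variables below are hypothetical stand-ins for the meteorological observations:

```python
import numpy as np

rng = np.random.default_rng(2)

def pca(data, n_components=2):
    """Principal Component Analysis via SVD of the centred data matrix.
    Returns component loadings and the fraction of variance each explains."""
    centred = data - data.mean(axis=0)
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    var = s**2 / (len(data) - 1)
    return Vt[:n_components], var[:n_components] / var.sum()

# synthetic stand-in for hourly (temperature, humidity proxy, wind) observations:
# the first two variables co-vary strongly, so one component dominates
t = rng.normal(25, 5, 500)
obs = np.column_stack([t, 0.8 * t + rng.normal(0, 1, 500), rng.normal(0, 1, 500)])
components, explained = pca(obs)
print(explained)  # the first component captures most of the variance
```

The loadings of the leading components then indicate which combinations of variables dominate the variability.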
Identifying the atmospheric conditions and climatic variables that lead to extreme rainfall is one of the main contributions of this dissertation. These conditions and variables are a prerequisite for understanding the dynamics of extreme rainfall and for predicting such events in the eastern Andes.
Organizing immigration
(2020)
Immigration constitutes a dynamic policy field with – often quite unpredictable – dynamics. This is because immigration constitutes a ‘wicked problem’, characterized by uncertainty, ambiguity and complexity. Due to the dynamics in the policy field, expectations towards public administrations often change. Following neo-institutionalist theory, public administrations depend on meeting the expectations in the organizational field in order to maintain legitimacy as the basis for, e.g., resources and compliance of stakeholders. With the dynamics in the policy field, expectations might change, and public administrations consequently need to adapt in order to maintain or repair their then threatened legitimacy. If their organizational legitimacy is threatened by a perception of structures and processes being inadequate for changed expectations, an ‘institutional crisis’ unfolds. However, we know little about ministerial bureaucracies’ structural reactions to such crucial moments and how these affect the quest for coordination within policy-making. Overall, the dissertation thus links to both policy analysis and public administration research and consists of five publications. It asks: How do structures in ministerial bureaucracies change in the context of institutional crises? And what effect do these changes have on ministerial coordination? The dissertation focuses on the above-described dynamic policy field of immigration in Germany in the period from 2005 to 2017 and pursues three objectives: 1) to identify the context and impulse for changes in the structures of ministerial bureaucracies, 2) to describe the respective changes with regard to their organizational structures, and 3) to identify their effect on coordination. It compares and contrasts institutional crises induced by incremental change and by shock, as well as changes and effects at the federal and Länder levels, which allows a comprehensive answer to both research questions.
Theoretically, the dissertation follows neo-institutionalist theory with a particular focus on changes in organizational structures, coordination and crisis management. Methodologically, it follows a comparative design. Each article (except for the literature review) focuses on ministerial bureaucracies at one governmental level (federal or Länder) and on an institutional crisis induced by either an incremental process or a shock. Thus, responses and effects can be compared and contrasted across impulses for institutional crises and governmental levels. Overall, the dissertation follows a mixed-methods approach with a majority of qualitative single and small-n case studies based on document analysis and semi-structured interviews. Additionally, two articles use quantitative methods, as these best suited the respective research questions. The rather explorative nature of these two articles, however, fits the overall interpretivist approach of the dissertation. The dissertation's core argument is: Within the investigation period, varying dynamics and thus impulses for institutional crises took place in the German policy field of immigration. Accordingly, stakeholders' expectations of how the politico-administrative system should address the policy problem changed. Ministerial administrations at both the federal and Länder levels adapted to these expectations in order to maintain, or regain, organizational legitimacy. In doing so, the administration referred to well-known recipes of structural change; institutional crises do not constitute fields of experimentation. The new structures had an immediate effect on ministerial coordination, in both the horizontal and the vertical dimension. Yet they did not amount to a comprehensive change of the system in place.
The dissertation thus challenges the idea of the toppling effect of crises and rather shows that adaptability and persistence of public administrations constitute two sides of the same coin.
Geomorphology seeks to characterize the forms, rates, and magnitudes of sediment and water transport that sculpt landscapes. This is generally referred to as earth surface processes, which incorporates the influence of biologic (e.g., vegetation), climatic (e.g., rainfall), and tectonic (e.g., mountain uplift) factors in dictating the transport of water and eroded material. In mountains, high relief and steep slopes combine with strong gradients in rainfall and vegetation to create dynamic expressions of earth surface processes. This same rugged topography presents challenges in data collection and process measurement, where traditional techniques involving detailed observations or physical sampling are difficult to apply at the scale of entire catchments. Herein lies the utility of remote sensing. Remote sensing is defined as any measurement that does not disturb the natural environment, typically via acquisition of images in the visible- to radio-wavelength range of the electromagnetic spectrum. Remote sensing is an especially attractive option for measuring earth surface processes, because large areal measurements can be acquired at much lower cost and effort than traditional methods. These measurements cover not only topographic form, but also climatic and environmental metrics, which are all intertwined in the study of earth surface processes. This dissertation uses remote sensing data ranging from handheld camera-based photo surveying to spaceborne satellite observations to measure the expressions, rates, and magnitudes of earth surface processes in high-mountain catchments of the Eastern Central Andes in Northwest Argentina. This work probes the limits and caveats of remote sensing data and techniques applied to geomorphic research questions, and presents important progress at this disciplinary intersection.
Lattice dynamics
(2020)
In this thesis I summarize my contribution to the research field of ultrafast structural dynamics in condensed matter. It consists of 17 publications that cover the complex interplay between electron, magnon, and phonon subsystems in solid materials and the resulting lattice dynamics after ultrafast photoexcitation. The investigation of such dynamics is necessary for the physical understanding of processes in materials that might become important in the future as functional materials for technological applications, for example in data storage, information processing, sensors, or energy harvesting.
In this work I present ultrafast x-ray diffraction (UXRD) experiments based on the optical pump – x-ray probe technique, revealing the time-resolved lattice strain. To study these dynamics, the samples (mainly thin-film heterostructures) are excited by femtosecond near-infrared or visible light pulses. The induced strain dynamics caused by stresses of the excited subsystems are measured in a pump-probe scheme with x-ray diffraction (XRD) as a probe. The UXRD setups used during my thesis are a laser-driven table-top x-ray source and large-scale synchrotron facilities with dedicated time-resolved diffraction setups. The UXRD experiments provide quantitative access to heat reservoirs in nanometric layers and monitor the transient responses of these layers with coupled electron, magnon, and phonon subsystems. In contrast to optical probes, UXRD provides access to material-specific information that optical light cannot resolve, because optical probes detect multiple indistinguishable layers within their penetration depth.
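The basic quantity extracted from such measurements, transient lattice strain from a Bragg-peak shift, follows from differentiating Bragg's law; the angle values below are hypothetical:

```python
import math

def strain_from_bragg_shift(theta_deg, delta_theta_deg):
    """Lattice strain from a transient Bragg-peak shift.
    Differentiating Bragg's law (lambda = 2 d sin(theta)) at fixed wavelength
    gives  delta_d / d = -cot(theta) * delta_theta."""
    theta = math.radians(theta_deg)
    dtheta = math.radians(delta_theta_deg)
    return -dtheta / math.tan(theta)

# hypothetical numbers: a 22.5 deg Bragg peak shifting by -0.01 deg upon excitation
eta = strain_from_bragg_shift(22.5, -0.01)
print(f"{eta:.2e}")  # positive strain corresponds to lattice expansion
```

A shift to smaller angles thus signals expansion, a shift to larger angles contraction of the probed layer.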
In addition, UXRD facilitates a layer-specific probe for layers buried in opaque heterostructures to study the energy flow. I extended this UXRD technique to obtain the driving stress profile by measuring the strain dynamics in the unexcited buried layer after excitation of the adjacent absorbing layers with femtosecond laser pulses. This enables the study of negative thermal expansion (NTE) in magnetic materials, which occurs due to the loss of magnetic order. Part of this work is the investigation of stress profiles, which are the source of coherent acoustic phonon wave packets (hypersound waves). The spatiotemporal shape of these stress profiles depends on the energy distribution profile and on the ability of the involved subsystems to produce stress. The evaluation of the UXRD data of rare-earth metals yields a stress profile that closely matches the optical penetration profile: in the paramagnetic (PM) phase the photoexcitation results in a quasi-instantaneous expansive stress of the metallic layer, whereas in the antiferromagnetic (AFM) phase a quasi-instantaneous contractive stress and a second contractive stress contribution rising on a 10 ps time scale add to the PM contribution. These two time scales are characteristic for the magnetic contribution and are in agreement with related studies of the magnetization dynamics of rare-earth materials.
Several publications in this thesis demonstrate the scientific progress in the field of active strain control to drive a second excitation or engineer an ultrafast switch. These applications of ultrafast dynamics are necessary to enable control of functional material properties via strain on ultrafast time scales.
For this thesis I implemented upgrades of the existing laser-driven table-top UXRD setup in order to achieve an enhancement of x-ray flux to resolve single digit nanometer thick layers. Furthermore, I developed and built a new in-situ time-resolved magneto-optic Kerr effect (MOKE) and optical reflectivity setup at the laser-driven table-top UXRD setup to measure the dynamics of lattice, electrons and magnons under the same excitation conditions.
Hydrological models are important tools for the simulation and quantification of the water cycle.
They therefore aid in the understanding of hydrological processes, prediction of river discharge, assessment of the impacts of land use and climate changes, or the management of water resources.
However, uncertainties associated with hydrological modelling are still large.
While significant research has been done on the quantification and reduction of uncertainties, there are still fields which have gained little attention so far, such as model structural uncertainties that are related to the process implementations in the models.
This holds especially true for complex process-based models in contrast to simpler conceptual models.
Consequently, the aim of this thesis is to improve the understanding of structural uncertainties with focus on process-based hydrological modelling, including methods for their quantification.
To identify common deficits of frequently used hydrological models and develop further strategies on how to reduce them, a survey among modellers was conducted.
It was found that there is a certain degree of subjectivity in the perception of modellers, for instance with respect to the distinction of hydrological models into conceptual groups.
It was further found that there are ambiguities on how to apply a certain hydrological model, for instance how many parameters should be calibrated, together with a large diversity of opinion regarding the deficits of models.
Nevertheless, evapotranspiration processes are often represented in a more physically based manner, while processes of groundwater and soil water movement are often simplified, which many survey participants saw as a drawback.
A large flexibility, for instance with respect to alternative process implementations or a small number of parameters to calibrate, was generally seen as a strength of a model.
Flexible and efficient software, which is straightforward to apply, has been increasingly acknowledged by the hydrological community.
This work further elaborated on this topic in a twofold way.
First, a software package for semi-automated landscape discretisation has been developed, which serves as a tool for model initialisation.
This was complemented by a sensitivity analysis of important and commonly used discretisation parameters, of which the size of hydrological sub-catchments as well as the size and number of hydrologically uniform computational units appeared to be more influential than information considered for the characterisation of hillslope profiles.
Second, a process-based hydrological model has been implemented into a flexible simulation environment with several alternative process representations and a number of numerical solvers.
It turned out that, even though computation times were still long, enhanced computational capabilities nowadays in combination with innovative methods for statistical analysis allow for the exploration of structural uncertainties of even complex process-based models, which up to now was often neglected by the modelling community.
In a further study it could be shown that process-based models may even be employed as tools for seasonal operational forecasting.
In contrast to statistical models, which are faster to initialise and to apply, process-based models produce more information in addition to the target variable, even at finer spatial and temporal scales, and provide more insights into process behaviour and catchment functioning.
However, the process-based model was much more dependent on reliable rainfall forecasts.
It seems unlikely that there exists a single best formulation for hydrological processes, even for a specific catchment.
This supports the use of flexible model environments with alternative process representations instead of a single model structure.
However, correlation and compensation effects between process formulations, their parametrisation, and other aspects such as numerical solver and model resolution, may lead to surprising results and potentially misleading conclusions.
In future studies, such effects should be more explicitly addressed and quantified.
Moreover, model functioning appeared to be highly dependent on the meteorological conditions and rainfall input generally was the most important source of uncertainty.
It is still unclear, how this could be addressed, especially in the light of the aforementioned correlations.
The use of innovative data products, e.g. remote sensing data in combination with station measurements, and of efficient processing methods for the improvement of rainfall input and the explicit consideration of associated uncertainties is advisable to yield more insights and make hydrological simulations and predictions more reliable.
This cumulative thesis is concerned with the evolution of geomagnetic activity since the beginning of the 20th century, that is, the time-dependent response of the geomagnetic field to solar forcing. The focus lies on the description of the magnetospheric response field at ground level, which is particularly sensitive to the ring current system, and an interpretation of its variability in terms of the solar wind driving. Thereby, this work contributes to a comprehensive understanding of long-term solar-terrestrial interactions.
The common basis of the presented publications is formed by a reanalysis of vector magnetic field measurements from geomagnetic observatories located at low and middle geomagnetic latitudes. In the first two studies, new geomagnetic activity indices targeting the ring current are derived, the Annual and Hourly Magnetospheric Currents indices (A/HMC). Compared to existing indices (e.g., the Dst index), they not only extend the covered period by at least three solar cycles but also constitute a qualitative improvement concerning the absolute index level and the ~11-year solar cycle variability. The analysis of A/HMC shows that (a) the annual geomagnetic activity experiences an interval-dependent trend with an overall linear decline of ~5 % during 1900–2010; (b) the average trend-free activity level amounts to ~28 nT; (c) the solar-cycle-related variability shows amplitudes of ~15–45 nT; and (d) the activity level for geomagnetically quiet conditions (Kp < 2) lies slightly below 20 nT. The plausibility of the last three points is ensured by comparison to independent estimations based either on magnetic field measurements from LEO satellite missions (since the 1990s) or on the modeling of geomagnetic activity from solar wind input (since the 1960s). An independent validation of the long-term trend is problematic, mainly because the sensitivity of the locally measured geomagnetic activity depends on geomagnetic latitude. Consequently, A/HMC is neither directly comparable to global geomagnetic activity indices (e.g., the aa index) nor to the partly reconstructed open solar magnetic flux, which would require a homogeneous response of the ground-based measurements to the interplanetary magnetic field and the solar wind speed.
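The trend estimate in point (a) can be illustrated on synthetic data; the series below loosely mimics the reported scales (mean of ~28 nT, an ~11-year cycle, a small decline) but is not the actual index:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic stand-in for an annual ring-current index (in nT), 1900-2010:
# a small linear decline superimposed on solar-cycle-like variability and noise
years = np.arange(1900, 2011)
cycle = 15 * np.cos(2 * np.pi * (years - 1900) / 11)
index = 29 - 0.013 * (years - 1900) + cycle + rng.normal(0, 1, len(years))

# least-squares linear trend and the implied relative decline over the interval
slope, intercept = np.polyfit(years, index, 1)
decline = -slope * (years[-1] - years[0]) / np.mean(index)
print(f"trend: {slope:.4f} nT/yr, relative decline over 1900-2010: {100*decline:.1f}%")
```

With an integer number of cycle periods in the window, the oscillation contributes little to the fitted slope, so the linear term isolates the long-term decline.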
The last study combines a consistent, HMC-based identification of geomagnetic storms from 1930–2015 with an analysis of the corresponding spatial (magnetic-local-time-dependent) disturbance patterns. Among other things, the disturbances at dawn and dusk, particularly their evolution during the storm recovery phases, are shown to be indicative of the driving solar wind structure (Interplanetary Coronal Mass Ejections vs. Stream or Co-rotating Interaction Regions), which enables a backward prediction of the storm driver classes. The results indicate that ICME-driven geomagnetic storms have decreased since 1930, which is consistent with the concurrent decrease of HMC. Of the compiled follow-up studies, the inclusion of measurements from high-latitude geomagnetic observatories into the third study's framework seems most promising at this point.
Metal halide perovskites have emerged as an attractive class of materials for photovoltaic applications due to their excellent optoelectronic properties. However, long-term stability remains a roadblock on this material class's path to industrial application. Increasing evidence shows that intrinsic defects in perovskites promote material degradation. Consequently, understanding defect behaviour in perovskite materials is essential to further improving device stability and performance. This dissertation therefore focuses on the defect chemistry of halide perovskites.
The first part of the dissertation gives a brief overview of defect properties in halide perovskites. Subsequently, the second part shows that doping methylammonium lead iodide with a small amount of alkaline earth metals (Sr and Mg) creates a higher-quality, less defective material, resulting in high open-circuit voltages in both n-i-p and p-i-n architectures. The doping mechanism was found to have two distinct regimes: low doping concentrations enable the inclusion of the dopants into the lattice, whereas higher doping concentrations lead to phase segregation. The material becomes more n-doped in the low doping regime and less n-doped in the high doping regime. The threshold between these two regimes is set by the atomic size of the dopants.
The next part of the dissertation examines photo-induced degradation in methylammonium lead iodide. This degradation mechanism is closely linked to the formation and migration of ionic defects. Once formed, these ionic defects can migrate, though not freely: their mobility depends on the defect concentration and distribution. In fact, a highly concentrated defect region such as a grain boundary can inhibit the migration of ionic defects. This has implications for material design, as perovskite solar cells normally employ a polycrystalline thin film with a high density of grain boundaries.
The final study presented in this PhD dissertation focuses on the stability of state-of-the-art triple-cation perovskite solar devices under external bias. Prolonged bias (more than three hours) is found to promote amorphization in halide perovskites. The amorphous phase is suspected to accumulate at the interfaces, especially between the hole-selective layer and the perovskite. This amorphous phase inhibits charge collection and severely affects device performance. Nonetheless, the devices can recover after resting without bias in the dark. The amorphization is attributed to the migration of ionic defects, most likely halides. This provides a new understanding of potential degradation mechanisms in perovskite solar cells under operational conditions.
The impact of catalysis on the global economy and the environment is substantial, since 85% of all industrial chemical processes are catalytic. Of those, 80% are heterogeneously catalyzed, 17% make use of homogeneous catalysts, and 3% are biocatalytic. Especially in the pharmaceutical and agrochemical industries, a significant part of these processes involves chiral compounds. Obtaining enantiomerically pure compounds is necessary and is usually accomplished by asymmetric synthesis and catalysis, as well as chiral separation. The efficiency of these processes may be vastly improved if the chiral selectors are positioned on a porous solid support, thereby increasing the surface area available for chiral recognition. Similarly, the majority of commercial catalysts are also supported, usually comprising metal nanoparticles (NPs) dispersed on a highly porous oxide or nanoporous carbon material.
Porous carbons are materials with exceptional thermal and chemical stability that are also electrically conductive. Their stability at extreme pH values and temperatures, the possibility to tailor their pore architecture and chemical functionalization, and their electrical conductivity have already established these materials in the fields of separation and catalysis. However, their heterogeneous chemical structure with abundant defects makes it challenging to develop reliable models for the investigation of structure-performance relationships. There is therefore a need to expand the fundamental understanding of these robust materials under experimental conditions to allow for their further optimization for particular applications. This thesis contributes to our knowledge of carbons through different aspects and in different applications.
On the one hand, a rather exotic novel application was investigated through attempts to synthesize porous carbon materials with an enantioselective surface. Chapter 4.1 describes an approach for obtaining mesoporous carbons with an enantioselective surface by direct carbonization of a chiral precursor. Two enantiomers of chiral ionic liquids (CILs) based on the amino acid tyrosine were used as carbon precursors, and the ordered mesoporous silica SBA-15 served as a hard template for obtaining porosity. The chiral recognition of the prepared carbons was tested in solution by isothermal titration calorimetry with enantiomers of phenylalanine as probes, as well as by chiral vapor adsorption with 2-butanol enantiomers. Measurements in both solution and the gas phase revealed differences in the affinity of the carbons towards the two enantiomers.
The atomic efficiency of the CIL precursors was increased in Chapter 4.2, and the porosity was developed independently of the development of the chiral carbons, through the formation of stable composites of pristine carbon and a CIL-derived coating. After the same set of experiments for the investigation of chirality, the enantiomeric ratios of the composites reported herein were even higher than those in the previous chapter.
On the other hand, the structure‒activity relationship of carbons as supports for gold nanoparticles was studied in a rather traditional catalytic model reaction at the interface between gas, liquid, and solid. In Chapter 5.1, it was shown on a series of catalysts with different porosities that the kinetics of the ᴅ-glucose oxidation reaction can be enhanced by increasing the local concentration of the reactants around the active phase of the catalyst. A large number of uniform narrow mesopores connected to the surface of the Au catalyst supported on ordered mesoporous carbon led to water confinement, which increased the solubility of oxygen in the proximity of the catalyst and thereby increased its apparent catalytic activity.
After increasing the oxygen concentration in the internal area of the catalyst, in Chapter 5.2 the oxygen concentration was increased in the external environment of the catalyst by introducing less cohesive liquids that serve as efficient oxygen solvents, namely perfluorinated compounds, near the active phase of the catalyst. This was achieved by forming catalyst-particle-stabilized emulsions of a perfluorocarbon in aqueous ᴅ-glucose solution, which further promoted the catalytic activity of the gold-on-carbon catalyst.
The findings reported within this thesis are an important step in the understanding of the structure-related properties of carbon materials.
The PhD thesis entitled “Actions through the lens of communicative cues. The influence of verbal cues and emotional cues on action processing and action selection in the second year of life” is based on four studies, which examined the cognitive integration of another person’s communicative cues (i.e., verbal cues, emotional cues) with behavioral cues in 18- and 24-month-olds. In the context of social learning of instrumental actions, it was investigated how the intention-related coherence of either a verbally announced action intention or an emotionally signaled action evaluation with an action demonstration influenced infants’ neuro-cognitive processing (Study I) and selection (Studies II, III, IV) of a novel object-directed action. Developmental research has shown that infants benefit from another’s behavioral cues (e.g., action effect, persistency, selectivity) to infer the underlying goal or intention of an observed action (e.g., Cannon & Woodward, 2012; Woodward, 1998). Action effects in particular support infants in distinguishing perceptual action features (e.g., target object identity, movement trajectory, final target object state) from conceptual action features such as goals and intentions. However, less is known about infants’ ability to cognitively integrate another’s behavioral cues with additional action-related communicative cues. There is some evidence showing that in the second year of life, infants selectively imitate a novel action that is verbally (“There!”) or emotionally (positive expression) marked as aligning with the model’s action intention over an action that is verbally (“Whoops!”) or emotionally (negative expression) marked as unintentional (Carpenter, Akhtar, & Tomasello, 1998; Olineck & Poulin-Dubois, 2005, 2009; Repacholi, 2009; Repacholi, Meltzoff, Toub, & Ruba, 2016).
Yet, it is currently unclear which role the specific intention-related coherence of a communicative cue with a behavioral cue plays in infants’ action processing and action selection, that is, whether the communicative cue confirms, contrasts with, clarifies, or is unrelated to the behavioral cue. Notably, by using both verbal cues and emotional cues, we examined not only two domains of communicative cues but also two qualitatively distinct relations between behavioral cues on the one hand and communicative cues on the other. More specifically, a verbal cue has the potential to communicate an action intention in the absence of an action demonstration and thus a prior intention (Searle, 1983), whereas an emotional cue evaluates an ongoing or past action demonstration and thus signals an intention-in-action (Searle, 1983). In a first research focus, this thesis examined infants’ capacity to cognitively integrate another’s intention-related communicative cues and behavioral cues, and also focused on the role of the social cues’ coherence in infants’ action processing and action selection. In a second research focus, and to gain more elaborate insights into how the sub-processes of social learning (attention, encoding, response; cf. Bandura, 1977) are involved in this coherence-sensitive integrative processing, we employed a multi-measures approach. More specifically, we used electroencephalography (EEG) and looking times to examine how the cues’ coherence influenced the compound of attention and encoding, and imitation (including latencies to first touch and first action) to address the compound of encoding and response. Based on the action-reconstruction account (Csibra, 2007), we predicted that infants use extra-motor information (i.e., communicative cues) together with behavioral cues to reconstruct another’s action intention.
Accordingly, we expected infants to possess a flexibly organized internal action hierarchy, which they adapt according to the cues’ coherence, that is, according to what they inferred to be the overarching action goal. More specifically, in a social-learning situation comprising an adult model who demonstrated an action on a novel object that offered two actions, we expected the demonstrated action to lead infants’ action hierarchy when the communicative (i.e., verbal, emotional) cue conveyed similar (confirming coherence) or no additional (unrelated coherence) intention-related information relative to the behavioral cue. In terms of action selection, this action hierarchy should become evident in a selective imitation of the demonstrated action. However, when the communicative cue questioned (contrasting coherence) the behaviorally implied action goal or was the only cue conveying meaningful intention-related information (clarifying coherence), the verbally/emotionally intended action should ascend infants’ action hierarchy. Consequently, infants’ action selection should align with the verbally/emotionally intended action (goal emulation). Notably, these predictions oppose the direct-matching perspective (Rizzolatti & Craighero, 2004), according to which the observation of another’s action directly resonates with the observer’s motor repertoire, with this motor resonance enabling the identification of the underlying action goal. Importantly, the direct-matching perspective predicts a rather inflexible action hierarchy inasmuch as the process of goal identification should rely solely on the behavioral cue, irrespective of the behavioral cue’s coherence with extra-motor intention-related information as conveyed via communicative cues.
As to the role of verbal cues, Study I used EEG to examine the influence of a confirming (Congruent) versus contrasting (Incongruent) coherence of a verbal action intention with the same action demonstration on 18-month-olds’ conceptual action processing (as measured via mid-latency mean negative ERP amplitude) and motor activation (as measured via central mu-frequency band power). The action was demonstrated on a novel object that offered two action alternatives from a neutral position. We expected mid-latency ERP negativity to be enhanced in Incongruent compared to Congruent, because past EEG research has demonstrated enhanced conceptual processing for stimuli that mismatched rather than matched the semantic context (Friedrich & Friederici, 2010; Kaduk et al., 2016). Regarding motor activation, Csibra (2007) posited that the identification of a clear action goal constitutes a crucial basis for motor activation to occur. We therefore predicted reduced mu power (indicating enhanced motor activation) for Congruent relative to Incongruent, because in Congruent, the cues’ match provides unequivocal information about the model’s action goal, whereas in Incongruent, the conflict may render the model’s action goal more unclear. Unexpectedly, in the entire sample, 18-month-olds’ mid-latency ERP negativity during the observation of the same action demonstration did not differ significantly depending on whether this action was congruent or incongruent with the model’s verbal action intention. Yet, post hoc analyses revealed the presence of two subgroups of infants, each of which exhibited significantly different mid-latency ERP negativity for Congruent versus Incongruent, but in opposing directions.
The subgroups differed in their productive action-related language skills, with the linguistically more advanced infants exhibiting the expected response pattern of enhanced ERP mean negativity in Incongruent relative to Congruent, indicating enhanced conceptual processing of an action demonstration that was contrasted rather than confirmed by the verbal action context. As expected, central mu power in the entire sample was reduced in Congruent relative to Incongruent, indicating enhanced motor activation when the action demonstration was preceded by a confirming rather than a contrasting verbal action intention. This finding may indicate covert preparation for a preferential imitation of the congruent relative to the incongruent action (Filippi et al., 2016; Frey & Gerry, 2006). Overall, these findings are in line with the action-reconstruction account (Csibra, 2007), because they suggest a coherence-sensitive attention to and encoding of the same perceptual features of another’s behavior and thus a cognitive integration of intention-related verbal cues and behavioral cues. Yet, because the subgroup constellation in infants’ ERPs was only discovered post hoc, future research is clearly required to substantiate this finding. Also, future research should validate our interpretation that enhanced motor activation may reflect an electrophysiological marker of subsequent imitation by employing EEG and imitation in a within-subjects design. Study II built on Study I by investigating the impact of the coherence of a verbal cue and a behavioral cue on 18- and 24-month-olds’ action selection in an imitation study. When infants of both age groups observed a confirming (Congruent) or unrelated (Pseudo-word: the action demonstration was associated with a novel verb-like cue) coherence, they selectively imitated the demonstrated action over the non-demonstrated alternative action, with no difference between these two conditions.
These findings suggest that, as expected, infants’ action hierarchy was led by the demonstrated action when the verbal cue provided similar (Congruent) or no additional (Pseudo-word) intention-related information relative to a meaningful behavioral cue. These findings support the above-mentioned interpretation that enhanced motor activation during action observation may reflect a covert preparation for imitation (Study I). Interestingly, infants did not seem to benefit from the intention-highlighting effect of the verbal cue in Congruent, suggesting that the verbal cue had an unspecific (e.g., attention-guiding) effect on infants’ action selection. In contrast, when infants observed a contrasting (Incongruent) or clarifying (Failed-attempt: the model failed to manipulate the object but verbally announced a certain action intention) coherence, their action selection varied with age and also varied across the course of the experiment (block 1 vs. block 2). More specifically, the 24-month-olds made stronger use of the verbal cue for their action selection in block 1 than did the 18-month-olds. However, while the 18-month-olds’ use of the verbal cue increased across blocks, particularly in Incongruent, the 24-month-olds’ use of the verbal cue decreased across blocks. Overall, these results suggest that, as expected, infants’ action hierarchy in Incongruent (both age groups) and Failed-attempt (only 24-month-olds) drew on the verbal action intention, because in both age groups, infants emulated the verbal intention about as often as they imitated the demonstrated action, or even emulated the verbal action intention preferentially. Yet, these findings were confined to certain blocks. It may be argued that the younger age group had a harder time inferring and emulating the intended, yet never observed, action, because this requirement is more demanding in cognitive and motor terms.
These demands may explain why the 18-month-olds needed some time to take account of the verbal action intention. In contrast, it seems that the 24-month-olds, although demonstrating their in-principle capacity to take account of the verbal cue in block 1, lost trust in the model’s verbal cue, maybe because the verbal cue did not have predictive value for the model’s actual behavior. Supporting this interpretation, research on selective trust has demonstrated that infants already evaluate another’s reliability or competence based on how that model handles familiar objects (behavioral reliability) or labels familiar objects (verbal reliability; for reviews, see Mills, 2013; Poulin-Dubois & Brosseau-Liard, 2016). Relatedly, imitation research has demonstrated that the interpersonal aspects of a social-learning situation gain increasing relevance for infants during the second year of life (Gellén & Buttelmann, 2019; Matheson, Moore, & Akhtar, 2013; Uzgiris, 1981). It may thus be argued that when the 24-month-olds were repeatedly faced with a verbally unreliable model, they devalued the verbal cue as a signal of the model’s action intention and instead relied more heavily on alternative cues such as the behavioral cue (Incongruent) or the action context (e.g., object affordances, salience; Failed-attempt). Infants’ first-action latencies were higher in Incongruent and Failed-attempt than in both Congruent and Pseudo-word, and were also higher in Failed-attempt than in Incongruent. These latency findings thus indicate that situations involving a meaningful verbal cue that deviated from the behavioral cue are cognitively more demanding, resulting in a delayed initiation of a behavioral response. In sum, the findings of Study II suggest that both age groups were highly flexible in their integration of a verbal cue and a behavioral cue. Moreover, our results do not indicate a general superiority of either cue.
Instead, it seems to depend on the informational gain conveyed by the verbal cue whether it exerts a specific, intention-highlighting effect (Incongruent, Failed-attempt) or an unspecific (e.g., attention-guiding) effect (Congruent, Pseudo-word). Studies III and IV investigated the impact of another’s action-related emotional cues on 18-month-olds’ action selection. In Study III, infants observed a model who demonstrated two actions on a novel object in direct succession, combining one of the two actions with a positive (happy) emotional expression and the other action with a negative (sad) emotional expression. As expected, infants imitated the positively emoted (PE) action more often than the negatively emoted (NE) action. This preference arose from an increase in infants’ readiness to perform the PE action from the baseline period (prior to the action demonstrations) to the test period (following the action demonstrations), rather than from a decrease in their readiness to perform the NE action. The positive cue thus had a stronger behavior-regulating effect than the negative cue. Notably, infants’ more general object-directed behavior in terms of first-touch latencies remained unaffected by the emotional cues’ valence, indicating that infants had linked the emotional cues specifically to the corresponding action and not to the object as a whole (Repacholi, 2009). Also, infants’ looking times during the action demonstration did not differ significantly as a function of emotional valence and were characterized by a predominant attentional focus on the action/object rather than on the model’s face. Together with the findings on infants’ first-touch latencies, these results indicate a sensitivity to the notion that emotions can have very specific referents (referential specificity; Martin, Maza, McGrath, & Phelps, 2014).
In sum, Study III provided evidence for selective imitation based on another’s intention-related (particularly positive) emotional cues in an action-selection task, and thus indicates that infants’ action hierarchy responds flexibly to another’s emotional evaluation of observed actions. Following Repacholi (2009), we suggest that infants used the model’s emotional evaluation to re-appraise the corresponding action (effect), for instance in terms of desirability. Study IV followed up on Study III by investigating the role of the negative emotional cue for infants’ action selection in more detail. Specifically, we investigated whether a contrasting (negative) emotional cue alone would be sufficient to differentially rank the two actions along infants’ action hierarchy or whether infants instead require direct information about the model’s action intention (in the form of a confirming action-emotion pair) to align their action selection with the emotional cues. Also, we examined whether the absence of a direct behavior-regulating effect of the negative cue in Study III was due to the negative cue itself or to the concurrently available positive cue masking the negative cue’s potential effect. To this end, we split the demonstration of the two action-emotion pairs across two trials. In each trial, one action was thus demonstrated and emoted (PE, NE action), and one action was not demonstrated and un-emoted (UE action). For trial 1, we predicted that infants who observed a PE action demonstration would selectively imitate the PE action, whereas infants who observed a NE action demonstration would selectively emulate the UE action.
As to trial 2, we expected the complementary action-emotion pair to provide additional clarifying information, as the model’s emotional evaluation of both actions was now available, which should lead either to adaptive perseveration (if infants’ action selection in trial 1 had already drawn on the emotional cue) or to adaptive change (if infants’ action selection in trial 1 signaled a disregard of the emotional cue). As to trial 1, our findings revealed that, as expected, infants imitated the PE action more often than they emulated the UE action. As in Study III, this selectivity arose from an increase in infants’ propensity to perform the PE action from baseline to trial 1. Also as in Study III, infants performed the NE action about equally often in baseline and trial 1, which speaks against a direct behavior-regulating effect of the negative cue even when presented in isolation. However, after a NE action demonstration, infants emulated the UE action more often in trial 1 than in baseline, suggesting an indirect behavior-regulating effect of the negative cue. Yet, this indirect effect did not yield a selective emulation of the UE action, because infants performed both action alternatives about equally often in trial 1. Unexpectedly, infants’ action selection in trial 2 was unaffected by the emotional cue. Instead, infants perseverated their action selection of trial 1 in trial 2, irrespective of whether it was adaptive or non-adaptive with respect to the model’s emotional evaluation of the action. It seems that infants changed their strategy across trials, from an initial adherence to the emotional (particularly positive) cue towards bringing about a salient action effect (Marcovitch & Zelazo, 2009). In sum, Studies III and IV indicate a dynamic interplay of different action-selection strategies, depending on valence and presentation order.
Apparently, at least in infancy, action reconstruction as one basis for selective action performance reaches its limits when infants can only draw on indirect intention-related information (i.e., which action should be avoided). Overall, our findings favor the action-reconstruction account (Csibra, 2007), according to which actions are flexibly organized along a hierarchy, depending on inferential processes based on extra-motor intention-related information. At the same time, the findings question the direct-matching hypothesis (Rizzolatti & Craighero, 2004), according to which the identification (and pursuit) of action goals hinges on a direct simulation of another’s behavioral cues. Based on the studies’ findings, a preliminary working model is introduced, which seeks to integrate the two theoretical accounts by conceptualizing the routes that activation induced by social cues may take to eventually influence an infant’s action selection. Our findings indicate that it is useful to strive for a differentiated conceptualization of communicative cues, because they seem to operate at different places within the process of cue integration, depending on their potential to convey direct intention-related information. Moreover, we suggest that there is bidirectional exchange within each compound of adjacent sub-processes (i.e., between attention and encoding, and between encoding and response), and between the compounds. Hence, our findings highlight the benefits of a multi-measures approach when studying the development of infants’ social-cognitive abilities, because it provides a more comprehensive picture of how the concerted use of social cues from different domains influences infants’ processing and selection of instrumental actions. Finally, this thesis points to potential future directions to substantiate our current interpretation of the findings. Moreover, an extension to additional kinds of coherence is suggested to get closer to infants’ everyday world of experience.
Negotiations have become a central aspect of managerial life and influence a company’s profit significantly. This is why organizations generally endeavor to increase their negotiation performance. Over the last decades, besides other factors, research has found goal setting to be one of the best predictors of negotiation outcomes. Given the extent and complexity of multi-issue business negotiations, profit optimization by means of improving a company’s goal setting has a great deal of potential. However, developing goal-setting strategies before the actual negotiation is still rather uncommon in business practice. In order to provide professionals with empirical guidance, this work investigates three steps for the development and effective management of goal-setting strategies for business negotiations. Accordingly, this dissertation contains three papers, each dealing with one specific step. The first paper explores the characterization of social and economic outcomes in different business relationship types at the beginning of the relationship and the development of these outcomes toward the actual status quo. The second paper takes the number of goals into account for goal-setting strategies. This paper uses the two dimensions goal scope and goal difficulty to investigate the relevance and potential of combining different levels of these dimensions in multi-issue negotiations. To this end, a large experiment was conducted measuring the impact on individual and joint negotiation outcomes, and on the impasse rate. The third paper analyzes the type and orientation of negotiation goals. When the set of negotiation issues has integrative potential, the opportunity to increase joint gains arises. To what extent negotiators pursue the integrative potential depends largely on their goal orientation.
A quantitative analysis with practitioners was used to examine the influence that business negotiations’ situative and organizational factors have on the negotiators’ goal orientation. The dissertation closes with implications for practice, limitations of the work, and ideas for future research.
The idea of critical childhood studies is a relatively young disciplinary undertaking in eastern Africa, and consequently many lines of inquiry have not yet been carried out. The field is a potentially important socio-political marker of some of the narratives that have emerged out of eastern Africa. To this end, my research seeks out an archaeology of childhood in eastern Africa. A monochromatic hue has often painted the eastern African childhood: this broad stroke portrays the childhood as characterized by want. The image of the eastern African childhood is composed of the war child, poverty, disease, and aid dependency. The pitfall of this consciousness is that it erases the differentiated and pluralist nature of the eastern African childhood. I therefore hypothesise that childhood is a discourse through which institutional vectors become conduits of certain statement-making, both process-wise and content-wise. As such, a critical childhood study is a theatre for staging and unearthing its joys, tribulations, cultural constructions, and even political interventions. To this end, childhood and its literatures not only reflect but also contribute to meaning-making and the worldliness thereof. In an attempt to move away from an un-nuanced, often monodirectional depiction, I seek to present a chronologically synchronic and diachronic analysis of childhood in eastern Africa. Accordingly, I excavate a chronological construction of childhood within this geopolitical region. The main conceptual anchorage is Francis Nyamnjoh, who tells of the African occupying a life on convivial frontiers. He theorises an Africa that is involved in technologies of self-definition that privilege conversations, fluidity of being, and relational connections on a globalised scale. I also appropriate the notion of Bula Matadi from the Congo as a decolonialist epistemological exercise to break apart polarising representations and practices of childhood in eastern Africa.
This opens a space for an unbounded reconfiguration of childhood in eastern Africa. The book works on and with archival matter in a cross-disciplinary manner, ranging from pre-colonial to post-colonial eastern Africa. It is an exploration of the trajectory of the discourse of childhood in eastern Africa, eclectically investigating childhood in both fictional and non-fictional representations.
Recent advances in microscopy have led to an improved visualization of different cell processes. Yet, this also leads to a higher demand for tools which can process images in an automated and quantitative fashion. Here, we present two applications that were developed to quantify different processes in eukaryotic cells which rely on the organization and dynamics of the cytoskeleton. In plant cells, microtubules and actin filaments form the backbone of the cytoskeleton. These structures support cytoplasmic streaming, cell wall organization, and trafficking of cellular material to and from the plasma membrane. To better understand the underlying mechanisms of cytoskeletal organization, dynamics, and coordination, frameworks for their quantification are needed. While this is fairly well established for microtubules, the actin cytoskeleton has remained difficult to study due to its highly dynamic behaviour. One aim of this thesis was therefore to provide an automated framework to quantify and describe actin organization and dynamics. We used the framework to represent actin structures as networks and examined the transport efficiency in Arabidopsis thaliana hypocotyl cells. Furthermore, we applied the framework to determine the growth mode of cotton fibers and compared the actin organization in wild-type and mutant cells of rice. Finally, we developed a graphical user interface for easy usage. Microtubules and the actin cytoskeleton also play a major role in the morphogenesis of epidermal leaf pavement cells. These cells have highly complex and interdigitated shapes which are hard to describe in a quantitative way. While the relationship between microtubules, the actin cytoskeleton, and shape formation is the object of many studies, it is still not clear how and whether the cytoskeletal components predefine indentations and protrusions in pavement cell shapes.
To understand the underlying cell processes which coordinate cell morphogenesis, a quantitative shape descriptor is needed. Therefore, the second aim of this thesis was the development of a network-based shape descriptor which captures global and local shape features, facilitates shape comparison and can be used to evaluate shape complexity. We demonstrated that our framework can be used to describe and compare shapes from various domains. In addition, we showed that the framework accurately detects local shape features of pavement cells and outperforms competing approaches. In the third part of the thesis, we extended the shape description framework to describe pavement cell shape features at the tissue level by proposing different network representations of the underlying imaging data.
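The transport-efficiency analysis mentioned above can be illustrated with a toy example. The sketch below is not the thesis framework; it merely shows one standard way to quantify transport on a weighted network, the global efficiency (the mean inverse shortest-path distance over all node pairs), applied to an invented three-node "filament" graph.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path lengths from source in a weighted adjacency dict."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue
        for neighbour, weight in graph[node].items():
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

def global_efficiency(graph):
    """Mean of 1/d(i, j) over all ordered node pairs; unreachable pairs count 0."""
    nodes = list(graph)
    n = len(nodes)
    total = 0.0
    for source in nodes:
        dist = dijkstra(graph, source)
        total += sum(1.0 / dist[t] for t in nodes if t != source and t in dist)
    return total / (n * (n - 1))

# Invented toy network: nodes are filament junctions, weights are lengths.
toy = {
    "a": {"b": 1.0, "c": 2.0},
    "b": {"a": 1.0, "c": 1.0},
    "c": {"a": 2.0, "b": 1.0},
}
print(round(global_efficiency(toy), 3))  # 0.833
```

A denser, better-connected network yields an efficiency closer to 1, which is one way such a score can serve as a proxy for how well a cytoskeletal network supports transport.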
Error correction in coding theory is concerned with detecting and correcting errors in the transmission and storage of messages.
To this end, the message is encoded into a codeword by adding redundant information.
Coding schemes must satisfy various requirements, such as the maximum number of correctable errors and the speed of correction.
A widely used scheme is the BCH code, which is employed industrially for codes correcting up to four errors. A drawback of these codes is that the hardware latency for computing the error positions grows with the code length.
This dissertation presents a new coding scheme in which a long code is produced by a special arrangement of shorter BCH codes. This arrangement is governed by a further special code, an LDPC code, which is designed for fast error detection.
For this purpose, a new construction method is introduced that yields a code of arbitrary length with an arbitrary, prescribable number of correctable errors. In addition to the fast error-detection procedure, the construction also provides an easily and quickly derivable procedure for encoding a message into a codeword. To date, this is unique in the literature on LDPC codes.
Based on the construction of an LDPC code, a method is presented for combining it with a BCH code, arranging the BCH code in blocks. Besides the general description of this code, a concrete code for 2-bit error correction is described. It consists of two parts, which are described and compared in several variants. For certain lengths of the BCH code, a problem in the correction is identified that follows an algebraic rule.
The BCH code is defined in very general terms, but under certain conditions there exists a narrow-sense BCH code, which constitutes the standard. In this dissertation, the narrow-sense BCH code is modified so that the algebraic problem in 2-bit error correction no longer arises in the combination with the LDPC code. It is shown that after the modification the new code is still a BCH code in the general sense, able to correct 2-bit errors and detect 3-bit errors. For the hardware implementation of the error correction, it is further shown that the latency of the modified code is lower than that of the original BCH code and that there is further potential for improvement.
The final chapter shows that this modified code of arbitrary length is suitable for combination with the LDPC code, so that the scheme not only covers a wider range of code lengths but, owing to the faster decoding, also offers further advantages over a narrow-sense BCH code.
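The modified BCH/LDPC construction itself is not reproduced here, but the basic mechanism it builds on, syndrome decoding, can be sketched with the simplest possible example: a Hamming(7,4) code, whose syndrome directly spells out the position of a single bit error. The dissertation generalizes this principle to 2-bit correction and block arrangements.

```python
# Parity-check matrix H for Hamming(7,4): column i holds the binary digits
# of i + 1, so the syndrome of a corrupted word encodes the error position.
H = [[(i + 1) >> bit & 1 for i in range(7)] for bit in range(3)]

def syndrome(word):
    """3-bit syndrome of a 7-bit word over GF(2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(word):
    """Return the word with a single bit error (if any) corrected."""
    s = syndrome(word)
    position = s[0] + 2 * s[1] + 4 * s[2]  # 0 means: no error detected
    if position:
        word = word.copy()
        word[position - 1] ^= 1
    return word

codeword = [0, 0, 0, 0, 0, 0, 0]  # the all-zero word is a valid codeword
received = codeword.copy()
received[4] ^= 1                  # bit 5 flipped in transit
assert correct(received) == codeword
```

A BCH code replaces this binary parity-check structure with one over a larger finite field, which is what makes multi-bit correction possible, at the cost of the more expensive error-position computation the dissertation seeks to accelerate.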
Seismological agencies play an important role in seismological research and seismic hazard mitigation by providing source parameters of seismic events (location, magnitude, mechanism) and keeping these data accessible in the long term. The availability of catalogues of seismic source parameters is an important requirement for the evaluation and mitigation of seismic hazards, and the catalogues are particularly valuable to the research community as they provide fundamental long-term data in the geophysical sciences. This work is motivated by the rising demand for more robust and efficient methods of routine source parameter estimation and, ultimately, for the generation of reliable catalogues of seismic source parameters. Specifically, the aim of this work is to develop methods that determine the hypocentre location and the temporal evolution of seismic sources more accurately from regional and teleseismic observations, and to investigate the potential of these methods for integration into near real-time processing.
To achieve this, a location method has been developed that considers several events simultaneously and improves the relative location accuracy among nearby events. The method reduces the biasing effects of lateral velocity heterogeneities (or, equivalently, compensates for limitations and inaccuracies in the assumed velocity model) by calculating a set of timing corrections for each seismic station. The systematic errors introduced into the locations by inaccuracies in the assumed velocity structure can thus be corrected without explicitly solving for a velocity model. Applications to sets of multiple earthquakes in complex tectonic environments with strongly heterogeneous structure, such as subduction zones and plate boundary regions, demonstrate that this relocation process significantly improves the hypocentre locations compared to standard locations.
To meet the computational demands of this location process, a new open-source software package has been developed that allows for the efficient relocation of large sets of seismic events using arrival time data. Building on that, a flexible location framework is provided which can be tailored to various application cases on local, regional and global scales. The latest version of the software distribution, including source code, a user guide, an example data set and a change history, is freely available to the community.
The developed relocation algorithm has been slightly modified and its performance evaluated in simulated near real-time processing. It has been demonstrated that the proposed technique significantly reduces the bias in routine locations and enhances the ability to locate lower-magnitude events using only regional arrival data.
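The idea of absorbing velocity-model errors into per-station timing corrections can be sketched in a toy setting. The following is not the thesis algorithm or software: it alternates a brute-force grid-search location step with a station-correction update (the mean travel-time residual per station over all events), using an invented homogeneous-velocity geometry and assuming known origin times.

```python
import math

STATIONS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
VELOCITY = 1.0  # homogeneous medium; origin times assumed known (zero)

def travel_time(x, y, station):
    return math.hypot(x - station[0], y - station[1]) / VELOCITY

def locate(observed, corrections):
    """Grid search for the location minimizing squared arrival-time residuals."""
    best = (float("inf"), None)
    for ix in range(51):
        for iy in range(51):
            x, y = ix / 5.0, iy / 5.0  # 0.2 km grid spacing
            misfit = sum(
                (t - c - travel_time(x, y, s)) ** 2
                for t, c, s in zip(observed, corrections, STATIONS)
            )
            best = min(best, (misfit, (x, y)))
    return best[1]

def relocate(all_observed, iterations=8):
    """Alternate single-event location with station-correction estimation."""
    corrections = [0.0] * len(STATIONS)
    locations = []
    for _ in range(iterations):
        locations = [locate(obs, corrections) for obs in all_observed]
        # New correction: mean raw residual of each station over all events.
        corrections = [
            sum(obs[i] - travel_time(*loc, STATIONS[i])
                for obs, loc in zip(all_observed, locations)) / len(locations)
            for i in range(len(STATIONS))
        ]
    return locations, corrections

# Synthetic data: exact travel times plus a fixed timing bias per station.
truth = [(3.0, 4.0), (6.0, 2.0), (5.0, 7.0)]
bias = [0.3, -0.2, 0.1, -0.1]
data = [[travel_time(*e, s) + b for s, b in zip(STATIONS, bias)] for e in truth]

locations, corrections = relocate(data)
```

Because the biases are common to all events at a station, they accumulate in the per-station mean residual and are progressively removed, which is the intuition behind correcting for an inaccurate velocity model without solving for it explicitly.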
Finally, returning to the emphasis on global seismic monitoring, an inversion framework has been developed to determine the seismic source time function through direct waveform fitting of teleseismic recordings. The inversion technique can be applied systematically to moderate-size seismic events and has the potential to be used in near real-time applications. It is exemplified by application to an unusual seismic event: the 2017 North Korean nuclear explosion.
The presented work and application case studies in this dissertation represent the first step in an effort to establish a framework for automatic, routine generation of reliable catalogues of seismic event locations and source time functions.
Galaxies are gravitationally bound systems of stars, gas, dust and - probably - dark matter. They are the building blocks of the Universe. The morphology of galaxies is diverse: some galaxies have structures such as spirals, bulges, bars, rings, lenses or inner disks, among others. The main processes that characterise galaxy evolution can be separated into fast violent events that dominated evolution at earlier times and slower processes, which constitute a phase called secular evolution, that became dominant at later times. Internal processes of secular evolution include the gradual rearrangement of matter and angular momentum, the build-up and dissolution of substructures or the feeding of supermassive black holes and their feedback. Galaxy bulges – bright central components in disc galaxies –, on one hand, are relics of galaxy formation and evolution. For instance, the presence of a classical bulge suggests a relatively violent history. In contrast, the presence of a disc-like bulge instead indicates the occurrence of secular evolution processes in the main disc. Galaxy bars – elongated central stellar structures –, on the other hand, are the engines of secular evolution. Studying internal properties of both bars and bulges is key to comprehending some of the processes through which secular evolution takes place. The main objectives of this thesis are (1) to improve the classification of bulges by combining photometric and spectroscopic approaches for a large sample of galaxies, (2) to quantify star formation in bars and verify dependencies on galaxy properties and (3) to analyse stellar populations in bars to aid in understanding the formation and evolution of bars. Integral field spectroscopy is fundamental to the work presented in this thesis, which consists of three different projects as part of three different galaxy surveys: the CALIFA survey, the CARS survey and the TIMER project.
The first part of this thesis constitutes an investigation of the nature of bulges in disc galaxies. We analyse 45 galaxies from the integral-field spectroscopic survey CALIFA by performing 2D image decompositions, growth curve measurements and spectral template fitting to derive stellar kinematics from CALIFA data cubes. From the obtained results, we present a recipe to classify bulges that combines four different parameters from photometry and kinematics: the bulge Sersic index nb, the concentration index C20;50, the Kormendy relation and the inner slope of the radial velocity dispersion profile ∇σ. The results of the different approaches are in good agreement and allow a safe classification for approximately 95% of the galaxies. We also find that our new ‘inner’ concentration index performs considerably better than the traditionally used C50;90 and, in combination with the Kormendy relation, provides a very robust indication of the physical nature of the bulge. In the second part, we study star formation within bars using VLT/MUSE observations of 16 nearby (0.01 < z < 0.06) barred active-galactic-nucleus (AGN) host galaxies from the CARS survey. We derive spatially resolved star formation rates (SFR) from Hα emission line fluxes and perform a detailed multi-component photometric decomposition on images derived from the data cubes. We find a clear separation into eight star-forming (SF) and eight non-SF bars, which we interpret as an indication of a fast quenching process. We further report a correlation between the SFR in the bar and the shape of the bar surface brightness profile: only the flattest bars (nbar < 0.4) are SF. Both parameters are found to be uncorrelated with Hubble type. Additionally, owing to the high spatial resolution of the MUSE data cubes, we are able for the first time to dissect the SFR within the bar and analyse trends parallel and perpendicular to the bar major axis.
Star formation is 1.75 times stronger on the leading edge of a rotating bar than on the trailing edge and decreases radially. Moreover, from testing an AGN feeding scenario, we report that the SFR of the bar is uncorrelated with AGN luminosity. Lastly, we present a detailed analysis of star formation histories and chemical enrichment of stellar populations (SP) in galaxy bars. We use MUSE observations of nine very nearby barred galaxies from the TIMER project to derive spatially resolved maps of stellar ages and metallicities, [α/Fe] abundances and star formation histories, as well as Hα as a tracer of star formation. Using these maps, we explore in detail variations of SP perpendicular to the bar major axes. We find observational evidence for a separation of SP, presumably caused by an evolving bar. Specifically, intermediate-age stars (∼ 2-6 Gyr) get trapped on more elongated orbits, forming a thinner bar, while old stars (> 8 Gyr) form a rounder and thicker bar. This evidence is further strengthened by very similar results obtained from barred galaxies in the cosmological zoom-in simulations of the Auriga project. In addition, we find imprints of typical star formation patterns in barred galaxies on the youngest populations (< 2 Gyr), which continuously become more dominant from the major axis towards the sides of the bar. The effect is slightly stronger on the leading side. Furthermore, we find that bars are on average more metal-rich and less α-enhanced than the inner parts of the discs that surround them. We interpret this result as an indication of a more prolonged or continuous formation of the stars that shape the bar, as compared to shorter formation episodes in the disc within the bar region.
The current thesis contains the results from two experimental and one modelling study focused on the topic of ductile strain localization in the presence of material heterogeneities. Localization of strain in the high temperature regime is a well known feature of rock deformation occurring in nature at different scales and in a variety of lithologies. Large scale shear zones at the roots of major crustal fault zones are considered responsible for the activity of plate tectonics on our planet. A large number of mechanisms are suggested to be associated with strain softening and nucleation of localization. Among these, the presence of material heterogeneities within homogeneous host rocks is frequently observed in field examples to trigger shear zone development. Despite a number of studies conducted on the topic, the mechanisms controlling initiation and evolution of localization are not fully understood yet. We investigated, experimentally and by means of numerical modelling, phenomenological and microphysical aspects of high temperature strain localization in a homogeneous body containing single and paired inclusions of weaker material. A monomineralic carbonate system composed of Carrara marble (homogeneous, strong matrix) and Solnhofen limestone (weak planar inclusions) is selected for our studies based on its versatility as an experimental material and on the frequent occurrence of carbonate rocks at the core of natural shear zones.
To explore the influence of different loading conditions on heterogeneity-induced high temperature shear zones, we conducted torsion experiments under constant twist (deformation) rate and constant torque (stress) conditions in a Paterson-type deformation apparatus on hollow cylinders of marble containing single planar inclusions of limestone. At the imposed experimental conditions (900 ◦C temperature and 400 MPa confining pressure) both materials deform plastically and the marble is ≈ 9 times stronger than the limestone. The viscosity contrast between the two materials induces a perturbation of the stress field within the marble matrix at the tip of the planar inclusion. Early along the deformation path (at bulk shear strains ≈ 0.3), a heterogeneous distribution of strain can be observed under both loading conditions and a small area of incipient strain localization forms at the tip of the weak limestone inclusion. Strongly deformed grains, incipient dynamic recrystallization and a weak crystallographic preferred orientation characterize the marble within an area a few mm in front of the inclusion. As the bulk strain is increased (up to γ ≈ 1), the area of microstructural modification expands along the inclusion plane, the texture strengthens and grain size refinement by dynamic recrystallization becomes pervasive. Locally, evidence of coexisting brittle deformation is also observed regardless of the imposed loading conditions. A shear zone is effectively formed within the deforming Carrara marble, its geometry controlled by the plane containing the thin plate of limestone. Thorough microstructural and textural analyses, however, do not reveal substantial differences in the mechanisms or magnitude of strain localization under the different loading conditions.
We conclude that, in the presence of material heterogeneities capable of inducing strain softening, the imposed loading conditions do not affect ductile localization in its nucleating and transient stages.
As the ultimate goal of experimental rock deformation is the extrapolation of results to geologically relevant time and space scales, we developed 2D numerical models reproducing (and benchmarked against) our experimental results. Our cm-scale models implement a first-order strain-dependent softening law to reproduce the effect of rheological weakening in the deforming material. We successfully reproduced the local stress concentration at the inclusion tips and the strain localization initiated in the marble matrix. The heterogeneous distribution of strain and its evolution with imposed bulk deformation (i.e. the shape and extent of the nucleating shear zone) are observed to depend on the degree of softening imposed on the deforming matrix. When a second (artificial) softening step is introduced at elevated bulk strains in the model, a secondary high strain layer forms at the core of the initial shear zone, analogous to the development of ultramylonite bands in high strain natural shear zones. Our results not only reproduce the nucleation and transient evolution of a heterogeneity-induced high temperature shear zone with high accuracy, but also confirm the importance of introducing reliable softening laws capable of mimicking strain weakening into numerical models of crustal-scale ductile processes.
Material heterogeneities that induce strain localization in the field often consist of brittle precursors (joints and fractures). More generally, the interaction of brittle and ductile deformation mechanisms and its effect on the localization of strain have long been a key topic in the structural geology community. The positive feedback between (micro)fracturing and ductile strain localization is a well recognized effect in a number of field examples. We experimentally investigated the influence of brittle deformation on the initiation and evolution of high temperature shear zones in a strong matrix containing pairs of weak material heterogeneities. Our Carrara marble-Solnhofen limestone inclusion system was tested in triaxial compression under constant strain rate and high temperature (900 ◦C) conditions in a Paterson deformation apparatus. The inclusion pairs were arranged in non-overlapping step-over geometries of either compressional or extensional nature. Experimental runs were conducted at different confining pressures (30, 50, 100 and 300 MPa) to induce various amounts of brittle deformation within the marble matrix. At low confinement (30 and 50 MPa) abundant brittle deformation is observed in all configurations, but the spatial distribution of cracks depends on the kinematics of the step-over region: cracks are concentrated along the shearing plane between the inclusions in the extensional samples, but broadly distributed around the inclusions and outside the step-over region in the compressional configuration. Accordingly, brittle-assisted ductile processes tend to localize deformation along the inclusion plane in the extensional geometry, or to distribute it widely across large areas of the matrix in the compressional step-over. At pressures of 100 and 300 MPa fracturing is mostly suppressed in both configurations and strain is accommodated almost entirely by viscous creep.
In extensional samples this leads to progressive de-localization with increasing confinement. Our results show that, while ductile localization of strain is indeed more efficient where assisted by brittle processes, the latter are effective only if they are themselves heterogeneously distributed, which is ultimately a function of the local stress perturbations.
This thesis investigates the extent to which physics experiments elicit flow experiences in learners. Flow is regarded as a source of motivation and as a path to joy and happiness. In view of the frequently cited shortage of skilled workers in scientific and technical professions, raising motivation in science subjects is important: despite performance gains in international comparative tests, considerably fewer students in Germany wish to take up such a profession than in other industrialized countries. It is therefore crucial to inspire students for science and technology subjects as early as possible, and in particular to generate flow experiences in physics lessons, a subject that is often thoroughly disliked.
This thesis examines the flow experience of university students in classical laboratory experiments and in FELS (Forschend-Entdeckendes Lernen mit dem Smartphone, i.e. inquiry-based learning with the smartphone) as learning environments. FELS is a learning environment adapted to the students' everyday world, in which they use smartphones to investigate their own surroundings experimentally.
The results show that both classical laboratory experiments and smartphone-based experiments conducted in everyday settings generate flow experiences. The smartphone-based experiments, however, cause hardly any feelings of stress.
The findings of this work provide a first approach that should be extended by follow-up studies.
The practical driving test serves to assess and evaluate the driving competence of driving licence applicants. The conclusions this test allows about the level of driving competence are intended, in particular, to support the applicant's further development. So far, applicants receive a list of the most important errors that led to failure only if they fail the practical driving test. For targeted further learning, however, the results of the performance assessment and evaluation must be fed back to all novice drivers (regardless of the test outcome) in a pedagogically sound manner that follows the didactic principles of examination.
The aim of the present work is to develop the design principles and an implementation proposal for a competence-based, learning-supportive feedback system for the practical driving test. This feedback system is to be tried out in practice. In addition, an applicant survey on user satisfaction is to yield insights for further development. The development and trial process of the optimized feedback system can be divided into three project phases:
1. In the course of the optimization of the practical driving test, a new feedback system was developed in the first project phase, consisting of a competence-based oral debriefing and supplementary written feedback, including further learning guidance, for all applicants. This feedback system is intended, on the one hand, to help novice drivers better understand the content of the performance evaluation and to enable targeted further learning. On the other hand, it is meant to motivate applicants to keep working on the identified competence deficits and thereby to promote learning gains.
2. In the second project phase, the feedback system was tried out in several model regions of Germany in roughly 9,000 real practical driving tests. The driving licence applicants who took part in an optimized practical driving test in the model regions, and thus received written feedback according to the optimized specifications or an individual access code to the download area, were invited to take part in a survey. The survey primarily captured aspects of acceptance and learning effectiveness from the applicants' perspective. The goal was to examine the quality of the traffic-pedagogical design of the feedback system and its usefulness in order to develop the trialled feedback further. The applicant survey was conducted online with a standardized questionnaire.
3. In the third project phase, the trial and survey results served to derive conclusions for the further development of the feedback system. The available results of the field trial indicate that providing detailed written feedback on performance in the practical driving test is, on the whole, regarded as useful and rewarding. It also became clear, however, that there is still room for optimization in the implementation. Following the trial, the written feedback was therefore comprehensively revised on the basis of the user experiences gathered during the field trial, and a revised version was presented.
The result of this work is an empirically grounded, field-tested feedback system, developed in several steps, that enables differentiated competence feedback. In the future, this comprehensive feedback will, on the one hand, provide an improved starting point for a possible repeat test; on the other hand, the strengths and weaknesses it identifies allow applicants to use the feedback for further learning even after passing the test.
The political legacy of the Martinican poet, novelist and philosopher Édouard Glissant (1928–2011) is the subject of an ongoing debate among postcolonial literary scholars. Responding to an influential view shaping this debate, namely that Glissant's work can be categorised into an early political and a late apolitical phase, this dissertation claims that this division rests on a narrow conception of 'engaged political writing' that obstructs a more comprehensive view of the changing political strategies Glissant pursued throughout his life. Proceeding from this conceptual basis, the dissertation is concerned with re-reading the dimensions of Glissant's work that have hitherto been dismissed as apolitical, literary or poetic, with the aim of conceptualising the politics of relation as an integral part of his overall poetic project. In methodological terms, the dissertation therefore proposes a relational reading of Glissant's life-work across literary genres and epochs, as well as across the conventional divisions between political thought, writing and activism. This perspective is informed by Glissant's philosophy of relation, and draws on a conception of political practice that includes both explicit engagements with established political systems and institutions and literary and cultural interventions geared towards their transformation and the creation of alternatives to them. Theoretically, the work thus combines a poststructuralist lens on the conceptual difference between 'politics' and 'the political' with arguments for an inherent political quality of literature, and with perspectives from the Afro-Caribbean radical tradition, in which writers and intellectuals have historically sought to combine discursive interventions with organisational actions.
Applying this theoretical angle to the analysis of Glissant's politics of relation results in an interdisciplinary research framework designed to explore the synergies between postcolonial political and literary studies.
In order to comprehensively describe Glissant's politics of relation without recourse to evolutionary or digressive models, the concept of an intellectual marronage is proposed as a framework to map the strategies making up Glissant's political archive. Drawing on a variety of historical, political-theoretical and literary sources, intellectual marronage is understood as a mode of radical resistance to the neocolonial subjugation for which the plantation system stands historically and metaphorically, as an inherently innovative political practice invested in the creation of communities marked by relational ontologies, and as a commitment to fostering an imagination of the world and the human that differs fundamentally from the Enlightenment paradigm. This specific conception of intellectual marronage forms the basis on which three key strategies that consistently shape Glissant's political practice are identified and mapped. They revolve around Glissant's engagement with history (chapter 2), his commitment to fostering an imagination of the Tout-Monde (whole-world) as a political point of reference (chapter 3), and the continuous exploration of alternative forms of community on the levels of the island, the archipelago and the Tout-Monde (chapter 4). Together these strategies constitute Glissant's personal politics of relation. Its abstract characteristics can be put into productive conversation with related theoretical traditions invested in exploring the political potentials of fugitivity (chapter 5), as well as with the work of other postcolonial actors whose holistic practice warrants description as a politics of relation (chapter 6).
Grenzen des Organisierbaren (The Limits of the Organizable)
(2020)
Interessiert man sich für den gesellschaftlichen Einfluss der Organisationssoziologie auf die Praxis des Organisierens, so muss der Befund ernüchtern. Stärker als auf organisationssoziologische Wissensbestände wird in Unternehmen oder Verwaltungen auf aktuelle Managementtrends rekurriert. Man könnte diesen Befund beklagen und als fehlerhafte Rezeption der Praxis beiseitelegen. Alternativ ließe sich aber auch diskutieren, welchen Beitrag die Disziplin selbst zu dieser Rezeption leistet. Mit einer solchen Diskussion begibt man sich fast unweigerlich auf einen schwierigen Pfad. Zum einen kann die Soziologie gerade dann, wenn sie ihren Blick auf die Erforschung von Unternehmen oder Verwaltungen richtet, nicht die von der Praxis erwarteten positiven Antworten liefern. Gerade die Organisationssoziologie begibt sich zum anderen jedoch in direkte Konkurrenz zu Nachbardisziplinen wie die Betriebswirtschaftslehre oder die Organisationspsychologie, die die Rezeptionsfähigkeit ihrer Wissensbestände im Praxisfeld in den letzten Jahren unter Beweis gestellt haben. Die Erwartungen an die Umsetzbarkeit wissenschaftlicher Erkenntnisse in der Praxis sind dadurch gestiegen. Eine Soziologie, die ihre Erkenntniskraft in der kritischen Distanz sieht, mag das skeptisch stimmen. Es gilt daher, die Frage zu beantworten, wie die Praxisrelevanz einer Wissenschaft des zweiten Blicks auf Organisationen konkret aussehen kann. Diesem Vorhaben widmet sich das vorgelegte Promotionsprojekt. Die in der kumulativen Dissertation versammelten Beiträge verstehen sich allesamt als Erkundungen und Erprobungen der Praxisrelevanz der Organisationssoziologie anhand aktueller Managementfragen in Unternehmen. Die These lautet dabei, dass sich diese Praxisrelevanz nur als Kritik entfalten kann. Eine solche Kritik kann dabei zwei grundsätzliche Formen annehmen: Als Strukturkritik bezieht sie sich auf konkrete Organisationen, deren spezifische Eigenlogiken und strukturelle Verstrickungen. 
Sie beschreibt dabei für den Einzelfall Funktionen und Folgen von Erwartungsstrukturen, die sich dann z. B. fallvergleichend generalisieren oder typisieren lassen. Organisationssoziologische Strukturkritik kann sich damit sowohl als vergleichender, praxissensibler Forschungsansatz realisieren, als auch die Grundlage einer soziologisch orientierten Beratung bilden. Als Schematakritik richtet sie sich gegen verkürzte Vorstellungen des Organisierens, die sich etwa in Managementmoden finden lassen. Dem Kumulus zugrunde liegen fünf Beiträge, die konkrete Ausprägungen beider Kritikformen ausloten. Der erste Beitrag „Datafizierung und Organisation“ zeigt, wie Schematakritik an Nachbardisziplinen aussehen kann, indem er Organisation als blinden Fleck der Digitalisierungsforschung diskutiert und Anschlussstellen für interdisziplinäre Forschung ausweist. Daher liefert der Beitrag einen systematischen Zugang zu organisationalen Implikationen der Digitalisierung. Neben der Anreicherung der Digitalisierungsforschung kann die entwickelte Argumentation auch für die Praxis Erkenntniskraft haben, indem z. B. problematisiert wird, dass im Managementdiskurs um Digitalisierung überzogene Rationalisierungserwartungen herrschen oder durch digitale Infrastrukturen entstehende Informalitäten systematisch ausgeblendet werden Der zweite Beitrag „Führung als erfolgreiche Einflussnahme in kritischen Momenten“ legt eine Umdeutung des populären Managementbegriffs Führung durch Schematakritik vor. Damit trägt er in mehrfacher Hinsicht zu einer praxisrelevanten Neubestimmung von Führung bei. Für Führungskräfte ermöglicht er beispielsweise die Einsicht, dass sie ihre Führungsaufgaben auf kritische Momente konzentrieren können und postuliert die Abkehr vom heroischen Bild des dauerhaft Führenden. 
This reinterpretation can also be a relief for managers in organizations, as it points to the connection between an organization's constitution and the chances of leadership, thereby opening up options for organizational design beyond leadership and personnel development. For organizational research, the contribution offers a theoretically integrated concept of leadership that defines leadership both organizationally and situationally. It thus stands as an example of an organizational-sociological critique of schemata that reinterprets established management concepts. The third contribution criticizes the concept of transformational leadership as a management fashion and shows how the leadership model it contains shifts organizational problems onto organization members (here: managers) by creating moral categories. On the one hand, it presents an organizational-sociological critique of the popular management concept of transformational leadership. On the other hand, the contribution uses systems-theoretical concepts such as elementary behavior, morality, and role separation to demonstrate that organizational-sociological thinking can enrich the management discourse by exposing abridgements and simplifications and by offering alternative approaches to analysis and design. This can also find a hearing in the discourse of practice, because one may assume that the promises of salvation attached to off-the-shelf solutions are accompanied by disappointments for which organizational sociology can supply explanations. The possibilities and limits of structural critique are discussed in the final two contributions. The potential of structural critique for sociologically oriented consulting of organizations is explored in the contribution "Die schwierige Liaison von Organisationssoziologie und Praxisbezug am Beispiel der Beratung" [The Difficult Liaison of Organizational Sociology and Practical Relevance, Using the Example of Consulting]. Starting from the theory-practice complex, it examines what sociological practical relevance can look like in the field of consulting.
To this end, the contribution systematizes organizational-sociological approaches to consulting and outlines what a genuinely sociological consulting approach could look like. The final contribution presents the outlines of a methodology for structure-critical research and illustrates it with a completed research project on management fashions. Drawing on research in a manufacturing company, it shows what structure-critical research can look like in concrete terms. Such research faces three challenges in the research process: gaining high-quality access to the field, developing a research question that is instructive for both research and practice, and feeding the results back into the field. The proposed methodology of structure-critical organizational research can be specified along these three moments of field access, initial research question, and feedback of results, in their factual, temporal, and social dimensions.
Grenzen des Organisierbaren
(2020)
The Arctic region is especially impacted by global warming: temperatures in high-latitude regions have already increased and are predicted to rise further at rates above the global average. This is particularly critical for Arctic soils and the shallow shelves of the Arctic Ocean, as both are underlain by permafrost. Perennially frozen ground is a habitat for a large number and great diversity of viable microorganisms, which can remain active even under freezing conditions. Warming and thawing of permafrost make trapped soil organic carbon more accessible to microorganisms, which can transform it into the greenhouse gases carbon dioxide, methane, and nitrous oxide. In addition, thawing of the frozen ground is assumed to stimulate microbial activity and carbon turnover. Together, these processes can lead to a positive feedback loop of warming and greenhouse gas release.
Submarine permafrost covers most areas of the Siberian Arctic Shelf and contains a large though unquantified carbon pool. However, submarine permafrost is affected not only by changes in the thermal regime but also by drastic changes in geochemical composition, as it formed under terrestrial conditions and was inundated by Holocene sea-level rise and coastal erosion. Seawater infiltration into permafrost sediments increased the pore-water salinity and thus caused thawing of permafrost in the upper sediment layers even at subzero temperatures. The permafrost below, which was not affected by seawater, remained ice-bonded but warmed through seawater heat fluxes.
The objective of this thesis was to study microbial communities in submarine permafrost, with a focus on their response to seawater influence and long-term warming, using a combined approach of molecular-biological and physicochemical analyses. Microbial abundance, community composition and structure, and diversity were investigated in drill cores from two locations in the Laptev Sea that had been subjected to submarine conditions for centuries to millennia. Microbial abundance was measured through total cell counts and copy numbers of the 16S rRNA gene and of functional genes, the latter comprising genes indicative of methane production (mcrA) and sulfate reduction (dsrB). The microbial community was characterized by high-throughput sequencing of the 16S rRNA gene. Physicochemical analyses included the determination of the pore-water geochemical and stable isotopic composition, which was used to describe the degree of seawater influence. One major outcome of the thesis is that submarine permafrost stratified into distinct so-called pore-water units both centuries and millennia after inundation: (i) sediments that were mixed with seafloor sediments, (ii) sediments that were infiltrated with seawater, and (iii) sediments that were unaffected by seawater. This stratification was reflected in the microbial community composition of submarine permafrost only millennia after inundation, but not on time scales of centuries.
Changes in community composition and abundance were used as a measure of microbial activity and of the microbial response to changing thermal and geochemical conditions. The results were discussed in the context of permafrost temperature, pore-water composition, paleoclimatic proxies, and sediment age. Permafrost warming combined with increasing salinity, as well as permafrost warming alone, resulted in a disturbance of the microbial communities at least on time scales of centuries, expressed as a loss of microbial abundance and bacterial diversity. At the same time, the bacterial community of seawater-unaffected but warmed permafrost was mainly determined by the environmental and climatic conditions at the time of sediment deposition. A stimulating effect of warming was observed only in seawater-unaffected permafrost after millennia of inundation, visible through increased microbial abundance and reduced amounts of substrate.
Despite submarine exposure for centuries to millennia, the bacterial community of submarine permafrost still generally resembled that of terrestrial permafrost. It was dominated by phyla such as Actinobacteria, Chloroflexi, Firmicutes, Gemmatimonadetes, and Proteobacteria, which can be active under freezing conditions.
Moreover, the archaeal communities at both study sites were found to harbor high abundances of marine and terrestrial anaerobic methane-oxidizing archaea (ANME). The results also suggested that ANME populations are active under in situ conditions at subzero temperatures. Modeling showed that potential anaerobic oxidation of methane (AOM) could mitigate the release of almost all stored or microbially produced methane from thawing submarine permafrost.
Based on the findings presented in this thesis, permafrost warming and thawing under submarine conditions, as well as permafrost warming without thaw, are expected to have only marginal effects on microbial abundance and community composition, and therefore likely also on carbon mobilization and the formation of methane. Thawing under submarine conditions even stimulates AOM and thus mitigates the release of methane.