Refine
Has Fulltext
- yes (261)
Year of publication
- 2023 (261)
Document Type
- Doctoral Thesis (154)
- Article (47)
- Postprint (25)
- Working Paper (14)
- Monograph/Edited Volume (7)
- Review (5)
- Bachelor Thesis (2)
- Habilitation Thesis (2)
- Master's Thesis (2)
- Conference Proceeding (1)
Language
- English (261)
Keywords
- digital education (33)
- Digitale Bildung (32)
- Kursdesign (32)
- MOOC (32)
- Micro Degree (32)
- Online-Lehre (32)
- Onlinekurs (32)
- Onlinekurs-Produktion (32)
- e-learning (32)
- micro degree (32)
- micro-credential (32)
- online course creation (32)
- online course design (32)
- online teaching (32)
- climate change (9)
- Klimawandel (8)
- machine learning (6)
- Digitalisierung (4)
- deep learning (4)
- maschinelles Lernen (4)
- Fernerkundung (3)
- Modellierung (3)
- Seismologie (3)
- Tektonik (3)
- digitalization (3)
- innovation (3)
- körperliche Fitness (3)
- motivation (3)
- physical fitness (3)
- reinforcement learning (3)
- seismology (3)
- sustainability (3)
- tectonics (3)
- 5-methoxycarbonylmethyl-2-thiouridine (2)
- Adaptive Force (2)
- Anden (2)
- Andes (2)
- Argentina (2)
- Argentinien (2)
- Atmosphäre (2)
- Ausbreitung (2)
- Bayesian inference (2)
- Central Asia (2)
- Datenanalyse (2)
- Design (2)
- Diversity (2)
- Dunkle Materie (2)
- Evolutionsbiologie (2)
- Geochronologie (2)
- Geodynamik (2)
- H2S biosynthesis (2)
- Immunoassay (2)
- Jugendliche (2)
- Kinder (2)
- Kohlenstoffnitriden (2)
- Kupfer (2)
- Künstliche Intelligenz (2)
- Luftverschmutzung (2)
- Moco biosynthesis (2)
- Motivation (2)
- Nachhaltigkeit (2)
- Reflexion (2)
- Risikoanalyse (2)
- Selen (2)
- Sternentwicklung (2)
- Subduktion (2)
- Virus (2)
- Vulnerabilität (2)
- Zentralasien (2)
- adolescents (2)
- air pollution (2)
- atmosphere (2)
- cellular bioenergetics (2)
- children (2)
- cognitive skills (2)
- copper (2)
- cosmic rays (2)
- cytosolic tRNA thiolation (2)
- data analysis (2)
- design (2)
- design thinking (2)
- digital technologies (2)
- dispersal (2)
- ecology (2)
- evolutionary biology (2)
- heterogene Photokatalyse (2)
- heterogeneous photocatalysis (2)
- holding capacity (2)
- immunoassay (2)
- individual differences (2)
- individuelle Unterschiede (2)
- kognitive Fähigkeiten (2)
- kosmische Strahlung (2)
- luminescence (2)
- mathematical modeling (2)
- mathematische Modellierung (2)
- maximal isometric Adaptive Force (2)
- modeling (2)
- neuromuscular control (2)
- reflection (2)
- remote sensing (2)
- risk analysis (2)
- seismic noise (2)
- seismisches Rauschen (2)
- selenium (2)
- sentence comprehension (2)
- situated processes (2)
- stellar evolution (2)
- sulfite oxidase (2)
- survival (2)
- tiefes Lernen (2)
- virus (2)
- vulnerability (2)
- well-being (2)
- 0-day (1)
- 10-Formyltetrahydrofolat (1)
- 10-Formyltetrahydrofolate (1)
- 10Be (1)
- 239+240Plutonium (1)
- 26Al/10Be cosmogenic radionuclides (1)
- 26Al/10Be kosmogene Radionuklide (1)
- 26S-Proteasom-System-Abbau (1)
- 3-Phosphoglycerinsäure (1)
- 3D Point Clouds (1)
- 3D-Punktwolken (1)
- 5,10-Methenyltetrahydrofolat (1)
- 5,10-Methenyltetrahydrofolate (1)
- 5,10-Methylenetetrahydrofolate (1)
- 5-Methoxycarbonylmethyl-2-Thiouridin (1)
- 5-Methylaminomethyl-2-Thiouridin (1)
- 5-Methyltetrahydrofolat (1)
- 5-Methyltetrahydrofolate (1)
- 5-methylaminomethyl-2-thiouridine (1)
- 60S maturation (1)
- 60S-Reifung (1)
- AC Elektrokinetik (1)
- AC Elektroosmosis (1)
- AC electrokinetics (1)
- AC electroosmosis (1)
- APT (1)
- ARPES (1)
- ASIC (1)
- ASIC (Applikationsspezifische Integrierte Schaltkreise) (1)
- Adaptation (1)
- Advanced Persistent Threats (1)
- Afrika südlich der Sahara (1)
- AgI (1)
- Agricultural soils (1)
- Akzeptanz (1)
- Alfvén mode MHD turbulence (1)
- Alfvén-Modus MHD-Turbulenz (1)
- Algorithmic Game Theory (1)
- Algorithmische Spieltheorie (1)
- All-Carbon-Kompositen (1)
- Alter (1)
- Alterseffekte (1)
- Altiplano (1)
- Amperometrie (1)
- Anfangsrandwertproblem (1)
- Anfrageoptimierung (1)
- Angle- and spin-resolved photoemission spectroscopy (1)
- Annahme (1)
- Anomalieerkennung (1)
- Antarctica (1)
- Antibiotikaresistenz (1)
- Antikörper (1)
- Aphasie (1)
- Approximate Bayesian Computation (1)
- Arabidopsis thaliana (1)
- Arbeitgeberattraktivität (1)
- Arbeitskampf (1)
- Architekturadaptation (1)
- Arctic aerosol (1)
- Arctic haze (1)
- Arctic sea ice (1)
- Arealverschiebungen (1)
- Artificial Intelligence (1)
- Artverbreitungsmodelle (1)
- Astrozyten (1)
- Au(111) (1)
- Aufmerksamkeit (1)
- Augmented reality (1)
- Auktion (1)
- Ausreißererkennung (1)
- Average-Case Analyse (1)
- BROAD LEAF1 (1)
- Bachelor (1)
- Bahnwesen (1)
- Barley (1)
- Bayes'sche Inferenz (1)
- Bayesian statistics (1)
- Bayessche Inferenz (1)
- Bayessche Statistik (1)
- Bedrohungserkennung (1)
- Begleitgalaxien (1)
- Beobachtungen mit TESS (1)
- Beschleunigungsmessungen (1)
- Beschäftigung im öffentlichen Sektor (1)
- Betriebssysteme (1)
- Beugung niederenergetischer Elektronen (1)
- Bewegungsverhalten (1)
- Bewegungswissenschaft (1)
- Bewegungsökologie (1)
- Bibliometrics (1)
- Big Data Analytics (1)
- Bildverarbeitung (1)
- Bilophila wadsworthia (1)
- Bindungsinteraktion (1)
- Biodiversität (1)
- Biomarker (1)
- Biomarker-Erkennung (1)
- Biomaterialien (1)
- Biomoleküle (1)
- Biomolekülinteraktionen (1)
- Bioraffinerie (1)
- Biosensoren (1)
- Blattbreite (1)
- Blei (1)
- Blickbewegungen (1)
- Blickpfade (1)
- Blockchain (1)
- Blockchains (1)
- Blockcopolymer (1)
- Blut-Hirn-Schranke (1)
- Bodenerosion (1)
- Bodenunruhe (1)
- Brasilien (1)
- Brazil (1)
- Bruchmodell (1)
- Bulge (1)
- Burnout (1)
- Bäche und Flüsse hoher Ordnung (1)
- CED (1)
- CHO-THF, CH-THF, CH2-THF und CH3-THF (1)
- CHO-THF, CH-THF, CH2-THF, and CH3-THF (1)
- COVID-19 (1)
- CVD (1)
- Caco-2 (1)
- Calderas (1)
- Carbon Capture (1)
- Carbon Dioxide Removal (1)
- Carbonization (1)
- Carlini Station (1)
- Cenozoic aridification (1)
- Central America (1)
- Ceroxid (1)
- Change (1)
- Changsha (1)
- Charlotte Perkins Gilman (1)
- Chelonia mydas (1)
- Chemical Sensors (1)
- Chemical Vapour Deposition (1)
- Cherenkov telescopes (1)
- Cherenkov-Teleskope (1)
- Chile (1)
- China (1)
- Chytridiomycota (1)
- Cimmerian orogeny (1)
- Citrazinsäure (1)
- Climate Policy (1)
- Clustering (1)
- Co-Nonsolvency (1)
- Cobalt thin film (1)
- Codierungssequenz (1)
- Cold acclimation (1)
- Colloid Chemistry (1)
- Colorado (1)
- Computational Photography (1)
- Computed Tomography (1)
- Computertomographie (1)
- Concurrent Training (1)
- Controlling (1)
- Coping (1)
- Cosmogenic nuclides (1)
- Course development (1)
- Course marketing (1)
- Courses for female students (1)
- Crisis (1)
- CsPbI3 (1)
- Cu doped InP (1)
- Cu-dotiertes InP (1)
- Cue-Gewichtung (1)
- Cue-basierter Retrieval (1)
- Curricula Development (1)
- Curriculum analysis (1)
- Cyber-Sicherheit (1)
- Cytosolische Translation in Pflanzen (1)
- DAS (1)
- DBMS (1)
- DNA-Aptamer (1)
- Dark Matter (1)
- Darstellung Verfälschung (1)
- Data-Mining (1)
- Data-Science (1)
- Datenassimilation (1)
- Datenbank (1)
- Datenbanksysteme (1)
- Datenmodelle (1)
- Datenschutz (1)
- Dauer (1)
- Deanonymisierung (1)
- Deep Learning (1)
- Defizit (1)
- Deformation (1)
- Dehnungsdeformation (1)
- Deichrückverlegung (1)
- Dekubitus (1)
- Denudationsraten (1)
- Deoxyfructosazin (1)
- Derecho (1)
- Dermochelys coriacea (1)
- Design Thinking (1)
- Determinierer (1)
- Deutsch (1)
- DiD (1)
- Diagnose und Klassifikation (1)
- Dialog (1)
- Dialogsystem (1)
- Diamantstempelzellen (1)
- Dielektrophorese (1)
- Diffusion kosmischer Strahlung (1)
- Digital Design (1)
- Digital Rebound (1)
- Dirac operator (1)
- Diracoperator (1)
- Dissertation (1)
- Distally steepened ramps (1)
- Distributed-Ledger-Technologie (DLT) (1)
- Doppelschichtstruktur (1)
- Dürre (1)
- E-DSGE (1)
- East African Rift System (1)
- Eastern Cordillera (1)
- Echtzeit (1)
- Echtzeit-Rendering (1)
- Economics (1)
- Ein Kohlenstoff (1)
- Einbruchserkennung (1)
- Einstellung (1)
- Einzelatomkatalyse (1)
- Electrical resistivity tomography (1)
- Electronic materials (1)
- Elektronentomographie (1)
- Emotionen (1)
- Emotions (1)
- Emulsion (1)
- Endpunktsicherheit (1)
- Energiebudget (1)
- English and Physics teacher trainees (1)
- Ensemble analysis (1)
- Ensemble-Analyse (1)
- Entscheidungsfindung (1)
- Entstehung (1)
- Entzug (1)
- Enzym (1)
- Erdbeben (1)
- Erdbebenschäden (1)
- Erdrutsch (1)
- Erkennung von Quasi-Identifikatoren (1)
- Erz (1)
- Ethics (1)
- Etna (1)
- European Union (1)
- European hare (1)
- Europäische Union (1)
- Evolutionsrunde (1)
- Exoplaneten (1)
- Expositionsmodellen (1)
- Extremereignisse (1)
- F0 (1)
- Falten (1)
- Fehlertoleranz (1)
- Feldarbeit (1)
- Feldhase (1)
- Fernausbildung (1)
- Fettsäuren (1)
- Filamente (1)
- Fis (1)
- Flache Subduktion (1)
- Flat subduction (1)
- Fluoreszenzfluktuationsspektroskopie (1)
- Fluoreszenzkorrelationspektroskopie (1)
- Fluoreszenzmikroskopie (1)
- Flusshochwasser (1)
- Foreland (1)
- Forest (1)
- Forschungsbedarf (1)
- Fortbewegung (1)
- Fortbildung von Lehrkräften (1)
- Fotografie (1)
- Francesca Woodman (1)
- Freie Elektronen Laser (1)
- Frühwarnung (1)
- FtsZ (1)
- FtsZ ring assembly (1)
- FtsZ-Ringbildung (1)
- Futtersuchverhalten (1)
- Führungskräfte (1)
- Führungsstärke (1)
- GPS (1)
- GPU acceleration (1)
- GPU-Beschleunigung (1)
- Galaxie: allgemein (1)
- Galaxien (1)
- Galaxienhaufen (1)
- Game Dynamics (1)
- Gammastrahlung (1)
- Gating Paradigma (1)
- Gaussian process state-space models (1)
- Gaussian processes (1)
- Gauß-Prozess Zustandsraummodelle (1)
- Gauß-Prozesse (1)
- Gebirgshydrologie (1)
- Gen-Expression (1)
- Gender (1)
- Generalized Discrimination Networks (1)
- Genomik (1)
- Geochemie (1)
- Geochronology (1)
- Geodynamics (1)
- Geodäsie (1)
- Geologie (1)
- Geomechanik (1)
- Geomorphologie (1)
- Geophysik (1)
- Gerechtigkeitsentwicklung (1)
- Gerichtsbarkeit (1)
- Gerste (1)
- Geschichte der Sprachwissenschaft (1)
- Geschlecht (1)
- Geschwindigkeitsmodell (1)
- Geschäftsprozessarchitekturen (1)
- Geschäftsprozessmodellierung (1)
- Gestaltung (1)
- Gestaltungstheorie (1)
- Gestein-Wasser-Wechselwirkung (1)
- Gezeitenwechselwirkungen (1)
- Gleichheit (1)
- Global Value Chains (1)
- Global inversion (1)
- Globale Inversion (1)
- Globale Wertschöpfungsketten (1)
- Glycin-Decarboxylase-Komplex (=GCV) (1)
- Glycin-Spaltsystem (1)
- Glycin-Synthase-Komplex (Umkehrung von GCV) (1)
- Glykopeptid (1)
- Goldsubstrat (1)
- Gothic (1)
- Gradient Boosting (1)
- Graph-Mining (1)
- Graphableitung (1)
- Graphene (1)
- Graphentheorie (1)
- Gravimetrie (1)
- Grenzflächen (1)
- Grundschule (1)
- Grundwasserentwicklung (1)
- Grüne Chemie (1)
- Grünland (1)
- Gyrochronologie (1)
- H/V (1)
- H2S-Biosynthese (1)
- HPI Schul-Cloud (1)
- HVSR (1)
- Hantaviren (1)
- Hantavirus (1)
- Hardware Design (1)
- Heimsuchung (1)
- HepG2 (1)
- Hercynian orogeny (1)
- Heterozoan (1)
- Heterozoikum (1)
- High growth firms (1)
- Hochplateau (1)
- Hochwasserrisikomanagement (1)
- Holozän (1)
- Home (1)
- Hordeum vulgare (1)
- Humboldt Center for Transdisciplinary Studies (1)
- Hydratbildung (1)
- Hydrogravimetrie (1)
- Hydrologie (1)
- Hyrise (1)
- IBD (1)
- INDETERMINATE DOMAIN protein (1)
- INDETERMINATE DOMAIN-Protein (1)
- IT Softwareentwicklung (1)
- IT systems engineering (1)
- Illumina amplicon sequencing (1)
- Impermanence (1)
- Implementierung (1)
- InSAR (1)
- Industrial Internet of Things (1)
- Industrie 4.0 (1)
- Industrielles Internet der Dinge (1)
- Industry 4.0 (1)
- Infektionsrisiko (1)
- Influenza (1)
- Influenza A Virus (1)
- Informatics (1)
- Informations- und Kommunikationstechnologien (1)
- Innovation (1)
- Interaktion (1)
- Interaktion zwischen sich ausbreitenden Riftsegmenten (1)
- Interaktions Netzwerk (1)
- Internal Audit (1)
- Interne Revision (1)
- Intersectionality (1)
- Intervention (1)
- Inversion (1)
- Ionic liquids (1)
- Ionosphere (1)
- Ionosphäre (1)
- Iroquoian languages (1)
- Isotroper schneller Modus Turbulenzen (1)
- Iterative reconstruction (1)
- Jaguar (1)
- Jewish Maritime Studies (1)
- Jewish Sea (1)
- Jüdisch-Maritime Studien (1)
- Jüdische Meer (1)
- KMDL (1)
- Kampfsport (1)
- Karbonat (1)
- Karbonat-Stabilität (1)
- Karbonatplattformen (1)
- Karbonatrampen (1)
- Karbonisierung (1)
- Karten (1)
- Kartoffel (1)
- Kaskadeneffekte (1)
- Kaskadenrate (1)
- Katalysator (1)
- Kindergarten (1)
- Klimafolgenforschung (1)
- Klimagovernance (1)
- Klimapolitik (1)
- Klimatologie (1)
- Klimaänderung (1)
- Knickpoint (1)
- Knickpoint retreat (1)
- Knickpunkt (1)
- Knickpunkt-Rückzug (1)
- Knochenumbau (1)
- Kobalt-Dünnfilm (1)
- Koexistenz (1)
- Kognition (1)
- Kolloidchemie (1)
- Kombination aus Kraft und Ausdauer (1)
- Konnektionskalkül (1)
- Konsensprotokolle (1)
- Konsequenzen von Fang und Besenderung (1)
- Koreferenzmustern (1)
- Kosmogene Nuklide (1)
- Krankheitsschwere (1)
- Kreativität (1)
- Krebs (1)
- Krebsbiomarker (1)
- Krebserkennung (1)
- Krebstherapie (1)
- Krise (1)
- Kryo-Elektronenmikroskopie (1)
- Kultur (1)
- Kälteakklimatisierung (1)
- Känozoische Aridifizierung (1)
- LEED (1)
- LGTBQI+ communities (1)
- LIBS (1)
- LOC (1)
- LSTM (1)
- Labormäuse (1)
- Lagerstätte (1)
- Landnutzung (1)
- Landscape Evolution (1)
- Landschaft der Angst (1)
- Landschaftsentwicklung (1)
- Landwirtschaftliche Böden (1)
- Laser-Carbonization (1)
- Laserheizsystem (1)
- Laserkarbonisierung (1)
- Laserschneiden (1)
- Lebensmittelanalytik (1)
- Lebensraumnutzung (1)
- Lebensspannenpsychologie (1)
- Lehrkräftelernen (1)
- Leitungsbandstruktur (1)
- Lesson Study (1)
- Leukozytose (1)
- Li-Ionen-Kondensator (1)
- Li-S batteries (1)
- Li-S-Batterien (1)
- Li-ion capacitor (1)
- Lidar (1)
- Limnologie (Seenkunde) (1)
- Linguistik (1)
- Lipidomics (1)
- Lipidstoffwechsel (1)
- Lithiophilizität (1)
- Lithium-Ionen-Kondensatoren (1)
- Lokalitätsprinzip (1)
- Loss modeling (1)
- Low Energy Electron Diffraction (1)
- Lumineszenz (1)
- Lyman-Alpha-Emitter (1)
- Lyman-alpha emitters (1)
- MBE (1)
- MERLOT (1)
- MSPAC (1)
- Machine-Learning (1)
- Mackenzie Delta (1)
- Mackenzie-Delta (1)
- Magmagänge (1)
- Magnesit (1)
- Magnetohydrodynamik (1)
- Magnetostratigraphie (1)
- Major mergers (1)
- Malawi (1)
- Maltodextrin (1)
- Management Accounting (1)
- Maritime Räume (1)
- Maritime spaces (1)
- Medientheorie (1)
- Meeressedimente (1)
- Mekong Delta (1)
- Membranfluidität (1)
- Mensch-Maschine Interaktion (1)
- Merkmalsauswahl (1)
- Metabolit (1)
- Metaverse (1)
- Methanhydrat (1)
- Meyer-Neldel-Regel (1)
- Meyer-Neldel-rule (1)
- Microlensing (1)
- Microscale Thermophoresis (MST) (1)
- Mikrobiologie (1)
- Mikroemulsionen (1)
- Mikrofluidik (1)
- Mikrowellensynthese (1)
- Mikrozonierung (1)
- Milnor Moore theorem (1)
- Missionarsgrammatik (1)
- Mitarbeiter (1)
- Mitarbeiterstreiks (1)
- Mittelamerika (1)
- MoS₂ (1)
- Mobilität (1)
- Moco-Biosynthese (1)
- Modelle mit mehreren Versionen (1)
- Modellierung der internationalen Migration (1)
- Molecular Beam Epitaxy (1)
- Molekularstrahlepitaxie (1)
- Molybdenum sulfide monolayer (1)
- Molybdän (1)
- Molybdänsulfid Monolagen (1)
- Multi-Hazard (1)
- Multizeta-Abbildungen (1)
- Multiziel (1)
- Mykotoxine (1)
- Nahrungssulfonate (1)
- Nanoelektroden (1)
- Nanokapseln (1)
- Nanokomposit (1)
- Nanokomposite (1)
- Nanomaterialien (1)
- Nanopartikeln (1)
- Nanopartikeln-Anordnung (1)
- Nanoplastik (1)
- Nanospindeln (1)
- Nanostrukturen (1)
- Naturgefahren (1)
- Near-surface geophysics (1)
- Negotiation (1)
- Neotectonics (1)
- Neotektonik (1)
- Netzoptimierung (1)
- Netzwerkprotokolle (1)
- Neu-Delhi Metallo-Beta-Laktamase 1 (NDM-1) (1)
- Neuronen (1)
- New Delhi metallo-β-lactamase 1 (NDM-1) (1)
- Newton Polytope (1)
- Newton polytopes (1)
- Nicht-Einmaligkeit (1)
- Non-photorealistic Rendering (1)
- Non-uniqueness (1)
- NutriAct Family Study (1)
- Oberflächenchemie (1)
- Oberflächenexpositionsdatierung (1)
- Oberflächennahe Geophysik (1)
- Oberflächenplasmonenresonanzspektroskopie (SPR-Spektroskopie) (1)
- Oberflächensediment (1)
- Omega (1)
- One-carbon (1)
- Online-Gerichte (1)
- Open Access (1)
- Open Source (1)
- Optimierung (1)
- Organisationsstudien (1)
- Orogen (1)
- Ortsbindung (1)
- Ortscharakterisierung (1)
- Ortseffekte (1)
- Ostafrikanisches Grabensystem (1)
- Osteoblast (1)
- Osteoklast (1)
- Othering (1)
- PARAFAC (1)
- PBCEC (1)
- PBH (1)
- PCA (1)
- PHREEQC (1)
- PLSR (1)
- POC (1)
- PVA (population viability analysis) (1)
- Paleoseismologie (1)
- Paläo-See Mweru (1)
- Paläoklima (1)
- Paläoklimatologie (1)
- Paläoökologie (1)
- Pamir (1)
- Panelstichprobe (1)
- Particle swarm optimization (1)
- Partikelschwarm-Optimierung (1)
- Patent (1)
- Pause (1)
- Perowskit-Solarzellen (1)
- Persönlichkeitsentwicklung (1)
- Phasenübergang (1)
- Photoelektronenspektroskopie (1)
- Photometrie (1)
- Photozoan (1)
- Photozoikum (1)
- Piano delle Concazze (1)
- Pipistrellus nathusii (1)
- Pirquitas (1)
- Plant cytosolic translation (1)
- Pleistozän (1)
- Poincaré Birkhoff Witt theorem (1)
- Politik (1)
- Poly(N-Isopropylacrylamid) (1)
- Poly(N-Isopropylmethacrylamid) (1)
- Poly(N-Vinylisobutyramid) (1)
- Poly(ionische Flüssigkeit) (1)
- Poly(lactic acid) (1)
- Polymerchemie (1)
- Polymilchsäure (1)
- Polyolefin (1)
- Populationsdynamik (1)
- Populationsgefährdungsanalyse (1)
- Populationskonnektivität (1)
- Posenabschätzung (1)
- Preis der Anarchie (1)
- Price of Anarchy (1)
- Primordiale Schwarzen Löchern (1)
- Profisportler (1)
- Prosodie (1)
- Protein (1)
- Protein synthesis (1)
- Protein-Aptamer Interaktion (1)
- Proteinmultimerisierung (1)
- Proteinrekonstitution (1)
- Proteinspiegel regulieren (1)
- Proteinsynthese (1)
- Präferenzen (1)
- Präferenzmessung (1)
- Psychologie (1)
- Psychotherapie (1)
- Psychotherapieausbildung (1)
- Public Management (1)
- Puna (1)
- Punktprozess (1)
- QD device (1)
- QD stability (1)
- QD-Gerät (1)
- QD-Stabilität (1)
- QSP (1)
- Quality of regional governments (1)
- Quanten-Computing (1)
- Quantoren (1)
- Quartär (1)
- Quaternary (1)
- Query-Optimierung (1)
- Química de Coloides (1)
- RDoC (1)
- RING/U-box E3 (1)
- RIXS (1)
- RL (1)
- Radar-Satelliteninterferometrie (1)
- Rahmenübereinkommen der Vereinten Nationen über Klimaänderungen (1)
- Randbedingungen (1)
- Raum (1)
- Raumzeiten mit zeitartigem Rand (1)
- Reaktionspfadmodellierung (1)
- Real-Time Rendering (1)
- Recht (1)
- Reduzierte Ressourcen (1)
- Refinement (1)
- Regions (1)
- Regulation (1)
- Rekurrenzanalyse (1)
- Remote sensing (1)
- Rentenpolitik (1)
- Referenzmaterial (1)
- Research Domain Criteria (1)
- Reverse Engineering (1)
- Rezeptor (1)
- Ribosomal protein heterogeneity (1)
- Ribosomal protein substoichiometry (1)
- Ribosomale Protein Substöchiometrie (1)
- Ribosomale Proteinheterogenität (1)
- Ribosome biogenesis (1)
- Ribosome specialization (1)
- Ribosomen-Biogenese (1)
- Ribosomen-Spezialisierung (1)
- Richardson Superdiffusion (1)
- Richardson-Superdiffusion (1)
- Riesenvesikel (1)
- Ringstrom (1)
- Risikobewertung von Vulkanausbrüchen (1)
- Risikokommunikation (1)
- Risikowahrnehmung (1)
- Roboter (1)
- Rotation (1)
- RpoS (1)
- Röntgenstrahlung (1)
- SPAC (1)
- STEM (1)
- SWIRL (1)
- Saccharomyces cerevisiae (1)
- Salta Rift (1)
- Salzschmelze-Templating (1)
- Samenausbreitung (1)
- Samuel Tolkowsky (1)
- Sandwich-Assay auf Basis von Aptameren (1)
- Satz von Milnor Moore (1)
- Satz von Poincaré Birkhoff Witt (1)
- Satzverarbeitung (1)
- Satzverständnis (1)
- Schadensmodellierung (1)
- Scheibe (1)
- Schelling Process (1)
- Schelling Prozess (1)
- Schelling Segregation (1)
- Schutz von Raubtieren (1)
- Schwefelwirt (1)
- Seafaring (1)
- Second Life (1)
- Sedimentgeochemie (1)
- Sedimentologie (1)
- Seefahrtswesen (1)
- Selbstmitgefühl (1)
- Selbstähnlichkeit (1)
- Selenonein (1)
- Semantic Web (1)
- Sequenzierung der nächsten Generation (1)
- Serin-Biosensor (1)
- Serin-Hydroxymethyltransferase (1)
- Shirley Jackson (1)
- Shortening (1)
- Siberia (1)
- Sibirien (1)
- Sicherheitsanalyse (1)
- Sierras Pampeanas (1)
- Signalbildung (1)
- Silikatverwitterung (1)
- Silizium (1)
- Simulationsframework (1)
- Skalierung (1)
- Skopusambiguitäten (1)
- Social Cost of Carbon (1)
- Social impact (1)
- Sociotechnical Design (1)
- Soft-Templaten (1)
- Software/Hardware Co-Design (1)
- Solanum tuberosum (1)
- Sonnenteilchen-Ereignis (1)
- South America (1)
- Soziohydrologie (1)
- Spezies (1)
- Spieldynamiken (1)
- Spieltheorie (1)
- Spinstruktur (1)
- Sport und Bewegung (1)
- Sportförderunterricht (1)
- Sprache (1)
- Spracherwerb (1)
- Sprachlernen im Limes (1)
- Sprachproduktion (1)
- Sprachverständnis (1)
- Spritzgießen (1)
- Standardisierung (1)
- Statistik (1)
- Stern-Brauner Zwerg Systeme (1)
- Stern-Planet Systeme (1)
- Sterne: Entfernungen (1)
- Stethophyma grossum (1)
- Steuerehrlichkeit (1)
- Stichprobenentnahme aus einem statistischen Modell (1)
- Stickstoffdynamik in Fliessgewässern (1)
- Stoffwechsel (1)
- Stoffwechselkäfig (1)
- Strahlungsgürtel (1)
- Strahlungshartes Design (1)
- Stream Power Law (1)
- Stressmodellierung (1)
- Stressverarbeitung (1)
- Strukturgleichungsmodell (1)
- Städte (1)
- Stärke (1)
- Subduction (1)
- Sulfid (1)
- Sulfit-Oxidase (1)
- Sumpfschrecke (1)
- Supernova-Überrest (1)
- Surface Plasmon Resonance (SPR) (1)
- Survey Studies (1)
- Sweet (1)
- Synthese (1)
- Systematics (1)
- Systembiologie (1)
- Säugetiere (1)
- Säuglinge (1)
- Südamerika (1)
- TPTP (1)
- Tax Audit (1)
- Tax Compliance (1)
- Taxonomy (1)
- Teilchenbeschleunigung (1)
- Telemedizin (1)
- Tenside (1)
- Textgenre (1)
- Thermochronologie (1)
- Tian Shan (1)
- Tierortung (1)
- Tierpersönlichkeit (1)
- Tierschutz (1)
- Tierökologie (1)
- Tomographie des elektrischen Widerstands (1)
- Topographie (1)
- Trans-Fettsäuren (1)
- Transdisziplinarität (1)
- Translational regulation (1)
- Translationseffizienz (1)
- Translationsregulation (1)
- Transmissionsspektroskopie (1)
- Transponierbare Elemente (1)
- Treibhausgase (1)
- Tripel-Graph-Grammatiken (1)
- Tsunami-Risiko (1)
- TusA (1)
- Typ-2-Diabetes (1)
- UVR (1)
- Ubiquitinierung (1)
- Umfragestudien (1)
- Umweltfilterung (1)
- Ungerechtigkeitssensibilität (1)
- Unsicherheitsanalyse (1)
- Uran-Blei-Datierung (1)
- Urokinase-Typ Plasminogen Aktivator (uPA) (1)
- Urokinase-type Plasminogen Activator (uPA) (1)
- V2X (1)
- Variabilität (1)
- Variation (1)
- Verbindungen auf Eisenbasis (1)
- Verhaltensforschung (1)
- Verhandlung (1)
- Verkürzung (1)
- Verlässlichkeit (1)
- Verteilte Systeme (1)
- Verteilungsgerechtigkeit (1)
- Vertreibung (1)
- Verwerfungen (1)
- Veränderung (1)
- Vesikel (1)
- Videoanalyse (1)
- Vietnamese Mekong Delta (1)
- Virionenbildung (1)
- Virtual Reality (1)
- Virtuelle Realität (1)
- Virus-Wirt-Interaktion (1)
- Visualisierung (1)
- Visualization (1)
- Vorland (1)
- Vorschulkinder (1)
- Wald (1)
- Wasserbilanz (1)
- Wasserqualität (1)
- Wasserressourcen (1)
- Web-Based Rendering (1)
- Webbasiertes Rendering (1)
- Weltraumphysik (1)
- Wetterextreme (1)
- Windblase (1)
- Wissenszirkulation (1)
- Wohlbefinden (1)
- Women and IT (1)
- X-rays (1)
- X-rays Photoemission Spectroscopy (1)
- XPS (1)
- XRF analysis (1)
- XRF-Analyse (1)
- Zeitgewinn (1)
- Zeitpunkt der Einschulung (1)
- Zeitpunkt von Störungen (1)
- Zellmotilität (1)
- Zellproliferation (1)
- Zellteilung (1)
- Zellteilungsdefekt (1)
- Zink (1)
- Zirkulationsregime (1)
- Zooplankton (1)
- Zwei-Prozess Modelle (1)
- abrupte Übergänge (1)
- abuse cycles (1)
- accelerometry (1)
- acceptance (1)
- achilles tendinopathy (1)
- acoustic communication (1)
- actinide, organic ligand, sorption, cementitious material, concrete, luminescence (1)
- active galactic nuclei (1)
- adaptation (1)
- adaptive Laborentwicklung (1)
- adaptive laboratory evolution (1)
- adoption (1)
- advanced persistent threat (1)
- advanced threats (1)
- aerosol: hygroscopic growth (1)
- aerosol: hygroskopisches Wachstum (1)
- aerosol: optical properties (1)
- aerosol: optische Eigenschaften (1)
- affordable housing (1)
- age (1)
- age effects (1)
- aktive Galaxienkerne (1)
- all-carbon composites (1)
- allostatic load (1)
- allostatische Belastung (1)
- ambient vibration (1)
- amoeboid motion (1)
- amperometry (1)
- amphiphile Blockcopolymere (1)
- amphiphilic block copolymer (1)
- amöboide Bewegung (1)
- animal migration (1)
- animal personality (1)
- animal welfare (1)
- anomaly detection (1)
- antibiotic resistance (1)
- antibody (1)
- aphasia (1)
- apt (1)
- aptamer-based sandwich assay (1)
- aquatic fungi (1)
- architectural adaptation (1)
- arithmetische Prozeduren (1)
- arithmetic procedures (1)
- arktischer Dunst (1)
- arktisches Aerosol (1)
- arktisches Meereis (1)
- artificial intelligence (1)
- asset management (1)
- astrocytes (1)
- athletes (1)
- attention (1)
- attitude (1)
- auction (1)
- automatic theorem prover (1)
- automatisierter Theorembeweiser (1)
- autonom replizierende Sequenz (1)
- autonomous (1)
- average-case analysis (1)
- basic psychological need frustration (1)
- bats (1)
- mittlere freie Weglänge (1)
- behavioral sciences (1)
- behaviourally correct learning (1)
- bestärkendes Lernen (1)
- bezahlbarer Wohnraum (1)
- bilayer system (1)
- bildbasiertes Rendering (1)
- binary systems (1)
- binding interactions (1)
- biodiversity (1)
- biologisches Vorwissen (1)
- biomarker detection (1)
- biomaterials (1)
- biomechanical parameter (1)
- biomechanics (1)
- biomolecule (1)
- biomolecule interactions (1)
- biorefinery (1)
- biosensors (1)
- block copolymer (1)
- blood-brain barrier (1)
- board diversity (1)
- bone remodeling (1)
- boundary conditions (1)
- bulge (1)
- burnout (1)
- business expansion (1)
- business process architectures (1)
- business process modeling (1)
- cancer (1)
- cancer biomarker (1)
- cancer detection (1)
- cancer therapy (1)
- capabilities (1)
- carbon nitride (1)
- carbon nitrides (1)
- carbonate (1)
- carbonate platforms (1)
- carbonate ramps (1)
- carbonate stability (1)
- career costs of children (1)
- cascade rate (1)
- cascading effects (1)
- catalyst (1)
- causal discovery (1)
- causal structure learning (1)
- cell division (1)
- cell motility (1)
- cell proliferation (1)
- cementitious material (1)
- cerium oxide (1)
- chemische Gasphasenabscheidung (1)
- chemische Sensoren (1)
- child care (1)
- circulation of knowledge (1)
- circulation regimes (1)
- circumgalactic medium (1)
- citrazinic acid (1)
- climate governance (1)
- climate impact research (1)
- climate policy (1)
- climatology (1)
- clustering (1)
- co-creation (1)
- co-nonsolvency (1)
- coexistence (1)
- cognition (1)
- cognitive and metacognitive learning strategies (1)
- collective consumption context (1)
- colloidal quantum dot (1)
- combat sports (1)
- combinatorial inverse modelling (1)
- combined strength and endurance (1)
- commercial sector (1)
- complex emulsion (1)
- comprehension (1)
- computational design (1)
- computational photography (1)
- concrete (1)
- concurrent training (1)
- connection calculus (1)
- consensus protocols (1)
- consistent learning (1)
- context factors (1)
- contrast effects (1)
- convolutional neural networks (1)
- coordinate name structures (1)
- coping (1)
- copper-bearing minerals (1)
- coreference (1)
- cosmic ray diffusion (1)
- cosmological simulations (1)
- counterclockwise block rotation between overlapping rift segments (1)
- courts of justice (1)
- coworking spaces (1)
- creative process (1)
- creativity (1)
- cross-lagged panel analysis (1)
- cryo-electron microscopy (1)
- cue reliability (1)
- cue-based retrieval (1)
- culture (1)
- customer acceptance (1)
- cybersecurity (1)
- dark exciton (1)
- dark matter (1)
- data assimilation (1)
- data dependencies (1)
- data mining (1)
- data models (1)
- data privacy (1)
- data science (1)
- data-driven (1)
- database (1)
- database optimization (1)
- database systems (1)
- datengetrieben (1)
- de-anonymisation (1)
- decentral identities (1)
- decision making (1)
- decomposition (1)
- decubitus (1)
- deep Gaussian processes (1)
- deep carbon (1)
- deep eutectic solvents (1)
- deep reinforcement learning (1)
- deficits (1)
- denudation rates (1)
- deoxyfructosazine (1)
- dependability (1)
- deposits (1)
- depression (1)
- depressive disorder (1)
- design theory (1)
- determiners (1)
- dezentrale Identitäten (1)
- diagnosis and classification (1)
- dialogue (1)
- dialogue system (1)
- diamond anvil cells (1)
- die arabische Welt (1)
- dielectrophoresis (1)
- dietary quality (1)
- dietary sulfonates (1)
- digital design (1)
- digital ethnography (1)
- digital fabrication (1)
- digital media (1)
- digital sovereignty (1)
- digital transformation (1)
- digitale Bildung (1)
- digitale Ethnographie (1)
- digitale Fabrikation (1)
- digitale Medien (1)
- digitale Souveränität (1)
- digitalisation (1)
- digitalización (1)
- dike pathways (1)
- dike relocation (1)
- disc (1)
- disease severity (1)
- diseño (1)
- displacement (1)
- distal steil ansteigende Rampen (1)
- distributed systems (1)
- distributive justice (1)
- disturbance timing (1)
- divergent thinking (1)
- doctoral thesis (1)
- domesticity (1)
- doppelganger (1)
- drahtloses Netzwerk (1)
- drought (1)
- dual process models (1)
- dual-process models (1)
- dunkles Exziton (1)
- duration (1)
- dwarf spheroidal galaxies (1)
- dynamic modeling (1)
- dynamic panel estimation (1)
- dynamical models (1)
- dynamische Modelle (1)
- dynamische Modellierung (1)
- early modern manuscript culture (1)
- early warning (1)
- earthquake damage (1)
- earthquakes (1)
- eavesdropping (1)
- echolocation (1)
- ecological modelling (1)
- efficient scattering (1)
- effiziente Streuung (1)
- electromyography (1)
- electron tomography (1)
- elektronische Materialien (1)
- elementare Bewegungsfertigkeiten (1)
- self-determination theory (1)
- embodied power structures (1)
- embodiment (1)
- emergence (1)
- emotional regulation (1)
- empirical modeling (1)
- empirische Modellierung (1)
- employee strikes (1)
- employer attractiveness (1)
- employment (1)
- empowering leadership (1)
- emulsion (1)
- endpoint security (1)
- energy budget (1)
- energy prices (1)
- ensamblaje de nanopartículas (1)
- entrepreneurial performance (1)
- entrepreneurship (1)
- environment filtering (1)
- environmental upgrading (1)
- enzyme (1)
- equality (1)
- estructuras templadas blandas (1)
- estudios de organización (1)
- europium (1)
- exercise (1)
- exercise science (1)
- exhumation (1)
- exoplanets (1)
- exposure (1)
- extend (1)
- extension (1)
- extreme events (1)
- eye movement (1)
- eye-tracking (1)
- fashion industry (1)
- fatty acids (1)
- fault tolerance (1)
- feature selection (1)
- felid conservation (1)
- fieldwork (1)
- filaments (1)
- final lengthening (1)
- finale Längung (1)
- firm performance (1)
- fiscal policy (1)
- flexible pattern matching approach (1)
- flood risk management (1)
- flooding (1)
- fluorescence correlation spectroscopy (1)
- fluorescence fluctuation spectroscopy (1)
- fluorescence microscopy (1)
- fluorescent proteins (1)
- fluoreszierende Proteine (1)
- fluvial flooding (1)
- follower (1)
- food analysis (1)
- food neophilia (1)
- foraging behaviour (1)
- fortschrittliche Angriffe (1)
- free electron laser (1)
- frustration (1)
- frühneuzeitliche Manuskriptkultur (1)
- functional dependencies (1)
- functional traits (1)
- fundamental movement skills (1)
- fundamental parameters (1)
- fundamentale Parameter (1)
- funktionale Abhängigkeiten (1)
- funktionale Merkmale (1)
- galaxies (1)
- galaxy clusters (1)
- galaxy: general (1)
- gambler’s fallacy (1)
- game theory (1)
- gamma-rays (1)
- gating paradigm (1)
- gefaltete neuronale Netze (1)
- gegen den Uhrzeigersinn gerichtete Rotation von Krustenblöcken zwischen zwei überlappenden Riftsegmenten (1)
- gemeinsame Inversion (1)
- gender (1)
- gender pay gap (1)
- gene expression (1)
- general self-efficacy (1)
- generalized discrimination networks (1)
- genomics (1)
- genre (1)
- geochemistry (1)
- geochronology (1)
- geodesy (1)
- geodynamics (1)
- geologic fault (1)
- geologische Verwerfung (1)
- geology (1)
- geomechanics (1)
- geomorphology (1)
- geophysical methods (1)
- geophysics (1)
- geophysikalische Methoden (1)
- german (1)
- geschichtsbewusste Laufzeit-Modelle (1)
- geschrieben (1)
- gesprochen (1)
- gewerblicher Sektor (1)
- giant vesicles (1)
- giving-up density (1)
- global environmental politics (1)
- global flood model (1)
- global model management (1)
- globale Umweltpolitik (1)
- globales Modellmanagement (1)
- globales Überschwemmungsmodell (1)
- glycine cleavage system (1)
- glycine decarboxylase complex (=GCV) (1)
- glycine synthase complex (reversal of GCV) (1)
- glycopeptide (1)
- gold substrate (1)
- gothic (1)
- gradient boosting (1)
- graph inference (1)
- graph mining (1)
- graph theory (1)
- graphene (1)
- grassland (1)
- gravimetry (1)
- green chemistry (1)
- green house gases (1)
- groundwater evolution (1)
- growth defect (1)
- großräumige Struktur des Universums (1)
- großskalige Zirkulation (1)
- gyrochronology (1)
- habitat use (1)
- halbstrukturiertes Interview (1)
- hardware design (1)
- haunted house (1)
- haunting (1)
- healthy eating (1)
- herzynische Orogenese (1)
- heteroatom-doped carbons (1)
- heteroatom-dotierte Kohlenstoffe (1)
- heterogene Katalyse (1)
- heterogeneous catalysis (1)
- hierarchical pore structure (1)
- hierarchische Porenstruktur (1)
- high resolution (1)
- high-frequency monitoring (1)
- high-order streams and rivers (1)
- high-redshift (1)
- history of linguistics (1)
- history-aware runtime models (1)
- hohe Auflösung (1)
- hoher Rotverschiebung (1)
- holding (HIMA) and pushing (PIMA) isometric muscle action (1)
- holocene (1)
- horizontal-vertikales Spektralverhältnis (1)
- hot hand fallacy (1)
- human computer interaction (1)
- human-centered design (1)
- hybrid nanostructures (1)
- hybride Nanostrukturen (1)
- hydrate formation (1)
- hydrogravimetry (1)
- hydrological modelling (1)
- hydrological processes (1)
- hydrologische Modellierung (1)
- hydrologische Prozesse (1)
- hydrology (1)
- hydrothermal (1)
- hyperbolic geometry (1)
- hyperbolische Geometrie (1)
- ill-being (1)
- image processing (1)
- image stylization (1)
- image-based rendering (1)
- implementation (1)
- in-stream nitrogen dynamics (1)
- inclusion dependencies (1)
- incremental graph query evaluation (1)
- index selection (1)
- indirect economic impacts (1)
- indirekte ökonomische Effekte (1)
- individual interest (1)
- individual variability (1)
- individual-based modelling (1)
- individuelle Interessen (1)
- individuelle Variabilität (1)
- individuen-basierte Modellierung (1)
- industrial action (1)
- inequality (1)
- inequality of opportunity (1)
- infants (1)
- inflation (1)
- influenza (1)
- influenza A virus (1)
- information and communication technologies (1)
- initial boundary value problem (1)
- injection molding (1)
- injury mechanisms (1)
- inkrementelle Ausführung von Graphanfragen (1)
- innovación (1)
- interaction (1)
- interaction network (1)
- interactomics (1)
- interest (1)
- interfaces (1)
- international migration (1)
- international migration modeling (1)
- international relations (1)
- internationale Beziehungen (1)
- internationale Migration (1)
- intervention (1)
- intestinal (1)
- intracluster medium (1)
- intrusion detection (1)
- inversion (1)
- ionic conductivity (1)
- irokesische Sprachen (1)
- iron-based compounds (1)
- irrelevant information (1)
- isotropic fast mode turbulence (1)
- iterative Rekonstruktion (1)
- jaguar (1)
- job creation (1)
- joint inversion (1)
- juridical recording (1)
- justice development (1)
- justice sensitivity (1)
- kausale Entdeckung (1)
- kausales Strukturlernen (1)
- kimmerische Orogenese (1)
- kindergarten (1)
- kinematics (1)
- kinetics (1)
- kognitive und metakognitive Lernstrategien (1)
- kolloidaler Quantenpunkt (1)
- kombinatorische inverse Modellierung (1)
- komplexe Emulsion (1)
- konsistentes Lernen (1)
- koordinierte Namensstrukturen (1)
- kosmologische Computersimulationen (1)
- kreativer Prozess (1)
- lab-on-chip (1)
- laboratory mice (1)
- lacustrine sediment (1)
- land use (1)
- landscape of fear (1)
- landslide (1)
- language acquisition (1)
- language learning in the limit (1)
- large marsh grasshopper (1)
- large-scale circulation (1)
- large-scale structure (1)
- laser cutting (1)
- laser heating (1)
- law (1)
- lead (1)
- leader (1)
- leaf width (1)
- leanCoP (1)
- lesson study (1)
- leukocytosis (1)
- lifespan psychology (1)
- limnische Sedimente (Seesedimente) (1)
- limnology (1)
- linear mixed models (1)
- lineare gemischte Modelle (1)
- linguistics (1)
- linked employer-employee data (1)
- lipidomics (1)
- lithiophilicity (1)
- lithium ion capacitors (1)
- local group (1)
- locality principle (1)
- locally ambiguous sentences (1)
- lokal ambige Sätze (1)
- lokalen Gruppe (1)
- long COVID (1)
- longitudinal studies (1)
- low-cost sensor (1)
- lower critical solution temperature (1)
- lower mantle (1)
- längschnittliche Studien (1)
- lösungsmittelfreie Synthese (1)
- macro-economic modelling (1)
- macroeconomic impacts (1)
- magma assisted continental rifting (1)
- magmagestütztes kontinentales Rifting (1)
- magmatic (1)
- magmatisch (1)
- magnesite (1)
- magnetic proximity effect (1)
- magnetischer Näherungseffekt (1)
- magnetohydrodynamics (1)
- magnetostratigraphy (1)
- magnetostriction (1)
- major mergers (1)
- makroökonomische Folgen (1)
- makroökonomische Modellierung (1)
- maltodextrin (1)
- mammals (1)
- maps (1)
- marine Terrassen (1)
- marine sediments (1)
- marine terrace (1)
- maternal employment (1)
- mean free path (1)
- media theory (1)
- membrane fluidity (1)
- memory distortion (1)
- menschenzentriertes Design (1)
- mental health (1)
- meromorphe Fortsetzung (1)
- meromorphic continuation (1)
- mesoporous (1)
- mesoporös (1)
- metabolic cage (1)
- metabolism (1)
- metabolite (1)
- metabolomics (1)
- methane hydrate (1)
- microRNA (1)
- microbial communities (1)
- microbiology (1)
- microemulsiones (1)
- microemulsions (1)
- microfluidics (1)
- microlensing (1)
- microwave synthesis (1)
- microzonation (1)
- middle childhood (1)
- mikrobielle Gemeinschaften (1)
- military conflicts (1)
- missionary grammar (1)
- mittlere Kindheit (1)
- mixed methods (1)
- mobility (1)
- model-driven software engineering (1)
- modellgetriebene Softwaretechnik (1)
- modifizierte räumliche Autokorrelationsmethode (1)
- modular production (1)
- molybdenum (1)
- moral development (1)
- moralische Entwicklung (1)
- mountain hydrology (1)
- movement (1)
- movement ecology (1)
- mpmUCC (1)
- multi-agent system (1)
- multi-hazard (1)
- multi-objective (1)
- multi-objective optimisation (1)
- multi-version models (1)
- multiresponsiv (1)
- multiresponsive (1)
- multizeta functions (1)
- muscle function (1)
- muscle weakness (1)
- music information retrieval (1)
- mycotoxins (1)
- nachhaltige industrielle Entwicklung (1)
- nachhaltiges Lieferkettenmanagement (1)
- nanocapsules (1)
- nanocomposite (1)
- nanoelectrodes (1)
- nanoestructuras (1)
- nanoestructuras híbridas (1)
- nanomaterials (1)
- nanoparticle assembly (1)
- nanoparticles (1)
- nanopartículas (1)
- nanoplastic (1)
- nanoscale heat transfer (1)
- nanospindles (1)
- nanostructures (1)
- natural hazards (1)
- naturbasierte Lösungen (1)
- nature-based solutions (1)
- need satisfaction (1)
- negative thermal expansion (1)
- negative valence systems (1)
- negatives Valenzsystem (1)
- network optimization (1)
- network protocols (1)
- neural networks (1)
- neuromuscular (1)
- neuronale Netze (1)
- neurons (1)
- next generation sequencing (1)
- nicht-thermische Emission (1)
- nichtlineare Zeitreihenanalyse (1)
- noise reduction (1)
- non-photorealistic rendering (1)
- non-thermal emission (1)
- nonlinear time series analysis (1)
- normal faulting (1)
- numerical (1)
- numerical astrophysics (1)
- numerical simulation (1)
- numerisch (1)
- numerische Astrophysik (1)
- numerische Simulation (1)
- observations with TESS (1)
- odd chain fatty acids (1)
- offene Wissenschaft (1)
- omega (1)
- online courts (1)
- open science (1)
- openHPI (1)
- operating systems (1)
- optimization (1)
- order dependencies (1)
- ore (1)
- organic ligand (1)
- organic synthesis (1)
- organische Synthese (1)
- organization studies (1)
- ortsverteile faseroptische Dehnungsmessung (1)
- osteoblast (1)
- osteoclast (1)
- outcomes (1)
- outlier detection (1)
- palaeoclimate (1)
- palaeoclimatology (1)
- paleo-lake Mweru (1)
- paleoecology (1)
- paleoseismology (1)
- parallel processing (1)
- parallele Verarbeitung (1)
- particle acceleration (1)
- patent (1)
- patterns of violence (1)
- pause (1)
- pension policy (1)
- percept cycles (1)
- perceptual attunement (1)
- perovskite solar cells (1)
- person-centered approaches (1)
- personality development (1)
- personenzentrierte Ansätze (1)
- perzeptuelle Reorganisation (1)
- phase transition (1)
- phonotaxis (1)
- photometry (1)
- physical activity (1)
- phytoplankton host (1)
- picosecond ultrasonics (1)
- place attachment (1)
- plateau (1)
- playback (1)
- pleistocene (1)
- point process (1)
- point-of-care (1)
- polar ice (1)
- polares Eis (1)
- policy (1)
- poly ungesättigte Fettsäuren (1)
- poly(N-isopropyl acrylamide) (1)
- poly(N-isopropyl methacrylamide) (1)
- poly(N-vinyl isobutyramide) (1)
- poly(ionic liquid) (1)
- poly(ionic liquid)s (1)
- poly(ionische Flüssigkeiten) (1)
- polymer (1)
- polymer chemistry (1)
- polyolefin (1)
- polyunsaturated fatty acids (1)
- population connectivity (1)
- population dynamics (1)
- porous materials (1)
- poröse Materialien (1)
- poröse Struktur (1)
- pose estimation (1)
- positive valence systems (1)
- positives Valenzsystem (1)
- post COVID syndrome (1)
- potato (1)
- preference assessment (1)
- preferences (1)
- preschool children (1)
- primary school (1)
- primordial black holes (1)
- prior knowledge (1)
- probabilistic machine learning (1)
- probabilistic modeling (1)
- probabilistische Modellierung (1)
- probabilistisches maschinelles Lernen (1)
- process (1)
- process mining (1)
- production control (1)
- professional development (1)
- prosodic boundary cues (1)
- prosodic cues (1)
- prosodic disambiguation (1)
- prosodic phrase boundary (1)
- prosodische Disambiguierung (1)
- prosodische Grenzmarkierungen (1)
- prosodische Hinweise (1)
- prosodische Phrasengrenze (1)
- prosody (1)
- proteasomal degradation (1)
- protein (1)
- protein multimerization (1)
- protein reconstitution (1)
- protein-aptamer interaction (1)
- protein-level regulation (1)
- proteomics (1)
- psychology (1)
- psychopathology (1)
- psychotherapy (1)
- psychotherapy training (1)
- public management (1)
- public transport (1)
- punishment (1)
- quantifiers (1)
- quantum computing (1)
- quasi-identifier discovery (1)
- query optimization (1)
- radar satellite interferometry (1)
- radiation belts (1)
- radiation hardness (1)
- railways (1)
- range shifts (1)
- rate equation (1)
- reaction path modelling (1)
- real-time (1)
- receptor (1)
- recurrence analysis (1)
- reductive acetyl-CoA pathway (1)
- reductive glycine pathway (1)
- reduktiver Acetyl-CoA-Weg (1)
- reduktiver Glycinweg (1)
- reference material (1)
- refinement (1)
- reflective breadth (1)
- reflective depth (1)
- reflective skills (1)
- regime shifts (1)
- remedial physical education (1)
- remote instruction (1)
- repeated adaptive isometric–eccentric muscle action (1)
- research needs (1)
- resistance (1)
- resonante inelastische Röntgenstreuung (1)
- resource reduction (1)
- respondent pool (1)
- response styles theory (1)
- retirement policies (1)
- reverse engineering (1)
- rift segments interaction (1)
- ring current (1)
- risk communication (1)
- risk of infection (1)
- risk perception (1)
- robot (1)
- rock-water interaction (1)
- rotation (1)
- rumination (1)
- runners (1)
- räumlich explizites Modell (1)
- räumliche Aggregation (1)
- räumliche Autokorrelationsmethode (1)
- räumliche Autokorrelation (1)
- saccharomyces cerevisiae (1)
- salinity gradient (1)
- salt melt templating (1)
- sampling (1)
- satellite galaxies (1)
- scalability (1)
- scaling (1)
- scan paths (1)
- scope ambiguities (1)
- seasonality (1)
- secular trends (1)
- security analytics (1)
- sedaDNA (1)
- sediment geochemistry (1)
- sedimentology (1)
- seed dispersal (1)
- seismic risk (1)
- seismic signal processing (1)
- seismisches Risiko (1)
- selbstanpassendes Multiprozessorsystem (1)
- selbstbestimmte Identitäten (1)
- selenoneine (1)
- self-adaptive multiprocessing system (1)
- self-compassion (1)
- self-concept (1)
- self-driving (1)
- self-evaluation (1)
- self-similarity (1)
- self-sovereign identity (1)
- semi-structured interview (1)
- separate spheres (1)
- serine biosensor (1)
- serine hydroxymethyltransferase (1)
- service business models (1)
- sex (1)
- signal formation (1)
- silicate weathering (1)
- silicon (1)
- simulation framework (1)
- single cell imaging (1)
- single event upset (1)
- single-atom catalysis (1)
- site characterization (1)
- site effects (1)
- situierte Prozesse (1)
- skills (1)
- social comparison (1)
- socio-hydrology (1)
- socioeconomic background (1)
- soft template (1)
- soft-templates (1)
- software/hardware co-design (1)
- soil erosion (1)
- solar particle event (1)
- solvent-free reactions (1)
- sorption (1)
- source model (1)
- sozioökonomischer Hintergrund (1)
- space (1)
- space physics (1)
- spacetimes with timelike boundary (1)
- spatial aggregation (1)
- spatial autocorrelation (1)
- spatially explicit model (1)
- species (1)
- species distribution modelling (1)
- speech (1)
- speech production (1)
- sphäroidische Zwerggalaxien (1)
- spillover effects (1)
- spin structure (1)
- spindown (1)
- spoken (1)
- stabile Isotope (1)
- stable isotopes (1)
- standardization (1)
- star-brown dwarf systems (1)
- star-planet systems (1)
- starch (1)
- stark eutektisches Lösungsmittel (1)
- stark verhaltenskorrekt sperrend (1)
- stars: distances (1)
- static source-code analysis (1)
- statische Quellcodeanalyse (1)
- statistical machine learning (1)
- statistics (1)
- statistisches maschinelles Lernen (1)
- stellar content (1)
- stellarer Inhalt (1)
- steuerliche Außenprüfung (1)
- strahleninduzierte Einzelereignis-Effekte (1)
- stratigraphic forward modelling (1)
- stratigraphische Vorwärtsmodellierung (1)
- strength vs. endurance athletes (1)
- stress modeling (1)
- stress processing (1)
- strongly behaviourally correct locking (1)
- structural ambiguities (1)
- structural equation modeling (1)
- structural inheritance (1)
- strukturelle Ambiguitäten (1)
- städtische Überschwemmungen (1)
- sub-saharan Africa (1)
- subduction (1)
- sulfide (1)
- sulfur host (1)
- supernova remnant (1)
- surface chemistry (1)
- surface exposure dating (1)
- surface sediment (1)
- surfactants (1)
- sustainable industrial development (1)
- sustainable supply chain management (1)
- switchSENSE (1)
- switchSENSE Technologie (1)
- synthesis (1)
- synthetic biology (1)
- synthetic formatotrophy (1)
- synthetische Biologie (1)
- systematic literature review (1)
- systematic review (1)
- systematische Übersicht (1)
- systems biology (1)
- säkulare Trends (1)
- tRNA Thiomodifikation (1)
- tRNA thiomodifications (1)
- teacher learning (1)
- tectonic geomorphology (1)
- tektonische Geomorphologie (1)
- telemedicine (1)
- temporal graph queries (1)
- temporale Graphanfragen (1)
- tensioactivos (1)
- the Arab world (1)
- thermal modeling (1)
- thermal properties (1)
- thermochronology (1)
- thermoresponsive Polymere (1)
- thermoresponsive polymer (1)
- threat detection (1)
- tidal interactions (1)
- tiefe Gauß-Prozesse (1)
- tiefer Kohlenstoff (1)
- time-buying (1)
- timing of school enrollment (1)
- topography (1)
- tptp (1)
- tracking (1)
- tracking impacts (1)
- trans fatty acids (1)
- transdiagnostic (1)
- transdiagnostisch (1)
- transdisciplinary (1)
- transition metal systems (1)
- translation efficiency (1)
- transmission spectroscopy (1)
- transposable elements (1)
- tribunales de justicia (1)
- tribunales en línea (1)
- triple graph grammars (1)
- tropical lake (1)
- tropischer See (1)
- tsunami risk (1)
- two-way fixed effects (1)
- type 2 diabetes (1)
- ubiquitination (1)
- ultrafast photoacoustics (1)
- ultrafast x-ray diffraction (1)
- uncanny (1)
- uncertainty analysis (1)
- ungefähre Bayessche Komputation (1)
- ungeradkettige Fettsäuren (1)
- unheimlich (1)
- unique column combinations (1)
- unsupervised (1)
- untere kritische Lösungstemperatur (1)
- unterer Mantel (1)
- uranium-lead-dating (1)
- urban (1)
- urban pluvial flood (1)
- valence band structure (1)
- variability (1)
- variation (1)
- variational inference (1)
- variationelle Inferenz (1)
- varying interlocutors (1)
- velocity model (1)
- verhaltenskorrektes Lernen (1)
- verifiable credentials (1)
- verschiedene Gesprächspartner:innen (1)
- verstärkendes Lernen (1)
- vesicles (1)
- video analysis (1)
- virus assembly (1)
- virus-host interaction (1)
- volcanic hazard assessment (1)
- volcanic tremor (1)
- volcano seismology (1)
- vulkanischer Tremor (1)
- wars (1)
- water balance (1)
- water quality (1)
- water resources (1)
- wealth (1)
- weather extremes (1)
- weiche Vorlage (1)
- weiße Blutkörperchen (1)
- welfare (1)
- white blood cells (1)
- wind bubble (1)
- wireless networks (1)
- withdrawal (1)
- women in management (1)
- women’s careers (1)
- wrinkles (1)
- written (1)
- ytterbium (1)
- zeitlich hochaufgelöste Sensormessungen (1)
- zelluläre Bioenergetik (1)
- zero-day (1)
- zinc (1)
- zirkumgalaktischen Medium (1)
- zooplankton (1)
- zytosolische tRNA-Thiolierung (1)
- Ätna (1)
- Ökologie (1)
- Übergangsmetall - Komplexe (1)
- Überschwemmungen (1)
- öffentlicher Verkehr (1)
- ökologische Modellierung (1)
- ökologisches Upgrading (1)
- östliche Kordillere (1)
- überprüfbare Nachweise (1)
- δ18O and δ13C stabile Isotope (1)
- δ18O and δ13C stable isotopes (1)
Institute
- Hasso-Plattner-Institut für Digital Engineering GmbH (55)
- Extern (47)
- Institut für Geowissenschaften (27)
- Institut für Biochemie und Biologie (26)
- Institut für Physik und Astronomie (22)
- Institut für Chemie (19)
- Vereinigung für Jüdische Studien e. V. (13)
- Institut für Umweltwissenschaften und Geographie (12)
- Center for Economic Policy Analysis (CEPA) (11)
- Fachgruppe Volkswirtschaftslehre (11)
“They Took to the Sea”
(2023)
The sea and maritime spaces have long been neglected in the field of Jewish studies, despite their relevance in Jewish religious texts and historical narratives. Images such as Noah’s ark, King Solomon’s maritime ventures, or the miraculous parting of the Red Sea immediately come to mind, yet they illustrate only a few aspects of Jewish maritime activity. Consequently, the relationship between Jews and the sea has to be seen in a much broader spatial and temporal framework in order to understand the overall importance of maritime spaces in Jewish history and culture.
Almost sixty years after the publication of Samuel Tolkowsky’s pivotal study of maritime Jewish history and culture, “They Took to the Sea” (1964), this volume of PaRDeS seeks to follow these ideas, to revisit Jewish history and culture from different maritime perspectives, and to shed new light on current research in the field that brings together Jewish and maritime studies.
The articles in this volume therefore reflect a wide range of topics and illustrate how maritime perspectives can enrich our understanding of Jewish history and culture and its entanglement with the sea – especially in modern times. They study different spaces and examine the narratives and functions embedded in them. In one way or another, they follow the discussions of recent decades that have focused on the importance of spatial dimensions and opened up possibilities for studying the production and construction of spaces, their influence on cultural practices and ideas, and the structures and changes of social processes. By taking these debates into account, the articles offer new insights, taking us out to “sea” and inviting us to revisit Jewish history and culture from different maritime perspectives.
“One video fit for all”
(2023)
Online learning in mathematics has always been challenging, especially for mathematics in STEM education. This paper presents how to make “one fit for all” lecture videos for mathematics in STEM education. In general, we believe there is no such thing as a “one fit for all” video: the curriculum requires a high level of prior mathematical knowledge from high school, and the variation in prior knowledge among STEM students is often large. This creates challenges for both online and on-campus teaching. This article presents experiments and research on a video format that gives students a real-time feeling and fits their needs regarding their existing prior knowledge. Students can ask and receive answers during the video without having to jump between different sources, which helps to reduce unnecessary distractions. The fundamental format presented here is that of dynamic branching videos, which has received little attention in education-related research so far. The reason might be that this field is quite new for higher education and that, given the platforms available so far, it places relatively high demands on teachers’ video-editing skills. The videos were implemented for engineering students taking the Linear Algebra course at the Norwegian University of Science and Technology in spring 2023. Feedback gathered from the students via anonymous surveys so far (N = 21) is very positive. Given its high suitability for online teaching, this video format might lead the trend of online learning in the future. The design and implementation of dynamic branching videos for mathematics in higher education was first presented at the EMOOCs conference 2023.
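The branching format described above can be pictured as a directed graph of video segments with decision points where the viewer picks a path. The following is a minimal, hypothetical sketch; the segment names and the question are invented, not taken from the course:

```python
# Minimal sketch of a dynamic branching video: a segment either plays
# straight through ("next") or pauses at a decision point ("branches")
# where the viewer chooses, e.g., a prior-knowledge refresher.
segments = {
    "intro":      {"next": "decision_1"},
    "decision_1": {"question": "Comfortable with matrix multiplication?",
                   "branches": {"yes": "main_proof", "no": "refresher"}},
    "refresher":  {"next": "main_proof"},   # extra prior-knowledge material
    "main_proof": {"next": None},           # end of the lecture
}

def play(start, choices):
    """Walk the segment graph, consuming one viewer choice per decision point."""
    path, node, it = [], start, iter(choices)
    while node is not None:
        path.append(node)
        seg = segments[node]
        node = seg["branches"][next(it)] if "branches" in seg else seg.get("next")
    return path
```

A viewer who asks for the refresher simply traverses one extra segment, which is how a single video can serve students with different levels of prior knowledge.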
“Israel am Meere”
(2023)
For Jews in Germany, the period following the Nazis’ rise to power in January 1933 was a time of decision-making on many levels: How should they respond to the persecution? If they decided to emigrate, many more decisions had to be made: How does one leave a country, and where should one go? A key moment in the process and cultural practice of emigration is the beginning of the sea voyage, when the need for departure and the hope for a new arrival jointly create a period of liminality. Looking at reports from sea voyages of exploration and emigration in the 1930s, this contribution discusses whether, and in what ways, such reflections can be read in the context of religious experience and of the search for Jewish identities in times of turmoil.
“Creating a Maritime Future”
(2023)
This article explores the importance of the port city of Hamburg in the evolving discourses on the creation of a maritime future, a vision which became influential in the 1930s, 1940s and 1950s. While some Jewish representatives in the city aimed at preserving and intertwining Hanseatic and Jewish traditions in order to secure a Jewish presence in the port city under the pressure of the Nazi regime and thereafter, others wanted to create new emigration opportunities, especially to Mandatory Palestine, and create a Jewish maritime future in Eretz Israel. Different Zionist organizations supported the newly evolving maritime ideas, such as the “conquest of the sea”, and promoted the image of a Jewish seafaring nation. Despite the difficulties in the 1940s, these concepts gained influence post-1945 and led to the foundation of the fishery kibbutz “Zerubavel” in Blankenese/Hamburg. However, the idea of a Hanseatic Jewish future also remained influential and illustrates how differently a “Jewish maritime future” was imagined and used to link past, present and future.
xMOOCs
(2023)
The World Health Organization designed OpenWHO.org to provide an inclusive and accessible online environment that equips learners across the globe with critical, up-to-date information and enables them to protect themselves effectively in health emergencies. The platform thus focuses on the eXtended Massive Open Online Course (xMOOC) modality – content-focused and expert-driven, one-to-many, and self-paced for scalable learning. In this paper, we describe how OpenWHO utilized xMOOCs to reach mass audiences during the COVID-19 pandemic; the paper specifically examines the accessibility, language inclusivity and adaptability of the hosted xMOOCs. As of February 2023, OpenWHO had 7.5 million enrolments across 200 xMOOCs on health emergency, epidemic, pandemic and other public health topics, available in 65 languages, including 46 courses targeted at the COVID-19 pandemic. Our results suggest that the xMOOC modality allowed OpenWHO to extend learning during the pandemic to previously underrepresented groups, including women, participants aged 70 and older, and learners younger than 20. The OpenWHO use case shows that xMOOCs should be considered when there is a need for massive knowledge transfer in health emergencies, yet the approach should be context-specific to the type of health emergency, the targeted population and the region. Our evidence also supports previous calls to put intervention elements that remove barriers to access at the core of learning and health information dissemination. Equity must be the fundamental principle and organizing criterion for public health work.
Diversity is a term that is broadly used and challenging for informatics research, development and education. Diversity concerns may relate to unequal participation, knowledge and methodology, curricula, institutional planning, etc. For many of these areas, measures, guidelines and best practices on diversity awareness exist; a systemic, sustainable impact of diversity measures on informatics, however, is still largely missing. In this paper I explore what working with diversity and gender concepts in informatics entails, identify the main challenges, and offer thoughts for improvement. The paper includes definitions of diversity and intersectionality, reflections on the disciplinary basis of informatics, and practical implications of integrating diversity into informatics research and development. In the final part, two concepts from the social sciences and the humanities, the notion of “third space”/hybridity and the notion of a “feminist ethics of care”, serve as a lens to foster more sustainable ways of working with diversity in informatics.
Leveraging two cohort-specific pension reforms, this paper estimates the forward-looking effects of an exogenous increase in the working horizon on (un)employment behaviour for individuals with a long remaining statutory working life. Using difference-in-differences and regression discontinuity approaches based on administrative and survey data, I show that a longer legal working horizon increases individuals’ subjective expectations about the length of their work life, raises the probability of employment, decreases the probability of unemployment, and increases the intensity of job search among the unemployed. Heterogeneity analyses show that the demonstrated employment effects are strongest for women and in occupations with comparatively low physical intensity, i.e., occupations that can be performed at older ages.
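The difference-in-differences logic underlying the analysis can be sketched in its simplest 2×2 form: compare the before/after change in the treated cohort (longer statutory working horizon) with the change in an unaffected control cohort. The employment rates below are invented for illustration and are not results from the paper:

```python
# Hypothetical employment rates by cohort and period.
rates = {
    ("treated", "before"): 0.60, ("treated", "after"): 0.68,
    ("control", "before"): 0.61, ("control", "after"): 0.63,
}

def did(rates):
    """2x2 difference-in-differences estimate of the treatment effect."""
    change_treated = rates[("treated", "after")] - rates[("treated", "before")]
    change_control = rates[("control", "after")] - rates[("control", "before")]
    # Under the parallel-trends assumption, the difference of the two
    # changes isolates the effect of the longer working horizon.
    return change_treated - change_control

effect = did(rates)  # ≈ 0.06 in this toy example
```

The regression implementations in the paper generalize this comparison to many cohorts and periods with controls, but the identifying contrast is the same.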
We analyze the impact of women’s managerial representation on the gender pay gap among employees at the establishment level, using German linked employer-employee data from the years 2004 to 2018. To identify a causal effect, we employ a panel model with establishment fixed effects and industry-specific time dummies. Our results show that a higher share of women in management significantly reduces the gender pay gap within the firm. An increase in the share of women in first-level management, e.g. from zero to above 33 percent, decreases the adjusted gender pay gap from a baseline of 15 percent by 1.2 percentage points, i.e. to roughly 14 percent. The effect is stronger for women in second-level than in first-level management, indicating that women managers who interact more closely with their subordinates have a greater impact on the gender pay gap than women at higher management levels. The results are similar for East and West Germany, despite the lower gender pay gap and more gender-egalitarian social norms in East Germany. From a policy perspective, we conclude that increasing the number of women in management positions has the potential to reduce the gender pay gap to a limited extent. However, further policy measures will be needed to fully close the gender gap in pay.
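The establishment-fixed-effects design can be illustrated with the within transformation: demeaning log wages and the female indicator within each establishment removes time-invariant firm differences before the gap is estimated. A minimal sketch with made-up data (not the paper’s dataset or estimates):

```python
from collections import defaultdict

# Hypothetical employee records: (establishment, female indicator, log wage).
rows = [("A", 1, 2.8), ("A", 0, 3.0), ("B", 1, 3.3), ("B", 0, 3.5)]

def within_gap(rows):
    """OLS slope of demeaned log wage on the demeaned female indicator,
    i.e. the within-establishment gender pay gap in log points."""
    by_firm = defaultdict(list)
    for firm, female, wage in rows:
        by_firm[firm].append((female, wage))
    num = den = 0.0
    for obs in by_firm.values():
        f_mean = sum(f for f, _ in obs) / len(obs)
        w_mean = sum(w for _, w in obs) / len(obs)
        for f, w in obs:
            num += (f - f_mean) * (w - w_mean)
            den += (f - f_mean) ** 2
    return num / den

gap = within_gap(rows)  # ≈ -0.2: women earn about 0.2 log points less here
```

The paper’s specification additionally interacts the gap with the share of women in management and adds industry-specific time dummies; this sketch only shows the fixed-effects core.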
With the recent growth of sensors, cloud computing handles the data processing of many applications. However, processing some of this data in the cloud raises many concerns regarding, e.g., privacy, latency, or single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more of them.
The contribution of this thesis to this scenario is divided into three parts. In the first part, I focus on wireless aspects, such as power control and interference management, when deciding which jobs to run on which node and how to route data between nodes. To this end, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller that decides which applications should be admitted. Next, I look into acoustic applications and improve wireless throughput by exploiting microphone clock synchronization to synchronize wireless transmissions.
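One flavor of the allocation heuristics mentioned above, placing jobs on nodes under limited computation capacity, can be sketched as a greedy first-fit-decreasing rule. Node capacities and job loads are hypothetical, and the thesis formulates richer optimization problems that also model interference and routing, which this sketch omits:

```python
def allocate(jobs, capacity):
    """Greedy first-fit-decreasing: place the heaviest jobs first on the
    node with the most remaining compute capacity.
    jobs: {job: load}, capacity: {node: capacity}. Returns {job: node}."""
    remaining = dict(capacity)
    placement = {}
    for job, load in sorted(jobs.items(), key=lambda kv: -kv[1]):
        node = max(remaining, key=remaining.get)  # most spare capacity
        if remaining[node] < load:
            raise ValueError(f"no node can host job {job!r}")
        remaining[node] -= load
        placement[job] = node
    return placement

# Three jobs of an audio pipeline shared across two wireless nodes.
placement = allocate({"fft": 3, "filter": 2, "mix": 2}, {"n1": 4, "n2": 4})
```

Such a greedy rule gives a feasible starting point that the meta-heuristics can then improve while accounting for the wireless constraints.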
In the second part, I work jointly with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) quality. My contribution focuses on the network part, where I study the relation between acoustic and network quality when selecting a subset of microphones for collecting audio data, or a subset of optional jobs for processing these data; too many microphones or too many jobs can reduce quality through unnecessary delays. Hence, I develop RL solutions that select the subset of microphones under network constraints while the speaker is moving, while still providing good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic quality of different applications. Accordingly, I develop RL solutions (single- and multi-agent) for controlling these vehicles.
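The microphone-selection trade-off described above, where more microphones may help acoustics but cost network resources, can be sketched as a greedy knapsack-style pick under a bandwidth budget. The utilities and costs below are hypothetical stand-ins for the learned acoustic-quality estimates used in the thesis:

```python
def select_mics(mics, budget):
    """Greedy selection by utility per unit of bandwidth cost.
    mics: {id: (acoustic_utility, bandwidth_cost)}. Returns selected ids."""
    chosen, used = [], 0.0
    ranked = sorted(mics.items(), key=lambda kv: kv[1][0] / kv[1][1],
                    reverse=True)
    for mic, (util, cost) in ranked:
        if used + cost <= budget:  # only add mics that fit the budget
            chosen.append(mic)
            used += cost
    return chosen

# Three microphones competing for 2.0 units of wireless bandwidth.
picked = select_mics({"m1": (0.9, 1.0), "m2": (0.5, 1.0), "m3": (0.4, 1.0)}, 2.0)
```

An RL agent, as in the thesis, would replace the fixed utilities with values it learns online while the speaker moves; the budgeted-selection structure stays the same.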
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework, which serves as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using this framework. I also use the framework to study in-network delays (wireless and processing) under different distributions of jobs and network topologies.
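The delay study mentioned above can be illustrated with a toy model: the end-to-end delay of a pipeline of jobs is the sum of the per-job processing delays plus the transmission delay of every wireless hop between consecutively placed jobs. All names and numbers below are hypothetical:

```python
def end_to_end_delay(pipeline, placement, proc_delay, link_delay):
    """pipeline: ordered job names; placement: job -> node;
    proc_delay: job -> seconds; link_delay: (node, node) -> seconds."""
    total = sum(proc_delay[j] for j in pipeline)
    for a, b in zip(pipeline, pipeline[1:]):
        if placement[a] != placement[b]:  # data crosses a wireless link
            total += link_delay[(placement[a], placement[b])]
    return total

delay = end_to_end_delay(
    pipeline=["capture", "filter", "mix"],
    placement={"capture": "n1", "filter": "n1", "mix": "n2"},
    proc_delay={"capture": 0.01, "filter": 0.05, "mix": 0.02},
    link_delay={("n1", "n2"): 0.03},
)
```

Varying the placement and topology in a model like this shows why different job distributions yield very different in-network delays.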
In an effort to describe and produce different formats of video instruction, the research community in technology-enhanced learning, and MOOC scholars in particular, have focused on the general style of video production: whether a learning unit is a digitally scripted “talk-and-chalk” recording or a “talking head” version. Since these production styles comprise various sub-elements, this paper deconstructs the inherited elements of video production in the context of educational live-streams. Based on over 700 videos from both the synchronous and asynchronous modalities of large video-based platforms (YouTube and Twitch), 92 features were identified in eight categories of video production. These include commonly analyzed features such as the use of a green screen and a visible instructor, but also less studied features such as social media connections and camera perspectives that change depending on the topic being covered. Overall, the results enable an analysis of common video production styles and provide a toolbox for categorizing new formats – independent of their final (a)synchronous use in MOOCs. Keywords: video production, MOOC video styles, live-streaming.
What is it good for?
(2023)
Military conflicts and wars affect a country’s development in various dimensions. Rising inflation rates are a potentially important economic effect associated with conflict. High inflation can undermine investment, weigh on private consumption, and threaten macroeconomic stability. These effects are not necessarily restricted to the locality of the conflict but can also spill over to other countries. Therefore, to understand how conflict affects the economy and to assess the costs of armed conflict more comprehensively, it is important to take inflationary effects into account. To disentangle the conflict-inflation nexus and to quantify this relationship, we conduct a panel analysis for 175 countries over the period 1950–2019. To capture indirect inflationary effects, we construct a distance-based spillover index. In general, the results of our analysis confirm a statistically significant positive direct association between conflicts and inflation rates. This finding is robust across various model specifications. Moreover, our results indicate that conflict-induced inflation is not driven solely by an increasing money supply. Finally, we document a statistically significant positive indirect association between conflicts and inflation rates in uninvolved countries.
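As a rough sketch of how a distance-based spillover index of this kind can be constructed (the paper's actual weighting scheme is not specified in the abstract; the inverse-distance weights, variable names, and numbers below are purely illustrative), each country's exposure can be computed as a distance-weighted average of conflict intensity in all other countries:

```python
import numpy as np

def spillover_index(conflict, distances):
    """Distance-weighted spillover of conflict intensity (illustrative).

    conflict:  (n,) array of conflict intensity per country in one year
    distances: (n, n) matrix of pairwise distances (diagonal set to inf)

    Returns an (n,) array where each entry is the inverse-distance-weighted
    average of conflict in all *other* countries.
    """
    weights = 1.0 / distances               # closer neighbours weigh more
    np.fill_diagonal(weights, 0.0)          # exclude the country itself
    weights /= weights.sum(axis=1, keepdims=True)  # row-normalize
    return weights @ conflict

# toy example: three countries, conflict only in country 0
conflict = np.array([1.0, 0.0, 0.0])
dist = np.array([[np.inf,  500.0, 2000.0],
                 [ 500.0, np.inf, 1000.0],
                 [2000.0, 1000.0, np.inf]])
s = spillover_index(conflict, dist)
# country 1 (500 km from the conflict) receives a larger spillover
# than country 2 (2000 km away)
```

In a panel setting, this index would be computed per year and entered as a regressor for the inflation rate of the uninvolved country.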
Hybrid nanomaterials combine the individual properties of different types of nanoparticles. Some strategies for developing new nanostructures at larger scale rely on the self-assembly of nanoparticles as a bottom-up approach. The use of templates provides ordered assemblies in defined patterns. In a typical soft-template approach, nanoparticles and other surface-active agents are incorporated into non-miscible liquids. The resulting self-organized dispersions mediate nanoparticle interactions to control the subsequent self-assembly. In particular, interactions between nanoparticles of very different dispersibility and functionality can be directed at a liquid-liquid interface.
In this project, water-in-oil microemulsions were formulated from quasi-ternary mixtures with Aerosol-OT as surfactant. Oleyl-capped superparamagnetic iron oxide and/or silver nanoparticles were incorporated in the continuous organic phase, while polyethyleneimine-stabilized gold nanoparticles were confined in the dispersed water droplets. Each type of nanoparticle can modulate the surfactant film and the inter-droplet interactions in diverse ways, and their combination causes synergistic effects. Interfacial assemblies of nanoparticles resulted after phase separation. On the one hand, from a biphasic Winsor type II system at low surfactant concentration, drop-casting of the upper phase afforded thin films of ordered nanoparticles in filament-like networks. Detailed characterization proved that this templated assembly over a surface is based on the controlled clustering of nanoparticles and the elongation of the microemulsion droplets. The process is versatile with respect to nanoparticle composition, as long as the surface functionalization is kept, and works in different solvents and over different surfaces. On the other hand, a magnetic heterocoagulate formed at higher surfactant concentration; its phase transfer from oleic acid to water was possible with an auxiliary surfactant in an ethanol-water mixture. When the original components were initially mixed under heating, defined oil-in-water, magnetically responsive nanostructures were obtained, consisting of water-dispersible nanoparticle domains embedded in a matrix-shell of oil-dispersible nanoparticles.
Herein, two different approaches were demonstrated to form diverse hybrid nanostructures from reverse microemulsions as self-organized dispersions of the same components. This shows that microemulsions are versatile soft-templates not only for the synthesis of nanoparticles but also for their self-assembly, which suggests new routes towards the production of sophisticated nanomaterials at larger scale.
Volcanoes are among the Earth’s most dynamic zones and responsible for many changes on our planet. Volcano seismology aims to understand the physical processes in volcanic systems and to anticipate the style and timing of eruptions by analyzing seismic records. Volcanic tremor signals are usually observed in the seismic records before or during volcanic eruptions. Their analysis contributes to evaluating the evolving volcanic activity and potentially to predicting eruptions. Years of continuous seismic monitoring now provide useful information for operational eruption forecasting. The continuously growing amount of seismic recordings, however, poses a challenge for analysis, information extraction, and interpretation in support of timely decision making during volcanic crises. Furthermore, the complexity of eruption processes and precursory activities makes the analysis challenging.
A challenge in studying seismic signals of volcanic origin is the coexistence of transient signal swarms and long-lasting volcanic tremor signals. Separating transient events from volcanic tremors can therefore improve our understanding of the underlying physical processes. Similar issues (data reduction, source separation, extraction, and classification) are addressed in the context of music information retrieval (MIR), and the signal characteristics of acoustic and seismic recordings share a number of similarities. This thesis goes beyond the classical signal analysis techniques usually employed in seismology by exploiting these similarities and building the information retrieval strategy on the expertise developed in the field of MIR.
First, inspired by the idea of harmonic-percussive separation (HPS) in musical signal processing, I have developed a method to extract harmonic volcanic tremor signals and to detect transient events in seismic recordings. This provides a clean tremor signal suitable for tremor investigation along with a characteristic function suitable for earthquake detection. Second, using HPS algorithms, I have developed a noise reduction technique for seismic signals. This method is especially useful for denoising ocean-bottom seismometer records, which are highly contaminated by noise. Its advantage compared to other denoising techniques is that it does not distort broadband earthquake waveforms, which makes it reliable for different applications in passive seismological analysis. Third, to address the challenge of extracting information from high-dimensional data and investigating complex eruptive phases, I have developed an advanced machine learning model that results in a comprehensive signal processing scheme for volcanic tremors. Using this method, seismic signatures of major eruptive phases can be detected automatically, providing a chronology of the volcanic system. The model is also capable of detecting weak precursory volcanic tremors prior to an eruption, which could serve as an indicator of imminent eruptive activity. The extracted patterns of seismicity and their temporal variations finally provide an explanation for the transition mechanism between eruptive phases.
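The classic median-filtering formulation of HPS (Fitzgerald, 2010) that inspires this line of work can be sketched as follows. This is a generic illustration, not the thesis' actual implementation; the STFT parameters, kernel size, and the toy tremor/event proxies are assumptions. Harmonic (tremor-like) energy forms horizontal ridges in the spectrogram, transient (event-like) energy forms vertical ridges, and median filters along each axis enhance one while suppressing the other:

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import median_filter

def hps(signal, fs, kernel=17):
    """Median-filtering harmonic-percussive separation (after Fitzgerald, 2010)."""
    _, _, Z = stft(signal, fs=fs, nperseg=256)
    mag = np.abs(Z)
    harm = median_filter(mag, size=(1, kernel))   # smooth along time axis
    perc = median_filter(mag, size=(kernel, 1))   # smooth along frequency axis
    eps = 1e-10
    mask_h = harm**2 / (harm**2 + perc**2 + eps)  # soft Wiener-like masks
    mask_p = perc**2 / (harm**2 + perc**2 + eps)
    _, x_h = istft(Z * mask_h, fs=fs, nperseg=256)
    _, x_p = istft(Z * mask_p, fs=fs, nperseg=256)
    return x_h, x_p

# toy example: a steady sine (tremor proxy) plus a single spike (event proxy)
fs, n = 100.0, 2048
x = np.sin(2 * np.pi * 5.0 * np.arange(n) / fs)
x[1000] += 10.0
tremor, events = hps(x, fs)
# the spike ends up mostly in `events`, the sine mostly in `tremor`
```

The separated `events` component then serves as input to a transient detector, while the cleaned `tremor` component is used for tremor analysis.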
Founded in 2013, OpenClassrooms is a French online learning company that offers both paid courses and free MOOCs on a wide range of topics, including computer science and education. In 2021, in partnership with the EDA research unit, OpenClassrooms shared a database to solve the problem of how to increase persistence in their paid courses, which consist of a series of MOOCs and human mentoring. Our statistical analysis aims to identify reasons for dropouts that are due to the course design rather than demographic predictors or external factors. We aim to identify at-risk students, i.e., those who are on the verge of dropping out at a specific moment. To achieve this, we use learning analytics to characterize student behavior. We conducted data analysis on a sample of data from the “Web Designers” and “Instructional Design” courses. By visualizing the student flow and constructing speed and acceleration predictors, we can identify which parts of the course need to be calibrated and when particular attention should be paid to at-risk students.
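A minimal sketch of how speed and acceleration predictors could be derived from item-completion timestamps follows; the actual predictors in the study may be defined differently, and the window length and the at-risk heuristic here are assumptions:

```python
import numpy as np

def speed_and_acceleration(completion_days, window=7.0):
    """Per-student progress predictors (hypothetical construction).

    completion_days: sorted day offsets at which a student completed
    successive course items.  Speed is items completed per day over a
    trailing window; acceleration is the day-to-day change in speed.
    Speed near zero with negative acceleration would flag an at-risk student.
    """
    days = np.asarray(completion_days, dtype=float)
    # evaluate daily, continuing past the last completion to see stalls
    horizon = np.arange(0.0, days.max() + window + 1.0)
    speed = np.array([np.sum((days > d - window) & (days <= d)) / window
                      for d in horizon])
    acceleration = np.gradient(speed)
    return horizon, speed, acceleration

# toy example: one item per day until day 10, then the student stalls
days_completed = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
t, v, a = speed_and_acceleration(days_completed)
# speed is 1.0 item/day at day 10, decays to 0 afterwards,
# with negative acceleration marking the onset of the stall
```

In the course-design setting, aggregating these per-student curves by course chapter shows where many students decelerate at once, i.e., which parts of the course need recalibration.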
In the last century, several astronomical measurements have supported the view that a significant fraction (about 22%) of the total mass of the Universe, on galactic and extragalactic scales, is composed of a mysterious “dark” matter (DM). DM does not interact via the electromagnetic force; in other words, it does not reflect, absorb, or emit light. It is possible that DM particles are weakly interacting massive particles (WIMPs) that can annihilate (or decay) into Standard Model (SM) particles, and modern very-high-energy (VHE; > 100 GeV) instruments such as imaging atmospheric Cherenkov telescopes (IACTs) can play an important role in constraining the main properties of such DM particles by detecting these products. Among the most favoured targets in which to search for a DM signal are dwarf spheroidal galaxies (dSphs), as they are expected to be highly DM-dominated objects with a clean, gas-free environment. Given the angular resolution of IACTs, some dSphs should be treated as extended sources, and this resolution is adequate to detect extended emission from them. For this reason, we performed an extended-source analysis, taking both the energy and the angular-extension dependency of observed events into account in the unbinned maximum likelihood estimation. The goal was to set more constraining upper limits on the velocity-averaged annihilation cross-section of WIMPs with VERITAS data. VERITAS is an array of four IACTs able to detect γ-ray photons ranging between 100 GeV and 30 TeV. The results of this extended analysis were compared against the traditional spectral analysis. We found that a 2D analysis may lead to more constraining results, depending on the DM mass, channel, and source. Moreover, this thesis also presents the results of a multi-instrument project.
Its goal was to combine already published data on 20 dSphs from five different experiments (Fermi-LAT, MAGIC, H.E.S.S., VERITAS, and HAWC) in order to set upper limits on the WIMP annihilation cross-section in the widest mass range ever reported.
Most machine learning methods provide only point estimates when queried to predict on new data. This is problematic when the data are corrupted by noise, e.g. from imperfect measurements, or when the queried data point is very different from the data the model was trained on. Probabilistic modelling in machine learning naturally equips predictions with corresponding uncertainty estimates, which allows a practitioner to incorporate information about measurement noise into the modelling process and to know when not to trust the predictions. A well-understood, flexible probabilistic framework is provided by Gaussian processes, which are ideal building blocks of probabilistic models. They lend themselves naturally to the problem of regression, i.e., being given a set of inputs and corresponding observations and then predicting likely observations for new, unseen inputs, and can also be adapted to many more machine learning tasks. However, exactly inferring the optimal parameters of such a Gaussian process model (in a computationally tractable manner) is only possible for regression tasks in small data regimes. Otherwise, approximate inference methods are needed, the most prominent of which is variational inference.
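For illustration only (the thesis' models are far more involved), exact Gaussian process regression with a squared-exponential kernel fits in a few lines; the O(n³) Cholesky factorization in the middle is exactly why exact inference is restricted to small data regimes, and the growing predictive variance away from the training data is the uncertainty estimate discussed above:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(a, b) for 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(x_train, y_train, x_test, noise=0.1):
    """Exact GP regression: predictive mean and variance at x_test."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)
    K_ss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)        # O(n^3): the small-data bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(x)
xs = np.array([0.0, 10.0])           # one in-sample point, one far away
mu, var = gp_predict(x, y, xs)
# var at x=10 reverts to the prior variance (~1.0): the model "knows"
# it should not be trusted far from the data
```

Sparse variational approximations replace the full Cholesky with computations on a small set of inducing points, which is the starting point for the models studied in the thesis.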
In this dissertation we study models that are composed of Gaussian processes embedded in other models in order to make those more flexible and/or probabilistic. The first example are deep Gaussian processes which can be thought of as a small network of Gaussian processes and which can be employed for flexible regression. The second model class that we study are Gaussian process state-space models. These can be used for time-series modelling, i.e., the task of being given a stream of data ordered by time and then predicting future observations. For both model classes the state-of-the-art approaches offer a trade-off between expressive models and computational properties (e.g. speed or convergence properties) and mostly employ variational inference. Our goal is to improve inference in both models by first getting a deep understanding of the existing methods and then, based on this, to design better inference methods. We achieve this by either exploring the existing trade-offs or by providing general improvements applicable to multiple methods.
We first provide an extensive background, introducing Gaussian processes and their sparse (approximate and efficient) variants. We continue with a description of the models under consideration in this thesis, deep Gaussian processes and Gaussian process state-space models, including detailed derivations and a theoretical comparison of existing methods.
Then we start analysing deep Gaussian processes more closely: trading off the properties (good optimisation versus expressivity) of state-of-the-art methods in this field, we propose a new variational-inference-based approach. We then demonstrate experimentally that our new algorithm leads to better calibrated uncertainty estimates than existing methods.
Next, we turn our attention to Gaussian process state-space models, where we closely analyse the theoretical properties of existing methods. The understanding gained in this process leads us to propose a new inference scheme for general Gaussian process state-space models that incorporates effects on multiple time scales. This method is more efficient than previous approaches for long time series and outperforms its comparison partners on data sets in which effects on multiple time scales (fast and slowly varying dynamics) are present.
Finally, we propose a new inference approach for Gaussian process state-space models that trades off the properties of state-of-the-art methods in this field. By combining variational inference with another approximate inference method, the Laplace approximation, we design an efficient algorithm that outperforms its comparison partners since it achieves better calibrated uncertainties.
This thesis explores the variation in coreference patterns across language modes (i.e., spoken and written) and text genres. The significance of research on variation in language use has been emphasized in a number of linguistic studies. For instance, Biber and Conrad [2009] state that “register/genre variation is a fundamental aspect of human language” and “Given the ubiquity of register/genre variation, an understanding of how linguistic features are used in patterned ways across text varieties is of central importance for both the description of particular languages and the development of cross-linguistic theories of language use.”[p.23]
We examine the variation across genres with the primary goal of contributing to the body of knowledge on the description of language use in English. On the computational side, we believe that incorporating linguistic knowledge into learning-based systems can boost the performance of automatic natural language processing systems, particularly for non-standard texts. Therefore, in addition to their descriptive value, the linguistic findings we provide in this study may prove to be helpful for improving the performance of automatic coreference resolution, which is essential for a good text understanding and beneficial for several downstream NLP applications, including machine translation and text summarization.
In particular, we study a genre of texts that is formed of conversational interactions on the well-known social media platform Twitter. Two factors motivate us: First, Twitter conversations are realized in written form but resemble spoken communication [Scheffler, 2017], and therefore they form an atypical genre for the written mode. Second, while Twitter texts are a complicated genre for automatic coreference resolution, their widespread use in the digital sphere makes them highly relevant for applications that seek to extract information or sentiments from users’ messages. Thus, we are interested in discovering more about the linguistic and computational aspects of coreference in Twitter conversations. We first created a corpus of such conversations for this purpose and annotated it for coreference. We are interested not only in the coreference patterns but in the overall discourse behavior of Twitter conversations. To address this, in addition to the coreference relations, we also annotated coherence relations on the corpus we compiled. The corpus is available online in a newly developed form that allows for separating the tweets from their annotations.
This study consists of three empirical analyses in which we independently apply corpus-based, psycholinguistic, and computational approaches to the investigation of variation in coreference patterns in a complementary manner. (1) We first make a descriptive analysis of variation across genres through a corpus-based study. We investigate the linguistic aspects of nominal coreference in Twitter conversations and determine how this genre relates to other text genres in the spoken and written modes. In addition to variation across genres, differences between the spoken and written modes have been a focus of linguistic research since Woolbert [1922]. (2) To investigate whether the language mode alone has any effect on coreference patterns, we carry out a crowdsourced experiment and analyze the patterns in the same genre for both spoken and written modes. (3) Finally, we explore the potential of domain adaptation of automatic coreference resolution (ACR) for conversational Twitter data. To answer the question of how the genre of Twitter conversations relates to other genres in the spoken and written modes with respect to coreference patterns, we employ a state-of-the-art neural ACR model [Lee et al., 2018] to examine whether ACR on Twitter conversations benefits from mode-based separation in out-of-domain training data.
Individuals with aphasia vary in the speed and accuracy they perform sentence comprehension tasks. Previous results indicate that the performance patterns of individuals with aphasia vary between tasks (e.g., Caplan, DeDe, & Michaud, 2006; Caplan, Michaud, & Hufford, 2013a). Similarly, it has been found that the comprehension performance of individuals with aphasia varies between homogeneous test sentences within and between sessions (e.g., McNeil, Hageman, & Matthews, 2005). These studies ascribed the variability in the performance of individuals with aphasia to random noise. This conclusion would be in line with an influential theory on sentence comprehension in aphasia, the resource reduction hypothesis (Caplan, 2012). However, previous studies did not directly compare variability in language-impaired and language-unimpaired adults. Thus, it is still unclear how the variability in sentence comprehension differs between individuals with and without aphasia. Furthermore, the previous studies were exclusively carried out in English. Therefore, the findings on variability in sentence processing in English still need to be replicated in a different language.
This dissertation aims to give a systematic overview of the patterns of variability in sentence comprehension performance in aphasia in German and, based on this overview, to put the resource reduction hypothesis to the test. In order to reach the first aim, variability was considered on three different dimensions (persons, measures, and occasions) following the classification by Hultsch, Strauss, Hunter, and MacDonald (2011). At the dimension of persons, the thesis compared the performance of individuals with aphasia and language-unimpaired adults. At the dimension of measures, this work explored the performance across different sentence comprehension tasks (object manipulation, sentence-picture matching). Finally, at the dimension of occasions, this work compared the performance in each task between two test sessions. Several methods were combined to study variability to gain a large and diverse database. In addition to the offline comprehension tasks, the self-paced-listening paradigm and the visual world eye-tracking paradigm were used in this work.
The findings are in line with the previous results. As in the previous studies, variability in sentence comprehension in individuals with aphasia emerged between test sessions and between tasks. Additionally, it was possible to characterize the variability further using hierarchical Bayesian models. For individuals with aphasia, it was shown that both between-task and between-session variability are unsystematic. In contrast to that, language-unimpaired individuals exhibited systematic differences between measures and between sessions. However, these systematic differences occurred only in the offline tasks. Hence, variability in sentence comprehension differed between language-impaired and language-unimpaired adults, and this difference could be narrowed down to the offline measures.
Based on this overview of the patterns of variability, the resource reduction hypothesis was evaluated. According to the hypothesis, the variability in the performance of individuals with aphasia can be ascribed to random fluctuations in the resources available for sentence processing. Given that the performance of the individuals with aphasia varied unsystematically, the results support the resource reduction hypothesis. Furthermore, the thesis proposes that the differences in variability between language-impaired and language-unimpaired adults can also be explained by the resource reduction hypothesis. More specifically, it is suggested that the systematic changes in the performance of language-unimpaired adults are due to decreasing fluctuations in available processing resources. In parallel, the unsystematic variability in the performance of individuals with aphasia could be due to constant fluctuations in available processing resources. In conclusion, the systematic investigation of variability contributes to a better understanding of language processing in aphasia and thus enriches aphasia research.
Satisfaction and frustration of the needs for autonomy, competence, and relatedness, as assessed with the 24-item Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS), have been found to be crucial indicators of individuals’ psychological health. To increase the usability of this scale within a clinical and health services research context, we aimed to validate a German short version (12 items) of this scale in individuals with depression, including the examination of the relations from need frustration and need satisfaction to ill-being and quality of life (QOL). This cross-sectional study involved 344 adults diagnosed with depression (Mage (SD) = 47.5 years (11.1); 71.8% females). Confirmatory factor analyses indicated that the short version of the BPNSFS was not only reliable, but also fitted a six-factor structure (i.e., satisfaction/frustration × type of need). Subsequent structural equation modeling showed that need frustration related positively to indicators of ill-being and negatively to QOL. Surprisingly, need satisfaction did not predict differences in ill-being or QOL. The short form of the BPNSFS represents a practical instrument to measure need satisfaction and frustration in people with depression. Further, the results support recent evidence on the particular importance of need frustration in the prediction of psychopathology.
Air pollution has been a persistent global problem in the past several hundred years. While some industrialized nations have shown improvements in their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO’s 2021 update of their recommended air pollution limit values reflects the substantial impacts on human health of pollutants such as NO2 and O3, as recent epidemiological evidence suggests substantial long-term health impacts of air pollution even at low concentrations. Alongside developments in our understanding of air pollution's health impacts, the new technology of low-cost sensors (LCS) has been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of different applications, including in the development of higher resolution measurement networks, in source identification, and in measurements of air pollution exposure. While significant efforts have been made to accurately calibrate LCS with reference instrumentation and various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard procedures for calibration still do not exist and most proprietary calibration algorithms are black-box, inaccessible to the public. This work seeks to expand the knowledge base on LCS in several different ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability in measuring microscale changes in urban air pollution; 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on resultant changes in air quality.
In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating associated uncertainty. By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work pushed forward with the effort towards standardization of calibration methodologies. In addition, with the open-source publication of code and data for the seven-step methodology, advances were made towards reforming the largely black-box nature of LCS calibrations.
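The seven calibration steps can be sketched as a minimal pipeline. The synthetic co-location data, the feature set (raw signal, temperature, relative humidity), and the random-forest model below are illustrative assumptions, not the methodology's actual configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# hypothetical co-location data: raw sensor signal + meteorology vs. reference
rng = np.random.default_rng(0)
n = 500
truth = rng.uniform(5, 80, n)                      # reference NO2 (ppb)
temp = rng.uniform(0, 30, n)                       # temperature (deg C)
rh = rng.uniform(20, 90, n)                        # relative humidity (%)
raw = 0.8 * truth + 0.3 * temp - 0.05 * rh + rng.normal(0, 2, n)

X = np.column_stack([raw, temp, rh])

# steps 1-3 (assess raw data, clean, flag) reduced here to dropping
# physically impossible values
keep = (raw > 0) & (truth > 0)
X, y = X[keep], truth[keep]

# steps 4-5: model selection/tuning, then validation on a held-out split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# steps 6-7: export final predictions and the associated uncertainty
pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5       # report with predictions
```

Reporting the held-out RMSE (step 7) alongside every exported concentration is what makes calibrations comparable across deployments, which is the standardization argument made above.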
With a transparent and reliable calibration methodology established, LCS were then deployed in various street canyons between 2017 and 2020. Using two types of LCS, metal oxide (MOS) and electrochemical (EC), their performance in capturing expected patterns of urban NO2 and O3 pollution was evaluated. Results showed that calibrated concentrations from MOS and EC sensors matched general diurnal patterns in NO2 and O3 pollution measured using reference instruments. While MOS proved to be unreliable for discerning differences among measured locations within the urban environment, the concentrations measured with calibrated EC sensors matched expectations from modelling studies on NO2 and O3 pollution distribution in street canyons. As such, it was concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments.
To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second surrounded the temporary implementation of a community space on Böckhstrasse, and the last focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess changes in air quality resulting from these policies. Results from the Kottbusser Damm experiment showed that the bike lane reduced the NO2 concentrations cyclists were exposed to by 22 ± 19%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening the air quality on side streets. These valuable results were communicated swiftly to partners in the city administration responsible for evaluating the policies’ success and future, highlighting the ability of LCS to provide policy-relevant results.
As a new technology, much is still to be learned about LCS and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first of its kind to connect LCS measurements directly with mobility policies to understand their influences on local air quality, resulting in policy-relevant findings valuable for decision-makers. It serves as an example of the potential for LCS to expand our understanding of air pollution at various scales, as well as their ability to serve as valuable tools in transdisciplinary research.
The main aim of this article is to explore how learning analytics and synchronous collaboration could improve course completion and learner outcomes in MOOCs, which traditionally have been delivered asynchronously. Based on our experience with developing BigBlueButton, a virtual classroom platform that provides educators with live analytics, this paper explores three scenarios with business focused MOOCs to improve outcomes and strengthen learned skills.
To ensure high-quality evidence-based research in the field of exercise sciences, it is often necessary for various institutions to collaborate over long distances and internationally. Here, and not only with regard to the recent COVID-19 pandemic, digital means provide new options for remote scientific exchange. This thesis analyses and tests digital opportunities to support the dissemination of knowledge and the instruction of investigators in defined examination protocols in an international multi-center context.
The project consisted of three studies. The first study, a questionnaire-based survey, aimed at learning about students’ opinions on and preferences for digital learning and social media at sport science faculties of two universities each in Germany, the UK, and Italy. Based on these findings, in a second study, an examination video of an ultrasound determination of the intima-media thickness and diameter of an artery was distributed via a messenger app to doctors and nursing personnel acting as simulated investigators, and the efficacy of the test setting was analysed. Finally, a third study integrated an augmented reality device for direct remote supervision of the same ultrasound examinations in a long-distance international setting, first with international experts from the fields of engineering and sports science, and later with remote supervision of augmented-reality-equipped physicians performing a given task.
The first study, with 229 participating students, revealed a high preference for YouTube for receiving video-based knowledge, as well as a preference for WhatsApp and Facebook for peer-to-peer contacts for learning purposes and for exchanging and discussing knowledge. In the second study, video-based instructions sent via WhatsApp messenger showed high approval of the setup in both study groups, one with doctors familiar with the use of ultrasound technology and one with nursing staff who were not familiar with the device, with similar results in overall time of performance and in the measurements of the femoral arteries. In the third and final study, experts from different continents were connected remotely to the examination site via an augmented reality device with good transmission quality. The remote supervision of the doctors’ examinations produced a good inter-rater correlation. Experiences with the augmented-reality-based setting were rated as highly positive by the participants. Potential benefits of this technique were seen in the fields of education, movement analysis, and supervision.
In conclusion, the findings of this thesis suggest modern and addressee-centred digital solutions to enhance potential investigators' understanding of given examination techniques in exercise science research projects. Head-mounted augmented reality devices are of special value and may be recommended for collaborative research projects with research questions based on physical examinations. While the established setting should be further investigated in prospective clinical studies, the digital competencies of future researchers should already be enhanced during the early stages of their education.
The amount of data stored in databases and the complexity of database workloads are ever-increasing. Database management systems (DBMSs) offer many configuration options, such as index creation or unique constraints, which must be adapted to the specific instance to efficiently process large volumes of data. Currently, such database optimization is complicated, manual work performed by highly skilled database administrators (DBAs). In cloud scenarios, manual database optimization even becomes infeasible: it exceeds the abilities of the best DBAs due to the enormous number of deployed DBMS instances (some providers maintain millions of instances), missing domain knowledge resulting from data privacy requirements, and the complexity of the configuration tasks.
Therefore, we investigate how to automate the configuration of DBMSs efficiently with the help of unsupervised database optimization. While there are numerous configuration options, in this thesis, we focus on automatic index selection and the use of data dependencies, such as functional dependencies, for query optimization. Both aspects have an extensive performance impact and complement each other by approaching unsupervised database optimization from different perspectives.
Our contributions are as follows: (1) we survey automated state-of-the-art index selection algorithms regarding various criteria, e.g., their support for index interaction. We contribute an extensible platform for evaluating the performance of such algorithms with industry-standard datasets and workloads. The platform is well-received by the community and has led to follow-up research. With our platform, we derive the strengths and weaknesses of the investigated algorithms. We conclude that existing solutions often have scalability issues and cannot quickly determine (near-)optimal solutions for large problem instances. (2) To overcome these limitations, we present two new algorithms. Extend determines (near-)optimal solutions with an iterative heuristic. It identifies the best index configurations for the evaluated benchmarks. Its selection runtimes are up to 10 times lower compared with other near-optimal approaches. SWIRL is based on reinforcement learning and delivers solutions instantly. These solutions perform within 3 % of the optimal ones. Extend and SWIRL are available as open-source implementations.
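The iterative-heuristic idea behind greedy index selection can be sketched as follows. This is an illustrative assumption, not the thesis's Extend implementation: the function names, the benefit-per-storage selection criterion, and the toy cost model are all hypothetical.

```python
# Hypothetical sketch of an iterative index-selection heuristic: repeatedly
# add the candidate index with the best benefit-per-storage ratio until the
# storage budget is exhausted or no index improves the workload cost.

def select_indexes(candidates, workload_cost, budget):
    """candidates: dict index -> size; workload_cost(set_of_indexes) -> cost."""
    chosen = set()
    used = 0
    current = workload_cost(chosen)
    while True:
        best, best_ratio = None, 0.0
        for idx, size in candidates.items():
            if idx in chosen or used + size > budget:
                continue
            # Re-evaluating the cost per candidate captures index interaction.
            benefit = current - workload_cost(chosen | {idx})
            ratio = benefit / size
            if ratio > best_ratio:
                best, best_ratio = idx, ratio
        if best is None:
            break
        chosen.add(best)
        used += candidates[best]
        current = workload_cost(chosen)
    return chosen
```

Because the workload cost is re-evaluated for every candidate in every round, such a heuristic can account for index interaction, one of the comparison criteria in the survey above, at the price of additional cost-model calls.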
(3) Our index selection efforts are complemented by a mechanism that analyzes workloads to determine data dependencies for query optimization in an unsupervised fashion. We describe and classify 58 query optimization techniques based on functional, order, and inclusion dependencies as well as on unique column combinations. The unsupervised mechanism and three optimization techniques are implemented in our open-source research DBMS Hyrise. Our approach reduces the Join Order Benchmark’s runtime by 26 % and accelerates some TPC-DS queries by up to 58 times.
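A minimal sketch of the kind of dependency validation such a mechanism relies on (an illustration, not Hyrise code): if a functional dependency A → B holds, an optimizer may, for example, prune B from a GROUP BY that already groups by A.

```python
# Validate a candidate functional dependency lhs -> rhs on a table given as a
# list of row dicts: the dependency holds iff every lhs value combination maps
# to exactly one rhs value combination.

def holds_fd(rows, lhs, rhs):
    seen = {}
    for row in rows:
        key = tuple(row[c] for c in lhs)
        val = tuple(row[c] for c in rhs)
        if seen.setdefault(key, val) != val:
            return False  # same determinant, different dependent -> violated
    return True
```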
Additionally, we have developed a cockpit for unsupervised database optimization that allows interactive experiments to build confidence in such automated techniques. In summary, our contributions improve the performance of DBMSs, support DBAs in their work, and enable them to contribute their time to other, less arduous tasks.
Within the context of United Nations (UN) environmental institutions, it has become apparent that intergovernmental responses alone have been insufficient for dealing with pressing transboundary environmental problems. Diverging economic and political interests, as well as broader changes in power dynamics and norms within global (environmental) governance, have resulted in negotiation and implementation efforts by UN member states becoming stuck in institutional gridlock and inertia. These developments have sparked a renewed debate among scholars and practitioners about an imminent crisis of multilateralism, accompanied by calls for reforming UN environmental institutions. However, with the rise of transnational actors and institutions, states are not the only relevant actors in global environmental governance. In fact, the fragmented architectures of different policy domains are populated by a hybrid mix of state and non-state actors, as well as intergovernmental and transnational institutions. Therefore, coping with the complex challenges posed by severe and ecologically interdependent transboundary environmental problems requires global cooperation and careful management from actors beyond national governments.
This thesis investigates the interactions of three intergovernmental UN treaty secretariats in global environmental governance. These are the secretariats of the United Nations Framework Convention on Climate Change, the Convention on Biological Diversity, and the United Nations Convention to Combat Desertification. While previous research has acknowledged the increasing autonomy and influence of treaty secretariats in global policy-making, little attention has been paid to their strategic interactions with non-state actors, such as non-governmental organizations, civil society actors, businesses, and transnational institutions and networks, or their coordination with other UN agencies. Through qualitative case-study research, this thesis explores the means and mechanisms of these interactions and investigates their consequences for enhancing the effectiveness and coherence of institutional responses to underlying and interdependent environmental issues.
Following a new institutionalist ontology, the conceptual and theoretical framework of this study draws on global governance research, regime theory, and scholarship on international bureaucracies. From an actor-centered perspective on institutional interplay, the thesis employs concepts such as orchestration and interplay management to assess the interactions of and among treaty secretariats. The research methodology involves structured, focused comparison, and process-tracing techniques to analyze empirical data from diverse sources, including official documents, various secondary materials, semi-structured interviews with secretariat staff and policymakers, and observations at intergovernmental conferences.
The main findings of this research demonstrate that secretariats employ tailored orchestration styles to manage or bypass national governments, thereby raising global ambition levels for addressing transboundary environmental problems. Additionally, they engage in joint interplay management to facilitate information sharing, strategize activities, and mobilize relevant actors, thereby improving coherence across UN environmental institutions. Treaty secretariats play a substantial role in influencing discourses and knowledge exchange with a wide range of actors. However, they face barriers, such as limited resources, mandates, varying leadership priorities, and degrees of politicization within institutional processes, which may hinder their impact. Nevertheless, the secretariats, together with non-state actors, have made progress in advancing norm-building processes, integrated policy-making, capacity building, and implementation efforts within and across framework conventions. Moreover, they utilize innovative means of coordination with actors beyond national governments, such as data-driven governance, to provide policy-relevant information for achieving overarching governance targets.
Importantly, this research highlights the growing interactions between treaty secretariats and non-state actors, which not only shape policy outcomes but also have broader implications for the polity and politics of international institutions. The findings offer opportunities for rethinking collective agency and actor dynamics within UN entities, addressing gaps in institutionalist theory concerning the interaction of actors in inter-institutional spaces. Furthermore, the study addresses emerging challenges and trends in global environmental governance that are pertinent to future policy-making. These include reflections for the debate on reforming international institutions, the role of emerging powers in a changing international world order, and the convergence of public and private authority through new alliance-building and a division of labor between international bureaucracies and non-state actors in global environmental governance.
The near-Earth space environment is a highly complex system comprised of several regions and particle populations hazardous to satellite operations. The trapped particles in the radiation belts and ring current can cause significant damage to satellites during space weather events, due to deep dielectric and surface charging. Closer to Earth is another important region, the ionosphere, which delays the propagation of radio signals and can adversely affect navigation and positioning. In response to fluctuations in solar and geomagnetic activity, both the inner-magnetospheric and ionospheric populations can undergo drastic and sudden changes within minutes to hours, which creates a challenge for predicting their behavior. Given the increasing reliance of our society on satellite technology, improving our understanding and modeling of these populations is a matter of paramount importance.
In recent years, numerous spacecraft have been launched to study the dynamics of particle populations in the near-Earth space, transforming it into a data-rich environment. To extract valuable insights from the abundance of available observations, it is crucial to employ advanced modeling techniques, and machine learning methods are among the most powerful approaches available. This dissertation employs long-term satellite observations to analyze the processes that drive particle dynamics, and builds interdisciplinary links between space physics and machine learning by developing new state-of-the-art models of the inner-magnetospheric and ionospheric particle dynamics.
The first aim of this thesis is to investigate the behavior of electrons in Earth's radiation belts and ring current. Using ~18 years of electron flux observations from the Global Positioning System (GPS), we developed the first machine learning model of hundreds-of-keV electron flux at Medium Earth Orbit (MEO) that is driven solely by solar wind and geomagnetic indices and does not require auxiliary flux measurements as inputs. We then proceeded to analyze the directional distributions of electrons, and for the first time, used Fourier sine series to fit electron pitch angle distributions (PADs) in Earth's inner magnetosphere. We performed a superposed epoch analysis of 129 geomagnetic storms during the Van Allen Probes era and demonstrated that electron PADs have a strong energy-dependent response to geomagnetic activity. Additionally, we showed that the solar wind dynamic pressure could be used as a good predictor of the PAD dynamics. Using the observed dependencies, we created the first PAD model with a continuous dependence on L, magnetic local time (MLT) and activity, and developed two techniques to reconstruct near-equatorial electron flux observations from low-PA data using this model.
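The Fourier sine-series fit of pitch angle distributions can be sketched as a linear least-squares problem. The code below is an assumption-based illustration, not the dissertation's implementation: the number of harmonics and the synthetic data are hypothetical.

```python
import numpy as np

# Fit a pitch angle distribution (PAD) with a truncated Fourier sine series
# f(alpha) = sum_n a_n * sin(n * alpha), which is linear in the coefficients
# and therefore solvable by ordinary least squares.

def fit_pad(alpha, flux, n_terms=3):
    """Least-squares sine-series coefficients for flux vs. pitch angle alpha."""
    basis = np.column_stack([np.sin(n * alpha) for n in range(1, n_terms + 1)])
    coeffs, *_ = np.linalg.lstsq(basis, flux, rcond=None)
    return coeffs

def eval_pad(alpha, coeffs):
    """Evaluate the fitted series at pitch angles alpha."""
    return sum(a * np.sin((n + 1) * alpha) for n, a in enumerate(coeffs))
```

A low-order series of this form is attractive here because the fitted coefficients themselves become compact, activity-dependent descriptors of the PAD shape that a model can parameterize in L, MLT and geomagnetic activity.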
The second objective of this thesis is to develop a novel model of the topside ionosphere. To achieve this goal, we collected observations from five of the most widely used ionospheric missions and intercalibrated these data sets. This allowed us to use these data jointly for model development, validation, and comparison with other existing empirical models. We demonstrated, for the first time, that ion density observations by Swarm Langmuir Probes exhibit overestimation (up to ~40-50%) at low and mid-latitudes on the night side, and suggested that the influence of light ions could be a potential cause of this overestimation. To develop the topside model, we used 19 years of radio occultation (RO) electron density profiles, which were fitted with a Chapman function with a linear dependence of scale height on altitude. This approximation yields 4 parameters, namely the peak density and height of the F2-layer and the slope and intercept of the linear scale height trend, which were modeled using feedforward neural networks (NNs). The model was extensively validated against both RO and in-situ observations and was found to outperform the International Reference Ionosphere (IRI) model by up to an order of magnitude. Our analysis showed that the most substantial deviations of the IRI model from the data occur at altitudes of 100-200 km above the F2-layer peak. The developed NN-based ionospheric model reproduces the effects of various physical mechanisms observed in the topside ionosphere and provides highly accurate electron density predictions.
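The four-parameter profile described above can be sketched as follows. This is a hedged illustration of an alpha-Chapman profile with a linearly varying scale height; the exact functional form and parameter conventions used in the thesis may differ.

```python
import numpy as np

# Alpha-Chapman electron density profile with a scale height that varies
# linearly with altitude, H(h) = h0 + k * (h - hmf2). The four parameters
# (peak density nmf2, peak height hmf2, and the scale-height intercept h0
# and slope k) correspond to the quantities modeled by the neural networks.

def chapman_linear(h, nmf2, hmf2, h0, k):
    H = h0 + k * (h - hmf2)                # linear scale height [km]
    z = (h - hmf2) / H                      # reduced altitude
    return nmf2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))
```

At the F2 peak (h = hmF2) the reduced altitude z is zero, so the profile returns exactly NmF2, and the density decays above the peak at a rate controlled by the scale-height slope.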
This dissertation provides an extensive study of geospace dynamics, and the main results of this work contribute to the improvement of models of plasma populations in the near-Earth space environment.
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts; importantly, this includes allowing developers to live with temporary inconsistencies. In the case of model-driven software engineering, employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
Transferability of data-driven models to predict urban pluvial flood water depth in Berlin, Germany
(2023)
Data-driven models have recently been suggested to surrogate computationally expensive hydrodynamic models for mapping flood hazards. However, most studies focused on developing models for the same area or the same precipitation event; it is thus not obvious how transferable the models are in space. This study evaluates the performance of a convolutional neural network (CNN) based on the U-Net architecture and the random forest (RF) algorithm to predict flood water depth, the models' transferability in space and performance improvement using transfer learning techniques. We used three study areas in Berlin to train, validate and test the models. The results showed that (1) the RF models outperformed the CNN models for predictions within the training domain, presumably at the cost of overfitting; (2) the CNN models had significantly higher potential than the RF models to generalize beyond the training domain; and (3) the CNN models could benefit more from transfer learning techniques to boost their performance outside training domains than the RF models.
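The transfer-learning idea in finding (3) can be sketched with a toy model. This is a deliberately simplified illustration, not the study's U-Net: the "encoder" is a single frozen random layer and the "head" is refit by least squares on target-domain samples; all names and data are hypothetical.

```python
import numpy as np

# Toy transfer-learning sketch: keep the feature-extracting layer learned on
# the source domain frozen, and refit only the output layer on a small sample
# from the target domain.

rng = np.random.default_rng(0)

def features(X, W1):
    """Frozen 'encoder': a fixed nonlinear feature map."""
    return np.tanh(X @ W1)

def fine_tune_head(X_target, y_target, W1):
    """Refit only the linear output head on target-domain data."""
    Phi = features(X_target, W1)
    w2, *_ = np.linalg.lstsq(Phi, y_target, rcond=None)
    return w2

def predict(X, W1, w2):
    return features(X, W1) @ w2
```

Freezing the encoder keeps the source-domain representation while the cheap head refit adapts predictions to the new domain, which mirrors why a CNN trained in one catchment can be boosted with few target-domain samples.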
Traditionally, mental disorders have been identified based on specific symptoms and standardized diagnostic systems such as the DSM-5 and ICD-10. However, these symptom-based definitions may only partially represent neurobiological and behavioral research findings, which could impede the development of targeted treatments. A transdiagnostic approach to mental health research, such as the Research Domain Criteria (RDoC) approach, maps resilience and broader aspects of mental health to associated components. By investigating mental disorders in a transnosological way, we can better understand disease patterns and their distinguishing and common factors, leading to more precise prevention and treatment options.
Therefore, this dissertation focuses on (1) the latent domain structure of the RDoC approach in a transnosological sample including healthy controls, (2) its domain associations to disease severity in patients with anxiety and depressive disorders, and (3) an overview of the scientific results found regarding Positive (PVS) and Negative Valence Systems (NVS) associated with mood and anxiety disorders.
The following main results were found: First, the latent RDoC domain structure for PVS and NVS, Cognitive Systems (CS), and Social Processes (SP) could be validated using self-report and behavioral measures in a transnosological sample. Second, we found transdiagnostic and disease-specific associations between those four domains and disease severity in patients with depressive and anxiety disorders. Third, the scoping review showed a sizable amount of RDoC research conducted on PVS and NVS in mood and anxiety disorders, with research gaps for both domains and specific conditions.
In conclusion, the research presented in this dissertation highlights the potential of the transnosological RDoC framework approach in improving our understanding of mental disorders. By exploring the latent RDoC structure and associations with disease severity and disease-specific and transnosological associations for anxiety and depressive disorders, this research provides valuable insights into the full spectrum of psychological functioning. Additionally, this dissertation highlights the need for further research in this area, identifying both RDoC indicators and research gaps. Overall, this dissertation represents an important contribution to the ongoing efforts to improve our understanding and the treatment of mental disorders, particularly within the commonly comorbid disease spectrum of mood and anxiety disorders.
Selenium (Se) is an essential trace element that is ubiquitously present in the environment in small concentrations. Essential functions of Se in the human body are manifested through a wide range of proteins containing selenocysteine as their active center. Such proteins, called selenoproteins, are found in multiple physiological processes such as antioxidative defense and the regulation of thyroid hormone functions. Se deficiency is therefore known to cause a broad spectrum of physiological impairments, especially in endemic regions with low Se content. Nevertheless, despite being an essential trace element, Se can exhibit toxic effects if its intake exceeds tolerable levels. The range between deficiency and overexposure represents the optimal Se supply; however, this range is narrower than for any other essential trace element. Together with significantly varying Se concentrations in soil and the presence of specific bioaccumulation factors, this makes the assessment of Se epidemiological status noticeably difficult. While Se acts in the body through multiple selenoproteins, its intake occurs mainly in the form of small organic or inorganic molecular species. Thus, Se exposure depends not only on daily intake but also on the respective chemical form in which it is present.
The essential functions of selenium have been known for a long time, and its primary forms in different food sources have been described. Nevertheless, analytical capabilities for a comprehensive investigation of Se species and their derivatives have only been introduced in recent decades. A new Se compound was identified in 2010 in the blood and tissues of bluefin tuna. It was named selenoneine (SeN), since it is an isologue of the naturally occurring antioxidant ergothioneine (ET) in which Se replaces sulfur. In the following years, SeN was identified in a number of edible fish species and attracted attention as a new dietary Se source and a potentially strong antioxidant. Studies in populations whose diet relies largely on fish revealed that SeN represents the main non-protein-bound Se pool in their blood. First studies conducted with enriched fish extracts already demonstrated the high antioxidative potential of SeN and its possible function in the detoxification of methylmercury in fish. Cell culture studies demonstrated that SeN can utilize the same transporter as ergothioneine, and a SeN metabolite was found in human urine.
Until recently, studies on SeN properties were severely limited due to the lack of ways to obtain the pure compound. A prerequisite for this work was a successful approach to SeN synthesis at the University of Graz utilizing genetically modified yeasts. In the current study, using HepG2 liver carcinoma cells, it was demonstrated that SeN does not cause toxic effects in hepatocytes up to a concentration of 100 μM. Uptake experiments showed that SeN is not bioavailable to the liver cells used.
In the next part, a blood-brain barrier (BBB) model based on capillary endothelial cells from the porcine brain was used to describe the possible transfer of SeN into the central nervous system (CNS). The assessment of toxicity markers in these endothelial cells and the monitoring of barrier conditions during transfer experiments demonstrated the absence of toxic effects of SeN on the BBB endothelium up to a concentration of 100 μM. Transfer data for SeN showed slow but substantial transfer. A statistically significant increase was observed 48 hours after SeN incubation from the blood-facing side of the barrier; however, an increase in Se content was already clearly visible after 6 hours of incubation with 1 μM of SeN. While the transfer rate of SeN after application of a 0.1 μM dose was very close to that for 1 μM, incubation with 10 μM of SeN resulted in a significantly decreased transfer rate. Double-sided application of SeN caused no side-specific transfer, suggesting a passive diffusion mechanism of SeN across the BBB. These data are in accordance with animal studies, in which ET accumulation was observed in the rat brain even though the rat BBB lacks the primary ET transporter, OCTN1. Investigation of capillary endothelial cell monolayers after incubation with SeN and reference selenium compounds showed no significant increase in intracellular selenium concentration. Species-specific Se measurements in medium samples from apical and basolateral compartments, as well as in cell lysates, showed no SeN metabolization. Therefore, it can be concluded that SeN may reach the brain without significant transformation.
As the third part of this work, the assessment of SeN antioxidant properties was performed in Caco-2 human colorectal adenocarcinoma cells. Previous studies demonstrated that the intestinal epithelium is able to actively transport SeN from the intestinal lumen to the blood side and to accumulate SeN. Further investigation within the current work showed a much higher antioxidant potential of SeN compared to ET. The radical scavenging activity after incubation with SeN was close to that observed for selenite and selenomethionine; however, the effect of SeN on the viability of intestinal cells under oxidative conditions was close to that caused by ET. To answer the question of whether SeN can serve as a dietary Se source and induce the activity of selenoproteins, the activity of glutathione peroxidase (GPx) and the secretion of selenoprotein P (SelenoP) were additionally measured in Caco-2 cells. As expected, the reference selenium compounds selenite and selenomethionine caused efficient induction of GPx activity. In contrast, SeN had no effect on GPx activity. To examine the possibility of SeN being embedded into the selenoproteome, SelenoP was measured in the culture medium. Even though Caco-2 cells effectively take up SeN in quantities much higher than selenite or selenomethionine, no secretion of SelenoP was observed after SeN incubation.
Summarizing, we can conclude that SeN can hardly serve as a Se source for selenoprotein synthesis. However, SeN exhibits strong antioxidative properties, which emerge when the sulfur in ET is replaced by Se. Therefore, SeN is of particular interest for research not as part of Se metabolism, but as an important endemic dietary antioxidant.
Casualties and damages from urban pluvial flooding are increasing. Triggered by short, localized, and intense rainfall events, urban pluvial floods can occur anywhere, even in areas without a history of flooding, and have relatively small temporal and spatial scales. Although cumulative losses from urban pluvial floods are comparable to those from other flood types, most flood risk management and mitigation strategies focus on fluvial and coastal flooding. Numerical-physical-hydrodynamic models are considered the best tool to represent the complex nature of urban pluvial floods; however, they are computationally expensive and time-consuming, which makes large-scale analysis and operational forecasting prohibitive. Therefore, it is crucial to evaluate and benchmark the performance of alternative methods.
The findings of this cumulative thesis are presented in three research articles. The first study evaluates two topographic-based methods to map urban pluvial flooding, fill–spill–merge (FSM) and the topographic wetness index (TWI), by comparing them against a sophisticated hydrodynamic model. The FSM method identifies flood-prone areas within topographic depressions, while the TWI method employs maximum likelihood estimation to calibrate a TWI threshold (τ) based on inundation maps from the 2D hydrodynamic model. The results show that the FSM method outperforms the TWI method. The study then highlights the advantages and limitations of both methods.
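The TWI threshold calibration can be sketched as a simple search over candidate thresholds. This is an illustrative assumption, not the study's code: a plain accuracy criterion stands in for the maximum likelihood estimation, and the data are synthetic.

```python
import numpy as np

# Calibrate a topographic wetness index threshold tau: cells with TWI >= tau
# are classified as flooded, and tau is chosen to maximize agreement with a
# reference inundation map from the hydrodynamic model.

def calibrate_tau(twi, flooded_ref, candidates):
    """Return (best threshold, agreement score) over candidate thresholds."""
    best_tau, best_score = None, -1.0
    for tau in candidates:
        pred = twi >= tau
        score = np.mean(pred == flooded_ref)   # fraction of matching cells
        if score > best_score:
            best_tau, best_score = tau, score
    return best_tau, best_score
```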
Data-driven models provide a promising alternative to computationally expensive hydrodynamic models. However, the literature lacks benchmarking studies that evaluate the different models' performance, advantages and limitations, and model transferability in space is a crucial open problem. Most studies focus on river flooding, likely due to the relative availability of flow and rain gauge records for training and validation. Furthermore, they treat these models as black boxes. The second study uses a flood inventory for the city of Berlin and 11 predictive features which potentially indicate an increased pluvial flooding hazard to map urban pluvial flood susceptibility using a convolutional neural network (CNN), an artificial neural network (ANN) and the benchmark machine learning models random forest (RF) and support vector machine (SVM). I investigate the influence of spatial resolution on the implemented models, the models' transferability in space and the importance of the predictive features. The results show that all models perform well and that the RF models are superior to the other models both within and outside the training domain. The models developed using fine spatial resolution (2 and 5 m) could better identify flood-prone areas. Finally, the results indicate that aspect is the most important predictive feature for the CNN models, and altitude for the other models.
While flood susceptibility maps identify flood-prone areas, they do not represent flood variables such as velocity and depth which are necessary for effective flood risk management. To address this, the third study investigates data-driven models' transferability to predict urban pluvial floodwater depth and the models' ability to enhance their predictions using transfer learning techniques. It compares the performance of RF (the best-performing model in the previous study) and CNN models using 12 predictive features and output from a hydrodynamic model. The findings in the third study suggest that while CNN models tend to generalise and smooth the target function on the training dataset, RF models suffer from overfitting. Hence, RF models are superior for predictions inside the training domains but fail outside them while CNN models could control the relative loss in performance outside the training domains. Finally, the CNN models benefit more from transfer learning techniques than RF models, boosting their performance outside training domains.
In conclusion, this thesis has evaluated both topographic-based methods and data-driven models to map urban pluvial flooding. However, further studies are crucial to develop methods that completely overcome the limitations of 2D hydrodynamic models.
Towards unifying approaches in exposure modelling for scenario-based multi-hazard risk assessments
(2023)
This cumulative thesis presents a stepwise investigation of the exposure modelling process for risk assessment due to natural hazards, highlighting its importance, which has received little discussion to date, and the associated uncertainties. Although “exposure” refers to the very broad concept of everything (and everyone) that is susceptible to damage, in this thesis it is narrowed down to the modelling of large-area residential building stocks. Classical building exposure models for risk applications have been constructed relying fully on unverified expert elicitation over data sources (e.g., outdated census datasets), and have hence been implicitly assumed to be static in time and space. Moreover, their spatial representation has typically been simplified by geographically aggregating the inferred composition onto coarse administrative units whose boundaries do not always capture the spatial variability of the hazard intensities required for accurate risk assessments. These two shortcomings and the related epistemic uncertainties embedded within exposure models are tackled in the first three chapters of the thesis. The exposure composition of large-area residential building stocks is studied within the scope of scenario-based earthquake loss models. Then, optimal spatial aggregation areas of exposure models for various hazard-related vulnerabilities are proposed, focusing on ground-shaking and tsunami risks. Subsequently, having gained experience in the study of the composition and spatial aggregation of exposure for various hazards, this thesis moves towards a multi-hazard context, addressing cumulative damage and losses due to consecutive hazard scenarios. This is achieved by proposing a novel method that accounts for pre-existing damage descriptions of building portfolios as a key input for scenario-based multi-risk assessment.
Finally, this thesis shows how the integration of the aforementioned elements can be used in risk communication practices. This is done through a modular architecture based on the exploration of quantitative risk scenarios that are contrasted with social risk perceptions of the directly exposed communities to natural hazards.
In Chapter 1, a Bayesian approach is proposed to update the prior assumptions on such composition (i.e., proportions per building typology). This is achieved by integrating high-quality real observations and thereby capturing the intrinsic probabilistic nature of the exposure model. Such observations are accounted for as real evidence from both field inspections (Chapter 2) and freely available data sources used to update existing (but outdated) exposure models (Chapter 3). In these two chapters, earthquake scenarios with parametrised ground motion fields were used transversally to investigate, through sensitivity analyses, the role of the epistemic uncertainties related to the exposure composition. Parametrised scenarios of seismic ground shaking were the hazard input utilised to study the physical vulnerability of building portfolios. The second issue, the spatial aggregation of building exposure models, was investigated within two decoupled vulnerability contexts: due to seismic ground shaking, through the integration of remote sensing techniques (Chapter 3); and within a multi-hazard context, by integrating the occurrence of associated tsunamis (Chapter 4). Therein, a careful selection of the spatial aggregation entities, pursuing computational efficiency and accuracy in the risk estimates due to such independent hazard scenarios (i.e., earthquake and tsunami), is discussed. The physical vulnerability of large-area building portfolios due to tsunamis is thus considered through two main frames, which are then contrasted: considering and disregarding the interaction at the vulnerability level, through consecutive and decoupled hazard scenarios respectively.
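The Bayesian updating of typology proportions described for Chapter 1 can be sketched with a conjugate Dirichlet-multinomial update. This is a hedged illustration of the general idea, not the thesis's implementation; the pseudo-count encoding and the numbers are assumptions.

```python
import numpy as np

# Treat the proportions per building typology as a Dirichlet-distributed
# vector: encode the expert-based prior model as pseudo-counts, add the
# typology counts observed in field surveys, and return the posterior mean
# proportions.

def update_exposure(prior_counts, observed_counts):
    posterior = np.asarray(prior_counts, float) + np.asarray(observed_counts, float)
    return posterior / posterior.sum()     # posterior mean of the Dirichlet
```

With this conjugate form, a weakly informative prior (small pseudo-counts) lets even modest field campaigns shift the composition substantially, while a confident prior requires more evidence to move, which is exactly the trade-off a sensitivity analysis over exposure composition can probe.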
In contrast to Chapter 4, where no cumulative damages are addressed, Chapter 5 integrates data and approaches generated in former sections into a novel modular method to study the likely interactions at the vulnerability level on building portfolios. This is tested by evaluating cumulative damages and losses after earthquakes of increasing magnitude followed by their respective tsunamis. The novel method is grounded in the possibility of re-using existing fragility models within a probabilistic framework. The same approach is followed in Chapter 6 to forecast the cumulative damages likely to be experienced by a building stock located in a volcanic multi-hazard setting (ash-fall and lahars). In that section, special focus is placed on the manner in which the forecasted loss metrics are communicated to locally exposed communities. Co-existing quantitative scientific approaches (i.e., comprehensive exposure models; explorative risk scenarios involving single and multiple hazards) and semi-qualitative social risk perception (i.e., the level of understanding that the exposed communities have of their own risk) were jointly considered. This integration ultimately allowed the thesis to also contribute to preparedness, science dissemination at the local level, and technology-transfer initiatives.
Finally, a synthesis of this thesis along with some perspectives for improvement and future work are presented.
Using time-resolved x-ray diffraction, we demonstrate the manipulation of the picosecond strain response of a metallic heterostructure consisting of a dysprosium (Dy) transducer and a niobium (Nb) detection layer by an external magnetic field. We utilize the first-order ferromagnetic–antiferromagnetic phase transition of the Dy layer, which provides an additional large contractive stress upon laser excitation compared to its zero-field response. This enhances the laser-induced contraction of the transducer and changes the shape of the picosecond strain pulses driven in Dy and detected within the buried Nb layer. Based on our experiment with rare-earth metals, we discuss the properties required for functional transducers, which may allow for novel field control of the emitted picosecond strain pulses.
The Earth’s shallow layers lie at the interplay of many physical processes: some are driven by atmospheric forcing (precipitation, temperature...) whereas others take their origins at depth, for instance ground shaking due to seismic activity. These forcings cause the subsurface to continuously change its mechanical properties, thereby modulating the strength of the surface geomaterials and the hydrological fluxes. Because our societies settle on and rely on the layers hosting these time-dependent properties, constraining the hydro-mechanical dynamics of the shallow subsurface is crucial for our future geographical development. One way to investigate the ever-changing physical conditions under our feet is through the inference of seismic velocity changes from ambient noise, a technique called seismic interferometry. In this dissertation, I use this method to monitor the evolution of groundwater storage and of damage induced by earthquakes. Two research lines are investigated, comprising the key controls of groundwater recharge in steep landscapes and the predictability and duration of the transient physical properties due to earthquake ground shaking. These two types of dynamics modulate each other and influence the velocity changes in ways that are challenging to disentangle; a part of my doctoral research also addresses this interaction. Seismic data from a range of field settings spanning several climatic conditions (wet to arid climate) in various seismic-prone areas are considered. I constrain the obtained seismic velocity time series using simple physical models, independent datasets, geophysical tools and nonlinear analysis. Additionally, a methodological development is proposed to improve the time resolution of passive seismic monitoring.
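The inference of velocity changes mentioned here is commonly implemented via the "stretching" technique of coda-wave interferometry: a relative velocity change dv/v manifests as a uniform stretching of the lag-time axis of noise correlation functions. Below is a minimal synthetic sketch of that idea; all waveforms and parameter values are invented, while real workflows operate on ambient-noise correlograms:

```python
import numpy as np

# Minimal synthetic sketch of the "stretching" technique: a relative
# seismic velocity change dv/v appears as a uniform stretching of the
# lag-time axis of noise correlation functions. Everything below is
# invented for illustration.

def measure_dvv(reference, current, t, trial_dvv):
    """Grid-search the dv/v value whose de-stretching best aligns
    `current` with `reference` (higher correlation = better alignment)."""
    best_dvv, best_cc = 0.0, -np.inf
    for dvv in trial_dvv:
        # A velocity increase shortens travel times, so we sample the
        # current waveform at compressed times t * (1 - dvv) to undo it.
        destretched = np.interp(t * (1.0 - dvv), t, current)
        cc = np.corrcoef(reference, destretched)[0, 1]
        if cc > best_cc:
            best_dvv, best_cc = dvv, cc
    return best_dvv, best_cc

t = np.linspace(0.0, 10.0, 2001)                       # lag time (s)
ref = np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.3 * t)   # synthetic coda
true_dvv = 0.01                                        # 1% velocity increase
cur = np.sin(2 * np.pi * 2.0 * t * (1 + true_dvv)) * np.exp(-0.3 * t)

grid = np.linspace(-0.03, 0.03, 121)
est_dvv, cc = measure_dvv(ref, cur, t, grid)
```

Real applications average many daily correlograms and work in several frequency bands; the grid search above is only the conceptual core of the measurement.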
The impact of individual differences in cognitive skills and socioeconomic background on key educational, occupational, and health outcomes, as well as the mechanisms underlying inequalities in these outcomes across the lifespan, are two central questions in lifespan psychology. The contextual embeddedness of such questions in ontogenetic (i.e., individual, age-related) and historical time is a key element of lifespan psychological theoretical frameworks such as the HIstorical changes in DEvelopmental COntexts (HIDECO) framework (Drewelies et al., 2019). Because the dimension of time is also a crucial part of empirical research designs examining developmental change, a third central question in research on lifespan development is how the timing and spacing of observations in longitudinal studies might affect parameter estimates of substantive phenomena. To address these questions in the present doctoral thesis, I applied innovative state-of-the-art methodology including static and dynamic longitudinal modeling approaches, used data from multiple international panel studies, and systematically simulated data based on empirical panel characteristics, in three empirical studies.
The first study of this dissertation, Study I, examined the importance of adolescent intelligence (IQ), grade point average (GPA), and parental socioeconomic status (pSES) for adult educational, occupational, and health outcomes over ontogenetic and historical time. To examine the possible impact of historical changes in the 20th century on the relationships between adolescent characteristics and key adult life outcomes, the study capitalized on data from two representative US cohort studies, the National Longitudinal Surveys of Youth 1979 and 1997, whose participants were born in the late 1960s and 1980s, respectively. Adolescent IQ, GPA, and pSES were positively associated with adult educational attainment, wage levels, and mental and physical health. Across historical time, the influence of IQ and pSES on educational, occupational, and health outcomes remained approximately the same, whereas GPA gained in importance over time for individuals born in the 1980s.
The second study of this dissertation, Study II, aimed to examine strict cumulative advantage (CA) processes as possible mechanisms underlying individual differences and inequality in wage development across the lifespan. It proposed dynamic structural equation models (DSEM) as a versatile statistical framework for operationalizing and empirically testing strict CA processes in research on wages and wage dynamics (i.e., wage levels and growth rates). Drawing on longitudinal representative data from the US National Longitudinal Survey of Youth 1979, the study modeled wage levels and growth rates across 38 years. Only 0.5 % of the sample revealed strict CA processes and explosive wage growth (autoregressive coefficients AR > 1), with the majority of individuals following logarithmic wage trajectories across the lifespan. Adolescent intelligence (IQ) and adult highest educational level explained substantial heterogeneity in initial wage levels and long-term wage growth rates over time.
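The signature of a strict cumulative-advantage process, an autoregressive coefficient above one, can be illustrated with a plain first-order autoregressive simulation. This toy model is not the DSEM used in the study, and all parameter values are arbitrary assumptions:

```python
import numpy as np

# Toy first-order autoregressive simulation (not the thesis's DSEM):
# w_{t+1} = mu + phi * (w_t - mu) + e_t. With |phi| < 1 a wage deviation
# from mu decays over time; with phi > 1 it compounds, which is the
# signature of a strict cumulative-advantage (explosive) process.
# All parameter values are arbitrary assumptions.

def simulate_ar1(phi, mu=0.0, w0=1.0, steps=38, noise_sd=0.0, seed=0):
    rng = np.random.default_rng(seed)
    w = [w0]
    for _ in range(steps):
        shock = rng.normal(0.0, noise_sd) if noise_sd > 0 else 0.0
        w.append(mu + phi * (w[-1] - mu) + shock)
    return np.array(w)

stable = simulate_ar1(phi=0.90)     # deviation shrinks: 0.9**38 ≈ 0.02
explosive = simulate_ar1(phi=1.05)  # deviation compounds: 1.05**38 ≈ 6.4
```

Over the 38 waves modeled in the study, even a coefficient only slightly above one produces divergence, which is why so small a share of the sample (0.5 %) qualifies as strictly cumulative.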
The third study of this dissertation, Study III, investigated the role of observation-timing variability in the estimation of non-experimental intervention effects in panel data. Although longitudinal studies often aim at equally spaced intervals between their measurement occasions, this goal is hardly ever met. Drawing on continuous-time dynamic structural equation models, the study examined the seemingly counterintuitive potential benefits of measurement intervals that vary both within and between participants (often called individually varying time intervals, IVTs) in a panel study. It illustrated the method by modeling the effect of the transition from primary to secondary school on students’ academic motivation using empirical data from the German National Educational Panel Study (NEPS). Results of a simulation study based on this real-life example revealed that individual variation in time intervals can indeed benefit the estimation precision and recovery of the true intervention effect parameters.
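The benefit of individually varying time intervals can be intuited from the simplest continuous-time model: the discrete-time autoregressive effect across an interval of length delta is exp(a * delta), so observations at different spacings sample different points of one underlying curve and thereby identify the drift a. A toy illustration, where the drift value is an arbitrary assumption and not a NEPS estimate:

```python
import math

# Toy illustration (values are arbitrary, not NEPS estimates): in the
# continuous-time first-order model dx/dt = a * x, the discrete-time
# autoregressive effect across an observation interval of length delta
# is exp(a * delta). Intervals of different lengths therefore sample
# different points of one underlying curve, which is what lets a
# continuous-time model recover the drift a from irregular spacings.

a = -0.5  # hypothetical drift; negative means effects fade with time

def discrete_ar_effect(a, delta):
    return math.exp(a * delta)

# One process, three spacings, three different discrete coefficients:
effects = {delta: discrete_ar_effect(a, delta) for delta in (0.5, 1.0, 2.0)}

# Any observed (delta, effect) pair pins down the same a:
recovered = math.log(effects[2.0]) / 2.0
```

A discrete-time model forced to assume one common interval would conflate these coefficients, which is the estimation problem the continuous-time approach avoids.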
This dissertation focuses on the handling of time in dialogue. Specifically, it investigates how humans bridge time, or “buy time”, when they are expected to convey information that is not yet available to them (e.g. a travel agent searching for a flight in a long list while the customer is on the line, waiting). It also explores the feasibility of modeling such time-bridging behavior in spoken dialogue systems, and it examines how endowing such systems with more human-like time-bridging capabilities may affect humans’ perception of them.
The relevance of time-bridging in human-human dialogue seems to stem largely from a need to avoid lengthy pauses, as these may cause both confusion and discomfort among the participants of a conversation (Levinson, 1983; Lundholm Fors, 2015). However, this avoidance of prolonged silence is at odds with the incremental nature of speech production in dialogue (Schlangen and Skantze, 2011): Speakers often start to verbalize their contribution before it is fully formulated, and sometimes even before they possess the information they need to provide, which may result in them running out of content mid-turn.
In this work, we elicit conversational data from humans, to learn how they avoid being silent while they search for information to convey to their interlocutor. We identify commonalities in the types of resources employed by different speakers, and we propose a classification scheme. We explore ways of modeling human time-buying behavior computationally, and we evaluate the effect on human listeners of embedding this behavior in a spoken dialogue system.
Our results suggest that a system using conversational speech to bridge time while searching for information to convey (as humans do) can provide a better experience in several respects than one which remains silent for a long period of time. However, not all speech serves this purpose equally: Our experiments also show that a system whose time-buying behavior is more varied (i.e. which exploits several categories from the classification scheme we developed and samples them based on information from human data) can prevent overestimation of waiting time when compared, for example, with a system that repeatedly asks the interlocutor to wait (even if these requests for waiting are phrased differently each time). Finally, this research shows that it is possible to model human time-buying behavior on a relatively small corpus, and that a system using such a model can be preferred by participants over one employing a simpler strategy, such as randomly choosing utterances to produce during the wait, even when the utterances used by both strategies are the same.
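At its core, the varied time-buying strategy described here amounts to sampling utterance categories with probabilities estimated from human data. A minimal sketch, with invented category names and weights standing in for the classification scheme and corpus frequencies:

```python
import random

# Hypothetical sketch of a "varied" time-buying strategy: sample an
# utterance category with probability proportional to its frequency in
# human data. Category names and weights below are invented placeholders
# for the classification scheme and corpus statistics described above.

CATEGORIES = {
    "filler": 0.35,           # e.g. "uhm", "let me see"
    "status_update": 0.30,    # e.g. "still searching for flights..."
    "request_to_wait": 0.20,  # e.g. "one moment, please"
    "partial_answer": 0.15,   # e.g. "there is a morning option, and..."
}

def sample_time_buying_category(rng):
    """Draw one category according to the human-data weights."""
    cats = list(CATEGORIES)
    weights = [CATEGORIES[c] for c in cats]
    return rng.choices(cats, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded only to make this demo reproducible
samples = [sample_time_buying_category(rng) for _ in range(1000)]
```

A fuller system would condition the draw on dialogue context (e.g. elapsed waiting time), but even this unconditional sampling already yields more varied behavior than repeating a single request to wait.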
Despite the popularity of thermoresponsive polymers, much is still unknown about their behavior, how it is triggered, and what factors influence it, hindering the full exploitation of their potential. One particularly puzzling phenomenon is co-nonsolvency, in which a polymer is soluble in two individual solvents but counter-intuitively becomes insoluble in mixtures of both. Despite the innumerable potential applications of such systems, including actuators, viscosity regulators and carrier structures, this field has not yet been extensively studied apart from the classical example of poly(N-isopropylacrylamide) (PNIPAM) in mixtures of water and methanol. Therefore, this thesis focuses on evaluating how changes in the chemical structure of the polymers impact the thermoresponsive, aggregation and co-nonsolvency behaviors of both homopolymers and amphiphilic block copolymers. Within this scope, both the synthesis of the polymers and their characterization in solution are investigated. Homopolymers were synthesized by conventional free radical polymerization, whereas block copolymers were synthesized by consecutive reversible addition-fragmentation chain transfer (RAFT) polymerizations. The synthesis of the monomers N-isopropylmethacrylamide (NIPMAM) and N-vinylisobutyramide (NVIBAM), as well as a few chain transfer agents, is also covered. Through turbidimetry measurements, the thermoresponsive and co-nonsolvency behavior of PNIPMAM and PNVIBAM homopolymers is then compared to that of the well-known PNIPAM, in aqueous solutions with 9 different organic co-solvents. Additionally, the effects of end-groups, molar mass, and concentration are investigated. Despite the similarity of their chemical structures, the 3 homopolymers show significant differences in transition temperatures and some divergences in their co-nonsolvency behavior.
More complex systems are also evaluated, namely amphiphilic di- and triblock copolymers of PNIPAM and PNIPMAM with polystyrene and poly(methyl methacrylate) hydrophobic blocks. Dynamic light scattering is used to evaluate their aggregation behavior in aqueous and mixed aqueous solutions, and how it is affected by the chemical structure of the blocks, the chain architecture, the presence of co-solvents, and the polymer concentration. The results shed light on the thermoresponsive, co-nonsolvency and aggregation behavior of these polymers in solution, providing valuable information for the design of systems with a desired aggregation behavior that generate targeted responses to changes in temperature and solvent mixture.
Background: The concept of self-compassion (SC), a special way of being compassionate with oneself while dealing with stressful life circumstances, has attracted increasing attention in research over the past two decades. Research has already shown that SC has beneficial effects on affective well-being and other mental health outcomes. However, little is known about the ways in which SC might facilitate affective well-being in stressful situations. Hence, a central concern of this dissertation was the question of which underlying processes might influence the link between SC and affective well-being. Two established components of stress processing that might play an important role in this context are the amount of experienced stress and the way of coping with a stressor. Thus, using a multi-method approach, this dissertation aimed to determine the extent to which SC might help alleviate experienced stress and promote more salutary coping while dealing with stressful circumstances; these processes might ultimately help improve one’s affective well-being. Derived from that, it was hypothesized that more SC is linked to less perceived stress and intensified use of salutary coping responses. Additionally, it was suggested that perceived stress and coping mediate the relation between SC and affective well-being.
Method: The research questions were targeted in three single studies and one meta-study. To test my assumptions about the relations of SC and coping in particular, a systematic literature search was conducted, resulting in k = 136 samples with an overall sample size of N = 38,913. To integrate the z-transformed Pearson correlation coefficients, random-effects models were calculated. All hypotheses were tested with a three-wave cross-lagged design in two short-term longitudinal online studies assessing SC, perceived stress and coping responses in all waves. The first study explored the assumptions in a student sample (N = 684) with a mean age of 27.91 years over a six-week period, whereas the second implemented the measurements in the GESIS Panel (N = 2,934), with a mean age of 52.76 years, analyzing the hypotheses in a population-based sample across eight weeks. Finally, an ambulatory assessment study was designed to expand the findings of the longitudinal studies to the intraindividual level. Thus, a sample of 213 participants completed questionnaires on momentary SC, perceived stress, engagement and disengagement coping, and affective well-being on their smartphones three times per day over seven consecutive days. The data were processed using 1-1-1 multilevel mediation analyses.
Results: Results of the meta-analysis indicated that higher SC is significantly associated with more use of engagement coping and less use of disengagement coping. Considering the relations between SC and stress-processing variables in all three single studies, cross-lagged paths from the longitudinal data, as well as multilevel modeling paths from the ambulatory assessment data, indicated notable relations among all relevant stress variables. As expected, results showed a significant negative relation of SC with perceived stress and disengagement coping, as well as a positive connection with engagement coping responses at the dispositional and intraindividual level. However, regarding the mediational hypothesis, the most promising pathway in the link between SC and affective well-being turned out to be perceived stress in all three studies, while the effects of the mediational pathways through coping responses were less robust.
Conclusion: Thus, a more self-compassionate attitude, and higher momentary SC when needed in specific situations, can help individuals engage in effective stress processing. Regarding the underlying mechanisms in the link between SC and affective well-being, stress perception in particular seemed to be the most promising candidate for enhancing affective well-being at the dispositional and the intraindividual level. Future research should explore the pathways between SC and affective well-being in specific contexts and samples, and also take into account additional influential factors.
High growth firms (HGFs) are important for job creation and considered to be precursors of economic growth. We investigate how formal institutions, like product- and labor-market regulations, as well as the quality of regional governments that implement these regulations, affect HGF development across European regions. Using data from Eurostat, OECD, WEF, and Gothenburg University, we show that both regulatory stringency and the quality of the regional government influence the regional shares of HGFs. More importantly, we find that the effect of labor- and product-market regulations ultimately depends on the quality of regional governments: in regions with high quality of government, the share of HGFs is neither affected by the level of product market regulation, nor by more or less flexibility in hiring and firing practices. Our findings contribute to the debate on the effects of regulations by showing that regulations are not, per se, “good, bad, and ugly”, rather their impact depends on the efficiency of regional governments. Our paper offers important building blocks to develop tailored policy measures that may influence the development of HGFs in a region.
Jacob Brandon Maduro’s Memoirs and Related Observations (Havana, 1953) speak to the lasting yet malleable legacy of Jewish Caribbean/Atlantic mercantile communities that defined early modern settlement in the Americas. A close reading of the Memoirs, alongside relevant archival records and community narratives, lends new perspectives to scholarship on Port Jewries and the Atlantic Diaspora. Specifically concerned with Jacob’s adoption of such leading intellectual and political tropes as the Monroe Doctrine, José Martí’s Nuestra America, and a Zionism that evolved from an ideology to a reality, the Memoirs reveal a narrative at once defined by the tremendous upheavals of the first half of the 20th century, and an enduring sense of Jewish diasporic peoplehood defined through a Port Jew paradigm whereby the preservation of Jewish ethnicity is understood as synonymous with the championing of modernity.
Ethical issues surrounding modern computing technologies play an increasingly important role in the public debate. Yet, ethics still appears either not at all or only to a very small extent in computer science degree programs. This paper provides an argument for the value of ethics beyond a pure responsibility perspective and describes the positive value of ethical debate for future computer scientists. It also provides a systematic analysis of the module handbooks of 67 German universities and shows that there is indeed a lack of ethics in computer science education. Finally, we present a principled design of a compulsory course for undergraduate students.
The Andean Cordillera is a mountain range located at the western South American margin and is part of the eastern circum-Pacific orogenic belt. The ~7000 km long mountain range is one of the longest on Earth and hosts the second largest orogenic plateau in the world, the Altiplano-Puna plateau. The Andes are known as a non-collisional subduction-type orogen which developed as a result of the interaction between the subducting oceanic Nazca plate and the South American continental plate. The different Andean segments exhibit along-strike variations of morphotectonic provinces characterized by different elevations, volcanic activity, deformation styles, crustal thickness, shortening magnitude and oceanic plate geometry. Most of the present-day elevation can be explained by crustal shortening in the last ~50 Ma, with the shortening magnitude decreasing from ~300 km in the central (15°S-30°S) segment to less than half that in the southern part (30°S-40°S). Several factors have been proposed that might control the magnitude and acceleration of shortening of the Central Andes in the last 15 Ma. One important factor is likely the slab geometry. At 27-33°S, the slab dips horizontally at ~100 km depth due to the subduction of the buoyant Juan Fernandez Ridge, forming the Pampean flat-slab. This horizontal subduction is thought to influence the thermo-mechanical state of the Sierras Pampeanas foreland, for instance, by strengthening the lithosphere and promoting the thick-skinned propagation of deformation to the east, resulting in the uplift of the Sierras Pampeanas basement blocks. The flat-slab has migrated southwards from the Altiplano latitude at ~30 Ma to its present-day position, and the processes associated with its passage, as well as their consequences for the contemporaneous acceleration of the shortening rate in the Central Andes, remain unclear.
Although the passage of the flat-slab could offer an explanation for the acceleration of the shortening, the timing does not explain the two pulses of shortening at about 15 Ma and 4 Ma that are suggested by geological observations. I hypothesize that deformation in the Central Andes is controlled by a complex interaction between the subduction dynamics of the Nazca plate and the dynamic strengthening and weakening of the South American plate due to several upper-plate processes. To test this hypothesis, a detailed investigation into the role of the flat-slab, the structural inheritance of the continental plate, and the subduction dynamics in the Andes is needed. Therefore, I have built two classes of numerical thermo-mechanical models: (i) The first class comprises a series of generic E-W-oriented high-resolution 2D subduction models that include flat subduction, in order to investigate the role of the subduction dynamics in the temporal variability of the shortening rate in the Central Andes at Altiplano latitudes (~21°S). The shortening rate from the models was then validated against the observed tectonic shortening rate in the Central Andes. (ii) The second class comprises a series of 3D data-driven models of the present-day Pampean flat-slab configuration and the Sierras Pampeanas (26-42°S). These models aim to investigate the relative contributions of the present-day flat subduction and of inherited structures in the continental lithosphere to strain localization. Both model classes were built using the advanced finite element geodynamic code ASPECT.
The first main finding of this work suggests that the temporal variability of shortening in the Central Andes is primarily controlled by the subduction dynamics of the Nazca plate as it penetrates into the mantle transition zone. These dynamics depend on the westward velocity of the South American plate, which provides the main crustal shortening force to the Andes and forces the trench to retreat. When the subducting plate reaches the lower mantle, it buckles onto itself until the forced trench retreat causes the slab to steepen in the upper mantle, in contrast with the classical slab-anchoring model. The steepening of the slab hinders the trench, causing it to resist the advancing South American plate and resulting in pulsatile shortening. This buckling-and-steepening subduction regime could have been initiated by the overall decrease in the westward velocity of the South American plate. In addition, the passage of the flat-slab is required to promote the shortening of the continental plate, because flat subduction scrapes the mantle lithosphere, thus weakening the continental plate. This process contributes to efficient shortening when the trench is hindered, followed by mantle lithosphere delamination at ~20 Ma. Finally, the underthrusting of the Brazilian cratonic shield beneath the orogen occurs at ~11 Ma due to the mechanical weakening caused by the thick sediments covering the shield margin, and due to the decreasing resistance of the weakened lithosphere of the orogen.
The second main finding of this work suggests that the cold flat-slab strengthens the overriding continental lithosphere and prevents strain localization. The deformation is therefore transmitted to the eastern front of the flat-slab segment by the shear stress operating at the subduction interface; the flat-slab thus acts like an indenter that “bulldozes” the mantle keel of the continental lithosphere. The offset in the eastward propagation of deformation between the flat-slab segment and the steeper slab segment to the south causes the formation of a transpressive dextral shear zone. Here, inherited faults from past tectonic events are reactivated and further localize the deformation in an en-echelon strike-slip shear zone, through a mechanism that I refer to as the “flat-slab conveyor”. Specifically, the shallowing of the flat-slab causes the lateral deformation, which explains the timing of multiple geological events preceding the arrival of the flat-slab at 33°S. These include the onset of compression and the transition from thin- to thick-skinned deformation styles resulting from the contraction of the crust in the Sierras Pampeanas, some 10 and 6 Myr before the Juan Fernandez Ridge collision at that latitude, respectively.
This thesis bridges two areas of mathematics, algebra on the one hand with the Milnor-Moore theorem (also called Cartier-Quillen-Milnor-Moore theorem) as well as the Poincaré-Birkhoff-Witt theorem, and analysis on the other hand with Shintani zeta functions which generalise multiple zeta functions.
The first part is devoted to an algebraic formulation of the locality principle in physics and to generalisations of classification theorems such as the Milnor-Moore and Poincaré-Birkhoff-Witt theorems to the locality framework. The locality principle roughly says that events that take place far apart in spacetime do not influence each other. The algebraic formulation of this principle discussed here is useful when analysing singularities which arise from events located far apart in space, in order to renormalise them while keeping a memory of the fact that they do not influence each other. We start by endowing a vector space with a symmetric relation, named the locality relation, which keeps track of elements that are "locally independent". The pair consisting of a vector space together with such a relation is called a pre-locality vector space. This concept is extended to tensor products, allowing only tensors made of locally independent elements. We extend this concept to the locality tensor algebra and the locality symmetric algebra of a pre-locality vector space, and prove the universal property of each of these structures. We also introduce pre-locality Lie algebras, together with their associated locality universal enveloping algebras, and prove their universal property. We later upgrade all such structures and results from the pre-locality to the locality context, requiring the locality relation to be compatible with the linear structure of the vector space. This allows us to define locality coalgebras, locality bialgebras, and locality Hopf algebras. Finally, all the previous results are used to prove the locality versions of the Milnor-Moore and Poincaré-Birkhoff-Witt theorems. It is worth noticing that the proofs presented not only generalise the results of the usual (non-locality) setup, but also often require fewer tools than their non-locality counterparts.
The second part is devoted to the study of the polar structure of Shintani zeta functions. Such functions, which generalise the Riemann zeta function, multiple zeta functions and Mordell-Tornheim zeta functions, among others, are parametrised by matrices with real non-negative entries. It is known that Shintani zeta functions extend to meromorphic functions with poles on affine hyperplanes. We refine this result by showing that the poles lie on hyperplanes parallel to the facets of certain convex polyhedra associated with the defining matrix of the Shintani zeta function. Explicitly, the latter are the Newton polytopes of the polynomials induced by the columns of the underlying matrix. We then prove that the coefficients of the equations describing the hyperplanes in the canonical basis are either zero or one, similar to the poles arising when renormalising generic Feynman amplitudes. For that purpose, we introduce an algorithm to distribute weight over a graph such that the weight at each vertex satisfies a given lower bound.
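For orientation, one common form of the Shintani zeta function attached to a matrix $A=(a_{ij})$ with non-negative real entries is the following; conventions vary across the literature, e.g. many authors shift the summation variables by positive parameters:

```latex
% One common convention (others shift the summation variables k_i by
% positive parameters x_i):
\zeta(A;\, s_1,\dots,s_m)
  \;=\; \sum_{k_1,\dots,k_n \ge 1}\;
        \prod_{j=1}^{m}\Big(\sum_{i=1}^{n} a_{ij}\, k_i\Big)^{-s_j}.
% The Riemann zeta function is the case n = m = 1, A = (1). Choosing
% n = m and the 0/1 matrix with a_{ij} = 1 for i >= j gives the linear
% forms k_j + k_{j+1} + ... + k_n, recovering the multiple zeta functions
% after the change of variables n_j = k_j + ... + k_n.
```

The Newton polytopes mentioned above are those of the polynomials $\sum_i a_{ij} k_i$ read off column by column from $A$.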
In recent decades, astronomy has seen a boom in large-scale stellar surveys of the Galaxy. The detailed information obtained about millions of individual stars in the Milky Way is bringing us a step closer to answering one of the most outstanding questions in astrophysics: how do galaxies form and evolve? The Milky Way is the only galaxy where we can dissect many stars into their high-dimensional chemical composition and complete phase space, which, analogously to fossil records, can unveil the past history of the genesis of the Galaxy. The processes that lead to the formation of large structures such as the Milky Way are critical for constraining cosmological models; we call this line of study Galactic archaeology or near-field cosmology.
At the core of this work, we present a collection of efforts to chemically and dynamically characterise the disks and bulge of our Galaxy. The results we present in this thesis have only been possible thanks to the advent of the Gaia astrometric satellite, which has revolutionised the field of Galactic archaeology by precisely measuring the positions, parallax distances and motions of more than a billion stars. Another, no less important, breakthrough is the APOGEE survey, which has observed spectra in the near-infrared, peering into the dusty regions of the Galaxy and allowing us to determine detailed chemical abundance patterns in hundreds of thousands of stars. To accurately depict the Milky Way structure, we use and develop the Bayesian isochrone-fitting code StarHorse; this software can predict stellar distances, extinctions and ages by combining astrometry, photometry and spectroscopy based on stellar evolutionary models. The StarHorse code is pivotal for calculating distances where Gaia parallaxes alone do not allow accurate estimates.
We show that by combining Gaia, APOGEE and photometric surveys, and using StarHorse, we can produce a chemical cartography of the Milky Way disks from their outermost to innermost parts. Such a map is unprecedented in the inner Galaxy. It reveals a continuity of the bimodal chemical pattern previously detected in the solar neighbourhood, indicating two populations with distinct formation histories. Furthermore, the data reveal a chemical gradient within the thin disk, where the content of 𝛼-process elements and metals is higher towards the centre. Focusing on a sample in the inner MW, we confirm the extension of the chemical duality to the innermost regions of the Galaxy. We find stars with bar-shaped orbits to show both high- and low-𝛼 abundances, suggesting the bar formed by secular evolution, trapping stars that already existed. By analysing the chemical-orbital space of the inner Galactic regions, we disentangle the multiple populations that inhabit this complex region. We reveal the presence of the thin disk, thick disk, bar, and a counter-rotating population, which resembles the outcome of a perturbed proto-Galactic disk. Our study also finds that the inner Galaxy holds a high quantity of super-metal-rich stars, with metallicities up to three times solar, suggesting it is a possible repository of the old super-metal-rich stars found in the solar neighbourhood.
We also take on the complicated task of deriving individual stellar ages. With StarHorse, we calculate the ages of main-sequence turn-off and sub-giant stars for several public spectroscopic surveys. We validate our results by investigating linear relations between chemical abundances and time, since the 𝛼 and neutron-capture elements are sensitive to age, reflecting the different enrichment timescales of these elements. To study the disks in the solar neighbourhood further, we use an unsupervised machine-learning algorithm to delineate a multidimensional separation of chrono-chemical stellar groups, revealing the chemical thick disk, the thin disk, and young 𝛼-rich stars. The thick disk shows a small age dispersion, indicating its fast formation, in contrast to the thin disk, which spans a wide range of ages.
With groundbreaking data, this thesis presents a detailed chemo-dynamical view of the disk and bulge of our Galaxy. Our findings on the Milky Way can be linked to the evolution of high-redshift disk galaxies, helping to solve the conundrum of galaxy formation.
Research within the framework of Basic Psychological Need Theory (BPNT) finds strong associations between basic need frustration and depressive symptoms. This study examined the role of rumination as an underlying mechanism in the association between basic psychological need frustration and depressive symptoms. A cross-sectional sample of N = 221 adults (55.2% female, mean age = 27.95, range = 18–62, SD = 10.51) completed measures assessing their level of basic psychological need frustration, rumination, and depressive symptoms. Correlational analyses and multiple mediation models were conducted. Brooding partially mediated the relation between need frustration and depressive symptoms. BPNT and Response Styles Theory are compatible and can further advance knowledge about depression vulnerabilities.
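A mediation analysis of this kind (X → M → Y, with the indirect effect estimated as the product of the X→M and M→Y paths) can be sketched on simulated data. The path coefficients, the sample, and the use of plain OLS are illustrative assumptions, not the study's data or exact analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 221                                       # sample size as in the study
# Simulated variables: need frustration (x) -> brooding (m) -> depression (y)
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)              # assumed path a = 0.5
y = 0.4 * m + 0.2 * x + rng.normal(size=n)    # assumed b = 0.4, direct c' = 0.2

def ols(target, predictors):
    # Least-squares coefficients with an intercept column prepended.
    X = np.column_stack([np.ones(len(predictors)), predictors])
    return np.linalg.lstsq(X, target, rcond=None)[0]

a = ols(m, x[:, None])[1]                     # X -> M path
b, c_prime = ols(y, np.column_stack([m, x]))[1:]  # M -> Y and direct X -> Y
c_total = ols(y, x[:, None])[1]               # total effect X -> Y
indirect = a * b                              # mediated (indirect) effect
```

Partial mediation corresponds to a nonzero indirect effect alongside a nonzero direct path c'; for OLS with a single mediator, the identity c_total = c' + a·b holds exactly.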
Development of self-concept and task interest has been shown to be affected by social comparison processes in a variety of cross-sectional studies. A potential explanation for these effects is an effect of social comparative performance feedback on an individual's self-evaluation of performance, which in turn influences the development of self-concept and task interest. There are, however, only a few studies addressing this topic with experimental designs. This study aimed to close this research gap by experimentally manipulating social comparative performance feedback. Feedback was based on 2 × 2 experimental conditions: social position (high vs. low) and average performance of the reference group (high vs. low). Results show a strong effect of social position on self-evaluation of performance and smaller effects on self-concept and task interest.
Sulfur is an important element that is incorporated into many biomolecules in humans. Its incorporation and transfer into biomolecules are facilitated by a series of different sulfurtransferases. Among these is the human mercaptopyruvate sulfurtransferase (MPST), also designated tRNA thiouridine modification protein (TUM1). The human TUM1 protein has been implicated in a wide range of physiological processes in the cell, including, but not limited to, Molybdenum cofactor (Moco) biosynthesis, cytosolic tRNA thiolation, and the generation of H2S as a signaling molecule in both mitochondria and the cytosol. Previous interaction studies showed that TUM1 interacts with the L-cysteine desulfurase NFS1 and the Molybdenum cofactor biosynthesis protein 3 (MOCS3). Here, we characterise the roles of TUM1 in human cells using CRISPR/Cas9 genetically modified Human Embryonic Kidney cells. We show that TUM1 is involved in the sulfur transfer for Molybdenum cofactor synthesis and tRNA thiomodification, by spectrophotometric measurement of sulfite oxidase activity and liquid chromatography quantification of the level of sulfur-modified tRNA. Further, we show that TUM1 plays a role in hydrogen sulfide production and cellular bioenergetics.
This paper presents a new design for MOOCs for professional development of skills needed to meet the UN Sustainable Development Goals: the CoMOOC, or Co-designed Massive Open Online Collaboration. The CoMOOC model is based on co-design with multiple stakeholders, including end-users within the professional communities the CoMOOC aims to reach. This paper shows how the CoMOOC model could help the tertiary sector deliver on the UN Sustainable Development Goals (UNSDGs), including but not limited to SDG 4 Education, by providing a more effective vehicle for professional development at the scale that the UNSDGs require. Interviews with professionals using MOOCs, and design-based research with professionals, have informed the development of the CoMOOC model. This research shows that open, online, collaborative learning experiences are highly effective for building professional community knowledge. Moreover, this research shows that the collaborative learning design at the heart of the CoMOOC model is feasible cross-platform. Research with teachers working in crisis contexts in Lebanon, many of whom were refugees, is presented to show how this form of large-scale, co-designed, online learning can support professionals even in the most challenging contexts, such as mass displacement, where expertise is urgently required.
Long COVID patients show symptoms such as fatigue, muscle weakness and pain. Adequate diagnostics are still lacking. Investigating muscle function might be a beneficial approach. The holding capacity (maximal isometric Adaptive Force; AFisomax) was previously suggested to be especially sensitive to impairments. This longitudinal, non-clinical study aimed to investigate the AF in long COVID patients and their recovery process. AF parameters of the elbow and hip flexors were assessed in 17 patients at three time points (pre: long COVID state; post: immediately after the first treatment; end: recovery) by an objectified manual muscle test. The tester applied an increasing force to the limb of the patient, who had to resist isometrically for as long as possible. The intensities of 13 common symptoms were queried. At pre, patients started to lengthen their muscles at ~50% of the maximal AF (AFmax), which was then reached during eccentric motion, indicating unstable adaptation. At post and end, AFisomax increased significantly to ~99% and 100% of AFmax, respectively, reflecting stable adaptation. AFmax was statistically similar at all three time points. Symptom intensity decreased significantly from pre to end. The findings revealed a substantially impaired maximal holding capacity in long COVID patients, which returned to normal function with substantial health improvement. AFisomax might be a suitable, sensitive functional parameter for assessing long COVID patients and supporting the therapy process.
Thai MOOC Academy
(2023)
Thai MOOC Academy is a national digital learning platform that has been serving as a mechanism for promoting lifelong learning in Thailand since 2017. It has recently undergone significant improvements and upgrades, including the implementation of a credit bank system and a learner's e-portfolio system interconnected with the platform. Thai MOOC Academy is introducing a national credit bank system for accreditation and management, which allows for the transfer of expected learning outcomes and educational qualifications between formal education, non-formal education, and informal education. The credit bank system has five distinct features: issuing forgery-proof certificates, recording learning results, transferring external credits within the same wallet, accumulating learning results, and creating a QR code for verification purposes. The paper discusses the features and future potential of Thai MOOC Academy as it is extended towards a sandbox for the national credit bank system in Thailand.
Technologically important, environmentally friendly InP quantum dots (QDs), typically used as green and red emitters in display devices, can achieve exceptional photoluminescence quantum yields (PL QYs) of near-unity (95-100%) when the state-of-the-art core/shell heterostructure with a ZnSe inner shell and a ZnS outer shell is elaborately applied. Nevertheless, this has so far led to only a few industrial applications, such as QD liquid crystal displays (QD–LCDs) applied to blue backlight units, even though the functionalizable characteristics of QDs open up many industrially feasible applications, such as QD light-emitting diodes (QD‒LEDs) and luminescent solar concentrators (LSCs).
Before introducing the main research, the theoretical basis and fundamentals of QDs are described in detail on the basis of quantum mechanics and experimental synthetic results: the concepts of QDs and colloidal QDs, the type-I core/shell structure, transition-metal-doped semiconductor QDs, the surface chemistry of QDs, and their applications (LSCs, QD‒LEDs, and EHD jet printing) are sequentially elucidated for better understanding. This doctoral thesis focuses mainly on the connectivity between QD materials and QD devices, based on the synthesis of InP QDs composed of an inorganic core (core/shell heterostructure) and an organic shell (surface ligands on the QD surface). Regarding the former (core/shell heterostructure), a ZnCuInS mid-shell is newly introduced as an intermediate layer between a Cu-doped InP core and a ZnS shell for LSC devices. Regarding the latter (surface ligands), the ligand effects of 1-octanethiol and chloride ions are investigated with respect to device stability in QD‒LEDs and printability in an electro-hydrodynamic (EHD) jet printing system; here, the research explores the behavior of surface ligands based on a proton-transfer mechanism on the QD surface.
Chapter 3 demonstrates the synthesis of strain-engineered, highly emissive Cu:InP/Zn–Cu–In–S (ZCIS)/ZnS core/shell/shell heterostructure QDs via a one-pot approach. When this unconventional combination of a ZCIS/ZnS double shelling scheme is introduced to a series of Cu:InP cores with different sizes, the resulting Cu:InP/ZCIS/ZnS QDs, with a tunable near-IR PL range of 694–850 nm, yield the highest-ever PL QYs of 71.5–82.4%. These outcomes strongly point to the efficacy of the ZCIS interlayer, which effectively alleviates the core/shell interfacial strain, in achieving high emissivity. The presence of such an intermediate ZCIS layer is further examined by comparative size, structural, and compositional analyses. The end of this chapter briefly introduces the research related to LSC devices fabricated from Cu:InP/ZCIS/ZnS QDs, currently in progress.
Chapter 4 mainly deals with the ligand effect in 1-octanethiol passivation of InP/ZnSe/ZnS QDs in terms of incomplete surface passivation during synthesis. This chapter demonstrates the lack of anionic carboxylate ligands on the surface of InP/ZnSe/ZnS quantum dots (QDs), where zinc carboxylate ligands can be converted to carboxylic acid or carboxylate ligands via proton transfer by 1-octanethiol. The as-synthesized QDs initially have an under-coordinated vacancy surface, which is passivated by solvent ligands such as ethanol and acetone. Upon exposure of the QD surface to 1-octanethiol, 1-octanethiol effectively induces the surface binding of anionic carboxylate ligands (derived from zinc carboxylate ligands) by proton transfer, which consequently exchanges the ethanol and acetone ligands bound to the incomplete QD surface. Systematic chemical analyses, such as thermogravimetric analysis‒mass spectrometry and proton nuclear magnetic resonance spectroscopy, directly show the interplay of surface ligands and its relation to QD light-emitting diodes (QD‒LEDs).
Chapter 5 shows the relation between the material stability of QDs and the device stability of QD‒LEDs through an investigation of surface chemistry and shell thickness. In typical III–V colloidal InP quantum dots (QDs), an inorganic ZnS outermost shell is used to provide stability when overcoated onto the InP core. However, this work presents a faster photo-degradation of InP/ZnSe/ZnS QDs with a thick ZnS shell than with a thin ZnS shell when 1-octanethiol was applied as the sulfur source to form the ZnS outermost shell. Herein, 1-octanethiol induces the formation of weakly bound carboxylate ligands via proton transfer on the QD surface, resulting in faster degradation under UV light even though a thicker ZnS shell was formed onto the InP/ZnSe QDs. Detailed insight into the surface chemistry was obtained from proton nuclear magnetic resonance spectroscopy and thermogravimetric analysis–mass spectrometry. However, the lifetimes of the electroluminescence devices fabricated from InP/ZnSe/ZnS QDs with a thick or a thin ZnS shell show, surprisingly, the opposite trend to the material stability of the QDs: the QD light-emitting diodes (QD‒LEDs) with thick-ZnS-shelled QDs maintained their luminance more stably than those with thin-ZnS-shelled QDs. This study elucidates the degradation mechanisms of the QDs and the QD‒LEDs based on these results and discusses why the material stability of QDs differs from the lifetime of QD‒LEDs.
Chapter 6 suggests a method to improve the printability of EHD jet printing when QD materials are applied in QD ink formulations; this work introduces GaP mid-shelled InP QDs and highlights the role of surface charge in the EHD jet printing technique. In general, a GaP intermediate shell has been introduced into III–V colloidal InP quantum dots (QDs) to enhance their thermal stability and quantum efficiency, as in the type-I core/shell/shell heterostructure InP/GaP/ZnSeS QDs. Herein, these highly luminescent InP/GaP/ZnSeS QDs were synthesized and applied to EHD jet printing, by which this study demonstrates that unreacted Ga and Cl ions on the QD surface reduce the operating voltage of the cone jet and stabilize cone-jet formation. This result indicates that the GaP intermediate shell not only improves the PL QY and thermal stability of InP QDs but also adjusts the critical flow rate required for cone-jet formation. In other words, the surface charges of quantum dots can play a significant role in forming the cone apex in the EHD capillary nozzle. For an industrially convenient validation of surface charges on the QD surface, zeta potential analyses of QD solutions were performed as a simple method, as well as inductively coupled plasma optical emission spectrometry (ICP-OES) for the elemental composition.
Beyond the generation of highly emissive InP QDs with narrow FWHM, these studies address the connection between QD materials and QD devices, not only to provide a vital jumping-off point for industrially feasible applications but also to reveal, from chemical and physical standpoints, the origins that obstruct the improvement of device performance, both experimentally and theoretically.
Layered structures are ubiquitous in nature and industrial products. Individual layers can have different mechanical/thermal properties and functions, independently contributing to the performance of the whole layered structure in its relevant application. Tuning each layer affects the performance of the whole layered system.
Pores are utilized in various disciplines where low density but large surfaces are demanded. Besides, open and interconnected pores can act as transfer channels for guest chemical molecules. The shape of the pores influences the compression behavior of the material. However, introducing pores decreases the density and consequently the mechanical strength. To maintain a defined mechanical strength under various stresses, a porous structure can be reinforced with agents such as fibers, fillers, or a layered structure that bears the mechanical stress in the intended application.
In this context, this thesis aimed to generate new functions in bilayer systems by combining layers having different moduli and/or porosity, and to develop suitable processing techniques to access these structures.
Manufacturing processes for layered structures often employ organic solvents, which cause environmental pollution. The bilayer structures studied here were therefore manufactured by processes free of organic solvents.
In this thesis, three bilayer systems were studied to answer the individual questions.
First, while various methods of introducing pores in melt-phase are reported for one-layer constructs with simple geometry, can such methods be applied to a bilayer structure, giving two porous layers?
This was addressed with Bilayer System 1. Two porous layers were obtained from melt-blending of two different polyurethanes (PU) and polyvinyl alcohol (PVA) in a co-continuous phase, followed by sequential injection molding and leaching of the PVA phase in deionized water. A porosity of 50 ± 5% with high interconnectivity was obtained, with pore sizes in both layers ranging from 1 µm to 100 µm and averaging 22 µm. The obtained pores were tailored by an annealing treatment at relevant high temperatures of 110 °C and 130 °C, which allowed the porosity to be kept constant. The disadvantages of this system are that a maximum of 50% porosity could be reached and that removal of the leaching material in the weld-line section of both layers is not guaranteed. Such a construct serves as a model bilayer porous structure for determining structure–property relationships with respect to pore size, porosity and the mechanical properties of each layer. This fabrication method is also applicable to complex geometries by designing a suitable mold for injection molding.
Secondly, the scCO2 foaming process at elevated temperature and pressure is considered a green manufacturing process. Employing this method as a post-treatment can alter the orientation history of polymer chains created by previous fabrication steps. Can a bilayer structure be fabricated by a combination of sequential injection molding and scCO2 foaming, in which a porous layer is supported by a compact layer?
Such a construct (Bilayer System 2) was generated by sequential injection molding of a PCL (Tm ≈ 58 °C) layer and a PLLA (Tg ≈ 58 °C) layer. Soaking this structure in an autoclave with scCO2 at T = 45 °C and P = 100 bar led to the selective foaming of the PCL with a porosity of 80%, while the PLLA layer remained compact. The scCO2 treatment led to the formation of a porous core and a skin layer in the PCL; however, the degree of crystallinity of the PLLA layer also increased from 0 to 50% at the defined temperature and pressure. The microcellular structure of the PCL as well as the degree of crystallinity of the PLLA were controlled by increasing the soaking time.
Thirdly, wrinkles on surfaces at the micro/nano scale alter surface-related properties. Wrinkles form on the surface of a bilayer structure consisting of a compliant substrate and a stiff thin film. However, the reported wrinkles were not reversible. Nature offers numerous examples of dynamic wrinkles at the nano and micro scale, such as gecko foot hairs providing reversible adhesion and the self-cleaning lotus leaf, which alters the hydrophobicity of its surface. It was envisioned to imitate this biomimetic function in the bilayer structure, where self-assembling on/off patterns would be realized on the surface of this construct.
In summary, developing layered constructs having different properties/functions in the individual layers, or exhibiting a new function as a consequence of the layered structure, can give novel insights for designing layered constructs in various disciplines such as the packaging and transport industry, the aerospace industry and health technology.
The present work focuses on the preparation and characterisation of various nanoplastic reference material candidates. Nanoplastics are plastic particles in a size range of 1–1000 nm. The term has emerged in recent years as a distinction from the larger microplastics (1–1000 μm). Since the properties of the two classes of plastic particles differ significantly due to their size, it is important to have nanoplastic reference materials. These were produced for the polymer types polypropylene (PP) and polyethylene (PE) as well as poly(lactic acid) (PLA).
A top-down method was used to produce the nanoplastic for the polyolefins PP and PE (Section 3.1). The material was crushed in acetone using an Ultra-Turrax disperser and then transferred to water. This process yields reproducible results when repeated, making it suitable for producing a reference material candidate. The resulting dispersions were investigated using dynamic and electrophoretic light scattering. The dispersion of PP particles gave a mean hydrodynamic diameter Dh = 180.5±5.8 nm with a PDI = 0.08±0.02 and a zeta potential ζ = −43.0 ± 2.0 mV. For the PE particles, a diameter Dh = 344.5 ± 34.6 nm, with a PDI = 0.39 ± 0.04 and a zeta potential of ζ = −40.0 ± 4.2 mV, was measured. Both dispersions therefore qualify as nanoplastics, as the particles are < 1000 nm. Furthermore, the starting material of these polyolefin particles was mixed with a gold salt and the nanoplastic production was repeated in order to obtain gold-doped nanoplastic particles, which should simplify detection of the particles.
In addition to the top-down approach, a bottom-up method was chosen for the PLA (Section 3.2). Here, the polymer was first dissolved in THF and stabilised with a surfactant. Then water was added and the THF evaporated, leaving an aqueous PLA dispersion. This experiment was also investigated using dynamic light scattering and, when repeated, yielded reproducible results, i.e., an average hydrodynamic diameter of Dh = 89.2 ± 3.0 nm. Since the mass concentration of PLA in the dispersion is known from the production method, a Python notebook was tested on these samples to calculate the number and mass concentrations of nano(plastic) particles using the MALS results. As with the plastic produced in Section 3.1, gold was also incorporated into the particles, which was achieved by adding a dispersion of gold clusters with a diameter of D = 1.15 nm in an ionic liquid (IL) during the production process. Here, the preparation of the gold clusters in the ionic liquid 1-ethyl-3-methylimidazolium dicyanamide ([Emim][DCA]) represented the first use of an IL both as a reducing agent for gold and as a solvent for the gold clusters. Two volumes of gold cluster dispersion were added during the PLA particle synthesis. The addition of the gold clusters leads to much larger particles: the nanoPLA with 0.8% Au has a diameter of Dh = 198.0 ± 10.8 nm and the nanoPLA with 4.9% Au has a diameter of Dh = 259.1 ± 23.7 nm. Initial TEM investigations show that the nanoPLA particles form hollow spheres when gold clusters are added. However, the mechanism leading to these structures remains unclear.
“How can a course structure be redesigned based on empirical data to enhance learning effectiveness through a student-centered approach using objective criteria?” was the research question we asked. “Digital Twins for Virtual Commissioning of Production Machines” is a course using several innovative concepts, including an in-depth practical part with online experiments called virtual labs. The teaching-learning concept is continuously evaluated. Card sorting is a popular method for designing information architectures (IA), “a practice of effectively organizing, structuring, and labeling the content of a website or application into a structure that enables efficient navigation” [11]. In the presented higher education context, a so-called hybrid card sort was used, in which each participant had to sort 70 cards into seven predefined categories or create new categories themselves. Twelve out of 28 students voluntarily participated, and short interviews were conducted after the activity. The analysis of the category mapping creates a quantitative measure of the (dis-)similarity of the keywords in specific categories using hierarchical cluster analysis (HCA). The learning designer could then interpret the results to make decisions about the number, labeling and order of sections in the course.
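The clustering step described above can be sketched as follows: card-sort results are turned into a dissimilarity matrix (the fraction of participants who placed two cards in different categories) and fed into hierarchical clustering. The tiny sorting matrix, the co-occurrence dissimilarity, and average linkage are illustrative assumptions, not the study's actual data or settings:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy card-sort data: sorts[p][card] = category chosen by participant p.
# Five cards and three participants, invented for illustration.
sorts = np.array([
    [0, 0, 1, 1, 2],
    [0, 0, 1, 2, 2],
    [0, 1, 1, 2, 2],
])
n_cards = sorts.shape[1]

# Dissimilarity: fraction of participants placing the two cards apart.
dist = np.zeros((n_cards, n_cards))
for i in range(n_cards):
    for j in range(n_cards):
        dist[i, j] = np.mean(sorts[:, i] != sorts[:, j])

# Average-linkage hierarchical clustering on the condensed distance matrix,
# then cut the dendrogram into three clusters.
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
```

Cards that most participants grouped together end up in the same cluster; inspecting the dendrogram then informs the number, labeling and order of course sections.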
The deformation style of mountain belts is greatly influenced by the upper-plate architecture created during preceding deformation phases. The Mesozoic Salta Rift extensional phase created a dominant structural and lithological framework that controls Cenozoic deformation and exhumation patterns in the Central Andes. Studying the nature of these pre-existing anisotropies is key to understanding the spatiotemporal distribution of exhumation and its controlling factors. The Eastern Cordillera, in particular, has a structural grain that is partly controlled by Salta Rift structures and their orientation relative to Andean shortening. As a result, there are areas in which Andean deformation prevails and areas where the influence of the Salta Rift is the main control on deformation patterns.
Between 23 and 24°S, lithological and structural heterogeneities imposed by the Lomas de Olmedo sub-basin (Salta Rift basin) affect the development of the Eastern Cordillera fold-and-thrust belt. The inverted northern margin of the sub-basin now forms the southern boundary of the intermontane Cianzo basin. The former western margin of the sub-basin is located at the confluence of the Subandean Zone, the Santa Barbara System and the Eastern Cordillera. Here, the Salta Rift basin architecture is responsible for the distribution of these morphotectonic provinces. In this study we use a multi-method approach consisting of low-temperature (U-Th-Sm)/He and apatite fission track thermochronology, detrital geochronology, structural and sedimentological analyses to investigate the Mesozoic structural inheritance of the Lomas de Olmedo sub-basin and Cenozoic exhumation patterns.
Characterization of the extension-related Tacurú Group as an intermediate succession between Paleozoic basement and the syn-rift infill of the Lomas de Olmedo sub-basin reveals a Jurassic maximum depositional age. Zircon (U-Th-Sm)/He cooling ages record a pre-Cretaceous onset of exhumation for the rift shoulders in the northern part of the sub-basin, whereas the western shoulder shows a more recent onset (140–115 Ma). Variations in the sedimentary thickness of syn- and post-rift strata document the evolution of accommodation space in the sub-basin. While the thickness of syn-rift strata increases rapidly toward the northern basin margin, the post-rift strata thickness decreases toward the margin and forms a condensed section on the rift shoulder.
Inversion of Salta Rift structures commenced between the late Oligocene and Miocene (24–15 Ma) in the ranges surrounding the Cianzo basin. The eastern and western limbs of the Cianzo syncline, located in the hanging wall of the basin-bounding Hornocal fault, show diachronous exhumation. At the same time, western fault blocks of the Tilcara Range, south of the Cianzo basin, began exhuming in the late Oligocene to early Miocene (26–16 Ma). Eastward propagation to the frontal thrust and to the Paleozoic strata east of the Tilcara Range occurred in the middle Miocene (22–10 Ma) and the late Miocene–early Pliocene (10–4 Ma), respectively.
Residential segregation is a widespread phenomenon that can be observed in almost every major city. In these urban areas, residents with different ethnic or socioeconomic backgrounds tend to form homogeneous clusters. In Schelling's classical segregation model, two types of agents are placed on a grid. An agent is content with its location if the fraction of its neighbors which have the same type as the agent is at least 𝜏, for some 0 < 𝜏 ≤ 1. Discontent agents simply swap their location with a randomly chosen other discontent agent or jump to a random empty location. The model gives a coherent explanation of how clusters can form even if all agents are tolerant, i.e., if they agree to live in mixed neighborhoods. For segregation to occur, all it needs is a slight bias towards agents preferring similar neighbors.
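The jump dynamics described above can be sketched as a short simulation. This is a minimal illustrative sketch with an assumed grid size, occupancy, and Moore neighbourhood, not the exact model variant analysed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

def same_type_fraction(grid):
    # Mean fraction of same-type agents among occupied Moore neighbours.
    fracs = []
    size = grid.shape[0]
    for r in range(size):
        for c in range(size):
            if grid[r, c] == 0:
                continue
            nbrs = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            occupied = np.count_nonzero(nbrs) - 1   # exclude the agent itself
            if occupied:
                same = np.count_nonzero(nbrs == grid[r, c]) - 1
                fracs.append(same / occupied)
    return float(np.mean(fracs))

def simulate(size=20, occupancy=0.9, tau=0.4, steps=30000):
    # 0 = empty cell, 1/2 = the two agent types, placed at random.
    cells = np.zeros(size * size, dtype=int)
    n_agents = int(occupancy * size * size)
    cells[:n_agents] = rng.integers(1, 3, n_agents)
    rng.shuffle(cells)
    grid = cells.reshape(size, size)
    for _ in range(steps):
        r, c = rng.integers(size), rng.integers(size)
        if grid[r, c] == 0:
            continue
        nbrs = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        occupied = np.count_nonzero(nbrs) - 1
        same = np.count_nonzero(nbrs == grid[r, c]) - 1
        if occupied and same / occupied < tau:      # discontent agent jumps
            empties = np.argwhere(grid == 0)
            er, ec = empties[rng.integers(len(empties))]
            grid[er, ec], grid[r, c] = grid[r, c], 0
    return grid

final = simulate(size=15, steps=5000)
```

Even with the mild threshold τ = 0.4, meaning every agent tolerates a majority of unlike neighbours, runs of this kind typically end with clearly visible homogeneous clusters.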
Although the model is well studied, previous research focused on a random process point of view. However, it is more realistic to assume instead that the agents strategically choose where to live. We close this gap by introducing and analyzing game-theoretic models of Schelling segregation, where rational agents strategically choose their locations.
As the first step, we introduce and analyze a generalized game-theoretic model that allows more than two agent types and more general underlying graphs modeling the residential area. We introduce different versions of Swap and Jump Schelling Games. Swap Schelling Games assume that every vertex of the underlying graph serving as a residential area is occupied by an agent and that pairs of discontent agents can swap their locations, i.e., their occupied vertices, to increase their utility. In contrast, for the Jump Schelling Game, we assume that there exist empty vertices in the graph and that agents can jump to these vacant vertices if this increases their utility. We show that the number of agent types as well as the structure of the underlying graph heavily influence the dynamic properties and the tractability of finding an optimal strategy profile.
As a second step, we significantly deepen these investigations for the swap version with 𝜏 = 1 by studying the influence of the underlying topology modeling the residential area on the existence of equilibria, the Price of Anarchy, and the dynamic properties. Moreover, we restrict the movement of agents locally. As a main takeaway, we find that both aspects influence the existence and the quality of stable states.
Furthermore, also for the swap model, we follow sociological surveys and study non-monotone single-peaked utility functions instead of monotone ones, i.e., utility functions that are not monotone in the fraction of same-type neighbors, asking the same core game-theoretic questions. Our results clearly show that moving from monotone to non-monotone utilities yields novel structural properties and different results in terms of the existence and quality of stable states.
In the last part, we introduce an agent-based saturated open-city variant, the Flip Schelling Process, in which agents decide, based on the predominant type in their neighborhood, whether to change their type. We provide a general framework for analyzing the influence of the underlying topology on residential segregation and investigate the probability that an edge is monochrome, i.e., that both incident vertices have the same type, on random geometric and Erdős–Rényi graphs. For random geometric graphs, we prove the existence of a constant c > 0 such that the expected fraction of monochrome edges after the Flip Schelling Process is at least 1/2 + c. For Erdős–Rényi graphs, we show that the expected fraction of monochrome edges after the Flip Schelling Process is at most 1/2 + o(1).
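The monochrome-edge statistic studied here can be illustrated with a small experiment: generate an Erdős–Rényi graph, assign random types, run one synchronous round of majority flips, and measure the fraction of edges whose endpoints agree. The graph size, edge probability, and single synchronous round are simplifying assumptions for illustration; the thesis analyses the process itself, not this exact sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

def flip_schelling_er(n=400, p=0.05):
    # Erdős–Rényi graph G(n, p) as a symmetric boolean adjacency matrix.
    adj = rng.random((n, n)) < p
    adj = np.triu(adj, 1)
    adj = adj | adj.T
    # Random initial types, encoded as -1 / +1.
    types = rng.choice([-1, 1], n)
    # One synchronous flip round: each vertex adopts the predominant type
    # in its neighbourhood (ties keep the current type).
    new = types.copy()
    for v in range(n):
        nbr_sum = types[adj[v]].sum()
        if nbr_sum != 0:
            new[v] = 1 if nbr_sum > 0 else -1
    return adj, new

def monochrome_fraction(adj, types):
    # Fraction of edges whose two endpoints carry the same type.
    i, j = np.nonzero(np.triu(adj, 1))
    return float(np.mean(types[i] == types[j]))

adj, types = flip_schelling_er()
frac = monochrome_fraction(adj, types)
```

On dense random graphs such as these, simulations of this kind stay close to the 1/2 baseline, consistent with the upper bound of 1/2 + o(1) stated above, whereas geometric graphs give the local structure that pushes the fraction strictly above 1/2.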
With the implementation of intense, short-pulsed light sources in recent years, the powerful technique of resonant inelastic X-ray scattering (RIXS) has become feasible for a wide range of experiments on femtosecond dynamics in correlated materials and molecules.
In this thesis I investigate the potential of bringing RIXS into the fluence regime of nonlinear X-ray–matter interactions, focusing especially on the impact of stimulated scattering on RIXS in transition metal systems in a transmission spectroscopy geometry around transition metal L-edges.
After presenting the RIXS toolbox and the capabilities of free electron laser light sources for ultrafast intense X-ray experiments, the thesis explores an experiment designed to understand the impact of stimulated scattering on diffraction and direct-beam transmission spectroscopy in a CoPd multilayer system. The experiments require short X-ray pulses that can only be generated at free electron lasers (FELs). There, the pulses are not only short but also very intense, which opens the door to nonlinear X-ray–matter interactions. In the second part of this thesis, we investigate observations in the nonlinear interaction regime, look at potential difficulties for classic spectroscopy, and investigate possibilities to enhance RIXS through stimulated scattering. A study on stimulated RIXS is presented, in which we investigate the light-field-intensity-dependent CoPd demagnetization in transmission as well as scattering geometry. Thereby we show the first direct observation of stimulated RIXS as well as light-field-induced nonlinear effects, namely the breakdown of scattering intensity and the increase in sample transmittance. The topic is of ongoing interest and will only increase in relevance as more free electron lasers are planned and the number of experiments at such light sources continues to grow.
Finally, we present a discussion of the accessibility of small DOS shifts in the absorption band of transition metal complexes through stimulated resonant X-ray scattering. As such shifts occur, for example, in surface states, this finding could extend the experimental selectivity of NEXAFS and RIXS to the detection of surface states. In this theoretical study, we show how stimulation can indeed enhance the visibility of DOS shifts through the detection of stimulated spectral shifts and enhancements. We also forecast the observation of stimulated enhancements in resonant excitation experiments at FEL sources in systems with a high density of states just below the Fermi edge and in systems with an occupied-to-unoccupied DOS ratio in the valence band above 1.
Stars under influence: evidence of tidal interactions between stars and substellar companions
(2023)
Tidal interactions occur between gravitationally bound astrophysical bodies. If their spatial separation is sufficiently small, the bodies can induce tides on each other, leading to angular momentum transfer and altering the evolutionary path the bodies would have followed had they been single objects. Tidal processes are well established in the Solar System's planet-moon systems and in close stellar binary systems. But how do stars behave if they are orbited by a substellar companion (e.g., a planet or a brown dwarf) on a tight orbit?
Typically, a substellar companion inside the corotation radius of a star will migrate toward the star as it loses orbital angular momentum. On the other hand, the star will gain angular momentum which has the potential to increase its rotation rate. The effect should be more pronounced if the substellar companion is more massive. As the stellar rotation rate and the magnetic activity level are coupled, the star should appear more magnetically active under the tidal influence of the orbiting substellar companion. However, the difficulty in proving that a star has a higher magnetic activity level due to tidal interactions lies in the fact that (I) substellar companions around active stars are easier to detect if they are more massive, leading to a bias toward massive companions around active stars and mimicking the tidal interaction effect, and that (II) the age of a main-sequence star cannot be easily determined, leaving the possibility that a star is more active due to its young age.
In our work, we overcome these issues by employing wide stellar binary systems in which one star hosts a substellar companion and the other star provides the magnetic activity baseline for the host star, assuming the two stars have coevolved; the companion star thereby indicates the host's activity level if tidal interactions have no effect on it. Firstly, we find that extrasolar planets can noticeably increase the host star's X-ray luminosity and that the effect is more pronounced if the exoplanet is at least Jupiter-like in mass and close to the star. Further, we find that a brown dwarf has an even stronger effect, as expected, and that the X-ray surface flux difference between the host star and the wide stellar companion is a significant outlier when compared to a large sample of similar wide binary systems without any known substellar companions. This result proves that wide binary systems hosting substellar companions can be good tools to reveal the tidal effect on host stars, and also shows that typical stellar age indicators such as activity or rotation cannot be used for these stars. Finally, knowing that the activity difference is a good tracer of the substellar companion's tidal impact, we develop an analytical method to calculate the modified tidal quality factor Q' of individual host stars, which defines the tidal dissipation efficiency in the convective envelope of a given main-sequence star.
Carbonates carried in subducting slabs may play a major role in sourcing and storing carbon in the deep Earth's interior. Current estimates indicate that between 40 and 66 million tons of carbon per year enter subduction zones, but it is uncertain how much of it reaches the lower mantle. It appears that most of this carbon is extracted from subducting slabs at the mantle wedge and only a limited amount continues deeper and eventually reaches the deep mantle. However, estimates of deeply subducted carbon range broadly from 0.0001 to 52 million tons of carbon per year. This disparity is primarily due to the limited understanding of the survival of carbonate minerals during their transport to deep mantle conditions. Indeed, carbon has a very low solubility in mantle silicates and is therefore expected to be stored primarily in accessory phases such as carbonates. Among these carbonates, magnesite (MgCO3), as a single phase, is the most stable under all mantle conditions. However, experimental investigation of the stability of magnesite in contact with SiO2 at lower mantle conditions suggests that magnesite is stable only along a cold subducted slab geotherm. Furthermore, our understanding of magnesite's stability when interacting with more complex mantle silicate phases remains incomplete. In the first part of this dissertation, laser-heated diamond anvil cell and multi-anvil apparatus experiments were performed to investigate the stability of magnesite in contact with iron-bearing mantle silicates. Sub-solidus reactions, melting, decarbonation, and diamond formation were examined from shallow to mid-lower mantle conditions (25 to 68 GPa; 1300 to 2000 K). Multi-anvil experiments at 25 GPa show the formation of carbonate-rich melt, bridgmanite, and stishovite, with melting occurring at temperatures corresponding to all geotherms except the coldest one.
In situ X-ray diffraction in laser-heated diamond anvil cell experiments shows crystallization of bridgmanite and stishovite, but no melt phase was detected in situ at high temperatures. To detect decarbonation phases such as diamond, Raman spectroscopy was used. Crystallization of diamonds is observed as a sub-solidus process even at temperatures at and below the coldest slab geotherm (1350 K at 33 GPa). Data obtained from this work suggest that magnesite is unstable in contact with the surrounding peridotitic mantle in the uppermost lower mantle. The presence of magnesite instead induces melting under oxidized conditions and/or fosters diamond formation under more reduced conditions at depths of ∼700 km. Consequently, carbonates will be removed from carbonate-rich slabs at shallow lower mantle conditions, where subducted slabs can stagnate. The transport of carbonate to greater depths will therefore be restricted, supporting the presence of a barrier to carbon subduction at the top of the lower mantle. Moreover, the reduction of magnesite to form diamonds provides additional evidence that super-deep diamond crystallization is related to the reduction of carbonates or carbonate-rich melt.
The second part of this dissertation presents the development of a portable laser-heating system optimized for X-ray emission spectroscopy (XES) or nuclear inelastic scattering (NIS) spectroscopy with signal collection at near 90°. The laser-heated diamond anvil cell is the only static pressure device that can replicate the pressures and temperatures of the Earth's lower mantle and core. The high temperatures are reached by focusing high-powered lasers on the sample contained between the diamond anvils. Moreover, the transparency of diamond to X-rays enables in situ X-ray spectroscopy measurements that probe the sample under high-temperature and high-pressure conditions. The development of portable laser-heating systems has therefore brought high-pressure and high-temperature research together with high-resolution X-ray spectroscopy techniques at synchrotron beamlines that lack a dedicated, permanent laser-heating system. A general description of the system is provided, as well as details on the use of a parabolic mirror as a reflective imaging objective for on-axis laser heating and radiospectrometric temperature measurements with zero attenuation of the incoming X-rays. The parabolic mirror improves the accuracy of temperature measurements, free from chromatic aberrations over a wide spectral range, and its perforation permits in situ X-ray measurements at synchrotron facilities. The parabolic mirror is a well-suited alternative to refractive objectives in laser-heating systems, which will facilitate future applications using CO2 lasers.
The soft-template strategy enables the fabrication of composite nanomaterials with desired functionalities and structures. In this thesis, soft templates, including poly(ionic liquid) nanovesicles (PIL NVs), self-assembled polystyrene-b-poly(2-vinylpyridine) (PS-b-P2VP) particles, and glycopeptide (GP) biomolecules, have been applied to the synthesis of versatile composite particles: PILs/Cu, molybdenum disulfide/carbon (MoS2/C), and GP-carbon nanotubes-metal (GP-CNTs-metal) composites, respectively. Subsequently, their possible applications as efficient catalysts in two representative reactions, i.e. CO2 electroreduction (CO2ER) and the reduction of 4-nitrophenol (4-NP), have been studied.
In the first work, PIL NVs with a tunable particle size of 50 to 120 nm and a shell thickness of 15 to 60 nm have been prepared via one-step free radical polymerization. By increasing the monomer concentration during polymerization, their nanoscopic morphology can evolve from hollow NVs to dense spheres, and finally to directional worms, with a multi-lamellar packing of PIL chains occurring in all samples. The obtained PIL NVs with varied shell thickness have been in situ functionalized with ultra-small Cu nanoparticles (Cu NPs, 1-3 nm) and subsequently employed as electrocatalysts for CO2ER. The hollow PILs/Cu composite catalysts exhibit a 2.5-fold enhancement in selectivity towards C1 products compared to the pristine Cu NPs. This enhancement is primarily attributed to the strong electronic interactions between the Cu NPs and the surface functionalities of the PIL NVs. This study opens new perspectives on using nanostructured PILs as electrocatalyst supports for efficient CO2 conversion.
In the second work, a novel approach towards the fast degradation of 4-NP has been developed using porous MoS2/C particles as catalysts, which integrate the intrinsic catalytic properties of MoS2 with its photothermal conversion capability. Various MoS2/C composite particles have been prepared using assembled PS-b-P2VP block copolymer particles as sacrificial soft templates. Intriguingly, the MoS2/C particles exhibit tailored morphologies including pomegranate-like, hollow, and open porous structures. Subsequently, the photothermal conversion performance of these particles has been compared under near-infrared (NIR) light irradiation. When employing the open porous MoS2/C particles as the catalyst for the reduction of 4-NP, the reaction rate constant increased 1.5-fold under light illumination. This catalytic enhancement mainly results from the open porous architecture and the photothermal conversion performance of the MoS2/C particles. The proposed strategy offers new opportunities for efficient photothermal-assisted catalysis.
In the third work, a facile and green approach to the fabrication of GP-CNTs-metal composites has been proposed, which utilizes a versatile GP biomolecule both as a stabilizer for CNTs in water and as a reducing agent for noble metal ions. The abundant hydrogen bonds in GP molecules endow the formed GP-CNTs with excellent plasticity, making polymorphic CNT species available, ranging from dispersion to viscous paste, gel, and even dough with increasing concentration. The GP molecules can reduce metal precursors at room temperature without additional reducing agents, enabling the in situ immobilization of metal NPs (e.g. Au, Ag, and Pd) on the CNT surface. The combination of the excellent catalytic properties of Pd NPs with the photothermal conversion capability of CNTs makes the GP-CNTs-Pd composite a promising catalyst for the efficient degradation of 4-NP. The obtained composite displays a 1.6-fold increase in conversion under NIR light illumination in the reduction of 4-NP, mainly owing to the strong light-to-heat conversion effect of CNTs. Overall, the proposed method opens a new avenue for the synthesis of CNT composites as a sustainable and versatile catalyst platform.
The results presented in this thesis demonstrate the significance of soft templates for the synthesis of versatile composites with tailored nanostructures and functionalities. The investigation of these composite nanomaterials in catalytic reactions reveals their potential for the development of tailored catalysts for emerging catalytic processes, e.g. photothermal-assisted catalysis and electrocatalysis.
The central gas in half of all galaxy clusters shows short cooling times. Assuming unimpeded cooling, this should lead to high star formation and mass cooling rates, which are not observed. Instead, it is believed that condensing gas is accreted by the central black hole, powering an active galactic nucleus (AGN) jet that heats the cluster. The detailed heating mechanism remains uncertain. A promising mechanism invokes cosmic ray protons that scatter on self-generated magnetic fluctuations, i.e. Alfvén waves. Continuous damping of Alfvén waves provides heat to the intracluster medium. Previous work has found steady-state solutions for a large sample of clusters in which cooling is balanced by Alfvénic wave heating. To verify modeling assumptions, we set out to study cosmic ray injection in three-dimensional magnetohydrodynamical simulations of jet feedback in an idealized cluster with the moving-mesh code arepo. We analyze the interaction of jet-inflated bubbles with the turbulent magnetized intracluster medium.
Furthermore, jet dynamics and heating are closely linked to the largely unconstrained jet composition. Interactions of electrons with photons of the cosmic microwave background result in observational signatures that depend on the bubble content. Those recent observations provided evidence for underdense bubbles with a relativistic filling while adopting simplifying modeling assumptions for the bubbles. By reproducing the observations with our simulations, we confirm the validity of their modeling assumptions and as such, confirm the important finding of low-(momentum) density jets.
In addition, the velocity and magnetic field structure of the intracluster medium have profound consequences for bubble evolution and heating processes. As velocity and magnetic fields are physically coupled, we demonstrate that numerical simulations can help link and thereby constrain their respective observables. Finally, we implement the currently preferred accretion model, cold accretion, into the moving-mesh code arepo and study feedback by light jets in a radiatively cooling magnetized cluster. While self-regulation is attained independently of the accretion model, jet density, and feedback efficiency, we find that light jets are preferred in order to reproduce the observed cold gas morphology.
Cosmic rays (CRs) constitute an important component of the interstellar medium (ISM) of galaxies and are thought to play an essential role in governing their evolution. In particular, they are able to impact the dynamics of a galaxy by driving galactic outflows or heating the ISM, thereby affecting the efficiency of star formation. Hence, in order to understand galaxy formation and evolution, we need to accurately model this non-thermal constituent of the ISM. But except in our local environment within the Milky Way, we cannot measure CRs directly in galaxies. However, there are many ways to observe CRs indirectly via the radiation they emit as they interact with magnetic fields, interstellar radiation fields, and the ISM.
In this work, I develop a numerical framework to calculate the spectral distribution of CRs in simulations of isolated galaxies, assuming a steady state between injection and cooling. Furthermore, I calculate the non-thermal emission arising from the modelled CR proton and electron spectra, ranging from radio wavelengths up to the very-high-energy gamma-ray regime.
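The steady-state assumption, injection balanced by cooling, can be illustrated with a toy calculation (all functions and numbers below are illustrative, not the thesis code): for continuous losses b(E) and an injection spectrum Q(E), the steady-state spectrum is N(E) = b(E)^-1 * integral of Q(E') from E to E_max, so a power-law injection E^-alpha cooled by b proportional to E^2 steepens to E^-(alpha+1):

```python
import math

def steady_state_spectrum(energies, q, b, e_max, n_int=2000):
    """Steady-state spectrum N(E) = (1/b(E)) * integral of q from E to e_max,
    i.e. continuous injection q balanced by energy losses b."""
    result = []
    for e in energies:
        # trapezoidal rule on a log-spaced grid between e and e_max
        xs = [e * (e_max / e) ** (i / n_int) for i in range(n_int + 1)]
        integral = sum(0.5 * (q(xs[i]) + q(xs[i + 1])) * (xs[i + 1] - xs[i])
                       for i in range(n_int))
        result.append(integral / b(e))
    return result

alpha = 2.2                        # illustrative injection index
q = lambda e: e ** -alpha          # power-law injection
b = lambda e: e ** 2               # synchrotron / inverse-Compton-like losses
n1, n10 = steady_state_spectrum([1.0, 10.0], q, b, 1e6)
slope = math.log(n10 / n1) / math.log(10.0)   # approaches -(alpha + 1)
```

The measured logarithmic slope recovers the expected cooled index of about -(alpha + 1), the classic signature of loss-dominated CR populations.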
I apply this code to a number of high-resolution magneto-hydrodynamical (MHD) simulations of isolated galaxies that include CRs. This allows me to study their CR spectra and compare them to measurements of the CR proton and electron spectra by the Voyager 1 probe and the AMS-02 instrument in order to reveal the origin of the measured spectral features.
Furthermore, I provide detailed emission maps, luminosities, and spectra of the non-thermal emission from our simulated galaxies, which range from dwarfs to Milky Way analogues to starburst galaxies at different evolutionary stages. I successfully reproduce the observed relations between the radio and gamma-ray luminosities and the far-infrared (FIR) emission of star-forming (SF) galaxies, the latter being a good tracer of the star-formation rate. I find that highly SF galaxies are close to the limit where their CR population would lose all of its energy to radiation, whereas CRs tend to escape galaxies with low star formation more quickly. On top of that, I investigate the properties of CR transport that are needed to match the observed gamma-ray spectra.
Furthermore, I uncover the underlying processes that allow the FIR-radio correlation (FRC) to be maintained even in starburst galaxies, and find that thermal free-free emission naturally explains the observed radio spectra in SF galaxies like M82 and NGC 253, thus solving the riddle of flat radio spectra that had been proposed to contradict the observed tight FRC.
Lastly, I scrutinise the steady-state modelling of the CR proton component by investigating, for the first time, the influence of spectrally resolved CR transport in MHD simulations on the hadronic gamma-ray emission of SF galaxies, revealing new insights into the observational signatures of CR transport, both spectral and spatial.
Self-efficacy reflects the self-belief that one can persistently perform difficult and novel tasks while coping with adversity. As such beliefs reflect how individuals behave, think, and act, they are key for successful entrepreneurial activities. While existing literature mainly analyzes the influence of the task-related construct of entrepreneurial self-efficacy, we take a different perspective and investigate, based on a representative sample of 1,405 German business founders, how the personality characteristic of generalized self-efficacy influences start-up performance as measured by a broad set of business outcomes up to 19 months after business creation. Outcomes include start-up survival and entrepreneurial income, as well as growth-oriented outcomes such as job creation and innovation. We find statistically significant and economically important positive effects of high scores of self-efficacy on start-up survival and entrepreneurial income, which become even stronger when focusing on the growth-oriented outcome of innovation. Furthermore, we observe that generalized self-efficacy is similarly distributed between female and male business founders, with effects being partly stronger for female entrepreneurs. Our findings are important for policy instruments that are meant to support firm growth by facilitating the design of more target-oriented offers for training, coaching, and entrepreneurial incubators.
Both horizontal-to-vertical (H/V) spectral ratios and the spatial autocorrelation method (SPAC) have proven to be valuable tools to gain insight into local site effects from ambient noise measurements. Here, the two methods are employed to assess the subsurface velocity structure at the Piano delle Concazze area on Mt Etna. Volcanic tremor records from an array of 26 broadband seismometers are processed, and a strong variability of H/V ratios during periods of increased volcanic activity is found. From the spatial distribution of H/V peak frequencies, a geologic structure in the north-east of Piano delle Concazze is imaged, which is interpreted as the Ellittico caldera rim. The method is extended to include both velocity data from the broadband stations and distributed acoustic sensing data from a co-located 1.5 km long fibre optic cable. High maximum amplitude values of the resulting ratios along the trajectory of the cable coincide with known faults. The outcome also indicates previously unmapped parts of a fault. The geologic interpretation is in good agreement with inversion results from magnetic survey data. Using the neighborhood algorithm, spatial autocorrelation curves obtained from the modified SPAC are inverted, alone and jointly with the H/V peak frequencies, for 1D shear-wave velocity profiles. The obtained models are largely consistent with published models and validate the results from the fibre optic cable.
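As a rough illustration of the H/V technique (a toy sketch, not the processing chain used in the study), one can divide the quadratic mean of the two horizontal amplitude spectra by the vertical spectrum; a site resonance then appears as a peak in the ratio. The direct DFT and the synthetic comb signals below are purely illustrative:

```python
import cmath
import math

def amplitude_spectrum(x):
    """Direct DFT amplitude spectrum (O(n^2); fine for a short demo trace)."""
    n = len(x)
    return [abs(sum(x[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n)))
            for k in range(n // 2)]

def hv_ratio(north, east, vertical, fs):
    """H/V spectral ratio using the quadratic mean of the horizontals."""
    an, ae, av = (amplitude_spectrum(c) for c in (north, east, vertical))
    freqs = [k * fs / len(vertical) for k in range(len(av))]
    hv = [math.sqrt(0.5 * (n * n + e * e)) / v if v > 1e-9 else 0.0
          for n, e, v in zip(an, ae, av)]
    return freqs, hv

# synthetic records: flat spectral comb on the vertical; the horizontals
# carry an amplified 2 Hz component mimicking a site resonance
fs, n = 64.0, 64
comb = (1, 2, 3, 5, 8)
t = [j / fs for j in range(n)]
vertical = [sum(math.cos(2 * math.pi * k * tj) for k in comb) for tj in t]
horiz = [sum((3.0 if k == 2 else 1.0) * math.cos(2 * math.pi * k * tj)
             for k in comb) for tj in t]
freqs, hv = hv_ratio(horiz, horiz, vertical, fs)
peak = freqs[max(range(len(hv)), key=hv.__getitem__)]   # peaks at 2.0 Hz
```

In real ambient-noise processing the spectra are windowed, smoothed, and averaged over many time segments; this sketch only shows why the ratio isolates horizontally amplified frequencies.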
The relevance of physical fitness for children's and adolescents' health is indisputable, and it is crucial to regularly assess and evaluate children's and adolescents' individual physical fitness development to detect potential negative health consequences in time. Physical fitness tests are easy to administer, reliable, and valid, which is why they should be widely used to provide information on the performance development and health status of children and adolescents. When talking about the development of physical fitness, two perspectives can be distinguished. One perspective is how the physical fitness status of children and adolescents has changed over the past decades (i.e., secular trends). The other perspective covers analyses of how physical fitness develops with increasing age due to growth and maturation processes. Although the development of children's and adolescents' physical fitness has been extensively described and analyzed in the literature, some questions remain open that are addressed in the present doctoral thesis.
Previous systematic reviews and meta-analyses have examined secular trends in children's and adolescents' physical fitness. However, considering that those analyses are by now 15 years old and that updates are available for only some components of physical fitness, it is time to re-analyze the literature and examine secular trends for selected components of physical fitness (i.e., cardiorespiratory endurance, muscle strength, proxies of muscle power, and speed). Furthermore, the available studies on children's development of physical fitness, as well as the effects of moderating variables such as age and sex, have been investigated within a long-term ontogenetic perspective. However, the effects of age and sex in the transition from pre-puberty to puberty in the ninth year of life, viewed from a short-term ontogenetic perspective, and the effect of the timing of school enrollment on children's development of physical fitness have not been clearly identified. Therefore, the present doctoral thesis seeks to complement the knowledge of children's and adolescents' physical fitness development by updating the secular trend analysis for selected components of physical fitness, by examining short-term ontogenetic cross-sectional developmental differences in children's physical fitness, and by comparing the physical fitness of older- and younger-than-keyage children versus keyage-children. These findings provide valuable information about children's and adolescents' physical fitness development to help prevent potential deficits in physical fitness as early as possible and consequently ensure a holistic development and a lifelong healthy life.
Initially, a systematic review providing an update on secular trends in selected components of physical fitness (i.e., cardiorespiratory endurance, relative muscle strength, proxies of muscle power, and speed) in children and adolescents aged 6 to 18 years was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement guidelines. To examine short-term ontogenetic cross-sectional developmental differences and to compare the physical fitness of older- and younger-than-keyage children versus keyage-children, physical fitness data of 108,295 keyage-children (i.e., aged 8.00 to 8.99 years), 2,586 younger-than-keyage children (i.e., aged 7.00 to 7.99 years), and 26,540 older-than-keyage children (i.e., aged 9.00 to 9.99 years) from the third grade were analyzed. Physical fitness was assessed through the EMOTIKON test battery, measuring cardiorespiratory endurance (i.e., 6-min-run test), coordination (i.e., star-run test), speed (i.e., 20-m linear sprint test), and proxies of lower (i.e., standing long jump test) and upper (i.e., ball-push test) limb muscle power. Statistical inference was based on Linear Mixed Models.
Findings from the systematic review revealed a large initial improvement and an equally large subsequent decline between 1986 and 2010 as well as a stabilization between 2010 and 2015 in cardiorespiratory endurance, a general trend towards a small improvement in relative muscle strength from 1972 to 2015, an overall small negative quadratic trend for proxies of muscle power from 1972 to 2015, and a small-to-medium improvement in speed from 2002 to 2015. Findings from the cross-sectional studies showed that even in a single prepubertal year of life (i.e., ninth year) physical fitness performance develops linearly with increasing chronological age, boys showed better performances than girls in all physical fitness components, and the components varied in the size of sex and age effects. Furthermore, findings revealed that older-than-keyage children showed poorer performance in physical fitness compared to keyage-children, older-than-keyage girls showed better performances than older-than-keyage boys, and younger-than-keyage children outperformed keyage-children.
Due to the varying secular trends in physical fitness, it is recommended to promote initiatives for physical activity and physical fitness for children and adolescents to prevent adverse effects on health and well-being. More precisely, public health initiatives should specifically consider exercising cardiorespiratory endurance and muscle strength because both components showed strong positive associations with markers of health. Furthermore, the findings implied that physical education teachers, coaches, or researchers can utilize a proportional adjustment to individually interpret physical fitness of prepubertal school-aged children. Special attention should be given to the promotion of physical fitness of older-than-keyage children because they showed poorer performance in physical fitness than keyage-children. Therefore, it is necessary to specifically consider this group and provide additional health and fitness programs to reduce their deficits in physical fitness experienced during prior years to guarantee a holistic development.
Search for light primordial black holes with VERITAS using gamma-ray and optical observations
(2023)
The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is an array of four imaging atmospheric Cherenkov telescopes (IACTs). VERITAS is sensitive to very-high-energy gamma rays in the range of 100 GeV to >30 TeV. Hypothesized primordial black holes (PBHs) are attractive targets for IACTs. If they exist, their potential cosmological impact reaches beyond their candidacy as constituents of dark matter. The sublunar mass window is the largest unconstrained range of PBH masses. This thesis aims to develop novel concepts for searching for light PBHs with VERITAS. PBHs below the sublunar window lose mass due to Hawking radiation and would evaporate at the end of their lifetime, producing a short burst of gamma rays. PBHs that formed with masses of about 10^15 g would be evaporating today. Detecting these signals would not only confirm the existence of PBHs but also prove the theory of Hawking radiation. This thesis probes archival VERITAS data recorded between 2012 and 2021 for possible PBH signals. This work presents a new automatic approach to assess the quality of the VERITAS data. The array-trigger rate and far-infrared temperature are well suited to identify periods of poor data quality. These are masked by time cuts to obtain a consistent and clean dataset containing about 4222 hours. PBH evaporations could occur at any location in the field of view and at any time within these data, so only a blind search can identify such short signals. This thesis implements a data-driven, deep-learning-based method to search for short transient signals with VERITAS. It does not depend on modelling of the effective area and radial acceptance. This work presents the first application of this method to actual observational IACT data. This thesis develops new concepts dealing with the specifics of the data and the transient detection method, reflected in the developed data preparation pipeline and search strategies.
After correction for trial factors, no candidate PBH evaporation is found in the data. Thus, new constraints on the local rate of PBH evaporations are derived: at the 99% confidence level, it is below 1.07 * 10^5 pc^-3 yr^-1. This constraint, obtained with the new, independent analysis approach, is in the range of existing limits on the evaporation rate.
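The quoted 99% confidence limit is a one-sided upper limit on a rate. The generic construction for a counting experiment (a sketch with purely illustrative numbers; the thesis value folds in the actual exposure and trial corrections) inverts the Poisson tail probability:

```python
import math

def poisson_upper_limit(n_obs, cl=0.99):
    """One-sided classical upper limit on a Poisson mean given n_obs counts:
    find mu such that P(N <= n_obs | mu) = 1 - cl, by bisection."""
    def cdf(mu):
        return sum(math.exp(-mu) * mu ** k / math.factorial(k)
                   for k in range(n_obs + 1))
    lo, hi = 0.0, 10.0 * (n_obs + 5)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) > 1 - cl:   # cdf decreases with mu
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu_up = poisson_upper_limit(0, cl=0.99)   # = -ln(0.01), about 4.61 counts
# an upper limit on the local evaporation rate then follows by dividing
# by the searched volume-time exposure (number purely hypothetical):
exposure_pc3_yr = 1.0e-4
rate_limit = mu_up / exposure_pc3_yr
```

For zero observed candidates the limit reduces to the closed form -ln(1 - CL); the bisection version also covers the case of a few observed events.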
This thesis also investigates an alternative novel approach to searching for PBHs with IACTs. Above the sublunar window, the PBH abundance is constrained by optical microlensing studies. The sampling speed, which is of the order of minutes to hours for traditional optical telescopes, is a limiting factor in extending these limits to lower masses. IACTs are also powerful instruments for fast-transient optical astronomy, with sampling down to O(ns). This thesis investigates whether IACTs might constrain the sublunar window with optical microlensing observations. The study confirms that, in principle, the fast sampling speed could extend microlensing searches into the sublunar mass window. However, the limiting factor for IACTs is their modest sensitivity to changes in optical fluxes. This thesis presents the expected rate of detectable events for VERITAS as well as prospects for possible next-generation IACTs. For VERITAS, the rate of detectable microlensing events in the sublunar range is ~10^-6 per year of observation time; for a future instrument 100 times more sensitive, it is ~0.05 events per year.
Sulfur is essential for the functionality of several important biomolecules in humans, such as iron-sulfur clusters, tRNAs, the molybdenum cofactor, and some vitamins. The trafficking of sulfur involves proteins collectively called sulfurtransferases, among them TUM1, MOCS3, and NFS1.
This research investigated the role of TUM1 in molybdenum cofactor (Moco) biosynthesis and cytosolic tRNA thiolation in humans. The rhodanese-like protein MOCS3 and the L-cysteine desulfurase NFS1 have previously been demonstrated to interact with TUM1. These interactions suggested a dual function of TUM1 in sulfur transfer for Moco biosynthesis and cytosolic tRNA thiolation. TUM1 deficiency has been implicated in a rare inheritable disorder known as mercaptolactate-cysteine disulfiduria (MCDU), which is associated with a mental disorder whose symptoms resemble those of sulfite oxidase deficiency, characterised by neurological impairment. Therefore, the role of TUM1 as a sulfurtransferase in humans was investigated in CRISPR/Cas9-generated TUM1 knockout HEK 293T cell lines.
For the first time, TUM1 was implicated in Moco biosynthesis in humans by quantifying the intermediate cPMP and Moco using HPLC. Compared to the wild type, the TUM1 knockout cell lines accumulated cPMP and contained less Moco. The effect of the TUM1 knockout on the activity of a Moco-dependent enzyme, sulfite oxidase, was also investigated. Sulfite oxidase is essential for the detoxification of sulfite to sulfate. Sulfite oxidase activity and protein abundance were reduced owing to the lower availability of Moco. This shows that TUM1 is essential for efficient sulfur transfer for Moco biosynthesis. A reduction in cystathionine γ-lyase in TUM1 knockout cells was also quantified, a possible coping mechanism of the cell against sulfite production through cysteine catabolism.
Secondly, the involvement of TUM1 in tRNA thio-modification at the wobble uridine-34 was demonstrated by quantifying the amounts of mcm5s2U and mcm5U via HPLC. In the nucleoside analysis, mcm5s2U was reduced and mcm5U accumulated in TUM1 knockout cells. Moreover, exogenous treatment with NaHS, a hydrogen sulfide donor, rescued the Moco biosynthesis, cytosolic tRNA thiolation, and cell proliferation deficits of TUM1 knockout cells.
Further, TUM1 was shown to impact mitochondrial bioenergetics, as measured by the oxygen consumption rate and the extracellular acidification rate (ECAR) on a Seahorse Cell Mito Stress analyzer. A reduction in total ATP production was also measured. This reveals how important TUM1 is for H2S biosynthesis in the mitochondria of HEK 293T cells.
Finally, the inhibition of NFS1 in HEK 293T cells and of purified NFS1 protein by 2-methylene-3-quinuclidinone (MQ) was demonstrated via spectrophotometric and radioactivity quantification. Inhibition of NFS1 by MQ further reduced the activity of the iron-sulfur cluster-dependent enzyme aconitase.
This research paper provides an overview of the current state of MOOCs (massive open online courses) and universities in Austria, focusing on the national MOOC platform iMooX.at. The study begins by presenting the results of an analysis of the performance agreements of 22 Austrian public universities for the period 2022–2024, with a specific focus on the mention of MOOC activities and iMooX. The authors find that 12 of 22 (55 %) Austrian public universities use at least one of these terms, indicating a growing interest in MOOCs and online learning. Additionally, the authors analyze internal documentation data to share insights into how many universities in Austria have produced and/or used a MOOC on the iMooX platform since its launch in 2014. These findings provide a valuable measure of the current usage and monitoring of MOOCs and iMooX among Austrian higher education institutions. Overall, this research contributes to a better understanding of the current state of MOOCs and their integration within Austrian higher education.
Over the last decades, interest in the impact of the intestinal microbiota on host health has steadily increased. Diet is a major factor that influences the gut microbiota and thereby indirectly affects human health. For example, a high-fat diet rich in saturated fatty acids led to an intestinal proliferation of the colitogenic bacterium Bilophila (B.) wadsworthia by stimulating the release of the bile acid taurocholate (TC). TC contains the sulfonated head group taurine, which is converted to hydrogen sulfide (H2S) by B. wadsworthia. In a colitis-prone murine model (IL10-/- mice), the bloom of B. wadsworthia was accompanied by an exacerbation of intestinal inflammation. B. wadsworthia is able to convert taurine as well as other sulfonates to H2S, indicating a potential association between sulfonate utilization and the stimulation of colitogenic bacteria.
This potential link raised the question of whether dietary sulfonates or their sulfonated metabolites stimulate the growth of colitogenic bacteria such as B. wadsworthia and whether these bacteria convert sulfonates to H2S. Besides taurine, which is present in meat, fish and lifestyle beverages, other dietary sulfonates are part of daily human nutrition. Sulfolipids such as sulfoquinovosyldiacylglycerols (SQDG) are highly abundant in salad, parsley and the cyanobacterium Arthrospira platensis (Spirulina). Based on previous findings, Escherichia (E.) coli releases the polar head group sulfoquinovose (SQ) from SQDG. Moreover, E. coli is able to convert SQ to 2,3-dihydroxypropane-1-sulfonate (DHPS) under anoxic conditions. DHPS is in turn converted to H2S by B. wadsworthia or by other potentially harmful gut bacteria such as members of the genus Desulfovibrio. However, only a few studies report the conversion of sulfonates to H2S by bacteria directly isolated from the human intestinal tract. Most sulfonate-utilizing bacteria were obtained from environmental sources such as soil or lake sediment, or from potentially intestinal sources such as sewage.
In the present study, fecal slurries from healthy human subjects were incubated with sulfonates under strictly anoxic conditions, using formate and lactate as electron donors. Fecal slurries that converted sulfonates to H2S were used as a source for the isolation of H2S-forming bacteria. Isolates were identified based on their 16S ribosomal RNA (16S rRNA) gene sequence. In addition, conventional C57BL/6 mice were fed a semisynthetic diet supplemented with SQDG-rich Spirulina (SD) or a Spirulina-free control diet (CD). During the intervention, body weight and water and food intake were monitored and fecal samples were collected. After three weeks, the mice were killed; organ weight and size were measured, intestinal sulfonate concentrations were quantified, gut microbiota composition was determined, and parameters of intestinal and hepatic fat metabolism were analyzed.
Human fecal slurries converted taurine, isethionate, cysteate, 3-sulfolactate, SQ and DHPS to H2S. However, inter-individual differences in the degradation of these sulfonates were observed. Taurine, isethionate, and 3-sulfolactate were utilized by the fecal microbiota of all donors, while SQ, DHPS and cysteate were converted to H2S only by the microbiota of certain individuals. Bacterial isolates from human feces able to convert sulfonates to H2S were identified as taurine-utilizing Desulfovibrio strains, taurine- and isethionate-utilizing B. wadsworthia, or SQ- and 3-sulfolactate-utilizing E. coli. In addition, a co-culture of E. coli and B. wadsworthia led to the complete degradation of SQ to H2S, with DHPS as an intermediate. Of the human fecal isolates, B. wadsworthia and Desulfovibrio are potentially harmful. E. coli strains can also be pathogenic, but the E. coli strains isolated from human feces were identified as commensal gut bacteria.
Feeding SD to mice increased the cecal and fecal SQ concentrations and altered the microbiota composition, but the relative abundance of SQDG- or SQ-converting bacteria and of colitogenic bacteria was not enriched in mice fed SD for 21 days. SD did not affect the relative abundance of the Enterobacteriaceae, to which the SQDG- and SQ-utilizing E. coli strains belong. Furthermore, the abundance of B. wadsworthia in feces decreased from day 2 to day 9 but recovered afterwards in the same mice. In the cecum, the family Desulfovibrionaceae, to which B. wadsworthia and Desulfovibrio belong, was reduced. No changes in the number of B. wadsworthia in cecal contents or of Desulfovibrionaceae in feces were observed. SD led to a mild activation of the immune system, which was not observed in control mice fed CD. Mice fed SD had an increased body weight, a higher adipose tissue weight, and a decreased liver weight compared to the control mice, suggesting an impact of Spirulina supplementation on fat metabolism. However, the expression levels of genes involved in intestinal and hepatic intracellular lipid uptake and availability were reduced. Further investigations of lipid metabolism at the protein level could help to clarify these discrepancies.
In summary, humans differ in the ability of their fecal microbiota to utilize dietary sulfonates. While sulfonates stimulated the proliferation of potentially colitogenic isolates from human fecal slurries, the increased availability of SQ in Spirulina-fed conventional mice did not lead to an enrichment of such bacteria. The presence or absence of these bacteria may explain the inter-individual differences in sulfonate conversion observed for fecal slurries. This work provides new insights into the ability of intestinal bacteria to utilize sulfonates and thus contributes to a better understanding of microbiota-mediated effects on dietary sulfonate utilization. Interestingly, feeding the Spirulina-supplemented diet led to body-weight gain in mice during the first two days of the intervention, the reasons for which remain unknown.
Ribosomes decode mRNA to synthesize proteins. Once considered static executing machines, ribosomes are now viewed as dynamic modulators of translation. Increasingly detailed analyses of structural ribosome heterogeneity have led to a paradigm shift toward ribosome specialization for selective translation. As sessile organisms, plants cannot escape harmful environments and have evolved strategies to withstand them. Plant cytosolic ribosomes are in some respects more diverse than those of metazoans. This diversity may contribute to plant stress acclimation. The goal of this thesis was to determine whether plants use ribosome heterogeneity to regulate protein synthesis through specialized translation. I focused on temperature acclimation, specifically on shifts to low temperatures. During cold acclimation, Arabidopsis ceases growth for seven days while establishing the responses required to resume growth. Earlier results indicate that ribosome biogenesis is essential for cold acclimation. REIL mutants (reil-dkos) lacking a 60S maturation factor do not acclimate successfully and do not resume growth. Using these genotypes, I ascribed cold-induced defects of ribosome biogenesis to the assembly of the polypeptide exit tunnel (PET) by performing spatial statistics of rProtein changes mapped onto the plant 80S structure. I discovered that growth cessation and PET remodeling also occur in barley, suggesting a general cold response in plants. Cold-triggered PET remodeling is consistent with the function of Rei-1, a REIL homolog of yeast, which performs PET quality control. Using seminal data on ribosome specialization, I show that yeast remodels the tRNA entry site of ribosomes upon a change of carbon source and demonstrate that spatially constrained remodeling of ribosomes in metazoans may modulate protein synthesis.
I argue that regional remodeling may be a form of ribosome specialization and show that heterogeneous cytosolic polysomes accumulate after cold acclimation, leading to shifts in translational output that differ between the wild type and reil-dkos. I found that the heterogeneous complexes consist of newly synthesized and reused proteins. I propose that tailored ribosome complexes enable free 60S subunits to select specific 48S initiation complexes for translation. Through ribosome remodeling, cold-acclimated ribosomes synthesize a novel proteome consistent with known mechanisms of cold acclimation. The main hypothesis arising from my thesis is that heterogeneous/specialized ribosomes alter translation preferences, adjust the proteome and thereby activate plant programs for successful cold acclimation.
The Adaptive Force (AF) reflects the neuromuscular capacity to adapt to external loads during holding muscle actions, which resemble motions in real life and sports. The maximal isometric AF (AFisomax) was considered the most relevant parameter and was assumed to be of major importance regarding injury mechanisms and the development of musculoskeletal pain. The aim of this study was to investigate the behavior of different torque parameters over the course of 30 repeated maximal AF trials. In addition, maximal holding and maximal pushing isometric muscle actions were compared. A side consideration was the behavior of torques over the course of repeated AF actions when comparing strength and endurance athletes. The elbow flexors of n = 12 males (six strength/six endurance athletes, non-professionals) were measured 30 times (120 s rest) using a pneumatic device. Maximal voluntary isometric contraction (MVIC) was measured pre and post. MVIC, AFisomax, and AFmax (maximal torque of one AF measurement) were evaluated under different considerations and with statistical tests. AFmax and AFisomax declined over the course of the 30 trials [slope of regression (mean ± standard deviation): AFmax = −0.323 ± 0.263; AFisomax = −0.45 ± 0.45]. The decline from start to end amounted to −12.8% ± 8.3% (p < 0.001) for AFmax and −25.41% ± 26.40% (p < 0.001) for AFisomax. The AF parameters declined more in strength than in endurance athletes. Thereby, strength athletes showed a rather steady decline for AFmax and a plateau formation for AFisomax after 15 trials. In contrast, endurance athletes reduced their AFmax especially over the first five trials and remained at a rather similar level for AFisomax. The maximum AFisomax across all 30 trials amounted to 67.67% ± 13.60% of MVIC (p < 0.001, n = 12), supporting the hypothesis of two types of isometric muscle action (holding vs. pushing).
The findings provided the first data on the behavior of torque parameters after repeated isometric–eccentric actions and revealed further insights into neuromuscular control strategies. Additionally, they highlight the importance of investigating AF parameters in athletes based on the different behaviors compared to MVIC. This is assumed to be especially relevant regarding injury mechanisms.
Many widely used observational data sets are composed of several overlapping instrument records. While data inter-calibration techniques often yield continuous and reliable data for trend analysis, less attention is generally paid to maintaining higher-order statistics such as variance and autocorrelation. A growing body of work uses these metrics to quantify the stability or resilience of a system under study and potentially to anticipate an approaching critical transition in the system. Exploring the degree to which changes in resilience indicators such as the variance or autocorrelation can be attributed to non-stationary characteristics of the measurement process – rather than actual changes in the dynamical properties of the system – is important in this context. In this work we use both synthetic and empirical data to explore how changes in the noise structure of a data set are propagated into the commonly used resilience metrics lag-one autocorrelation and variance. We focus on examples from remotely sensed vegetation indicators such as vegetation optical depth and the normalized difference vegetation index from different satellite sources. We find that time series resulting from mixing signals from sensors with varied uncertainties and covering overlapping time spans can lead to biases in inferred resilience changes. These biases are typically more pronounced when resilience metrics are aggregated (for example, by land-cover type or region), whereas estimates for individual time series remain reliable at reasonable sensor signal-to-noise ratios. Our work provides guidelines for the treatment and aggregation of multi-instrument data in studies of critical transitions and resilience.
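The effect described in this abstract can be illustrated with a minimal sketch (entirely synthetic data, not the satellite records studied here): adding white measurement noise to the second half of a stationary AR(1) series, as a stand-in for a sensor change, inflates the rolling variance and deflates the rolling lag-one autocorrelation even though the underlying dynamics never change.

```python
import numpy as np

def rolling_metrics(x, window):
    """Rolling variance and lag-one autocorrelation, two common resilience indicators."""
    var, ac1 = [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window]
        var.append(np.var(w))
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

rng = np.random.default_rng(0)
n = 2000
phi = 0.8                       # AR(1) coefficient: the system's true dynamics
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Emulate an instrument change: additive white measurement noise in the second half
noise = np.r_[np.zeros(n // 2), rng.normal(0.0, 1.5, n - n // 2)]
y = x + noise

var, ac1 = rolling_metrics(y, window=250)
print(var[:100].mean(), var[-100:].mean())   # variance inflates after the change
print(ac1[:100].mean(), ac1[-100:].mean())   # lag-one autocorrelation drops
```

A naive reading of these two indicators would suggest a resilience change, which is exactly the attribution problem the study addresses.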
Recurrences in past climates
(2023)
Our ability to predict the state of a system relies on its tendency to recur to states it has visited before. Recurrence also pervades common intuitions about the systems we are most familiar with: daily routines, social rituals and the return of the seasons are just a few relatable examples. To this end, recurrence plots (RPs) provide a systematic framework for quantifying the recurrence of states. Despite their conceptual simplicity, they are a versatile tool in the study of observational data. The global climate is a complex system for which an understanding based on observational data is not only of academic relevance, but vital for the persistence of human societies within the planetary boundaries. Contextualizing current global climate change, however, requires observational data far beyond the instrumental period. The palaeoclimate record offers a valuable archive of proxy data but demands methodological approaches that adequately address its complexities. In this regard, the following dissertation aims at devising novel methods and further developing existing ones within the framework of recurrence analysis (RA). The proposed research questions focus on using RA to capture scale-dependent properties in nonlinear time series and on tailoring recurrence quantification analysis (RQA) to characterize seasonal variability in palaeoclimate records ('Palaeoseasonality').
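For readers unfamiliar with the method, a recurrence plot marks all pairs of times at which a trajectory returns close to a previous state, R_ij = Θ(ε − ‖x_i − x_j‖). A minimal sketch on a scalar toy series (the threshold ε and the signal are illustrative choices, not from the dissertation):

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 where states i and j are closer than eps."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances of a scalar series
    return (d < eps).astype(int)

# A periodic signal recurs regularly: its RP shows diagonal lines spaced one period apart
t = np.linspace(0, 8 * np.pi, 400)
R = recurrence_plot(np.sin(t), eps=0.1)

recurrence_rate = R.mean()    # simplest RQA measure: fraction of recurrent pairs
print(round(recurrence_rate, 3))
```

Quantities derived from the line structures of R (e.g. diagonal-line lengths) form the basis of the RQA measures the dissertation extends.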
In the first part of this thesis, we focus on the methodological development of novel approaches in RA. The predictability of nonlinear (palaeo)climate time series is limited by abrupt transitions between regimes that exhibit entirely different dynamical complexity (e.g. the crossing of 'tipping points'). These possibly depend on characteristic time scales. RPs are well established for detecting transitions and for capturing scale dependencies, yet few approaches have combined both aspects. We apply existing concepts from the study of self-similar textures to RPs to detect abrupt transitions, considering the most relevant time scales. This combination of methods further results in the definition of a novel recurrence-based nonlinear dependence measure. Quantifying lagged interactions between multiple variables is a common problem, especially in the characterization of high-dimensional complex systems. The proposed 'recurrence flow' measure of nonlinear dependence offers an elegant way to characterize such couplings. For spatially extended complex systems, the coupled dynamics of local variables result in the emergence of spatial patterns. These patterns tend to recur in time. Based on this observation, we propose a novel method that identifies dynamically distinct regimes of atmospheric circulation based on their recurrent spatial patterns. Bridging the two parts of this dissertation, we next turn to methodological advances of RA for the study of Palaeoseasonality. Observational series of palaeoclimate 'proxy' records involve inherent limitations, such as irregular temporal sampling. We reveal biases in the RQA of time series with a non-stationary sampling rate and propose a correction scheme.
In the second part of this thesis, we proceed with applications in Palaeoseasonality. A review of common and promising time series analysis methods shows that numerous valuable tools exist, but their sound application requires adaptation to archive-specific limitations and the consolidation of transdisciplinary knowledge. Next, we study stalagmite proxy records from the Central Pacific as sensitive recorders of mid-Holocene El Niño-Southern Oscillation (ENSO) dynamics. The records' remarkably high temporal resolution allows links to be drawn between ENSO and seasonal dynamics, quantified by RA. The final study presented here examines how seasonal predictability could play a role in the stability of agricultural societies. The Classic Maya underwent a period of sociopolitical disintegration that has been linked to drought events. Based on seasonally resolved stable isotope records from Yok Balum cave in Belize, we propose a measure of seasonal predictability. It unveils the potential role that declining seasonal predictability could have played in destabilizing the agricultural and sociopolitical systems of Classic Maya populations.
The methodological approaches and applications presented in this work reveal multiple exciting future research avenues, both for RA and the study of Palaeoseasonality.
The East African Rift System (EARS) is a prime example of active tectonics and provides opportunities to examine the stages of continental faulting and landscape evolution. Its southwestern extension is among the most striking expressions of active rifting today; nevertheless, seismotectonic research in the area has been scarce, despite the fundamental importance of neotectonics. Our first study area is located between the Northern Province of Zambia and the southeastern Katanga Province of the Democratic Republic of the Congo. Lakes Mweru and Mweru Wantipa are part of the southwestern extension of the EARS. Fault analysis reveals that, since the Miocene, movements along the active Mweru-Mweru Wantipa Fault System (MMFS) have been largely responsible for the reorganization of the landscape and the drainage patterns across the southwestern branch of the EARS. To investigate the spatial and temporal patterns of fluvial-lacustrine landscape development, we determined in-situ cosmogenic 10Be and 26Al in a total of twenty-six quartzitic bedrock samples collected from knickpoints across the Mporokoso Plateau (south of Lake Mweru) and the eastern part of the Kundelungu Plateau (north of Lake Mweru). Samples from the Mporokoso Plateau and from close to the MMFS provide evidence of temporary burial. By contrast, surfaces located far from the MMFS appear to have remained uncovered since their initial exposure, as they show consistent 10Be and 26Al exposure ages ranging up to ~830 ka. Reconciling the observed burial patterns with morphotectonic and stratigraphic analyses reveals the existence of an extensive paleo-lake during the Pleistocene. Through hypsometric analyses of the dated knickpoints, the potential maximum water level of the paleo-lake is constrained to ~1200 m asl (present lake level: 917 m asl).
High denudation rates (up to ~40 mm ka^-1) along the eastern Kundelungu Plateau suggest that footwall uplift resulting from normal faulting caused river incision, possibly controlling paleo-lake drainage. The lake level was reduced gradually, reaching its current level by ~350 ka.
Parallel to the MMFS in the north, the Upemba Fault System (UFS) extends across the southeastern Katanga Province of the Democratic Republic of the Congo. This part of our research focuses on the geomorphological behavior of the Kiubo Waterfalls, the currently active knickpoint of the Lufira River, which flows into the Upemba Depression. Eleven bedrock samples were collected along the Lufira River and its tributary, the Luvilombo River. In-situ cosmogenic 10Be and 26Al were used to constrain the erodibility constant K of the stream power law. Constraining K allowed us to calculate the knickpoint retreat rate of the Kiubo Waterfalls at ~0.096 m a^-1. Combining the calculated retreat rate of the knickpoint with DNA sequencing of fish populations, we present extrapolation models and estimate the location of the onset of the Kiubo Waterfalls, revealing its connection to the seismicity of the UFS.
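The stream power law behind this calibration relates the erosion rate E to drainage area A and channel slope S, E = K A^m S^n; for n = 1, a knickpoint migrates upstream with celerity v = K A^m. A schematic sketch of that relationship (the K, m and A values below are placeholders for illustration, not the calibrated Lufira parameters):

```python
def knickpoint_celerity(K, A, m=0.5):
    """Upstream knickpoint retreat rate v = K * A**m (stream power law with n = 1)."""
    return K * A ** m

# Placeholder values only (not the study's calibration):
K = 3e-7                      # erodibility constant; units depend on m
for A in (1e8, 1e9, 1e10):    # drainage area in m^2: celerity grows downstream
    print(A, knickpoint_celerity(K, A))
```

The sketch shows why constraining K from dated bedrock surfaces is the key step: once K and m are fixed, the retreat rate follows directly from the drainage area along the river.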
Today, point clouds are among the most important categories of spatial data, as they constitute digital 3D models of the as-is reality that can be created at unprecedented speed and precision. However, their unique properties, i.e., the lack of structure, order, or connectivity information, necessitate specialized data structures and algorithms to leverage their full precision. In particular, this holds true for the interactive visualization of point clouds, which requires balancing hardware limitations regarding GPU memory and bandwidth against a naturally high susceptibility to visual artifacts.
This thesis focuses on concepts, techniques, and implementations of robust, scalable, and portable 3D visualization systems for massive point clouds. To that end, a number of rendering, visualization, and interaction techniques are introduced that extend several basic strategies for decoupling rendering efforts from data management: First, a novel visualization technique that facilitates context-aware filtering, highlighting, and interaction within point cloud depictions. Second, hardware-specific optimization techniques that improve rendering performance and image quality in an increasingly diversified hardware landscape. Third, natural and artificial locomotion techniques for nausea-free exploration in the context of state-of-the-art virtual reality devices. Fourth, a framework for web-based rendering that enables collaborative exploration of point clouds across device ecosystems and facilitates the integration into established workflows and software systems.
In cooperation with partners from industry and academia, the practicability and robustness of the presented techniques are showcased via several case studies using representative application scenarios and point cloud data sets. In summary, the work shows that the interactive visualization of point clouds can be implemented by a multi-tier software architecture with a number of domain-independent, generic system components that rely on optimization strategies specific to large point clouds. It demonstrates the feasibility of interactive, scalable point cloud visualization as a key component for distributed IT solutions that operate with spatial digital twins, providing arguments in favor of using point clouds as a universal type of spatial base data usable directly for visualization purposes.
Reactive eutectic media based on ammonium formate for the valorization of bio-sourced materials
(2023)
In the last several decades, eutectic mixtures of different compositions have been successfully used as solvents for a vast range of chemical processes, and only relatively recently were they discovered to be widespread in nature. As such, they are discussed as a third liquid medium of the living cell, composed of common cell metabolites. Such media may also incorporate water as a eutectic component in order to regulate properties such as enzyme activity or viscosity. Taking inspiration from such sophisticated use of eutectic mixtures, this thesis explores the use of reactive eutectic media (REM) for organic synthesis. Such unconventional media are characterized by the reactivity of their components, which means that the mixture may assume the role of the solvent as well as that of the reactant itself.
The thesis focuses on novel REM based on ammonium formate and investigates their potential for the valorization of bio-sourced materials. The use of REM allows a number of solvent-free reactions to be performed, which entails the benefits of superior atom and energy economy, higher yields and faster rates compared to reactions in solution. This is evident for the Maillard reaction between ammonium formate and various monosaccharides for the synthesis of substituted pyrazines, as well as for a Leuckart-type reaction between ammonium formate and levulinic acid for the synthesis of 5-methyl-2-pyrrolidone. Furthermore, the reaction of ammonium formate with citric acid for the synthesis of as yet undiscovered fluorophores shows that synthesis in REM can open up unexpected reaction pathways.
Another focus of the thesis is the study of water as a third component in the REM. Here, the concept of two different dilution regimes (ternary REM and REM in solvent) proves useful for understanding the influence of water. It is shown that small amounts of water can greatly benefit the reaction by reducing viscosity while at the same time increasing reaction yields.
REM based on ammonium formate and organic acids are employed for the treatment of lignocellulosic biomass. The thesis thereby introduces an alternative approach to lignocellulosic biomass fractionation that promises considerable process intensification through the simultaneous generation of cellulose and lignin as well as the production of value-added chemicals from the REM components. The thesis examines the generated cellulose and the pathway to nanocellulose, and also includes the structural analysis of the extracted lignin.
Finally, the thesis investigates the potential of microwave heating for running chemical reactions in REM and describes the synergy between the two approaches. Microwave heating for chemical reactions and the use of eutectic mixtures as alternative reaction media are two research fields that are often described within the scope of green chemistry. The thesis therefore also contains a closer inspection of this terminology and of its greater goal of sustainability.
Rainfall-triggered landslides are a globally occurring hazard that cause several thousand fatalities per year on average and lead to economic damages by destroying buildings and infrastructure and blocking transportation networks. For people living and governing in susceptible areas, knowing not only where, but also when landslides are most probable is key to inform strategies to reduce risk, requiring reliable assessments of weather-related landslide hazard and adequate warning. Taking proper action during high hazard periods, such as moving to higher levels of houses, closing roads and rail networks, and evacuating neighborhoods, can save lives. Nevertheless, many regions of the world with high landslide risk currently lack dedicated, operational landslide early warning systems.
The mounting availability of temporal landslide inventory data in some regions has increasingly enabled data-driven approaches to estimate landslide hazard on the basis of rainfall conditions. In other areas, however, such data remains scarce, calling for appropriate statistical methods to estimate hazard with limited data. The overarching motivation for this dissertation is to further our ability to predict rainfall-triggered landslides in time in order to expand and improve warning. To this end, I applied Bayesian inference to probabilistically quantify and predict landslide activity as a function of rainfall conditions at spatial scales ranging from a small coastal town, to metropolitan areas worldwide, to a multi-state region, and temporal scales from hourly to seasonal. This thesis is composed of three studies.
In the first study, I contributed to developing and validating statistical models for an online landslide warning dashboard for the small town of Sitka, Alaska, USA. We used logistic and Poisson regressions to estimate daily landslide probability and counts from an inventory of only five reported landslide events and 18 years of hourly precipitation measurements at the Sitka airport. Drawing on community input, we established two warning thresholds for implementation in the dashboard, which uses observed rainfall and US National Weather Service forecasts to provide real-time estimates of landslide hazard.
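The modeling idea in this first study can be sketched as follows: fit a logistic regression of daily landslide occurrence against a rainfall predictor and read off a probability for any forecast rainfall. The data below are synthetic stand-ins (the Sitka models used an inventory of five events and 18 years of hourly precipitation), and the fit uses plain Newton iterations rather than any particular statistics library:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: a daily rainfall predictor (mm) and a rare landslide
# label whose true probability increases with rainfall
rain = rng.gamma(shape=2.0, scale=5.0, size=5000)
p_true = 1.0 / (1.0 + np.exp(-(0.25 * rain - 6.0)))
slide = (rng.random(5000) < p_true).astype(float)

# Logistic regression fitted by Newton's method (iteratively reweighted least squares)
X = np.column_stack([np.ones_like(rain), rain])
w = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
    H = X.T @ (X * (p * (1 - p))[:, None])   # Hessian of the log-likelihood
    w += np.linalg.solve(H, X.T @ (slide - p))

def landslide_probability(r):
    """Estimated daily landslide probability for r mm of rainfall."""
    return 1.0 / (1.0 + np.exp(-(w[0] + w[1] * r)))

print(landslide_probability(5.0), landslide_probability(40.0))
```

A warning threshold then corresponds to the rainfall amount at which this estimated probability crosses a level agreed with the community.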
In the second study, I estimated rainfall intensity-duration thresholds for shallow landsliding for 26 cities worldwide and a global threshold for urban landslides. I found that landslides in urban areas occurred at rainfall intensities that were lower than previously reported global thresholds, and that 31% of urban landslides were triggered during moderate rainfall events. However, landslides in cities with widely varying climates and topographies were triggered above similar critical rainfall intensities: thresholds for 77% of cities were indistinguishable from the global threshold, suggesting that urbanization may harmonize thresholds between cities, overprinting natural variability. I provide a baseline threshold that could be considered for warning in cities with limited landslide inventory data.
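Intensity-duration thresholds of the kind estimated in this second study take the power-law form I = α D^(−β). One simple way to obtain a lower-envelope threshold is a least-squares fit in log-log space shifted down to a low residual quantile; a sketch on synthetic triggering events (the α, β and scatter below are invented, not the study's values or method):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic triggering events scattered above a power law I = 5 * D^(-0.6)
D = 10 ** rng.uniform(0.0, 2.5, 300)                     # duration, 1 h to ~316 h
I = 5.0 * D ** -0.6 * 10 ** rng.uniform(0.0, 0.8, 300)   # mean intensity, mm/h

# Ordinary least squares in log-log space, then shift the intercept down to the
# 5th percentile of residuals to define a lower-envelope threshold
logD, logI = np.log10(D), np.log10(I)
slope, intercept = np.polyfit(logD, logI, 1)
resid = logI - (slope * logD + intercept)
alpha = 10 ** (intercept + np.quantile(resid, 0.05))
beta = -slope

above = np.mean(I >= alpha * D ** -beta)   # ~95% of events lie above the threshold
print(f"I = {alpha:.2f} * D^(-{beta:.2f}), fraction above: {above:.2f}")
```

Comparing city-specific fits of this kind against a pooled global fit is what allows the study to ask whether urban thresholds are statistically distinguishable.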
In the third study, I investigated the seasonal landslide response to annual precipitation patterns in the Pacific Northwest region, USA, by using Bayesian multi-level models to combine data from five heterogeneous landslide inventories covering different areas and time periods. I quantitatively confirmed a distinctly seasonal pattern of landsliding and found that peak landslide activity lags the annual precipitation peak. In February, at the height of the landslide season, landslide intensity for a given amount of monthly rainfall is up to ten times higher than at the season onset in November, underlining the importance of antecedent seasonal hillslope conditions.
Together, these studies contributed actionable, objective information for landslide early warning and examples for the application of Bayesian methods to probabilistically quantify landslide hazard from inventory and rainfall data.
RailChain
(2023)
The RailChain project designed, implemented, and experimentally evaluated a juridical recorder that is based on a distributed consensus protocol. This juridical blockchain recorder has been realized as a distributed ledger on board the advanced TrainLab (ICE-TD 605 017) of Deutsche Bahn.
For the project, a consortium of DB Systel, Siemens, Siemens Mobility, the Hasso Plattner Institute for Digital Engineering, Technische Universität Braunschweig, TÜV Rheinland InterTraffic, and Spherity was formed. These partners not only pooled competencies in railway operation, computer science, regulation, and approval, but also combined industry experience, academic research, and startup enthusiasm.
Distributed ledger technologies (DLTs) define distributed databases and provide a digital protocol for transactions between business partners without the need for a trusted intermediary. Implementing a blockchain with real-time requirements for the local network of a railway system (e.g., an interlocking or a train) makes it possible to log data in the distributed system verifiably and in real time. For this, railway-specific assumptions can be leveraged to modify standard blockchain protocols.
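The tamper-evidence property that such a juridical recorder relies on can be sketched with a minimal hash-chained log. This illustrates only the general principle; RailChain's actual consensus protocol, record format, and real-time guarantees are not reproduced here.

```python
import hashlib
import json

def make_entry(prev_hash, payload):
    """Append-only log entry whose hash covers the previous entry's hash."""
    entry = {"prev": prev_hash, "payload": payload}
    digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    entry["hash"] = digest
    return entry

def verify_chain(chain):
    """Recompute every hash; any modified or reordered entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {"prev": entry["prev"], "payload": entry["payload"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

# Hypothetical on-board events (illustrative payloads, not RailChain's schema).
log, prev = [], "0" * 64
for event in ["door_closed", "brake_applied", "speed=80"]:
    entry = make_entry(prev, event)
    log.append(entry)
    prev = entry["hash"]

assert verify_chain(log)
log[1]["payload"] = "speed=120"   # tampering is detected
assert not verify_chain(log)
```

In a distributed ledger, a consensus protocol additionally ensures that all manufacturers' components agree on one such chain, so no single party can rewrite history.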
EULYNX and OCORA (Open CCS On-board Reference Architecture) are parts of a future European reference architecture for control command and signalling (CCS, Reference CCS Architecture – RCA). Both architectural concepts outline heterogeneous IT systems with components from multiple manufacturers. Such systems introduce novel challenges for the approved, safety-relevant CCS of railways that have so far been considered neither for trackside nor for on-board systems. Logging implementations, such as the common juridical recorder on vehicles, can no longer be realized as a central component from a single manufacturer; all centralized approaches are in question.
The research project RailChain is funded by the mFUND program and provides practical evidence that distributed consensus protocols are a proper means to immutably (for legal purposes) store state information of many system components from multiple manufacturers. The results of RailChain have been published, prototypically implemented, and experimentally evaluated in large-scale field tests on the advanced TrainLab. At the same time, the project showed how RailChain can be integrated into the trackside and on-board architecture defined by OCORA and EULYNX.
Logged data can now be analysed sooner, and its trustworthiness is increased. This enables, for example, auditable predictive maintenance, because the data is guaranteed to be authentic and unmodified at any point in time.
Personal data privacy is considered a fundamental right. It forms part of our highest ethical standards and is anchored in legislation as well as in various technical best practices. Yet, protecting against personal data exposure is a challenging problem when generating privacy-preserving datasets to support machine learning and data mining operations. The issue is further compounded by the fact that devices such as consumer wearables and sensors track user behaviours at a fine-grained level, thereby accelerating the formation of multi-attribute, large-scale, high-dimensional datasets.
In recent years, increasing news coverage of de-anonymisation incidents, including but not limited to the telecommunication, transportation, financial transaction, and healthcare sectors, has brought the exposure of sensitive private information to light. These incidents indicate that releasing privacy-preserving datasets requires serious consideration at the pre-processing stage. A critical problem in this regard is the time complexity of applying syntactic anonymisation methods, such as k-anonymity, l-diversity, or t-closeness, to generate privacy-preserving data. Previous studies have shown that this problem is NP-hard.
This thesis focuses on large high-dimensional datasets as a special case of data that is characteristically challenging to anonymise using syntactic methods. In essence, large high-dimensional data contains a large number of attributes relative to the population of attribute values. Applying standard syntactic anonymisation approaches to such data results either in high information loss, rendering the data useless for analytics, or in low privacy, because inferences remain possible when information loss is minimised.
We postulate that this problem can be resolved effectively by searching for and eliminating all the quasi-identifiers present in a high-dimensional dataset. Essentially, we quantify the privacy-preserving data sharing problem as the Find-QID problem.
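As a deliberately naive illustration of the Find-QID idea (and of why its NP-hardness matters), the following brute-force sketch enumerates attribute combinations and flags those whose value combinations single out fewer than k records. The records, the attribute names, and the enumeration strategy are illustrative assumptions; the thesis' optimised greedy and Bayesian approaches are not reproduced here.

```python
from collections import Counter
from itertools import combinations

def is_qid(records, attrs, k=2):
    """True if some value combination over `attrs` matches fewer than k records."""
    counts = Counter(tuple(r[a] for a in attrs) for r in records)
    return any(c < k for c in counts.values())

def find_qids(records, k=2):
    """Enumerate all minimal attribute sets that violate k-anonymity.
    Exhaustive enumeration is exponential in the number of attributes,
    which is exactly what makes high-dimensional data hard."""
    attrs = sorted(records[0])
    qids = []
    for size in range(1, len(attrs) + 1):
        for combo in combinations(attrs, size):
            if any(set(q) <= set(combo) for q in qids):
                continue  # a subset is already a QID; supersets are implied
            if is_qid(records, combo, k):
                qids.append(combo)
    return qids

# Illustrative toy records.
records = [
    {"zip": "14469", "sex": "f", "year": 1990},
    {"zip": "14469", "sex": "m", "year": 1990},
    {"zip": "14469", "sex": "f", "year": 1990},
    {"zip": "10115", "sex": "m", "year": 1985},
]
# ("zip",) alone is a QID here: only one record has zip 10115.
assert ("zip",) in find_qids(records, k=2)
```

Pruning supersets of known QIDs, as above, is the simplest form of the candidate-queuing idea; scaling it to high dimensions requires the optimisations developed in the thesis.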
Further, we show that despite the complex nature of absolute privacy, the discovery of QIDs can be achieved reliably for large datasets. The risk of private data exposure through inferences can be circumvented, and both can practicably be achieved without the need for high-performance computers.
For this purpose, we present, implement, and empirically assess both mathematical and engineering optimisation methods for a deterministic discovery of privacy-violating inferences. These include a greedy search scheme that efficiently queues QID candidates based on their tuple characteristics, a projection of QIDs onto Bayesian inferences, and a strategy that counters the Bayesian network's state-space explosion with an aggregation approach taken from the multigrid context and with vectorised GPU acceleration. Part of this work demonstrates order-of-magnitude processing acceleration, particularly in high dimensions; we even achieve near real-time runtimes for previously impractical applications. At the same time, we demonstrate how such contributions could be abused to de-anonymise Kristine A. and Cameron R. in a public Twitter dataset concerning the 2020 US Presidential Election.
Finally, this work contributes, implements, and evaluates an extended and generalised version of the novel syntactic anonymisation methodology, attribute compartmentation. Attribute compartmentation promises sanitised datasets without remaining quasi-identifiers while minimising information loss. To prove its functionality in the real world, we partner with digital health experts to conduct a medical use case study. As part of the experiments, we illustrate that attribute compartmentation is suitable for everyday use and, as a positive side effect, even circumvents a common domain issue of base rate neglect.
Hantaviruses (HVs) are a group of zoonotic viruses that infect humans primarily through aerosolised rodent excreta and urine. HVs are classified geographically into Old World HVs (OWHVs), found in Europe and Asia, and New World HVs (NWHVs), found in the Americas. These different strains can cause severe hantavirus diseases with pronounced renal syndrome or severe cardiopulmonary distress. HVs can be extremely lethal, with NWHV infections reaching mortality rates of up to 40%. HVs are known to generate epidemic outbreaks in many parts of the world, including Germany, which has seen periodic HV infections over the past decade. The HV genome is trisegmented: the small segment (S) encodes the nucleocapsid protein (NP); the middle segment (M) encodes the glycoproteins (GPs) Gn and Gc, which upon independent expression form up to tetramers and primarily monomers and dimers, respectively; and the large segment (L) encodes the RNA-dependent RNA polymerase (RdRp). Interactions between these viral proteins are crucial for gaining mechanistic insights into HV virion development. Despite best efforts, these associations have not yet been quantified in living cells, which is required for developing mechanistic models of HV viral assembly. This dissertation focuses on three key questions pertaining to the initial steps of virion formation, which primarily involve the GPs and NP.
The research investigations in this work were carried out using Fluorescence Correlation Spectroscopy (FCS) approaches. FCS is frequently used to assess biophysical features of biomolecules, including protein concentration and diffusion dynamics, and circumvents the need for protein overexpression. In this thesis, FCS was primarily applied to evaluate protein multimerization at single-cell resolution.
The first question addressed which of the GP spike formation models proposed by Hepojoki et al. (2010) appropriately describes the evidence in living cells. A novel in cellulo assay was developed to evaluate the amount of fluorescently labelled and unlabelled GPs upon co-expression. The results clearly showed that Gn and Gc initially form a heterodimeric Gn:Gc subunit, which then multimerizes with congruent Gn:Gc subunits to generate the final GP spike. Based on these interactions, models describing the formation of the GP complex (with multiple GP spike subunits) were additionally developed.
HV GP assembly primarily takes place in the Golgi apparatus (GA) of infected cells. Interestingly, NWHV GPs are hypothesized to assemble at the plasma membrane (PM). This led to the second research question in this thesis, in which a systematic comparison between OWHV and NWHV GPs was conducted to validate this hypothesis. Surprisingly, GP localization at the PM was congruently observed with OWHV and NWHV GPs. Similar results were also discerned with OWHV and NWHV GP localization in the absence of cytoskeletal factors that regulate HV trafficking in cells.
The final question focused on quantifying the NP-GP interactions and understanding their influence on NP and GP multimerization. Gc multimers were detected in the presence of NP, complemented by localized regions of high NP-Gc interaction in the perinuclear region of living cells. The Gc-CT domain was shown to influence NP-Gc associations. Gn, on the other hand, formed up to tetrameric complexes independently of the presence of NP.
The results of this dissertation shed light on the initial steps of HV virion formation by quantifying homo- and heterotypic interactions involving NP and GPs, measurements that are otherwise very difficult to perform. Finally, the in cellulo methodologies implemented in this work can potentially be extended to probe other key interactions involved in HV virus assembly.
Biomolecules such as proteins and lipids play vital roles in numerous cellular functions, including biomolecule transport, protein function, cellular homeostasis, and biomembrane integrity. Traditional biochemistry methods do not provide precise information about cellular biomolecule distribution and behavior under native environmental conditions, since they are not transferable to live-cell samples. Consequently, this can lead to inaccuracies in quantifying biomolecule interactions due to potential complexities arising from the heterogeneity of native biomembranes. To overcome these limitations, minimally invasive microscopic techniques, such as fluorescence fluctuation spectroscopy (FFS) in combination with fluorescent proteins (FPs) and fluorescent lipid analogs, have been developed. FFS techniques and membrane property sensors enable the quantification of various parameters, including concentration, dynamics, oligomerization, and interaction of biomolecules in live-cell samples.
In this work, several FFS approaches and membrane property sensors were implemented and employed to examine biological processes in diverse contexts. Multi-color scanning fluorescence fluctuation spectroscopy (sFCS) was used to examine protein oligomerization, protein-protein interactions (PPIs), and protein dynamics at the cellular plasma membrane (PM). Additionally, two-color number and brightness (N&B) analysis was extended with cross-correlation analysis in order to quantify hetero-interactions of very slowly moving proteins in the PM, which would not be accessible with sFCS due to strong initial photobleaching. Furthermore, two semi-automatic analysis pipelines were designed: spectral Förster resonance energy transfer (FRET) analysis to study changes in membrane charge at the inner leaflet of the PM, and spectral generalized polarization (GP) imaging with spectral phasor analysis to monitor changes in membrane fluidity and order.
An important parameter for studying PPIs is molecular brightness, which directly determines oligomerization and can be extracted from FFS data. However, FPs often display complex photophysical transitions, including dark states. Therefore, it is crucial to characterize FPs for their dark-states to ensure reliable oligomerization measurements. In this study, N&B and sFCS analysis were applied to determine photophysical properties of novel green FPs under different conditions (i.e., excitation power and pH) in living cells. The results showed that the new FPs, mGreenLantern (mGL) and Gamillus, exhibited the highest molecular brightness at the cost of lower photostability. The well-established monomeric enhanced green fluorescent protein (mEGFP) remained the best option to investigate PPIs at lower pH, while mGL was best suited for neutral pH, and Gamillus for high pH. These findings provide guidance for selecting an appropriate FP to quantify PPIs via FFS under different environmental conditions.
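The brightness estimate underlying N&B analysis can be sketched for ideal photon-counting data, where the apparent molecular brightness follows from the mean and variance of the intensity trace as epsilon = (variance - mean) / mean. The simulation below is illustrative only: it omits detector corrections, dark-state photophysics, and the actual analysis pipeline used in the thesis.

```python
import math
import random

def poisson(lam):
    """Knuth's algorithm for Poisson-distributed random integers."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= limit:
            return k - 1

def molecular_brightness(counts):
    """Apparent brightness (photons/molecule/frame) for photon-counting data."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((k - mean) ** 2 for k in counts) / n
    return (var - mean) / mean

random.seed(7)
EPS, N_MOL = 2.0, 5.0   # photons per molecule per frame, mean occupancy
# Each frame: Poisson number of molecules in focus, each emitting EPS photons
# per frame on average (compound Poisson intensity trace).
trace = [poisson(EPS * poisson(N_MOL)) for _ in range(100_000)]

est = molecular_brightness(trace)
# A dimer of a unit-brightness fluorophore would double this estimate,
# which is why dark states that dim the monomer bias oligomerization readouts.
assert abs(est - EPS) < 0.2
```

This also makes the pH sensitivity discussed above concrete: if a fraction of FPs is dark at a given pH, the measured brightness, and hence the inferred oligomeric state, drops accordingly.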
Next, several biophysical fluorescence microscopy approaches (i.e., sFCS, GP imaging, membrane charge FRET) were employed to monitor changes in lipid-lipid packing in biomembranes in different biological contexts. Lipid metabolism in cancer cells is known to support rapid proliferation and metastasis; therefore, targeting lipid synthesis or membrane integrity holds immense promise as an anticancer strategy. However, the mechanism of action of the novel agent erufosine (EPC3) on membrane stability is not fully understood. The present work revealed that EPC3 reduces lipid packing, alters lipid composition, and increases membrane fluidity and dynamics, hence modifying lipid-lipid interactions. These effects on membrane integrity were likely triggered by modulations in lipid metabolism and membrane organization. In the case of influenza A virus (IAV) infection, regulation of lipid metabolism is crucial for multiple steps of IAV replication and is related to the pathogenicity of IAV. Here, it is shown for the first time that IAV infection triggers a local enrichment of negatively charged lipids at the inner leaflet of the PM, which decreases membrane fluidity and dynamics and increases lipid packing at the assembly site in living cells. This suggests that IAV alters lipid-lipid interactions and organization at the PM. Overall, this work highlights the potential of biophysical techniques as a screening platform for studying membrane properties in living cells at the single-cell level.
Finally, this study addressed remaining questions about the early stage of IAV assembly. The recruitment of matrix protein 1 (M1) and its interaction with other viral surface proteins, hemagglutinin (HA), neuraminidase (NA), and matrix protein 2 (M2), has been a subject of debate due to conflicting results. In this study, different FFS approaches were performed in transfected cells to investigate interactions between the IAV proteins themselves and with host factors at the PM. FFS measurements revealed that M2 interacts strongly with M1, leading to the translocation of M1 to the PM. This interaction likely took place along the non-canonical pathway, as evidenced by the detection of an interaction between M2 and the host factor LC3-II, leading to the recruitment of LC3-II to the PM. Moreover, a weaker interaction was observed between HA and membrane-bound M1, and no interaction between NA and M1. Interestingly, higher oligomeric states of M1 were only detectable in infected cells. These results indicate that M2 initiates virion assembly by recruiting M1 to the PM, which may serve as a platform for further interactions with viral proteins and host factors.
This thesis is concerned with the phenomenon of quantifier scope ambiguities. This phenomenon has been researched extensively, both from a theoretical and from an empirical point of view. Nevertheless, there are still a number of under-researched topics in the field of quantifier scope, which will be the main focus of this thesis. I will take a closer look at three languages, English, German, and the Asante Twi dialect of Akan (Kwa, Niger-Kongo). The goal is a better understanding of the phenomenon of quantifier scope both within each language, as well as from a cross-linguistic perspective. First, this thesis will provide a series of experiments that allow a direct cross-linguistic comparison between English and German – two languages about which specific claims have been made in the literature. I will also provide exploratory research in the case of Asante Twi, where so far, no work has been dedicated specifically to the study of quantifier scope. The work on Asante Twi will go beyond quantifier scope and also target the quantifier and determiner system in general. The question is not only if particular scope readings are possible or not, but also which factors contribute to an increase or decrease of scope availability, and if there are factors that block certain scope readings altogether. While some of the results confirm and thereby strengthen previous claims, other results contradict general assumptions in the literature. This is particularly the case for inverse readings in German and inverse readings across clause-boundaries.
This paper studies the effect of public child care on mothers’ career trajectories. To this end, we combine county-level data on child care coverage with detailed individual-level information from the German social security records and exploit a set of German reforms leading to a substantial temporal and spatial variation in child care coverage for children under the age of three. We conduct an event study approach that investigates the labor market outcomes of mothers in the years around the birth of their first child. We thereby explore career trajectories, both in terms of quantity and quality of employment. We find that public child care improves maternal labor supply in the years immediately following childbirth. However, the results on quality-related outcomes suggest that the effect of child care provision does not reach far beyond pure employment effects. These results do not change for mothers with different ‘career costs of children’.
Throughout the last ~3 million years, the Earth's climate system was characterised by cycles of glacial and interglacial periods. The current warm period, the Holocene, is comparably stable and stands out from this long-term cyclicality. However, since the industrial revolution, the climate has been increasingly affected by a human-induced increase in greenhouse gas concentrations. While instrumental observations are used to describe changes over the past ~200 years, indirect observations via proxy data are the main source of information beyond this instrumental era. These data are indicators of past climatic conditions, stored in palaeoclimate archives around the Earth. The proxy signal is affected by processes independent of the prevailing climatic conditions. In particular, for sedimentary archives such as marine sediments and polar ice sheets, material may be redistributed during or after the initial deposition and subsequent formation of the archive. This leads to noise in the records challenging reliable reconstructions on local or short time scales. This dissertation characterises the initial deposition of the climatic signal and quantifies the resulting archive-internal heterogeneity and its influence on the observed proxy signal to improve the representativity and interpretation of climate reconstructions from marine sediments and ice cores.
To this end, the horizontal and vertical variation in radiocarbon content of a box-core from the South China Sea is investigated. The three-dimensional resolution is used to quantify the true uncertainty in radiocarbon age estimates from planktonic foraminifera with an extensive sampling scheme, including different sample volumes and replicated measurements of batches of small and large numbers of specimens. An assessment of the variability stemming from sediment mixing by benthic organisms reveals strong internal heterogeneity. Hence, sediment mixing leads to substantial time uncertainty in proxy-based reconstructions, with error terms two to five times larger than previously assumed.
A second three-dimensional analysis of the upper snowpack provides insights into the heterogeneous signal deposition and imprint in snow and firn. A new study design combining a structure-from-motion photogrammetry approach with two-dimensional isotopic data was applied at a study site in the accumulation zone of the Greenland Ice Sheet. The photogrammetry method reveals an intermittent character of snowfall and a layer-wise snow deposition, with substantial contributions of wind-driven erosion and redistribution to the final, spatially variable accumulation, and it illustrates the evolution of stratigraphic noise at the surface. The isotopic data show the preservation of stratigraphic noise within the upper firn column, leading to a spatially variable climate signal imprint and heterogeneous layer thicknesses. Additional post-depositional modifications due to snow-air exchange are also investigated, but without a conclusive quantification of their contribution to the final isotopic signature.
Finally, this characterisation and quantification of the complex signal formation in marine sediments and polar ice contributes to a better understanding of the signal content in proxy data which is needed to assess the natural climate variability during the Holocene.
Life on Earth is diverse and ranges from unicellular organisms to multicellular creatures like humans. Although there are theories about how these organisms might have evolved, we understand little about how ‘life’ started from molecules. Bottom-up synthetic biology aims to create minimal cells by combining different modules, such as compartmentalization, growth, division, and cellular communication.
All living cells have a membrane that separates them from the surrounding aqueous medium and helps to protect them. In addition, all eukaryotic cells have organelles that are enclosed by intracellular membranes. Each cellular membrane is primarily made of a lipid bilayer with membrane proteins. Lipids are amphiphilic molecules that assemble into molecular bilayers consisting of two leaflets. The hydrophobic chains of the lipids in the two leaflets face each other, and their hydrophilic headgroups face the aqueous surroundings. Giant unilamellar vesicles (GUVs) are model membrane systems that form large compartments, many micrometers in size, each enclosed by a single lipid bilayer. The size of GUVs is comparable to that of cells, making them good membrane models that can be studied with an optical microscope. However, after the initial preparation, GUV membranes lack membrane proteins, which have to be reconstituted into these membranes in subsequent preparation steps. Depending on the protein, it can be either attached via anchor lipids to one of the membrane leaflets or inserted into the lipid bilayer via its transmembrane domains.
The first step is to prepare the GUVs and then expose them to an exterior solution with proteins. Various protocols have been developed for the initial preparation of GUVs. For the second step, the GUVs can be exposed to a bulk solution of protein or can be trapped in a microfluidic device and then supplied with the protein solution. To minimize the amount of solution and for more precise measurements, I have designed a microfluidic device that has a main channel, and several dead-end side channels that are perpendicular to the main channel. The GUVs are trapped in the dead-end channels. This design exchanges the solution around the GUVs via diffusion from the main channel, thus shielding the GUVs from the flow within the main channel. This device has a small volume of just 2.5 μL, can be used without a pump and can be combined with a confocal microscope, enabling uninterrupted imaging of the GUVs during the experiments. I used this device for most of the experiments on GUVs that are discussed in this thesis.
In the first project of the thesis, a lipid mixture doped with an anchor lipid was used that can bind to a histidine chain (referred to as His-tag(ged) or 6H) via the metal cation Ni2+. This method is widely used for the biofunctionalization of GUVs by attaching proteins without a transmembrane domain. Fluorescently labeled His-tags which are bound to a membrane can be observed in a confocal microscope. Using the same lipid mixture, I prepared the GUVs with different protocols and investigated the membrane composition of the resulting GUVs by evaluating the amount of fluorescently labeled His-tagged molecules bound to their membranes. I used the microfluidic device described above to expose the outer leaflet of the vesicle to a constant concentration of the His-tagged molecules. Two fluorescent molecules with a His-tag were studied and compared: green fluorescent protein (6H-GFP) and fluorescein isothiocyanate (6H-FITC). Although the quantum yield in solution is similar for both molecules, the brightness of the membrane-bound 6H-GFP is higher than the brightness of the membrane-bound 6H-FITC. The observed difference in the brightness reveals that the fluorescence of the 6H-FITC is quenched by the anchor lipid via the Ni2+ ion. Furthermore, my measurements also showed that the fluorescence intensity of the membrane-bound His-tagged molecules depends on microenvironmental factors such as pH. For both 6H-GFP and 6H-FITC, the interaction with the membrane is quantified by evaluating the equilibrium dissociation constant. The membrane fluorescence is measured as a function of the fluorophores’ molar concentration. Theoretical analysis of these data leads to the equilibrium dissociation constants of (37.5 ± 7.5) nM for 6H-GFP and (18.5 ± 3.7) nM for 6H-FITC.
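Dissociation constants like those quoted above are obtained by fitting a binding isotherm of the form F(c) = F_max * c / (Kd + c) to membrane fluorescence versus molar concentration. The sketch below fits synthetic data with a coarse grid search; the data points, parameter grids, and fitting method are illustrative assumptions, not the thesis' actual procedure.

```python
def model(c, f_max, kd):
    """Langmuir binding isotherm: membrane fluorescence vs. concentration."""
    return f_max * c / (kd + c)

def fit_kd(conc_nM, fluor, kd_grid, fmax_grid):
    """Coarse grid search minimising squared residuals (illustrative only)."""
    best = None
    for kd in kd_grid:
        for f_max in fmax_grid:
            sse = sum((model(c, f_max, kd) - f) ** 2
                      for c, f in zip(conc_nM, fluor))
            if best is None or sse < best[0]:
                best = (sse, kd, f_max)
    return best[1], best[2]

# Synthetic titration generated with Kd = 37.5 nM and F_max = 100 (arbitrary units).
conc = [5, 10, 20, 40, 80, 160, 320]
fluor = [model(c, 100.0, 37.5) for c in conc]

kd, f_max = fit_kd(conc, fluor,
                   kd_grid=[x * 0.5 for x in range(20, 160)],
                   fmax_grid=[float(x) for x in range(80, 121)])
```

At c = Kd the membrane is half-saturated, which is why sampling concentrations around the expected Kd, as done here, constrains the fit best.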
The anchor lipid mentioned previously used the metal cation Ni2+ to mediate the bond between the anchor lipid and the His-tag. The Ni2+ ion can be replaced by other transition metal ions. Studies have shown that Co3+ forms the strongest bonds with the His-tags attached to proteins. In these studies, strong oxidizing agents were used to oxidize the Co2+ mediated complex with the His-tagged protein to a Co3+ mediated complex. This procedure puts the proteins at risk of being oxidized as well. In this thesis, the vesicles were first prepared with anchor lipids without any metal cation. The Co3+ was added to these anchor lipids and finally the His-tagged protein was added to the GUVs to form the Co3+ mediated bond. This system was also established using the microfluidic device.
The different preparation procedures of GUVs usually lead to vesicles with a spherical morphology. On the other hand, many cell organelles have a more complex architecture with a non-spherical topology. One fascinating example is provided by the endoplasmic reticulum (ER), which is made of a continuous membrane and extends throughout the cell in the form of tubes and sheets. The tubes are connected by three-way junctions and form a tubular network of irregular polygons. The formation and maintenance of these reticular networks requires membrane proteins that hydrolyze guanosine triphosphate (GTP). One of these membrane proteins is atlastin. In this thesis, I reconstituted the atlastin protein in GUV membranes using detergent-assisted reconstitution protocols to insert the proteins directly into lipid bilayers.
This thesis focuses on protein reconstitution by binding His-tagged proteins to anchor lipids and by detergent-assisted insertion of proteins with transmembrane domains. It also provides the design of a microfluidic device that can be used in various experiments, one example is the evaluation of the equilibrium dissociation constant for membrane-protein interactions. The results of this thesis will help other researchers to understand the protocols for preparing GUVs, to reconstitute proteins in GUVs, and to perform experiments using the microfluidic device. This knowledge should be beneficial for the long-term goal of combining the different modules of synthetic biology to make a minimal cell.
The Lyman-𝛼 (Ly𝛼) line commonly assists in the detection of high-redshift galaxies, the so-called Lyman-alpha emitters (LAEs). LAEs are useful tools to study the baryonic matter distribution of the high-redshift universe. Exploring their spatial distribution not only reveals the large-scale structure of the universe at early epochs, but it also provides an insight into the early formation and evolution of the galaxies we observe today. Because dark matter halos (DMHs) serve as sites of galaxy formation, the LAE distribution also traces that of the underlying dark matter. However, the details of this relation and their co-evolution over time remain unclear. Moreover, theoretical studies predict that the spatial distribution of LAEs also impacts their own circumgalactic medium (CGM) by influencing their extended Ly𝛼 gaseous halos (LAHs), whose origin is still under investigation. In this thesis, I make several contributions to improve the knowledge on these fields using samples of LAEs observed with the Multi Unit Spectroscopic Explorer (MUSE) at redshifts of 3 < 𝑧 < 6.
Properties of Arctic aerosol in the transition between Arctic haze to summer season derived by lidar
(2023)
During the Arctic haze period, the Arctic troposphere contains larger, yet fewer, aerosol particles than during summer (Tunved et al., 2013; Quinn et al., 2007). Interannual variability (Graßl and Ritter, 2019; Rinke et al., 2004), as well as the unknown origins (Stock et al., 2014) and properties of the aerosol, complicates modeling these annual aerosol cycles. This thesis investigates the modification of the microphysical properties of Arctic aerosols in the transition from the Arctic haze to the summer season. To this end, lidar measurements at Ny-Ålesund from April 2021 to the end of July 2021 are evaluated based on the aerosols’ optical properties, and an overview of those properties is provided. Furthermore, parallel radiosonde data are considered as an indicator of hygroscopic growth.
The annual aerosol cycle in 2021 differs from expectations based on the previous studies by Tunved et al. (2013) and Quinn et al. (2007). The evolution of backscatter, extinction, aerosol depolarisation, lidar ratio, and color ratio shows a return of the Arctic haze in May: the haze had already diminished in April but intensified again afterwards.
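Two of the quantities named above can be written down directly: the lidar ratio is the extinction-to-backscatter ratio, and the color ratio compares backscatter at two wavelengths. The numerical values and the 532/1064 nm wavelength pair below are illustrative assumptions, not the Ny-Ålesund retrievals.

```python
def lidar_ratio(extinction_per_m, backscatter_per_m_sr):
    """Extinction-to-backscatter ratio (sr); aerosol-type dependent."""
    return extinction_per_m / backscatter_per_m_sr

def color_ratio(backscatter_short, backscatter_long):
    """Ratio of backscatter at a longer to a shorter wavelength; smaller
    particles scatter the longer wavelength comparatively less."""
    return backscatter_long / backscatter_short

# Illustrative values for one altitude bin (units: 1/m and 1/(m*sr)).
alpha = 2.0e-5        # extinction coefficient
beta_532 = 5.0e-7     # backscatter at 532 nm (assumed wavelength)
beta_1064 = 2.0e-7    # backscatter at 1064 nm (assumed wavelength)

s_ratio = lidar_ratio(alpha, beta_532)       # 40 sr for these values
c_ratio = color_ratio(beta_532, beta_1064)   # < 1 here, indicating small particles
```

Tracking how such ratios evolve from April to July is what reveals the changing aerosol size and type across the haze-to-summer transition.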
The average Arctic aerosol displays hygroscopic behaviour, i.e., growth due to water uptake. Determining such behaviour is generally laborious because various meteorological circumstances need to be considered. Two case studies provide further information on these possible events; in particular, a day with a rare ice cloud and highly variable water-cloud layers is examined.
The massive growth of MOOCs in 2011 laid the groundwork for the achievement of SDG 4. Alongside the various benefits of MOOCs, there is also the anticipation that online education should focus more on interactivity and global collaboration. In this context, the Global MOOC and Online Education Alliance (GMA) was established in 2020 as a diverse group of 17 world-leading universities and three online education platforms from 14 countries on six continents. Through nearly three years of exploration, GMA has gained experience and achieved progress in fostering global cooperation in higher education. First, in joint teaching, GMA has promoted in-depth cooperation between members inside and outside the alliance. Examples include promoting the exchange of high-quality MOOCs, encouraging the creation of the Global Hybrid Classroom, and launching Global Hybrid Classroom Certificate Programs. Second, in capacity building and knowledge sharing, GMA has launched the Online Education Dialogues and the Global MOOC and Online Education Conference, inviting global experts to share best practices and attracting more than 10 million viewers around the world. Moreover, GMA is collaborating with international organizations to support teachers’ professional growth, create an online learning community, and serve as a resource for further development. Third, in public advocacy, GMA has launched the SDG Hackathon and the Global Massive Open Online Challenge (GMOOC), attracting global learners to acquire knowledge and incubate innovative ideas within a cross-cultural community, in order to solve real-world problems that all humans face and jointly create a better future. Based on past experiences and challenges, GMA will explore more diverse cooperation models with more partners utilizing advanced technology, provide more support for the digital transformation of higher education, and further promote global cooperation towards building a human community with a shared future.
With the growing number of online learning resources, it becomes increasingly difficult and overwhelming to keep track of the latest developments and to find orientation in the plethora of offers. AI-driven services to recommend standalone learning resources or even complete learning paths are discussed as a possible solution for this challenge. To function properly, such services require a well-defined set of metadata provided by the learning resource. During the last few years, the so-called MOOChub metadata format has been established as a de-facto standard by a group of MOOC providers in German-speaking countries. This format, which is based on schema.org, already delivers a quite comprehensive set of metadata. So far, this set has been sufficient to list, display, sort, filter, and search for courses on several MOOC and open educational resources (OER) aggregators. AI recommendation services and further automated integration, beyond a plain listing, have special requirements, however. To optimize the format for proper support of such systems, several extensions and modifications have to be applied. We herein report on a set of suggested changes to prepare the format for this task.
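To illustrate the kind of structured metadata such aggregators consume, the sketch below models a schema.org-style course record in Python and the list/filter behaviour the abstract mentions. The field names are illustrative assumptions only and do not reproduce the actual MOOChub specification.

```python
# Illustrative sketch: a schema.org-style "Course" record as a MOOC
# aggregator might ingest it. Field names are hypothetical, NOT the
# authoritative MOOChub format.
from dataclasses import dataclass, field

@dataclass
class CourseMetadata:
    name: str
    url: str
    in_language: str                      # schema.org: inLanguage
    keywords: list = field(default_factory=list)

def filter_courses(courses, language=None, keyword=None):
    """Listing and filtering, as the current metadata set already supports."""
    result = courses
    if language is not None:
        result = [c for c in result if c.in_language == language]
    if keyword is not None:
        result = [c for c in result if keyword in c.keywords]
    return result

courses = [
    CourseMetadata("Intro to AI", "https://example.org/ai", "en", ["AI"]),
    CourseMetadata("Datenanalyse", "https://example.org/da", "de", ["data"]),
]
print([c.name for c in filter_courses(courses, language="en")])  # ['Intro to AI']
```

A recommendation service would need richer fields than these (e.g., prerequisites and learning objectives), which is precisely the kind of extension the reported changes target.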
Academia-industry collaborations are beneficial when both sides bring strengths to the partnership and the collaboration outcome is of mutual benefit. These types of collaboration projects are seen as a low-risk learning opportunity for both parties. In this paper, government initiatives that can change the business landscape and academia-industry collaborations that can provide upskilling opportunities to fill emerging business needs are discussed. In light of Japan’s push for next-level modernization, a Japanese software company took a positive stance towards building new capabilities outside what it had been offering its customers. Consequently, an academic research group is laying out infrastructure for learning analytics research. An existing learning analytics dashboard was modularized to allow the research group to focus on natural language processing experiments while the software company explores a development framework suitable for data visualization techniques and artificial intelligence development. The results of this endeavor demonstrate that companies working with academia can creatively explore collaborations outside typical university-supported avenues.
Starch is a biopolymer for which, despite its simple composition, understanding the precise mechanism behind its formation and regulation has been challenging. Several approaches and bioanalytical tools can be used to expand the knowledge on the different parts involved in starch metabolism. In this sense, a comprehensive analysis targeting two of the main groups of molecules involved in this process: proteins, as effectors/regulators of starch metabolism, and maltodextrins, as starch components and degradation products, was conducted in this research work using potato plants (Solanum tuberosum L. cv. Desiree) as the model of study. On one side, proteins physically interacting with potato starch were isolated and analyzed through mass spectrometry and western blot for their identification. Alternatively, starch-interacting proteins were explored in potato tubers from transgenic plants having antisense inhibition of starch-related enzymes and in tubers stored under variable environmental conditions. Most of the proteins recovered from the starch granules corresponded to previously described proteins having a specific role in the starch metabolic pathway. Another set of proteins could be grouped as protease inhibitors, which were found weakly interacting with starch. Variations in the protein profile obtained after electrophoresis separation became clear when tubers were stored at different temperatures, indicating a differential expression of proteins in response to changing environmental conditions.
On the other side, since maltodextrin metabolism is thought to be involved in both starch initiation and degradation, the soluble maltooligosaccharide content of potato tubers was analyzed in this work under diverse experimental variables. For this, tuber disc samples from wild-type and transgenic lines strongly repressing either the plastidial or cytosolic form of the α-glucan phosphorylase and phosphoglucomutase were incubated with glucose, glucose-6-phosphate, and glucose-1-phosphate solutions to evaluate the influence of such enzymes on the conversion of the carbon sources into soluble maltodextrins, in comparison to wild-type samples. Relative maltodextrin amounts analyzed through capillary electrophoresis with laser-induced fluorescence detection (CE-LIF) revealed that tuber discs could immediately take up glucose-1-phosphate and use it to produce maltooligosaccharides with a degree of polymerization of up to 30 (DP30), in contrast to transgenic tubers with strong repression of the plastidial glucan phosphorylase. The results obtained from the maltodextrin analysis support previous indications that a specific transporter for glucose-1-phosphate may exist in both the plant cells and the plastidial membranes, thereby allowing glucose-6-phosphate-independent transport. Furthermore, it confirms that the plastidial glucan phosphorylase is responsible for producing longer maltooligosaccharides in the plastids by catalyzing a glucan polymerization reaction when glucose-1-phosphate is available. All these findings contribute to a better understanding of the role of the plastidial glucan phosphorylase as a key enzyme directly involved in the synthesis and degradation of glucans and their implications for starch metabolism.
Portal Wissen = Learning
(2023)
Changing through learning is one of the most important characteristics we humans have. We are born and can – it seems – do nothing. We have to comprehend, copy, and acquire everything: grasping and walking, eating and speaking. And, of course, we have to learn to read and do arithmetic. By now we know: We will never be finished with this. At best, we learn for a lifetime. If we stop, it harms us. The Greek philosopher Plato said more than 2,400 years ago, “There is no shame in not knowing something. The shame is in not being willing to learn.”
It is this human capacity to learn that, thanks to ever more knowledge about the world around us, has moved us from the Stone Age into the digital age. That this development is not a finish line either, and that we still have a long way to go, is shown by man-made climate change – and above all by our inability as a global community to translate what research teaches us into appropriate action. Let us dare to hope that we comprehend this, too.
What we tend to ignore in the intensive discussion about the multi-layered levels of learning: We are by no means the only learners. Many, if not all, living beings on our planet learn, some in more purposeful, complex, and cognitive ways than others. And for some time now, machines have also been able to learn more or less independently. Artificial intelligence sends its regards.
The significance of learning for human beings can hardly be overestimated. Science has understood this, too, and has made learning processes and conditions a subject of study in almost all contexts, whether they concern our own learning or that of the world around us. We have investigated some of these for the current issue of “Portal Wissen”.
Psycholinguist Natalie Boll-Avetisyan has developed a box that can be used to detect language learning disorders in young children at an early stage. The behavioral biologists Jana Eccard and Valeria Mazza investigated the behavior of small rodents and found that they not only develop different personality traits but also learn to adapt them to different environmental conditions. Computational linguist David Schlangen examines the question of what machines have to learn so that our communication with them works even better.
Since research is ultimately always a learning process that strives to understand something yet unknown, all texts this time follow, in some way, the motto of the title theme: It is about what the history of past centuries reveals about “military cultures of violence” and the question of what lessons we should learn from natural hazards for the future.
We talked with a legal scholar who looks beyond the university’s backyard and wants to make law comprehensible to everyone. We also talked with a philosopher who analyzes why “having an opinion” means something different today than 100 years ago. We report about an AI-based genome analysis that can change healthcare sustainably. Furthermore, it is about the job profile “YouTuber”, minor cosmopolitanisms, and wildlife management in Africa. When you have finished reading, you will have learnt something. Promised! Enjoy your read!
Portal Wissen = Excellence
(2023)
When something is not just good or very good, we often call it excellent. But what does that really mean? Coming from the Latin word “excellere,” it describes things, persons, or actions that are outstanding or superior and distinguish themselves from others. It cannot get any better. Excellence is the top choice for being the first or the best. Research is no exception.
At the university, you will find numerous exceptional researchers, outstanding projects, and, time and again, sensational findings, publications, and results. But is the University of Potsdam also excellent? A question that will certainly create a different stir in 2023 than it did perhaps 20 years ago. Since the launch of the Excellence Initiative in 2005, universities that succeed in winning the most comprehensive funding program for research in Germany have been considered – literally – excellent. Whether in the form of graduate schools, research clusters, or – since the program was continued in 2019 under the title “Excellence Strategy” – entire universities of excellence: Anyone who wants to be among the best research universities needs the seal of excellence.
The University of Potsdam is applying for funding with three cluster proposals in the recently launched new round of the “Excellence Strategy of the German Federal and State Governments.” One proposal comes from ecology and biodiversity research. The aim is to paint a comprehensive picture of ecological processes by examining the role of single individuals as well as the interactions among many species in an ecosystem to precisely determine the function of biodiversity. A second proposal has been submitted by the cognitive sciences. Here, the complex coexistence of language and cognition, development and learning, as well as motivation and behavior will be researched as a dynamic interrelation. The projects will include cooperation with the educational sciences to constantly consider linked learning and educational processes. The third proposal from the geo and environmental sciences concentrates on extreme and particularly devastating natural hazards and processes such as floods and droughts. The researchers examine these extreme events, focusing on their interaction with society, to be able to better assess the risks and damages they might involve and to initiate timely measures in the future.
“All three proposals highlight the excellence of our performance,” emphasizes University President Prof. Oliver Günther, Ph.D. “The outlines impressively document our commitment, existing research excellence, and the potential of the University of Potsdam as a whole. The fact that three powerful consortia have come together in different subject areas shows that we have taken a good step forward on our way to becoming one of the top German universities.”
In this issue, we are looking at what is in and behind these proposals: We talked to the researchers who wrote them. We asked them about their plans in case their proposals are successful and they bring a cluster of excellence to the university. But we also looked at the research that has led to the proposals, has long shaped the university’s profile, and earned it national and international recognition. We present a small selection of projects, methods, and researchers to illustrate why there really is excellent research in these proposals!
By the way, even “excellence” is not the end of the line. After all, the adjective “excellent” has a comparative and a superlative. With this in mind, I wish you the most excellent pleasure reading this issue!
Laser cutting is a fast and precise fabrication process. This makes laser cutting a powerful process in custom industrial production. Since the patents on the original technology started to expire, a growing community of tech enthusiasts has embraced the technology and begun sharing the models they fabricate online. Surprisingly, the shared models appear to largely be one-offs (e.g., they proudly showcase what a single person can make in one afternoon). For laser cutting to become a relevant mainstream phenomenon (as opposed to serving the current tech enthusiasts and industry users), it is crucial to enable users to reproduce models made by more experienced modelers, and to build on the work of others instead of creating one-offs.
We create a technological basis that allows users to build on the work of others—a progression that is currently held back by the use of exchange formats that disregard mechanical differences between machines and therefore overlook implications with respect to how well parts fit together mechanically (aka engineering fit).
For the field to progress, we need a machine-independent sharing infrastructure.
In this thesis, we outline three approaches that together get us closer to this:
(1) 2D cutting plans that are tolerant to machine variations. Our initial take is a minimally invasive approach: replacing machine-specific elements in cutting plans with more tolerant elements using mechanical hacks like springs and wedges. The resulting models fabricate on any consumer laser cutter and in a range of materials.
(2) sharing models in 3D. To allow building on the work of others, we build a 3D modeling environment for laser cutting (kyub). After users design a model, they export their 3D models to 2D cutting plans optimized for the machine and material at hand. We extend this volumetric environment with tools to edit individual plates, allowing users to leverage the efficiency of volumetric editing while retaining control over the most detailed elements in laser cutting (plates).
(3) converting legacy 2D cutting plans to 3D models. To handle legacy models, we build software to interactively reconstruct 3D models from 2D cutting plans. This allows users to reuse the models in more productive ways. We revisit this by automating the assembly process for a large subset of models.
The above-mentioned software components form a larger system (kyub, 140,000 lines of code). This system integration enables the push towards actual use, which we demonstrate through a range of workshops in which users build complex models such as fully functional guitars. By simplifying sharing and reuse, and through the resulting increase in model complexity, this line of work takes a small step toward enabling personal fabrication to scale past the maker phenomenon and become a mainstream phenomenon, the same way that other fields, such as print (PostScript) and ultimately computing itself (portable programming languages, etc.), reached mass adoption.
The birth of the Yishuv’s national shipping company, ZIM was preceded by private enterprise; the sea had not traditionally been a focus of the Zionist movement. In the 1930s, a five-year span of private commercial shipping saw three companies in the Jewish community in Palestine – Palestine Shipping Company, Palestine Maritime Lloyd, and Atid – before shipping was cut short by the outbreak of the Second World War. Despite their brief lifespans and their negligible contribution to general shipping, these companies constituted an important milestone. Their existence helped shift the Yishuv leadership’s attitudes about shipping’s importance for the community and the need for it to be supported by national institutions.
Carbon dioxide removal from the atmosphere is becoming an important option to achieve net zero climate targets. This paper develops a welfare and public economics perspective on optimal policies for carbon removal and storage in non-permanent sinks like forests, soil, oceans, wood products or chemical products. We derive a new metric for the valuation of non-permanent carbon storage, the social cost of carbon removal (SCC-R), which also embeds the conventional social cost of carbon emissions. We show that the contribution of CDR is to create new carbon sinks that should be used to reduce transition costs, even if the stored carbon is eventually released to the atmosphere. Importantly, CDR does not raise the ambition of optimal temperature levels unless initial atmospheric carbon stocks are excessively high. For high initial atmospheric carbon stocks, CDR makes it possible to reduce the optimal temperature below initial levels. Finally, we characterize three different policy regimes that ensure an optimal deployment of carbon removal: downstream carbon pricing, upstream carbon pricing, and carbon storage pricing. The policy regimes differ in their informational and institutional requirements regarding monitoring, liability and financing.
Traditional ways of reducing flood risk have encountered limitations in a climate-changing and rapidly urbanizing world. For instance, maintaining a consistent level of security demands massive investment, and flood protection infrastructure can increase the flood exposure of people and property by creating a false sense of security. Against this background, nature-based solutions (NBS) have gained popularity as a sustainable alternative for dealing with diverse societal challenges such as climate change and biodiversity loss. In particular, their ability to reduce flood risks while also offering ecological benefits has recently received global attention. The diverse co-benefits of NBS that favor both humans and nature are viewed as promising a wide endorsement of NBS. However, people’s perceptions of NBS are not always positive. Local resistance to NBS projects as well as decision-makers’ and practitioners’ unwillingness to adopt NBS have been identified as bottlenecks to the successful realization and mainstreaming of NBS. In this regard, there is a growing need to investigate people’s perceptions of NBS. Current research lacks an integrative perspective on the attitudinal and contextual factors that guide perceptions of NBS; empirical evidence is scarce, and the few existing studies report conflicting results without underlying theories. This has led to the overarching research question of this dissertation, "What shapes people’s perceptions of NBS in the context of flooding?" The dissertation aims to answer the following sub-questions in the three papers that make up this dissertation: 1. What are the topics reflected in the previous literature influencing perceptions of NBS as a means to reduce hydro-meteorological risks? (Paper I) 2. What are the stimulating and hampering attitudinal and contextual factors for mainstreaming NBS for flood risk management? 
How are NBS conceptualized? (Paper II) 3. How are public attitudes toward the NBS projects shaped? How do risk- and place-related factors shape individual attitudes toward NBS? (Paper III) This dissertation follows an integrative approach of considering “place” and “risk”, as well as the surrounding context, by analyzing attitudinal (i.e., individual) and contextual (i.e., systemic) factors. “Place” is mainly concerned with affective elements (e.g., bond to locality and natural environment) whereas “risk” is related to cognitive elements (e.g., threat appraisal). The surrounding context provides systemic drivers and barriers that may interfere with the influence of place and risk on perceptions of NBS. To empirically address the research questions, the current status of knowledge about people’s perceptions of NBS for flood risks was investigated by conducting a systematic review (Paper I). Based on these insights, a case study of South Korea was used to demonstrate key contextual and attitudinal factors for mainstreaming NBS through the lens of experts (Paper II). Lastly, by conducting a citizen survey, the relationship between the previously discussed concepts from Papers I and II was investigated using structural equation modeling, focusing on the core concepts, namely risk and place (Paper III). As a result, Paper I identified the key topics relating to people’s perceptions, including the perceived value of co-benefits, perceived risk-reduction effectiveness, participation of stakeholders, socio-economic and place-specific conditions, environmental attitude, and uncertainty of NBS. Paper II confirmed Paper I's findings regarding attitudinal factors. In addition, several contextual hampering or stimulating factors were found to be similar to those of any emerging technology (i.e., path dependence, lack of operational and systemic capacity). 
Above all, one distinctive feature of the NBS context, at least in the South Korean case, is the politicization of NBS, which can lead to polarization of ideas and undermine the decision-making process. Finally, Paper III provides a framework built around the core topics (i.e., place and risk) that were considered critical in Paper I and Paper II. This place-based risk appraisal model (PRAM) connects people at risk and places where hazards (i.e., floods) and interventions (i.e., NBS) take place. The empirical analysis shows that, among the place-related variables, nature bonding was a positive predictor of the perceived risk-reduction effectiveness of NBS, and place identity was a negative predictor of supportive attitude. Among the risk-related variables, threat appraisal had a negative effect on perceived risk-reduction effectiveness and supportive attitude, while well-communicated information, trust in flood risk management, and perceived co-benefit were positive predictors. This dissertation shows that the place and risk attributes of NBS shape people’s perceptions of NBS. In order to optimize NBS implementation, it is necessary to consider the meanings and values held in place before project implementation and how these attributes interact with individual and/or community risk profiles and other contextual factors. With the increasing necessity of using NBS to lower flood risks, these results make important suggestions for future NBS project strategy and NBS governance.
Learning the causal structures from observational data is an omnipresent challenge in data science. The amount of observational data available to Causal Structure Learning (CSL) algorithms is increasing as data is collected at high frequency from many data sources nowadays. While processing more data generally yields higher accuracy in CSL, the concomitant increase in the runtime of CSL algorithms hinders their widespread adoption in practice. CSL is a parallelizable problem. Existing parallel CSL algorithms address execution on multi-core Central Processing Units (CPUs) with dozens of compute cores. However, modern computing systems are often heterogeneous and equipped with Graphics Processing Units (GPUs) to accelerate computations. Typically, these GPUs provide several thousand compute cores for massively parallel data processing.
To shorten the runtime of CSL algorithms, we design efficient execution strategies that leverage the parallel processing power of GPUs. Particularly, we derive GPU-accelerated variants of a well-known constraint-based CSL method, the PC algorithm, as it allows choosing a statistical Conditional Independence test (CI test) appropriate to the observational data characteristics.
Our two main contributions are: (1) to reflect differences in the CI tests, we design three GPU-based variants of the PC algorithm tailored to CI tests that handle data with the following characteristics. We develop one variant for data assuming the Gaussian distribution model, one for discrete data, and another for mixed discrete-continuous data and data with non-linear relationships. Each variant is optimized for the appropriate CI test leveraging GPU hardware properties, such as shared or thread-local memory. Our GPU-accelerated variants outperform state-of-the-art parallel CPU-based algorithms by factors of up to 93.4× for data assuming the Gaussian distribution model, up to 54.3× for discrete data, up to 240× for continuous data with non-linear relationships and up to 655× for mixed discrete-continuous data. However, the proposed GPU-based variants are limited to datasets that fit into a single GPU’s memory. (2) To overcome this shortcoming, we develop approaches to scale our GPU-based variants beyond a single GPU’s memory capacity. For example, we design an out-of-core GPU variant that employs explicit memory management to process arbitrary-sized datasets. Runtime measurements on a large gene expression dataset reveal that our out-of-core GPU variant is 364 times faster than a parallel CPU-based CSL algorithm. Overall, our proposed GPU-accelerated variants speed up CSL in numerous settings to foster CSL’s adoption in practice and research.
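The sequential core that these GPU variants parallelize can be sketched as follows. This is a simplified, CPU-only skeleton phase of the constraint-based PC algorithm with a pluggable CI test (here a toy independence oracle for a three-variable chain), not the thesis's actual GPU implementation; the per-edge CI tests inside the loops are exactly the work distributed across GPU threads.

```python
# Simplified skeleton phase of the PC algorithm with a pluggable CI test.
# Each ci_test(x, y, S) call is independent, which is what makes the
# algorithm amenable to massively parallel execution on GPUs.
from itertools import combinations

def pc_skeleton(nodes, ci_test):
    # Start from the fully connected undirected graph.
    adj = {v: set(nodes) - {v} for v in nodes}
    level = 0
    # Grow the conditioning-set size while some node has enough neighbours.
    while any(len(adj[x]) - 1 >= level for x in nodes):
        for x in nodes:
            for y in list(adj[x]):
                # Conditioning sets are drawn from x's neighbours minus y.
                for S in combinations(adj[x] - {y}, level):
                    if ci_test(x, y, set(S)):
                        adj[x].discard(y)   # independence found:
                        adj[y].discard(x)   # delete the edge x–y
                        break
        level += 1
    return adj

# Toy CI oracle for the chain A -> B -> C: A is independent of C given {B}.
def oracle(x, y, S):
    return {x, y} == {"A", "C"} and "B" in S

skeleton = pc_skeleton(["A", "B", "C"], oracle)
print(sorted(skeleton["A"]))  # ['B']  (edge A–C removed at level 1)
```

In practice the oracle is replaced by a statistical CI test matched to the data (e.g., a Fisher's z test for Gaussian data or a chi-squared test for discrete data), which is why the thesis develops one GPU variant per test family.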
This work explores the use of different generative AI tools in the design of MOOC courses. The authors employed a variety of AI-based tools, including natural language processing tools (e.g., ChatGPT) and multimedia content authoring tools (e.g., DALL-E 2, Midjourney, Tome.ai), to assist in the course design process. The aim was to address the unique challenges of MOOC course design, which include creating engaging and effective content, designing interactive learning activities, and assessing student learning outcomes. The authors identified positive results from the incorporation of AI-based tools, which significantly improved the quality and effectiveness of MOOC course design. The tools proved particularly effective in analyzing and categorizing course content, identifying key learning objectives, and designing interactive learning activities that engaged students and facilitated learning. Moreover, the use of AI-based tools streamlined the course design process, significantly reducing the time required to design and prepare the courses. In conclusion, the integration of generative AI tools into the MOOC course design process holds great potential for improving the quality and efficiency of these courses. Researchers and course designers should consider the advantages of incorporating generative AI tools into their design process to enhance their course offerings and facilitate student learning outcomes while also reducing the time and effort required for course development.