Refine
Year of publication
- 2022 (252)
Document Type
- Doctoral Thesis (252)
Is part of the Bibliography
- yes (252)
Keywords
- Klimawandel (6)
- climate change (5)
- Arabidopsis thaliana (3)
- Bundeswehr (3)
- Digitalisierung (3)
- Epigenetik (3)
- Kunstgeschichte (3)
- Modellierung (3)
- Röntgenspektroskopie (3)
- Transkriptionsfaktoren (3)
- machine learning (3)
- modelling (3)
- obesity (3)
- transcription factors (3)
- Adipositas (2)
- Antifouling (2)
- Antisemitismusforschung (2)
- Arbeitszufriedenheit (2)
- Astrophotonik (2)
- Bewegungsökologie (2)
- Deep Learning (2)
- Deutschland (2)
- Diffusion (2)
- Digitale Transformation (2)
- Dokumentarische Methode (2)
- Epigrafik (2)
- Exoplaneten (2)
- Extremereignisse (2)
- Freie-Elektronen-Laser (2)
- Friedhof (2)
- Germany (2)
- Hochwasser (2)
- Holocaust (2)
- Identität (2)
- Informationsstruktur (2)
- Judaistik (2)
- Jüdische Studien (2)
- Kalter Krieg (2)
- Koexistenz (2)
- Lehrkräftebildung (2)
- Mindestlohn (2)
- Mitochondrien (2)
- NATO (2)
- Nicht-Fulleren-Akzeptoren (2)
- Opalinus Clay (2)
- Opalinuston (2)
- PHREEQC (2)
- Photoelektronenspektroskopie (2)
- Potsdam/Geschichte (2)
- Produktion (2)
- Sepulkralkultur (2)
- Spracherwerb (2)
- Städte (2)
- Thiouracil (2)
- cities (2)
- cue-based retrieval (2)
- depression (2)
- diffusion (2)
- digitalization (2)
- documentary method (2)
- epigenetics (2)
- exoplanets (2)
- extreme events (2)
- flood (2)
- giro espacial (2)
- hydraulic fracturing (2)
- identity (2)
- information structure (2)
- internationale Beziehungen (2)
- job satisfaction (2)
- knowledge graphs (2)
- maschinelles Lernen (2)
- materielle Kultur (2)
- metabolism (2)
- migration (2)
- minimum wage (2)
- mitochondria (2)
- movement ecology (2)
- numerical modelling (2)
- numerische Modellierung (2)
- photoelectron spectroscopy (2)
- production (2)
- reactive transport (2)
- reaktiver Transport (2)
- sentence processing (2)
- social media (2)
- spatial turn (2)
- surface modification (2)
- teacher education (2)
- teacher professional development (2)
- thiouracil (2)
- travel literature (2)
- ultrafast (2)
- ultraschnell (2)
- 20. Jahrhundert (1)
- 21. Jahrhundert (1)
- 2D Numerical Modelling (1)
- 3C (1)
- 3D Druck (1)
- 3D printing (1)
- 7-Methylheptadecan (1)
- 7-methylheptadecane (1)
- AFLP (1)
- APX2 (1)
- ARMS (1)
- ASPECT (1)
- Absolute (1)
- Access Control (1)
- Adoption (1)
- Adoptionspflegezeit (1)
- African medicinal plants (1)
- Afrika (1)
- Afrikanische Heilpflanzen (1)
- Afrikapolitik (1)
- Agentenbasierte Modelle (1)
- Akklimatisierung (1)
- Akte der Staatsbürgerschaft (1)
- Aktienmarkt (1)
- Aktivkohle (1)
- Aldehydoxidase (1)
- Algorithmic Game Theory (1)
- Algorithmische Spieltheorie (1)
- Alkoholkonsum (1)
- Allemagne (1)
- Alliierte Militärverbindungsmissionen (1)
- Alphabetisierung und Grundbildung Erwachsener (1)
- Alternativmethoden (1)
- Amaranthus retroflexus (1)
- Anden (1)
- Andes (1)
- Anfangsdaten (1)
- Angiogenese (1)
- Angst (1)
- AnlV (1)
- Annamites (1)
- Annemarie Schwarzenbach (1)
- Anonyme Kindesabgabe (1)
- Anpassung (1)
- Anregungs-Abfrage-Spektroskopie (1)
- Antarctica (1)
- Antarktis (1)
- Antikörper (1)
- Antikörpercharakterisierung (1)
- Antikörpervalidierung (1)
- Aphasie (1)
- Arabica Kaffeebohnen (1)
- Arabica coffee beans (1)
- Arabidopsis (1)
- Aragon (1)
- Archivanalyse (1)
- Arctic nearshore zone (1)
- Argentina (1)
- Argentinien (1)
- Afrikawissenschaft (1)
- Aromaticity (1)
- Aromatizität (1)
- Artefakte (1)
- Assay High-resolution mass spectrometry (1)
- Assay hochauflösende Massenspektrometrie (1)
- Assemblierung (1)
- Astrophotonics (1)
- Atemwegserkrankungen (1)
- Atomwaffen (1)
- Attributsicherung (1)
- Aufkonversion (1)
- Aufnahme (1)
- Aufzählungsalgorithmen (1)
- Auger-Meitner electron spectroscopy (1)
- Ausführungssemantiken (1)
- Auslandseinsätze (1)
- Auteur Theorie (1)
- Autonomie (1)
- Autophagie (1)
- Außenpolitik (1)
- BMI (1)
- Babyklappe (1)
- Bank (1)
- Baryt (1)
- Baumgrenze (1)
- Bayes'sche Mehrebenenregression (1)
- Bayes'sche Modelle (1)
- Bayesian inversion (1)
- Bayesian model (1)
- Bayesian multi-level logistic regression (1)
- Bead (1)
- Begabung (1)
- Begründungsfiguren (1)
- Benchmark (1)
- Bene Israel (1)
- Beschallung (1)
- Beschäftigungseffekte (1)
- Betriebliche Altersversorgung (1)
- Bevölkerung (1)
- Bevölkerungsdichte (1)
- Bewegung (1)
- Bewegungsorientierte Weiterbildung (1)
- Bewertung (1)
- Bildungspolitische Programme (1)
- Biofilm (1)
- Bioinspiration (1)
- Biomodification (1)
- Bioraffinerie (1)
- Bioreaktor (1)
- Bisulfit Sequenzierung (1)
- Blattverschiebung (1)
- Blaulicht (1)
- Blockchain (1)
- Blockchain Governance (1)
- Body-Mass-Index (1)
- Boolean satisfiability (1)
- Boolesche Erfüllbarkeit (1)
- Boten-RNA (mRNA) (1)
- Boundary Value Problems (1)
- Brandenburg-Preußen (1)
- Brandenburg-Prussia (1)
- Breitengrad (1)
- Bruchausbreitung (1)
- Bundesnachrichtendienst (1)
- Bundeswehrkommando Ost (1)
- Business Process Management (1)
- CA-SLA (1)
- CBD (1)
- CBM (1)
- CDK5RAP2 (1)
- CLSM (1)
- CRISPR/Cas9 (1)
- CVD (1)
- CaM4 (1)
- Cannabidiol (CBD) (1)
- Case Management (1)
- Cellulose-Bindung (1)
- Central Europe (1)
- Centrosom (1)
- Cep192 (1)
- Chalkogenide (1)
- Charge recombination (1)
- Chenopodium album (1)
- Chernoff-Hoeffding theorem (1)
- Chloroplast (1)
- Chloroplasten (1)
- Chukotka vegetation (1)
- Clathrin-bedeckte Vesikel (1)
- Climate Change Adaptation (1)
- Closet Indexing (1)
- Cloud Computing (1)
- CloudRAID (1)
- CloudRAID for Business (1)
- Co-Transfektion (1)
- Codierungstheorie (1)
- Cold War (1)
- Colitis ulcerosa (1)
- Colloid Chemistry (1)
- Computational Hardness (1)
- Computersimulation (1)
- Computertomography (1)
- Computing (1)
- Continental Rifts (1)
- Crisis Communication (1)
- Cryptography (1)
- Cue-abhängiger Abruf (1)
- Cyanobakterien-Biomarker (1)
- Cybersecurity (1)
- Cybersicherheit (1)
- Cytosin-Methylierung (1)
- DDR (1)
- DDR-Geschichte (1)
- Dammbruchfluten (1)
- Data Modeling (1)
- Data Profiling (1)
- Data-Mining (1)
- Datenaufbereitung (1)
- Datenbanken (1)
- Datenintegration (1)
- Datenmodellierung (1)
- Datenqualität (1)
- Datenstromverarbeitung (1)
- Datensynthese (1)
- Datentransformation (1)
- Datenvisualisierung (1)
- Datura stramonium (1)
- De novo Assemblierung (1)
- Deakklimatisierung (1)
- Deformation (1)
- Deformationsmechanismen (1)
- Dekolonisation (1)
- Dempster-Shafer-Theorie (1)
- Dempster–Shafer theory (1)
- Depression (1)
- Design Thinking (1)
- Design Thinking Bildung (1)
- Design Thinking education (1)
- Detektionssystem (1)
- Deutsch (1)
- Deutsche Literatur (1)
- Dezentralität (1)
- Diagnostik (1)
- Dialektik (1)
- Dialog (1)
- Dialog KI (1)
- Dichteeffekte (1)
- Dielektrophorese (1)
- Differential mobility analysis (DMA) (1)
- Differentielle Mobilitätsanalyse (DMA) (1)
- Differenzielle Genexpression (1)
- Digital Health (1)
- Digital Platforms (1)
- Digital Transformation (1)
- Digitale Gesundheit (1)
- Digitale Plattformen (1)
- Digitalization (1)
- Digitalstrategie (1)
- Dimensionsreduktion (1)
- Dirac Operator (1)
- Diracoperator (1)
- Dissertation (1)
- Distanz (1)
- Diät (1)
- Drehbuch (1)
- Drug Delivery (1)
- Dubletten (1)
- Duplikaterkennung (1)
- Dynamik (1)
- E. coli (1)
- ENTH domain proteins (1)
- ENTH-Domänenproteine (1)
- ERP (1)
- ETF (1)
- Early Starvation 1 (1)
- Echo-State Netzwerk (1)
- Edit-Distanz (1)
- Effekt (1)
- Einbettungen (1)
- Einheit des Christentums (1)
- Einwanderung (1)
- Eisbergkalbung (1)
- Eisschildmodellierung (1)
- Electrochemistry (1)
- Elektrochemie (1)
- Ella Maillart (1)
- Elternrecht (1)
- Endlagerung nuklearer Abfälle (1)
- Energiespeicher (1)
- Engagement für die Führungskraft (1)
- Enterprise File Synchronization and Share (1)
- Environmental Psychology (1)
- Epigenom Editierung (1)
- Epstein-Barr Virus-induziertes Gen 3 (1)
- Erbringung von Verwaltungsleistungen (1)
- Erdbeben (1)
- Erfüllbarkeitsschwellwert (1)
- Erigeron annuus (1)
- Erigeron canadensis (1)
- Erinnerungskultur (1)
- Erkennung von Metadaten (1)
- Ethnologie (1)
- Europe (1)
- Evaluation (1)
- Evidenztheorie (1)
- Execution Semantics (1)
- Exposition (1)
- Extremniederschläge (1)
- Eye-tracking (1)
- FLASH (1)
- Fabrikation (1)
- Fallbeschreibungen (1)
- Fallmanagement (1)
- FastScape (1)
- Federal Armed Forces of Germany (1)
- Federal Foreign Intelligence Service (1)
- Fehlererkennung (1)
- Fehlverhalten (1)
- Femtosekundenlaser-Bearbeitungsmethode (1)
- Fernando Vallejo (1)
- Filmmusik (1)
- Filmmusikanalyse (1)
- Finanzberichterstattung (1)
- Finanzmarkt (1)
- Fintech (1)
- Flares (1)
- Flugabwehr (1)
- Fluorescent Dyes (1)
- Fluoreszenzfarbstoffe (1)
- Fluoreszenzmikroskopie (1)
- Fluorophore (1)
- Fluorophores (1)
- Flutgefährdung (1)
- Fokus (1)
- Fokusrealisierung (1)
- Formationsschaden (1)
- Formgedächtnis (1)
- Fortbildung (1)
- Frankreich (1)
- Frederick William (1)
- Free Electron Laser (1)
- Friedrich III./I. (1)
- Friedrich Wilhelm (1)
- Friedrich Wilhelm I. (1)
- Future strategy (1)
- Föderalismus (1)
- GDF15 (1)
- GLOF (1)
- GLOF (Gletscherseeausbruchsflut) (1)
- GPT (1)
- GWAS (1)
- Galactic center (1)
- Game Dynamics (1)
- Gammaastronomie (1)
- Gasadsorption (1)
- Gashydrate (1)
- Gaussian processes (1)
- Gauß-Prozesse (1)
- Gedenken (1)
- Gefallenenehrung (1)
- Gemeinschaftsgarten-Experiment (1)
- Generierung freier Ladungsträger (1)
- Genom-Scan (1)
- Gentechnik (1)
- Geodynamic Modelling (1)
- Geodynamik (1)
- Geodynamische Modellierung (1)
- Geomagnetismus (1)
- Geometrie (1)
- Geothermie (1)
- German (1)
- German reunification (1)
- Gerüste aus Fasergeflecht (1)
- Geschäftsmodelle (1)
- Geschäftsprozessmanagement (1)
- Gesundheit (1)
- Gewichtsverlust (1)
- Gewässerfernerkundung (1)
- Glasfaser (1)
- Glykochemie (1)
- Glykogele (1)
- Glykokonjugat (1)
- Glykokonjugate (1)
- Glykomonomer (1)
- Glykopolymer (1)
- Glykopolymer-Elektrolyt (1)
- Glykopolymere (1)
- Gouvernementalität (1)
- GraalVM (1)
- Grabenbrüche (1)
- Graphentheorie (1)
- Gravitationswellen (1)
- Greenland Ice Sheet (1)
- Großformat (1)
- Grundgestein (1)
- Gruppe der Sowjetischen Streitkräfte in Deutschland (1)
- Grönländisches Eisschild (1)
- H3K4me (1)
- H3K9ac (1)
- HAC1 (1)
- HIV (1)
- HS transcriptional memory (1)
- HS-Transkriptionsgedächtnis (1)
- HSE (1)
- HSF (1)
- HSFA2 (1)
- HUVEC (1)
- Hautmodell (1)
- Hefe (1)
- Hegel (1)
- Heimat (1)
- HepG2 hepatocytes (1)
- HepG2-Zellen (1)
- Herzinfarkt (1)
- Herzkreislauferkrankungen (1)
- Himalaya-Gebirge (1)
- Himalayas (1)
- HipHop (1)
- Histon Methylierung (1)
- Hitting Sets (1)
- Hitze (1)
- Hitzeanpassung (1)
- Hitzeschock-Transkriptionsfaktor (1)
- Hitzestress-Gedächtnis (1)
- Hochleistungscomputer (1)
- Hochwasserrisiko (1)
- Hochwasserwirkungspfad (1)
- Homelessness (1)
- Horizontal flux (1)
- Hydrodynamik (1)
- Hydrogele (1)
- Hydrologie (1)
- Hypoxie (1)
- Idealismus (1)
- In vitro transcription technology (1)
- In-vitro-Transkriptionstechnologie (1)
- Indian caste (1)
- Indischen Kaste (1)
- Industrie 4.0 (1)
- Industry 4.0 (1)
- Induzierte Seismizität (1)
- Infrared matrix-assisted laser desorption ionization (IR-MALDI) (1)
- Injektion (1)
- Injektionsschema (1)
- Innovationen in den Städten (1)
- Integrated spectrograph (1)
- Integration (1)
- Integrin (1)
- Intelligence history (1)
- Interaktion (1)
- Internationaler Wettbewerb (1)
- Internationales Wettbewerbsrecht (1)
- Intraspezifische Variation (1)
- Investmentfonds (1)
- Ion mobility spectrometry (IMS) (1)
- Ionenmigration (1)
- Ionenmobilitätspektrometrie (1)
- Ionenmobilitätsspektrometrie (IMS) (1)
- Ionenmobilitätsspektrometrie (IMS) (1)
- Isoflavonoide (1)
- Isotopenfraktionierung (1)
- JUB1 (1)
- Jewish State (1)
- Jews and Christians on the Iberian Peninsula (1)
- Jüdischer Staat (1)
- Kaffee (1)
- Kaffeenebenprodukte (1)
- Kaffeeverarbeitung (1)
- Kalman Filter (1)
- Kalman filter (1)
- Kalzium (1)
- Kanalisierung (1)
- Kapillarblutfüllung (1)
- Kapitalanlagerecht (1)
- Kapitalmarkt (1)
- Kastilien (1)
- Kindsabgabe (1)
- Kindstötung (1)
- Klassifikation der Landbedeckung (1)
- Klassifikator-Kalibrierung (1)
- Kleinwinkelröntgenstreuung (1)
- Kleinwinkelstreuung (1)
- Klimawandelanpassung (1)
- Knochen (1)
- Koexpression Netzwerk Analysen (1)
- Kohlenstoffnitriden (1)
- Kokultur (1)
- Kolloidchemie (1)
- Kolonialismus (1)
- Komplexität der Berechnung (1)
- Komponieren (1)
- Konstruktion von Wissensgraphen (1)
- Kontaktallergie (1)
- Kontextkonsistenz (1)
- Kontrastwerte (1)
- Kontrollüberzeugung (1)
- Konversation (1)
- Konversationsanalyse (1)
- Konversion (1)
- Korpusexploration (1)
- Kraft (1)
- Kraftausdauer (1)
- Krankheitsökologie (1)
- Krisenkommunikation (1)
- Kristallisation von Dünnschichten (1)
- Kryptografie (1)
- Kulturmanagement (1)
- Kundenverhalten (1)
- Kupfer (1)
- Kältestress (1)
- LAVESI (1)
- LIBS (1)
- Laborexperiment (1)
- Lactuca serriola (1)
- Ladungsrekombination (1)
- Lakunen (1)
- Landwirtschaft (1)
- Langerhans Zellen (1)
- Larix (1)
- Larix cajanderi (1)
- Lars von Trier (1)
- Laserinduzierte Inkandeszenz (LII) (1)
- Laserinduzierte Plasmaspektroskopie (LIBS) (1)
- Laufzeittomographie (1)
- Layer-by-Layer Glykopolymerbeschichtung (1)
- Lebenskunst (1)
- Lebensmittelpreise (1)
- Lebensstil (1)
- Legionella (1)
- Legionellen (1)
- Lehrerfortbildung (1)
- Lehrkräftefortbildungen (1)
- Leihmutterschaft (1)
- Leistungsentwicklung (1)
- Lektin (1)
- Lernbegründungen (1)
- Lipophagie (1)
- Literaturwissenschaft (1)
- Lithosphäre (1)
- Live-Migration (1)
- Lokalisierung von Deformation (1)
- Lorentzgeometrie (1)
- Lorentzian Geometry (1)
- Luftqualität (1)
- Luftwaffe (1)
- Lysosom (1)
- Lärche (1)
- Lösung (1)
- Lösungsassemblierung (1)
- MES (1)
- MSAP (1)
- Macht (1)
- Magma-Entgasung (1)
- Magnetfeldmodellierung (1)
- Magnetoelastizität (1)
- Makrophagen-Aktivierung (1)
- Makroökonomische Modellierung (1)
- Mandarin (1)
- Markov chains (1)
- Markovketten (1)
- Maschinen (1)
- Massenspektrometrie (1)
- Matrix-unterstützte Laser-Desorption/Ionisation (IR-MALDI) (1)
- Mechanobiologie (1)
- Medizinprodukt (1)
- Meeresspiegelanstieg (1)
- Mehrebenen-System (1)
- Mehrebenenmodelle (1)
- Mehrebenensystem (1)
- Mehrklassen-Klassifikation (1)
- Mehrsprachigkeit (1)
- Menschen, die mit HIV leben (1)
- Messenger RNA (mRNA) (1)
- Metabolismus (1)
- Methoden: analytisch (1)
- Methoden: numerisch (1)
- Microsomal (1)
- Migration (1)
- Mikrofluidik (1)
- Mikroplastik (1)
- Mikroplatte (1)
- Mikropolitik (1)
- Mikrosatelliten (1)
- Mikrosomal (1)
- Mikrostrukturelle (1)
- Milcheinnahme (1)
- Militärgeschichte (1)
- Militärische Rakete (1)
- Mineralisierung (1)
- Mittelalter (1)
- Mitteleuropa (1)
- Mitteltemperaturübergang (1)
- Model Comparison (1)
- Model-Daten Integration (1)
- Modell (1)
- Modellierung der Wassertrübung (1)
- Modellkalibrierung (1)
- Modellvergleich (1)
- Moderation (1)
- Molekulardynamik (1)
- Mortalität (1)
- Mortalitäts-Minimal-Temperatur (1)
- Multiplex (1)
- Multiproteinkomplexbildung (1)
- Muschelnachahmend (1)
- Museum (1)
- Museumswissenschaft (1)
- Musik im 20. Jahrhundert (1)
- Musikdramaturgie (1)
- Muskeldurchblutung (1)
- N-doped carbon (1)
- N-dotierter Kohlenstoff (1)
- NEXAFS (1)
- NTF (1)
- Nachrichtendienstgeschichte (1)
- Named-Entity-Erkennung (1)
- Nanopartikel (1)
- Nash Equilibrium (1)
- Nationalidentität (1)
- Naturgefahr (1)
- Negotiation (1)
- Negotiation future (1)
- Negotiation management (1)
- Negotiation trends (1)
- Neoliberale Natur (1)
- Neotektonik (1)
- Nepal (1)
- Nephropathie (1)
- Network Creation Game (1)
- Netzwerktheorie (1)
- Neuchristen (1)
- Neuroendocrine tumors (1)
- Neuropathie (1)
- Neuste Geschichte (1)
- New Public Governance (1)
- Nicht-Langevin-Systeme (1)
- Nicolas Bouvier (1)
- Non-Langevin systems (1)
- Non-fullerene acceptors (1)
- Non-thermal radiation sources (1)
- Nordafrika (1)
- Nuklearwaffen (1)
- Nukleobasen (1)
- Numerische 2D Modellierung (1)
- Obdachlosigkeit (1)
- Oberfläche (1)
- Oberflächenassemblierung (1)
- Oberflächenmodifikation (1)
- Oberflächenmodifizierung (1)
- Oberflächenprozesse (1)
- On-Sky-Tests (1)
- Online-Persönlichkeit (1)
- Operationalisierung (1)
- Organic solar cells (1)
- Organische Solarzellen (1)
- Osteogenese (1)
- PCA (1)
- PEEU (1)
- PLM (1)
- PLSR (1)
- PM10, PM2, PM1 (1)
- PVDF-based polymers (1)
- Pace-of-Life Syndrom (1)
- Paleoseismologie (1)
- Palmitat (1)
- Paläoklimatologie (1)
- Paläoökologie (1)
- Palästina (1)
- Pannexin 1 (1)
- Paramutation (1)
- Paxillin (1)
- Peptid (1)
- Performance (1)
- Performanz (1)
- Permafrostsedimente (1)
- Perowskit Solarzellen (1)
- Perowskit Vorläuferstadien (1)
- Perowskite (1)
- Persönlichkeitsmerkmale (1)
- Pflanzenanpassung (1)
- Pflanzenforschung (1)
- Pflanzenökologie (1)
- PhD thesis (1)
- Photochemische Reaktionen (1)
- Phylogeographie (1)
- Physik Lehramt (1)
- Physiologie (1)
- Pickering Emulsion (1)
- Piraterie (1)
- Planktonnahrungsnetz (1)
- Plantago major (1)
- Platform Economy (1)
- Plattform-Ökosysteme (1)
- Plattformökonomie (1)
- Politische Ökologie (1)
- Poly(2-oxazoline) (1)
- Polyether (1)
- Polykontexturalität (1)
- Polymerchemie (1)
- Polymere (1)
- Polymere auf PVDF-Basis (1)
- Polyneuropathie (1)
- Popkultur (1)
- Popmusik (1)
- Populationspersistenz (1)
- Populationsstruktur (1)
- Porositätsanalyse (1)
- Porphyrine (1)
- Porphyrins (1)
- Postkolonialismus (1)
- Preis der Anarchie (1)
- Price of Anarchy (1)
- Process Modeling (1)
- Produktionssteuerung (1)
- Produktlebenszyklus (1)
- Programmierabstraktionen (1)
- Programmierwerkzeuge (1)
- Projektion (1)
- Prosodie (1)
- Prosody (1)
- Protein (1)
- Protein Microcapsules (1)
- Protein Modifizierung (1)
- Protein-Polymer-Konjugat (1)
- Proteine (1)
- Proteinkinase A (1)
- Proteinmikrokapseln (1)
- Prozessexpertise (1)
- Prozessgestaltung (1)
- Prozessmodellierung (1)
- Prozessverbesserung (1)
- Präexistente Musik (1)
- Psycholinguistik (1)
- Public Management (1)
- Punicalagin (1)
- Pupil Remapper (1)
- Pyridone (1)
- Pyridones (1)
- Qualitative Sozialforschung (1)
- Quantendynamik (1)
- Quantizer (1)
- Quantum (1)
- Quellen nichtthermischer Strahlung (1)
- RIXS (1)
- RNA-Sequenzierung (1)
- RNA-sequencing (1)
- Radverkehr (1)
- Raketenabwehr (1)
- Raman spectroscopy (1)
- Raman-Spektroskopie (1)
- Randwertprobleme (1)
- Rangerhöhung (1)
- Rap (1)
- Rassendiskriminierung (1)
- Rauchen (1)
- Reaktivierung von Störungszonen (1)
- Recht auf Kenntnis der eigenen Abstammung (1)
- Recht auf informationelle Selbstbestimmung (1)
- Rechte (1)
- Rechte einfordern (1)
- Reifegradmodell (1)
- Reiseliteratur (1)
- Rekonstruktive Sozialforschung (1)
- Rekurrenzplot (1)
- Relaxor-ferroelektrische Polymere (1)
- Retinopathie (1)
- Richard Wagner (1)
- Rift (1)
- Risikobereitschaft (1)
- Risikobewertungen (1)
- Risikoverhalten (1)
- Risk Behaviour (1)
- Risserkennung (1)
- Risstransmissivität (1)
- Rotatorien (1)
- Russian Arctic (1)
- Röntgen-Refraktions Bildgebung (1)
- Röntgenbeugung (1)
- Röntgenstrahlen (1)
- Rücknahmewunsch (1)
- SAXS (1)
- Saccharomyces cerevisiae (1)
- Salzgestein (1)
- Samenspende (1)
- Satz von Chernoff-Hoeffding (1)
- Satzverarbeitung (1)
- Satzverständnisses (1)
- Schadensmodellierung (1)
- Schallemissionen (1)
- Schlaganfall (1)
- Scholastik (1)
- Schulentwicklung (1)
- Schulleitungen (1)
- Schulleitungshandeln (1)
- SchwHiAusbauG (1)
- Schätzung finanzieller Schäden (1)
- Science and Technology Studies (1)
- Seerecht (1)
- Seerechtsübereinkommen (1)
- Seesystemreaktionen (1)
- Selbstassemblierung (1)
- Selbstwirksamkeitserwartungen (1)
- Seltene Erdelemente (1)
- Senecio vulgaris (1)
- Seneszenz (1)
- Sensor (1)
- Sequenzierungstechnologien der nächsten Generation (1)
- Shotgun Sequenzierung (1)
- Siberia (1)
- Sibirien (1)
- Silene vulgaris (1)
- Siliziumdioxid-auf-Silizium (1)
- Sinnhaftigkeit der Arbeit (1)
- Slumming (1)
- Slumtourismus (1)
- Smalltalk (1)
- Smart Factory (1)
- Solanum nigrum (1)
- Soldatentod (1)
- Solidago canadensis (1)
- Solidago gigantea (1)
- Sommerekzem (1)
- Sonchus oleraceus (1)
- Sonication (1)
- Sorgerecht (1)
- Sorption (1)
- Sound (1)
- Sozioökonomie (1)
- Spannungsmessung (1)
- Speicher (1)
- Spieldynamik (1)
- Spionage (1)
- Sport (1)
- Sportpolitik (1)
- Sportwissenschaft (1)
- Sprachkompetenz (1)
- Staatsbürgerschaft (1)
- Stadtführungen (1)
- Stadtwachstumsraten (1)
- Steifheit (1)
- Sterne (1)
- Steuerbefreiung (1)
- Steuerbefreiung von Pensionskassen (1)
- Steuerrecht (1)
- Steuerung (1)
- Stigmatisierung (1)
- Stimuli (1)
- Stoffwechsel (1)
- Strahlungstransport (1)
- Strain Localisation (1)
- Structural and energetic disorder (1)
- Struktur-Eigenschafts-Beziehungen (1)
- Strukturbildung (1)
- Strukturelle und energetische Unordnung (1)
- Strukturgeologie (1)
- Strömungschemie (1)
- Stundenlöhne und Monatseinkommen (1)
- Subduktion (1)
- Subjektwissenschaftliche Lerntheorie (1)
- Sulfation (1)
- Supply Chain (1)
- Synthese (1)
- Synthetische Biologie (1)
- Systeme interagierender Partikel (1)
- Systemtheorie (1)
- THC (1)
- TRPV1 (1)
- Teilchenphysik (1)
- Teilnahmebegründungen (1)
- Temperatur (1)
- Temperaturänderungen (1)
- Texterkennung (1)
- Theranostic (1)
- Thioacetale (1)
- Thioacetals (1)
- Thioester (1)
- Thomas Bernhard (1)
- Tierarzneimittel (1)
- Torsion Experiments (1)
- Torsionsexperimente (1)
- Tragfähigkeit (1)
- Transformation product (1)
- Transformationsprodukt (1)
- Transkriptionsfaktor (1)
- Transversal-Hypergraph (1)
- Tripleurospermum inodorum (1)
- Trockenstress (1)
- Truppenabzug (1)
- Tundra-Taiga (1)
- Turkish (1)
- Typ 2 Diabetes (1)
- Typenbildung (1)
- Türkisch (1)
- U.S. Armed Forces (1)
- Umgangsrecht (1)
- Umwelt (1)
- Umweltpsychologie (1)
- Ungleichheit (1)
- United Nations Convention on the Law of the Sea (1)
- Unity in the Spirit of Christianity (1)
- Unternehmensdateien synchronisieren und teilen (1)
- Uran (1)
- Variationen terrestrischer Wasserspeicher (1)
- Vegetation von Tschukotka (1)
- Vegetationsveränderungen (1)
- Vegetationsveränderungen in der Subarktis (1)
- Verkehr (1)
- Vernetzer (1)
- Veronica persica (1)
- Versicherungsaufsichtsrecht (1)
- Vertical flux (1)
- Vertrauen (1)
- Veterinary drugs (1)
- Vollkorn (1)
- Vorhersage (1)
- WGCNA (1)
- Wald (1)
- Waldausdehnung (1)
- Wasser-Gesteins-Wechselwirkungen (1)
- Weiterbildungen (1)
- Wertschöpfungskooperation (1)
- Westgruppe der Truppen (1)
- Wettbewerbspolitik (1)
- Wetter (1)
- Wetterextreme (1)
- Widerstand (1)
- Wiedervereinigung (1)
- William Herschel Teleskop (1)
- William Herschel telescope (1)
- Windböen (1)
- Winderosion (1)
- Wirtsgesteinsskala (1)
- Wissenschaft der Logik (1)
- Wissensgraph (1)
- Wissensgraphen (1)
- Wissensgraphen Verfeinerung (1)
- Wohlstand (1)
- Wolke (1)
- World Trade Organization (1)
- X-ray diffraction (1)
- X-ray refraction imaging (1)
- X-ray spectroscopy (1)
- Zebularin (1)
- Zell-Umwelt-Interaktionen (1)
- Zellmigration (1)
- Zellsortierung (1)
- Zentralasien (1)
- Zugriffskontrolle (1)
- Zukunft (1)
- Zwei-Routen-Lesemodell (1)
- abiotic stress (1)
- abiotischer Stress (1)
- above-ground biomass (1)
- absolute (1)
- absolute Methode (1)
- absolute method (1)
- acclimatisation (1)
- achievement trajectory (1)
- acoustic emissions (1)
- acoustically levitated droplets (1)
- acquired dyslexia (1)
- actin (1)
- activated carbon (1)
- acts of citizenship (1)
- adaptation (1)
- adaptive Differenzierung (1)
- adaptive differentiation (1)
- additive Fertigung (1)
- additive manufacturing (1)
- adhesive (1)
- affective turn (1)
- agent-based model (1)
- agriculture (1)
- air quality (1)
- akustisch schwebende Tropfen (1)
- alcohol intake (1)
- allied military liaison missions (1)
- alte DNA (1)
- alte sedimentäre DNA (1)
- anaerobe Inkubationsexperimente (1)
- anaerobic incubation experiments (1)
- ancient DNA (1)
- ancient sedimentary DNA (1)
- angeborene Immunantwort (1)
- angewandte Mathematik (1)
- angiogenesis (1)
- animal (1)
- anthropogene Einwirkung (1)
- anthropogenic impact (1)
- anti-fouling (1)
- antibacterial aerosol (1)
- antibakterielles Aerosol (1)
- antibody (1)
- antibody characterization (1)
- antibody validation (1)
- antimicrobial (1)
- anxiety (1)
- aphasia (1)
- applied mathematics (1)
- archive analysis (1)
- arktischer Nahküstenbereich (1)
- arrayed waveguide grating (1)
- art of life (1)
- artefacts (1)
- assembly (1)
- assessment (1)
- astrophotonics (1)
- attribute assurance (1)
- autonomy (1)
- autophagy (1)
- bAV (1)
- bacterial infections (1)
- bakterielle Infektionen (1)
- bank (1)
- barite (1)
- basement rock (1)
- benchmark (1)
- benchmarking (1)
- betriebliche Grund- und Weiterbildung (1)
- bio-modification (1)
- biofilm (1)
- bioinspiration (1)
- bioreactor (1)
- biorefinery (1)
- bisulfite sequencing (1)
- black holes (1)
- blockchain (1)
- blockchain governance (1)
- blue light (1)
- bone (1)
- border (1)
- branched chain amino acids (1)
- budding yeast (1)
- business models (1)
- calcium (1)
- calmodulin (1)
- camera-trap (1)
- canalization (1)
- cannabidiol (CBD) (1)
- capital market (1)
- carbon nitrides (1)
- cardiovascular disease (1)
- caregiver (1)
- case descriptions (1)
- cell division (1)
- cell migration (1)
- cell shape (1)
- cell sorting (1)
- cell-environment interactions (1)
- cellulose biosynthesis inhibitor (1)
- cellulose synthesis (1)
- cellulose-binding (1)
- chalcogenide (1)
- chloroplast (1)
- chloroplasts (1)
- chronic pain (1)
- chronischer Schmerz (1)
- chronotopes (1)
- citizenship (1)
- claim-making (1)
- classifier calibration (1)
- clathrin-coated vesicles (1)
- closet indexing (1)
- cloud (1)
- co-delivery of multiple genes (1)
- co-expression network analysis (1)
- co-transfection (1)
- co-translational (1)
- co-translationale Assemblierung (1)
- coexistence (1)
- coffee (1)
- coffee by-products (1)
- coffee processing (1)
- cognitive modeling (1)
- cold stress (1)
- collaborative governance (1)
- commitment to the leader (1)
- common-garden experiment (1)
- comparative literature (1)
- complex network (1)
- complex systems (1)
- computed tomography (1)
- computer vision (1)
- computing (1)
- comunidades imaginadas (1)
- conical intersection (1)
- consumer behavior (1)
- contemporary Latin American literature (1)
- contentious politics (1)
- context consistency (1)
- conversation (1)
- conversation analysis (1)
- conversational ai (1)
- copper (1)
- corpus exploration (1)
- cosmic rays (1)
- crack detection (1)
- cronotopos (1)
- cross-cultural competence (1)
- crosslinker (1)
- cultural diversity (1)
- culturally responsive education (1)
- culture-general skills (1)
- cyanobacteria biomarker (1)
- cycling (1)
- cytosine methylation (1)
- dairy intake (1)
- damage modelling (1)
- data integration (1)
- data preparation (1)
- data profiling (1)
- data quality (1)
- data synthesis (1)
- data visualisation (1)
- data wrangling (1)
- data-mining (1)
- dbms (1)
- de novo assembly (1)
- deacclimation (1)
- decentrality (1)
- deduplication (1)
- deep learning (1)
- deformation (1)
- delirium (1)
- dementia (1)
- density effects (1)
- density-driven flow (1)
- design of experiments (1)
- detection system (1)
- deutsch (1)
- diagnostic assessment (1)
- dialectics (1)
- dialogue (1)
- dichtegetriebene Strömung (1)
- dielectrophoresis (1)
- diet (1)
- differential gene expression (1)
- digital government (1)
- digital sovereignty (1)
- digital strategy (1)
- digital transformation (1)
- digitale Souveränität (1)
- digitale Verwaltung (1)
- dimensionality reduction (1)
- discontinuous Galerkin methods (1)
- discrete beam combiner (1)
- disease ecology (1)
- diskreter Strahlkombinierer (1)
- distance (1)
- domain-specific knowledge graphs (1)
- domänenspezifisches Wissensgraphen (1)
- drought stress (1)
- drug delivery (1)
- dsps (1)
- dual-route model of reading (1)
- duplicate detection (1)
- dynamic classification (1)
- dynamics (1)
- dynamische Klassifikation (1)
- dünne Filme (1)
- e-Zigarette (1)
- e-cigarette (1)
- e-government (1)
- e-services (1)
- earthquakes (1)
- echo state network (1)
- ecology (1)
- econometrics (1)
- economic network (1)
- edit distance (1)
- educational programs (1)
- effect (1)
- eindeutige Spaltenkombination (1)
- electronic drug delivery system (EDDS) (1)
- electronic nicotin delivery system (ENDS) (1)
- elliptic partial differential equations (1)
- elliptische partielle Differentialgleichungen (1)
- embeddings (1)
- emotionality (1)
- employment effects (1)
- employment precariousness (1)
- energetic disorder (1)
- energetische Unordnung (1)
- energy storage (1)
- enhanced geothermal system (1)
- enhanced geothermal systems (EGS) (1)
- entity resolution (1)
- enumeration algorithms (1)
- environment (1)
- envy (1)
- epigenome editing (1)
- erworbene Dyslexie (1)
- espbench (1)
- espionage, intelligence (1)
- essential oils (1)
- estudios transareales (1)
- evidence theory (1)
- excited-state chemical shift (1)
- exercise (1)
- expansion microscopy (1)
- exposure (1)
- extreme precipitation (1)
- eye-tracking (1)
- fabrication (1)
- facilitation (1)
- family (1)
- fault reactivation (1)
- federalism (1)
- ferroelectric polymers (1)
- ferroelektrische Polymere (1)
- fiber mesh scaffolds (1)
- financial loss (1)
- financial market (1)
- finanzielle Schäden (1)
- flares (1)
- flood hazard (1)
- flood loss modelling (1)
- flood pathway (1)
- flood risk (1)
- floods (1)
- flow chemistry (1)
- fluorescence microscopy (1)
- focal adhesion (1)
- focus (1)
- focus realisation (1)
- fokale Adhäsionen (1)
- food prices (1)
- force (1)
- formation damage (1)
- fracture growth (1)
- fracture transmissivity (1)
- free charge generation (1)
- free charge recombination (1)
- free-electron laser (1)
- freie Ladungsträger Rekombination (1)
- function of cross-cultural competence (1)
- functional traits (1)
- future (1)
- galactic population (1)
- galaktische Population (1)
- galaktisches Zentrum (1)
- gamma astronomy (1)
- gas adsorption (1)
- gas hydrates (1)
- gastric inhibitory polypeptide receptor (1)
- gender (1)
- gender wage gap (1)
- generative model (1)
- genetic engineering (1)
- genome scan (1)
- geocriticism (1)
- geocrítica (1)
- geodynamics (1)
- geomagnetism (1)
- geometry (1)
- geothermal energy (1)
- gering literalisierte Beschäftigte (1)
- gering literalisierte Erwachsene (1)
- geringqualifizierte Beschäftigte (1)
- geschlechtsspezifische Lohnlücke (1)
- giftedness (1)
- giro afectivo (1)
- glacial refugia (1)
- glass fiber (1)
- glaziale Refugien (1)
- global (1)
- global hydrological modeling (1)
- globale hydrologische Modellierung (1)
- globalización (1)
- globalization (1)
- glyco chemistry (1)
- glycoconjugate (1)
- glycoconjugates (1)
- glycogels (1)
- glycomonomer (1)
- glycopolymer (1)
- glycopolymer electrolytes (1)
- glycopolymers (1)
- governance (1)
- gpt (1)
- grafting-from (1)
- graph theory (1)
- gravitational waves (1)
- green chemistry (1)
- ground motion modeling (1)
- group of Soviet forces in Germany (1)
- grüne Chemie (1)
- guided tours (1)
- hADSC (1)
- haltende isometrische Muskelaktion (1)
- hazard (1)
- health (1)
- heat (1)
- heat stress memory (1)
- hegemony (1)
- herbicide (1)
- heterogene Katalyse (1)
- heterogene Photokatalyse (1)
- heterogeneous catalysis (1)
- heterogeneous computing (1)
- heterogeneous photocatalysis (1)
- heterogenes Rechnen (1)
- high throughput sequencing (1)
- high-achieving students; high-performing students; high-achievers (1)
- high-performance computing (1)
- high-school education (1)
- hiphop (1)
- histone methylation (1)
- hitting sets (1)
- hoher Durchsatz Sequenzierung (1)
- holding isometric muscle action (1)
- homeland (1)
- horizontaler Fluss (1)
- host rock scale (1)
- hourly wages and monthly earnings (1)
- human aldehyde oxidase (1)
- human induced pluripotent stem cells (1)
- human keratinocytes (1)
- human-scale (1)
- humane Keratinozyten (1)
- humaninduzierte pluripotente Stammzellen (1)
- hybridization capture (1)
- hydraulische Risserzeugung (1)
- hydraulisches Aufbrechen (1)
- hydrodynamic modelling (1)
- hydrodynamics (1)
- hydrodynamische Modellierung (1)
- hydrogels (1)
- hydrology (1)
- hypoxia (1)
- hypoxic pulmonary vasoconstriction (1)
- hypoxische pulmonale Vasokonstriktion (1)
- hyrise (1)
- ice sheet modelling (1)
- ice-flow modeling (1)
- iceberg calving (1)
- idealism (1)
- identité nationale (1)
- idéntité (1)
- imagined communities (1)
- imaging (1)
- imdb (1)
- immigration (1)
- incumbent (1)
- individual choices (1)
- individuelle Entscheidungen (1)
- induced seismicity (1)
- inequality (1)
- ingestion (1)
- inhalative Applikation (1)
- initial data (1)
- injection (1)
- injection scheme (1)
- innate immune response (1)
- innovations in the city (1)
- institutional reform (1)
- insulin resistance (1)
- integrated optics (1)
- integration (1)
- integrierte Optik (1)
- integrierter Spektrograph (1)
- interacting particle systems (1)
- interaction (1)
- interactional competence (1)
- interactional linguistics (1)
- interaktionale Kompetenz (1)
- interaktionale Linguistik (1)
- intercultural communication (1)
- interkulturelle Kompetenz (1)
- international law of the sea (1)
- international relations (1)
- internationales Seerecht (1)
- interoperability (1)
- intraspecific variation (1)
- invasiv (1)
- invasive (1)
- ion migration (1)
- ion mobility spectrometry (1)
- ion mobility spectrometry (IMS) (1)
- isoflavonoids (1)
- isometric contraction (1)
- isometrische Kontraktion (1)
- isotopic fractionation (1)
- iterative Methoden zur Lösung linearer Systeme (1)
- iterative methods for sparse linear systems (1)
- justification figures (1)
- kardiovaskuläre Erkrankungen (1)
- klebend (1)
- knowledge graph construction (1)
- knowledge graph refinement (1)
- kognitive Modellierung (1)
- komplexe Systeme (1)
- komplexes Netzwerk (1)
- konfokales Laser-Scanning-Mikroskop (1)
- konische Kreuzung (1)
- kosmische Strahlung (1)
- kulturelle Außenbeziehungen (1)
- kulturelle Diversität (1)
- kultursensitive Bildung (1)
- kurdisch (1)
- körperliche Aktivität (1)
- lab experiment (1)
- lactobacillus (1)
- lacunae (1)
- lacuno-canalicular network (1)
- lake system responses (1)
- lakuno-kanaliculäres Netzwerk (1)
- land-cover classification (1)
- language acquisition (1)
- larch (1)
- laser induced breakdown spectroscopy (1)
- laser-induced breakdown spectroscopy (LIBS) (1)
- laser-induced incandescence (LII) (1)
- laserinduzierte Breakdownspektroskopie (1)
- latitudinal clines (1)
- law of the sea (1)
- layer-by-layer glycopolymer coating (1)
- leadership (1)
- learning justifications (1)
- lebende Materialien (1)
- lectin (1)
- leistungsstarke Schüler (1)
- life course (1)
- lifestyle (1)
- light-programmable viscosity (1)
- lipophagy (1)
- literatura comparada (1)
- literatura de viaje (1)
- literatura mundial (1)
- lithosphere (1)
- live migration (1)
- living materials (1)
- load-bearing (1)
- local adaptation (1)
- locus of control (1)
- lokale Anpassung (1)
- lubricant (1)
- lysosome (1)
- mRNA chemistry (1)
- mRNA-Chemie (1)
- machines (1)
- macro-economic modelling (1)
- macrophage activation (1)
- macrovascular complications (1)
- magma degassing (1)
- magnetic field modeling (1)
- magnetoelasticity (1)
- makrovaskuläre Komplikationen (1)
- maritime environmental protection (1)
- maritimer Umweltschutz (1)
- maschinelles Sehen (1)
- mass spectrometry (1)
- maturity model (1)
- meaningfulness at work (1)
- mechanobiology (1)
- metadata detection (1)
- methods: analytical (1)
- methods: numerical (1)
- miRNA (1)
- microbial diversity (1)
- microbial ecology (1)
- microfluidics (1)
- microplastics (1)
- microplate (1)
- micropolitics (1)
- microsatellites (1)
- microstructural deformation mechanisms (1)
- microvascular blood filling (1)
- microvascular complications (1)
- mid-temperature transition (1)
- mikrobielle Vielfalt (1)
- mikrobielle Ökologie (1)
- mikrovaskuläre Komplikationen (1)
- military history (1)
- mimesis (1)
- mineralization (1)
- minimum mortality temperature (1)
- misconduct (1)
- mit Anwendungen in der Laufzeittomographie, Seismischer Quellinversion und Magnetfeldmodellierung (1)
- mmdb (1)
- model (1)
- model calibration (1)
- model-data integration (1)
- molecular biomarkers (1)
- molecular dynamics (1)
- molekulare Biomarker (1)
- morphogenesis (1)
- mortality (1)
- movement-oriented professional development programmes (1)
- multi protein complex formation (1)
- multi-class classification (1)
- multi-level governance (1)
- multi-level system (1)
- multilevel modelling (1)
- multinational organizations (1)
- muscle blood flow (1)
- muscle oxygen saturation (1)
- muskuläre Sauerstoffsättigung (1)
- mussel-mimicking (1)
- myocardial infarction (1)
- mímesis (1)
- nacheiszeitliche Wiederbesiedlung (1)
- nachhaltige Chemie (1)
- nachhaltige Stadtentwicklung (1)
- named entity recognition (1)
- nanoparticle (1)
- narrativa latinoamericana contemporánea (1)
- national identity (1)
- near-surface monitoring (1)
- nephropathy (1)
- network theory (1)
- neuromuskuläre Funktionalität (1)
- neural conversation models (1)
- neuromuscular functionality (1)
- neuronale Konversationsmodelle (1)
- neuropathy (1)
- new public governance (1)
- next generation sequencing (1)
- nicht-Mendelsche Vererbung (1)
- nicht-einheimisch (1)
- nicht-uniforme Verteilung (1)
- nichtinvasive Diagnostik (1)
- nichtlineare Mechanik (1)
- no-lugares (1)
- non-Mendelian inheritance (1)
- non-destructive evaluation (1)
- non-fullerene acceptors (1)
- non-invasive Diagnostics (1)
- non-linear mechanics (1)
- non-native (1)
- non-places (1)
- non-uniform distribution (1)
- nuclear waste disposal (1)
- nuclear weapons (1)
- nucleobases (1)
- numerical relativity (1)
- numerische Relativität (1)
- nvm (1)
- oberirdische Biomasse (1)
- occupancy (1)
- ocean color remote sensing (1)
- odd chain fatty acids (1)
- offene Daten (1)
- old age (1)
- on-sky tests (1)
- online personality (1)
- open data (1)
- operationalisation; operationalization (1)
- optical character recognition (1)
- organic chemistry (1)
- organic matter (1)
- organic semiconductors (1)
- organic solar cells (1)
- organic synthesis (1)
- organische Chemie (1)
- organische Halbleiter (1)
- organische Solarzellen (1)
- organische Synthese (1)
- organisches Material (1)
- orientalism (1)
- orientalismo (1)
- osteogenesis (1)
- outburst floods (1)
- pace-of-life syndrome (1)
- paisajes urbanos (1)
- paleoclimatology (1)
- paleoecology (1)
- paleoseismology (1)
- palestine (1)
- palmitate (1)
- paper-based (1)
- papier-basiert (1)
- paramutation (1)
- participation justifications (1)
- particle physics (1)
- partnership trajectories (1)
- patria (1)
- peptide (1)
- performance (1)
- permafrost sediments (1)
- perovskite (1)
- perovskite precursors (1)
- perovskite solar cells (1)
- personality traits (1)
- petrothermales System (EGS) (1)
- phenotypic variation (1)
- phosphoglucan (1)
- photochemical reactions (1)
- phylogeography (1)
- physical activity (1)
- physics education (1)
- physiology (1)
- phänotypische Variation (1)
- piracy (1)
- planar lightwave circuit (1)
- planare Lichtwellenleiter (1)
- plankton food web (1)
- plant adaptation (1)
- plant cell wall (1)
- plant ecology (1)
- plant research (1)
- platform ecosystems (1)
- poly(2-oxazoline)s (1)
- polycontexturality (1)
- polyglot programming (1)
- polyglottes Programmieren (1)
- polymer (1)
- polymer chemistry (1)
- polyneuropathy (1)
- population density (1)
- population persistence (1)
- population structure (1)
- populations (1)
- porosity analysis (1)
- porous carbon (1)
- poröser Kohlenstoff (1)
- post-translational (1)
- post-translationale Assemblierung (1)
- postcolonial (1)
- postcolonial studies (1)
- postglacial recolonization (1)
- power (1)
- practical phases (1)
- prediction (1)
- prevention (1)
- primary human macrophages (1)
- primäre humane Makrophagen (1)
- probiotics (1)
- process design (1)
- process expertise (1)
- process improvement (1)
- product life cycle (1)
- production control (1)
- programmable friction (1)
- programming abstraction (1)
- programming tools (1)
- projection (1)
- protein (1)
- protein modification (1)
- protein polymer conjugate (1)
- proteins (1)
- psycholinguistics (1)
- public management (1)
- public service delivery (1)
- public transport (1)
- pulmonalarterielle glatte Muskelzellen (1)
- pulmonary artery smooth muscle cells (1)
- pump-probe spectroscopy (1)
- punicalagin (1)
- pupil remappers (1)
- qualitative research (1)
- qualitative social research (1)
- quantizer (1)
- quantum dynamics (1)
- quartäre Geochronologie (1)
- racial discrimination (1)
- radial flow (1)
- radiale Strömung (1)
- radiative transfer (1)
- random k-SAT (1)
- rank elevation (1)
- rap (1)
- rare earth elements (1)
- reactive oxygen species (ROS) (1)
- reactive transport simulation (1)
- reaktive Sauerstoffspezies (ROS) (1)
- reaktive Transportsimulation (1)
- reciprocal transplant experiment (1)
- reconstructive social research (1)
- recurrence plot (1)
- red meat (1)
- refugees (1)
- relaxor-ferroelectric polymers (1)
- reporting outcomes (1)
- resistance (1)
- respiratory diseases (1)
- retinopathy (1)
- reziprokes Transplantationsexperiment (1)
- rift (1)
- rights (1)
- risk attitudes (1)
- risk scores (1)
- root growth (1)
- rotes Fleisch (1)
- rotifer (1)
- ruderal (1)
- russische Arktis (1)
- räumliche Autokorrelation (1)
- récit de voyage (1)
- salt rock (1)
- satisfiability threshold (1)
- savoir vivre (1)
- schulpraktische Phasen (1)
- schwach überwachtes maschinelles Lernen (1)
- schwarze Löcher (1)
- science of logic (1)
- scm (1)
- sea-level rise (1)
- sehr hohe Energien (1)
- seismic source inversion (1)
- seismische Quellinversion (1)
- selbst-souveräne Identitäten (1)
- self-assembly (1)
- self-efficacy beliefs (1)
- self-esteem (1)
- self-rated health (1)
- self-sovereign identity (1)
- semantic representations (1)
- semantische Repräsentationen (1)
- senescence (1)
- sensor (1)
- sequence analysis (1)
- shape-memory (1)
- shoot apical meristem (1)
- shotgun sequencing (1)
- shrubification (1)
- silica-on-silicon (1)
- similarity-based interference (1)
- simultane Einbringung multipler Gene (1)
- slum tourism (1)
- slumming (1)
- small-angle scattering (1)
- smart factory (1)
- smoking (1)
- social inequality (1)
- social networking (1)
- social networking sites (1)
- socio-economy (1)
- soft matter (1)
- sorption (1)
- soziale Ungleichheit (1)
- sozialen Medien (1)
- soziales Netzwerk (1)
- spatial autocorrelation (1)
- species coexistence (1)
- speculative philosophy (1)
- spekulative Philosophie (1)
- spiropyran copolymer (1)
- stabile Isotope (1)
- stable isotopes (1)
- standardization (STANAG) (1)
- starch granule surface (1)
- starch phosphorylation (1)
- stars (1)
- statistical inference (1)
- statistische Inferenz (1)
- statistische Versuchsplanung (Design of Experiments) (1)
- stellar interferometry (1)
- stellar physics (1)
- stellare Interferometrie (1)
- stellare Physik (1)
- stiffness (1)
- stigmatization (1)
- stimuli (1)
- stock market (1)
- stream processing (1)
- stress measurement (1)
- strike-slip (1)
- stroke (1)
- structure formation (1)
- structure-property relationships (1)
- städtischer Wandel (1)
- subarctic vegetation change (1)
- subduction (1)
- subject-scientific learning theory (1)
- summer eczema (1)
- supply chain (1)
- surface processes (1)
- surface wave (1)
- surveillance (1)
- sustainable chemistry (1)
- sustainable development (1)
- symbolic communication (1)
- symbolische Kommunikation (1)
- synthesis (1)
- synthetic biology (1)
- systems theory (1)
- target enrichment (1)
- targeted therapy (1)
- task-based parallelism (1)
- temperature (1)
- temperature variations (1)
- terrestrial water storage variation (1)
- theoretical chemistry (1)
- theoretische Chemie (1)
- thermal noise in mirror coatings (1)
- thermisches Rauschen in Spiegelbeschichtungen (1)
- thermodynamic and kinetic properties (1)
- thermodynamische und kinetische Eigenschaften (1)
- thermoresponsiv (1)
- thermoresponsive (1)
- thin film crystallization (1)
- thin films (1)
- thioester (1)
- threatened (1)
- tiering (1)
- time to task failure (1)
- traditionelle Unternehmen (1)
- traffic (1)
- trans-Golgi Netzwerk (1)
- trans-Golgi network (1)
- transarea studies (1)
- transcription factor (1)
- transversal hypergraph (1)
- travel time tomography (1)
- tree infilling (1)
- treeline (1)
- triaxial deformation experiments (1)
- triaxiale Deformationsexperimente (1)
- tropical infectious diseases (1)
- tropische Infektionskrankheiten (1)
- trust (1)
- tundra-taiga (1)
- turbidity modelling (1)
- turbulence index (1)
- type 2 diabetes (1)
- typing (1)
- ultra-fast laser inscription technology (1)
- ultrafast molecular dynamics (1)
- ultraschnelle Moleküldynamik (1)
- unbemannte Schiffe (1)
- unemployment (1)
- ungeradkettige Fettsäuren (1)
- ungewollte Schwangerschaft (1)
- unidirektionale Fehler (1)
- unique column combinations (1)
- unmanned ship (1)
- unmanned vessel (1)
- upconversion (1)
- uranium (1)
- urban growth (1)
- urban landscapes (1)
- urban transformation (1)
- value co-creation (1)
- varved lake sediments (1)
- vegetation change (1)
- verbesserte geothermische Systeme (1)
- vertikaler Fluss (1)
- very-high energy (1)
- verzweigtkettige Aminosäuren (1)
- vesicle transport (1)
- viral infections (1)
- virale Infektionen (1)
- virtual (1)
- virtual machines (1)
- virtuell (1)
- virtuelle Maschinen (1)
- visibility (1)
- visionary leadership (1)
- visionäre Führung (1)
- volatile organic compounds (VOCs) (1)
- volatile organische Substanzen (VOCs) (1)
- warvierte Seesedimente (1)
- water rock interactions (1)
- weak supervision (1)
- wealth (1)
- weather (1)
- weather extremes (1)
- weekly working hours (1)
- weiche Materie (1)
- weight loss (1)
- welfare and gender regimes (1)
- well-being (1)
- western group of forces (1)
- whole grains (1)
- wind gusts (1)
- winderosion (1)
- withdrawal of troops (1)
- work-related training (1)
- world literature (1)
- wöchentliche Arbeitszeiten (1)
- x-ray (1)
- x-ray spectroscopy (1)
- x-rays (1)
- youth (1)
- zebularine (1)
- zerstörungsfreie Prüfung (1)
- zufälliges k-SAT (1)
- µCT (1)
- Öffentliche Verkehrsmittel (1)
- Ökokline (1)
- Ökologie (1)
- Ökonometrie (1)
- Ökonomisches Netzwerk (1)
- Überschwemmungen (1)
- ähnlichkeitsbasierte Gedächtnisinterferenz (1)
- ätherische Öle (1)
- études postcoloniales (1)
Institute
- Institut für Biochemie und Biologie (43)
- Extern (27)
- Institut für Physik und Astronomie (25)
- Institut für Chemie (23)
- Institut für Geowissenschaften (19)
- Historisches Institut (15)
- Hasso-Plattner-Institut für Digital Engineering GmbH (14)
- Institut für Ernährungswissenschaft (10)
- Institut für Umweltwissenschaften und Geographie (9)
- Wirtschaftswissenschaften (8)
The main goal of this dissertation is to experimentally investigate how focus is realised, perceived, and processed by native Turkish speakers, independent of preconceived notions of positional restrictions. Crucially, there are various issues and scientific debates surrounding focus in the Turkish language in the existing literature (chapter 1). It is argued in this dissertation that two factors led to the stagnant literature on focus in Turkish: the lack of clearly defined, modern understandings of information structure and its fundamental notion of focus, and the ongoing and ill-defined debate surrounding the question of whether there is an immediately preverbal focus position in Turkish. These issues gave rise to specific research questions addressed across this dissertation. Specifically, we were interested in how the focus dimensions such as focus size (comparing narrow constituent and broad sentence focus), focus target (comparing narrow subject and narrow object focus), and focus type (comparing new-information and contrastive focus) affect Turkish focus realisation and, in turn, focus comprehension when speakers are provided syntactic freedom to position focus as they see fit.
To provide data on these core goals, we presented three behavioural experiments based on a systematic framework of information structure and its notions (chapter 2): (i) a production task with trigger wh-questions and contextual animations manipulated to elicit the focus dimensions of interest (chapter 3), (ii) a timed acceptability judgment task in which listeners rated the recorded answers from the production task (chapter 4), and (iii) a self-paced reading task to gather on-line processing data (chapter 5).
Based on the results of the conducted experiments, multiple conclusions are drawn in this dissertation (chapter 6). Firstly, this dissertation demonstrated empirically that there is no focus position in Turkish, neither in the sense of a strict focus position language nor as a focally loaded position facilitating focus perception and/or processing. While focus is, in fact, syntactically variable in the Turkish preverbal area, this is a consequence of movement triggered by other IS aspects like topicalisation and backgrounding, and of the observational markedness of narrow subject focus compared to narrow object focus. As for focus type in Turkish, this dimension is not associated with word order in production, perception, or processing. Significant acoustic correlates of focus size (broad sentence focus vs narrow constituent focus) and focus target (narrow subject focus vs narrow object focus) were observed in fundamental frequency and intensity, representing focal boost, (postfocal) deaccentuation, and the presence or absence of a phrase-final rise in the prenucleus, while the perceivability of these effects remains to be investigated. In contrast, no acoustic correlates of focus type in simple, three-word transitive structures were observed, with focus types being interchangeable in mismatched question-answer pairs. Overall, the findings of this dissertation highlight the need for experimental investigations of focus in Turkish, as theoretical predictions do not necessarily align with experimental data. As such, the fallacy of inferring causation from correlation must be kept firmly in mind, especially when constructions coincide with canonical structures, such as the immediately preverbal position in narrow object foci. Finally, numerous open questions remain to be explored, especially as focus and word order in Turkish are multifaceted.
As shown, givenness is a confounding factor when investigating focus types, while thematic role assignment potentially confounds word order preferences. Further research based on established, modern information structure frameworks is needed, with chapter 5 concluding with specific recommendations for such future research.
Das kolonisierte Heiligtum
(2022)
During the era of historical colonialism, ethnological museums institutionalized complex forms of racist and religious discrimination, for instance in the concepts of aesthetics and art applied there. Many of today's museum staff therefore declare themselves ready for reform. But can they actually detach themselves from colonialism? Is a decolonization of ethnological museums holding colonial loot ever conclusively possible? Using the example of contested sanctuaries of living cultures, Christoph Balzar examines the process of musealization through the lens of the critique of discrimination, focusing on the collections of the Staatliche Museen zu Berlin.
Over the past decades, there has been growing interest in ‘extreme events’ owing to the increasing threats that climate-related extremes such as floods, heatwaves, and droughts pose to society. While extreme events have diverse definitions across disciplines, ranging from earth science to neuroscience, they are characterized mainly as dynamic occurrences within a limited time frame that impede the normal functioning of a system. Although extreme events are rare, various hydro-meteorological and physiological time series (e.g., river flows, temperatures, heartbeat intervals) show that they may exhibit recurrent behavior, i.e., they do not end the lifetime of the system. The aim of this thesis is to develop methods to study various properties of extreme events.
One of the main challenges in analyzing such extreme event-like time series is that they contain large temporal gaps owing to the scarcity of observed extreme events. As a result, existing time series analysis tools are usually of little help in decoding the underlying information. I use the edit distance (ED) method to analyze extreme event-like time series in their unaltered form. ED is a distance metric designed to measure the similarity or dissimilarity between point-process-like data. I combine ED with recurrence plot techniques to identify the recurrence properties of flood events in the Mississippi River in the United States, and I use recurrence quantification analysis to show the deterministic properties and serial dependency of flood events.
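The windowed recurrence analysis described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: it assumes a Victor-Purpura-style cost model for the edit distance (unit cost for inserting or deleting an event, cost q·|Δt| for shifting one, capped at the delete-plus-insert cost), and the window length, step, and recurrence threshold `eps` are illustrative parameters.

```python
import numpy as np


def edit_distance(s1, s2, q=1.0):
    """Victor-Purpura-style edit distance between two event-time sequences:
    deleting or inserting an event costs 1, shifting one by dt costs q*|dt|
    (capped at the delete-plus-insert cost of 2)."""
    n, m = len(s1), len(s2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)  # delete every event of s1
    D[0, :] = np.arange(m + 1)  # insert every event of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            shift = min(q * abs(s1[i - 1] - s2[j - 1]), 2.0)
            D[i, j] = min(D[i - 1, j] + 1, D[i, j - 1] + 1, D[i - 1, j - 1] + shift)
    return D[n, m]


def recurrence_plot(event_times, window, step, eps, q=1.0):
    """Recurrence matrix of one event series: slice it into sliding windows and
    mark two windows as recurrent when their edit distance is below eps."""
    t0, t1 = min(event_times), max(event_times)
    starts = np.arange(t0, t1 - window + step, step)
    segments = [[t - s for t in event_times if s <= t < s + window] for s in starts]
    k = len(segments)
    R = np.zeros((k, k), dtype=int)
    for a in range(k):
        for b in range(k):
            R[a, b] = int(edit_distance(segments[a], segments[b], q) <= eps)
    return R
```

Standard recurrence quantification measures (determinism, laminarity) can then be read off the diagonal and vertical line structures of `R`.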
After that, I use this non-linear similarity measure (ED) to compute the pairwise dependency in extreme precipitation event series. I incorporate the similarity measure within the framework of complex network theory to study the collective behavior of climate extremes. Under this architecture, the nodes are defined by the spatial grid points of the given spatio-temporal climate dataset. Each node is associated with a time series corresponding to the temporal evolution of the climate observation at that grid point. Finally, the network links are functions of the pairwise statistical interdependence between the nodes. Various network measures, such as degree, betweenness centrality, and the clustering coefficient, can be used to quantify the network's topology. I apply this methodology to study the spatio-temporal coherence pattern of extreme rainfall events in the United States and the Ganga River basin, which reveals its relation to various climate processes and to the orography of the region.
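A minimal sketch of the network-construction step: given a precomputed matrix of pairwise distances between the grid points' event series, links are drawn where the distance falls below a threshold, and measures such as degree and the clustering coefficient are read off the adjacency matrix. The plain distance cutoff and the threshold value are illustrative assumptions, not the thesis's calibration.

```python
import numpy as np


def build_event_network(dist, threshold):
    """Link two grid points when the edit distance between their extreme-event
    series falls below the threshold (no self-links)."""
    A = (dist < threshold).astype(int)
    np.fill_diagonal(A, 0)
    return A


def degree(A):
    """Number of links per node."""
    return A.sum(axis=1)


def clustering_coefficient(A):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    k = A.sum(axis=1)
    triangles = np.diag(A @ A @ A) / 2  # closed triangles through each node
    possible = k * (k - 1) / 2
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(possible > 0, triangles / possible, 0.0)
```

In practice each node's spatial degree and clustering fields are then mapped back onto the grid to expose coherent rainfall regions.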
Identifying precursors of extreme events in the near future is extremely important for preparing the public for an upcoming disaster and mitigating the potential risks associated with such events. Motivated by this, I propose an in-data prediction scheme for predicting the data structures that typically occur prior to extreme events using an echo state network, a type of recurrent neural network belonging to the reservoir computing framework. However, unlike previous works that identify precursory structures in the same variable in which extreme events are manifested (the active variable), I predict these structures using data from another dynamic variable (the passive variable) that does not show large excursions from the nominal condition but carries imprints of the extreme events. Furthermore, my results demonstrate that the quality of prediction depends on the magnitude of the events: the higher the magnitude of the extreme, the better its predictability. I show quantitatively that this is because the input signals collectively form a more coherent pattern for an extreme event of higher magnitude, which enhances the machine's ability to predict forthcoming extreme events.
Text is a ubiquitous entity in our world and daily life. We encounter it nearly everywhere in shops, on the street, or in our flats. Nowadays, more and more text is contained in digital images. These images are either taken using cameras, e.g., smartphone cameras, or taken using scanning devices such as document scanners. The sheer amount of available data, e.g., millions of images taken by Google Streetview, prohibits manual analysis and metadata extraction. Although much progress has been made in the area of optical character recognition (OCR) for printed text in documents, broad areas of OCR are still not fully explored and hold many research challenges. With the mainstream usage of machine learning and especially deep learning, one of the most pressing problems is the availability and acquisition of annotated ground truth for the training of machine learning models, because obtaining annotated training data through manual annotation is time-consuming and costly. In this thesis, we address the question of how to reduce the costs of acquiring ground truth annotations for the application of state-of-the-art machine learning methods to optical character recognition pipelines. To this end, we investigate how we can reduce the annotation cost by using only a fraction of the typically required ground truth annotations, e.g., for scene text recognition systems. We also investigate how we can use synthetic data to reduce the need for manual annotation work, e.g., in the area of document analysis for archival material. In the area of scene text recognition, we have developed a novel end-to-end scene text recognition system that can be trained using inexact supervision and shows competitive, state-of-the-art performance on standard benchmark datasets for scene text recognition. Our method consists of two independent neural networks, combined using spatial transformer networks.
Both networks learn together to perform text localization and text recognition at the same time while only using annotations for the recognition task. We apply our model to end-to-end scene text recognition (meaning localization and recognition of words) and pure scene text recognition without any changes in the network architecture.
In the second part of this thesis, we introduce novel approaches for using and generating synthetic data to analyze handwriting in archival data. First, we propose a novel preprocessing method to determine whether a given document page contains any handwriting. We propose a novel data synthesis strategy to train a classification model and show that our data synthesis strategy is viable by evaluating the trained model on real images from an archive. Second, we introduce the new analysis task of handwriting classification. Handwriting classification entails classifying a given handwritten word image into classes such as date, word, or number. Such an analysis step allows us to select the best fitting recognition model for subsequent text recognition; it also allows us to reason about the semantic content of a given document page without the need for fine-grained text recognition and further analysis steps, such as named entity recognition. We show that our proposed approaches work well when trained on synthetic data. Further, we propose a flexible metric learning approach to allow zero-shot classification of classes unseen during the network's training. Last, we propose a novel data synthesis algorithm to train off-the-shelf pixel-wise semantic segmentation networks for documents. Our data synthesis pipeline is based on the StyleGAN architecture and can synthesize realistic document images with their corresponding segmentation annotation without the need for any annotated data.
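The zero-shot behaviour of a metric-learning classifier can be illustrated with a nearest-prototype scheme over learned embeddings. This is a generic sketch, not the thesis's model: each class is represented by the mean embedding of its training examples, and a class unseen during training can be added at test time simply by supplying a new prototype vector.

```python
import numpy as np


def class_prototypes(embeddings, labels):
    """Mean embedding per class, computed from the training examples."""
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}


def classify(query, prototypes):
    """Assign a query embedding to the class whose prototype is most similar
    (cosine similarity); unseen classes join by adding a prototype."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(prototypes, key=lambda c: cos(query, prototypes[c]))
```

The embedding network itself is trained so that word images of the same class (date, word, number, ...) land close together; at inference, only distances to prototypes are needed, which is what enables zero-shot extension.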
Core-shell upconversion nanoparticles - investigation of dopant intermixing and surface modification
(2022)
Frequency upconversion nanoparticles (UCNPs) are inorganic nanocrystals capable of converting incident photons from the near-infrared (NIR) range of the electromagnetic spectrum into higher-energy photons, which are re-emitted in the visible (Vis) and even ultraviolet (UV) range. The upconversion (UC) process is realized in nanocrystals doped with trivalent lanthanoid ions (Ln(III)), which provide the excited electronic states forming a ladder-like electronic structure for the Ln(III) electrons in the nanocrystals. The absorption of at least two low-energy photons by the nanoparticle and the subsequent energy transfer to one Ln(III) ion promote an Ln(III) electron into a higher excited electronic state; one high-energy photon is then emitted when this electron relaxes radiatively back into the electronic ground state of the Ln(III) ion.
The UC process is very interesting in the biological/medical context. Biological samples (like organic tissue, blood, urine, and stool) absorb high-energy photons (UV and blue light) more strongly than low-energy photons (red and NIR light). Thanks to a naturally occurring optical window, NIR light can penetrate deeper than UV light into biological samples. Hence, UCNPs in bio-samples can be excited by NIR light. This possibility opens a pathway for in vitro as well as in vivo applications, like optical imaging by cell labeling or staining of specific organic tissue. Furthermore, early detection and diagnosis of diseases by predictive and diagnostic biomarkers can be realized with bio-recognition elements attached to the UCNPs. Additionally, "theranostics" becomes possible, in which the identification and the treatment of a disease are tackled simultaneously.
For this to succeed, certain parameters of the UCNPs must be met: high upconversion efficiency, high photoluminescence quantum yield, dispersibility and dispersion stability in aqueous media, and the availability of functional groups for fast and easy attachment of bio-recognition elements. The UCNPs used in this work were prepared by solvothermal decomposition synthesis, yielding particles with NaYF4 or NaGdF4 as the host lattice. They were doped with the Ln(III) ions Yb3+ and Er3+, one possible upconversion pair. Their upconversion efficiency and photoluminescence quantum yield were improved by adding a passivating shell to reduce surface quenching.
However, the brightness of core-shell UCNPs falls short of that of the corresponding bulk material (particles at least µm-sized). The core and shell structures are not clearly separated from each other, a known issue in the literature: a transition layer forms between core and shell, which is attributed to the migration of the dopants within the host lattice during synthesis. This ion migration was examined by time-resolved laser spectroscopy via the interlanthanoid resonance energy transfer (LRET) in the two host lattices named above. The results are presented in two publications dealing with core-shell-shell structured nanoparticles. The core is doped with the LRET acceptor (either Nd3+ or Pr3+). The intermediate shell of pure host lattice material serves as an insulation shell; its thickness was varied within one set of samples of otherwise identical composition, so that the spatial separation of LRET acceptor and donor changes. The outer shell, of the same host lattice, is doped with the LRET donor (Eu3+). The effect of increasing insulation-shell thickness is significant, although the LRET cannot be suppressed completely.
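The distance dependence exploited in the insulation-shell experiments can be illustrated with the Förster-type expression for resonance energy transfer efficiency (a generic two-ion picture; the actual particles contain distributions of donor and acceptor ions, which is one reason the LRET is attenuated rather than fully suppressed):

```latex
E(r) = \frac{1}{1 + \left( r / R_0 \right)^{6}}
```

Here E is the transfer efficiency, r the donor-acceptor separation, and R_0 the Förster radius. Growing the insulation shell increases the minimum r between core and outer-shell dopants and drives E toward zero, but dopant migration into the shell leaves some donor-acceptor pairs close enough for residual transfer.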
Besides the Ln(III) migration within a host lattice, various phase transfer reactions were investigated in order to subsequently perform surface modifications for bioapplications. One result of this research has been published, using a promising ligand that equips the UCNP with bio-modifiable groups and has good potential for biomedical applications. This ligand mimics naturally occurring mechanisms of mussel protein adhesion and of blood coagulation, which is why it encapsulates the UCNPs very effectively while simultaneously introducing bio-functional groups. In a proof of concept, the encapsulated UCNP was successfully coupled with a dye (representative of a biomarker) and the system's photoluminescence properties were investigated.
In der fortwährenden Diskussion zur Familienbesteuerung bewertet der Autor vor dem Hintergrund des deutschen Verfassungsrechts die geltenden Regelungen und mögliche Reformmodelle rechtlich, steuerpolitisch und ökonomisch. Unter dem besonderen Blickwinkel eines seit mehr als 20 Jahren tätigen Steuerberaters kann der Autor die Wirkung des Splittingtarifs und deren Alternativen ökonomisch gut durchdringen.
Marriage and the family enjoy the special protection of the German state order. This duty of protection under Art. 6(1) GG goes well beyond a mere defensive right. It requires the state to protect marriage, to grant it a fundamentally inviolable core sphere of private life, and to refrain from prescribing any particular division of labor and responsibilities within marriage and family.
Marriage is often described as the "nucleus" of every human society. That seems consistent and historically significant. Today, however, it is more or less a community of earning, consumption, and responsibility. This raises, above all, the question of a sufficient justification for granting an income-splitting tariff.
The starting point of the comprehensive, systematic account is the historical development in Germany and neighboring states, in particular in today's European Union, since other states already apply different models such as individual or family taxation.
Under constitutional law, the legislature is obliged to take changes in the social conditions in and for the family into account in its legislative decisions. The question thus arises whether, for example, relevant changes have occurred since the Federal Constitutional Court's 1957 decision on spousal income splitting that would justify, or even require, a change in family taxation. The interim recognition of civil partnerships and of marriage for all are expressions of such change; they reflect changes in living conditions that have already taken place. The political debate about reform is therefore understandable. Whether alternatives are preferable, however, is discussed in detail.
Pensionskassen
(2022)
Capital investment by Pensionskassen (pension funds) in investment funds is complicated by the fact that the supervisory and tax rules applying to them contradict each other. The capital investment permitted to Pensionskassen under supervisory law is restricted by the current reading of their tax exemption. According to this reading, the tax exemption lapses insofar as a Pensionskasse earns commercial income from its capital investment, because the permanent dedication of income and assets to the purposes of the fund, as required by § 5 Abs. 1 Nr. 3 lit. c KStG, is then said not to be secured. This disadvantages Pensionskassen in generating pension benefits. At the same time, this reading finds no support in the interpretation of the relevant tax provisions.
Individuals have an intrinsic need to express themselves to other humans within a given community by sharing their experiences, thoughts, actions, and opinions. As a means, they mostly prefer modern online social media platforms such as Twitter, Facebook, personal blogs, and Reddit. Users of these social networks interact by drafting status updates, publishing photos, and giving likes, leaving behind a considerable amount of data to be analyzed. Researchers have recently started exploring this shared social media data to better understand online users and predict their Big Five personality traits: agreeableness, conscientiousness, extraversion, neuroticism, and openness to experience. This thesis investigates the possible relationship between users' Big Five personality traits and the information published on their social media profiles. Public Facebook data such as linguistic status updates, metadata of liked objects, profile pictures, and emotion or reaction records were used to address the proposed research questions. Several machine learning prediction models were constructed in a series of experiments utilizing engineered features correlated with the Big Five personality traits. The final models improved prediction accuracy over state-of-the-art approaches and were evaluated against established benchmarks in the domain. The experiments were implemented with ethical and privacy considerations in mind. Furthermore, the research aims to raise awareness of privacy among social media users and to show what third parties can reveal about users' private traits from what they share and how they act on different social networking platforms.
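The kind of feature-to-trait prediction model described above can be sketched as follows. This is a hypothetical illustration: the features, data, and the choice of a random-forest regressor are assumptions, not the thesis's actual pipeline.

```python
# Hypothetical sketch of predicting one Big Five score from engineered
# profile features. All features and data here are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_users = 300
# Invented engineered features, e.g. status-update word counts, like counts,
# emoji/reaction frequencies, profile-picture descriptors
X = rng.normal(size=(n_users, 8))
# Invented extraversion scores, loosely tied to two of the features
y = 0.6 * X[:, 0] - 0.3 * X[:, 3] + rng.normal(0.0, 0.2, n_users)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
score = model.score(X_test, y_test)  # R^2 on held-out users
```

In practice, such models are benchmarked against published baselines on the same trait scales, as the abstract notes.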
In the second part of the thesis, variation in personality development is studied in a cross-platform environment comprising Facebook and Twitter. The personality profiles constructed on these platforms are compared to evaluate the effect of the platform used on a user's personality development. Likewise, personality continuity and stability are analyzed using samples from the two platforms. The experiments are based on ten-year longitudinal samples, aiming to understand users' long-term personality development and to further unlock the potential of cooperation between psychologists and data scientists.
The period around 1600 marks a turning point in the education of young nobles: alongside the established humanist curriculum, modern foreign languages and the institutionalized teaching of rank-specific skills such as fencing, dancing, and riding gained in importance. This volume examines the educational careers of the sons of three southwest German baronial and comital families in the age of late humanism. It identifies rank-specific educational strategies and highlights familial and confessional differences. The families of imperial counts oriented themselves, as far as their budgets allowed, toward the princely estate on the one hand, while on the other they sought to set their own accents, whether for reasons of estate politics or out of loyalty to their own traditions. The systematic evaluation of extensive source material also brought to light a wealth of information on further fields of knowledge, such as the history of travel, medicine, and music, and everyday culture.
Recently, epidemiological studies have highlighted a strong association of dairy intake with lower disease risk, and similarly with an increased amount of odd-chain fatty acids (OCFA). While the OCFA also show inverse associations with disease incidence, their direct dietary sources and mode of action remain poorly understood.
The overall aim of this thesis was to determine the impact of two main fractions of dairy, milk fat and milk protein, on OCFA levels and their influence on health outcomes under high-fat (HF) diet conditions. Both fractions represent viable sources of OCFA, as milk fats contain a significant amount of OCFA and milk proteins are high in branched chain amino acids (BCAA), namely valine (Val) and isoleucine (Ile), which can produce propionyl-CoA (Pr-CoA), a precursor for endogenous OCFA synthesis, while leucine (Leu) does not. Additionally, this project sought to clarify the specific metabolic effects of the OCFA heptadecanoic acid (C17:0).
Both short-term and long-term feeding studies were performed using male C57BL/6JRj mice fed HF diets supplemented with milk fat or C17:0, as well as milk protein or individual BCAA (Val; Leu) to determine their influences on OCFA and metabolic health. Short-term feeding revealed that both milk fractions induce OCFA in vivo, and the increases elicited by milk protein could be, in part, explained by Val intake. In vitro studies using primary hepatocytes further showed an induction of OCFA after Val treatment via de novo lipogenesis and increased α-oxidation. In the long-term studies, both milk fat and milk protein increased hepatic and circulating OCFA levels; however, only milk protein elicited protective effects on adiposity and hepatic fat accumulation—likely mediated by the anti-obesogenic effects of an increased Leu intake. In contrast, Val feeding did not increase OCFA levels nor improve obesity, but rather resulted in glucotoxicity-induced insulin resistance in skeletal muscle mediated by its metabolite 3-hydroxyisobutyrate (3-HIB). Finally, while OCFA levels correlated with improved health outcomes, C17:0 produced negligible effects in preventing HF-diet induced health impairments.
The results presented herein demonstrate that the beneficial health outcomes associated with dairy intake are likely mediated through the effects of milk protein, while OCFA levels are likely a mere association and do not play a significant causal role in metabolic health under HF conditions. Furthermore, the highly divergent metabolic effects of the two BCAA, Leu and Val, unraveled herein highlight the importance of protein quality.
Giros Topográficos
(2022)
Giros topográficos explores the symbolic productions of space in a series of narrative texts published in Latin America since the turn of the millennium. Drawing on the theoretical frameworks of the spatial turn and of geocriticism, the study approaches literary topographies from four angles that exceed and transform territorial and national boundaries: dynamics of media hyperconnectivity and accelerated mobility; affective genealogies; urban ecologies; and representations of alterity.
Through analyses of works by Lina Meruane, Guillermo Fadanelli, Andrés Neuman, Andrea Jeftanovic, Sergio Chejfec, and Bernardo Carvalho, among others, the book traces the flows, ambiguities, and tensions projected by the new imagined communities of the twenty-first century. In doing so, the essay seeks to contribute to rethinking the status of Latin American literature in the context of its advanced globalization and the resulting consolidation of translocalized spaces of enunciation.
The negative impact of crude oil on the environment has led to a necessary transition toward alternative, renewable, and sustainable resources. In this regard, lignocellulosic biomass (LCB) is a promising renewable and sustainable alternative to crude oil for the production of fine chemicals and fuels in a so-called biorefinery process. LCB is composed of polysaccharides (cellulose and hemicellulose), as well as aromatics (lignin). The development of a sustainable and economically advantageous biorefinery depends on the complete and efficient valorization of all components. Therefore, in the new generation of biorefinery, the so-called biorefinery of type III, the LCB feedstocks are selectively deconstructed and catalytically transformed into platform chemicals. For this purpose, the development of highly stable and efficient catalysts is crucial for progress toward viability in biorefinery. Furthermore, a modern and integrated biorefinery relies on process and reactor design, toward more efficient and cost-effective methodologies that minimize waste. In this context, the usage of continuous flow systems has the potential to provide safe, sustainable, and innovative transformations with simple process integration and scalability for biorefinery schemes.
This thesis addresses three main challenges for the future biorefinery: catalyst synthesis, waste feedstock valorization, and the use of continuous flow technology. Firstly, a cheap, scalable, and sustainable approach is presented for the synthesis of an efficient and stable 35 wt.-% Ni catalyst on a highly porous nitrogen-doped carbon support (35Ni/NDC) in pellet shape. Initially, the performance of this catalyst was evaluated for the aqueous-phase hydrogenation of LCB-derived compounds such as glucose, xylose, and vanillin in continuous flow systems. The 35Ni/NDC catalyst exhibited high catalytic performance in the three tested hydrogenation reactions, yielding sorbitol, xylitol, and 2-methoxy-4-methylphenol at 82 mol%, 62 mol%, and 100 mol%, respectively. In addition, it showed remarkable stability over a long time on stream in continuous flow (40 h). Furthermore, the 35Ni/NDC catalyst was combined with commercially available Beta zeolite in a dual-column integrated process for the production of isosorbide from glucose (83 mol% yield).
Finally, 35Ni/NDC was applied to the valorization of industrial waste products, namely sodium lignosulfonate (LS) and beech wood sawdust (BWS), in continuous flow systems. LS depolymerization was conducted by combining solvothermal fragmentation in water/alcohol mixtures (i.e., methanol/water and ethanol/water) with catalytic hydrogenolysis/hydrogenation (SHF). Depolymerization was found to occur thermally in the absence of a catalyst, with the molecular weight tunable by temperature. Furthermore, the SHF generated an optimized cumulative yield of lignin-derived phenolic monomers of 42 mg gLS-1. Similarly, a solvothermal and reductive catalytic fragmentation (SF-RCF) of BWS was conducted using MeOH and MeTHF as solvents. In this case, the optimized total yield of lignin-derived phenolic monomers was 247 mg gKL-1.
Sustainable urban growth
(2022)
This dissertation explores the determinants of sustainable and socially optimal growth in a city. Two general equilibrium models form the basis for this evaluation, each adding its puzzle piece to the urban sustainability discourse and examining the role of non-market-based and market-based policies for balanced growth and welfare improvements in different theoretical settings. Sustainable urban growth calls either for policy action or for a green energy transition. Further, R&D market failures can pose severe challenges to the sustainability of urban growth and the social optimality of decentralized allocation decisions. Still, a careful (holistic) combination of policy instruments can achieve sustainable growth and even be first-best.
Technological progress allows ever more complex predictive models to be built on the basis of increasingly big datasets. For the risk management of natural hazards, a multitude of models is needed as a basis for decision-making, e.g. in the evaluation of observational data, for the prediction of hazard scenarios, or for statistical estimates of expected damage. The question arises how modern modelling approaches such as machine learning or data mining can be meaningfully deployed in this thematic field. In addition, with respect to data availability and accessibility, the trend is towards open data. The topic of this thesis is therefore to investigate the possibilities and limitations of machine learning and open geospatial data in the field of flood risk modelling in the broad sense. As this overarching topic is broad in scope, individual relevant aspects are identified and examined in detail.
A prominent data source in the flood context is satellite-based mapping of inundated areas, made openly available, for example, by the Copernicus service of the European Union. Great expectations are directed towards these products in the scientific literature, both for the immediate support of relief forces during emergency response and for modelling via hydrodynamic models or damage estimation. A focus of this work was therefore set on evaluating these flood masks. Starting from the observation that the quality of these products is insufficient in forested and built-up areas, a procedure for their subsequent improvement via machine learning was developed. This procedure is based on a classification algorithm that requires training data only from the class to be predicted (in this specific case, flooded areas), but not from the negative class (dry areas). The application to hurricane Harvey in Houston shows the high potential of this method, which depends on the quality of the initial flood mask.
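The one-class setup described above, training only on positive-class samples, can be sketched as follows. This is an illustrative assumption using a generic one-class SVM; the features and data are invented, and the algorithm actually used in the thesis may differ.

```python
# Sketch of one-class classification for flood-mask refinement: a model
# trained only on flooded pixels, with no dry-area (negative) samples.
# Features and data are invented for illustration.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Invented per-pixel features, e.g. backscatter, elevation, distance to river
flooded_pixels = rng.normal(loc=0.0, scale=1.0, size=(200, 3))   # positive class only
candidate_pixels = rng.normal(loc=0.5, scale=1.5, size=(50, 3))  # pixels to re-classify

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(flooded_pixels)                 # no negative (dry) training samples needed
labels = clf.predict(candidate_pixels)  # +1: consistent with "flooded", -1: outlier
```

The appeal of this design is that reliable negative training data (confirmed dry pixels) is often unavailable in forested and built-up areas, exactly where the initial masks fail.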
Next, it is investigated how strongly the statistical risk predicted by a process-based model chain depends on the implemented physical process details, demonstrating what a risk study based on established models can deliver. Even for fluvial flooding, such model chains are already quite complex, and they are hardly available for compound or cascading events comprising torrential rainfall, flash floods, and other processes. In the fourth chapter of this thesis it is therefore tested whether machine learning based on comprehensive damage data can offer a more direct path to damage modelling that avoids the explicit construction of such a model chain. For this purpose, a state-collected dataset of damaged buildings from the severe El Niño event of 2017 in Peru is used. In this context, the possibilities of data mining for extracting process knowledge are explored as well. It can be shown that various openly available geodata sources contain useful information for flood hazard and damage modelling of complex events, e.g. satellite-based rainfall measurements, topographic and hydrographic information, mapped settlement areas, and indicators derived from spectral data. Further, insights into damaging processes are obtained that are largely in line with prior expectations: the maximum rainfall intensity, for example, acts more strongly in cities and steep canyons, while the rainfall total was found to be more informative in low-lying river catchments and forested areas. Rural areas of Peru exhibited higher vulnerability than urban areas in the presented study. However, the general limitations of the methods and the dependence on specific datasets and algorithms also become apparent.
In the overarching discussion, the different methods – process-based modelling, predictive machine learning, and data-mining – are evaluated with respect to the overall research questions. In the case of hazard observation it seems that a focus on novel algorithms makes sense for future research. In the subtopic of hazard modelling, especially for river floods, the improvement of physical models and the integration of process-based and statistical procedures is suggested. For damage modelling the large and representative datasets necessary for the broad application of machine learning are still lacking. Therefore, the improvement of the data basis in the field of damage is currently regarded as more important than the selection of algorithms.
Public administrations confront fundamental challenges, including globalization, digitalization, and an eroding level of trust from society. By developing joint public service delivery with other stakeholders, public administrations can respond to these challenges. This increases the importance of inter-organizational governance—a development often referred to as New Public Governance, which to date has not been realized because public administrations focus on intra-organizational practices and follow the traditional “governmental chain.”
E-government initiatives, which can lead to high levels of interconnected public services, are currently perceived as insufficient to meet this goal. They are not designed holistically and merely affect the interactions of public and non-public stakeholders. A fundamental shift toward a joint public service delivery would require scrutiny of established processes, roles, and interactions between stakeholders.
Various scientists and practitioners within the public sector assume that the use of blockchain institutional technology could fundamentally change the relationship between public and non-public stakeholders. At first glance, inter-organizational, joint public service delivery could benefit from the use of blockchain. This dissertation aims to shed light on this widespread assumption. Hence, the objective of this dissertation is to substantiate the effect of blockchain on the relationship between public administrations and non-public stakeholders.
This objective is pursued by defining three major areas of interest. First, this dissertation strives to answer the question of whether or not blockchain is suited to enable New Public Governance and to identify instances where blockchain may not be the proper solution. The second area aims to understand empirically the status quo of existing blockchain implementations in the public sector and whether they comply with the major theoretical conclusions. The third area investigates the changing role of public administrations, as the blockchain ecosystem can significantly increase the number of stakeholders.
Corresponding research is conducted to provide insights into these areas, for example, combining theoretical concepts with empirical actualities, conducting interviews with subject matter experts and key stakeholders of leading blockchain implementations, and performing a comprehensive stakeholder analysis, followed by visualization of its results.
The results of this dissertation demonstrate that blockchain can support New Public Governance in many ways while having a minor impact on certain aspects (e.g., decentralized control), which account for this public service paradigm. Furthermore, the existing projects indicate changes to relationships between public administrations and non-public stakeholders, although not necessarily the fundamental shift proposed by New Public Governance. Lastly, the results suggest that power relations are shifting, including the decreasing influence of public administrations within the blockchain ecosystem. The results raise questions about the governance models and regulations required to support mature solutions and the further diffusion of blockchain for public service delivery.
The careful use of resources and the environment is an essential part of modern mining and of the future supply of essential raw materials to our society. This thesis deals with the development of analytical strategies that meet the technical and practical requirements of the mining process through exact and fast on-site analysis and thus contribute to the targeted and sustainable use of raw-material deposits. The analyses are based on spectroscopic data obtained by laser-induced breakdown spectroscopy (LIBS) and evaluated by multivariate data analysis. LIB spectroscopy is a promising technique for this task. Its appeal lies in particular in the ability to measure field samples on site without sampling or sample preparation, the detectability of all elements of the periodic table, and its independence from the state of aggregation. Combined with multivariate data analysis, fast data processing is possible that allows statements about the qualitative elemental composition of the investigated samples. With the aim of determining the distribution of element contents in a deposit, this thesis evaluates calibration and quantification strategies. Exploratory data-analysis methods are applied to characterize matrix effects and to classify minerals. The spectroscopic investigations are carried out on soils and rocks as well as on minerals containing copper or rare-earth elements, originating from different deposits and from different agricultural areas.
To develop a calibration strategy, both synthetic and field samples from two different agricultural areas were analyzed by LIBS. Using calcium, iron, and magnesium as example analytes, various calibration methods based on univariate and multivariate approaches were evaluated. The quantification strategies rest on the multivariate methods of partial least squares regression (PLSR) and interval PLSR (iPLSR), which take the entire detected spectrum or sub-spectra into account in the analysis. The investigation is based on synthetic as well as field samples of copper minerals and of minerals containing rare-earth elements. The samples originate from different deposits and exhibit different accompanying matrices. These accompanying matrices were characterized by exploratory data analysis. The principal component analysis applied for this purpose groups data according to differences and regularities, allowing statements about similarities and differences among the investigated samples with respect to their origin, chemical composition, or locally determined characteristics. Finally, copper-bearing minerals were classified on the basis of non-negative tensor factorization, a method used with the aim of assigning unknown samples to classes based on their properties.
The combination of LIBS and multivariate data analysis makes it possible, through on-site analysis, to largely dispense with sampling and the corresponding laboratory analysis, and can thus contribute to environmental protection and the conservation of natural resources in the prospecting and exploration of new ore veins and deposits. The distribution of element contents in the investigated areas also enables targeted extraction and thus efficient use of the mineral raw materials.
Respiratory diseases increasingly represent a relevant global problem. Extending or modifying the routes of administration of potential drugs for targeted topical applications is of the utmost importance. Varying a known route of administration through different technological implementations can increase the range of possible applications as well as patient compliance. Simple and flexible handling, rapid availability, and a compact technology are important properties in today's product development. Direct topical treatment of respiratory diseases at the site of action by inhalation offers many advantages over systemic therapy. Medical inhalation of active ingredients via the lungs is, however, a complex challenge. Inhalers are among the dosage forms that require explanation and must be designed as simply as possible to increase consistent adherence to the prescription. At the same time, approximately 68 million people worldwide own and use the technology of an inhalative applicator, in the form of an electronic cigarette, to deliberately damage their health. This well-known application offers the potential of an available, inexpensive, and quality-tested health measure for the control, prevention, and cure of respiratory diseases. It generates an aerosol by electrothermally heating a so-called liquid, which is drawn to a heating element by the capillary forces of a carrier material and vaporized. Its popularity shows that an intended effect occurs in the airways. This effect could, however, also be transferable to potential pharmaceutical fields of application. The advantages of pulmonary administration are manifold: compared with peroral administration, the active ingredient reaches the site of action directly.
If systemic administration leads to drug concentrations in the lung below therapeutic efficacy, inhalative administration could achieve the desired higher concentrations at the site of action even at low doses. Owing to the large absorptive surface of the lung, higher bioavailability and a faster onset of action are possible because there is no first-pass effect, and systemic side effects are minimal. Like medical inhalers, the electronic cigarette generates respirable particles. The breath-actuated technique allows uncomplicated and intuitive use. The basic design consists of an electrically heated coil and a rechargeable battery. The heating coil is surrounded by a so-called liquid in a tank and generates the aerosol. The liquid contains a base mixture of propylene glycol, glycerol, and pure water in varying proportions. It is assumed that the base liquid can also be loaded with pharmaceutical active ingredients for pulmonary administration. Because of the thermal stress imposed by the e-cigarette, potential active ingredients as well as the vehicle must be thermally stable.
The potential medical application of the technology of a commercial e-cigarette was investigated with respect to three focal points using four active ingredients. The three essential oils, eucalyptus oil, mint oil, and clove oil, were chosen because of their high volatility and their historical pharmaceutical use in inhalations for cold symptoms and in dentistry. The cannabinoid cannabidiol (CBD) is of current relevance to the German pharmaceutical market regarding the legalization of cannabis-containing products and to medical research on inhalative consumption. Relevant liquid formulations containing the active ingredients were developed and evaluated with respect to their vaporizability into aerosols. In quantitative and qualitative chromatographic investigations, specific vaporization profiles of the active ingredients were recorded and assessed. The vaporized mass of the lead substances 1,8-cineole (eucalyptus oil), menthol (mint oil), and eugenol (clove oil) rose between 33.6 µg and 156.2 µg per puff, in proportion to the concentration in the liquid in the range between 0.5% and 1.5%, at a power of 20 watts. The release rate of cannabidiol, by contrast, appeared to be independent of the concentration in the liquid, averaging 13.3 µg per puff; this was shown for five CBD-containing liquids in the concentration range between 31 µg/g and 5120 µg/g liquid. In addition, an increase in the vaporized masses with increasing power of the e-cigarette was observed. The interaction of the liquids and aerosols with the components of saliva and of further gastrointestinal fluids was tested using corresponding in vitro models and enzyme-activity assays. In these investigations, changes in enzyme activities were determined for the key oral enzyme α-amylase and for proteases, in order to examine, by way of example, a possible influence on physiological and metabolic processes in the human organism. Exposing biological suspensions to vapor at low e-cigarette power (20 watts) led to no change, or only a slight change, in enzyme activity; applying high power (80 watts) tended to reduce enzyme activities. An increase in enzyme activities could lead to enzymatic degradation of mucous substances such as mucins, which in turn would compromise the effective mechanical defense against bacterial infections. Since an application would be conceivable particularly for bacterial respiratory diseases, the antibacterial properties of the liquids and aerosols were finally investigated in vitro. Six clinically relevant bacterial pathogens were selected, which can be grouped by two characteristics. The three multi-resistant bacteria Pseudomonas aeruginosa, Klebsiella pneumoniae, and methicillin-resistant Staphylococcus aureus cannot be killed by standard antibiotic therapies and are primarily of nosocomial relevance. The second group exhibits properties primarily associated with respiratory diseases: the bacteria Streptococcus pneumoniae, Moraxella catarrhalis, and Haemophilus influenzae are representatively involved in respiratory diseases with diverse symptoms. The bacterial species were treated with, or exposed to the vapor of, the respective liquids, and their basic dose-response relationships were characterized. An antibacterial activity of the formulations was determined; the addition of an active ingredient enhanced the already antibacterial effect of the components glycerol and propylene glycol. The hygroscopic properties of these substances are presumably responsible for an effect in aerosolized form: they withdraw moisture from the air and have a desiccating effect on the bacteria.
Exposing the bacterial species Streptococcus pneumoniae, Moraxella catarrhalis and Haemophilus influenzae to the vapor had an antibacterial effect whose timing depended on the power of the e-cigarette.
The results of these investigations lead to the conclusion that each active substance, or substance class, must be evaluated individually, and that inhaler and formulation must therefore be matched to each other. The use of the e-cigarette as a medical device for the administration of drugs always requires testing according to the European Pharmacopoeia. Through modifications, dosing could be made well controllable, and the particle size distribution could be regulated so that, depending on particle size, the active substances are transported to a suitable application site such as the mouth, throat or bronchi. A comparison with the properties of other medical inhalers suggests that e-cigarette technology could well offer equivalent or better performance for thermally stable active substances. This hypothetical medical device could consist of a manufacturer-unspecific, rechargeable power unit with a universal thread for repeated use, and a manufacturer- and substance-specific unit comprising vaporizer and drug. The drug, a medicinal liquid (vehicle and active substance), can be produced for the individual patient in the tank of the vaporizer with constant, non-variable parameters. Inhaled applications will likely play an increasing role in the future, not least because of the current COVID-19 pandemic, and the demand for alternative therapy options will continue to grow. This work contributes to the use of electronic cigarette technology, originally an electronic nicotine delivery system (ENDS), modified into a potential pulmonary application system, an electronic drug delivery system (EDDS), for inhaled, thermally stable drugs in the form of a medical device.
This thesis contributes to a fundamental debate in capital market research: the measurable success of "active versus passive" investment strategies. The author critically examines the main investment strategies and models for index products and at the same time sheds light on closet indexing in actively managed investment funds. The results show that closet indexing does not occur only sporadically but is a widespread investment strategy in many supposedly actively managed equity funds.
This thesis provides insight into the practices of reaching understanding on guided city tours with (formerly) homeless guides, tours that, by their own account, aim to create understanding, tolerance and recognition for people affected by homelessness. It first introduces the discourse on slum tourism and, in view of the diversity of its manifestations, defines slumming as an organized encounter with social inequality. The central lines of discourse and the moral positions woven into them are traced and, from the sociology-of-knowledge perspective adopted here, reinterpreted as the expression of a practice that is polycontextural per se. Slumming then appears as an organized encounter between forms of life that are foreign to each other to such a degree that immediate understanding seems improbable, and that for precisely this reason must be negotiated on the basis of common-sense interpretations. Against this background, the present study examines how participants and guides reach a practical understanding of the experience of homelessness, and what kind of understanding this produces of homeless people, who are burdened with manifold stigmatizing attributions in public discourse. Of particular interest is with respect to which aspects of the experience of homelessness a shared understanding becomes possible, and where it reaches its limits. To this end, the conversations on nine city tours with (formerly) homeless guides from different providers in the German-speaking countries were transcribed and analyzed using the documentary method. The comparative examination of these practices of understanding also opens up a differentiated perspective on the practices of recognition that are always already woven into the processes of reaching understanding.
With regard to the moral debate about organized encounters with social inequality, this suggests an ethical perspective centered on questions of mediation work.
Struggle for existence
(2022)
In this project, I sought to understand how Palestinian claim-making in the West Bank is possible within the context of continuing Israeli occupation and repression by the Palestinian political leadership. I explored the questions of what channels non-state actors use to advance their claims, what opportunities they have for making these claims, and what challenges they face. This exploration covers the time period from the Oslo Accords in the mid-1990s to the so-called Great March of Return in 2018.
I demonstrated that Palestinians used different modes and strategies of resistance over the past century, as the area of what today is Israel/Palestine has historically been a target of foreign penetration. Yet the Oslo agreements between the Israeli government and the Palestinian leadership ended Palestinians' decentralized and pluralist social governance, reinforced Israeli rule in the Palestinian territories, promoted continuing dispossession and segregation of Palestinians, and further restricted their rights and their claim-making opportunities to this day. As a result, Palestinian society in the West Bank is today characterized by fragmentation, geographical and societal segregation, and double repression by Israeli occupation and Palestinian Authority (PA) policies. What is more, Palestinian claim-making is legally curtailed by the establishment of different geographical entities in which Palestinians are subjected to different forms of Israeli rule and regulation.
I argue that the concepts of civil society and acts of citizenship, which are often used to describe non-state actors' rights-seeking activities, fall short of comprehensively understanding and describing Palestinian claim-making in the West Bank. By delineating the boundaries of these concepts, the concept of acts of subjecthood evolved within the research process as a novel theoretical approach and as a description of claim-making within repressive contexts where claim makers' rights are curtailed and opportunities for rights-seeking activities are few. This study thereby applies a new theoretical framework to the conflict in Israel/Palestine and contributes to a better understanding of rights-seeking activities within the West Bank. Further, I argue that Palestinian acts of subjecthood against hostile Israeli rule in the West Bank are embedded within the comprehensive structure of settler colonialism. As a form of colonialism that aims at replacing an indigenous population, Israeli settler colonialism in the West Bank manifests itself in restrictions on Palestinian movement, settlement construction, home demolitions, violence, and detentions.
By using grounded theory and inductive reasoning as methodological approaches, I was able to make generalizations about the state of Palestinian claim-making. These generalizations are based on the analysis of secondary materials and of data collected via face-to-face and video interviews with non-state actors in Israel/Palestine. The research shows that it is not a single measure or standalone condition that hinders Palestinian claim-making, but a complex and comprehensive structure that, on the one hand, shrinks Palestinian living space through occupation and destruction and, on the other hand, diminishes Palestinian civic space by limiting the fundamental rights to organize and to build social movements that could change the conditions Palestinians live in.
Although the concrete, tangible outcomes of Palestinian acts of subjecthood are marginal, they contribute to strengthening and perpetuating Palestinians' long history of resistance against Israeli oppression. Given the lack of adherence to international law, the neglect of UN resolutions by the Israeli government, the continuous defeats of rights organizations in Israeli courts, and the repression of institutions based in the West Bank by PA and occupation policies, Palestinian acts of subjecthood cannot overturn current power structures. Nevertheless, the persistence of non-state actors claiming rights, as well as the emergence of new initiatives and youth movements, is essential for strengthening Palestinians' resilience and documenting current injustices. They can therefore build the pillars for social change in the future.
Biomimicry is the art of mimicking nature to overcome a particular technical or scientific challenge. The approach studies how evolution has found solutions to the most complex problems in nature, which makes it a powerful method for science. Combined with the rapid development of manufacturing and information technologies in the digital age, structures and materials that were previously thought unrealizable can now be created from a simple sketch at the touch of a button. The primary goal of this doctoral thesis was to investigate how digital tools, such as programming, modelling, 3D design tools and 3D printing, aided by biomimicry, can lead to new analysis methods in science and new medical devices in medicine.
Electrical Discharge Machining (EDM) is commonly applied to shape or mold hard metals that are difficult to work with conventional machinery. A workpiece submerged in a dielectric fluid is eroded while in close vicinity to an electrode: a high voltage applied between workpiece and electrode causes sparks that create cavities in the substrate, and the removed material is flushed away by the fluid. Such surfaces are usually analysed in terms of roughness; in this work, a novel curvature analysis method is presented as an alternative. In addition, to better understand how the surface changes over EDM processing time, a digital impact model was created that produces craters and ridges on an initially flat substrate. These simulated substrates were then analysed with the curvature analysis method at different processing times. It was found that a substrate reaches an equilibrium at around 10,000 impacts. The proposed curvature analysis method has potential for the design of new cell culture substrates for stem cells.
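The digital impact model can be pictured as repeatedly subtracting crater-shaped patches of material at random positions from an initially flat height map. The toy simulation below is an assumption-laden sketch (Gaussian craters, arbitrary grid size and parameters), not the model used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_impacts(n_impacts, size=64, radius=3.0, depth=1.0):
    """Subtract one Gaussian-shaped crater per spark impact at a random
    position on an initially flat substrate (illustrative toy model)."""
    z = np.zeros((size, size))            # height map, 0 = flat surface
    y, x = np.mgrid[0:size, 0:size]
    for _ in range(n_impacts):
        cx, cy = rng.uniform(0, size, 2)  # random impact position
        z -= depth * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * radius ** 2))
    return z

surface = simulate_impacts(500)
# Curvature or roughness statistics of `surface` could then be tracked
# over processing time, analogous to the analysis described above.
```

Analysing such a height map at increasing impact counts is one way to observe when the simulated surface statistics stop changing, i.e. when an equilibrium is reached.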
The Venus flytrap can shut its jaws at amazing speed. Its shutting mechanism may be of interest for scientific applications and is an example of a so-called mechanically bi-stable system, one with two stable states. In this work, two truncated-pyramid structures were modelled using a non-linear mechanical model, the Chained Beam Constraint Model (CBCM). The structure with a slope angle of 30 degrees is not bi-stable, whereas the structure with a slope angle of 45 degrees is. Developing this idea further with PEVA, a material with a shape-memory effect, the non-bi-stable structure could be programmed to be bi-stable and then switched back again, which could be used as an energy storage system. Another species with an interesting mechanism is the tapeworm. Some tapeworm species have a crown of hooks and lateral suckers. The parasite is commonly found in the lower intestine of mammals and attaches to the intestinal wall using its suckers. Once it has found a suitable spot, it ejects its hooks and attaches permanently to the wall. This function could be used in minimally invasive medicine to give better control over implants during the implantation process. Using the CBCM and a 3D printer capable of tuning how hard or soft a printed part is, a design strategy was developed to investigate how a device mimicking the tapeworm could be created. In the end, a prototype was built that attached to a pork loin at an under-pressure of 20 kPa and ejected its hooks at an under-pressure of 50 kPa or above.
Together, these three projects demonstrate how digital tools and biomimicry can be combined to produce applicable solutions in science and medicine.
Accurately solving classification problems is arguably the most relevant machine learning task today. Binary classification, which separates only two classes, is algorithmically simpler but has fewer potential applications, as many real-world problems are multi-class. Conversely, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target class set Y is fixed and cannot be restricted once training is finished. At the same time, state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies, so that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification, which generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions.
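The core idea, restricting a trained model's predictions to a dynamic subset M of Y between two predictions, can be illustrated with a minimal sketch. The function and data below are hypothetical illustrations, not the thesis implementation:

```python
import numpy as np

def dynamic_predict(probs, classes, allowed):
    """Restrict posterior probability estimates to an allowed class
    subset M and renormalize before predicting (illustrative sketch)."""
    mask = np.isin(classes, allowed)
    restricted = np.where(mask, probs, 0.0)
    restricted = restricted / restricted.sum()  # renormalize over M
    return classes[np.argmax(restricted)], restricted

classes = np.array(["a", "b", "c", "d"])
probs = np.array([0.40, 0.35, 0.15, 0.10])  # calibrated estimates over Y
# Unrestricted, the prediction would be "a"; with M = {"b", "c"} it is "b".
label, p = dynamic_predict(probs, classes, ["b", "c"])
```

Passing a different M before each prediction is exactly the restriction that dynamic classification permits, with no retraining required.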
This task is solved by combining two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates that are intended to be well calibrated. The analysis focuses on monotonic calibration and, in particular, corrects erroneous statements that have appeared in the literature. It also reveals that bin-based evaluation metrics, which have become popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions, as well as its equivalence with Beta calibration up to a sigmoidal preprocessing, are proven. For non-monotonic calibration, extended variants of kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated in a simulation study with complete information as well as on a selection of 46 real-world data sets.
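Platt scaling itself can be sketched in a few lines: it fits a sigmoid over raw classifier scores by minimizing the log-loss. The gradient-descent fit below is a simplified illustration on toy data (Platt's original procedure uses a Newton-type optimizer and smoothed target labels):

```python
import numpy as np

def fit_platt(scores, labels, lr=0.1, n_iter=2000):
    """Fit Platt scaling p(y=1|f) = sigmoid(a*f + b) by gradient
    descent on the log-loss (simplified sketch of the method)."""
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - labels                      # d(log-loss)/d(logit)
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

# Toy data: the positive class tends to have higher raw scores.
scores = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
labels = np.array([0, 0, 0, 1, 1, 1])
a, b = fit_platt(scores, labels)
calibrate = lambda f: 1.0 / (1.0 + np.exp(-(a * f + b)))
# calibrate(2.0) is close to 1, calibrate(-2.0) close to 0.
```

The fitted pair (a, b) maps any new score monotonically into a probability estimate, which is why Platt scaling counts as a monotonic, parametric calibration method.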
Building on this, classifier calibration is applied as part of decomposition-based classification, which reduces multi-class problems to simpler (usually binary) prediction tasks. For the fusing step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows decomposition-based classification to be analyzed against a strictly formal background and closed-form equations for the overall combinations to be proven. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, one of the most relevant reduction-based classification approaches, such that each individual prediction is combined with a weight. This not only generalizes existing work on pairwise coupling but also enables the integration of dynamic class information.
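As a rough illustration of how pairwise predictions can be fused under a dynamic class restriction, the sketch below combines pairwise probability estimates by voting over the active classes only. This is a simplified stand-in, not the evidence-theoretic, weighted combination derived in the thesis:

```python
import numpy as np

def pairwise_coupling(r, allowed=None):
    """Combine pairwise estimates r[i, j] = P(class i | i vs j) into
    class scores by voting, optionally restricted to a dynamic class
    subset (simplified sketch of reduction-based fusion)."""
    k = r.shape[0]
    active = np.arange(k) if allowed is None else np.array(sorted(allowed))
    scores = np.zeros(k)
    for i in active:
        for j in active:
            if i != j:
                scores[i] += r[i, j]  # each pairwise vote contributes its probability
    return scores / scores.sum()

# Toy 3-class example with pairwise estimates favoring class 0.
r = np.array([[0.0, 0.8, 0.7],
              [0.2, 0.0, 0.6],
              [0.3, 0.4, 0.0]])
full = pairwise_coupling(r)                # class 0 wins on the full set
restricted = pairwise_coupling(r, {1, 2})  # with M = {1, 2}, class 1 wins
```

Note that only the pairwise estimates between classes inside M enter the restricted combination; pairs involving excluded classes are simply dropped.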
Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied during a three-part evaluation. First, support vector machines and random forests are applied on 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant because a naive application is computationally intractable. Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks, differences between the different approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in respective applications, which requires an appropriate digital infrastructure.
In this thesis, the dependence of charge localization and itinerancy on isomerism, complexation, solvation, and optical excitation is investigated in two classes of aromatic molecules: pyridones and porphyrins. These factors accompany crucial biological functions of specific members of these compound classes. Several porphyrins play key roles in the metabolism of plants and animals; the nucleobases, which store the genetic information in DNA and RNA, are pyridone derivatives; and a number of vitamins are based on these two groups of substances.
This thesis aims to answer the question of how the electronic structure of these classes of molecules is modified, enabling the versatile natural functionality. The resulting insights into the effect of constitutional and external factors are expected to facilitate the design of new processes for medicine, light-harvesting, catalysis, and environmental remediation.
The common denominator of pyridones and porphyrins is their aromatic character. As aromaticity was an early topic in chemical physics, the overview of relevant theoretical models in this work also mirrors the development of that scientific field in the 20th century. The spectroscopic investigation of these compounds has long centered on their global optical transition between frontier orbitals.
The utilization and advancement of X-ray spectroscopic methods characterizing the local electronic structure of molecular samples form the core of this thesis. The element selectivity of the near-edge X-ray absorption fine structure (NEXAFS) is employed to probe the unoccupied density of states at the nitrogen site, which is key for the chemical reactivity of pyridones and porphyrins. The results contribute to the growing database of NEXAFS features and their interpretation, e.g., by advancing the debate on the porphyrin N K-edge through systematic experimental and theoretical arguments. Further, a state-of-the-art laser pump – NEXAFS probe scheme is used to characterize the relaxation pathway of a photoexcited porphyrin on the atomic level.
Resonant inelastic X-ray scattering (RIXS) provides complementary results by accessing the highest occupied valence levels, including symmetry information. It is shown that RIXS is an effective experimental tool for obtaining detailed information on the charge densities of individual species in tautomeric mixtures. Additionally, the hRIXS and METRIXS high-resolution RIXS spectrometers, which were partly commissioned in the course of this thesis, will open access to the ultrafast and thermal chemistry of pyridones, porphyrins, and many other compounds.
With respect to both classes of bio-inspired aromatic molecules, this thesis establishes that, even though pyridones and porphyrins differ greatly in their optical absorption bands and hydrogen-bonding abilities, they share a global stabilization of local constitutional changes and of relevant external perturbations. It is because of this wide-ranging response that pyridones and porphyrins can be applied in a multitude of biological and technical processes.
Carbohydrate structures are enormously important owing to their ubiquity in our lives, and the development of so-called glycomaterials is a result of this significance. Glycomaterials are used not only for research into fundamental biological processes, but also, among other things, as inhibitors of pathogens or as drug delivery systems. This work describes the development of glycomaterials, involving the synthesis of glycoderivatives, glycomonomers and glycopolymers. Glycosylamines were synthesized as precursors in a single synthesis step under microwave irradiation, significantly shortening the usual reaction time. Derivatization at the anomeric position was carried out according to the methods developed by Kochetkov and Likhosherstov, which do not require the introduction of protecting groups. Aminated saccharide structures formed the basis for the synthesis of glycomonomers in β-configuration by methacrylation. In order to obtain α-Man-based monomers for interactions with certain α-Man-binding lectins, a protecting-group-free monomer synthesis by Staudinger ligation was developed in this work. Modification of the primary hydroxyl group of a saccharide was accomplished by enzyme-catalyzed synthesis: ribose-containing cytidine was transesterified using the lipase Novozym 435 under microwave irradiation, and the resulting monomer synthesis was optimized by varying the reaction partners. To create an amide bond instead of an ester bond, protected cytidine was modified by oxidation followed by amide coupling to form the monomer. This synthetic route was also used to obtain the corresponding monomer from guanosine. The nucleoside-based monomers were then block-copolymerized using the RAFT method, with pre-synthesized pHPMA serving as macroCTA to yield cytidine- or guanosine-containing block copolymers.
These isolated block copolymers were then investigated for their self-assembly behavior using UV-Vis, DLS and SEM to serve as a potential thermoresponsive drug delivery system.
Nation, migration, narration
(2022)
In France and Germany, immigration has become a central issue in recent decades. It is in this context that rap emerged, and it enjoys enormous popularity among populations with an immigrant background. Nevertheless, rappers engage no less with their French or German identity.
The aim of this work is to explain this apparent contradiction: how can people with an immigrant background, who express unease in the face of a racism they consider omnipresent, feel fully French or German?
The work is divided into the following chapters: context of the study, methodology and theory (I); analysis of the different forms of national identity through the prism of the corpus (II); analysis, in three chronological stages, of the relationship to society in the rappers' lyrics (III-V); case studies of Kery James in France and Samy Deluxe in Germany (VI).
Successful communication is something people pursue throughout their lives. To transfer one's own information to others effectively, people employ various linguistic tools, such as word order, prosodic cues, and lexical choices. The study of these linguistic cues is known as the study of information structure (IS), and an important issue in children's language acquisition is how they acquire IS. This thesis seeks to improve our understanding of how children acquire different tools of focus marking (prosodic cues, syntactic cues, and the focus particle only) from a cross-linguistic perspective.
In the first study, following Szendrői and colleagues (2017), a sentence-picture verification task was performed to investigate whether three- to five-year-old Mandarin-speaking children, as well as Mandarin-speaking adults, could use prosodic information to recognize focus in sentences. In the second study, German-speaking adults and children were included alongside the Mandarin speakers to test the assumption that children show adult-like performance in understanding sentence focus by identifying language-specific cues in their mother tongue from early on. This study used the same sentence-picture verification paradigm as the first, combined with the eye-tracking method. Finally, the last study investigated whether five-year-old Mandarin-speaking children could understand sentences with pre-subject only, and again whether prosodic information would help them understand such sentences better.
Overall, the results suggest that Mandarin-speaking children can make use of the specific linguistic cues of their ambient language from early on. In Mandarin, a topic-prominent tone language, word order plays a more important role than prosodic information, and even three-year-old Mandarin-speaking children could follow word order information. Although German-speaking children could follow prosodic information, they did not show adult-like performance in the object-accented condition. A plausible reason is that German offers more ways of marking focus, such as flexible word order, prosodic information, and focus particles, so German-speaking children need more time to master these linguistic tools. Another important empirical finding concerning syntactically marked focus in German is that the cleft construction does not seem to be a valid focus construction, which corroborates previous observations (Dufter, 2009). Further, the eye-tracking method helped to uncover how the parser directs its attention when recognizing focus. The final study showed that, given explicit verbal context, Mandarin-speaking children could understand sentences with pre-subject only, contributing to a better understanding of how Mandarin-speaking children acquire the focus particle only.
While national and European competition law have undergone differentiated regulation and scholarly analysis for many decades, no comparable body of competition law exists at the international level. This dissertation addresses this research gap and argues for the creation of an international, multilateral competition law. It examines the existing body of hard law and soft law and concludes that new multilateral competition rules should be drafted. In institutional terms, the question is within which international organization this can sensibly be done. Overall, the dissertation attempts to outline, and to substantiate in depth, the possible contours of a global competition law order.
Isometric muscle function
(2022)
The cumulative dissertation consists of four original articles. They consider isometric muscle actions in healthy humans from a basic physiological perspective (oxygen and blood supply) as well as ways of distinguishing between them. The work includes a novel approach to measuring a specific form of isometric holding function that has not been considered in motor science so far. This function is characterized by an adaptation to varying external forces and is of particular importance in daily activities and sports.
The first part of the research program analyzed how the biceps brachii muscle is supplied with oxygen and blood while adapting to a moderate constant load until task failure (publication 1). In this regard, regulatory mechanisms were investigated in relation to the issue of presumably compressed capillaries due to high intramuscular pressures (publication 2).
Furthermore, it was examined whether oxygenation and time to task failure (TTF) differ compared to another isometric muscle function (publication 3). This function is mainly of diagnostic interest, measured as the maximal voluntary isometric contraction (MVIC), the gold standard, in which a person pulls on or pushes against an insurmountable resistance. However, this pulling or pushing form of isometric muscle action (PIMA) differs from the holding one (HIMA).
HIMAs have mainly been examined using constant loads. To quantify the adaptability to varying external forces, a new approach was needed and developed in the second part of the research program. A device was constructed based on a previously developed pneumatic measurement system and designed to measure the Adaptive Force (AF) of the elbow extensor muscles. The AF characterizes the adaptability to increasing external forces under isometric (AFiso) and eccentric (AFecc) conditions. First, it was asked whether these parameters can be reliably assessed with the new device (publication 4). Subsequently, the main research question was investigated: Is the maximal AFiso a specific and independent variable of muscle function compared to the MVIC? Both research parts also contained a sub-question concerning how the results can be influenced.
Parameters of local oxygen saturation (SvO2) and capillary blood filling (rHb) were non-invasively recorded by a spectrophotometer during maximal and submaximal HIMAs and PIMAs.
These were the main findings: Under load, SvO2 and rHb always settled into a steady state after an initial decrease. Nevertheless, their behavior could roughly be categorized into two types. In type I, both parameters behaved nearly parallel to each other, whereas in type II their progression over time was partly inverse. The inverse behavior probably depends on the level of deoxygenation, since rHb increased reliably at a suggested threshold of about 59% SvO2. This triggered mechanism and the observed homeostatic steady states seem to conflict with the concept of mechanically compressed capillaries and, consequently, with a restricted blood flow. The anatomical configuration of the blood vessels might provide one hypothetical explanation of how blood flow could be maintained. HIMA and PIMA did not differ regarding oxygenation or allocation to the described types. The TTF tended to be longer during PIMA.
As a sub-question, oxygenation and TTF were compared between HIMA and intermittent voluntary muscle twitches during a weight-holding task. TTF, but not oxygenation, differed significantly
(Twitch > HIMA). A change in neuromuscular control might serve as a speculative explanation of these results. This is supported by the finding that TTF did not correlate significantly with the extent of deoxygenation, irrespective of the performed task (HIMA, PIMA, or Twitch).
Other neuromuscular aspects of muscle function were considered in the second part of the research program. The new device mentioned above detected different force capacities within four trials on each of two days. Among the AF measurements, the functional counterpart, a concentric muscle action merging into an isometric one, was analyzed in comparison to the MVIC.
Based on the results, it can be assumed that a prior concentric muscle action does not influence the MVIC. However, the results were inconsistent and possibly affected by systematic errors. In contrast, the maximal AF variables (AFisomax and AFeccmax) could be measured reliably, as indicated by high test-retest reliability. Despite substantial correlations between force variables, AFisomax differed significantly from MVIC and from AFmax, which was identical to AFeccmax in almost all cases. Moreover, AFisomax showed the highest variability between trials.
These results indicate that maximal force capacities should be assessed separately. The adaptive holding capacity of a muscle can be lower compared to a commonly determined MVIC. This is of relevance since muscles frequently need to respond adequately to external forces. If their response does not correspond to the external impact, the muscle is forced to lengthen. In this scenario, joints are not completely stabilized and an injury may occur. This outlined issue should be addressed in future research in the field of sport and health sciences.
Finally, the dissertation presents another way to quantify AFisomax: a handheld device applied in combination with a manual muscle test. This assessment offers a more practical approach for clinical purposes.
We live in an aging society. The change in demographic structures poses a number of challenges, including an increase in age-associated diseases. Delirium, dementia, and depression are considered to be of particular interest in the field of aging and mental health. A common theory regarding healthy aging and mental health is that the highest satisfaction and best performance is achieved when a person's abilities match the demands of their environment. In this context, the person's environment includes both the physical and the social environment. Based on this assumption, this dissertation focuses on the investigation of non-pharmacological interventions that modify environmental factors in order to facilitate the prevention and treatment of mental disorders in older patients and their caregivers. The first part of this dissertation consists of two publications and deals with the prevention of postoperative delirium in elderly patients. The PAWEL study investigated the use of a multimodal, non-pharmacological intervention in the routine care of patients aged 70 years or older undergoing elective surgery. The intervention included an interdepartmental delirium prevention team, daily use of seven manualized “best practice” procedures, structured staff training on delirium, and the adaptation of the hospital environment to the patients’ needs. The second part of the dissertation used a meta-analysis to investigate whether technology-based interventions are a suitable form of support for informal caregivers of people with dementia. Subgroup analyses were conducted to examine the effect of different types of technology on caregiver burden and depressive symptoms. The following main results were found: The PAWEL study showed that the use of a multimodal, non-pharmacological intervention resulted in a significantly lower incidence rate of postoperative delirium and reduced days with delirium in the intervention group compared to the control group. 
However, this difference could not be observed in the group of patients undergoing elective cardiac surgery. The results of the meta-analysis showed that technology-based interventions offer a promising alternative to traditional “face-to-face” services. Significant effect sizes could be found in relation to both the burden and the depressive symptoms of caregiving relatives. These results provide further important information on the significant impact of non-pharmacological interventions that modify environmental factors on mental health, and support the consideration of such interventions in the prevention and treatment of mental disorders in both older patients and their caregivers.
Traditional organizations are strongly encouraged by emerging digital customer behavior and digital competition to transform their businesses for the digital age. Incumbents are particularly exposed to the field of tension between maintaining and renewing their business model. Banking is one of the industries most affected by digitalization, with a large stream of digital innovations around Fintech. Most research contributions focus on digital innovations, such as Fintech, but there are only a few studies on the related challenges and perspectives of incumbent organizations, such as traditional banks. Against this background, this dissertation examines the specific causes, effects and solutions for traditional banks in digital transformation − an underrepresented research area so far.
The first part of the thesis examines how digitalization has changed the latent customer expectations in banking and studies the underlying technological drivers of evolving business-to-consumer (B2C) business models. Online consumer reviews are systematized to identify latent concepts of customer behavior and future decision paths as strategic digitalization effects. Furthermore, the service attribute preferences, the impact of influencing factors and the underlying customer segments are uncovered for checking accounts in a discrete choice experiment. The dissertation contributes here to customer behavior research in digital transformation, moving beyond the technology acceptance model. In addition, the dissertation systematizes value proposition types in the evolving discourse around smart products and services as key drivers of business models and market power in the platform economy.
The second part of the thesis focuses on the effects of digital transformation on the strategy development of financial service providers, which are classified along with their firm performance levels. Standard types are derived based on fuzzy-set qualitative comparative analysis (fsQCA), with facade digitalization as one typical standard type for low performing incumbent banks that lack a holistic strategic response to digital transformation. Based on this, the contradictory impact of digitalization measures on key business figures is examined for German savings banks, confirming that the shift towards digital customer interaction was not accompanied by new revenue models diminishing bank profitability. The dissertation further contributes to the discourse on digitalized work designs and the consequences for job perceptions in banking customer advisory. The threefold impact of the IT support perceived in customer interaction on the job satisfaction of customer advisors is disentangled.
In the third part of the dissertation, design-oriented solutions are developed for core action areas of digitalized business models, i.e., data and platforms. A consolidated taxonomy for data-driven business models and a future reference model for digital banking are developed. The impact of the platform economy is demonstrated using the example of market entry by Bigtech. Role-based e3-value modeling is extended by meta-roles and role segments and linked to value co-creation mapping in VDML. In this way, the dissertation extends enterprise-modeling research on platform ecosystems and value co-creation using the example of banking.
Synthetic transcription factors, like natural ones, consist of a DNA-binding domain, which attaches specifically to the binding-site sequence upstream of the target gene, and an activation domain, which recruits the transcription machinery so that the target gene is expressed. The difference from natural transcription factors is that both the DNA-binding domain and the activation domain can be foreign to the host, which allows artificial metabolic pathways to be induced in the host, mostly chemically. The optogenetic synthetic transcription factors developed here go one step further. The DNA-binding domain is no longer coupled to the activation domain but to the blue-light photoreceptor CRY2, while the activation domain is fused to its interaction partner CIB1. Under blue-light irradiation, CRY2 and CIB1 dimerize, bringing the two domains together so that a functional transcription factor is formed. This system was genomically integrated into Saccharomyces cerevisiae. The constructed system was verified using the reporter yEGFP, which was detected by flow cytometry. It was shown that yEGFP expression can be tuned by emitting blue-light pulses of different lengths and by changing the DNA-binding domain, the activation domain, or the number of binding sites to which the DNA-binding domain attaches. To make the system attractive for industrial applications, it was scaled up from deep-well plates to a photobioreactor. Moreover, the blue-light system proved functional both in the laboratory strain YPH500 and in the industrially widely used yeast strain CEN.PK. Furthermore, an industrially relevant protein was also expressed using the verified system.
Finally, in this work the established blue-light system was successfully combined with a red-light system, which had not been described before.
This thesis deals with the synthesis of protein and composite protein-mineral microcapsules by the application of high-intensity ultrasound at the oil-water interface. While one system is stabilized by BSA molecules, the other is stabilized by different nanoparticles modified with BSA. A comprehensive study of all synthesis stages and of the resulting capsules was carried out, and a plausible mechanism of capsule formation was proposed. During the formation of BSA microcapsules, the protein molecules first adsorb at the O/W interface and unfold there, forming an interfacial network stabilized by hydrophobic interactions and hydrogen bonds between neighboring molecules. Simultaneously, the ultrasonic treatment cross-links the BSA molecules via the formation of intermolecular disulfide bonds. In this thesis, experimental evidence of the ultrasonically induced cross-linking of BSA in the shells of protein-based microcapsules is presented. The concept proposed many years ago by Suslick and co-workers is thus confirmed experimentally for the first time. Moreover, a consistent mechanism for the formation of intermolecular disulfide bonds in capsule shells is proposed, based on the redistribution of thiol and disulfide groups in BSA under the action of high-energy ultrasound. The formation of composite protein-mineral microcapsules loaded with three different oils and with shells composed of nanoparticles was also successful. The nature of the loaded oil and the type of nanoparticles in the shell influenced the size and shape of the microcapsules. Examination of the composite capsules revealed that the BSA molecules adsorbed on the nanoparticle surfaces in the capsule shell are not cross-linked by intermolecular disulfide bonds; instead, a Pickering emulsion forms.
The surface modification of composite microcapsules was successfully demonstrated both through pre-modification of the main components and through post-modification of the surface of finished composite microcapsules. Additionally, the mechanical properties of protein and composite protein-mineral microcapsules were compared. The results showed that the protein microcapsules are more resistant to elastic deformation.
Seismology, like many scientific fields, e.g., music information retrieval and speech signal processing, is experiencing exponential growth in the amount of data acquired by modern seismological networks. In this thesis, I take advantage of the opportunities offered by "big data" and by the methods developed in the areas of music information retrieval and machine learning to better predict the ground motion generated by earthquakes and to study the properties of the surface layers of the Earth. To better predict seismic ground motion, I propose two approaches based on unsupervised deep learning: an autoencoder network and Generative Adversarial Networks. The autoencoder technique explores a massive amount of ground motion data, evaluates the required parameters, and generates synthetic ground motion data in the Fourier amplitude spectra (FAS) domain. This method is tested on two synthetic datasets and one real dataset. The application to the real dataset shows that the substantial information contained within the FAS data can be encoded into a four- to five-dimensional manifold. Consequently, only a few independent parameters are required for efficient ground motion prediction. I also propose a method based on Conditional Generative Adversarial Networks (CGAN) for simulating ground motion records in the time-frequency and time domains. The CGAN generates time-frequency representations conditioned on the parameters magnitude, distance, and shear-wave velocity in the upper 30 m (VS30). After generating the amplitude of the time-frequency representation with the CGAN model, instead of the conventional approach of combining the amplitude spectra with a random phase spectrum, the phase is recovered by minimizing the misfit between the observed and reconstructed spectrograms. In the second part of this dissertation, I propose two methods for the monitoring and characterization of near-surface materials and for site effect analyses.
I implement an autocorrelation function and an interferometry method to monitor the velocity changes of near-surface materials resulting from the Kumamoto earthquake sequence (Japan, 2016). The seismic velocity changes observed during the strong shaking are due to the non-linear response of the near-surface materials. The results show that the velocity changes lasted for about two months after the Kumamoto mainshock. Furthermore, I used the velocity changes to evaluate the in-situ strain-stress relationship. I also propose a method for assessing the site proxy VS30 using non-invasive analysis, in which a dispersion curve of surface waves is inverted to estimate the shear-wave velocity of the subsurface. This method is based on Dix-like linear operators, which relate the shear-wave velocity to the phase velocity, and it is fast, efficient, and stable. All of the methods presented in this work can be used for processing "big data" in seismology and for the analysis of weak and strong ground motion data, to predict ground shaking, and to analyze site responses by considering potential time dependencies and nonlinearities.
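The single-station autocorrelation underlying such monitoring can be sketched in a few lines. This is an illustrative toy example, not the thesis's actual workflow: the trace is a synthetic sinusoid, and in practice a relative velocity change (dv/v) would be estimated by stretching a daily autocorrelation against a reference one.

```python
import math

def autocorrelation(trace, max_lag):
    """Normalized autocorrelation of a 1-D signal up to max_lag samples."""
    n = len(trace)
    mean = sum(trace) / n
    x = [v - mean for v in trace]
    var = sum(v * v for v in x)
    return [sum(x[i] * x[i + lag] for i in range(n - lag)) / var
            for lag in range(max_lag + 1)]

# Synthetic "trace": a sinusoid with a period of ~31 samples
# (a stand-in for a resonance in a real seismogram).
trace = [math.sin(0.2 * i) for i in range(500)]
acf = autocorrelation(trace, max_lag=40)
# acf[0] is 1 by construction; a strong peak reappears near one period.
# A subtle shift of that peak between days would indicate a velocity change.
```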
The amount of microplastics (MPs) in the environment is expected to increase in the near future due to the growing consumption of plastic products and the further fragmentation of plastics into smaller pieces. Compared to the marine environment, the fate and effects of MPs released into freshwater environments are still scarcely studied. To understand the possible effects and interactions of MPs in freshwater environments, planktonic zooplankton organisms are very useful because of their crucial trophic role. Freshwater rotifers in particular are among the most abundant organisms and form the interface between primary producers and secondary consumers. The aim of my thesis was to investigate the ingestion and effects of MPs in rotifers under a more natural scenario and to identify processes, such as the aggregation of MPs, the food dilution effect, and increasing MP concentrations, that could influence the ultimate fate of MPs in the environment. In a near-natural scenario, the interaction of MPs with bacteria and algae and their aggregation, together with particle size and concentration, are considered drivers of ingestion and effect. Aggregation makes smaller MPs more available to rotifers and larger MPs less frequently ingested. The negative effect caused by the ingestion of MPs was modulated by their size but also by the quantity and quality of food, which caused variable responses. Rotifers in the environment are subject to food limitation, and the presence of MPs could exacerbate this condition and reduce population growth and reproduction. Finally, in a scenario incorporating an entire zooplanktonic community, MPs were ingested by most individuals depending on their feeding mode but also on the concentration of MPs, which was found to be essential for MP availability.
This study highlights the importance of investigating MPs from a more environmental perspective, which could provide an alternative and more realistic view of the effects of MPs in ecosystems.
Duplicate detection describes the process of finding multiple representations of the same real-world entity in the absence of a unique identifier, and has many application areas, such as customer relationship management, genealogy and social sciences, or online shopping. Due to the increasing amount of data in recent years, the problem has become even more challenging on the one hand, but has led to a renaissance in duplicate detection research on the other hand.
This thesis examines the effects and opportunities of transitive relationships on the duplicate detection process. Transitivity implies that if record pairs ⟨ri,rj⟩ and ⟨rj,rk⟩ are classified as duplicates, then record pair ⟨ri,rk⟩ must also be a duplicate. However, this reasoning might contradict the pairwise classification, which is usually based on the similarity of objects. An essential property of similarity, in contrast to equivalence, is that similarity is not necessarily transitive.
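The clash between pairwise similarity and transitivity can be made concrete with a small union-find sketch. This is a hypothetical illustration; the record IDs and pairs are invented, not taken from the thesis.

```python
# Hypothetical sketch: closing pairwise duplicate decisions transitively
# with a union-find structure.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

# Suppose the pairwise classifier said: (r1, r2) and (r2, r3) are duplicates.
uf = UnionFind()
for a, b in [("r1", "r2"), ("r2", "r3")]:
    uf.union(a, b)

# Transitivity now forces r1 and r3 into the same cluster,
# even if their pairwise similarity was below the threshold.
assert uf.find("r1") == uf.find("r3")
```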
First, we experimentally evaluate the effect of an increasing data volume on the threshold selection to classify whether a record pair is a duplicate or non-duplicate. Our experiments show that independently of the pair selection algorithm and the used similarity measure, selecting a suitable threshold becomes more difficult with an increasing number of records due to an increased probability of adding a false duplicate to an existing cluster. Thus, the best threshold changes with the dataset size, and a good threshold for a small (possibly sampled) dataset is not necessarily a good threshold for a larger (possibly complete) dataset. As data grows over time, earlier selected thresholds are no longer a suitable choice, and the problem becomes worse for datasets with larger clusters.
Second, we present the Duplicate Count Strategy (DCS) and its enhancement DCS++, two alternatives to the standard Sorted Neighborhood Method (SNM) for the selection of candidate record pairs. DCS adapts SNM's window size based on the number of detected duplicates, and DCS++ uses transitive dependencies to save complex comparisons when finding duplicates in larger clusters. We prove that with a proper (domain- and data-independent!) threshold, DCS++ is more efficient than SNM without loss of effectiveness.
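As background, the standard SNM that DCS modifies can be sketched as follows. This is a minimal illustration; the records, sort key, and window size are assumptions, and DCS's adaptive window growth is only hinted at in the comments.

```python
# Minimal sketch of the standard Sorted Neighborhood Method (SNM):
# sort all records by a key, then compare each record only with its
# near neighbors in the sorted order.

def snm_candidate_pairs(records, key, window):
    """Pair each record with its (window - 1) successors in sort order."""
    s = sorted(records, key=key)
    pairs = []
    for i in range(len(s)):
        for j in range(i + 1, min(i + window, len(s))):
            pairs.append((s[i], s[j]))
    return pairs

records = ["meyer", "meier", "smith", "smyth", "jones"]
pairs = snm_candidate_pairs(records, key=lambda r: r, window=2)
# With window=2 only adjacent records in sort order become candidates.
# DCS would instead widen the window whenever a duplicate is detected,
# so that larger duplicate clusters are not cut off by a fixed window.
```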
Third, we tackle the problem of contradicting pairwise classifications. Usually, the transitive closure of the pairwise classifications is computed to obtain a transitively closed result set. However, the transitive closure disregards negative classifications. We present three new and several existing clustering algorithms and experimentally evaluate them on various datasets and under various algorithm configurations. The results show that the commonly used transitive closure is inferior to most other clustering algorithms, especially regarding the precision of the results. In scenarios with larger clusters, our proposed EMCC algorithm is, together with Markov Clustering, the best performing clustering approach for duplicate detection, although its runtime is longer than that of Markov Clustering due to its subexponential time complexity. EMCC especially outperforms Markov Clustering regarding precision and has the additional advantage that it can also be used in scenarios where edge weights are not available.
Workplace-oriented basic education ("arbeitsplatzorientierte Grundbildung", AoG) is still a young research field within the academic study of adult education. Recently, in-company courses have also begun to address the "education losers" of our school system, who leave lower secondary or special-needs schools with sometimes massive problems. This qualitative study belongs to the field of subject-scientific learning research. It reconstructs the subjective reasons for learning of employees with low formal literacy when they participate in basic education courses at the workplace. The study shows that considerable resistance, rooted in learners' biographies, must be overcome to reach personally meaningful, productive learning, embedded in relationships of recognition within the course. Voluntariness and participant orientation in such courses are frequently called into question. The author therefore understands more or less reflected learning resistance with defensive reasons for learning as subjectively meaningful strategies of action. As a lifelong practitioner, the author explains how the results of the study can be used for good practice in workplace basic education.
A decade ago, it became feasible to store multi-terabyte databases in main memory. These in-memory databases (IMDBs) profit from DRAM's low latency and high throughput as well as from the removal of costly abstractions used in disk-based systems, such as the buffer cache. However, as the DRAM technology approaches physical limits, scaling these databases becomes difficult. Non-volatile memory (NVM) addresses this challenge. This new type of memory is persistent, has more capacity than DRAM (4x), and does not suffer from its density-inhibiting limitations. Yet, as NVM has a higher latency (5-15x) and a lower throughput (0.35x), it cannot fully replace DRAM.
IMDBs thus need to navigate the trade-off between the two memory tiers. We present a solution to this optimization problem. Leveraging information about access frequencies and patterns, our solution utilizes NVM's additional capacity while minimizing the associated access costs. Unlike buffer cache-based implementations, our tiering abstraction does not add any costs when reading data from DRAM. As such, it can act as a drop-in replacement for existing IMDBs. Our contributions are as follows:
(1) As the foundation for our research, we present Hyrise, an open-source, columnar IMDB that we re-engineered and re-wrote from scratch. Hyrise enables realistic end-to-end benchmarks of SQL workloads and offers query performance which is competitive with other research and commercial systems. At the same time, Hyrise is easy to understand and modify as repeatedly demonstrated by its uses in research and teaching.
(2) We present a novel memory management framework for different memory and storage tiers. By encapsulating the allocation and access methods of these tiers, we enable existing data structures to be stored on different tiers with no modifications to their implementation. Besides DRAM and NVM, we also support and evaluate SSDs and have made provisions for upcoming technologies such as disaggregated memory.
(3) To identify the parts of the data that can be moved to (s)lower tiers with little performance impact, we present a tracking method that identifies access skew both in the row and column dimensions and that detects patterns within consecutive accesses. Unlike existing methods that have substantial associated costs, our access counters exhibit no identifiable overhead in standard benchmarks despite their increased accuracy.
(4) Finally, we introduce a tiering algorithm that optimizes the data placement for a given memory budget. In the TPC-H benchmark, this allows us to move 90% of the data to NVM while the throughput is reduced by only 10.8% and the query latency is increased by 11.6%. With this, we outperform approaches that ignore the workload's access skew and access patterns and increase the query latency by 20% or more.
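The flavor of such budget-constrained placement can be illustrated with a simple greedy heuristic. This is a hypothetical sketch, not the thesis's actual tiering algorithm; the chunk names, sizes, and access counts are invented.

```python
# Hypothetical sketch of frequency-based tiering: keep the hottest
# data chunks in DRAM until the budget is exhausted, spill the rest to NVM.

def place_chunks(chunks, dram_budget):
    """chunks: list of (name, size, access_count).
    Greedily assign the most frequently accessed chunks to DRAM."""
    placement = {}
    used = 0
    # "Hottest per byte first" is a common greedy heuristic.
    for name, size, accesses in sorted(
            chunks, key=lambda c: c[2] / c[1], reverse=True):
        if used + size <= dram_budget:
            placement[name] = "DRAM"
            used += size
        else:
            placement[name] = "NVM"
    return placement

chunks = [("orders", 40, 900), ("lineitem", 60, 300), ("archive", 50, 10)]
print(place_chunks(chunks, dram_budget=100))
# → {'orders': 'DRAM', 'lineitem': 'DRAM', 'archive': 'NVM'}
```

A real tiering algorithm would additionally weigh access patterns (sequential vs. random) and the asymmetric latencies of the tiers, as the thesis does.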
Individually, our contributions provide novel approaches to current challenges in systems engineering and database research. Combining them allows IMDBs to scale past the limits of DRAM while continuing to profit from the benefits of in-memory computing.
The increasing introduction of non-native plant species may pose a threat to local biodiversity. However, the basis of successful plant invasion is not conclusively understood, especially since these plant species can adapt to the new range within a short period of time despite impoverished genetic diversity of the starting populations. In this context, DNA methylation is considered promising to explain successful adaptation mechanisms in the new habitat. DNA methylation is a heritable variation in gene expression without changing the underlying genetic information. Thus, DNA methylation is considered a so-called epigenetic mechanism, but has been studied in mainly clonally reproducing plant species or genetic model plants. An understanding of this epigenetic mechanism in the context of non-native, predominantly sexually reproducing plant species might help to expand knowledge in biodiversity research on the interaction between plants and their habitats and, based on this, may enable more precise measures in conservation biology.
For my studies, I combined chemical DNA demethylation of field-collected seed material from predominantly sexually reproducing species with rearing the offspring under common climatic conditions to examine DNA methylation in an ecological-evolutionary context. The contrast between chemically treated (demethylated) plants, whose variation in DNA methylation was artificially reduced, and untreated control plants of the same species allowed me to study the impact of this mechanism on adaptive trait differentiation and local adaptation. With this experimental background, I conducted three studies examining the effect of DNA methylation in non-native species along a climatic gradient and between climatically divergent regions.
The first study focused on adaptive trait differentiation in two invasive perennial goldenrod species, Solidago canadensis sensu lato and S. gigantea AITON, along a climate gradient of more than 1000 km in Central Europe. I found population differences in flowering time, plant height, and biomass in the longer-established S. canadensis, but only in the number of regrowing shoots in S. gigantea. While S. canadensis did not show any population structure, I was able to identify three genetic groups along this climatic gradient in S. gigantea. Surprisingly, demethylated plants of both species showed no change in the majority of traits studied. In the subsequent second study, I focused on the longer-established goldenrod species S. canadensis and used molecular analyses to infer spatial epigenetic and genetic population differences in the same specimens from the previous study. I found weak genetic but no epigenetic spatial variation between populations. Additionally, I was able to identify one genetic and one epigenetic marker putatively subject to selection. However, the results of this study reconfirmed that the epigenetic mechanism of DNA methylation appears to be hardly involved in adaptive processes within the new range of S. canadensis.
Finally, I conducted a third study in which I reciprocally transplanted short-lived plant species between two climatically divergent regions in Germany to investigate local adaptation at the plant family level. For this purpose, I used four plant families (Amaranthaceae, Asteraceae, Plantaginaceae, Solanaceae) and here I additionally compared between non-native and native plant species. Seeds were transplanted to regions with a distance of more than 600 kilometers and had either a temperate-oceanic or a temperate-continental climate. In this study, some species were found to be maladapted to their own local conditions, both in non-native and native plant species alike. In demethylated individuals of the plant species studied, DNA methylation had inconsistent but species-specific effects on survival and biomass production. The results of this study highlight that DNA methylation did not make a substantial contribution to local adaptation in the non-native as well as native species studied.
In summary, my work showed that DNA methylation plays a negligible role in both adaptive trait variation along climatic gradients and local adaptation in non-native plant species that either exhibit a high degree of genetic variation or rely mainly on sexual reproduction with low clonal propagation. I was able to show that the adaptive success of these non-native plant species can hardly be explained by DNA methylation, but could be a consequence of multiple introductions, dispersal corridors, and meta-population dynamics. Moreover, my results illustrate that using plant species that do not predominantly reproduce clonally and are not model plants is essential to characterize the effect size of epigenetic mechanisms in an ecological-evolutionary context.
Dynamic resource management is an essential requirement for private and public cloud computing environments. With dynamic resource management, the assignment of physical resources to the cloud's virtual resources depends on the actual needs of the running applications and services, which improves the utilization of the cloud's physical resources and reduces the cost of the offered services. In addition, virtual resources can be moved across different physical resources in the cloud environment without a noticeable impact on the running applications or services. This means that the availability of the services and applications running in the cloud is independent of failures of hardware resources, including servers, switches, and storage. This increases the reliability of cloud services compared to classical data-center environments.
In this thesis we briefly discuss dynamic resource management and then focus in depth on live migration as the core mechanism of dynamic compute resource management. Live migration is a commonly used and essential feature in cloud and virtual data-center environments. Cloud computing load balancing, power saving, and fault tolerance all depend on live migration to optimize the usage of virtual and physical resources. As we will discuss in this thesis, live migration brings many benefits to cloud and virtual data-center environments; however, its cost cannot be ignored. Live migration cost includes the migration time, downtime, network overhead, increased power consumption, and CPU overhead.
IT administrators often run live migrations of virtual machines without any estimate of the migration cost, which can lead to resource bottlenecks, higher migration cost, and migration failures. The first problem we discuss in this thesis is how to model the cost of virtual machine live migration. Secondly, we investigate how machine learning techniques can help cloud administrators estimate this cost before initiating the migration of one or multiple virtual machines. We also discuss how to determine the optimal timing for live-migrating a specific virtual machine to another server. Finally, we propose practical solutions that can be integrated with cloud administration portals to answer the research questions raised above.
Our research methodology for achieving the project objectives is to propose empirical models based on VMware test-beds with different benchmark tools. We then use machine learning techniques to propose a prediction approach for the cost of virtual machine live migration. Timing optimization for live migration is also proposed in this thesis, based on the cost prediction combined with a prediction of data-center network utilization. Live migration with persistent memory clusters is discussed at the end of the thesis. The cost prediction and timing optimization techniques proposed here can be practically integrated with the VMware vSphere cluster portal, so that IT administrators can use the cost prediction feature and the timing optimization option before proceeding with a virtual machine live migration.
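The idea of learning a migration-cost model from benchmark data can be sketched with a toy regression. Everything below is illustrative: the features (VM memory size, page-dirtying rate, bandwidth), their ranges, and the synthetic "ground truth" are assumptions for demonstration, not measurements from the thesis test-beds.

```python
import numpy as np

# Toy sketch of fitting a live-migration cost model. All features, ranges,
# and the synthetic "ground truth" below are illustrative assumptions,
# not measurements from the thesis's VMware test-beds.
rng = np.random.default_rng(0)
n = 200
mem_gb = rng.uniform(8, 64, n)         # VM memory size
dirty_mbps = rng.uniform(10, 400, n)   # page-dirtying rate of the workload
bw_mbps = rng.uniform(500, 10_000, n)  # available migration bandwidth

# Pre-copy migration time grows with memory and dirty rate and shrinks with
# bandwidth; small noise stands in for measurement scatter.
t_sec = 8 * 1024 * mem_gb / bw_mbps * (1 + dirty_mbps / bw_mbps) \
        + rng.normal(0, 0.5, n)

# Fit a linear model on physically motivated basis functions.
X = np.column_stack([mem_gb / bw_mbps,
                     mem_gb * dirty_mbps / bw_mbps**2,
                     np.ones(n)])
coef, *_ = np.linalg.lstsq(X, t_sec, rcond=None)
rel_err = np.mean(np.abs(X @ coef - t_sec) / t_sec)
print(f"mean relative prediction error: {rel_err:.1%}")
```

The design choice mirrored here is that a small set of physically motivated predictors, fed into a simple learner, can already keep the prediction error within a usable band.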
Testing shows that our proposed approach for predicting VM live migration cost achieves acceptable results, with less than 20% prediction error, and can be easily implemented and integrated with VMware vSphere as an example of a commonly used resource management portal for virtual data-centers and private cloud environments. The results also show that our proposed migration timing optimization technique can save up to 51% of the migration time for memory-intensive workloads and up to 27% for network-intensive workloads. This timing optimization technique can help network administrators save migration time while utilizing a higher network rate and achieving a higher probability of success.
At the end of this thesis, we discuss persistent memory as a new trend in server memory technology. Persistent memory modes of operation and configurations are discussed in detail to explain how live migration works between servers with different memory configurations. We then build a VMware cluster with both persistent-memory servers and DRAM-only servers to show the difference in live migration cost between VMs backed by DRAM only and VMs using persistent memory.
Der Gefährdungsschaden (loss through endangerment of assets)
(2022)
The concept of the Gefährdungsschaden (loss caused by endangering assets) has been part of the established repertoire of criminal law for at least 130 years. To this day, however, it remains unclear where the line runs between endangerments that are relevant to assets and those that are not. In contrast to the existing fact-pattern- and offence-based case groups, Melanie Epe therefore forms ten inductive-dogmatic case groups on the basis of all 382 decisions of the Reichsgericht, the BGH, and the higher courts published in the standard literature. The strength of these case groups is that they make the figure of the Gefährdungsschaden more tangible in practice, since they tie the justification of a Gefährdungsschaden to dogmatic and economic particularities.
Aldehyde oxidases (AOXs) (E.C. 1.2.3.1) are molybdoflavo-enzymes belonging to the xanthine oxidase (XO) family. AOXs in mammals contain one molybdenum cofactor (Moco), one flavin adenine dinucleotide (FAD) and two [2Fe-2S] clusters, the presence of which is essential for the activity of the enzyme. Human aldehyde oxidase (hAOX1) is a cytosolic enzyme mainly expressed in the liver. hAOX1 is involved in the metabolism of xenobiotics: it oxidizes aldehydes to their corresponding carboxylic acids and hydroxylates N-heterocyclic compounds. Since these functional groups are widely present in therapeutics, understanding the behaviour of hAOX1 has important implications in medicine. During the catalytic cycle of hAOX1, the substrate is oxidized at the Moco and electrons are transferred internally to the FAD via the FeS clusters. An electron acceptor juxtaposed to the FAD receives the electrons and re-oxidizes the enzyme for the next catalytic cycle. Molecular oxygen is the endogenous electron acceptor of hAOX1; in accepting electrons it is reduced, producing reactive oxygen species (ROS) including hydrogen peroxide (H2O2) and superoxide (O2·−). The production of ROS has patho-physiological importance, as ROS can have a wide range of effects on cell components, including the enzyme itself.
In this thesis, we have shown that hAOX1 loses its activity over multiple cycles of catalysis due to endogenous ROS production and have identified a cysteine-rich motif that protects hAOX1 from the damaging effects of ROS. We have also shown that a sulfido ligand, which is bound at the Moco and is essential for the catalytic activity of the enzyme, is vulnerable during turnover. The ROS produced during the course of the reaction are able to remove this sulfido ligand from the Moco. In addition, ROS oxidize particular cysteine residues. The combined effects of ROS on the sulfido ligand and on specific cysteine residues result in inactivation of the enzyme. Furthermore, we report that small reducing agents containing reactive sulfhydryl groups selectively inactivate some of the mammalian AOXs by modifying the sulfido ligand at the Moco. The mechanism of ROS production by hAOX1 is another aspect investigated as part of the work in this thesis. We have shown that the ratio of the ROS types produced by hAOX1, i.e. hydrogen peroxide (H2O2) and superoxide (O2·−), is determined by a particular position on a flexible loop located in close proximity to the FAD. The size of the cavity at the ROS-producing site, i.e. the N5 position of the FAD isoalloxazine ring, kinetically affects the amount of each type of ROS generated by hAOX1. Taken together, hAOX1 is an enzyme of emerging importance in pharmacological and medical studies, not only due to its involvement in drug metabolism, but also due to its ROS production, which has physiological and pathological implications.
Hydraulic-driven fractures play a key role in subsurface energy technologies across several scales. By injecting fluid at high hydraulic pressure into rock of intrinsically low permeability, the in-situ stress field and fracture development patterns can be characterised and the rock permeability can be enhanced. In the petroleum industry, hydraulic fracturing is a standard commercial procedure for enhanced oil and gas production from low-permeability rock reservoirs. In the utilization of enhanced geothermal systems (EGS), however, a major geological concern is the unsolicited generation of earthquakes due to fault reactivation, referred to as induced seismicity, with a magnitude large enough to be felt on the surface or to damage facilities and buildings. Furthermore, reliable interpretation of hydraulic fracturing tests for stress measurement remains a great challenge for these energy technologies. Therefore, in this cumulative doctoral thesis the following research questions are investigated. (1): How do hydraulic fractures grow in hard rock at various scales?; (2): Which parameters control hydraulic fracturing and hydro-mechanical coupling?; and (3): How can hydraulic fracturing in hard rock be modelled?
In the laboratory scale study, hydraulic fracturing experiments that were performed on intact cubic Pocheon granite samples from South Korea under different injection protocols are investigated numerically using Irazu2D. The goal of the laboratory experiments is to test the concept of cyclic soft stimulation, which may enable sustainable permeability enhancement (Publication 1).
In the borehole scale study, hydraulic fracturing tests are reported that were performed in boreholes located in central Hungary to determine the in-situ stress for a geological site investigation. At a depth of about 540 m, the recorded pressure-versus-time curves in mica schist with low-dip-angle foliation show an atypical evolution. To explain this observation, a series of discrete element computations using Particle Flow Code 2D are performed (Publication 2).
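For orientation, the classical interpretation of such pressure-time records links the breakdown and shut-in pressures to the horizontal principal stresses. The relations below are the textbook Hubbert-Willis form for a vertical borehole in impermeable rock, given here as general background rather than the exact formulation used in Publication 2:

```latex
% Breakdown pressure for a vertical borehole in impermeable rock
% (classical Hubbert--Willis interpretation):
P_b = 3\,\sigma_{h,\mathrm{min}} - \sigma_{H,\mathrm{max}} + T_0 - P_p
% with tensile strength T_0 and pore pressure P_p; the shut-in
% pressure approximates the minimum horizontal stress:
P_{si} \approx \sigma_{h,\mathrm{min}}
```

Deviations of measured curves from this idealised picture, for example due to foliation, are precisely what motivates numerical re-analysis of atypical tests.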
In the reservoir scale study, the hydro-mechanical behaviour of fractured crystalline rock due to one of the five hydraulic stimulations at the Pohang Enhanced Geothermal site in South Korea is studied. Fluid pressure perturbation at faults of several hundred-meter lengths during hydraulic stimulation is simulated using FracMan (Publication 3).
The doctoral research shows that the resulting hydraulic fracture geometry depends “locally”, i.e. at the length scale of the representative elementary volume (REV) and below (sub-REV), on the geometry and strength of natural fractures, and “globally”, i.e. at super-REV domain volumes, on the far-field stresses. Regarding hydro-mechanical coupling, it is suggested to define separate coupling relationships for the intact rock mass and for natural fractures. Furthermore, the relative importance of the parameters affecting the magnitude of the formation breakdown pressure, a parameter characterising hydro-mechanical coupling, is defined. It can also be concluded that there is a clear gap between the capacity of the simulation software and the complexity of the studied problems. Therefore, the computational time for simulating complex hydraulic fracture geometries must be reduced while maintaining high-fidelity simulation results. This can be achieved either by extending the computational resources via parallelization techniques or by using time-scaling techniques. The ongoing development of the numerical models used focuses on tackling these methodological challenges.
Countries processing raw coffee beans often lack the economic resources to fight the serious environmental problems caused by the by-products and wastewater generated during wet coffee processing. The aim of this work was to develop alternative methods of improving the quality of the waste by-products and thus making the process economically more attractive, with valorization options that can be brought to the coffee producers.
The type of processing influences not only the constitution of green coffee but also that of the by-products and wastewater. Therefore, coffee bean samples as well as by-products and wastewater collected at different production steps were analyzed. The results show that the composition of the wastewater depends on how much and how often the wastewater is recycled during processing. For the coffee beans, the results indicate that the proteins might be affected during processing, and a positive effect of the fermentation on the solubility and accessibility of proteins seems probable. The steps of coffee processing influence the different constituents of green coffee beans which, during roasting, give rise to aroma compounds and express the characteristics of roasted coffee beans. Since these constituents are involved in the Maillard reaction during roasting, coffee producers could utilize this to improve the quality of green coffee beans and, ultimately, the quality of the coffee cup.
The valorization of coffee wastes through modification to activated carbon has been considered as a low-cost option for creating an adsorbent with the potential to compete with commercial carbons. An activation protocol using spent coffee grounds and parchment was developed, and the resulting materials were assessed for their adsorption capacity for organic compounds. Spent coffee grounds and parchment proved to have an adsorption efficiency similar to that of commercial activated carbon.
The results of this study document significant information originating from the processing of de-pulped to green coffee beans. Furthermore, they show that coffee parchment and spent coffee grounds can be valorized as a low-cost option to produce activated carbons. Further work needs to be directed at optimizing the activation methods to improve the quality of the materials produced, and at the viability of applying such experiments in-situ to bring coffee producers further valorization opportunities with environmental benefits.
Coffee producers would profit from establishing appropriate simple technologies to improve green coffee quality, re-use coffee by-products, and valorize wastewater.
More than a century ago the phenomenon of non-Mendelian inheritance (NMI), defined as any type of inheritance pattern in which traits do not segregate in accordance with Mendel’s laws, was first reported. In the plant kingdom three genomic compartments, the nucleus, chloroplast, and mitochondrion, can participate in such a phenomenon. High-throughput sequencing (HTS) proved to be a key technology to investigate NMI phenomena by assembling and/or resequencing entire genomes. However, the generation, analysis, and interpretation of such datasets remain challenging due to the multi-layered biological complexity. To advance our knowledge in the field of NMI, I conducted three studies involving different HTS technologies and implemented two new algorithms to analyze them.
In the first study I implemented a novel post-assembly pipeline, called Semi-Automated Graph-Based Assembly Curator (SAGBAC), which visualizes non-graph-based assemblies as graphs, identifies recombinogenic repeat pairs (RRPs), and reconstructs plant mitochondrial genomes (PMGs) in a semi-automated workflow. We applied this pipeline to assemblies of three Oenothera species, resulting in a spatially folded and circularized model. This model was confirmed by PCR and Southern blot analyses and was used to predict a defined set of 70 PMG isoforms. With Illumina Mate Pair and PacBio RSII data, the stoichiometry of the RRPs was determined quantitatively, differing by up to three-fold.
In the second study I developed a post-multiple-sequence-alignment algorithm, called correlation mapping (CM), which correlates segment-wise numbers of nucleotide changes with a numerically ascertainable phenotype. We applied this algorithm to 14 wild-type and 18 mutagenized plastome assemblies within the Oenothera genus and identified two genes, accD and ycf2, that may cause the competitive behavior of plastid genotypes, as plastids can be biparentally inherited in Oenothera. Moreover, the lipid composition of the plastid envelope membrane is affected by polymorphisms within these two genes.
For the third study, I programmed a pipeline to investigate an NMI phenomenon known as paramutation in tomato by analyzing DNA and bisulfite sequencing data as well as microarray data. We identified the responsible gene (Solyc02g0005200) and were able to fully repress the phenotype it causes by heterologous complementation with a paramutation-insensitive transgene of the Arabidopsis thaliana orthologue. Additionally, a suppressor mutant shows a globally altered DNA methylation pattern and carries a large deletion leading to a gene fusion involving a histone deacetylase.
In conclusion, the algorithms and data analysis pipelines I developed and implemented are suitable for investigating NMI and led to novel insights into such phenomena: by reconstructing PMGs (SAGBAC) as a requirement to study mitochondria-associated phenotypes, by identifying genes (CM) causing interplastidial competition, and by applying a DNA/bisulfite-seq analysis pipeline to shed light on a transgenerational epigenetic inheritance phenomenon.
Molecules are often naturally embedded in a complex environment. As a consequence, characteristic properties of a molecular subsystem can be substantially altered or new properties emerge due to interactions between molecular and environmental degrees of freedom. The present thesis is concerned with the numerical study of quantum dynamical and stationary properties of molecular vibrational systems embedded in selected complex environments.
In the first part, we discuss "strong-coupling" model scenarios for molecular vibrations interacting with few quantized electromagnetic field modes of an optical Fabry-Pérot cavity. We thoroughly elaborate on properties of emerging "vibrational polariton" light-matter hybrid states and examine the relevance of the dipole self-energy. Further, we identify cavity-induced quantum effects and an emergent dynamical resonance in a cavity-altered thermal isomerization model, which lead to significant suppression of thermal reaction rates. Moreover, for a single rovibrating diatomic molecule in an optical cavity, we observe non-adiabatic signatures in dynamics due to "vibro-polaritonic conical intersections" and discuss spectroscopically accessible "rovibro-polaritonic" light-matter hybrid states.
In the second part, we study a weakly coupled but numerically challenging quantum mechanical adsorbate-surface model system comprising a few thousand surface modes. We introduce an efficient construction scheme for a "hierarchical effective mode" approach to reduce the number of surface modes in a controlled manner. In combination with the multilayer multiconfigurational time-dependent Hartree (ML-MCTDH) method, we examine the vibrational adsorbate relaxation dynamics from different excited adsorbate states by solving the full non-Markovian system-bath dynamics for the characteristic relaxation time scale. We examine half-lifetime scaling laws from vibrational populations and identify prominent non-Markovian signatures as deviations from Markovian reduced system density matrix theory in vibrational coherences, system-bath entanglement and energy transfer dynamics.
In the final part of this thesis, we approach the dynamics and spectroscopy of vibronic model systems at finite temperature by formulating the ML-MCTDH method in the non-stochastic framework of thermofield dynamics. We apply our method to thermally-altered ultrafast internal conversion in the well-known vibronic coupling model of pyrazine. Numerically beneficial representations of multilayer wave functions ("ML-trees") are identified for different temperature regimes, which allow us to access thermal effects on both electronic and vibrational dynamics as well as spectroscopic properties for several pyrazine models.
The echo chamber model describes the development of groups in heterogeneous social networks. By a heterogeneous social network we mean a set of individuals, each of whom holds exactly one opinion. The existing relationships between individuals can then be represented by a graph. The echo chamber model is a time-discrete model which, like a board game, is played in rounds. In each round, an existing relationship is chosen uniformly at random from the network and the two connected individuals interact. If the opinions of the individuals involved are sufficiently similar, they move closer together in their opinions, whereas if the opinions are too far apart, they break off their relationship and one of the individuals seeks a new relationship. In this work we examine the building blocks of this model. We start from the observation that changes in the structure of relationships in the network can be described by a system of interacting particles in a more abstract space.
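One round of the dynamic described above can be sketched as follows. The similarity threshold `epsilon`, the convergence rate `mu`, and the exact rewiring rule are illustrative assumptions, not the thesis's precise specification:

```python
import random

# One round of an echo-chamber-style dynamic: pick a tie uniformly at random,
# let similar opinions converge, otherwise break the tie and rewire.
# epsilon, mu and the rewiring details are illustrative assumptions.
def step(opinions, edges, epsilon=0.3, mu=0.25, rng=random):
    i, j = rng.choice(sorted(edges))        # uniform choice of a relationship
    if abs(opinions[i] - opinions[j]) <= epsilon:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift                # both move toward each other
        opinions[j] -= shift
    else:
        edges.remove((i, j))                # break the relationship...
        candidates = [k for k in range(len(opinions))
                      if k not in (i, j) and tuple(sorted((i, k))) not in edges]
        if candidates:                      # ...and i seeks a new partner
            edges.add(tuple(sorted((i, rng.choice(candidates)))))
    return opinions, edges

# Tiny demo: three individuals, two ties, fixed seed.
ops, E = [0.0, 0.1, 0.9], {(0, 1), (1, 2)}
r = random.Random(1)
for _ in range(50):
    ops, E = step(ops, E, rng=r)
```

Because opinion updates are convex moves, opinions never leave the convex hull of the initial values, which is the kind of invariant that makes the more abstract particle-system view tractable.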
These reflections lead to the definition of a new abstract graph that encompasses all possible relational configurations of the social network. This provides us with the geometric understanding necessary to analyse the dynamic components of the echo chamber model in Part III. As a first step, in Section 7, we leave aside the opinions of the individuals and assume that the position of the edges changes with each move as described above, in order to obtain a basic understanding of the underlying dynamics. Using Markov chain theory, we find upper bounds on the speed of convergence of an associated Markov chain to its unique stationary distribution and show that there are distinct networks that the analysed dynamics cannot tell apart, in the sense that the stationary distribution of the associated Markov chain gives equal weight to these networks.
In the reversible cases, we focus in particular on the explicit form of the stationary distribution as well as on the lower bounds of the Cheeger constant to describe the convergence speed.
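The interplay between the Cheeger constant and the convergence speed of a reversible chain can be illustrated numerically. The 6-node graph below (two triangles joined by a bridge) is an illustrative choice, not a network from the thesis; the computation checks the standard two-sided Cheeger inequality relating the constant to the spectral gap:

```python
import itertools
import numpy as np

# Numerical check of the Cheeger inequality  Phi^2/2 <= 1 - lambda_2 <= 2*Phi
# for a lazy random walk on a small graph: two triangles joined by a bridge.
# The graph is an illustrative choice, not a network from the thesis.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
deg = A.sum(axis=1)
P = 0.5 * np.eye(6) + 0.5 * A / deg[:, None]   # lazy, reversible random walk
pi = deg / deg.sum()                           # stationary distribution

Q = pi[:, None] * P                            # ergodic flow pi_i * P_ij
phi = min(
    Q[np.ix_(S, [v for v in range(6) if v not in S])].sum() / pi[S].sum()
    for r in range(1, 6)
    for S in map(list, itertools.combinations(range(6), r))
    if pi[S].sum() <= 0.5)

# Spectral gap via the symmetrised transition matrix.
M = np.diag(pi**0.5) @ P @ np.diag(pi**-0.5)
gap = 1.0 - np.sort(np.linalg.eigvalsh(M))[-2]
print(f"Cheeger constant {phi:.4f}, spectral gap {gap:.4f}")
```

The bridge edge is the bottleneck: the small flow across it gives a small Cheeger constant and hence a small spectral gap, i.e. slow convergence.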
The final result of Section 8, based on absorbing Markov chains, shows that in a reduced version of the echo chamber model, a hierarchical structure of the number of conflicting relations can be identified.
We can use this structure to determine an upper bound on the expected absorption time using a quasi-stationary distribution. The hierarchical structure also provides a bridge to classical theories of pure death processes. We conclude by showing how future research can exploit this link and by discussing the importance of the results as building blocks for a full theoretical understanding of the echo chamber model. Finally, Part IV presents a published paper on the birth-death process with partial catastrophe. The paper is based on the explicit calculation of the first moment of a catastrophe. The first part is entirely based on an analytical approach to second-degree recurrences with linear coefficients, proving the convergence to 0 of the resulting sequence as well as the speed of convergence. The second part determines upper bounds on the expected value of the population size and its variance, as well as the difference between the determined upper bound and the actual expected value. For these results we use almost exclusively the theory of ordinary nonlinear differential equations.
High-mountain regions provide valuable ecosystem services, including food, water, and energy production, to more than 900 million people worldwide. Projections hold that this number will increase rapidly in the coming decades, accompanied by continued urbanisation of cities located in mountain valleys. One manifestation of this ongoing socio-economic change of mountain societies is a rise in settlement areas and transportation infrastructure, while increased power needs fuel the construction of hydropower plants along rivers in the high-mountain regions of the world. However, the physical processes governing the cryosphere of these regions are highly sensitive to changes in climate, and global warming will likely alter the conditions in the headwaters of high-mountain rivers. One potential implication of this change is an increase in the frequency and magnitude of outburst floods – highly dynamic flows capable of carrying large amounts of water and sediment. Sudden outbursts from lakes formed behind natural dams are complex geomorphological processes and are often part of a hazard cascade. In contrast to other types of natural hazards in high-alpine areas, for example landslides or avalanches, outburst floods are highly infrequent. Therefore, observations and data describing, for example, the mode of outburst or the hydraulic properties of the downstream-propagating flow are very limited, which is a major challenge in contemporary (glacial) lake outburst flood research. Although glacial lake outburst floods (GLOFs) and landslide-dammed lake outburst floods (LLOFs) are rare, a number of documented events have caused high fatality counts and damage. The highest documented losses due to outburst floods since the start of the 20th century were induced by only a few high-discharge events. Thus, outburst floods can be a significant hazard to downvalley communities and infrastructure in high-mountain regions worldwide.
This thesis focuses on the Greater Himalayan region, a vast mountain belt stretching across 0.89 million km². Although potentially hundreds of outburst floods have occurred there since the beginning of the 20th century, data on these events are still scarce. Projections of cryospheric change, including glacier-mass wastage and permafrost degradation, suggest an overall increase in the water volume stored in meltwater lakes as well as a destabilisation of mountain slopes in the Greater Himalayan region. Thus, the potential for outburst floods to affect the increasingly densely populated valleys of this mountain belt is also likely to increase in the future. A prime example of one of these valleys is the Pokhara valley in Nepal, which is drained by the Seti Khola, a river crossing one of the steepest topographic gradients in the Himalayas. The valley is also home to Nepal’s second largest and rapidly growing city, Pokhara, which currently has a population of more than half a million people – some of whom live in informal settlements within the floodplain of the Seti Khola. Although there is ample evidence for past outburst floods along this river in recent and historic times, these events have hardly been quantified.
The main motivation of my thesis is to address the data scarcity on past and potential future outburst floods in the Greater Himalayan region, both at a regional and at a local scale. For the former, I compiled an inventory of >3,000 moraine-dammed lakes, of which about 1% had a documented sudden failure in the past four decades. I used these data to test whether a number of predictors that have been widely applied in previous GLOF assessments are statistically relevant when estimating past GLOF susceptibility. To this end, I set up four Bayesian multi-level logistic regression models, in which I explored the credibility of the predictors lake area, lake-area dynamics, lake elevation, parent-glacier mass balance, and monsoonality. By using a hierarchical approach consisting of two levels, this probabilistic framework also allowed for spatial variability in GLOF susceptibility across the vast study area, which until now had not been considered in studies of this scale. The model results suggest that in the Nyainqentanglha and Eastern Himalayas – regions with strongly negative glacier-mass balances – lakes have been more prone to release GLOFs than in regions with less negative or even stable glacier-mass balances. Similarly, larger lakes in larger catchments had, on average, a higher probability of having had a GLOF in the past four decades. Yet the effects of monsoonality, lake elevation, and lake-area dynamics were more ambiguous. This challenges the credibility of a lake’s rapid growth in surface area as an indicator of a pending outburst; a metric that has been applied in regional GLOF assessments worldwide.
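The core of such a susceptibility analysis can be sketched with a deliberately simplified, non-hierarchical stand-in: a plain logistic regression of a rare binary outcome on standardised lake-level predictors. The synthetic data, the two predictors, their effect sizes, and the roughly 1% event rate below are illustrative assumptions only, not the thesis's Bayesian multi-level model or its inventory data:

```python
import numpy as np

# Toy, non-hierarchical stand-in for a GLOF susceptibility analysis: logistic
# regression of a rare past-outburst outcome on two standardised predictors.
# Synthetic data, effect sizes, and the ~1% event rate are illustrative only.
rng = np.random.default_rng(42)
n = 20_000
log_area = rng.normal(0, 1, n)   # standardised log lake area
mass_bal = rng.normal(0, 1, n)   # standardised parent-glacier mass balance

true_logit = -4.5 + 0.8 * log_area - 0.9 * mass_bal
y = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))   # rare outcome

# Maximum-likelihood fit by plain gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), log_area, mass_bal])
w = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 1.0 * X.T @ (y - p) / n
print(np.round(w, 2))   # negative intercept, positive area, negative balance
```

The recovered signs mirror the qualitative findings reported above: larger lakes and more negative glacier-mass balances are associated with higher outburst probability, while the very negative intercept reflects how rare the events are overall.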
At a local scale, my thesis aims to overcome data scarcity concerning the flow characteristics of the catastrophic May 2012 flood along the Seti Khola, which caused 72 fatalities, as well as of potentially much larger predecessors, which deposited >1 km³ of sediment in the Pokhara valley between the 12th and 14th century CE. To reconstruct peak discharges, flow depths, and flow velocities of the 2012 flood, I mapped the extents of flood sediments from RapidEye satellite imagery and used these as a proxy for inundation limits. To constrain the latter for the Mediaeval events, I utilised outcrops of slackwater deposits in the fills of tributary valleys. Using steady-state hydrodynamic modelling for a wide range of plausible scenarios, from meteorological (1,000 m³ s-1) to cataclysmic outburst floods (600,000 m³ s-1), I assessed the likely initial discharges of the recent and the Mediaeval floods based on the lowest mismatch between sedimentary evidence and simulated flood limits. One-dimensional HEC-RAS simulations suggest that the 2012 flood most likely had a peak discharge of 3,700 m³ s-1 in the upper Seti Khola and attenuated to 500 m³ s-1 when arriving in Pokhara’s suburbs some 15 km downstream.
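Steady-state one-dimensional models such as HEC-RAS rest on open-channel relations like Manning's equation. The back-of-envelope sketch below shows how discharge scales with flow geometry; the channel dimensions, slope, and roughness are purely illustrative values, not surveyed Seti Khola cross-sections:

```python
# Back-of-envelope use of Manning's equation, Q = (1/n) * A * R**(2/3) * S**0.5,
# relating discharge to flow geometry. The channel dimensions, slope, and
# roughness below are illustrative values, not surveyed river cross-sections.
def manning_discharge(width_m, depth_m, slope, n_roughness):
    area = width_m * depth_m                  # rectangular cross-section
    wetted_perimeter = width_m + 2 * depth_m
    hydraulic_radius = area / wetted_perimeter
    return area * hydraulic_radius ** (2 / 3) * slope ** 0.5 / n_roughness

# A steep, 60 m wide gorge reach carrying an 8 m deep flow:
q = manning_discharge(width_m=60, depth_m=8, slope=0.02, n_roughness=0.05)
print(f"{q:.0f} m^3/s")
```

Even this crude estimate lands in the several-thousand m³ s-1 range for a steep gorge reach, which illustrates why constraining flow depth via sedimentary evidence is such an effective lever on reconstructed peak discharge.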
Simulations of flow in two dimensions with peak discharges orders of magnitude higher, performed in ANUGA, show extensive backwater effects in the main tributary valleys. These backwater effects match the locations of slackwater deposits and hence attest to the flood character of the Mediaeval sediment pulses. This thesis provides first quantitative proof of the hypothesis that the latter were linked to earthquake-triggered outbursts of large former lakes in the headwaters of the Seti Khola – producing floods with peak discharges of >50,000 m³ s-1.
Building on this improved understanding of past floods along the Seti Khola, my thesis continues with an analysis of the impacts of potential future outburst floods on land cover, including built-up areas and infrastructure mapped from high-resolution satellite and OpenStreetMap data. HEC-RAS simulations of ten flood scenarios, with peak discharges ranging from 1,000 to 10,000 m³ s-1, show that the relative inundation hazard is highest in Pokhara’s north-western suburbs. There, hydraulic ponding upstream of narrow gorges might locally sustain higher flow depths. Along this reach, moreover, informal settlements and gravel mining activities lie close to the active channel. By tracing the construction dynamics in two of these potentially affected informal settlements on multi-temporal RapidEye, PlanetScope, and Google Earth imagery, I found that exposure increased locally by between threefold and twentyfold in just over a decade (2008 to 2021).
In conclusion, this thesis provides new quantitative insights into the past controls on the susceptibility of glacial lakes to sudden outburst at a regional scale and the flow dynamics of propagating flood waves released by past events at a local scale, which can aid future hazard assessments on transient scales in the Greater Himalayan region. My subsequent exploration of the impacts of potential future outburst floods to exposed infrastructure and (informal) settlements might provide valuable inputs to anticipatory assessments of multiple risks in the Pokhara valley.
As climate change worsens, there is a growing urgency to promote renewable energies and improve their accessibility to society. Solar energy harvesting is of particular importance here. Metal halide perovskite (MHP) solar cells are currently indispensable in research on future solar energy generation. MHPs are crystalline semiconductors of increasing relevance as low-cost, high-performance materials for optoelectronics. Their processing from solution at low temperature enables easy fabrication of thin-film elements such as solar cells, light-emitting diodes, and photodetectors. Understanding the coordination chemistry of MHPs in their precursor solutions would allow control over thin-film crystallization, material properties, and final device performance.
In this work, we elaborate on the key parameters for manipulating the precursor solution, with the long-term objective of enabling systematic process control. We focus on the nanostructural characterization of the initial arrangements of MHPs in the precursor solutions. Small-angle scattering is particularly well suited for measuring nanoparticles in solution. This technique proved valuable for the direct analysis of perovskite precursor solutions at standard processing concentrations without causing radiation damage. We gain insights into the chemical nature of widely used precursor structures such as methylammonium lead iodide (MAPbI3), presenting the first insights into the complex arrangements and interactions within this precursor state. Furthermore, we transfer the preceding results to other, more complex perovskite precursors. The influence of compositional engineering is investigated using the addition of alkali cations as an example. As a result, we propose a detailed working mechanism for how the alkali cations suppress the formation of intermediate phases and improve the quality of the crystalline thin film. In addition, we investigate the crystallization process of a tin-based perovskite composition (FASnI3) under the influence of fluoride chemistry. We show that the frequently used additive tin fluoride (SnF2) selectively binds undesired oxidized tin (Sn(IV)) in the precursor solution. This prevents its incorporation into the crystal structure and thus reduces the defect density of the material. Furthermore, SnF2 leads to a more homogeneous crystal growth process, which results in improved crystal quality of the thin-film material.
In total, this study provides a detailed characterization of the complex system of perovskite precursor chemistry. We thereby cover parameters relevant to future MHP solar cell process control, such as (I) the environmental influence of concentration and temperature, (II) the addition of counter-ions to reduce the diffuse layer surrounding the precursor nanostructures, and (III) the targeted use of additives to selectively eliminate unwanted components and to ensure more homogeneous crystal growth.
Stimuli-promoted in situ formation of hydrogels with thiol/thioester containing peptide precursors
(2022)
Hydrogels are potential synthetic ECM-like substitutes, since they provide functional and structural similarities to soft tissues. They can be prepared by crosslinking macromolecules or by polymerizing suitable precursors. The crosslinks are not necessarily covalent bonds but can also be formed by physical interactions such as π-π interactions, hydrophobic interactions, or H-bonding. On-demand in situ forming hydrogels have garnered increased interest over preformed gels, especially for biomedical applications, due to the relative ease of in vivo delivery and filling of cavities. The thiol-Michael addition reaction provides a straightforward and robust strategy for in situ gel formation, given its fast reaction kinetics and ability to proceed under physiological conditions. Incorporating a trigger function into a crosslinking system becomes even more interesting, since gelling can then be controlled with a stimulus of choice. Using small-molar-mass crosslinker precursors with active groups orthogonal to the thiol-Michael-type electrophile provides the opportunity to implement on-demand in situ crosslinking without compromising the fast reaction kinetics.
It was postulated that short peptide sequences, given the broad range of structure-function relations accessible through their constituent amino acids, can be exploited to realise stimuli-promoted in situ covalent crosslinking and gelation. The advantage of this system over conventional polymer-polymer hydrogel systems is the ability to tune and predict material properties at the molecular level.
The main aim of this work was to develop a simplified and biologically friendly stimuli-promoted in situ crosslinking and hydrogelation system using peptide mimetics as latent crosslinkers. The approach uses a single thiodepsipeptide sequence to achieve separate pH- and enzyme-promoted gelation systems with little modification of the sequence. The realization of this aim required the completion of three milestones.
First, after deciding on the thiol-Michael reaction as an effective in situ crosslinking strategy, a thiodepsipeptide, Ac-Pro-Leu-Gly-SLeu-Leu-Gly-NEtSH (TDP), with an expected propensity towards pH-dependent thiol-thioester exchange (TTE) activation, was proposed as a suitable crosslinker precursor for the pH-promoted gelation system. Prior to the synthesis of the proposed peptide mimetic, knowledge of the thiol-Michael reactivity of the would-be activated thiol moiety SH-Leu, which is internally embedded in the thiodepsipeptide, was required. In line with the pKa requirements for a successful TTE, the reactivity of a more acidic thiol, SH-Phe, was also investigated to aid the selection of the best thiol to incorporate in the thioester-bearing peptide-based crosslinker precursor. Using ‘pseudo’ 2D-NMR investigations, it was found that only reactions involving SH-Leu yielded the expected thiol-Michael product, an observation attributed to the greater steric hindrance of the bulkier SH-Phe. The fast reaction rates and complete acrylate/maleimide conversion obtained with SH-Leu at pH 7.2 and higher allowed the direct elimination of SH-Phe as a potential thiol for the synthesis of the peptide mimetic.
Based on these initial studies, the proposed Ac-Pro-Leu-Gly-SLeu-Leu-Gly-NEtSH was kept unmodified for the pH-promoted gelation system. The subtle difference in pKa values between SH-Leu (the thioester thiol) and the terminal cysteamine thiol should, on theoretical grounds, be sufficient to effect a ‘pseudo’ intramolecular TTE. In polar protic solvents and under basic aqueous conditions, TDP successfully undergoes a ‘pseudo’ intramolecular TTE reaction to yield an α,ω-dithiol tripeptide, HSLeu-Leu-Gly-NEtSH. The pH dependence of thiolate ion generation by the cysteamine thiol provided the needed stimulus (pH) for the overall success of the TTE (activation) and thiol-Michael addition (crosslinking) strategy.
Secondly, with potential biomedical applications in focus, the susceptibility of TDP, like other thioesters, to intermolecular TTE reactions was probed with a group of thiols of varying pKa values, since biological milieus characteristically contain peptide/protein thiols. L-cysteine, a biologically relevant thiol, and methyl thioglycolate, a small-molecular-weight thiol with a relatively similar thiol pKa, both led to an increased concentration of the dithiol crosslinker when reacted with TDP. In the presence of acidic thiols (p-NTP and 4MBA), a decrease in the dithiol concentration was observed, which can be attributed to the inability of the TTE tetrahedral intermediate to dissociate into exchange products and is in line with the pKa requirements for a successful TTE reaction. These results additionally make TDP more attractive and potentially the first crosslinker precursor for applications in biologically relevant media.
Finally, the ability of TDP to promote pH-sensitive in situ gel formation was probed with maleimide-functionalized 4-arm polyethylene glycol polymers in tris-buffered media of varying pH. When a 1:1 thiol:maleimide molar ratio was used, TDP-PEG4MAL hydrogels formed within 3, 12, and 24 hours at pH values of 8.5, 8.0, and 7.5, respectively. However, gelation times of 3, 5, and 30 min were observed for the same pH trend when the thiol:maleimide molar ratio was increased to 2:1.
A direct correlation of thiol content with the G′ of the gels at each pH could also be drawn by comparing gels with a thiol:maleimide ratio of 1:1 to those with a ratio of 2:1. This is supported by the fact that the storage modulus (G′) depends linearly on the crosslinking density of the polymer. The initial G′ values of all gels ranged between 200 and 5000 Pa, which falls within the range of elasticities of certain tissue microenvironments, for example brain tissue (200–1000 Pa) and adipose tissue (2500–3500 Pa).
The knowledge gained in this study on designing and tuning the exchange reaction of thioester-containing peptide mimetics will give those working in the field further insight into the development of new sequences tailored towards specific applications.
TTE substrate design using peptide mimetics, as presented in this work, has revealed interesting new insights relative to the state of the art. Using the results obtained as a reference, the strategy offers the possibility of extending the concept to the controlled delivery of active molecules needed for other robust, high-yielding crosslinking reactions for biomedical applications. Applications of this sequentially coupled functional system could be envisioned, e.g., in the treatment of inflamed tissues of the urinary tract, such as bladder infections, for which pH levels above 7 have been reported. By including cell-adhesion peptide motifs, the hydrogel network formed at this pH could act as a new support layer for the healing of damaged epithelium, as shown in interfacial gel formation experiments using TDP and PEG4MAL droplets.
The versatility of the thiodepsipeptide sequence Ac-Pro-Leu-Gly-SLeu-Leu-Gly (TDPo) was extended to the design and synthesis of an MMP-sensitive 4-arm PEG-TDPo conjugate. The anticipated cleavage of TDPo at the Gly-SLeu bond yields active thiol units for subsequent reaction with orthogonal Michael-acceptor moieties. One advantage of stimuli-promoted in situ crosslinking systems using short peptides should be the ease of designing the required peptide molecules, given the predictability of peptide function from sequence structure. Consequently, functionalisation of a 4-arm PEG core with the collagenase-active TDPo sequence yielded an MMP-sensitive 4-arm thiodepsipeptide-PEG conjugate (PEG4TDPo) substrate.
Cleavage studies using a fluorometric thiol assay in the presence of MMP-2 and MMP-9 confirmed the susceptibility of PEG4TDPo to these enzymes. The time-dependent increase in fluorescence intensity in the presence of the thiol assay signifies the successful cleavage of TDPo at the Gly-SLeu bond, as expected. It was observed that cleavage studies with the fluorometric thiol assay introduce a sigmoidal, non-Michaelis-Menten-type kinetic profile, making it difficult to accurately determine the enzyme kinetic parameters kcat and KM.
Gelation studies with PEG4MAL at 10 wt% concentration revealed faster gelation with MMP-2 than with MMP-9, with gelation times of 28 and 40 min, respectively. Possible contributions from hydrolytic cleavage of PEG4TDPo resulted in the gelation of PEG4MAL blank samples, but only after 60 minutes of reaction. From theoretical considerations, the simultaneous gelation reaction would be expected to impact the enzymatic cleavage more negatively than the hydrolytic cleavage. Quantifying the exact contribution of hydrolytic cleavage of PEG4TDPo would, however, require additional studies.
In summary, this new and simplified in situ crosslinking system using peptide-based crosslinker precursors with tuneable properties exhibited in situ crosslinking and gelation kinetics comparable to those reported for already active dithiols. The advantageous on-demand functionality associated with its pH-sensitivity and physiological compatibility makes it a strong candidate for further research where biomedical applications in general, and on-demand material synthesis in particular, are concerned.
Results from the MMP-promoted gelation system unveil a simple but unexplored approach for the in situ synthesis of covalently crosslinked soft materials, which could lead to an alternative pathway for addressing cancer metastasis by using MMP overexpression as a trigger. This goal has so far not been reached with MMP inhibitors, despite extensive work in this regard.
X-rays are integral to furthering our knowledge of exoplanetary systems. In this work we discuss the use of X-ray observations to understand star-planet interactions, mass-loss rates of an exoplanet’s atmosphere and the study of an exoplanet’s atmospheric components using future X-ray spectroscopy.
The low-mass star GJ 1151 was reported to display variable low-frequency radio emission, an indication of coronal star-planet interactions with an unseen exoplanet. In chapter 5 we report the first X-ray detection of GJ 1151’s corona based on XMM-Newton data. Averaged over the observation, we detect the star with a low coronal temperature of 1.6 MK and an X-ray luminosity of LX = 5.5 × 10²⁶ erg/s. This is compatible with the coronal assumptions for a sub-Alfvénic star-planet interaction origin of the observed radio signals from this star.
In chapter 6, we aim to characterise the high-energy environment of known exoplanets and estimate their mass-loss rates. This work is based on the soft X-ray instrument on board the Spectrum Roentgen Gamma (SRG) mission, eROSITA, along with archival data from ROSAT, XMM-Newton, and Chandra. We use these four X-ray source catalogues to derive X-ray luminosities of exoplanet host stars in the 0.2-2 keV energy band. A catalogue of the mass-loss rates of 287 exoplanets is presented, with 96 of these planets characterised for the first time using new eROSITA detections. Of these first-time detections, 14 are of transiting exoplanets that undergo irradiation from their host stars at a level known to cause observable evaporation signals in other systems, making them suitable for follow-up observations.
In the next generation of space observatories, X-ray transmission spectroscopy of an exoplanet’s atmosphere will be possible, allowing for a detailed look into the atmospheric composition of these planets. In chapter 7, we model sample spectra using a toy model of an exoplanetary atmosphere to predict what exoplanet transit observations with future X-ray missions such as Athena will look like. We then estimate the observable X-ray transmission spectrum for a typical Hot Jupiter-type exoplanet, giving us insights into the advances in X-ray observations of exoplanets in the decades to come.
In this dissertation, the oxygen in the scaffold of the [1,3]-dioxolo[4.5-f]benzodioxole fluorescent dyes (DBD fluorescent dyes) was completely replaced by sulfur, and from this a new class of fluorescent dyes was developed, the benzo[1,2-d:4,5-d']bis([1,3]dithiole) fluorophores (S4-DBD fluorophores). In total, nine of the particularly interesting difunctionalized representatives were synthesized, differing in their electron-withdrawing groups and in their arrangement.
The exchange of oxygen for sulfur led to partly striking changes in the fluorescence parameters, such as a decrease in fluorescence quantum yields and lifetimes, but also a marked red shift of the absorption and emission wavelengths with large Stokes shifts. The S4-DBD fluorophores are thus a valuable addition to the DBD dyes.
The decrease in lifetimes and quantum yields could be attributed to a high population of the triplet state, which is caused by the enhanced spin-orbit coupling of sulfur. Together with the physical chemistry group at the University of Potsdam, the photophysical processes were also elucidated by transient absorption spectroscopy (TAS).
A strategy for functionalizing the S4-DBD dyes at the thioacetal scaffold was developed, allowing alcohol, propargyl, azide, NHS-ester, carboxylic acid, maleimide, and tosyl groups to be attached to S4-DBD dialdehydes.
In addition, molecular rods based on sulfur oligo-spiro-ketals (SOSKs), in which oxygen was replaced by sulfur, were investigated. The syntheses of the solubility-mediating TER sleeve and of tetrathiapentaerythritol as a basic building block were significantly improved. From these, a simple SOSK polymer was prepared. Further approaches to constructing a rod remain to be investigated. In initial experiments to build an S-OSK rod, the dithiocarbonate group proved to be a potentially suitable protecting group for tetrathiapentaerythritol.
Proteins are centrally involved in virtually all processes in living cells. Proteins are also used in many ways in biotechnology.
A protein consists of a chain of amino acids. Frequently, several of these chains assemble into larger structures and functional units, so-called protein complexes. It was recently shown that protein complex formation can already take place during protein biosynthesis (co-translationally) and does not always occur only afterwards (post-translationally). Since misassembly of proteins leads to loss of function and adverse effects, precise and reliable protein complex formation is essential both for cellular processes and for biotechnological applications. Experimental methods can determine, among other things, the stoichiometry and structure of protein complexes, but so far not the dynamics of complex formation on different time scales. Fundamental mechanisms of protein complex formation are therefore not yet fully understood. The computational modelling of protein complex formation presented here, building on experimental findings, allows a comprehensive analysis of the influence of physico-chemical parameters on the assembly process. The models represent the experimental systems of the collaboration partners (Bar-Ziv, Weizmann Institute, Israel; Bukau and Kramer, University of Heidelberg) as realistically as possible, in order to study the assembly of protein complexes both in a quasi-two-dimensional synthetic expression system (in vitro) and in the bacterium Escherichia coli (in vivo). Using a simplified expression system, in which the proteins can bind only to the chip surface but not to each other, the theoretical model is parametrized. In this simplified in vitro system, the efficiency of complex formation passes through three regimes: a binding-dominated regime, a mixed regime, and a production-dominated regime. The efficiency reaches its maximum shortly after the transition from the binding-dominated to the mixed regime and then decreases monotonically. In both the non-simplified in vitro system and the in vivo system, two competing assembly pathways coexist: in the in vitro system, complex formation occurs either spontaneously in aqueous solution (solution assembly) or in a defined sequence of steps on the chip surface (surface assembly); in the in vivo system, co- and post-translational complex formation compete. It turns out that the dominance of the assembly pathways in the in vitro system is time-dependent and can be influenced, among other things, by the limitation and strength of the binding sites on the chip surface. In the in vivo system, the spatial distance between the synthesis sites of the two protein components influences complex formation only if the subunits degrade quickly. In that case, co-translational assembly clearly dominates even on short time scales, whereas with stable subunits there is a shift from the dominance of post-translational assembly to a slight dominance of co-translational assembly.
In addition to the dynamics, the in silico models can also represent the localization of complex formation and binding, which allows the theoretical predictions to be compared with experimental data and thus the models to be validated. The in silico approach presented here complements the experimental methods and thereby allows their results to be interpreted and new insights to be derived from them.
Neural conversation models aim to predict appropriate contributions to a (given) conversation using neural networks trained on dialogue data. A specific strand focuses on non-goal-driven dialogues, first proposed by Ritter et al. (2011), who investigated the task of transforming an utterance into an appropriate reply. This strand then evolved into dialogue-system approaches using long dialogue histories and additional background context. Contributing meaningfully and appropriately to a conversation is a complex task, and research in this area has therefore been very diverse: Serban et al. (2016), for example, looked into utilizing variable-length dialogue histories, Zhang et al. (2018) added additional context to the dialogue history, Wolf et al. (2019) proposed a model based on pre-trained Self-Attention neural networks (Vaswani et al., 2017), and Dinan et al. (2021) investigated safety issues of these approaches. This trend can be seen as a transformation from trying to somehow carry on a conversation to generating appropriate replies in a controlled and reliable way.
In this thesis, we first elaborate on the meaning of appropriateness in the context of neural conversation models by drawing inspiration from the Cooperative Principle (Grice, 1975). We define what an appropriate contribution has to be by operationalizing these maxims as demands on conversation models: being fluent, informative, consistent with the given context, coherent, and compliant with social norms. We then identify different targets (or intervention points) for achieving conversational appropriateness by reviewing recent research in the field.
In this thesis, we investigate the aspect of consistency with context in greater detail, as one facet of our interpretation of appropriateness.
During this research, we developed a new context-based dialogue dataset (KOMODIS) that combines factual and opinionated context with dialogues. The KOMODIS dataset is publicly available, and we use the data in this thesis to gather new insights into context-augmented dialogue generation.
We further introduce a new way of encoding context within Self-Attention-based neural networks. To that end, we elaborate on the space-complexity issue posed by knowledge graphs and propose a concise encoding strategy for structured context, inspired by graph neural networks (Gilmer et al., 2017), to reduce the space complexity of the additional context. We discuss the limitations of context augmentation for neural conversation models, explore the characteristics of knowledge graphs, and explain how we create and augment knowledge graphs for our experiments.
Lastly, we analyze the potential of reinforcement and transfer learning to improve the context-consistency of neural conversation models. We find that current reward functions need to be more precise to realize the potential of reinforcement learning, and that sequential transfer learning can improve the subjective quality of generated dialogues.
Today's business organizations want to be more efficient and are constantly evolving to find ways to retain talent. It is well established that visionary leadership plays a vital role in organizational success and contributes to a better working environment. This study aims to determine the effect of visionary leadership on employees' perceived job satisfaction. Specifically, it investigates whether the mediators meaningfulness at work and commitment to the leader affect this relationship. I draw on job demands-resources theory to explain the overarching model used in this study and on broaden-and-build theory to motivate the use of the mediators.
To test the hypotheses, evidence was collected in a multi-source, time-lagged field study of 95 leader-follower dyads. The data were collected in a three-wave design, with each survey administered one month apart. Data on employee perceptions of visionary leadership were collected at T1, data for both mediators at T2, and employee perceptions of job satisfaction at T3. The findings show that meaningfulness at work and commitment to the leader play positive intervening roles (in the form of a chain) in the indirect influence of visionary leadership on employee perceptions of job satisfaction.
This research contributes to the literature and theory by, first, broadening the existing knowledge of the effects of visionary leadership on employees. Second, it contributes to the literature on the constructs meaningfulness at work, commitment to the leader, and job satisfaction. Third, it sheds light on the mediation mechanism linking the study variables in line with the proposed model. Fourth, it integrates two theories, job demands-resources theory and broaden-and-build theory, providing further evidence for both. Additionally, the study provides practical implications for business leaders and HR practitioners.
Overall, my study discusses the potential of visionary leadership behavior to elevate employee outcomes. The study aligns with previous research and answers several calls for further research on visionary leadership, job satisfaction, and mediation mechanisms involving meaningfulness at work and commitment to the leader.
East and South
(2022)
What is 'Europe' in academic discourse? While Europe tends to be used as shorthand, often interchangeably with the 'West', neither the 'West' nor 'Europe' is a homogeneous space. Though postcolonial studies have long been debunking Eurocentrism in its multiple guises, work remains to be done in fully comprehending how its imaginations and discursive legacies conceive the figure of Europe, as not all who live on European soil are understood as equally 'European'. This volume addresses the immediate need to rethink the axes of postcolonial cultural production, to disarticulate Eurocentrism, to recognise Europe as a more diverse, plural and fluid space, and to draw out cultural exchanges and dialogues within the Global South. Through analyses of literary texts from East-Central Europe and beyond, this volume sheds light on alternative literary cartographies: the multiplicity of Europes and ways of being European that exist both as viewed from the different geographies of the Global South and within the continent itself. Covering a wide spatial and temporal terrain in postcolonial and European cultural production, this volume will be of great interest to scholars and researchers of literature and literary criticism, cultural studies, postcolonial studies, Global South studies and European studies.
Allergic contact dermatitis is an immunologically mediated skin disease with a high and still rising prevalence, particularly in Western industrialized nations. It is a type IV hypersensitivity reaction that manifests after allergen contact as itching, redness, blistering, and peeling of the skin. Numerous xenobiotics have the potential to trigger contact allergies, including preservatives, drugs, fragrances, and chemicals. The most effective measure for containing the disease is exposure prophylaxis, i.e. avoiding contact with the respective substances. This in turn requires knowledge of the sensitizing potential of a substance, the determination of which is therefore of high toxicological relevance. For this purpose, test guidelines published by the OECD exist, based on correspondingly validated test methods. For a long time, the gold standard for testing skin-sensitizing potential was the murine local lymph node assay. Since the 7th amendment to the EU Cosmetics Directive, which prohibits animal testing for cosmetics and their ingredients, alternative methods have increasingly been implemented in the OECD test guidelines. However, the existing in vitro methods are of limited significance on their own, since they only represent singular mechanisms in the development of a contact allergy. The development of test methods that take several of these key events into account therefore appears to point the way forward. A promising approach is the loose-fit coculture-based sensitisation assay (LCSA), a coculture of primary keratinocytes and PBMC. When coculturing immune cells with other cell types, however, the question arises to what extent the use of cells from the same donors (autologous coculture) or from different donors (allogeneic coculture) has an influence. To this end, skin cells were isolated donor-specifically from plucked hair follicles in this work, and the LCSA with the generated HFDK was compared in autologous and allogeneic setups. In addition, a comparison was made in the LCSA between the use of HFDK and of NHK, which were isolated from human foreskin. No significant differences emerged between autologous and allogeneic cocultures or between the use of HFDK and NHK. The use of allogeneic cells from anonymous donor material, and of keratinocytes from different sources, thus appears readily possible within the LCSA. However, some of the contact allergens tested, including DNCB and NiCl2, proved problematic in the LCSA and could not be satisfactorily detected as sensitizers. Therefore, an optimization of the coculture was pursued using ex vivo differentiated Langerhans cells (MoLC), which are a better model of primary epidermal Langerhans cells than the dendritic cells from the LCSA. In addition, further factors influencing the success of the coculture, such as the type and composition of the medium and the coculture duration, were investigated and adjusted. The finally established coculture protocol led to a substantially increased expression of CD207 (langerin) on the MoLC, indicating an effective interaction between skin and immune cells in the coculture. Furthermore, in contrast to the LCSA, DNCB and NiCl2 could be clearly identified as contact allergens by using the costimulatory molecule CD86 and the maturation marker CD83 as readout parameters. The investigations of the coculture of MoLC and HFDK were each carried out comparatively in autologous and allogeneic setups.
As with the LCSA, however, no significant differences occurred here either, neither in the expression of characterization and activation markers on MoLC nor in cytokine secretion into the cell culture supernatant. The indications from numerous mouse-model studies that cells of the innate immune system are capable of recognizing and being activated by allogeneic cells or tissues were accordingly not confirmed in this work. For this reason, CD4+ T lymphocytes, the effector cells of the adaptive immune system, were finally integrated into the coculture of MoLC and autologous or allogeneic HFDK. Surprisingly, no increased activation occurred in allogeneic coculture compared with autologous coculture here either. The use of autologous primary cells does not appear to be necessary within the methods tested here, which should facilitate the validation of cocultures and their implementation in the OECD test guidelines. Finally, a coculture of primary skin and immune cells was also carried out in a 3D full-thickness skin model, with autologous MoLC to be integrated into the epidermal equivalents of the corresponding models. Although the skin models created using autologous hair-follicle-derived keratinocytes and fibroblasts showed satisfactory differentiation and stratification, the incorporation of the MoLC proved problematic and could not be achieved within this work.
Since its establishment, the Defence Committee of the German Bundestag has been engaged in rational and emotional debate with parliament and the public. In his long-term analysis, Wolfgang Geist examines the committee's changing position within the Bundestag and vis-à-vis its parliamentary groups under shifting political and social conditions. This makes clear what role the committee played in the security policy of the Federal Republic, including in its special capacity as a committee of inquiry, and what significance its personnel composition and individual political actors held. At the same time, he questions the catchword "Parlamentsarmee" (parliamentary army).
The Jewish cemetery at the Pfingstberg in Potsdam was laid out in 1743 and used continuously until the Nazi era. Until the beginning of the 21st century there were occasional burials connected to Potsdam's old Jewish community, and several memorials were erected. With more than 500 historic graves, its ensemble of cemetery buildings, and its landscape architecture, the site today belongs to the UNESCO World Heritage.
Part 1 of Anke Geißler-Grünberg's study is devoted to the history of the cemetery. After a look at the role of Potsdam's Jewish community as the cemetery's owner, a detailed account of the history of this "Good Place" follows, drawing on extensive archival material. An examination of all grave monuments for their various design features visualizes and reconstructs the changes in the surviving sepulchral culture. Finally, the focus turns to the treatment of the cemetery as a site of remembrance.
Part 2 provides documentation of 370 graves. To depict the cemetery in its entirety, the partial documentation from 1992 covering the 158 gravestones of the entire oldest burial field was added. With more than 1,000 photographs, a unique testimony of Brandenburg's Jews is documented here.
An important goal in biotechnology and (bio-) medical research is the isolation of single cells from a heterogeneous cell population. These specialised cells are of great interest for bioproduction, diagnostics, drug development, (cancer) therapy and research. To tackle emerging questions, an ever finer differentiation between target cells and non-target cells is required. This precise differentiation is a challenge for a growing number of available methods.
Since the physiological properties of cells are closely linked to their morphology, it is beneficial to include their appearance in the sorting decision. For established methods, this is a parameter that cannot be addressed, which calls for new methods for the identification and isolation of target cells. Consequently, a variety of new flow-based methods utilising 2D imaging data to identify target cells within a sample have been developed and presented in recent years. As these methods aim for high throughput, the devices developed typically require highly complex fluid handling techniques, making them expensive while offering limited image quality.
In this work, a new continuous flow system for image-based cell sorting was developed that uses dielectrophoresis to precisely handle cells in a microchannel. Dielectrophoretic forces are exerted by inhomogeneous alternating electric fields on polarisable particles (here: cells). In the present system, the electric fields can be switched on and off precisely and quickly by a signal generator. In addition to the resulting simple and effective cell handling, the system is characterised by the outstanding quality of the image data generated and its compatibility with standard microscopes. These aspects result in low complexity, making it both affordable and user-friendly.
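The time-averaged DEP force on a spherical particle is commonly written as <F> = 2π·ε_m·r³·Re[K(ω)]·∇|E_rms|², where K(ω) is the Clausius–Mossotti factor comparing the complex permittivities of particle and medium. The following Python sketch evaluates this textbook relation with illustrative parameter values (the numbers are assumptions, not taken from this thesis):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity in F/m

def complex_permittivity(eps_r, sigma, omega):
    """Complex permittivity eps* = eps - j*sigma/omega of a lossy dielectric."""
    return eps_r * EPS0 - 1j * sigma / omega

def clausius_mossotti(eps_p, eps_m):
    """K = (eps_p* - eps_m*) / (eps_p* + 2*eps_m*)."""
    return (eps_p - eps_m) / (eps_p + 2 * eps_m)

def dep_force_magnitude(radius, eps_m_real, re_k, grad_e_sq):
    """Time-averaged DEP force: <F> = 2*pi*eps_m*r^3*Re[K]*grad|E_rms|^2."""
    return 2 * math.pi * eps_m_real * radius**3 * re_k * grad_e_sq

# Illustrative parameters: a conductive cell interior suspended in a
# low-conductivity buffer, field frequency 1 MHz (all values assumed).
omega = 2 * math.pi * 1e6
cell = complex_permittivity(60, 0.5, omega)
medium = complex_permittivity(78, 0.01, omega)
k = clausius_mossotti(cell, medium)
# Re[K] > 0 means positive DEP: the particle is drawn toward field maxima.
```

Re[K] is bounded between -0.5 and 1; its sign determines whether a particle moves toward or away from the field maxima near the electrodes, which is what makes switchable inhomogeneous fields usable as a handling mechanism.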
With the developed cell sorting system, cells could be sorted reliably and efficiently according to their cytosolic staining as well as morphological properties at different optical magnifications. The achieved purity of the target cell population was up to 95% and about 85% of the sorted cells could be recovered from the system. Good agreement was achieved between the results obtained and theoretical considerations. The achieved throughput of the system was up to 12,000 cells per hour. Cell viability studies indicated a high biocompatibility of the system.
The results presented demonstrate the potential of image-based cell sorting using dielectrophoresis. The outstanding image quality and highly precise yet gentle handling of the cells set the system apart from other technologies. This results in enormous potential for processing valuable and sensitive cell samples.
Digital transformation (DT) has not only been a major challenge in recent years; it is also expected to have an enormous impact on our society and economy in the coming decade. On the one hand, digital technologies have emerged that diffuse into and shape our private and professional lives. On the other hand, digital platforms have leveraged the potential of digital technologies to provide new business models. These dynamics have a massive effect on individuals, companies, and entire ecosystems. Digital technologies and platforms have changed the way people consume and interact with each other. Moreover, they offer companies new opportunities to conduct their business in terms of value creation (e.g., business processes), value proposition (e.g., business models), or customer interaction (e.g., communication channels), i.e., the three dimensions of DT. However, they can also become a threat to a company's competitiveness or even survival. Ultimately, the emergence, diffusion, and employment of digital technologies and platforms have the potential to transform entire markets and ecosystems.
Against this background, IS research has explored and theorized the phenomena in the context of DT in the past decade, but not to their full extent. This is not surprising given the complexity and pervasiveness of DT, which still requires far more research to understand DT and its interdependencies, in its entirety and in greater detail, particularly from the IS perspective at the confluence of technology, economy, and society. Consequently, the IS research discipline has identified and emphasized several relevant research gaps for exploring and understanding DT, concerning empirical data and theories as well as knowledge of the dynamic and transformative capabilities of digital technologies and platforms for both organizations and entire industries.
Hence, this thesis aims to address these research gaps on the IS research agenda and consists of two streams. The first stream comprises four papers that investigate the impact of digital technologies on organizations. In particular, these papers study the effects of new technologies on firms (paper II.1) and their innovative capabilities (II.2), the nature and characteristics of data-driven business models (II.3), and current developments in research and practice regarding on-demand healthcare (II.4). The papers thus provide novel insights into the dynamic capabilities of digital technologies along the three dimensions of DT. Furthermore, they offer companies opportunities to systematically explore, employ, and evaluate digital technologies in order to modify or redesign their organizations or business models.
The second stream comprises three papers that explore and theorize the impact of digital platforms on traditional companies, markets, and the economy and society at large. Paper III.1 examines the implications of the emergence and diffusion of multi-sided platforms for the business of traditional insurance companies, particularly in terms of value creation, value proposition, and customer interaction. Paper III.2 approaches the platform impact more holistically and investigates how the ongoing digital transformation and "platformization" in healthcare are lastingly transforming value creation in the healthcare market. Paper III.3 moves from the level of single businesses or markets to the regulatory problems that the platform economy poses for economy and society, and proposes appropriate regulatory approaches for addressing them. Hence, these papers offer new insights into the transformative capabilities of digital platforms for incumbent companies in particular and entire ecosystems in general.
Altogether, this thesis contributes to the understanding of the impact of DT on organizations and markets by conducting multiple case study analyses that are systematically reflected against the current state of research. On this empirical basis, the thesis also provides conceptual models, taxonomies, and frameworks that help describe, explain, or predict the impact of digital technologies and digital platforms on companies, markets, and the economy or society at large from an interdisciplinary viewpoint.
Pannexin 1
(2022)
Hypoxic pulmonary vasoconstriction (HPV) is an active physiological response to alveolar hypoxia that redirects pulmonary blood flow from poorly ventilated areas to better oxygenated lung regions in order to optimize oxygen supply. However, the signaling pathways underlying this pulmonary vascular response remain under investigation. In the present study I investigated the functional relevance of Pannexin 1 (Panx1)-mediated ATP release in hypoxic pulmonary vasoconstriction and chronic hypoxic pulmonary hypertension using murine isolated perfused lungs, chronically hypoxic mice, and pulmonary artery smooth muscle cell culture. In isolated mouse lungs, the switch to hypoxic gas induced a marked increase in pulmonary artery pressure. Pharmacological inhibition of Panx1 using probenecid, the Panx1-specific inhibitory peptide (10Panx1) or spironolactone, as well as genetic deletion of Panx1 in smooth muscle cells, diminished hypoxic pulmonary vasoconstriction in isolated perfused mouse lungs. Fura-2 imaging revealed a reduced Ca2+ response to hypoxia in pulmonary artery smooth muscle cells treated with spironolactone or 10Panx1. Although these findings suggested an important role of Panx1 in HPV, neither smooth muscle cell- nor endothelial cell-specific genetic deletion of Panx1 prevented the development of pulmonary hypertension in chronically hypoxic mice. Surprisingly, hypoxia did not induce ATP release, and inhibition of purinergic receptors or ATP degradation by ATPase failed to decrease the pulmonary vasoconstriction response to hypoxia in isolated perfused mouse lungs. However, Panx1 antagonism as well as TRPV4 inhibition prevented the hypoxia-induced increase in intracellular Ca2+ concentration in pulmonary artery smooth muscle cells in an additive manner, suggesting that Panx1 might modulate intracellular Ca2+ signaling independently of the ATP-P2-TRPV4 signaling axis.
In line with this assumption, overexpression of Panx1 in HeLa cells increased intracellular Ca2+ concentrations in response to acute hypoxia. Conclusion: In this study I identify Panx1 as a novel regulator of HPV. Yet, the role of Panx1 was not attributable to the release of ATP and downstream P2 signaling pathways or to the activation of TRPV4, but rather relates to a role of Panx1 as a direct or indirect modulator of the Ca2+ response to hypoxia in PASMCs. Genetic deletion of Panx1 did not influence the development of chronic hypoxic pulmonary hypertension in mice.
Vermögen vererben
(2022)
The political regulation of wealth transfer and individual inheritance arrangements in the second half of the 20th century.
Bequeathing wealth stabilizes the social order. Inheritance arrangements can perpetuate social inequality into the future or lead to the disappointment of passed-over family members. Because inheritance touches on notions of social justice and family, its regulation is politically highly contested. Although the wealth bequeathed each year has reached ever new record highs in recent years, the prehistory of this current development has so far hardly been researched.
Ronny Grundig examines the transformation of wealth inheritance from the end of the Second World War to the end of the 1980s. He looks at political regulation, the practices of bequeathing, and the appropriation of the inheritance by the surviving relatives. The author analyzes tax avoidance by the wealthy as well as conflicts among heirs. He also shows the changes in couple and family relationships surrounding inheritance, which are reflected in wills and influence the distribution of the estates left behind.
Identity management is at the forefront of applications' security posture. It separates the unauthorised user from the legitimate individual. Identity management models have evolved from the isolated to the centralised paradigm and on to identity federations. Within this advancement, the identity provider emerged as a trusted third party that holds a powerful position. Allen postulated the novel self-sovereign identity paradigm to establish a new balance. Thus, extensive research is required to comprehend its virtues and limitations. Analysing the new paradigm, we first investigate the blockchain-based self-sovereign identity concept structurally. Moreover, we examine trust requirements in this context by reference to patterns. These patterns comprise major entities linked by a decentralised identity provider. Compared with the traditional models, we conclude that trust in credential management and authentication is removed. Trust-enhancing attribute aggregation based on multiple attribute providers provokes a further trust shift. Subsequently, we formalise attribute assurance trust modelling in a metaframework. It encompasses the attestation and trust network as well as the trust decision process, including the trust function, as central components. A secure attribute assurance trust model depends on the security of the trust function. The trust function should consider high trust values and several attribute authorities. Furthermore, we evaluate classification, conceptual study, practical analysis and simulation as assessment strategies for trust models. For realising trust-enhancing attribute aggregation, we propose a probabilistic approach. The method exerts the principal characteristics of correctness and validity. These values are combined for one provider and subsequently for multiple issuers. We embed this trust function in a model within the self-sovereign identity ecosystem.
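One simple way to make such a probabilistic combination concrete (a hypothetical noisy-OR illustration, not the exact trust function proposed in the thesis) is to treat each provider's trust value as an independent probability of a correct attestation:

```python
from math import prod

def combined_assurance(trust_values):
    """Noisy-OR aggregation: probability that at least one of several
    independent attribute providers has issued a trustworthy attestation."""
    for t in trust_values:
        if not 0.0 <= t <= 1.0:
            raise ValueError("trust values must lie in [0, 1]")
    # All providers fail only if every single one fails.
    return 1.0 - prod(1.0 - t for t in trust_values)

# Two moderately trusted issuers exceed either one alone:
single = combined_assurance([0.8])       # 0.8
double = combined_assurance([0.8, 0.8])  # 0.96
```

Aggregation of this kind is monotone: adding an issuer can only raise the combined value, which captures the "trust-enhancing" effect of drawing on multiple attribute providers.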
To apply the trust function in practice and to solve several challenges that adopting self-sovereign identity solutions creates for the service provider, we conceptualise and implement an identity broker. The mediator applies a component-based architecture to abstract from any single solution. Standard identity and access management protocols form the interface for applications. We conclude that using the broker on the service provider's side does not undermine self-sovereign principles but fosters the advancement of the ecosystem. In a case study, the identity broker is applied to sample web applications with distinct attribute requirements to showcase its usefulness for authentication and attribute-based access control.
Abzug unter Beobachtung
(2022)
For more than four decades, the armed forces and military intelligence services of the NATO states observed the Soviet troops in the GDR. In the Federal Republic of Germany, the Bundesnachrichtendienst (BND) was responsible for military foreign intelligence, using intelligence means and methods. The Bundeswehr, by contrast, conducted tactical signals and electronic intelligence and above all intercepted the radio traffic of the "Group of Soviet Forces in Germany" (GSSD). With the establishment of a central agency for military intelligence, the Amt für Nachrichtenwesen der Bundeswehr, the Federal Ministry of Defence consolidated and expanded its analytical capacities in the 1980s. The BND's monopoly on military foreign intelligence was thereby increasingly called into question by the Bundeswehr.
After German reunification on 3 October 1990, more than 300,000 Soviet soldiers were still stationed on German territory. The GSSD, renamed the Western Group of Forces (WGT) in 1989, was to withdraw completely by 1994, as stipulated by the Two Plus Four Treaty. The treaty also prohibited the three Western powers from any military activity in the new federal states. The Western powers' military liaison missions, until then indispensable for military intelligence, had to cease their activities. But what happened to this "Allied legacy"? Who on the German side took over intelligence on the Soviet troops, and who monitored the withdrawal?
The study examines the role of the Bundeswehr and the BND in the withdrawal of the WGT between 1990 and 1994, asking about cooperation and competition between armed forces and intelligence services. What military and intelligence means and capabilities did the federal government provide to manage the withdrawal after the Western military liaison missions were dissolved? How did the requirements for the BND's military foreign intelligence change? To what extent did competition and cooperation between the Bundeswehr and the BND continue during the withdrawal? What role did the former Western powers play? The study sees itself as a contribution not only to military history but also to the history of the German intelligence services.
The development of novel programmable materials aiming to control friction in real time holds the potential to enable innovative lubrication solutions that reduce wear and energy losses. This work describes the integration of light-responsiveness into two lubricating materials: silicone oils and polymer brush surfaces.
The first part focuses on 9-anthracene-ester-terminated polydimethylsiloxanes (PDMS-A) and, in particular, on the variability of rheological properties and the implications that arise with UV light as an external trigger. The rheometer setup used contains a UV-transparent quartz plate, which enables irradiation and simultaneous measurement of the dynamic moduli. UV-A radiation (354 nm) triggers the cycloaddition reaction between the terminal functionalities of linear PDMS, resulting in chain extension. The newly formed anthracene dimers cleave under UV-C radiation (254 nm) or at elevated temperatures (T > 130 °C). Sequential UV-A radiation and thermal reprogramming over three cycles demonstrate high conversions and reproducible programming of the rheological properties. In contrast, the photochemical back reaction by UV-C is incomplete and can only partially restore the initial rheological properties. The dynamic moduli increase with each photochemical programming cycle, presumably resulting from a chain-segment rearrangement caused by the repeated partial photocleavage and subsequent chain-length-dependent dimerization. In addition, prolonged irradiation causes photooxidative degradation, which damages the photo-responsive functions and consequently reduces the programming range. The absence of oxygen, however, reduces these undesired side reactions. Anthracene-functionalized PDMS and native PDMS mix depending on the anthracene ester content and chain length, respectively, and allow fine-tuning of the programmable rheological properties. The work shows the influence of mixing conditions during the photoprogramming step on the rheological properties, indicating that material property gradients induced by light attenuation along the beam have to be considered. Accordingly, thin lubricant films are suggested as a potential application for light-programmable silicone fluids.
The second part compares strategies for grafting spiropyran (SP)-containing copolymer brushes from Si wafers and evaluates the light-responsiveness of the surfaces. Preliminary experiments on the kinetics of the thermally initiated RAFT copolymerization of 2-hydroxyethyl acrylate (HEA) and spiropyran acrylate (SPA) in solution show, first, a strong retardation by SP and, second, the dependence of SPA polymerization on light. Surprisingly, the copolymerization of SPA is inhibited in the dark. These findings help improve the synthesis of polar, spiropyran-containing copolymers. The comparison between initiator systems for the grafting-from approach indicates that PET-RAFT is superior to thermally initiated RAFT, suggesting a more efficient initiation of the surface-bound CTA by light. Surface-initiated polymerization via PET-RAFT with an initiator system of Eosin Y (EoY) and ascorbic acid (AscA) facilitates copolymer synthesis from HEA and 5-25 mol% SPA. The resulting polymer film, with a thickness of a few nanometers, was detected by atomic force microscopy (AFM) and ellipsometry. Water contact angle (CA) measurements demonstrate photo-switchable surface polarity, which is attributed to the photoisomerization between the non-polar spiropyran and the zwitterionic merocyanine isomer. Furthermore, the obtained spiropyran brushes show potential for further studies on light-programmable properties. In this context, it would be interesting to investigate whether swollen spiropyran-containing polymers change their configuration, and thus their film thickness, under the influence of light. In addition, further experiments using an AFM or microtribometer should evaluate whether light-programmable solvation enables a change in the frictional properties between polymer brush surfaces.
Knowledge-intensive business processes are flexible and data-driven. Therefore, traditional process modeling languages do not meet their requirements: These languages focus on highly structured processes in which data plays a minor role. As a result, process-oriented information systems fail to assist knowledge workers on executing their processes. We propose a novel case management approach that combines flexible activity-centric processes with data models, and we provide a joint semantics using colored Petri nets. The approach is suited to model, verify, and enact knowledge-intensive processes and can aid the development of information systems that support knowledge work.
Knowledge-intensive processes are human-centered, multi-variant, and data-driven. Typical domains include healthcare, insurance, and law. The processes cannot be fully modeled, since the underlying knowledge is too vast and changes too quickly. Thus, models for knowledge-intensive processes are necessarily underspecified. In fact, a case emerges gradually as knowledge workers make informed decisions. Knowledge work imposes special requirements on modeling and managing the respective processes. These include flexibility during design and execution, ad-hoc adaptation to unforeseen situations, and the integration of behavior and data. However, the predominantly used process modeling languages (e.g., BPMN) are unsuited to this task.
Therefore, novel modeling languages have been proposed. Many of them focus on activities' data requirements and declarative constraints rather than imperative control flow. Fragment-Based Case Management, for example, combines activity-centric imperative process fragments with declarative data requirements. At runtime, fragments can be combined dynamically, and new ones can be added. Yet, no integrated semantics for flexible activity-centric process models and data models exists.
In this thesis, Wickr, a novel case modeling approach extending fragment-based Case Management, is presented. It supports batch processing of data, sharing data among cases, and a full-fledged data model with associations and multiplicity constraints. We develop a translational semantics for Wickr targeting (colored) Petri nets. The semantics assert that a case adheres to the constraints in both the process fragments and the data models. Among other things, multiplicity constraints must not be violated. Furthermore, the semantics are extended to multiple cases that operate on shared data. Wickr shows that the data structure may reflect process behavior and vice versa. Based on its semantics, prototypes for executing and verifying case models showcase the feasibility of Wickr. Its applicability to knowledge-intensive and to data-centric processes is evaluated using well-known requirements from related work.
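The firing rule that such a translational semantics ultimately targets can be sketched for plain (uncolored) place/transition nets. The sketch below deliberately omits the colors, data constraints, and multiplicity checks that Wickr's actual colored-net semantics adds, and the fragment names are invented for illustration:

```python
def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(place, 0) >= n for place, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from input places, produce tokens
    in output places. Returns a new marking; the old one is untouched."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    new = dict(marking)
    for place, n in pre.items():
        new[place] -= n
    for place, n in post.items():
        new[place] = new.get(place, 0) + n
    return new

# A two-step toy fragment: create case -> collect data -> decide
m0 = {"case created": 1}
m1 = fire(m0, {"case created": 1}, {"data collected": 1})
m2 = fire(m1, {"data collected": 1}, {"decision made": 1})
```

In a colored net, tokens would additionally carry data values (the case's data objects), and a transition's guard would check the declarative constraints before enabling it.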
This paper examines the function that cross-cultural competence (3C) has for NATO in a military context while focusing on two member states and their armed forces: the United States and Germany. Three dimensions were established to analyze 3C internally and externally: dimension A, dealing with 3C within the military organization; dimension B, focusing on 3C in a coalition environment/multicultural NATO contingent, for example while on a mission/training exercise abroad; and dimension C, covering 3C and NATO missions abroad with regard to interaction with the local population.
When developing the research design, the cultural studies-based theory of hegemony constructed by Antonio Gramsci was applied to a comprehensive document analysis of 3C coursework and regulations as well as official documents in order to establish a typification for cross-cultural competence.
As a result, 3C could be categorized as Type I – Ethical 3C, Type II – Hegemonic 3C, and Type III – Dominant 3C. Attributes were assigned according to each type. To validate the established typification, qualitative surveys were conducted with NATO (ACT), the U.S. Armed Forces (USCENTCOM), and the German Armed Forces (BMVg). These interviews validated the typification and revealed a varied approach to 3C across the established dimensions. It became evident that dimensions A and B indicated a prevalence of Type III, which greatly impacts the work atmosphere and effectiveness for NATO (ACT). In contrast, dimension C revealed the use of postcolonial mechanisms by NATO forces, such as applying one's own value system to other cultures and having the appearance of an occupying force when 3C is not applied (Types I-II). In general, the function of each 3C type in the various dimensions could be determined.
In addition, a comparative study of the document analysis and the qualitative surveys resulted in a canon of culture-general skills. Given the identified lack of coherence in 3C, which correlates with a demonstrably negative impact on effectiveness and efficiency as well as interoperability, a NATO standard in the form of a standardization agreement (STANAG) was suggested on the basis of the aforementioned findings, with a focus on: empathy, cross-cultural awareness, communication skills (including active listening), flexibility and adaptability, and interest. Moreover, tolerance of ambiguity, teachability, patience, observation skills, and perspective-taking could be considered significant. Suspending judgment and respect are also relevant skills here.
At the same time, the document analysis also revealed a lack of coherence and consistency in 3C education and interorganizational alignment. In particular, the documents examined for the U.S. Forces indicated divergent approaches. Furthermore, the interview analysis revealed, in part, a large discrepancy between doctrine and actual implementation with regard to the NATO Forces.
Subdividing space through interfaces leads to many space partitions that are relevant to soft matter self-assembly. Prominent examples include cellular media, e.g. soap froths, which are bubbles of air separated by interfaces of soap and water, but also more complex partitions such as bicontinuous minimal surfaces.
Using computer simulations, this thesis analyses soft matter systems in terms of the relationship between the physical forces between the system's constituents and the structure of the resulting interfaces or partitions. The focus is on two systems, copolymeric self-assembly and the so-called Quantizer problem, where the driving force of structure formation, the minimisation of the free energy, is an interplay of surface-area minimisation and stretching contributions, favouring cells of uniform thickness.
In the first part of the thesis we address copolymeric phase formation with sharp interfaces. We analyse a columnar copolymer system "forced" to assemble on a spherical surface, where the perfect solution, the hexagonal tiling, is topologically prohibited. For a system of three-armed copolymers, the resulting structure is described by solutions of the so-called Thomson problem, the search for minimal-energy configurations of repelling charges on a sphere. We find three intertwined Thomson problem solutions on a single sphere, occurring with a probability that depends on the radius of the substrate.
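The Thomson problem itself is easy to state numerically: minimise the Coulomb energy, i.e. the sum of 1/r over all pairs of unit charges constrained to the unit sphere. The following Python sketch (a generic projected-gradient minimiser, not the method used in the thesis) recovers the regular tetrahedron for N = 4:

```python
import math
import random

def coulomb_energy(pts):
    """Sum of 1/r over all pairs of points."""
    return sum(
        1.0 / math.dist(pts[i], pts[j])
        for i in range(len(pts))
        for j in range(i + 1, len(pts))
    )

def _normalize(p):
    n = math.sqrt(sum(x * x for x in p))
    return [x / n for x in p]

def minimize_thomson(n, steps=6000, lr=0.005, seed=1):
    """Projected gradient descent: move every charge along its net
    repulsive force, then project back onto the unit sphere."""
    rng = random.Random(seed)
    pts = [_normalize([rng.gauss(0, 1) for _ in range(3)]) for _ in range(n)]
    for _ in range(steps):
        new_pts = []
        for i, p in enumerate(pts):
            force = [0.0, 0.0, 0.0]
            for j, q in enumerate(pts):
                if i == j:
                    continue
                d = [p[k] - q[k] for k in range(3)]
                r = math.sqrt(sum(x * x for x in d))
                for k in range(3):
                    force[k] += d[k] / r**3  # repulsion = -grad of 1/r
            new_pts.append(_normalize([p[k] + lr * force[k] for k in range(3)]))
        pts = new_pts
    return pts

# Four charges relax to a regular tetrahedron (energy 6*sqrt(3/8) ~ 3.674)
tetra = minimize_thomson(4)
```

For larger N the energy landscape develops many local minima, which is precisely why the problem is hard and why distinct locally optimal solutions can coexist on a single sphere.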
We then investigate the formation of amorphous and crystalline structures in the Quantizer system, a particulate model with an energy functional without surface tension that favours spherical cells of equal size. We find that quasi-static equilibrium cooling allows the Quantizer system to crystallise into a BCC ground state, whereas quenching and non-equilibrium cooling, i.e. cooling at slower rates than quenching, leads to an approximately hyperuniform, amorphous state. The assumed universality of the latter, i.e. its independence of the energy minimisation method or initial configuration, is strengthened by our results. We expand the Quantizer system by introducing interface tension, creating a model that we find to mimic polymeric micelle systems: an order-disorder phase transition is observed with a stable Frank–Kasper phase.
The second part considers bicontinuous partitions of space into two network-like domains, and introduces an open-source tool for the identification of structures in electron microscopy images. We expand a method of matching experimentally accessible projections with computed projections of potential structures, introduced by Deng and Mieczkowski (1998). The computed structures are modelled using nodal representations of constant-mean-curvature surfaces. A case study conducted on etioplast cell membranes in chloroplast precursors establishes the double Diamond surface structure to be dominant in these plant cells. We automate the matching process employing deep-learning methods, which manage to identify structures with excellent accuracy.
This study of the international private law of crypto assets provides a comprehensive overview for practitioners through a systematic analysis of the relevant scenarios, taking into account the technical particularities, in particular of the blockchain. The core of the work consists of the four scenarios examined: mining, the issuance of new crypto assets in the course of an ICO, transactions on the secondary market, and prospectus liability in ICOs. From a legal perspective, the questions of international jurisdiction under the Brussels Ia Regulation and of the applicable law under the Rome I and Rome II Regulations are addressed above all. In addition, aspects of capital markets conflict of laws are discussed.
The index theorem for elliptic operators on a closed Riemannian manifold by Atiyah and Singer has many applications in analysis, geometry and topology, but it is not suitable for a generalization to a Lorentzian setting.
Where a boundary is present, Atiyah, Patodi and Singer provide an index theorem for compact Riemannian manifolds by introducing non-local boundary conditions obtained via the spectral decomposition of an induced boundary operator, the so-called APS boundary conditions. Bär and Strohmaier prove a Lorentzian version of this index theorem for the Dirac operator on a manifold with boundary by utilizing results from APS and Phillips' characterization of the spectral flow. In their case the Lorentzian manifold is assumed to be globally hyperbolic and spatially compact, and the induced boundary operator is given by the Riemannian Dirac operator on a spacelike Cauchy hypersurface. Their results show that imposing APS boundary conditions for this boundary operator yields a Fredholm operator with a smooth kernel, whose index can be calculated by a formula similar to the Riemannian case.
Back in the Riemannian setting, Bär and Ballmann provide an analysis of the most general kind of boundary conditions that can be imposed on a first-order elliptic differential operator and that still yield regularity of solutions as well as the Fredholm property for the resulting operator. These boundary conditions can be thought of as deformations to the graph of a suitable operator mapping APS boundary conditions to their orthogonal complement.
This thesis aims at applying the boundary conditions found by Bär and Ballmann to a Lorentzian setting in order to understand more general types of boundary conditions for the Dirac operator, preserving the Fredholm property as well as providing regularity results and relative index formulas for the resulting operators. As it turns out, applying these graph-type boundary conditions to the Lorentzian Dirac operator differs in some respects from the Riemannian setting. It will be shown that, in contrast to the Riemannian case, passing from a Fredholm boundary condition to its orthogonal complement is unproblematic in the Lorentzian setting. On the other hand, in order to deduce the Fredholm property and regularity of solutions for graph-type boundary conditions, additional assumptions on the deformation maps need to be made.
The thesis is organized as follows. In chapter 1 basic facts about Lorentzian and Riemannian spin manifolds, their spinor bundles and the Dirac operator are listed. These will serve as a foundation to define the setting and prove the results of later chapters.
Chapter 2 defines the general notion of boundary conditions for the Dirac operator used in this thesis and introduces the APS boundary conditions as well as their graph-type deformations. The role of the wave evolution operator in finding Fredholm boundary conditions is also analyzed, and these boundary conditions are connected to the notion of Fredholm pairs in a given Hilbert space.
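For orientation, the standard textbook notion of a Fredholm pair (going back to Kato; this is the general definition, not a statement quoted from the thesis) is: a pair \((X, Y)\) of closed subspaces of a Hilbert space \(H\) such that

```latex
\dim (X \cap Y) < \infty, \qquad X + Y \ \text{closed in } H, \qquad \operatorname{codim}_H (X + Y) < \infty,
```

with index \(\operatorname{ind}(X, Y) = \dim(X \cap Y) - \operatorname{codim}(X + Y)\). Identifying boundary conditions with such pairs is what allows Fredholm properties of the boundary value problem to be read off from subspace geometry.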
Chapter 3 focuses on the principal symbol calculation of the wave evolution operator, and the results are used to prove the Fredholm property as well as regularity of solutions for suitable graph-type boundary conditions. Sufficient conditions are also derived for (pseudo-)local boundary conditions imposed on the Dirac operator to yield a Fredholm operator with a smooth solution space.
In the final chapter 4, a few examples of boundary conditions are calculated by applying the results of the previous chapters. By restricting to special geometries and/or boundary conditions, results can be obtained that are not covered by the more general statements, and it is shown that so-called transmission conditions behave very differently from their Riemannian counterparts.
Social networking site use and well-being - a nuanced understanding of a complex relationship
(2022)
Social Networking Sites (SNSs) are ubiquitous and attract an enormous share of the digital population. Their functionalities allow users to connect and interact with others and to weave complex social networks in which social information is continuously disseminated between users. Besides the social value SNSs generate, they likewise attract companies and allow for new forms of marketing, thereby creating considerable economic value as well. However, as SNSs grew in popularity, so did concerns about the impact of their use on social interactions in general and the well-being of individual users in particular. While existing scientific evidence points to both risks and benefits of SNS use, research still lacks a profound understanding of which aspects of SNSs enable an impact on well-being and which psychological processes on the part of the users underlie and explain this relationship. Therefore, this thesis is dedicated to an in-depth exploration of the relationship between SNS use and well-being and aims to answer how SNS use can impact well-being. Primarily, it focuses on the unique technological features that characterize SNSs and enable potential well-being alterations, and on specific psychological processes on the part of the users underlying and explaining the relationship. For this purpose, the thesis first introduces the concept of well-being. It continues by presenting SNSs' unique technological features, divided into specifics of the content disseminated on SNSs and the network structure of SNSs. Further, the thesis introduces three classes of psychological processes assumed most relevant for the relationship between SNSs and well-being: other-focused, self-focused, and contrastive processes.
It is assumed that the course and quality of these common processes change in the SNS context and that a complex interplay between the unique features of SNSs and these processes determines how SNSs may ultimately affect users' well-being, both in positive and negative ways. The dissertation comprises seven research articles, each of which focuses on a particular set of SNS characteristics, their interplay with one or more of the proposed psychological processes, and ultimately the resulting effects on user well-being or its key resilience and risk factors. The seven articles investigate this relationship using different methodological approaches. Three articles are based on either systematic or narrative literature reviews, one applies an empirical cross-sectional research design, and three present an experimental investigation. Thematically, two articles revolve around the effect of SNS use on self-esteem. Three articles examine the specific role of the emotion of envy and its potential to establish and perpetuate a well-being-damaging social climate on SNSs. The last two articles of this thesis revolve around the established assumption that active and passive SNS use, as different modalities of SNS use, cause differential effects on users' well-being due to the involvement of different psychological processes. The results of this thesis illustrate the different ways in which SNSs can affect users' well-being. They suggest that especially contrastive processes play a decisive role in explaining potential well-being risks for SNS users. Their interplay with certain SNS features seems to foster upward social comparisons and feelings of envy, potentially leading to a complex set of deleterious effects on users' well-being. At the same time, the findings illuminate ways in which SNSs can benefit users and their self-esteem, especially when SNS use promotes self-focused and social-feedback-based other-focused processes.
The thesis and its findings illustrate that the relationship between SNSs and well-being is complex. Therefore, a nuanced perspective, taking into consideration both the technological uniqueness of SNSs and the psychological processes they enable, is crucial to understanding how these technologies affect their users in beneficial and potentially harmful ways. On the one hand, the gathered insights contribute to research by providing novel insights into the complex relationship between SNS use and well-being. On the other hand, the results enable a focused and action-oriented derivation of recommendations for stakeholders such as individual users, policymakers, and platform providers. The findings of this thesis can help them to better combat SNS-related risks and ultimately ensure a healthy and sustainable environment for users, and thus also the economic value of SNSs, in the long term.
Ein Dach über Europa
(2022)
Where is Germany's missile defence? After Russia's annexation of Crimea in 2014, in violation of international law, this question moved into the focus of press coverage. Defence against ballistic missiles is the responsibility of the surface-to-air missile forces of the German Air Force. During the East-West conflict, around 18,600 German soldiers protected the Western alliance against air attacks by the Warsaw Pact within NATO's integrated air defence. After reunification, the alliance's air defence belt not only found itself in a geographically ineffective position, it had also lost its raison d'être. Its dissolution was accompanied by a substantial reduction in the personnel and equipment of the surface-to-air missile units. After the reorientation of the Bundeswehr in 2012, this branch of the Air Force was left with a single wing of around 2,300 posts. The old enemy was gone, and after 1989/90 Germany was surrounded by friends and allies. Why, then, should the government invest in a capability that Germany did not need for itself?
Natural hazards pose a threat to human health and life. In Germany, where the research for this thesis was conducted, numerous weather extremes occurred in the recent past that caused high numbers of fatalities and huge financial losses. This research centres on two relevant natural hazards: heat stress and flooding. Preventing negative health impacts and deaths, as well as structural and monetary damage, is the purpose of risk management, and this requires citizens to adapt as well. Risk communication is implemented to foster people's risk perception and motivate individual adaptation. However, methods of risk and crisis communication are often not evaluated in a structured manner. Much interdisciplinary research exists on both risk perception and adaptation; however, little is known about the connection between the two. Furthermore, the existing research on risk communication is often not theory-driven, and its impact on individual adaptation and risk perception is not thoroughly documented. This dissertation follows three research aims: (1) Compare psychological theories that contribute to natural hazard research. (2) Explore risk perception and adaptive behaviour by applying multiple methods. (3) Evaluate one risk communication method and one crisis communication method in a theory-driven manner to determine their impact on risk perception and adaptive behaviour. First, a literature review is provided on existing psychological theories that aim to explain the behaviour of individuals with regard to natural hazards. The three key theories included are the Protection Motivation Theory (PMT), the Protective Action Decision Model (PADM), and the Risk Information Seeking and Processing Model (RISP). Each of these is described and compared to the others with a focus on their explanatory power and practical significance in interdisciplinary research.
Theoretical adaptations and possible extensions for future research are proposed for the presented approaches. Second, a multimethod field study on heat stress at an open-air event is presented. Face-to-face surveys (n = 306) and behavioural observations (n = 2750) were carried out at a horticultural show in Würzburg in summer 2018. The visitors' risk perception, adaptive behaviour, and activity level were analysed and compared between hot days, summer days, and rainy days, applying correlation analyses, ANOVA, and multiple regression analyses. Heat risk perception was generally high, but most respondents were unaware of heat warnings on the day of their visit. During hot days the highest level of adaptation and lower activity levels were observed. Discrepancies between reported and observed adaptation emerged for different age groups. Third, a telephone and web-based household survey on heat stress was conducted in the cities of Würzburg, Potsdam, and Remscheid in 2019 (n = 1417). The PADM served as the study's theoretical framework. In multiple regression analyses, the PADM factors of environmental and demographic context, risk communication, and psychological processes explained a substantial share of the variance of protection motivation, protective response, and emotion-focused coping. Elements of crisis communication of a heat warning were evaluated experimentally. Results showed that understanding and adaptation intention were significantly higher in individuals who had received action recommendations alongside the heat warning. Fourth, the focus is set on a risk communication method in the flood context. A series of workshops on individual flood protection was carried out in six different settings. The participants (n = 115) answered a pretest-posttest questionnaire. Mixed-model analyses revealed significant increases in self-efficacy, subjective knowledge, and protection motivation.
Stronger effects were observed in younger participants and in those with lower levels of previous knowledge on flood adaptation and no flood experience. The findings of this thesis help to understand individual adaptation, as well as possible impacts of risk and crisis communication on risk perception and adaptation. The scientific background of this work is rooted in the disciplines of psychology and the geosciences. The two theories PMT and PADM proved to be useful theoretical frameworks for the presented studies to suggest improvements in risk communication methods. A broad picture of individual adaptation is captured through a variety of self-report methods (face-to-face, telephone-based, web-based, and paper-pencil surveys) and behavioural observations, which recorded past and intended behaviour. Alongside further methodological recommendations, the theory-driven evaluations of risk and crisis communication methods can serve as best-practice examples for future evaluation studies in natural hazard research and in other disciplines dealing with risk behaviour, helping to identify and improve effective risk communication pathways.
Fiber-based microfluidics has undergone many innovative developments in recent years, with exciting examples of portable, cost-effective and easy-to-use detection systems already being used in diagnostic and analytical applications. In water samples, Legionella pose a serious risk as human pathogens. Infection occurs through inhalation of aerosols containing Legionella cells and can cause severe pneumonia that may even be fatal. In case of Legionella contamination of water-bearing systems or Legionella infection, it is essential to find the source of the contamination as quickly as possible to prevent further infections. In drinking, industrial and wastewater monitoring, the culture-based method is still the most commonly used technique to detect Legionella contamination. New innovative ideas are needed to overcome the dependence on laboratory analysis, the long analysis times of 10-14 days, and the imprecision of measured values in colony forming units (CFU). In all areas of application, for example in public, commercial or private facilities, rapid and precise analysis is required, ideally on site.
In this PhD thesis, all necessary single steps for a rapid DNA-based detection of Legionella were developed and characterized on a fiber-based miniaturized platform. In the first step, a fast, simple and device-independent chemical lysis of the bacteria and extraction of genomic DNA was established. Subsequently, different materials were investigated with respect to their non-specific DNA retention. Glass fiber filters proved to be particularly suitable, as they allow recovery of the DNA sample from the fiber material in combination with dedicated buffers and exhibit low autofluorescence, which was important for fluorescence-based readout.
A fiber-based electrophoresis unit was developed to migrate different oligonucleotides within a fiber matrix by application of an electric field. A particular advantage over lateral flow assays is the targeted movement, even after the fiber is saturated with liquid. For this purpose, the entire process of fiber selection, fiber chip patterning, combination with printed electrodes, and testing of retention and migration of different DNA samples (single-stranded, double-stranded and genomic DNA) was performed. DNA could be pulled across the fiber chip in an electric field of 24 V/cm within 5 minutes, remained intact, and could be used for subsequent detection assays, e.g., polymerase chain reaction (PCR) or fluorescence in situ hybridization (FISH). Fiber electrophoresis could also be used to separate DNA from other components, e.g., proteins or cell lysates, or to pull DNA through multiple layers of the glass microfiber. In this way, different fragments experienced a moderate, size-dependent separation. Furthermore, this arrangement offers the possibility that different detection reactions could take place in different layers at a later time. Electric current and potential measurements were collected to investigate the local distribution of the sample during migration. While an increase in the current signal at high concentrations indicated the presence of DNA samples, initial experiments with methylene-blue-stained DNA showed a temporal sequence of signals, indicating sample migration along the chip.
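The reported numbers can be cross-checked with the basic electrophoresis relation v = μE. The field strength (24 V/cm) and migration time (5 minutes) are taken from the text; the chip length of 2 cm is an assumed, purely illustrative value:

```python
# Plausibility check for the fiber electrophoresis numbers.
# Field strength and migration time are reported in the text;
# the chip length is an ASSUMED, illustrative value.
field_v_per_cm = 24.0        # reported electric field, V/cm
chip_length_cm = 2.0         # assumed traversal distance (hypothetical)
time_s = 5 * 60              # reported migration time, s

velocity = chip_length_cm / time_s       # cm/s
mobility = velocity / field_v_per_cm     # effective mobility, cm^2/(V*s)
print(f"v = {velocity:.2e} cm/s, mu = {mobility:.1e} cm^2/(V*s)")
```

The implied effective mobility of roughly 3e-4 cm^2/(V s) is of the same order of magnitude as typical free-solution DNA mobilities, which makes a centimetre-scale traversal in minutes plausible under the stated assumption.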
For the specific detection of Legionella DNA, FISH-based detection with a molecular beacon probe was tested on the glass microfiber. A specific region within the 16S rRNA gene of Legionella spp. served as a target. For this detection, suitable reaction conditions and a readout unit had to be set up first. Subsequently, the sensitivity of the probe was tested with the reverse complementary target sequence, and the specificity with several DNA fragments that differed from the target sequence. Compared to other DNA sequences of similar length also found in Legionella pneumophila, only the target DNA was specifically detected on the glass microfiber. If only a single base exchange is present, or if two bases are changed, the probe can no longer distinguish between DNA targets and non-targets. An analysis with this specificity can be achieved with other methods such as melting point determination, as was also briefly indicated here. The molecular beacon probe could be dried on the glass microfiber and stored at room temperature for more than three months, after which it was still capable of detecting the target sequence. Finally, the feasibility of fiber-based FISH detection for genomic Legionella DNA was tested. Without further processing, the probe was unable to detect its target sequence in the complex genomic DNA. However, after selection and application of appropriate restriction enzymes, specific detection of Legionella DNA against other aquatic pathogens with similar fragment patterns, such as Acinetobacter haemolyticus, was possible.
Humankind and its environment need to be protected from the harmful effects of spent nuclear fuel, and disposal in deep geological formations is therefore favoured worldwide. The suitability of potential host rocks is evaluated, among other criteria, by their retention capacity with respect to radionuclides. Safety assessments are based on the quantification of radionuclide migration lengths with numerical simulations, as experiments cannot cover the required temporal (1 Ma) and spatial (>100 m) scales.
The aim of the present thesis is to assess the migration of uranium, a geochemically complex radionuclide, in the potential host rock Opalinus Clay. Radionuclide migration in clay formations is governed by diffusion, due to their low permeability, and retarded by sorption. Both processes depend strongly on pore water geochemistry and mineralogy, which vary between different facies. Diffusion is quantified with the single-component (SC) approach, using one diffusion coefficient for all species, and with the process-based multi-component (MC) option, in which each species is assigned its own diffusion coefficient and the interaction with the diffuse double layer is taken into account. Sorption is integrated via a bottom-up approach using mechanistic surface complexation models and cation exchange. Reactive transport simulations are conducted with the geochemical code PHREEQC to quantify uranium migration, i.e. diffusion and sorption, as a function of mineralogical and geochemical heterogeneities on the host rock scale.
Sorption processes are facies dependent. Migration lengths vary between the Opalinus Clay facies by up to 10 m. The geochemistry of the pore water, in particular the partial pressure of carbon dioxide (pCO2), is more decisive for the sorption capacity than the amount of clay minerals; nevertheless, higher clay mineral contents compensate for geochemical variations. Consequently, sorption processes must be quantified as a function of the pore water geochemistry in contact with the mineral assemblage.
Uranium diffusion in the Opalinus Clay is facies independent. Speciation is dominated by aqueous ternary complexes of U(VI) with calcium and carbonate. Differences in migration lengths between SC and MC diffusion are negligible at +/-5 m. Further, the application of the MC approach depends strongly on the quality and availability of the underlying data. Diffusion processes can therefore be adequately quantified with the SC approach using experimentally determined diffusion coefficients.
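An order-of-magnitude feel for such migration lengths follows from the 1-D diffusion relation L ~ sqrt(2 D_a t). The sketch below uses generic, assumed apparent diffusion coefficients typical of clay rocks; these are illustrative values, not the coefficients determined in the thesis:

```python
import math

# Back-of-the-envelope diffusive migration length over 1 Ma.
# D_a values below are generic ASSUMPTIONS for clay rocks.
SECONDS_PER_MA = 1e6 * 365.25 * 24 * 3600

def migration_length(d_apparent, time_s):
    """Characteristic 1-D diffusion length L = sqrt(2 * D_a * t), in metres."""
    return math.sqrt(2.0 * d_apparent * time_s)

for d_a in (1e-13, 1e-12, 1e-11):  # apparent diffusion coefficient, m^2/s
    L = migration_length(d_a, SECONDS_PER_MA)
    print(f"D_a = {d_a:.0e} m^2/s  ->  L ~ {L:.1f} m")
```

With these assumed coefficients the lengths span a few metres to a few tens of metres over 1 Ma, which illustrates why facies-dependent variations of a few metres in D_a translate into migration-length differences on the metre scale.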
Pore water geochemistry within the formation is governed by the hydrogeological system rather than by the mineralogy. Diffusive exchange with the adjacent aquifers has established geochemical gradients over geological time scales that can enhance migration by up to 25 m. Consequently, uranium sorption processes must be quantified following the identified priority: pCO2 > hydrogeology > mineralogy.
Owing to the identified mechanisms and dependencies, the presented research provides a workflow and orientation for other potential disposal sites with similar pore water geochemistry. With a maximum migration length of 70 m, the retention capacity of the Opalinus Clay with respect to uranium is sufficient to fulfill the German legal minimum requirement of a host rock thickness of at least 100 m.
Das Totenfürsorgerecht
(2022)
This study addresses the perennial questions of what legal fate the human body takes after death, whether the corpse is inherited, and who may decide about it, and to what extent. The author concludes that the right of care for the dead, as an essential component of the so-called post-mortem protection of personality, which should be anchored in the Basic Law as a constitutional objective, aims at preserving piety and is bound first and foremost by the will of the deceased. Where the deceased's will is unknown, however, the person entitled to care for the dead may, in a few areas, also make decisions of their own about the corpse entrusted to them. The treatment of the corpse cannot so far be assigned to any known legal institution and constitutes customary law. At present, legal uncertainty prevails. To remedy this legislative deficit, the author proposes a federal statute and submits a corresponding draft.
Data stream processing systems (DSPSs) are a key enabler for integrating continuously generated data, such as sensor measurements, into enterprise applications. DSPSs allow information from data streams to be analyzed steadily, e.g., to monitor manufacturing processes and enable fast reactions to anomalous behavior. Moreover, DSPSs continuously filter, sample, and aggregate incoming streams of data, which reduces the data size and thus data storage costs.
The growing volumes of generated data have increased the demand for high-performance DSPSs, leading to a higher interest in these systems and to the development of new DSPSs. While having more DSPSs is favorable for users as it allows choosing the system that satisfies their requirements the most, it also introduces the challenge of identifying the most suitable DSPS regarding current needs as well as future demands. Having a solution to this challenge is important because replacements of DSPSs require the costly re-writing of applications if no abstraction layer is used for application development. However, quantifying performance differences between DSPSs is a difficult task. Existing benchmarks fail to integrate all core functionalities of DSPSs and lack tool support, which hinders objective result comparisons. Moreover, no current benchmark covers the combination of streaming data with existing structured business data, which is particularly relevant for companies.
This thesis proposes a performance benchmark for enterprise stream processing called ESPBench. With enterprise stream processing, we refer to the combination of streaming and structured business data. Our benchmark design represents real-world scenarios and allows for an objective result comparison as well as scaling of data. The defined benchmark query set covers all core functionalities of DSPSs. The benchmark toolkit automates the entire benchmark process and provides important features, such as query result validation and a configurable data ingestion rate.
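To give a concrete, if toy, picture of the core operators such benchmark queries exercise (filtering, windowing, aggregation), here is a minimal pure-Python sketch; it is an illustration only and is unrelated to the actual ESPBench toolkit or query set:

```python
from collections import defaultdict

def tumbling_window_avg(events, window_s, min_value=None):
    """Toy stream pipeline: filter events, assign them to tumbling
    windows, and aggregate per window -- the kind of core operators
    a DSPS benchmark query exercises. `events` are (timestamp, value)."""
    windows = defaultdict(list)
    for ts, value in events:
        if min_value is not None and value < min_value:
            continue  # filter operator drops low readings
        windows[ts // window_s].append(value)  # tumbling-window assignment
    # aggregation operator: per-window average
    return {w: sum(v) / len(v) for w, v in sorted(windows.items())}

stream = [(0, 10.0), (3, 50.0), (7, 30.0), (11, 2.0), (14, 40.0)]
print(tumbling_window_avg(stream, window_s=10, min_value=5.0))
# -> {0: 30.0, 1: 40.0}; the reading 2.0 was filtered out
```

A real DSPS performs the same logical steps, but distributed, fault-tolerant, and on unbounded input, which is precisely where the performance differences a benchmark must quantify arise.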
To validate ESPBench and to ease the use of the benchmark, we propose an example implementation of the ESPBench queries leveraging the Apache Beam software development kit (SDK). The Apache Beam SDK is an abstraction layer for developing stream processing applications that is applied in academia as well as in enterprise contexts. It allows the defined applications to run on any of the supported DSPSs. The performance impact of Apache Beam is studied in this dissertation as well; the results show that there is a significant influence that differs among DSPSs and stream processing applications. For validating ESPBench, we use the example implementation of the ESPBench queries developed with the Apache Beam SDK. We benchmark the implemented queries on three modern DSPSs: Apache Flink, Apache Spark Streaming, and Hazelcast Jet. The results of the study demonstrate that ESPBench and its toolkit work as intended: ESPBench is capable of quantifying performance characteristics of DSPSs and of unveiling differences among systems.
The benchmark proposed in this thesis covers all requirements to be applied in enterprise stream processing settings, and thus represents an improvement over the current state-of-the-art.
This thesis argues that Hegel's Science of Logic takes seriously a conception of absoluteness according to which there can be nothing outside the absolute. This is already apparent in the beginning of the Logic: if nothing can exist outside the absolute, then the beginning, too, cannot lie outside the absolute. Consequently, the beginning can only be made with the absolute. Positing the beginning as absolute, however, is at the same time a test of the beginning's absoluteness. The beginning cannot pass this test, for it lies in the essence of a beginning to be only a beginning and not the whole, and thus not the absolute. The beginning is furthest removed from being the whole and must therefore be regarded as the least absolute within the Logic. It is thus both: a beginning with the absolute and a beginning with the least absolute. The Logic contradicts itself already in its beginning. From this contradiction it must free itself. This liberation drives the movement away from the beginning and generates the progression of the Logic. The initial determination sublates itself and passes over into its successor determination. The successor determination is in turn posited as absolute, likewise fails to live up to this positing, and sublates itself into its own successor. Every determination that follows the beginning runs through this movement of being posited as absolute, failing at it, and sublating itself, until, at the very end of the Logic, this very movement is recognized as that which alone is capable of satisfying the claim to absoluteness. For if every determination is subject to this movement, then there is nothing outside this movement, and so it must be the absolute that was sought.
On its way to the true meaning of the absolute, the Logic returns again and again to the determination of its beginning in order to redeem presuppositions that had to be made in connection with that beginning. For the redemption of these presuppositions, the following passages will be of interest: the transition into the logic of essence, the transition into the logic of the concept, and the final chapter. For at the very last, in its end, the Logic returns into its beginning. Accordingly, with Hegel one can say: the first is also the last, and the last is also the first.
This thesis analyzes multiple coordination challenges that arise with the digital transformation of public administration in federal systems, illustrated by four case studies in Germany. I make various observations within a multi-level system and provide an in-depth analysis. Theoretical explanations from both federalism research and neo-institutionalism are utilized to explain the findings of the empirically driven work. The four articles paint a holistic picture of the German case and elucidate its role as a digital government laggard. Their foci range from the macro via the meso to the micro level of public administration, differentiating between the governance and the tool dimension of digital government.
The first article shows how multi-level negotiations lead to expensive but eventually satisfying solutions for the involved actors, creating a subtle balance between centralization and decentralization. The second article identifies legal, technical, and organizational barriers to cross-organizational service provision, highlighting the importance of inter-organizational and inter-disciplinary exchange and of both a common language and trust. Institutional change and its effects on the micro level, on citizens and the employees in local one-stop shops, mark the focus of the third article, bridging the gap between reforms and the administrative reality at the local level. The fourth article looks at the citizens' perspective on digital government reforms, their expectations, use, and satisfaction. In this vein, this thesis provides a detailed account of the importance of understanding the digital divide and thus of the necessity of reaching out to different recipients of digital government reforms. Where feasible, I draw conclusions for other federal systems from the factors identified as causes of Germany's shortcomings and derive reform potential from them. This allows a new perspective to be gained on digital government and its coordination challenges in federal contexts.
Die Conversos
(2022)
The fifteenth century was the age of the last mass conversions to Christianity on the Iberian Peninsula. Under heavy pressure, and in part by force, the previously large Jewish population was pushed to accept baptism. Nevertheless, large parts of the Christian majority did not accept the new converts as equals and doubted their orthodoxy. Clerics and scholars repeatedly opposed this discrimination with sermons, letters, and memoranda. The present study is the first to trace in detail the essential content of their theology, their particular interpretation of the Bible, and its roots in the law and tradition of the Latin Church.
The author examines the system of judicial protection against personnel selection decisions under civil service law with regard to its actual effectiveness in enforcing the fundamental right of equal access to public office under Art. 33(2) of the Basic Law (GG). The benchmark for this effectiveness review is the guarantee of legal protection in Art. 19(4) sentence 1 GG. Particular attention is paid to the modifications of established legal-protection doctrine introduced by the Federal Administrative Court's judgment of 4 November 2010. The author finds that applicants for public office are now granted formally complete primary legal protection, but that its practical effectiveness is considerably limited by numerous procedural peculiarities and by the handling of the broad margin of assessment and discretion granted to the employer in the selection decision. He concludes that the required effective judicial protection can only be guaranteed by designing the administrative selection procedure in a manner conducive to legal protection, and derives specific minimum organizational requirements for the selection procedure.
In recent years, organic solar cells (OSCs) have reached high efficiencies through the development of novel non-fullerene acceptors (NFAs). Fullerene derivatives had long been the centerpiece of the acceptor materials used in organic photovoltaic (OPV) research, but since 2015 novel NFAs have been a game-changer and have overtaken fullerenes. Nevertheless, the current understanding of the properties of NFAs for OPV is still relatively limited, and critical mechanisms defining the performance of OPVs remain topics of debate.
In this thesis, attention is paid to understanding reduced-Langevin recombination with respect to the device physics of fullerene and non-fullerene systems. The work comprises four closely linked studies. The first is a detailed exploration of the fill factor (FF), expressed in terms of transport and recombination properties, in a comparison of fullerene and non-fullerene acceptors. We identified the key reason for the reduced FF in the NFA (ITIC-based) devices: faster non-geminate recombination relative to the fullerene (PCBM[70]-based) devices. This is followed by a consideration of a newly synthesized NFA of the Y-series, which exhibited the highest power conversion efficiency for OSCs at the time. In the second study, we thus illustrated the role of disorder in the non-geminate recombination and charge extraction of thick NFA (Y6-based) devices. As a result, we enhanced the FF of thick PM6:Y6 devices by reducing the disorder, which suppresses non-geminate recombination toward a non-Langevin regime. In the third study, we revealed the reason for the thickness independence of the short-circuit current of PM6:Y6 devices: the extraordinarily long diffusion length of Y6. The fourth study entails a broad comparison of a selection of fullerene and non-fullerene blends with respect to charge-generation efficiency and recombination, unveiling the importance of efficient charge generation for achieving reduced recombination.
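For orientation, the textbook Langevin picture (standard in the OPV literature, not specific to this thesis) relates the encounter-limited recombination coefficient to the carrier mobilities, and a prefactor γ < 1 quantifies "reduced-Langevin" recombination:

```latex
\beta_{\mathrm{L}} = \frac{q\,(\mu_n + \mu_p)}{\varepsilon_0 \varepsilon_{\mathrm{r}}},
\qquad
R_{\mathrm{ng}} = \gamma\,\beta_{\mathrm{L}}\,(np - n_i^2),
\qquad \gamma \le 1,
```

where $\mu_n$ and $\mu_p$ are the electron and hole mobilities, $\varepsilon_0\varepsilon_{\mathrm{r}}$ the permittivity, and $n$, $p$ the carrier densities. The smaller $\gamma$, the weaker the non-geminate recombination relative to carrier encounters, which is the regime a high fill factor in thick devices requires.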
I employed transient measurements such as Time-Delayed Collection Field (TDCF) and Resistance-dependent Photovoltage (RPV), and steady-state techniques such as Bias-Assisted Charge Extraction (BACE), Temperature-Dependent Space-Charge-Limited Current (T-SCLC), Capacitance-Voltage (CV), and Photo-Induced Absorption (PIA) measurements, to analyze the OSCs.
Together, the outcomes of this thesis draw a complex picture of the multiple factors that affect reduced-Langevin recombination and thereby the FF and overall performance. This provides a suitable platform for identifying important parameters when designing new blend systems. As a result, we succeeded in improving the overall performance by enhancing the FF of thick NFA devices through adjustment of the amount of solvent additive in the active blend solution. The thesis also highlights potentially critical gaps in the current experimental understanding of fundamental charge interaction and recombination dynamics.
The legal doctrine of the defective partnership (fehlerhafte Personengesellschaft) has a long tradition in both German and French law, and in its early days German law was modeled on the French example. Also in view of today's statutory regulation in France, a comparative study of the doctrine of the defective partnership in both countries is therefore worthwhile.
Despite the different dogmatic approaches, important points of comparison emerge. Particularly with regard to the doctrinal classification of the phenomenon of facticity in civil law, the perspective of French law can prove exceptionally fruitful for German legal doctrine.
Characterization of the role of stress - responsive NAC transcription factors ANAC055 and ATAF1
(2022)
Selbstwirksamkeitserwartungen von Lehramtsstudierenden im Kontext von schulpraktischen Erfahrungen
(2022)
Self-efficacy beliefs play an important role in teachers' professional classroom behavior (Tschannen-Moran et al., 1998) as well as in students' achievement and behavior (Mojavezi & Tamiz, 2012). Teacher self-efficacy is defined as teachers' conviction that they are able to achieve specific goals in a specific situation (Dellinger et al., 2008; Tschannen-Moran & Hoy, 2001). Given the significant role of teachers in the education system and in society, it is important to promote teachers' well-being, productivity, and effectiveness (Kasalak & Dagyar, 2020). Empirical findings underscore the positive effects of teacher self-efficacy on teachers' well-being (Perera & John, 2020) and on student learning and achievement (Zee & Koomen, 2016). However, there is a lack of empirical research examining the importance of self-efficacy beliefs among pre-service teachers in teacher education (Yurekli et al., 2020), especially during practical school-based training phases. Building on the importance of one's own teaching experience, described as mastery experience, i.e., the strongest source of self-efficacy for pre-service teachers (Pfitzner-Eden, 2016b), this dissertation examines practical experience as a source of pre-service teachers' self-efficacy and the change in pre-service teachers' self-efficacy during teacher education. Study 1 therefore focuses on the change in pre-service teachers' self-efficacy during short practical teaching experiences compared to online teaching without teaching experience.
Given inconsistent findings on the reciprocal relations between teachers' self-efficacy beliefs and their teaching behavior (Holzberger et al., 2013; Lazarides et al., 2022), Study 2 examined the relation between pre-service teachers' self-efficacy and their teaching behavior during teacher education. Since feedback can serve as verbal persuasion and is thus an important source of self-efficacy beliefs that strengthens the sense of competence (Pfitzner-Eden, 2016b), Study 2 focuses on the relation between change in pre-service teachers' self-efficacy and the perceived quality of peer feedback in the context of short practical school experiences during teacher education. Furthermore, when examining change in pre-service teachers' self-efficacy, it is important to consider individual personality aspects and specific conditions of the learning environment in teacher education (Bach, 2022). Based on the assumption that supporting reflection processes in teacher education (Menon & Azam, 2021) and using innovative learning settings such as VR videos (Nissim & Weissblueth, 2017) foster the development of pre-service teachers' self-efficacy beliefs, Study 3 and Study 4 examine pre-service teachers' reflection processes regarding their own teaching experiences and the vicarious teaching experiences of others, respectively. Against the background of inconsistent findings and a lack of empirical research on the relations between pre-service teachers' self-efficacy and various factors concerning the learning environment or personal characteristics, further empirical studies are needed that examine different sources and correlates of pre-service teachers' self-efficacy beliefs during teacher education.
In this context, the present dissertation investigates which individual characteristics and learning environments can foster pre-service teachers' self-efficacy, especially during short practical phases of teacher education. The dissertation concludes by discussing the results of the four studies, taking the strengths and weaknesses of each study into view as a whole. Finally, limitations and implications for further research and practice are discussed.
Growth differentiation factor 15 (GDF15) is a stress-induced cytokine secreted into the circulation by a number of tissues under different pathological conditions such as cardiovascular disease, cancer or mitochondrial dysfunction, among others. While GDF15 signaling through its recently identified hindbrain-specific receptor GDNF family receptor alpha-like (GFRAL) has been proposed to be involved in the metabolic stress response, its endocrine role under chronic stress conditions is still poorly understood. Mitochondrial dysfunction is characterized by the impairment of oxidative phosphorylation (OXPHOS), leading to inefficient functioning of mitochondria and consequently, to mitochondrial stress. Importantly, mitochondrial dysfunction is among the pathologies to most robustly induce GDF15 as a cytokine in the circulation.
The overall aim of this thesis was to elucidate the role of the GDF15-GFRAL pathway under mitochondrial stress conditions. For this purpose, a mouse model of skeletal muscle-specific mitochondrial stress achieved by ectopic expression of uncoupling protein 1 (UCP1), the HSA-Ucp1-transgenic (TG) mouse, was employed. As a consequence of mitochondrial stress, TG mice display a metabolic remodeling consisting of a lean phenotype, an improved glucose metabolism, an increased metabolic flexibility and a metabolic activation of white adipose tissue.
Making use of TG mice crossed with whole body Gdf15-knockout (GdKO) and Gfral-knockout (GfKO) mouse models, this thesis demonstrates that skeletal muscle mitochondrial stress induces the integrated stress response (ISR) and GDF15 in skeletal muscle, which is released into the circulation as a myokine (muscle-induced cytokine) in a circadian manner. Further, this work identifies GDF15-GFRAL signaling to be responsible for the systemic metabolic remodeling elicited by mitochondrial stress in TG mice. Moreover, this study reveals a daytime-restricted anorexia induced by the GDF15-GFRAL axis under muscle mitochondrial stress, which is, mechanistically, mediated through the induction of hypothalamic corticotropin releasing hormone (CRH). Finally, this work elucidates a so far unknown physiological outcome of the GDF15-GFRAL pathway: the induction of anxiety-like behavior.
In conclusion, this study uncovers a muscle-brain crosstalk under skeletal muscle mitochondrial stress conditions through the induction of GDF15 as a myokine that signals through the hindbrain-specific GFRAL receptor to elicit a stress response leading to metabolic remodeling and modulation of ingestive and anxiety-like behavior.
Flares are magnetically driven explosions that occur in the atmospheres of all main sequence stars that possess an outer convection zone. Flaring activity is rooted in the magnetic dynamo that operates deep in the stellar interior, propagates through all layers of the atmosphere from the corona to the photosphere, and emits electromagnetic radiation from radio bands to X-ray. Eventually, this radiation, and associated eruptions of energetic particles, are ejected out into interplanetary space, where they impact planetary atmospheres, and dominate the space weather environments of young star-planet systems.
Thanks to the Kepler and the Transit Exoplanet Survey Satellite (TESS) missions, flare observations have become accessible for millions of stars and star-planet systems. The goal of this thesis is to use these flares as multifaceted messengers to understand stellar magnetism across the main sequence, investigate planetary habitability, and explore how close-in planets can affect the host star.
Using space based observations obtained by the Kepler/K2 mission, I found that flaring activity declines with stellar age, but this decline crucially depends on stellar mass and rotation. I calibrated the age of the stars in my sample using their membership in open clusters from zero age main sequence to solar age. This allowed me to reveal the rapid transition from an active, saturated flaring state to a more quiescent, inactive flaring behavior in early M dwarfs at about 600-800 Myr. This result is an important observational constraint on stellar activity evolution that I was able to de-bias using open clusters as an activity-independent age indicator.
The TESS mission quickly superseded Kepler and K2 as the main source of flares in low mass M dwarfs. Using TESS 2-minute cadence light curves, I developed a new technique for flare localization and discovered, against the commonly held belief, that flares do not occur uniformly across their stellar surface: In fast rotating fully convective stars, giant flares are preferably located at high latitudes. This bears implications for both our understanding of magnetic field emergence in these stars, and the impact on the exoplanet atmospheres: A planet that orbits in the equatorial plane of its host may be spared from the destructive effects of these poleward emitting flares.
AU Mic is an early M dwarf, and the most actively flaring planet host detected to date. Its innermost companion, AU Mic b, is one of the most promising targets for a first observation of flaring star-planet interactions. In these interactions, the planet influences the star, as opposed to space weather, where the planet is always on the receiving side. The effect reflects the properties of the magnetosphere shared by planet and star, as well as the so far inaccessible magnetic properties of planets. In the roughly 50 days of TESS monitoring data of AU Mic, I searched for statistically robust signs of flaring interactions with AU Mic b as flares that occur in surplus of the star's intrinsic activity. I found the strongest, yet still marginal, signal in recurring excess flaring in phase with the orbital period of AU Mic b. If it reflects a true signal, I estimate that extending the observing time by a factor of 2-3 will yield a statistically significant detection. Well within the reach of future TESS observations, this additional data may bring us closer to robustly detecting this effect than we have ever been.
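The search for "flares in surplus of the star's intrinsic activity" is, at its core, a comparison of per-phase flare counts against a phase-uniform null hypothesis. A minimal sketch of such a check (my simplification; the thesis's actual analysis is more careful, e.g. regarding the look-elsewhere effect of scanning many bins) could be:

```python
import math

def excess_flaring_pvalue(phases, n_bins=10):
    """phases: flare times folded to orbital phase in [0, 1).
    Counts flares per phase bin and returns the most occupied bin's
    count together with the Poisson tail probability of seeing that
    many flares in one bin if flaring were uniform in phase."""
    counts = [0] * n_bins
    for p in phases:
        counts[min(int(p * n_bins), n_bins - 1)] += 1
    mu = len(phases) / n_bins  # expected flares per bin under uniformity
    k_max = max(counts)
    # P(X >= k_max) for X ~ Poisson(mu)
    tail = 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k)
                     for k in range(k_max))
    return k_max, tail
```

A small tail probability for the bin aligned with the planet's orbital phase would be the kind of marginal signal described above; doubling or tripling the observing baseline shrinks it further if the clustering is real.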
This thesis demonstrates the immense scientific value of space based, long baseline flare monitoring, and the versatility of flares as a carrier of information about the magnetism of star-planet systems. Many discoveries still lay in wait in the vast archives that Kepler and TESS have produced over the years. Flares are intense spotlights into the magnetic structures in star-planet systems that are otherwise far below our resolution limits. The ongoing TESS mission, and soon PLATO, will further open the door to in-depth understanding of small and dynamic scale magnetic fields on low mass stars, and the space weather environment they effect.
This study explores the identity of the Bene Israel caste from India and its assimilation into Israeli society. The large immigration from India to Israel started in the early 1950s and continued until the early 1970s. Initially, these immigrants struggled hard as they faced many problems such as the language barrier, cultural differences, a new climate, geographical isolation, and racial discrimination. This analysis focuses on the three major aspects of the integration process involving the Bene Israel: economic, socio-cultural and political. The study covers the period from the early fifties to the present.
I will focus on the origin of the Bene Israel, which has evolved after their immigration to Israel; from a Hindu–Muslim lifestyle and customs they integrated into the Jewish life of Israel. Despite its ethnographic nature, this study has theological implications as it is an encounter between Jewish monotheism and Indian polytheism.
All the western scholars who researched the Bene Israel community felt impelled to rely on information received from community members themselves. No written historical evidence recorded Bene Israel culture and origin. Only from the nineteenth century onwards, after the intrusion of western Jewish missionaries, were Jewish books translated into Marathi. Missionary activities among the Bene Israel served as a catalyst for the Bene Israel themselves to investigate their historical past. Haeem Samuel Kehimkar (1830-1908), a Bene Israel teacher, wrote notes on the history of the Bene Israel in India in Marathi in 1897. Brenda Ness wrote in her dissertation:
The results [of the missionary activities] are several works about the community in English and Marathi by Bene-Israel authors which have appeared during the last century. These are, for the most part, not documented; they consist of much theorizing on accepted tradition and tend to be apologetic in nature.
There can be no philosophical explanation or rational justification for an entire community to leave its motherland, India, and enter into a process of annihilation of its own free will. I see this as social and cultural suicide. In craving a better future in Israel, the Indian Bene Israel community pays an enormously heavy price as a people discarded today by the East and disowned by the West, because they chose to become something that they never were and never could be. As it is written, "know where you came from, and where you are going." A community with an ancient history and a spiritual culture has completely lost its identity and self-esteem.
In concluding this dissertation, I realize the dilemma with which I have confronted the members of the Bene Israel community, which I have reviewed after strenuous and constant self-examination. I chose to trace the younger generations' diverging urges toward acceptance, and wish to clarify my intricate analysis of this controversial community. The complexity of living in a Jewish state where citizens cannot fulfill basic desires such as matrimony forced an entire community to conceal its true identity and perjure itself in order to blend in, for the sake of national integration. Although scholars accepted their new claims, the skepticism of the rabbinate authorities prevails, and they refuse to marry them to this day, suspecting them to be an Indian caste.
Knowledge graphs are structured repositories of knowledge that store facts about the general world or a particular domain in terms of entities and their relationships. Owing to the heterogeneity of the use cases that are served by them, there arises a need for the automated construction of domain-specific knowledge graphs from texts. While there have been many research efforts towards open information extraction for automated knowledge graph construction, these techniques do not perform well in domain-specific settings. Furthermore, regardless of whether they are constructed automatically from specific texts or based on real-world facts that are constantly evolving, all knowledge graphs inherently suffer from incompleteness as well as errors in the information they hold.
This thesis investigates the challenges encountered during knowledge graph construction and proposes techniques for their curation (a.k.a. refinement), including the correction of semantic ambiguities and the completion of missing facts. Firstly, we leverage existing approaches for the automatic construction of a knowledge graph in the art domain with open information extraction techniques and analyse their limitations. In particular, we focus on the challenging task of named entity recognition for artwork titles and show empirical evidence of performance improvement with our proposed solution for the generation of annotated training data.
Towards the curation of existing knowledge graphs, we identify the issue of polysemous relations that represent different semantics based on the context. Having concrete semantics for relations is important for downstream applications (e.g. question answering) that are supported by knowledge graphs. Therefore, we define the novel task of finding fine-grained relation semantics in knowledge graphs and propose FineGReS, a data-driven technique that discovers potential sub-relations with fine-grained meaning from existing polysemous relations. We leverage knowledge representation learning methods that generate low-dimensional vectors (or embeddings) for knowledge graphs to capture their semantics and structure. The efficacy and utility of the proposed technique are demonstrated by comparing it with several baselines on the entity classification use case.
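The abstract does not spell out how FineGReS discovers sub-relations, but the general recipe it alludes to — using embeddings to capture relation semantics and grouping the entity pairs of one polysemous relation by their embedding-space offsets — can be sketched as follows; the function name and the plain k-means details are illustrative assumptions, not the thesis's method:

```python
import numpy as np

def discover_subrelations(pair_offsets, k, iters=50, seed=0):
    """Cluster the embedding offsets (tail vector minus head vector) of
    entity pairs connected by one polysemous relation; each cluster is
    a candidate fine-grained sub-relation. Plain k-means for illustration."""
    rng = np.random.default_rng(seed)
    X = np.asarray(pair_offsets, dtype=float)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every offset vector to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned offsets
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

Pairs whose offsets land in different clusters would then be assigned to different sub-relations; a real implementation would also have to choose k and validate the clusters, e.g. on the entity classification use case mentioned above.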
Further, we explore the semantic representations in knowledge graph embedding models. In the past decade, these models have shown state-of-the-art results for the task of link prediction in the context of knowledge graph completion. In view of the popularity and widespread application of the embedding techniques not only for link prediction but also for different semantic tasks, this thesis presents a critical analysis of the embeddings by quantitatively measuring their semantic capabilities. We investigate and discuss the reasons for the shortcomings of embeddings in terms of the characteristics of the underlying knowledge graph datasets and the training techniques used by popular models.
Following up on this, we propose ReasonKGE, a novel method for generating semantically enriched knowledge graph embeddings by taking into account the semantics of the facts that are encapsulated by an ontology accompanying the knowledge graph. With a targeted, reasoning-based method for generating negative samples during the training of the models, ReasonKGE is able to not only enhance the link prediction performance, but also reduce the number of semantically inconsistent predictions made by the resultant embeddings, thus improving the quality of knowledge graphs.
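The abstract leaves ReasonKGE's reasoning procedure abstract; one minimal flavor of ontology-guided negative sampling — corrupting a triple's tail with an entity whose type violates the relation's range constraint, so the negative is guaranteed inconsistent with the ontology — might look like this (all names and the data layout are illustrative assumptions, not the thesis's API):

```python
import random

def typed_tail_negatives(triple, entity_type, rel_range, entities, n=2, seed=0):
    """Corrupt the tail of (h, r, t) with entities whose type violates
    r's range constraint; such negatives are ontologically guaranteed
    to be false, unlike uniformly sampled corruptions."""
    h, r, t = triple
    wrong = [e for e in entities if e != t and entity_type[e] != rel_range[r]]
    rng = random.Random(seed)
    return [(h, r, e) for e in rng.sample(wrong, min(n, len(wrong)))]
```

Unlike uniform corruption, none of these negatives can accidentally be a true fact, which is one way targeted sampling can sharpen the decision boundary of the resulting embeddings.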
It is estimated that data scientists spend up to 80% of their time exploring, cleaning, and transforming their data. A major reason for that expenditure is the lack of knowledge about the used data, which are often from different sources and have heterogeneous structures. As a means to describe various properties of data, metadata can help data scientists understand and prepare their data, saving time for innovative and valuable data analytics. However, metadata do not always exist: some data file formats are not capable of storing them; metadata were deleted for privacy concerns; legacy data may have been produced by systems that were not designed to store and handle metadata. As data are being produced at an unprecedentedly fast pace and stored in diverse formats, manually creating metadata is not only impractical but also error-prone, demanding automatic approaches for metadata detection.
In this thesis, we focus on detecting metadata in CSV files – a type of plain-text file that, similar to spreadsheets, may contain different types of content at arbitrary positions. We propose a taxonomy of metadata in CSV files and specifically address the discovery of three kinds of metadata: line and cell types, aggregations, and primary keys and foreign keys.
Data are organized in an ad-hoc manner in CSV files and do not follow the fixed structure that common data processing tools assume. Detecting the structure of such files is a prerequisite for extracting information from them, which can be addressed by detecting the semantic type, such as header, data, derived, or footnote, of each line or each cell. We propose the supervised-learning approach Strudel to detect the type of lines and cells. CSV files may also include aggregations. An aggregation represents the arithmetic relationship between a numeric cell and a set of other numeric cells. Our proposed AggreCol algorithm is capable of detecting aggregations of five arithmetic functions in CSV files. Note that stylistic features, such as font style and cell background color, do not exist in CSV files. Our proposed algorithms address the respective problems by using only content, contextual, and computational features.
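As a toy illustration of the kind of arithmetic relationship AggreCol searches for (a simplification I am assuming — the actual algorithm covers five arithmetic functions and is not reproduced here), the following sketch flags cells that equal the sum of a contiguous run of cells directly above them:

```python
def find_column_sums(grid, tol=1e-6):
    """grid: rows of numeric values, or None for non-numeric cells.
    Returns ((row, col), (first, last)) for every cell that equals the
    sum of the contiguous cells in rows first..last of the same column."""
    hits = []
    for col in range(len(grid[0])):
        for row in range(2, len(grid)):
            target = grid[row][col]
            if target is None:
                continue
            total = 0.0
            for start in range(row - 1, -1, -1):
                if grid[start][col] is None:
                    break  # contiguity broken by a non-numeric cell
                total += grid[start][col]
                # require at least two summands
                if start < row - 1 and abs(total - target) <= tol:
                    hits.append(((row, col), (start, row - 1)))
    return hits
```

On a sheet whose last row totals the rows above it, this recovers the aggregation from content alone — matching the constraint noted above that CSV files offer no stylistic cues.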
Storing a relational table is also a common usage of CSV files. Primary keys and foreign keys are important metadata for relational databases, which are usually not present for database instances dumped as plain-text files. We propose the HoPF algorithm to holistically detect both constraints in relational databases. Our approach is capable of distinguishing true primary and foreign keys from a great amount of spurious unique column combinations and inclusion dependencies, which can be detected by state-of-the-art data profiling algorithms.
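HoPF's contribution is precisely the holistic pruning of spurious candidates; the naive baseline it improves upon — every unique column is a primary-key candidate, every value containment a foreign-key candidate — can be sketched like this (the data layout and names are my illustration, not the thesis's):

```python
def naive_keys(tables):
    """tables: {table: {column: [values]}}. Returns single-column
    primary-key candidates (non-null, all-unique columns) and
    foreign-key candidates (columns whose values are contained in a
    PK candidate of another table). Deliberately over-generates:
    pruning the spurious candidates is what HoPF addresses."""
    pks = {}
    for t, cols in tables.items():
        for c, vals in cols.items():
            if all(v is not None for v in vals) and len(set(vals)) == len(vals):
                pks.setdefault(t, []).append(c)
    fks = []
    for t, cols in tables.items():
        for c, vals in cols.items():
            for t2, pk_cols in pks.items():
                if t2 == t:
                    continue
                for pk in pk_cols:
                    if set(vals) <= set(tables[t2][pk]):
                        fks.append((t, c, t2, pk))
    return pks, fks
```

Even on a two-table example this yields several unique columns that are not real keys, which illustrates why distinguishing genuine from spurious candidates matters.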
The Arctic is changing rapidly and permafrost is thawing. Especially ice-rich permafrost, such as the late Pleistocene Yedoma, is vulnerable to rapid and deep thaw processes such as surface subsidence after the melting of ground ice. Due to permafrost thaw, the permafrost carbon pool is becoming increasingly accessible to microbes, leading to increased greenhouse gas emissions, which in turn amplify climate warming.
An assessment of the molecular structure and biodegradability of permafrost organic matter (OM) is therefore urgently needed. My research revolves around the question "how does permafrost thaw affect its OM storage?" More specifically, I assessed (1) how molecular biomarkers can be applied to characterize permafrost OM, (2) greenhouse gas production rates from thawing permafrost, and (3) the quality of OM in frozen and (previously) thawed sediments.
I studied deep (max. 55 m) Yedoma and thawed Yedoma permafrost sediments from Yakutia (Sakha Republic). I analyzed sediment cores taken below thermokarst lakes on the Bykovsky Peninsula (southeast of the Lena Delta) and in the Yukechi Alas (Central Yakutia), and headwall samples from the permafrost cliff Sobo-Sise (Lena Delta) and the retrogressive thaw slump Batagay (Yana Uplands). I measured biomarker concentrations of all sediment samples. Furthermore, I carried out incubation experiments to quantify greenhouse gas production in thawing permafrost.
I showed that the biomarker proxies are useful for assessing the source of the OM and for distinguishing between OM derived from terrestrial higher plants, aquatic plants, and microbial activity. In addition, I showed that some proxies help to assess the degree of degradation of permafrost OM, especially when combined with sedimentological data in a multi-proxy approach. The OM of Yedoma is generally better preserved than that of thawed Yedoma sediments. Greenhouse gas production was highest in the permafrost sediments that thawed for the first time, meaning that the frozen Yedoma sediments contained the most labile OM. Furthermore, I showed that methanogenic communities had established themselves in the recently thawed sediments, but not yet in the still-frozen sediments.
My research provided the first molecular biomarker distributions and organic carbon turnover data as well as insights in the state and processes in deep frozen and thawed Yedoma sediments. These findings show the relevance of studying OM in deep permafrost sediments.
Does youth matter?
(2022)
This dissertation is a compilation of publications and submitted publication manuscripts that seek to improve the understanding of modern partnership trajectories. Romantic relationships constitute one of the most important dimensions in a person’s life. They serve to satisfy social and emotional needs (Arránz Becker, 2008) and have an impact on various other dimensions of life. Since the 1970s, partnership formation has been characterized by increased heterogeneity, has become less ordered and much more diverse in terms of living arrangements and the number of unions across the life course (Helske et al., 2015; Ross et al., 2009). This dissertation argues that while partnerships have become more unstable, the need for attachment and the importance of relationships have remained high, if not increased, as evidenced by the prevalence of couple relationships, which has remained quite stable (Eckhardt, 2015). The life course perspective (Elder, 1994; Elder et al., 2004; Mayer, 2009) offers an appropriate framework for understanding partnership formation throughout the life course. This perspective stresses the path dependency of the life course as well as the interdependencies of life domains (Bernardi et al., 2019). Thus, it can be argued that conditions, resources, and experiences in youth have a substantial influence on later life course outcomes. Given the increasing heterogeneity of partnership trajectories, research to understand partnership processes cannot be based only on single events (e.g., marriage or divorce) or life stages, but must be explored in a dynamic context and over a longer period of time. In sum, this thesis argues that partnership trajectories have to be considered from a holistic perspective: not only single transitions or events, but the whole process is needed to describe modern partnership histories adequately.
Additionally, as partnership trajectories are linked to various outcomes (e.g., economics, health, effects on children), it is therefore highly relevant to improve our understanding of partnership dynamics and their determinants and consequences. Findings in this field of research contribute to a better understanding of how childhood and youth are of prospective importance for the later partnership trajectories and whether there are any long-term effects of the conditions and resources formed and stabilized in youth, which then help to understand and explain partnership dynamics. Thus, the interest of this thesis lies in the longitudinal description and prediction of the dynamics of partnership trajectories in light of the individual resources formed and stabilized in youth, as well as in the investigation of the consequences of different partnership trajectory patterns on individual well-being. For these objectives, a high demand on the data is required, as prospective data at the beginning of the partnership biography are needed, as well as data on current life dimensions and the detailed partnership history. The German LifE Study provides this particular data structure as it examines life courses of more than 1,300 individuals from adolescence to middle adulthood. With regard to the overall aim of this dissertation, the main conclusion is that early life conditions, experiences, and resources influence the dynamics of individual partnership trajectories. The results illustrate that youth matters and that characteristics and resources anchored in youth influence the timing of early status passages, which sets individuals on specific life paths. However, in addition to personal and social resources, partnership trajectories were also significantly influenced by individuals’ sociodemographic placement. Additionally, individual resources are also linked to the overall turbulence or stability of partnership trajectories. 
This overall dynamic, which is reflected in different partnership patterns, influences individual well-being, with stability being associated with greater satisfaction and with instability (for women) or permanent singlehood (for men) having a negative impact on well-being. My analyses contribute to life course research by examining path dependency against the background of various individual factors (socio-structural and psychological characteristics) to model decision-making processes in partnerships in more detail. They do so by also including non-cohabiting union types in the analyses, by accounting for pre-trajectory life conditions and resources, and, most importantly, by modeling the partnership trajectory from a holistic and dynamic perspective, applying appropriate and modern statistical methods to a unique dataset.
Ongoing climate change is altering the living conditions of many organisms on this planet at an unprecedented pace. Hence, it is crucial for the survival of species to adapt to these changing conditions. In this dissertation, Silene vulgaris is used as a model organism to understand the adaptation strategies of widely distributed plant species to current climate change. Plant species with a wide geographic range, in particular, are expected to show high phenotypic plasticity or genetic differentiation in response to the different climatic conditions they grow in. However, they are often underrepresented in research.
In the greenhouse experiment presented in this thesis, I examined the phenotypic responses and plasticity of S. vulgaris to estimate its adaptation potential. Seeds from 25 wild European populations were collected along a latitudinal gradient and grown in a greenhouse under three precipitation regimes (65 mm, 75 mm, 90 mm) and two temperature regimes (18°C, 21°C) that resembled a possible climate-change scenario for central Europe. Afterwards, different biomass- and fecundity-related plant traits were measured.
The treatments significantly influenced the plants but did not reveal a latitudinal difference in response to the climate treatments for most plant traits. The number of flowers per individual, however, showed a stronger plasticity in northern European populations (e.g., Swedish populations), where numbers decreased more drastically with increased temperature and decreased precipitation.
To gain an even deeper understanding of the adaptation of S. vulgaris to climate change, it is also important to reveal the underlying phylogeny of the sampled populations. Therefore, I analysed their population genetic structure using genome-wide markers obtained by double-digest restriction-site-associated DNA sequencing (ddRAD-seq).
The sequencing revealed three major genetic clusters among the sampled European S. vulgaris populations: one comprising southern European populations, one western European populations, and one central European populations. A subsequent analysis of experimental trait responses among the clusters to the climate-change scenario showed that the genetic clusters differed significantly in biomass-related traits and in the number of days to flowering. However, half of the traits showed parallel response patterns to the experimental climate-change scenario.
In addition to potential geographic and genetic differences in adaptation to climate change, this dissertation also deals with response differences between the sexes in S. vulgaris. As S. vulgaris is gynodioecious, its populations consist of female and hermaphroditic individuals, and the sexes can differ in their morphological traits, a phenomenon known as sexual dimorphism. As climate change is becoming an important factor influencing plant morphology, it remains unclear if and how the sexes of sexually dimorphic species respond differently. To examine this question, the sex of each individual plant was determined during the greenhouse experiment and the measured plant traits were analysed accordingly. In general, hermaphrodites had a higher number of flowers but a lower number of leaves than females. With regard to the climate-change treatment, I found that hermaphrodites showed a milder negative response to higher temperatures in the number of flowers produced and in specific leaf area (SLA) compared to females.
Synthesis: The significant treatment response in Silene vulgaris, independent of population origin for most traits, suggests a high degree of universal phenotypic plasticity. Moreover, the three European intraspecific genetic lineages detected showed parallel response patterns in half of the traits, likewise suggesting considerable phenotypic plasticity. Hence, plasticity might represent a possible adaptation strategy of this widely distributed species under ongoing and future climatic changes. The results on sexual dimorphism show that females and hermaphrodites differ mainly in their number of flowers and that females are affected more strongly by the experimental climate-change scenario. These results provide a solid knowledge base on sexual dimorphism in S. vulgaris under climate change, but further research is needed to determine the long-term impact on the species' breeding system.
In summary, this dissertation provides comprehensive insight into the adaptation mechanisms and their consequences in a widely distributed, gynodioecious plant species and advances our understanding of the impact of anthropogenic climate change on plants.
This thesis examines armed conflict scenarios in which states participating in multinational military operations take opposing forces or other persons into custody during a detention operation and then transfer them to the forces of another state, often the host nation, which may have a questionable human rights record. Detainees then run the risk of becoming victims of serious violations of their rights.