Refine
Year of publication
- 2017 (182)
Document Type
- Doctoral Thesis (182)
Language
- English (182)
Keywords
- Klimawandel (4)
- climate change (4)
- Arbeitsmarktpolitik (2)
- Bioraffinerie (2)
- DNA origami (2)
- Epidemiologie (2)
- Innovationsmanagement (2)
- Machine Learning (2)
- Nanopartikel (2)
- Naturgefahren (2)
- Netzwerke (2)
- Prosodie (2)
- Zeitreihenanalyse (2)
- biomass (2)
- coordination (2)
- entrepreneurship (2)
- language production (2)
- nanoparticles (2)
- natural hazards (2)
- networks (2)
- numerical modeling (2)
- numerische Modellierung (2)
- prosody (2)
- self-assembly (2)
- "Reactive Flux" Ratenkonstanten (1)
- 2-Thiodisaccharide (1)
- 2-Thiodisaccharides (1)
- 3D Modellierung (1)
- 3D geovirtual environment (1)
- 3D geovisualization system (1)
- 3D-Geovisualisierungssystem (1)
- 3D-geovirtuelle Umgebung (1)
- 3D-modeling (1)
- Abhängigkeit (1)
- Adipositas (1)
- Aggression (1)
- Aktivierungsentropie (1)
- Akzeptabilitätsbewertung (1)
- Anregungs-Abfrage-Experiment (1)
- Antibiotika (1)
- Antiferromagnetismus (1)
- Arbeitslosenversicherung (1)
- Arbeitslosigkeit (1)
- Arbeitssuche (1)
- Arbeitssuchverhalten (1)
- Arktik (1)
- Artverbreitung (1)
- Asteroseismologie (1)
- Atmosphärendynamik (1)
- Aufarbeitung von Fruktose (1)
- Ausdauerleistung (1)
- Australia (1)
- Australien (1)
- Automatisierung (1)
- Autotrophie (1)
- Azobenzol (1)
- Azobenzol-haltiges Tensid (1)
- Bachstufen (1)
- Bakterien Sensor (1)
- Beobachtung anthropogener Aktivitäten (1)
- Beobachtung von Erdbebenquellen (1)
- Betriebswirtschaft (1)
- Bioinformatik (1)
- Biomasse (1)
- Biorefinery (1)
- Biosensoren (1)
- Biosynthese (1)
- Blockcopolymervesikel (1)
- Bohrlochrandausbrüche (1)
- Borel Funktionen (1)
- Borel functions (1)
- Brand Equity (1)
- Brand Management (1)
- Brechungsindex von Azobenzol-haltigen Tensiden (1)
- Business Development (1)
- Canada (1)
- Catalysis (1)
- Cauchy horizon (1)
- Cauchyhorizont (1)
- Cer Ammonium Nitrat (CAN) (1)
- Ceric Ammonium Nitrate (CAN) (1)
- Chemodynamik der Milchstraße (1)
- Cherenkov showers (1)
- Cherenkov-Schauern (1)
- Chile (1)
- Closure Positive Shift (CPS) (1)
- Coarea Formel (1)
- Constraint-basierte Modellierung (1)
- Cre Rekombinase (1)
- Cre recombinase (1)
- Cyanobakterien (1)
- DFTB3 (1)
- DNA Origami (1)
- DNA assembly (1)
- DNA damage (1)
- DNA-Origami (1)
- DNA-Schädigung (1)
- DNS Assemblierung (1)
- Dark Matter (1)
- Datenintegration (1)
- Datenqualität (1)
- Deep Learning (1)
- Design Thinking (1)
- Deutschland (1)
- Diabetes mellitus Typ 2 (1)
- Dichteheterogenitäten im oberen Mantel (1)
- Dihydroxyaceton (1)
- Discrimination Networks (1)
- Diversität (1)
- Dokument Analyse (1)
- Doppelsterne (1)
- Doppelt hydrophile Blockcopolymere (1)
- Dreisprachigkeit (1)
- Dunkler Materie (1)
- Duplikaterkennung (1)
- E-Learning (1)
- ECIS (1)
- EM (1)
- Eisenbahninfrastruktur (1)
- El`gygytgyn Kratersee (1)
- Elektromyographie (1)
- Ellipse (1)
- El’gygytgyn Crater Lake (1)
- Empfehlungen (1)
- Entdeckung (1)
- Entrepreneurship (1)
- Entstehung der Milchstraße (1)
- Entstehung von Galaxien (1)
- Entwicklung (1)
- Entwicklung von Galaxien (1)
- Erdbeben Modellierung (1)
- Ereigniskorrelierte Hirnpotentiale (EKP) (1)
- Ernährung (1)
- Ernährungsmuster (1)
- Erosion (1)
- European bats (1)
- Europäische Fledermausarten (1)
- Eutrophierung (1)
- Evolutionsgenetik (1)
- FRET (1)
- Feinsedimente (1)
- Ferromagnetismus (1)
- Festigkeit des Schiefer (1)
- Fettoxidation (1)
- Fettstoffwechsel (1)
- FhuA (1)
- Finanzwissenschaften (1)
- Flocking (1)
- Fluoreszenz (1)
- Fokus (1)
- Fotoschalter (1)
- Förster resonance energy transfer (1)
- Förster-Resonanzenergietransfer (1)
- GIS (1)
- GIS-Dienstkomposition (1)
- GTPase (1)
- Galaxien (1)
- Galaxien: Evolution (1)
- Galaxien: Kinematik und Dynamik (1)
- Galaxien: Statistiken (1)
- Gator Netzwerk (1)
- Gator networks (1)
- Gebirgsbäche (1)
- Geographical mobility (1)
- Geomorphologie (1)
- Geothermie (1)
- Gerinne-Hang-Kopplung (1)
- German (1)
- Germany (1)
- Geschiebetransport (1)
- Geschmackssystems (1)
- Gewinnung benannter Entitäten (1)
- Gleichgewicht (1)
- Gleichgewichtstraining (1)
- Goldnanopartikel (1)
- Gravitationswelle (1)
- Grundwasser (1)
- Gründungsförderung (1)
- Gründungszuschuss (1)
- Hitzewellen (1)
- Hohlzylinderversuche (1)
- Huftiere (1)
- Hybrid (1)
- Hybridmodell (1)
- Hydroxymethylfurfural (1)
- IC Model (1)
- IC Modell (1)
- Iceland (1)
- Information Retrieval (1)
- Informationsextraktion (1)
- Inklusionsabhängigkeit (1)
- Innovation (1)
- Innovation Management (1)
- Inversions-Theorie (1)
- Island (1)
- Jetstream (1)
- Job search behavior (1)
- Kaffee (1)
- Kanada (1)
- Katalyse (1)
- Kenias öffentlicher Dienst (1)
- Kenya public service (1)
- Kerguelen (1)
- Kern Methoden (1)
- Ketzin (1)
- Kindheit (1)
- Kinematik (1)
- Klimafolgenanalyse (1)
- Kohlenhydrat-Protein Interaction (1)
- Kohlenhydrate (1)
- Kohlenhydratoxidation (1)
- Kohlenstoff (1)
- Kohlenstoff-Nanopunkte (1)
- Kohlenstoff-Punkte (1)
- Kohlenstoffisotope (1)
- Kohlenstoffnitride (1)
- Komposite (1)
- Konformationsselektion (1)
- Koordination (1)
- Koordinationskomplexe (1)
- Koordinierung (1)
- Krafttraining (1)
- Kultur (1)
- Küste (1)
- L2 sentence processing (1)
- Labor market policies (1)
- Ladung Transport (1)
- Ladungsträgerrekombination (1)
- Landformen (1)
- Landschaftsanalyse (1)
- Landschaftspräferenzen (1)
- Lawinen (1)
- Leerlaufspannung (1)
- Leistungsinformationen verwenden (1)
- Leistungsmanagement (1)
- Leseexperiment (1)
- Licht (1)
- Lignin (1)
- Line Suche (1)
- Litoral (1)
- Lumping (1)
- Lysimeter (1)
- Längsschnitt (1)
- Lösungsmitteleffekte (1)
- Lösungsprozess (1)
- Magnetosomen-Ketten (1)
- Makrophyten (1)
- Mantleplumes (1)
- Markenführung (1)
- Markenwert (1)
- Marketing Strategy (1)
- Markov state models (1)
- Markowketten (1)
- Maximalkraft/Schnellkraft (1)
- Mediationsanalyse (1)
- Meeres-Governance (1)
- Mehrsprachigkeit (1)
- Mesokristalle (1)
- Message Passing Interface (1)
- Metabolomics (1)
- Metadaten (1)
- Metanome (1)
- Microdialyse (1)
- Microeconometrics (1)
- Microviridin (1)
- Middleware (1)
- Mikrobiologie (1)
- Mikroökonometrie (1)
- Milky Way chemodynamics (1)
- Milky Way evolution (1)
- Minimax Optimalität (1)
- Mobil (1)
- Modellierung des seismischen Zyklus (1)
- Mongolei (1)
- Mongolia (1)
- N-Alkylglycin (1)
- N-alkyl-glycine (1)
- NGS (1)
- NLME (1)
- NLP (1)
- Nachkommen (1)
- Nanolinsen (1)
- Naturrisiken (1)
- Naturschutz (1)
- Naturstoffe (1)
- Navier-Stokes-Gleichungen (1)
- Navier-Stoks equations (1)
- Neo-institutionalismus (1)
- Nettoproduktion (1)
- Nettorotation der Lithosphäre (1)
- Niederschlag (1)
- Normalenbündel (1)
- Norwegian (1)
- Norwegisch (1)
- Nucleus parabrachialis (1)
- OLED (1)
- Oberflächenwasser-Grundwasser Interaktion (1)
- Objekttopikalisierung (1)
- Ontologien (1)
- Optimierung von Biosynthesewegen (1)
- Optogenetik (1)
- Organisationsreform (1)
- Organisationstheorie (1)
- Organizational innovation (1)
- Oxo-Kohlenstoff (1)
- PBPK (1)
- PPGIS (1)
- Partikelverben (1)
- Periphyton (1)
- Permafrost (1)
- Personal Data (1)
- Pflanzen-Habitat Interaktionen (1)
- Pharmakodynamik (1)
- Photochemie (1)
- Photogrammetrie (1)
- Photokatalyse (1)
- Phylogeographie (1)
- Plasmonik (1)
- Plattenbewegungen (1)
- Plume-Rücken Interaktion (1)
- Polyester (1)
- Polykondensation (1)
- Polymerchemie (1)
- Polymere (1)
- Polymersynthese (1)
- Polypeptoide (1)
- Populations Analyse (1)
- Populationsökologie (1)
- Posidonia shale (1)
- Posidonienschiefer (1)
- Prevalence (1)
- Privatsphäre (1)
- Product Scandals (1)
- Profilerstellung für Daten (1)
- Protease-Inhibitoren (1)
- Protein-Polymer Konjugaten (1)
- Protein-Protein-Interaktion (1)
- Proteinkinetik (1)
- Prozessidentifikation (1)
- Prozesssynchronisierung (1)
- Prävalenz (1)
- Psycholinguistik (1)
- QM/MM Molekulardynamik (1)
- QM/MM stochastic dynamics (1)
- Quadratsäure (1)
- Quantenchemie (1)
- RAFT/MADIX Polymerisation (1)
- RAFT/MADIX polymerization (1)
- ROP (1)
- ROS (1)
- Radarinterferometrie mit synthetischer Apertur (1)
- Radikalreaktionen (1)
- Radiokarbon (1)
- Radiosensitization (1)
- Rauheit (1)
- Raumwellen (1)
- Reaktionsmechanismen (1)
- Regionale Mobilität (1)
- Regionalökonometrie (1)
- Regionalökonomie (1)
- Reibung an Plattengrenzen (1)
- Rektifizierbarkeit höherer Ordnung (1)
- Reptilien (1)
- Rete Netzwerk (1)
- Rete networks (1)
- RiPP (1)
- Ringöffnungspolymerisation (1)
- Risiko- und Vulnerabilitätsfaktoren (1)
- Risk and Vulnerability Factors (1)
- Rubisco (1)
- Rumpfkinematik (1)
- Rumpfkraft (1)
- Réunion (1)
- SERS (1)
- Satellitenbilder (1)
- Satzverarbeitung (1)
- Schadensmodellierung (1)
- Schnittstelle (1)
- Scrambling (1)
- Sedimentfracht (1)
- Sedimentquellenidentifizierung (1)
- Seen (1)
- Seismizität und Tektonik (1)
- Selbstassemblierung (1)
- Selbstorganisation (1)
- Senioren (1)
- Serviceorientierte Architektur (SOA) (1)
- Sexual Aggression (1)
- Sexuelle Aggression (1)
- Silbernanopartikel (1)
- Skelettmuskel (1)
- Software-basierte Cache-Kohärenz (1)
- Southeast Asia (1)
- Spannung (1)
- Spannungsänderungen (1)
- Spektralanalyse (1)
- Spektroskopie (1)
- Spillover Effects (1)
- Spracherkennung (1)
- Spracherwerb (1)
- Sprachinhibition (1)
- Sprachproduktion (1)
- Sprachwahrnehmung (1)
- Sprachwechsel (1)
- Standard (1)
- Staufen im Breisgau (1)
- Sternentwicklung (1)
- Sternwinde (1)
- Stoffwechselnetze (1)
- Strategisches Management (1)
- Städe (1)
- Städte Effizienz (1)
- Subduktion (1)
- Subjective beliefs (1)
- Subjektive Erwartungen (1)
- Surface Hopping Dynamik (1)
- Syntax (1)
- Systemsbiologie (1)
- Säuglinge (1)
- Südostasien (1)
- Talsperre (1)
- Teile und Herrsche (1)
- Telekonnektionen (1)
- Temperatur (1)
- Temporäre Ambiguität (1)
- Theory of Mind (1)
- Thermoregulationsverhalten (1)
- Tiefe Biosphäre (1)
- Transkriptionsfaktor (1)
- Transmembranprotein (1)
- Trendanalysen (1)
- Turkey (1)
- Typ-2-Diabetes mellitus (1)
- Türkei (1)
- UHI (1)
- Unternehmensführung (1)
- Untertage-Kohlevergasung (1)
- Upgrade of Fructose (1)
- VERITAS (1)
- Variabilität (1)
- Verhalten (1)
- Verwaltungsmodernisierung (1)
- Viskositätsstruktur im oberen Mantel (1)
- Vollkorn (1)
- Vorhersagemodelle (1)
- Wasserhaushalt (1)
- Wellenbrechung und Diffraktion (1)
- Wellengleichung (1)
- Wetterlagen (1)
- Wirbelsäule (1)
- Wirkstoffinteraktionen (1)
- Wissen (1)
- Wissenschaftlichesworkflows (1)
- Wolf-Rayet (1)
- Wärmetransport (1)
- Wüste (1)
- Zeitreihenuntersuchung (1)
- Zeitskala (1)
- Zell Bewegung (1)
- Zell-substrat Adhäsion (1)
- Zweisprachigkeit (1)
- absorptive capacity (1)
- acceptability judgments (1)
- activated urethane (1)
- activation entropy (1)
- aggression (1)
- agrammatic aphasia (1)
- agricultural production systems (1)
- agronomic factors (1)
- aktiviertes Urethan (1)
- aldehyde (1)
- anger regulation (1)
- annotation (1)
- antecedent complexity (1)
- antibiotic combinations (1)
- antiferromagnetism (1)
- approximate differentiability (1)
- approximative Differenzierbarkeit (1)
- arctic (1)
- argument mining (1)
- argumentation (1)
- argumentation mining (1)
- argumentation structure (1)
- argumentation structure parsing (1)
- asteroseismology (1)
- asymmetric competition (1)
- asymmetrische Konkurrenz (1)
- atmosphere dynamics (1)
- automatic classification (1)
- automation (1)
- automatische Klassifizierung (1)
- autotrophy (1)
- azobenzene (1)
- azobenzene refractive index (1)
- azobenzene surfactant (1)
- bacterial population growth (1)
- bacterial sensor (1)
- balance (1)
- balance training (1)
- bedload transport (1)
- behaviour (1)
- benthic primary producers (1)
- benthische Primärproduzenten (1)
- bifunctional enzyme (1)
- bildbasierte Repräsentation (1)
- bilingualism (1)
- binary stars (1)
- bioenergy (1)
- biogeography (1)
- bioinformatics (1)
- biomass flows (1)
- biophysics (1)
- biorefinery (1)
- biosensors (1)
- biosynthesis (1)
- block copolymer vesicles (1)
- body waves (1)
- borehole breakouts (1)
- business support (1)
- carbohydrate oxidation (1)
- carbohydrate-protein interaction (1)
- carbohydrates (1)
- carbon (1)
- carbon dots (1)
- carbon isotopes (1)
- carbon nanodots (1)
- carbon nitrides (1)
- catalytic azobenzene isomerization (1)
- cell movement (1)
- cell-substrate adhesion (1)
- channel steps (1)
- channel-hillslope coupling (1)
- charge carrier recombination (1)
- charge transport (1)
- childhood (1)
- chloro-ribosome (1)
- cities (1)
- climate impact analysis (1)
- coarea formula (1)
- coast (1)
- coffee (1)
- coherence (1)
- communities (1)
- composite materials (1)
- computational seismology (1)
- computergestützte Seismologie (1)
- conformational selection (1)
- conservation (1)
- constraint-based modeling (1)
- contrastive topic (1)
- controlled radical polymerization (1)
- conventional agriculture (1)
- coordination complexes (1)
- cosmological simulations (1)
- culture (1)
- cyanobacteria (1)
- damage modeling (1)
- das Cauchyproblem (1)
- das Goursatproblem (1)
- das charakteristische Cauchyproblem (1)
- data integration (1)
- data profiling (1)
- data quality (1)
- deep biosphere (1)
- deep learning (1)
- degradation (1)
- demografischer Wandel (1)
- demographic change (1)
- dependency (1)
- der Städtische Wärmeinseleffekt (1)
- der Urbane Hitzeinsel Effekt (1)
- der Urbane Hitzeinsel Effekt basierend auf Landoberflächentemperatur (1)
- desert (1)
- development (1)
- die Stadtform (1)
- die linearisierte Einsteingleichung (1)
- dihydroxyacetone (1)
- discourse parsing (1)
- discourse structure (1)
- discovery (1)
- discrimination networks (1)
- dissociative electron attachment (1)
- dissoziative Elektronen Anlagerung (1)
- diversity (1)
- divide-and-conquer (1)
- document analysis (1)
- double hydrophilic block copolymers (1)
- drug drug interactions (1)
- duplicate detection (1)
- dynamic topography (1)
- dynamische Topographie (1)
- earthquake modeling (1)
- earthquake source observations (1)
- ecological modelling (1)
- eindeutige Spaltenkombination (1)
- einseitige Kommunikation (1)
- elderly (1)
- electrical chemotaxis assay (1)
- electromyography (1)
- elektrischer Chemotaxis Assy (1)
- ellipsis (1)
- elliptic quasicomplexes (1)
- elliptische Quasi-Komplexe (1)
- energy efficiency (1)
- epidemiology (1)
- equatorial electrojet (1)
- erosion (1)
- estuary (1)
- event-related potentials (ERP) (1)
- evolutionary history (1)
- exercise performance (1)
- exercise supervision (1)
- fat oxidation (1)
- ferromagnetism (1)
- flocking (1)
- fluorescence (1)
- flussunterbrechende Analyse (1)
- focus (1)
- freshwater ecosystems (1)
- functional dependency (1)
- funktionale Abhängigkeit (1)
- fusion (1)
- galactic astronomy (1)
- galactolipids (1)
- galaktische Astrophysik (1)
- galaxies: evolution (1)
- galaxies: kinematics and dynamics (1)
- galaxies: statistics (1)
- galaxy (1)
- galaxy evolution (1)
- galaxy formation (1)
- garden path (1)
- gelöster organischer Kohlenstoff (1)
- geodynamic models (1)
- geodynamische Modelle (1)
- geomechanical modelling (1)
- geomechanische Modellierung (1)
- geomorphology (1)
- geospatial services (1)
- geothermal (1)
- giant unilamellar vesicles (1)
- gold nanoparticles (1)
- grafische Modelle (1)
- graphical models (1)
- gravitational wave (1)
- groundwater (1)
- heat transport (1)
- heatwaves (1)
- high quantum yield (1)
- high resolution mass spectrometry (1)
- higher order rectifiability (1)
- hochauflösende Massenspektrometrie (1)
- hohe Quantenausbeute (1)
- hollow cylinder experiments (1)
- hybrid (1)
- hybrid model (1)
- hybrid multi-junction solar cell (1)
- hybride Mehrschichtsolarzellen (1)
- hyporheic zone (1)
- hyporheische Zone (1)
- image-based representation (1)
- importance sampling (1)
- inclusion dependency (1)
- incremental graph pattern matching (1)
- individual anaerobic threshold (1)
- individuelle anaerobe Schwelle (1)
- induced fit (1)
- induced seismicity (1)
- induzierte Passform (1)
- induzierte Seismizität (1)
- infants (1)
- inflectional morphology (1)
- information extraction (1)
- inkrementelle Graphmustersuche (1)
- innovation adoption (1)
- innovation management (1)
- interdepartmental committee (1)
- interface (1)
- intermediaries (1)
- interministerielle Arbeitsgruppe (1)
- interspecific interactions (1)
- interspezifische Wechselwirkungen (1)
- inverse Probleme (1)
- inverse problems (1)
- inverse theory (1)
- jet stream (1)
- job search (1)
- katalytische Isomerisation von Azobenzolen (1)
- kernel methods (1)
- kinematics (1)
- knowledge (1)
- kontrastives Topik (1)
- kontrollierte radikalische Polymerisationen (1)
- konventionelle Landwirtschaft (1)
- kosmologische Simulationen (1)
- labor market policy (1)
- lakes (1)
- landforms (1)
- landscape preferences (1)
- landscape analysis (1)
- language acquisition (1)
- language inhibition (1)
- language recognition (1)
- language switching (1)
- light (1)
- line search (1)
- lipid metabolism (1)
- lithosphere net rotation (1)
- lithosphere stress field (1)
- lithosphärisches Spannungsfeld (1)
- littoral eutrophication (1)
- local and regional factors (1)
- longitudinal methodology (1)
- lower critical solution temperature (1)
- lower extremity muscle strength/power (1)
- lumping (1)
- lunar tides (1)
- lunare Gezeiten (1)
- lysimeter (1)
- machine learning (1)
- macrophytes (1)
- magnetische resonante Beugung (1)
- magnetischer Zirkulardichroismus (1)
- magnetosome chains (1)
- magnetotactic bacteria (1)
- magnetotaktische Bakterien (1)
- mantle plumes (1)
- marine governance (1)
- maschinelle Verarbeitung natürlicher Sprache (1)
- massereiche Sterne (1)
- massive stars (1)
- maternal diet (1)
- maternale Ernährung (1)
- mathematical modelling (1)
- mathematische Modellierung (1)
- mechanistic modeling (1)
- mechanistische Modellierung (1)
- mechanobiology (1)
- mediation analysis (1)
- mental number representation (1)
- mentale Zahlenrepräsentation (1)
- mesocrystals (1)
- metabolic networks (1)
- metabolomics (1)
- metadata (1)
- metanome (1)
- microbial decomposition (1)
- microbiology (1)
- microdialysis (1)
- microeconometric analyses (1)
- microviridin (1)
- microwave synthesis (1)
- middleware (1)
- mikrobieller Abbau (1)
- mikrowellengestützte Synthese (1)
- mikrökonometrische Analysen (1)
- minimax optimality (1)
- mitochondrial transformation (1)
- mixed methods (1)
- mobile (1)
- mock observations (1)
- model-driven software engineering (1)
- modellgetriebene Softwareentwicklung (1)
- monitoring of anthropogenic activities (1)
- morphological changes (1)
- morphologischen Veränderungen (1)
- mountain rivers (1)
- multilingualism (1)
- nachaltige Städteentwicklung (1)
- named entity mining (1)
- nanolenses (1)
- nanoporous carbon particles (1)
- nanoporöser Kohlenstoffpartikel (1)
- natural language processing (1)
- natural products (1)
- natural risks (1)
- natürliche Sprachverarbeitung (1)
- net production (1)
- nicht-lineare gemischte Modelle (NLME) (1)
- nicht-monetäre Bewertung (1)
- nichtisothermer Mehrphasenfluss (1)
- non-isothermal multiphase flow (1)
- non-monetary valuation (1)
- normal bundle (1)
- numerical cognition (1)
- numerical simulation (1)
- numerical techniques (1)
- numerische Kognition (1)
- numerische Methoden (1)
- numerische Simulation (1)
- nutrition (1)
- obesity (1)
- object based image analysis (1)
- object topicalization (1)
- objektbasierte Bildanalyse (1)
- offspring (1)
- one-sided communication (1)
- ontologies (1)
- open circuit voltage (1)
- optically induced dynamics (1)
- optisch induzierte Dynamik (1)
- optogenetics (1)
- organic carbon (1)
- organic matter quality (1)
- organic solar cells (1)
- organische Solarzellen (1)
- organischer Kohlenstoff (1)
- organisches Material (1)
- organization theory (1)
- organizational reform (1)
- oxidase (1)
- oxocarbon (1)
- parabrachial nucleus (1)
- parallel programming (1)
- parallele Programmierung (1)
- participatory mapping (1)
- particle verbs (1)
- pathway engineering (1)
- peptide (1)
- performance information use (1)
- performance management (1)
- periphyton (1)
- permafrost (1)
- persönliche Informationen (1)
- pharmacodynamics (1)
- photocatalysis (1)
- photochemistry (1)
- photogrammetry (1)
- photoswitches (1)
- photosynthesis (1)
- physical attractiveness (1)
- physiologie-basierte Pharmacokinetic (PBPK) (1)
- physische Attraktivität (1)
- plant-habitat interactions (1)
- plasmon nano-particles (1)
- plasmonic catalysis (1)
- plasmonics (1)
- plasmonische Katalyse (1)
- plasmonische Nanopartikeln (1)
- plastid transformation (1)
- plate boundary friction (1)
- plate motions (1)
- plume-ridge interaction (1)
- plötzliche stratosphärische Erwärmungsereignisse (1)
- political economics (1)
- politische Ökonomie (1)
- polycondensation (1)
- polyesters (1)
- polymer chemistry (1)
- polymer synthesis (1)
- polymers (1)
- polymictic lakes (1)
- polymiktische Seen (1)
- polypeptoids (1)
- popPBPK (1)
- popPK (1)
- population analysis (1)
- population ecology (1)
- porous materials (1)
- poröse Materialien (1)
- precipitation (1)
- predictive modelling (1)
- primed picture naming (1)
- privacy (1)
- process identification (1)
- process synchronization (1)
- prosodic boundary cues (1)
- prosodic phrase boundaries (1)
- prosodische Grenzmarkierungen (1)
- prosodische Phrasengrenzen (1)
- protease inhibitor (1)
- protein kinetics (1)
- protein-polymer conjugate (1)
- protein-protein interaction (1)
- psycholinguistics (1)
- public economics (1)
- public management (1)
- public organizations (1)
- public sector innovation (1)
- pump-probe experiment (1)
- quantum chemistry (1)
- radical reactions (1)
- radiocarbon (1)
- railway infrastructure (1)
- rare-earth metals (1)
- raum-zeitlich (1)
- reactive flux rate constants (1)
- reactive oxygen species (1)
- recommendation (1)
- red giant stars (1)
- red meat (1)
- regional economics (1)
- regularity (1)
- reptiles (1)
- resistance training (1)
- ribosomal dynamics (1)
- ribosome assembly (1)
- riesen unilamellare Vesikel (1)
- rote Riesensterne (1)
- rotes Fleisch (1)
- roughness (1)
- satellite images (1)
- schaltbare Materialien (1)
- scientific workflows (1)
- scrambling (1)
- sediment source fingerprinting (1)
- sediment transport modelling (1)
- seismic cycle modeling (1)
- seismicity and tectonics (1)
- self-paced reading (1)
- seltene Erden (1)
- semantic domain modeling (1)
- semantische Domänenmodellierung (1)
- service composition (1)
- service-oriented architecture (SOA) (1)
- shale strength (1)
- silver nanoparticles (1)
- skeletal muscle (1)
- smart materials (1)
- snow avalanches (1)
- software-based cache coherence (1)
- solution process (1)
- solvent effect (1)
- spatial econometrics (1)
- spatio-temporal (1)
- species distribution (1)
- spectral analysis (1)
- spectroscopy (1)
- speech perception (1)
- spine (1)
- squaric acid (1)
- standard (1)
- start-up subsidy (1)
- stellar evolution (1)
- stellar population (1)
- stellar winds (1)
- stellare Population (1)
- stochastic interacting particles (1)
- stochastisches interagierendes System (1)
- stopped-flow (1)
- stress (1)
- stress changes (1)
- structural properties (1)
- strukturelle Eigenschaften (1)
- subduction (1)
- sudden stratospheric warming (1)
- sulfadiazine (1)
- supramolecular chemistry (1)
- supramolekulare Chemie (1)
- surface hopping dynamics (1)
- surface modification (1)
- surface urban heat island effect (1)
- surface water-groundwater interaction (1)
- suspended sediments (1)
- sustainability (1)
- sustainable urban development (1)
- syntax (1)
- synthetic aperture radar interferometry (1)
- synthetic biology (1)
- synthetische Beobachtungen (1)
- synthetische Biologie (1)
- systematischer Review (1)
- systems biology (1)
- task demands (1)
- taste processing (1)
- teleconnections (1)
- temperature (1)
- the Cauchy problem (1)
- the Goursat problem (1)
- the characteristic Cauchy problem (1)
- the linearised Einstein equation (1)
- theory of mind (1)
- thermal isomerization of azobenzene (1)
- thermisch angeregte Isomerisierung von Azobenzolen (1)
- thermo-mechanical modeling (1)
- thermo-mechanische Modellierung (1)
- thermoregulatory behaviour (1)
- time scale (1)
- time series analysis (1)
- time series investigation (1)
- time-kill curves (1)
- time-series analysis (1)
- tissue engineering (1)
- tobacco (1)
- tobramycin (1)
- transcription factor (1)
- transition path sampling (1)
- translation (1)
- translation theory (1)
- transmembrane protein (1)
- treatment effects (1)
- trend analyses (1)
- trilingualism (1)
- trunk kinematics (1)
- trunk muscle strength (1)
- type 2 diabetes (1)
- ultra-thin membrane (1)
- ultradünne Membranen (1)
- ultrafast phenomena (1)
- ultraschnelle Phänomene (1)
- unconventional shale (1)
- underground coal gasification (1)
- unemployment (1)
- unemployment insurance (1)
- ungulates (1)
- unique column combination (1)
- unkonventionelle Schiefer (1)
- untere kritische Entmischungstemperatur (1)
- upper mantle density heterogeneities (1)
- upper mantle viscosity structure (1)
- urban efficiency (1)
- urban form (1)
- urban heat island effect (1)
- variability (1)
- vertical coupling (1)
- vertikale Kuppelung (1)
- vertrackte Probleme (1)
- water balance (1)
- water reservoir (1)
- wave equation (1)
- wave scattering and diffraction (1)
- wavelet (1)
- weather pattern (1)
- weißer Kohlenstoff (1)
- white carbon (1)
- whole-grain (1)
- wicked problems (1)
- x-ray magnetic circular dichroism (XMCD) (1)
- x-ray magnetic resonant diffraction (XMRD) (1)
- Ärgerregulation (1)
- Ästuar (1)
- Öffentliche Organisationen (1)
- Übungsanleitung (1)
- äquatorialer Elektrojet (1)
- ökologische Modellierung (1)
- азобензолсодержащие ПАВ (1)
- каталитическая изомеризация азобензолов (1)
- плазмонные наночастицы (1)
- показатель преломления азобензолов (1)
Institute
- Institut für Biochemie und Biologie (48)
- Institut für Geowissenschaften (27)
- Institut für Physik und Astronomie (22)
- Institut für Chemie (20)
- Institut für Ernährungswissenschaft (9)
- Sozialwissenschaften (9)
- Institut für Umweltwissenschaften und Geographie (8)
- Department Linguistik (7)
- Hasso-Plattner-Institut für Digital Engineering GmbH (6)
- Institut für Mathematik (6)
Via their powerful radiation, stellar winds, and supernova explosions, massive stars (M_ini ≳ 8 M☉) have a tremendous impact on galactic evolution. It became clear in recent decades that the majority of massive stars reside in binary systems. This thesis sets out to quantify the impact of binarity (i.e., the presence of a companion star) on massive stars. For this purpose, massive binary systems in the Local Group, including OB-type binaries, high mass X-ray binaries (HMXBs), and Wolf-Rayet (WR) binaries, were investigated by means of spectral, orbital, and evolutionary analyses.
The spectral analyses were performed with the non-local thermodynamic equilibrium (non-LTE) Potsdam Wolf-Rayet (PoWR) model atmosphere code. Thanks to critical updates in the calculation of the hydrostatic layers, the code became a state-of-the-art tool applicable to all types of hot massive stars (Chapter 2). The eclipsing OB-type triple system δ Ori served as an intriguing test case for the new version of the PoWR code and provided key insights into the formation of X-rays in massive stars (Chapter 3). We further analyzed two prototypical HMXBs, Vela X-1 and IGR J17544-2619, and reached fundamental conclusions regarding the dichotomy of two basic classes of HMXBs (Chapter 4). We performed an exhaustive analysis of the binary R 145 in the Large Magellanic Cloud (LMC), which was claimed to host the most massive stars known. We were able to disentangle the spectrum of the system, and performed an orbital, polarimetric, and spectral analysis, as well as an analysis of the wind-wind collision region. The true masses of the binary components turned out to be significantly lower than suggested, impacting our understanding of the initial mass function and stellar evolution at low metallicity (Chapter 5). Finally, all known WR binaries in the Small Magellanic Cloud (SMC) were analyzed. Although it was theoretically predicted that virtually all WR stars in the SMC should form via mass transfer in binaries, we find that binarity was not important for the formation of the known WR stars in the SMC, implying a strong discrepancy between theory and observations (Chapter 6).
Conformational transition of peptide-functionalized cryogels enabling shape-memory capability
(2017)
The aim of this thesis is to develop approaches to automatically recognise the structure of argumentation in short monological texts. This amounts to identifying the central claim of the text, supporting premises, possible objections, and counter-objections to these objections, and connecting them accordingly into a structure that adequately describes the argumentation presented in the text.
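The target representation described above can be illustrated with a minimal sketch (the class and role names here are illustrative, not the thesis code): each text segment becomes a node with an argumentative role, and support/attack edges connect premises and objections to the claims they bear on.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    role: str  # e.g. "claim", "premise", "objection", "counter-objection"

@dataclass
class ArgGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # (source_idx, target_idx, relation)

    def add(self, text, role):
        # Register a segment and return its node index.
        self.nodes.append(Node(text, role))
        return len(self.nodes) - 1

    def link(self, src, dst, relation):
        # relation is "supports" or "attacks".
        self.edges.append((src, dst, relation))

# A toy argument: central claim, one supporting premise,
# an objection to the claim, and a counter-objection to that objection.
g = ArgGraph()
claim = g.add("We should adopt the proposal.", "claim")
premise = g.add("It reduces costs.", "premise")
objection = g.add("It may lower quality.", "objection")
counter = g.add("Quality is safeguarded by audits.", "counter-objection")
g.link(premise, claim, "supports")
g.link(objection, claim, "attacks")
g.link(counter, objection, "attacks")
```

The point of the sketch is only that the output of such an analysis is a small typed graph over text segments, not a flat label sequence.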
The first step towards such an automatic analysis of the structure of argumentation is to know how to represent it. We systematically review the literature on theories of discourse, as well as on theories of the structure of argumentation against a set of requirements and desiderata, and identify the theory of J. B. Freeman (1991, 2011) as a suitable candidate to represent argumentation structure. Based on this, a scheme is derived that is able to represent complex argumentative structures and can cope with various segmentation issues typically occurring in authentic text.
In order to empirically test our scheme for reliability of annotation, we conduct several annotation experiments, the most important of which assesses the agreement in reconstructing argumentation structure. The results show that expert annotators produce very reliable annotations, while the results of non-expert annotators depend heavily on their training in and commitment to the task.
We then introduce the 'microtext' corpus, a collection of short argumentative texts. We report on its creation, translation, and annotation, and provide a variety of statistics. It is the first parallel corpus (with a German and English version) annotated with argumentation structure, and -- thanks to the work of our colleagues -- also the first annotated according to multiple theories of (global) discourse structure.
The corpus is then used to develop and evaluate approaches to automatically predict argumentation structures in a series of six studies: The first two of them focus on learning local models for different aspects of argumentation structure. In the third study, we develop the main approach proposed in this thesis for predicting globally optimal argumentation structures: the 'evidence graph' model. This model is then systematically compared to other approaches in the fourth study, and achieves state-of-the-art results on the microtext corpus. The remaining two studies aim to demonstrate the versatility and elegance of the proposed approach by predicting argumentation structures of different granularity from text, and finally by using it to translate rhetorical structure representations into argumentation structures.
Previous studies of democracy promotion have analyzed "successful" examples. This partly reflects the political economy of democracy promotion, which follows examples of domestically generated democratic breakthroughs. Yet a scholarly analysis of external influences on internal change cannot draw only on cases of successful democratization; to avoid selection bias and to isolate the causal mechanisms necessary for democratic change, beyond the collapse of an authoritarian regime and liberalization, it must also consider examples of regime change that did not result in democracy.
In this study, Morocco and Tunisia serve as case studies: countries that, after long experience of dictatorship, are attempting to build democratic structures and face different challenges than democratizing regimes that command a relatively effective state.
Da es wenig Austausch zwischen Analysten von demokratischen Übergängen, Konsolidierung und Post-Konflikt Staatenbildung gab, überrascht, dass diese radikal unterschiedliche Situation von demokratischem Wandel und variierenden Rollen externer Akteure in jeder Kategorie bisher nicht differenziert wurde. Die Studie widmet sich den hieraus resultierenden Kernfragen: „Wie, Warum und durch Was wird Demokratieförderung durch externe Akteure funktionieren?“
Die Frage nach dem „Wie“ ist hier die schwierigste, es ist eine Frage nach den Methoden und Strategien des Demokratisierungsprozesses sowie der Unterstützung, die sorgfältig durchdachte Techniken und ihre breite Akzeptanz durch eine Vielzahl von Partner erfordert. Antwort auf die Frage nach dem „Was“ und „Warum“ hingegen findet sich in der Grundlage schlechter Regierungsarbeit und schlechter Wirtschaftsleistung, die zu Aufständen der Bevölkerung führen. Die Resultate der Studie tragen zum Fortschritt in der Demokratieförderung bei.
How are trust, consumer attitudes, and behaviour related in the context of Fairtrade?
This is the fundamental question addressed in this thesis. Lea Dirkwinkel analyses it using the example of the Fairtrade label, which symbolizes the product certification system of Fairtrade International and is the best-known example of the Fairtrade movement.
The research question is motivated, on the one hand, by the fact that consumers cannot verify the quality of Fairtrade goods and, on the other hand, by the so-called attitude-behaviour gap. The attitude-behaviour gap describes the cognitive dissonance between positive ethical attitudes and purchase intentions on the one side and actual purchasing behaviour on the other; it contradicts traditional attitude-behaviour models, which hold that attitudes determine people's behaviour. In marketing theory, both aspects establish the relevance of trust for the consumption of Fairtrade products as well as of other sustainable goods.
The analysis is based on an online survey and combines conjoint analysis with structural equation modelling. This innovative methodological approach yielded results relevant to both marketing research and practice. First, the important role of trust for Fairtrade consumption is confirmed; second, the thesis explains how trust in Fairtrade takes effect. Trust in the Fairtrade label is the starting point for trust relationships between Fairtrade and consumers and is transferred to the certified products.
The resulting recommendations focus on measures that strengthen trust in Fairtrade labels, for example by reducing the number of different labels or by communicating the independence of certification organizations more strongly.
Over the past decade, an increasing number of public organizations involved in fisheries and marine environmental management in Europe have changed their formal coordination structures. Similar reorganizations of formal coordination structures can be observed for organizations at different administrative levels of governance with different mandates across the policy cycle.
Against the backdrop of this phenomenon, this PhD thesis is interested in exploring how these similar organizational reforms can be explained and why the formal coordination structures for fisheries and marine environmental management have been reorganized in the cases of the International Council for the Exploration of the Sea (ICES), the Directorate-General for Fisheries and Maritime Affairs of the European Commission (DG FISH), the Norwegian Institute of Marine Research (IMR) and the Swedish Agency for Marine and Water Management (SwAM). Accordingly, the objective is to shed light on how public organizations actually “behave” or “tick” in the face of increasingly complex coordination challenges in fisheries and marine environmental management.
To address these questions, the thesis draws on different theoretical perspectives in organization theory, namely an instrumental and an institutional perspective. These theoretical perspectives provide different explanations for how organizations deal with issues of formal organizational structure and coordination. In order to evaluate the explanatory relevance of these theoretical perspectives in the cases of ICES, DG FISH, the IMR and the SwAM, a case study approach based on congruence analysis is applied. The case studies are based on document analysis, the analysis of organizational charts and their change over time, as well as expert interviews. The aim of the thesis is to contribute to the coordination debate in the marine policy and governance literature from a hitherto omitted public administration and organization theory perspective, and to explain coordination efforts at the organizational level with an organization theory approach.
The findings indicate that the formal coordination structures of the organizations studied have not only changed to solve coordination problems in fisheries and marine environmental management efficiently and effectively, but also to follow modern management paradigms in marine governance and to ensure the legitimacy of these organizations. Moreover, it was found that in the cases of ICES, DG FISH, the IMR and the SwAM, the organizational changes were strongly influenced by external pressures and interactions with other organizations in the organizational field of fisheries and marine environmental management in Europe. Driven by forces of isomorphism, a gradual convergence of the formal horizontal coordination structures for fisheries and marine environmental management of the organizations studied can be observed. However, the findings also indicate that although the organizational changes observed may convey a reaction to changing environments, they do not necessarily reflect actual policy change and the implementation of new management concepts.
Rubisco catalyses the first step of CO2 assimilation into plant biomass. Despite its crucial role, it is notorious for its low catalytic rate and its tendency to fix O2 instead of CO2, giving rise to a toxic product that needs to be recycled in a process known as photorespiration. Since almost all our food supply relies on Rubisco, even small improvements in its specificity for CO2 could lead to an improvement of photosynthesis and ultimately, crop yield. In this work, we attempted to improve photosynthesis by decreasing photorespiration with an artificial CO2-concentrating mechanism (CCM) based on a fusion between Rubisco and a carbonic anhydrase (CA).
A preliminary set of plants contained fusions between one of two CAs, bCA1 and CAH3, and the N- or C-terminus of RbcL connected by a small flexible linker of 5 amino acids. Subsequently, further fusion proteins were created between the RbcL C-terminus and bCA1/CAH3 with linkers of 14, 23, 32, and 41 amino acids. The transplastomic tobacco plants carrying fusions with bCA1 were able to grow autotrophically even with the shortest linkers, albeit at a low rate, and accumulated very low levels of the fusion protein. On the other hand, plants carrying fusions with CAH3 were autotrophic only with the longer linkers. The longest linker permitted nearly wild-type-like growth of the plants carrying fusions with CAH3 and increased the levels of fusion protein, but also of smaller degradation products.
The fusion of catalytically inactive CAs to RbcL did not cause a different phenotype from the fusions with catalytically active CAs, suggesting that the selected CAs were either not active in the fusion with RbcL or that their activity did not have an effect on CO2 assimilation. However, the fusions did not abolish RbcL catalytic activity, as shown by autotrophic growth, gas exchange and in vitro activity measurements. Furthermore, Rubisco carboxylation rate and specificity for CO2 were not altered in some of the fusion proteins, suggesting that despite the defect in RbcL folding or assembly caused by the fusions, the addition of 60-150 amino acids to RbcL does not affect its catalytic properties. Rather, most growth defects of the plants carrying RbcL-CA fusions are related to their reduced Rubisco content, likely caused by impaired RbcL folding or assembly. Finally, we found that fusions to the RbcL C-terminus were better tolerated than fusions to the N-terminus, and that increasing the length of the linker relieved the growth impairment imposed by the fusion to RbcL. Together, the results of this work constitute highly relevant findings for future Rubisco engineering.
Borehole instabilities are frequently encountered when drilling through finely laminated, organic rich shales (Økland and Cook, 1998; Ottesen, 2010; etc.); such instabilities should be avoided to assure a successful exploitation and safe production of the contained unconventional hydrocarbons. Borehole instabilities, such as borehole breakouts or drilling induced tensile fractures, may lead to poor cementing of the borehole annulus, difficulties with recording and interpretation of geophysical logs, low directional control and in the worst case the loss of the well. If these problems are not recognized and expertly remedied, pollution of the groundwater or the emission of gases into the atmosphere can occur since the migration paths of the hydrocarbons in the subsurface are not yet fully understood (e.g., Davies et al., 2014; Zoback et al., 2010). In addition, it is often mentioned that the drilling problems encountered and the resulting downtimes of the wellbore system in finely laminated shales significantly increase drilling costs (Fjaer et al., 2008; Aadnoy and Ong, 2003).
In order to understand and reduce the borehole instabilities during drilling in unconventional shales, we investigate stress-induced irregular extensions of the borehole diameter, which are also referred to as borehole breakouts. For this purpose, experiments with different borehole diameters, bedding plane angles and stress boundary conditions were performed on finely laminated Posidonia shales. The Lower Jurassic Posidonia shale is one of the most productive source rocks for conventional reservoirs in Europe and has the greatest potential for unconventional oil and gas in Europe (Littke et al., 2011).
In this work, Posidonia shale specimens from the North (PN) and South (PS) German basins were selected and characterized petrophysically and mechanically. The composition of the two shales is dominated by calcite (47-56%) followed by clays (23-28%) and quartz (16-17%). The remaining components are mainly pyrite and organic matter. The porosity of the shales varies considerably and is up to 10% for PS and 1% for PN, which is due to the larger deposition depth of PN. Both shales show marked elasticity and strength anisotropy, which can be attributed to a macroscopic distribution and orientation of soft and hard minerals. Under load the hard minerals form a load-bearing, supporting structure, while the soft minerals compensate the deformation. The Posidonia shale is therefore more brittle when loaded parallel to the bedding than when loaded normal to it. The resulting elastic anisotropy, defined by the ratio of the modulus of elasticity parallel and normal to the bedding, is about 50%, while the strength anisotropy (i.e., the ratio of uniaxial compressive strength normal and parallel to the bedding) is up to 66%. Based on the petrophysical characterization of the two rocks, a transverse isotropy (TVI) was derived. In general, PS is softer and weaker than PN, which reflects the stronger compaction of PN at its greater burial depth.
Conventional triaxial borehole breakout experiments on specimens with different borehole diameters showed that, as the diameter of the borehole increases, the stress required to initiate borehole breakouts decreases to a constant value. This value can be expressed as the ratio of the tangential stress to the uniaxial compressive strength of the rock. The ratio increases exponentially with decreasing borehole diameter, from about 2.5 for a 10 mm diameter hole to ~7 for a 1 mm borehole (an increase of the initiation stress by 280%), and can be described by a fracture-mechanics-based criterion. Reducing the borehole diameter is therefore an important factor in reducing the risk of breakouts. New drilling techniques with significantly reduced borehole diameters, such as "fish-bone" holes, are already being developed and tested (e.g., Xing et al., 2012).
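The quoted size effect can be sketched with a simple exponential fit. The functional form and the 2 mm decay length below are assumptions made for illustration; only the two anchor values (a ratio of about 2.5 at 10 mm and ~7 at 1 mm) come from the experiments described above:

```python
import math

def breakout_initiation_ratio(d_mm, r_plateau=2.5, r_small=7.0, decay_mm=2.0):
    """Tangential stress / uniaxial compressive strength at breakout
    initiation as a function of borehole diameter d (in mm).
    Assumed form: exponential decay towards the large-hole plateau."""
    return r_plateau + (r_small - r_plateau) * math.exp(-(d_mm - 1.0) / decay_mm)

print(round(breakout_initiation_ratio(1.0), 2))   # 7.0 by construction
print(round(breakout_initiation_ratio(10.0), 2))  # close to the 2.5 plateau
```

Any functional form with the same plateau and small-hole behaviour would serve equally well; the fracture-mechanics criterion used in the thesis fixes the shape from physical arguments rather than curve fitting.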
The observed strength anisotropy and the TVI material behavior are also reflected in the observed breakout processes at the borehole wall. Drill holes normal to the bedding develop breakouts in a plane of isotropy and are not affected by the strength or elasticity anisotropy. The observed breakouts are point-symmetric and form compressive shear failure planes, which can be predicted by a Mohr-Coulomb failure approach. Where conjugate shear failure planes intersect, the resulting breakouts take on the characteristic "dog-eared" geometry.
While the initiation of breakouts for wells oriented normal to the stratification was triggered by random local defects, the reduced strength parallel to the bedding planes is the starting point for breakouts in wells parallel to the bedding. In the case of a deflected borehole trajectory, the observed failure type therefore changes from shear-induced failure surfaces to buckling failure of individual layer packages. In addition, the breakout depths and widths increase, resulting in a stress-induced enlargement of the borehole cross-section and an increased output of rock material into the borehole. With the transition from shear to buckling failure and a changing bedding plane angle with respect to the borehole axis, the stress required to induce wellbore breakouts drops by 65%.
These observations under conventional triaxial stress boundary conditions could also be confirmed under true triaxial stress conditions. Here, too, breakouts grew into the rock as a result of buckling failure. In this process, the broken layer packs rotate into the pressure-free drill hole and detach from the surrounding rock by tensile cracking. The final breakout shape in Posidonia shale can be described as trapezoidal when the bedding planes are parallel to the greatest horizontal stress and to the borehole axis. In the event that the greatest horizontal stress is normal to the stratification, breakouts formed entirely by shear fractures between the bedding planes and required higher initiation stresses, similar to breakouts in conventional triaxial experiments with boreholes oriented normal to the bedding.
In the context of this work, a fracture-mechanics-based failure criterion for conventional triaxial loading conditions in isotropic rocks (Dresen et al., 2010) has been successfully extended to true triaxial loading conditions in transversely isotropic rock in order to predict the initiation of borehole breakouts. The criterion was successfully verified against the experiments carried out.
The extended failure criterion and the conclusions from the laboratory and numerical work may help to reduce the risk of borehole breakouts in unconventional shales.
The motivation of this work was to investigate the self-assembly of a class of block copolymers that has attracted little attention so far: double hydrophilic block copolymers (DHBCs). DHBCs consist of two linear hydrophilic polymer blocks. Their self-assembly into suprastructures such as particles and vesicles is driven by a strong difference in hydrophilicity between the blocks, which leads to microphase separation due to immiscibility. The benefits of DHBCs and the corresponding particles and vesicles, such as biocompatibility, high permeability towards water and hydrophilic compounds, and the large number of possible functionalizations of the block copolymers, make DHBC-based structures a viable choice in biomedicine. In order to establish a route towards self-assembled DHBC structures with the potential to act as cargo carriers in future applications, several block copolymers containing two hydrophilic polymer blocks were synthesized. Poly(ethylene oxide)-b-poly(N-vinylpyrrolidone) (PEO-b-PVP) and poly(ethylene oxide)-b-poly(N-vinylpyrrolidone-co-N-vinylimidazole) (PEO-b-P(VP-co-VIm)) block copolymers were synthesized via reversible deactivation radical polymerization (RDRP) techniques starting from a PEO macro chain-transfer agent. The block copolymers displayed a concentration-dependent self-assembly behavior in water, as determined via dynamic light scattering (DLS). Spherical particles were observed via laser scanning confocal microscopy (LSCM) and cryogenic scanning electron microscopy (cryo SEM) in highly concentrated solutions of PEO-b-PVP. Furthermore, a crosslinking strategy for PEO-b-P(VP-co-VIm) was developed, applying the diiodo-derived crosslinker diethylene glycol bis(2-iodoethyl) ether to form quaternary amines at the VIm units. The crosslinked structures proved stable upon dilution and transfer into organic solvents.
Moreover, self-assembly and crosslinking in DMF proved more advantageous, and the crosslinked structures could successfully be transferred to aqueous solution. The afforded spherical submicron particles could be visualized via LSCM, cryo SEM and cryo TEM.
Double hydrophilic pullulan-b-poly(acrylamide) block copolymers were synthesized via copper catalyzed alkyne azide cycloaddition (CuAAC) starting from suitable pullulan alkyne and azide functionalized poly(N,N-dimethylacrylamide) (PDMA) and poly(N-ethylacrylamide) (PEA) homopolymers. The conjugation reaction was confirmed via SEC and 1H-NMR measurements. The self-assembly of the block copolymers was monitored with DLS and static light scattering (SLS) measurements indicating the presence of hollow spherical structures. Cryo SEM measurements could confirm the presence of vesicular structures for Pull-b-PEA block copolymers. Solutions of Pull-b-PDMA displayed particles in cryo SEM. Moreover, an end group functionalization of Pull-b-PDMA with Rhodamine B allowed assessing the structure via LSCM and hollow spherical structures were observed indicating the presence of vesicles, too.
An exemplary pathway towards a DHBC-based drug delivery vehicle was demonstrated with the block copolymer Pull-b-PVP. The block copolymer was synthesized via RAFT/MADIX techniques starting from a pullulan chain-transfer agent. Pull-b-PVP displayed a concentration-dependent self-assembly in water with an efficiency superior to the PEO-b-PVP system, as observed via DLS. Cryo SEM and LSCM showed the presence of spherical structures. In order to apply a reversible crosslinking strategy to the synthesized block copolymer, the pullulan block was selectively oxidized to dialdehydes with NaIO4. The oxidation of the block copolymer was confirmed via SEC and 1H-NMR measurements. The self-assembled and oxidized structures were subsequently crosslinked with cystamine dihydrochloride, a pH- and redox-responsive crosslinker, resulting in crosslinked vesicles that were observed via cryo SEM. The vesicular structures of crosslinked Pull-b-PVP could be disassembled by acid treatment or by application of the redox agent tris(2-carboxyethyl)phosphine hydrochloride. The successful disassembly was monitored with DLS measurements.
To conclude, self-assembled structures from DHBCs such as particles and vesicles hold strong potential for biomedicine and nanotechnology. The variety of DHBC compositions and functionalities is very promising for future applications.
Functional nanoporous carbon-based materials derived from oxocarbon-metal coordination complexes
(2017)
Nanoporous carbon-based materials are of particular interest for both science and industry due to their exceptional properties such as a large surface area, high pore volume, high electroconductivity as well as high chemical and thermal stability. Benefiting from these advantageous properties, nanoporous carbons proved to be useful in various energy and environment related applications including energy storage and conversion, catalysis, gas sorption and separation technologies. The synthesis of nanoporous carbons classically involves thermal carbonization of the carbon precursors (e.g. phenolic resins, polyacrylonitrile, poly(vinyl alcohol) etc.) followed by an activation step, and/or it makes use of classical hard or soft templates to obtain well-defined porous structures. However, these synthesis strategies are complicated and costly, and they make use of hazardous chemicals, hindering their application for large-scale production. Furthermore, control over the carbon materials' properties is challenging owing to the relatively unpredictable processes at the high carbonization temperatures.
In the present thesis, nanoporous carbon based materials are prepared by the direct heat treatment of crystalline precursor materials with pre-defined properties. This synthesis strategy does not require any additional carbon sources or classical hard- or soft templates. The highly stable and porous crystalline precursors are based on coordination compounds of the squarate and croconate ions with various divalent metal ions including Zn2+, Cu2+, Ni2+, and Co2+, respectively. Here, the structural properties of the crystals can be controlled by the choice of appropriate synthesis conditions such as the crystal aging temperature, the ligand/metal molar ratio, the metal ion, and the organic ligand system. In this context, the coordination of the squarate ions to Zn2+ yields porous 3D cube crystalline particles. The morphology of the cubes can be tuned from densely packed cubes with a smooth surface to cubes with intriguing micrometer-sized openings and voids which evolve on the centers of the low index faces as the crystal aging temperature is raised. By varying the molar ratio, the particle shape can be changed from truncated cubes to perfect cubes with right-angled edges.
These crystalline precursors can be easily transformed into the respective carbon based materials by heat treatment at elevated temperatures in a nitrogen atmosphere followed by a facile washing step. The resulting carbons are obtained in good yields and possess a hierarchical pore structure with well-organized and interconnected micro-, meso- and macropores. Moreover, high surface areas and large pore volumes of up to 1957 m2 g-1 and 2.31 cm3 g-1 are achieved, respectively, whereby the macroscopic structure of the precursors is preserved throughout the whole synthesis procedure.
Owing to these advantageous properties, the resulting carbon based materials represent promising supercapacitor electrode materials for energy storage applications. This is exemplarily demonstrated by employing the 3D hierarchical porous carbon cubes derived from squarate-zinc coordination compounds as electrode material showing a specific capacitance of 133 F g-1 in H2SO4 at a scan rate of 5 mV s-1 and retaining 67% of this specific capacitance when the scan rate is increased to 200 mV s-1.
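The quoted rate capability can be cross-checked with one line of arithmetic (values taken from the text):

```python
c_slow = 133.0    # specific capacitance in F/g at a scan rate of 5 mV/s
retention = 0.67  # fraction of that capacitance retained at 200 mV/s
c_fast = c_slow * retention
print(f"{c_fast:.0f} F/g retained at 200 mV/s")  # about 89 F/g
```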
In a further application, the porous carbon cubes derived from squarate-zinc coordination compounds are used as high-surface-area support material and decorated with nickel nanoparticles via incipient wetness impregnation. The resulting composite material combines a high surface area and a hierarchical pore structure with high functionality and well-accessible pores. Moreover, owing to their regular micro-cube shape, the cubes allow for good packing of a fixed-bed flow reactor along with high column efficiency and a minimized pressure drop throughout the packed reactor. Therefore, the composite is employed as a heterogeneous catalyst in the selective hydrogenation of 5-hydroxymethylfurfural to 2,5-dimethylfuran, showing good catalytic performance and overcoming the conventional problem of column blocking.
With regard to the rational design of 3D carbon geometries, the functions and properties of the resulting carbon-based materials can be further expanded by the rational introduction of heteroatoms (e.g. N, B, S, P, etc.) into the carbon structures in order to alter properties such as wettability, surface polarity as well as the electrochemical landscape. In this context, the use of crystalline materials based on oxocarbon-metal ion complexes can open up a platform of highly functional materials for all applications that involve surface processes.
Development of a reliable and environmentally friendly synthesis for fluorescence carbon nanodots
(2017)
Carbon nanodots (CNDs) have generated considerable attention due to their promising properties, e.g. high water solubility, chemical inertness, resistance to photobleaching, high biocompatibility and ease of functionalization. These properties render them ideal for a wide range of functions, e.g. electrochemical applications, waste water treatment, (photo)catalysis, bio-imaging and bio-technology, as well as chemical sensing, and optoelectronic devices like LEDs. In particular, the ability to prepare CNDs from a wide range of accessible organic materials makes them a potential alternative to conventional organic dyes and semiconductor quantum dots (QDs) in various applications. However, current synthesis methods are typically expensive and depend on complex and time-consuming processes or severe synthesis conditions and toxic chemicals. One way to reduce overall preparation costs is the use of biological waste as starting material. Hence, natural carbon sources such as pomelo peel, egg white and egg yolk, orange juice, and even eggshells, to name a few, have been used for the preparation of CNDs. While the use of waste is desirable, especially to avoid competition with essential food production, most starting materials lack the essential purity and structural homogeneity to obtain homogeneous carbon dots. Furthermore, most synthesis approaches reported to date require extensive purification steps and have resulted in carbon dots with heterogeneous photoluminescent properties and indefinite composition. For this reason, among others, the relationship between CND structure (e.g. size, edge shape, functional groups and overall composition) and photophysical properties is not yet fully understood. This is particularly true for carbon dots displaying selective luminescence (one of their most intriguing properties), i.e. the ability to tune the PL emission wavelength by varying the excitation wavelength.
In this work, a new reliable, economic, and environmentally-friendly one-step synthesis is established to obtain CNDs with well-defined and reproducible photoluminescence (PL) properties via the microwave-assisted hydrothermal treatment of starch, carboxylic acids and Tris-EDTA (TE) buffer as carbon- and nitrogen source, respectively. The presented microwave-assisted hydrothermal precursor carbonization (MW-hPC) is characterized by its cost-efficiency, simplicity, short reaction times, low environmental footprint, and high yields of approx. 80% (w/w). Furthermore, only a single synthesis step is necessary to obtain homogeneous water-soluble CNDs with no need for further purification.
Depending on starting materials and reaction conditions, different types of CNDs have been prepared. The as-prepared CNDs exhibit reproducible, highly homogeneous and favourable PL properties with narrow emission bands (approx. 70 nm FWHM), are non-blinking, and are ready to use without need for further purification, modification or surface passivation agents. Furthermore, the CNDs are comparatively small (approx. 2.0 nm to 2.4 nm) with narrow size distributions; are stable over a long period of time (at least one year), either in solution or as a dried solid; and maintain their PL properties when re-dispersed in solution. Depending on CND type, the PL quantum yield (PLQY) can be adjusted from as low as 1% to as high as 90%, one of the highest PLQY values reported for CNDs so far.
An essential part of this work was the utilization of a microwave synthesis reactor, allowing various batch sizes and precise control over reaction temperature and time, pressure, and heating and cooling rates, while also being safe to operate at elevated reaction conditions (e.g. 230 °C and 30 bar). The high sample throughput achieved hereby allowed, for the first time, the thorough investigation of a wide range of synthesis parameters, providing valuable insight into CND formation. The influence of carbon and nitrogen source, precursor concentration and combination, reaction time and temperature, batch size, and post-synthesis purification steps was carefully investigated with regard to the optical properties of the as-synthesized CNDs. In addition, the change in photophysical properties resulting from the conversion of CND solution into a solid and back into solution was investigated. Remarkably, upon freeze-drying the initially brown CND solution turns into a non-fluorescent white/slightly yellow to brown solid which recovers its PL in aqueous solution. Selected CND samples were also subjected to EDX, FTIR, NMR, PL lifetime (TCSPC), particle size (TEM), TGA and XRD analysis. Besides structural characterization, the pH- and excitation-dependent PL characteristics (i.e. selective luminescence) were examined, giving insight into the origin of the photophysical properties and the excitation-dependent behaviour of CNDs. The obtained results support the notion that for CNDs the nature of the surface states determines the PL properties and that the excitation-dependent behaviour is caused by the "Giant Red-Edge Excitation Shift" (GREES).
Nanolenses are linear chains of differently-sized metal nanoparticles, which can theoretically provide extremely high field enhancements. The complex structure renders their synthesis challenging and has hampered closer analyses so far. Here, the technique of DNA origami was used to self-assemble DNA-coated 10 nm, 20 nm, and 60 nm gold or silver nanoparticles into gold or silver nanolenses. Three different geometrical arrangements of gold nanolenses were assembled, and for each of the three, sets of single gold nanolenses were investigated in detail by atomic force microscopy, scanning electron microscopy, dark-field scattering and Raman spectroscopy. The surface-enhanced Raman scattering (SERS) capabilities of the single nanolenses were assessed by labelling the 10 nm gold nanoparticle selectively with dye molecules. The experimental data was complemented by finite-difference time-domain simulations. For those gold nanolenses which showed the strongest field enhancement, SERS signals from the two different internal gaps were compared by selectively placing probe dyes on the 20 nm or 60 nm gold particles. The highest enhancement was found for the gap between the 20 nm and 10 nm nanoparticle, which is indicative of a cascaded field enhancement. The protein streptavidin was labelled with alkyne groups and served as a biological model analyte, bound between the 20 nm and 10 nm particle of silver nanolenses. Thereby, a SERS signal from a single streptavidin could be detected. Background peaks observed in SERS measurements on single silver nanolenses could be attributed to amorphous carbon. It was shown that the amorphous carbon is generated in situ.
The valorization of carbohydrates is one of the most promising fields in green chemistry, as it enables the production of bulk chemicals and fuels from renewable and abundant resources instead of further exploiting fossil feedstocks. The focus of this thesis is the conversion of fructose using dehydration and hydrodeoxygenation reactions. The main goal is to find a simple continuous process, covering the dissolution of the sugar in a green solvent and its conversion over a solid acid as well as over a metal@tungsten carbide catalyst.
At the beginning of this thesis, solid acid catalysts are synthesized from carbohydrate materials such as glucose and starch at high temperatures (up to 600 °C). Additionally, a third carbon is synthesized using an activation method based on Ca(OH)2. After carbonization and subsequent sulfonation with fuming sulfuric acid, the three resulting catalysts are characterized alongside sulfonated carbon black and Amberlyst 15 as references. To test all solid acid catalysts in reaction, a 250 mm x 4.6 mm stainless steel column is used as a continuous fixed-bed reactor. The temperature (110 °C to 250 °C) and residence time (2 to 30 minutes) are varied, and a direct relationship between contact time and selectivity is determined. The reaction mechanism and the product distribution show a dehydration step of fructose towards 5-hydroxymethylfurfural (HMF). These furan-ring molecules are considered "sleeping giants" because they can be used as fuel but also upgraded to chemicals such as terephthalic acid or p-xylene. Consecutive reactions produce levulinic acid as well as condensation products with ethanol and formic acid. The activated carbon additionally shows a 2 % yield of 2,5-dimethylfuran (DMF), pointing towards the extraordinary properties of this catalyst. Although a metal catalyst is normally necessary for hydrogenation reactions, a transfer hydrogenation (with formic acid as hydrogen donor) is observed without any metal present. The active catalyst is therefore the carbon itself, which activates hydrogen on its surface; this phenomenon has only very rarely been observed so far. Nowadays, expensive noble metals are the materials of choice for hydrogenation reactions, so cheaper alternatives are necessary.
Since Levy and Boudart postulated that tungsten carbide (WC) has an electronic structure similar to that of platinum, research has focused on replacing Pt. Nano-sized tungsten carbide particles (7.5 ± 2.5 nm, 70 m² g⁻¹) are produced via the so-called "urea glass route", and their catalytic performance is compared to commercial material. It is shown that the activity depends strongly on the particle size as well as on the surface area. Nano-sized tungsten carbide shows activity for hydrogenation reactions under mild conditions (maximum 150 °C, 30 bar). This material therefore opens up new possibilities for replacing the rare and expensive platinum with tungsten carbide based catalysts.
Additionally, metal nanoparticles of palladium, copper and nickel are deposited on top of the WC to further promote its reactivity. The nickel nanoparticles are strongly bound to the WC and show the best activity as well as selectivity for upgrading HMF by hydrodeoxygenation. The Ni@WC catalyst does not leach and shows very good hydrodeoxygenation properties, with DMF yields of up to 90 %. Copper@WC shows poor activity, and palladium@WC enables undesired consecutive reactions that hydrogenate the furan ring system.
In order to enable the upgrade of fructose to DMF directly in a continuous system, a commercial H-Cube Pro™ hydrogenation system is customized with a second reaction column. A 250 mm x 4.6 mm stainless steel reactor column is connected upstream of the hydrogen insertion, enabling the dehydration of fructose to HMF derivatives before these products are pumped into the second column for hydrogenation. The overall residence time in the two-column reactor system is 14 minutes. Overall, an almost full conversion is achieved, with yields of 38.5 % DMF and 47 % ethyl levulinate (EL). The main disadvantage is the formation of higher-mass products, so-called humins, which deposit on top of the catalysts and block their active sites.
In general, a two-column system entails higher investment and maintenance costs than a one-column catalytic approach. The last part of the thesis therefore aims at developing a catalyst that is able both to dehydrate and to hydrodeoxygenate the reactants. The activated carbon already shows activity for hydrodeoxygenation without any metal present and therefore offers itself as an alternative that overcomes the temperature instability of Amberlyst 15 (max. 120 °C) for a combined DMF production directly from fructose. The activity for the upgrade to DMF is increased from a 2 % to a 12 % DMF yield in one mixed continuous column.
In order to scale up the entire one-column approach, an 800 mm x 28.5 mm inner diameter column was planned and manufactured. The assembled flow reactor system can be run at a maximum flow rate of 50 mL min-1, withstands pressures of up to 100 bar and can be heated to around 500 °C. The tubing and connections, as well as the devices used, are designed to be safe and easy to use. The scaled-up approach offers a reaction column about 120 times larger (510 mL) than the first extension of the commercial system. This further extension offers the possibility of flow rates ranging between 1 and 1000 mL min-1, making it possible to use the approach in pilot plant applications.
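The quoted scale-up factor follows directly from the column geometries given in the text. A minimal sketch checking the numbers (a plain empty-cylinder estimate that ignores fittings and catalyst packing):

```python
import math

def column_volume_ml(length_mm: float, inner_diameter_mm: float) -> float:
    """Empty-cylinder volume of a tubular reactor column in mL."""
    radius_cm = inner_diameter_mm / 20.0   # mm -> cm, diameter -> radius
    length_cm = length_mm / 10.0
    return math.pi * radius_cm ** 2 * length_cm  # 1 cm^3 = 1 mL

small = column_volume_ml(250, 4.6)    # first extension of the commercial system
large = column_volume_ml(800, 28.5)   # scaled-up column

print(round(small, 1))        # ~4.2 mL
print(round(large))           # ~510 mL
print(round(large / small))   # ~123, i.e. roughly the "120 times" quoted
```

The ~510 mL volume and the roughly 120-fold scale-up quoted in the abstract are thus mutually consistent.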
Nowadays, the need to protect the environment is more urgent than ever. In the field of chemistry, this translates into practices such as waste prevention, the use of renewable feedstocks, and catalysis: concepts based on the principles of green chemistry. Polymers are an important product of the chemical industry and are also a focus of these changes. In this thesis, more sustainable approaches to making two classes of polymers, polypeptoids and polyesters, are described.
Polypeptoids or poly(N-alkyl glycines) are isomers of polypeptides and are biocompatible as well as degradable under biologically relevant conditions. In addition, they can have interesting properties such as lower critical solution temperature (LCST) behavior. They are usually synthesized by the ring-opening polymerization (ROP) of N-carboxyanhydrides (NCAs), which are produced with the use of toxic compounds (e.g. phosgene) and are highly sensitive to humidity. In order to avoid the direct synthesis and isolation of the NCAs, N-phenoxycarbonyl-protected N-substituted glycines are prepared, which can yield the NCAs in situ. The conditions for the NCA synthesis and its direct polymerization are investigated and optimized for the simplest N-substituted glycine, sarcosine. The use of a tertiary amine in less than stoichiometric amounts relative to the N-phenoxycarbonyl-sarcosine drastically accelerates the NCA formation without affecting the efficiency of the polymerization. In fact, well-defined polysarcosines whose chain lengths comply with the monomer-to-initiator ratio can be produced by this method. This approach was also applied to other N-substituted glycines.
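Because the polymerization is living, the targeted chain length follows directly from the monomer-to-initiator feed ratio. A back-of-the-envelope sketch (the repeat-unit mass of 71.08 g/mol for the sarcosine unit is standard chemistry, not a value from this thesis; end-group contributions are neglected for simplicity):

```python
SARCOSINE_REPEAT_MW = 71.08  # g/mol for the -[N(CH3)-CH2-C(=O)]- repeat unit

def target_mn(monomer_to_initiator_ratio: float) -> float:
    """Expected number-average molar mass (g/mol) of polysarcosine for a
    living ring-opening polymerization, neglecting end groups."""
    return monomer_to_initiator_ratio * SARCOSINE_REPEAT_MW

# e.g. a 50:1 monomer-to-initiator feed targets a ~3.6 kg/mol polymer
print(target_mn(50))  # 3554.0
```

A measured Mn close to this prediction (with narrow dispersity) is exactly what "complying with the monomer-to-initiator ratio" means for such well-defined polymers.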
Dihydroxyacetone is a sustainable diol produced from glycerol and has already been used for the synthesis of polycarbonates. Here, it was used as a comonomer for the synthesis of polyesters. However, the polymerization of dihydroxyacetone presented difficulties, probably due to the insolubility of the macromolecular chains. To circumvent the problem, the dimethyl acetal protected dihydroxyacetone was polymerized with terephthaloyl chloride to yield a soluble polymer. When the carbonyl was recovered after deprotection, the product was insoluble in all solvents, showing that the carbonyl in the main chain hinders the dissolution of the polymers. The solubility issue can be avoided when a 1:1 mixture of dihydroxyacetone and ethylene glycol is used to yield a soluble copolyester.
Among modern functional materials, the class of nitrogen-containing carbons combines non-toxicity and sustainability with outstanding properties. The versatility of this materials class is based on the opportunity to tune electronic and catalytic properties via the nitrogen content and nitrogen motifs: this ranges from electronically conducting N-doped carbon, in which a few carbon atoms in the graphitic lattice are substituted by nitrogen, to the organic semiconductor graphitic carbon nitride (g-C₃N₄), with a structure based on tri-s-triazine units.
In general, composites can reveal outstanding catalytic properties due to synergistic behavior, e.g. the formation of electronic heterojunctions. In this thesis, the formation of an "all-carbon" heterojunction was targeted, i.e. differences in the electronic properties of the single components were achieved by the introduction of different nitrogen motifs into the carbon lattice. Such composites are promising as metal-free catalysts for photocatalytic water splitting, in which hydrogen is generated from water by light irradiation with the use of a photocatalyst. As the first part of the heterojunction, the organic semiconductor g-C₃N₄ was employed because of its suitable band structure for photocatalytic water splitting, its high stability and its non-toxicity. As the second part, C₂N, a recently discovered semiconductor, was chosen. Compared to g-C₃N₄, the less nitrogen-rich C₂N has a smaller band gap and a higher absorption coefficient in the visible light range, which is expected to increase the optical absorption of the composite, eventually leading to enhanced charge carrier separation due to the formation of an electronic heterojunction.
The aim of preparing an "all-carbon" composite included research into appropriate precursors for the respective components g-C₃N₄ and C₂N, as well as strategies for appropriate structuring. This was addressed by applying precursors which can form supramolecular pre-organized structures, allowing for more control over morphology and atom patterns during the carbonization process.
In the first part of this thesis, it was demonstrated how the photocatalytic activity of g-C₃N₄ can be increased by the targeted introduction of defects or surface terminations. This was achieved by using caffeine as a “growth stopping” additive during the formation of the hydrogen-bonded supramolecular precursor complexes. The increased photocatalytic activity of the obtained materials was demonstrated with dye degradation experiments.
The second part of this thesis focused on the synthesis of the second component, C₂N. Here, a deep eutectic mixture of hexaketocyclohexane and urea was structured using the biopolymer chitosan. This scaffolding resulted in mesoporous nitrogen-doped carbon monoliths and beads. CO₂- and dye-adsorption experiments with the obtained monolith material revealed a high isosteric heat of CO₂ adsorption and showed the accessibility of the monolithic pore system to larger dye molecules. Furthermore, a novel precursor system for C₂N was explored, based on organic crystals of squaric acid and urea. The respective C₂N carbon with an unusual sheet-like morphology could be synthesized by carbonization of the crystals at 550 °C. This precursor system also yielded microporous C₂N carbon with a BET surface area of 865 m²/g by "salt templating" with ZnCl₂.
Finally, the preparation of a g-C₃N₄/C₂N "all-carbon" composite heterojunction was attempted by the self-assembly of g-C₃N₄ and C₂N nanosheets and tested for photocatalytic water splitting. Indeed, the composites revealed high rates of hydrogen evolution compared to bulk g-C₃N₄. However, the increased catalytic activity was mainly attributed to the high surface area of the nanocomposites rather than to their composition. With regard to alternative composite synthesis routes, first experiments indicated N-methyl-2-pyrrolidone to be suitable for more highly concentrated dispersions of C₂N nanosheets. Eventually, the results obtained in this thesis provide valuable synthetic contributions towards the preparation and processing of carbon/nitrogen compounds for energy applications.
Lignin valorization
(2017)
The topic of this project is the use of lignin as an alternative to fossil feedstocks as a source of aromatic building blocks and oligomers. Lignin is known as the most abundant aromatic polymer in nature and is isolated from the lignocellulosic component of plants by various extraction treatments. Both the biomass source and the extraction method affect the structure of the isolated lignin, thereby influencing its further application. Lignin was extracted from beech wood by two different hydrothermal alkaline treatments, using NaOH and Ba(OH)2 as base, and by an acid-catalyzed organosolv process. Moreover, lignin was isolated from bamboo, beech wood and coconut by soda treatment of the biomasses. The structural features of these isolated lignins were compared using a wide range of analytical methods. Alkaline lignins turned out to be better candidates than organosolv lignin as carbon precursors and as macromonomers for polymer synthesis: they showed a higher residual mass after carbonization and a higher content of reactive hydroxy functionalities. The lignin source, in contrast, only slightly affected the hydroxyl content.
One of the most common lignin modifications is its deconstruction to obtain aromatic molecules, which can be used as starting materials for the synthesis of fine chemicals. Lignin deconstruction leads to a complex mixture of aromatic molecules. A gas chromatographic method was developed to characterize the mixture of products obtained by lignin deconstruction via heterogeneous catalytic hydrogenolysis. The analytical protocol allowed the quantification of three main groups of molecules by means of calibration curves, an internal standard and a preliminary silylation step of the sample. The method was used to study the influence of the hydrogenolysis catalyst, the temperature and the reactor system (flow versus batch) on the yield and selectivity of the aromatic compounds.
Lignin extracted from beech wood by a hydrothermal process using Ba(OH)2 as base was functionalized by aromatic nitration in order to add nitrogen functionalities, with the final goal of synthesizing a nitrogen-doped carbon. Nitrated lignin was reduced to the amino form in order to compare the influence of different nitrogen functionalities on the porosity of the final carbon. The carbons were obtained by ionothermal treatment of the precursors in the presence of the eutectic salt mixture KCl/ZnCl2. The synthesized carbons showed micro-, macro- and mesoporosity and were tested for their electrocatalytic activity towards the oxygen reduction reaction. Mesoporous carbon derived from nitro lignin displayed the highest electrocatalytic activity.
Lignins isolated from coconut, beech wood and bamboo were used as macromonomers for the synthesis of biobased polyesters. A condensation reaction was performed between lignin and a hyperbranched poly(ester-amine), previously obtained by condensation of triethanolamine and adipic acid. The influence of the lignin source and content on the thermochemical and mechanical properties of the final material was investigated. Since the prepolymer showed adhesive properties towards aluminum, its shear strength was measured. The gluing properties of the synthesized adhesives turned out to be independent of the lignin source but affected by the amount of lignin in the final material.
This work shows that, although still at a laboratory scale, the valorization of lignin can overcome the critical issues of the variability and complexity of lignin's structure.
The Yukon Coast in Canada is an ice-rich permafrost coast and highly sensitive to changing environmental conditions. Retrogressive thaw slumps are a common thermoerosion feature along this coast, and develop through the thawing of exposed ice-rich permafrost on slopes and removal of accumulating debris. They contribute large amounts of sediment, including organic carbon and nitrogen, to the nearshore zone.
The objective of this study was to 1) identify the climatic and geomorphological drivers of sediment-meltwater release, 2) quantify the amount of released meltwater, sediment, organic carbon and nitrogen, and 3) project the evolution of sediment-meltwater release of retrogressive thaw slumps in a changing future climate.
The analysis is based on data collected over 18 days in July 2013 and 18 days in August 2012. A cut-throat flume was set up in the main sediment-meltwater channel of the largest retrogressive thaw slump on Herschel Island. In addition, two weather stations, one on top of the undisturbed tundra and one on the slump floor, measured incoming solar radiation, air temperature, wind speed and precipitation. The discharge volume eroding from the ice-rich permafrost and retreating snowbanks was measured and compared to the meteorological data collected in real time with a resolution of one minute.
The results show that the release of sediment-meltwater from thawing of the ice-rich permafrost headwall is strongly related to snowmelt, incoming solar radiation and air temperature. Snowmelt led to seasonal differences: the additional contribution of snowmelt water to the sediment-meltwater eroding from the headwall diluted its composition. Incoming solar radiation and air temperature were the main drivers of diurnal and inter-diurnal fluctuations. In July (2013), the retrogressive thaw slump released about 25 000 m³ of sediment-meltwater, containing 225 kg dissolved organic carbon and 2050 t of sediment, which in turn included 33 t organic carbon and 4 t total nitrogen. In August (2012), only 15 600 m³ of sediment-meltwater was released, since there was no additional contribution from snowmelt. However, even without the additional dilution, 281 kg dissolved organic carbon was released. The sediment concentration was twice as high as in July, with sediment contents of up to 457 g l-1; 3058 t of sediment, including 53 t organic carbon and 5 t nitrogen, were released.
In addition, the data from the 36 days of observations at Slump D were upscaled to cover the main summer season from 1 July to 31 August (62 days) and to include all 229 active retrogressive thaw slumps along the Yukon Coast. In total, the retrogressive thaw slumps along the Yukon Coast contribute a minimum of 1.4 million m³ of sediment-meltwater each thawing season, containing a minimum of 172 000 t sediment with 3119 t organic carbon, 327 t nitrogen and 17 t dissolved organic carbon. Relative to the coastal erosion input to the Beaufort Sea, retrogressive thaw slumps thus release an additional 3 % of sediment and 8 % of organic carbon into the ocean. Finally, under a warming scenario with summer air temperatures increasing by 2-3 °C by 2081-2100, the release of sediment-meltwater from retrogressive thaw slumps would increase by 109-114 %.
It can be concluded that retrogressive thaw slumps are sensitive to climatic conditions and under projected future Arctic warming will contribute larger amounts of thawed permafrost material (including organic carbon and nitrogen) into the environment.
In this thesis, I develop a theoretical implementation of prosodic reconstruction and apply it to the empirical domain of German sentences in which part of a focus or contrastive topic is fronted.
Prosodic reconstruction refers to the idea that sentences involving syntactic movement show prosodic parallels with corresponding simpler structures without movement. I propose to model this recurrent observation by ordering syntax-prosody mapping before copy deletion.
In order to account for the partial fronting data, the idea is extended to the mapping between prosody and information structure. This assumption helps to explain why object-initial sentences containing a broad focus or broad contrastive topic show prosodic and interpretative restrictions similar to those of sentences with canonical word order.
The empirical adequacy of the model is tested against a set of gradient acceptability judgments.
The field of nanophotonics focuses on the interaction between electromagnetic radiation and matter on the nanometer scale. The elements of nanoscale photonic devices can transfer excitation energy non-radiatively from an excited donor molecule to an acceptor molecule by Förster resonance energy transfer (FRET). The efficiency of this energy transfer is highly dependent on the donor-acceptor distance. Hence, in these nanoscale photonic devices it is of high importance to have good control over the spatial assembly of the fluorophores used. Based on molecular self-assembly processes, various nanostructures can be produced. Here, DNA nanotechnology and especially the DNA origami technique are promising self-assembly methods. By using DNA origami nanostructures, different fluorophores can be introduced with high local control to create a variety of nanoscale photonic objects. The applications of such nanostructures range from photonic wires and logic gates for molecular computing to artificial light harvesting systems for artificial photosynthesis.
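The distance dependence mentioned above is the standard Förster relation E = 1/(1 + (r/R0)^6). A minimal sketch (the Förster radius R0 of 5 nm used here is a typical order of magnitude for common dye pairs, not a value from this thesis):

```python
def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Förster transfer efficiency for donor-acceptor distance r,
    given the Förster radius R0 (the distance at which E = 0.5)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

print(fret_efficiency(5.0))             # 0.5 exactly at r = R0
print(round(fret_efficiency(2.5), 3))   # ~0.985: steep rise below R0
print(round(fret_efficiency(10.0), 3))  # ~0.015: negligible beyond 2*R0
```

The sixth-power dependence is why nanometer-precise fluorophore placement, as provided by DNA origami, matters so much for such photonic devices.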
In the present cumulative doctoral thesis, different FRET systems on DNA origami structures have been designed and thoroughly analyzed. Firstly, the formation of guanine (G) quadruplex structures from G rich DNA sequences has been studied based on a two-color FRET system (Fluorescein (FAM)/Cyanine3 (Cy3)). Here, the influences of different cations (Na+ and K+), of the DNA origami structure and of the DNA sequence on the G-quadruplex formation have been analyzed. In this study, an ion-selective K+ sensing scheme based on the G-quadruplex formation on DNA origami structures has been developed. Subsequently, the reversibility of the G-quadruplex formation on DNA origami structures has been evaluated. This has been done for the simple two-color FRET system which has then been advanced to a switchable photonic wire by introducing additional fluorophores (FAM/Cy3/Cyanine5 (Cy5)/IRDye®700). In the last part, the emission intensity of the acceptor molecule (Cy5) in a three-color FRET cascade has been tuned by arranging multiple donor (FAM) and transmitter (Cy3) molecules around the central acceptor molecule. In such artificial light harvesting systems, the excitation energy is absorbed by several donor and transmitter molecules followed by an energy transfer to the acceptor leading to a brighter Cy5 emission. Furthermore, the range of possible excitation wavelengths is extended by using several different fluorophores (FAM/Cy3/Cy5). In this part of the thesis, the light harvesting efficiency (antenna effect) and the FRET efficiency of different donor/transmitter/acceptor assemblies have been analyzed and the artificial light harvesting complex has been optimized in this respect.
Contemporary multi-core processors are parallel systems that also provide shared memory for programs running on them. Both the increasing number of cores in so-called many-core systems and the still growing computational power of the cores demand memory systems that are able to deliver high bandwidths. Caches are essential components to satisfy this requirement. Nevertheless, hardware-based cache coherence in many-core chips faces practical limits in providing both coherence and high memory bandwidths. In addition, a shift away from global coherence can be observed. As a result, alternative architectures and suitable programming models need to be investigated.
This thesis focuses on fast communication for non-cache-coherent many-core architectures. Experiments are conducted on the Single-Chip Cloud Computer (SCC), a non-cache-coherent many-core processor with 48 mesh-connected cores. Although originally designed for message passing, the results of this thesis show that shared memory can be used efficiently for one-sided communication on this kind of architecture. One-sided communication enables data exchanges between processes where the receiver is not required to know the details of the performed communication. In the notion of the Message Passing Interface (MPI) standard, this type of communication allows access to the memory of remote processes. In order to support this communication scheme on non-cache-coherent architectures, both an efficient process synchronization and a communication scheme with software-managed cache coherence are designed and investigated.
The process synchronization realizes the concept of the general active target synchronization scheme from the MPI standard. An existing classification of implementation approaches is extended and used to identify an appropriate class for the non-cache-coherent shared memory platform. Based on this classification, existing implementations are surveyed in order to find beneficial concepts, which are then used to design a lightweight synchronization protocol for the SCC that uses shared memory and uncached memory accesses. The proposed scheme is not prone to process skew and also enables direct communication as soon as both communication partners are ready. Experimental results show very good scaling properties and up to five times lower synchronization latency compared to a tuned message-based MPI implementation for the SCC.
For the communication, SCOSCo, a shared memory approach with software-managed cache coherence, is presented. Requirements for coherence that fulfill MPI's separate memory model are formulated, and a lightweight implementation exploiting SCC hardware and software features is developed. Despite a discovered malfunction in the SCC's memory subsystem, the experimental evaluation of the design reveals up to five times better bandwidths and nearly four times lower latencies in micro-benchmarks compared to the SCC-tuned but message-based MPI library. For application benchmarks, such as a parallel 3D fast Fourier transform, the runtime share of communication can be reduced by a factor of up to five. In addition, this thesis postulates beneficial hardware concepts that would support software-managed coherence for one-sided communication on future non-cache-coherent architectures, where coherence might only be available in local subdomains but not at the global processor level.
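The ordering discipline that software-managed coherence imposes can be illustrated with a toy model (plain Python, a deliberately simplified sketch and not the actual SCOSCo/SCC implementation): the writer must flush its write-back cache to shared memory before raising the uncached synchronization flag, and the reader must invalidate its cache after seeing the flag, otherwise it observes stale data.

```python
# Toy model: private write-back "caches" over a shared "DRAM" dict.
dram = {"data": 0, "flag": 0}   # the flag models an uncached location

class Cache:
    def __init__(self):
        self.lines = {}
    def write(self, key, value):      # write-back: value stays in the cache
        self.lines[key] = value
    def read(self, mem, key):
        if key not in self.lines:     # miss: fetch the line from DRAM
            self.lines[key] = mem[key]
        return self.lines[key]
    def flush(self, mem):             # write dirty lines back to DRAM
        mem.update(self.lines)
    def invalidate(self):             # discard all cached lines
        self.lines.clear()

reader_cache = Cache()
reader_cache.read(dram, "data")       # reader caches the old value (0)

# Writer side: produce data, flush, then raise the uncached flag.
writer_cache = Cache()
writer_cache.write("data", 42)
writer_cache.flush(dram)
dram["flag"] = 1                      # signal: data is now globally visible

# Reader side: the flag is up, but without invalidation the read is stale.
stale = reader_cache.read(dram, "data")   # 0: stale cached line
reader_cache.invalidate()                  # software-managed coherence step
fresh = reader_cache.read(dram, "data")   # 42: correct after invalidation
print(stale, fresh)  # 0 42
```

Skipping either the flush or the invalidation breaks MPI's separate memory model; on real hardware these steps correspond to explicit write-back and cache-invalidate operations issued by the communication library.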
This cumulative dissertation consists of five chapters. In terms of research content, my thesis can be divided into two parts. Part one examines local interactions and spillover effects between small regional governments using spatial econometric methods. The second part focuses on patterns within municipalities and examines how two institutions of citizen participation, elections and local petitions, influence local housing policies.
This thesis investigates the processing and representation of (ir-)regularity in inflectional verb morphology in German and English. The focus lies on the predictions that models of morphological processing make about the production of subtypes of irregular verbs, which are usually subsumed under the category 'irregular verbs'. This dissertation presents three journal articles investigating the language production of healthy speakers and speakers with agrammatic aphasia, filling a gap both in the availability of language production data and in systematically tested patterns of irregularity. Chapter 2 investigates whether the regularity of a verb or its phonological complexity (measured in number of phonemes) better predicts the production accuracies of German speakers with agrammatic aphasia. While regular verbs were significantly more often correct than mixed and irregular verbs, the production accuracies of irregular and mixed verbs did not differ for the impaired participants; no influence of phonological complexity was observed. Chapter 3 aimed at teasing apart the influence of stem changes and affix type on the production accuracies of English-speaking individuals with agrammatic aphasia. The analyses revealed that the presence of stem changes, but not the type of affix, had a significant effect on production accuracies. Moreover, as four different verb types were tested, the results showed that production accuracies did not conform to a binary regular-irregular distinction but differed by degree of regularity. In Chapter 4, a long-lag primed picture naming design was used to study whether the differences found in the production accuracies of Chapter 3 were also associated with differences in the production latencies of non-brain-damaged speakers. A morphological priming effect was found; however, in neither experiment did the effect differ between the three verb types tested.
In addition to standard frequentist analyses, Bayesian analyses were performed. In this way, the absence of a difference in the morphological priming effect between verb types could be interpreted as actual evidence for the lack of such a difference. Hence, this thesis presents diverging results on the production of subtypes of irregular verbs in healthy and impaired adult speakers. At the same time, these results provide evidence that the conventional regular-irregular distinction is not adequate for testing models of morphological processing.
Lithospheric plates move over the low-viscosity asthenosphere, balancing several forces. The driving forces include basal shear stress exerted by mantle convection and plate boundary forces such as slab pull and ridge push, whereas the resisting forces include inter-plate friction, trench resistance, and cratonic root resistance. These generate plate motions, the lithospheric stress field and dynamic topography, which are observed with different geophysical methods. The orientation and tectonic regime of the observed crustal/lithospheric stress field further contribute to our knowledge of the deformation processes occurring within the Earth's crust and lithosphere. Using numerical models, previous studies were able to identify the major forces generating stresses in the crust and lithosphere, which also contribute to the formation of topography as well as to driving the lithospheric plates. They showed that the first-order stress pattern, explaining about 80 % of the stress field, originates from a balance of forces acting at the base of the moving lithospheric plates due to convective flow in the underlying mantle. The remaining second-order stress pattern is due to lateral density variations in the crust and lithosphere in regions of pronounced topography and high gravitational potential, such as the Himalayas and mid-ocean ridges. By linking global lithosphere dynamics to deep mantle flow, this study seeks to evaluate the influence of shallow and deep density heterogeneities on plate motions, the lithospheric stress field and dynamic topography, using the geoid as a major constraint on mantle rheology. We use the global 3D lithosphere-asthenosphere model SLIM3D with visco-elasto-plastic rheology, coupled at 300 km depth to a spectral model of mantle flow. The complexity of the lithosphere-asthenosphere component allows for the simulation of power-law rheology with creep parameters accounting for both diffusion and dislocation creep within the uppermost 300 km.
First we investigate the influence of inter-plate friction and asthenospheric viscosity on present-day plate motions. Previous modelling studies have suggested that small friction coefficients (µ < 0.1, yield stress ~ 100 MPa) can lead to plate tectonics in models of mantle convection. Here we show that, in order to match present-day plate motions and net rotation, the friction coefficient must be less than 0.05. We obtain a good fit to the magnitude and orientation of observed plate velocities (NUVEL-1A) in a no-net-rotation (NNR) reference frame with µ < 0.04 and a minimum asthenosphere viscosity of ~ 5×10^19 Pa s to 10^20 Pa s. Our estimates of the net rotation (NR) of the lithosphere suggest that amplitudes of ~ 0.1-0.2 °/Ma, similar to most observation-based estimates, can be obtained with asthenosphere viscosity cutoff values of ~ 10^19 Pa s to 5×10^19 Pa s and a friction coefficient µ < 0.05.
The second part of the study investigates further constraints on the shallow and deep mantle heterogeneities driving plate motion, by predicting the lithospheric stress field and topography and validating them against observations. Lithosphere stresses and dynamic topography are computed using the same modelling setup and rheological parameters, with prescribed plate motions. We validate our results with the World Stress Map 2016 (WSM2016) and the observed residual topography. We tested a number of upper mantle thermal-density structures. The one used to calculate plate motions is considered the reference thermal-density structure; it is derived from a heat flow model combined with a sea floor age model. In addition, we used three different thermal-density structures derived from global S-wave velocity models to show the influence of lateral density heterogeneities in the upper 300 km on the model predictions. A large portion of the total dynamic force generating stresses in the crust/lithosphere has its origin in the deep mantle, while topography is largely influenced by shallow heterogeneities. For example, there is hardly any difference between the stress orientation patterns predicted with and without consideration of the heterogeneities in the upper mantle density structure across North America, Australia, and North Africa. In areas of high altitude, however, the crustal contribution dominates the stress orientation compared to the deep mantle contribution.
This study explores the sensitivity of all considered surface observables to the model parameters, providing insights into the influence of asthenosphere and plate boundary rheology on plate motion as we test various thermal-density structures to predict stresses and topography.
According to the classical plume hypothesis, mantle plumes are localized upwellings of hot, buoyant material in the Earth’s mantle. They have a typical mushroom shape, consisting of a large plume head, which is associated with the formation of voluminous flood basalts (a Large
Igneous Province) and a narrow plume tail, which generates a linear, age-progressive chain of volcanic edifices (a hotspot track) as the tectonic plate migrates over the relatively stationary plume. Both plume heads and tails reshape large areas of the Earth’s surface over many tens of millions of years.
However, not every plume has left an exemplary record that supports the classical hypothesis. The main objective of this thesis is therefore to study how specific hotspots have created the crustal thickness pattern attributed to their volcanic activities. Using regional geodynamic
models, the main chapters of this thesis address the challenge of deciphering the three individual (and increasingly complex) Réunion, Iceland, and Kerguelen hotspot histories, especially focussing on the interactions between the respective plume and nearby spreading ridges.
For this purpose, the mantle convection code ASPECT is used to set up three-dimensional numerical models, which consider the specific local surroundings of each plume by prescribing time-dependent boundary conditions for temperature and mantle flow. Combining reconstructed plate boundaries and plate motions, large-scale global flow velocities and an inhomogeneous lithosphere thickness distribution together with a dehydration rheology represents a novel setup for regional convection models.
The model results show the crustal thickness pattern produced by the plume, which is compared to present-day topographic structures, crustal thickness estimates and age determinations of volcanic provinces associated with hotspot activity. Altogether, the model results agree well
with surface observations. Moreover, the dynamic development of the plumes in the models provides explanations for the generation of smaller, yet characteristic volcanic features that were previously unexplained. Considering the present-day state of a model as a prediction for the current temperature distribution in the mantle, it can be compared not only to observations at the surface but also to structures in the Earth’s interior as imaged by seismic tomography.
More precisely, in the case of the Réunion hotspot, the model demonstrates how the distinctive gap between the Maldives and Chagos is generated due to the combination of the ridge geometry and plume-ridge interaction. Further, the Rodrigues Ridge is formed as the surface expression
of a long-distance sublithospheric flow channel between the upwelling plume and the closest ridge segment, confirming the long-standing hypothesis of Morgan (1978) for the first time in a dynamic context. The Réunion plume has been studied in connection with the seismological
RHUM-RUM project, which has recently provided new seismic tomography images that yield an excellent match with the geodynamic model.
Regarding the Iceland plume, the numerical model shows how plume material may have accumulated in an east-west trending corridor of thin lithosphere across Greenland and resulted in simultaneous melt generation west and east of Greenland. This provides an explanation for the
extremely widespread volcanic material attributed to magma production of the Iceland hotspot and demonstrates that the model setup is also able to explain more complicated hotspot histories. The Iceland model results also agree well with newly derived seismic tomographic images.
The Kerguelen hotspot has an extremely complex history, and previous studies concluded that the plume might be dismembered or influenced by solitary waves in its conduit in order to produce the reconstructed variable melt production rate. The geodynamic model, however, shows that a constant plume influx can result in a variable magma production rate if the plume interacts with nearby mid-ocean ridges. Moreover, the Ninetyeast Ridge in the model is created by on-ridge activities while the Kerguelen plume was located beneath the Australian plate. This also contrasts with earlier studies, which described the Ninetyeast Ridge as the result of the Indian plate passing over the plume. Furthermore, the Amsterdam-Saint Paul Plateau in the model is the result of plume material flowing from the upwelling toward the Southeast Indian Ridge, whereas previous geochemical studies attributed that volcanic province to a separate deep plume.
In summary, the three case studies presented in this thesis consistently highlight the importance of plume-ridge interaction in order to reconstruct the overall volcanic hotspot record as well as specific smaller features attributed to a certain hotspot. They also demonstrate that it is not necessary to attribute highly complicated properties to a specific plume in order to account for complex observations. Thus, this thesis contributes to the general understanding of plume dynamics and extends the very specific knowledge about the Réunion, Iceland, and Kerguelen mantle plumes.
With recent advances in the area of information extraction, automatically extracting structured information from vast amounts of unstructured textual data has become an important task, since it is infeasible for humans to capture all this information manually. Named entities (e.g., persons, organizations, and locations), which are crucial components in texts, are usually the subjects of structured information extracted from textual documents. Therefore, the task of named entity mining receives much attention. It consists of three major subtasks: named entity recognition, named entity linking, and relation extraction.
These three tasks build up the entire pipeline of a named entity mining system, and each of them has its own challenges and can be employed for further applications. As a fundamental task in the natural language processing domain, named entity recognition has a long research history, and many existing approaches produce reliable results. The task aims to extract mentions of named entities in text and identify their types. Named entity linking has recently received much attention with the development of knowledge bases that contain rich information about entities. The goal is to disambiguate mentions of named entities and link them to the corresponding entries in a knowledge base. Relation extraction, as the final step of named entity mining, is a highly challenging task: it extracts semantic relations between named entities, e.g., the ownership relation between two companies.
In this thesis, we review the state of the art of the named entity mining domain in detail, including valuable features, techniques, evaluation methodologies, and so on. Furthermore, we present two of our approaches, which focus on the named entity linking and relation extraction tasks, respectively.
To solve the named entity linking task, we propose the entity linking technique, BEL, which operates on a textual range of relevant terms and aggregates decisions from an ensemble of simple classifiers. Each of the classifiers operates on a randomly sampled subset of the above range. In extensive experiments on hand-labeled and benchmark datasets, our approach outperformed state-of-the-art entity linking techniques, both in terms of quality and efficiency.
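The ensemble strategy of aggregating decisions from simple classifiers, each operating on a randomly sampled subset of the relevant context terms, can be sketched roughly as follows. All names, the toy knowledge base, and the majority-vote aggregation are illustrative assumptions, not BEL's actual components:

```python
import random
from collections import Counter

def link_mention(mention, context_terms, candidates, scorers, n_samples=5, k=10, seed=0):
    """Aggregate linking decisions from an ensemble of simple classifiers,
    each of which sees only a randomly sampled subset of the context terms."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        sample = rng.sample(context_terms, min(k, len(context_terms)))
        for score in scorers:
            # each classifier picks its best candidate given the sampled context
            best = max(candidates, key=lambda cand: score(mention, sample, cand))
            votes[best] += 1
    return votes.most_common(1)[0][0]   # majority vote over all decisions

# toy scorer: overlap between the context sample and a candidate's keywords
KB = {"Paris_France": {"france", "capital", "seine"},
      "Paris_Texas":  {"texas", "usa", "county"}}

def overlap(mention, sample, cand):
    return len(KB[cand] & set(sample))

ctx = ["the", "capital", "of", "france", "on", "the", "seine"]
print(link_mention("Paris", ctx, list(KB), [overlap]))   # Paris_France
```

Sampling different context subsets makes the individual decisions diverse, which is what the aggregation step exploits.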
For the task of relation extraction, we focus on extracting a specific group of difficult relation types: business relations between companies. These relations can be used to gain valuable insight into the interactions between companies and to perform complex analytics, such as predicting risk or valuing companies. Our semi-supervised strategy can extract business relations between companies based on only a few user-provided seed company pairs. In doing so, we also provide a solution to the problem of determining the direction of asymmetric relations, such as the ownership_of relation. We improve the reliability of the extraction process by using a holistic pattern identification method, which classifies the generated extraction patterns. Our experiments show that we can accurately and reliably extract new entity pairs occurring in the target relation by using as few as five labeled seed pairs.
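The seed-based idea follows the classic bootstrapping scheme: patterns are induced from the contexts of known pairs and then matched to find new ordered pairs, preserving the direction of the relation. The sentences, the crude context pattern, and the word-level matching below are invented for the sketch; the thesis additionally classifies patterns holistically to keep the loop from drifting:

```python
import re

SENTENCES = [
    "Alphabet acquired DeepMind in 2014.",
    "Facebook acquired WhatsApp for a record sum.",
    "Microsoft acquired GitHub last year.",
]

def bootstrap(seeds, sentences, rounds=2):
    """Semi-supervised bootstrapping sketch: learn the textual context between
    seed pairs as a pattern, then match it to extract new ordered pairs."""
    pairs = set(seeds)
    for _ in range(rounds):
        patterns = set()
        for a, b in pairs:                       # induce patterns from known pairs
            for s in sentences:
                ia, ib = s.find(a), s.find(b)
                if ia != -1 and ib != -1 and ia < ib:
                    patterns.add(s[ia + len(a):ib])
        for p in patterns:                       # apply patterns to find new pairs
            for s in sentences:
                m = re.search(r"(\w+)" + re.escape(p) + r"(\w+)", s)
                if m:
                    pairs.add((m.group(1), m.group(2)))
    return pairs

found = bootstrap({("Alphabet", "DeepMind")}, SENTENCES)
print(sorted(found))
```

Because the extracted tuples are ordered, the same mechanism fixes the direction of asymmetric relations such as ownership_of.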
Nowadays, graph data models are employed when relationships between entities have to be stored and are in the scope of queries. For each entity, this graph data model locally stores relationships to adjacent entities. Users employ graph queries to query and modify these entities and relationships. These graph queries use graph patterns to look up all subgraphs in the graph data that satisfy certain graph structures. These subgraphs are called graph pattern matches. However, graph pattern matching is NP-complete for subgraph isomorphism. Thus, graph queries can suffer from long response times when the number of entities and relationships in the graph data or in the graph patterns increases.
One possibility to improve graph query performance is to employ graph views that keep graph pattern matches for complex graph queries ready for later retrieval. However, when the graph data changes, these graph views must be maintained by means of incremental graph pattern matching to keep them consistent with the graph data from which they are derived. This maintenance adds subgraphs that satisfy a graph pattern to the graph views and removes subgraphs that no longer satisfy a graph pattern from the graph views.
Current approaches for incremental graph pattern matching employ Rete networks. Rete networks are discrimination networks that enumerate and maintain all graph pattern matches of certain graph queries by employing a network of condition tests, which implement partial graph patterns that together constitute the overall graph query. Each condition test stores all subgraphs that satisfy its partial graph pattern. Thus, Rete networks suffer from high memory consumption, because they store a large number of partial graph pattern matches. At the same time, it is precisely these partial graph pattern matches that enable Rete networks to update the stored graph pattern matches efficiently, because the network maintenance exploits the already stored partial matches to find new graph pattern matches. However, other kinds of discrimination networks exist that can perform better in time and space than Rete networks. Currently, these other kinds of networks are not used for incremental graph pattern matching.
This thesis employs generalized discrimination networks for incremental graph pattern matching. These discrimination networks permit a generalized network structure of condition tests to enable users to steer the trade-off between memory consumption and execution time for the incremental graph pattern matching. For that purpose, this thesis contributes a modeling language for the effective definition of generalized discrimination networks. Furthermore, this thesis contributes an efficient and scalable incremental maintenance algorithm, which updates the (partial) graph pattern matches that are stored by each condition test. Moreover, this thesis provides a modeling evaluation, which shows that the proposed modeling language enables the effective modeling of generalized discrimination networks. Furthermore, this thesis provides a performance evaluation, which shows that a) the incremental maintenance algorithm scales, when the graph data becomes large, and b) the generalized discrimination network structures can outperform Rete network structures in time and space at the same time for incremental graph pattern matching.
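As a toy illustration of why stored (partial) matches make maintenance incremental, consider a single two-edge path pattern a -> b -> c. The data structures and API below are invented for this sketch and correspond in detail neither to Rete networks nor to the generalized discrimination networks of the thesis; they only show that a new edge is combined with already indexed adjacency information instead of re-running the full match:

```python
class TwoEdgePathView:
    """Incremental maintenance sketch: keep all matches of the pattern
    a -> b -> c up to date while edges are inserted into the graph."""
    def __init__(self):
        self.out = {}        # adjacency: node -> set of successors
        self.inc = {}        # reverse adjacency: node -> set of predecessors
        self.matches = set() # materialized view of full pattern matches

    def add_edge(self, u, v):
        self.out.setdefault(u, set()).add(v)
        self.inc.setdefault(v, set()).add(u)
        # combine only the new edge with stored partial information,
        # instead of re-matching the pattern on the whole graph
        for w in self.out.get(v, ()):   # new edge acts as the first hop
            self.matches.add((u, v, w))
        for t in self.inc.get(u, ()):   # new edge acts as the second hop
            self.matches.add((t, u, v))

view = TwoEdgePathView()
view.add_edge("a", "b")
view.add_edge("b", "c")
print(view.matches)   # {('a', 'b', 'c')}
```

Each insertion costs only the degree of the affected nodes; the trade-off the thesis steers is how much such intermediate information the network stores.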
Data profiling is the computer science discipline of analyzing a given dataset for its metadata. The types of metadata range from basic statistics, such as tuple counts, column aggregations, and value distributions, to much more complex structures, in particular inclusion dependencies (INDs), unique column combinations (UCCs), and functional dependencies (FDs). If present, these statistics and structures serve to efficiently store, query, change, and understand the data. Most datasets, however, do not provide their metadata explicitly so that data scientists need to profile them.
While basic statistics are relatively easy to calculate, more complex structures present difficult, mostly NP-complete discovery tasks; even with good domain knowledge, it is hardly possible to detect them manually. Therefore, various profiling algorithms have been developed to automate the discovery. None of them, however, can process datasets of typical real-world size, because their resource consumptions and/or execution times exceed effective limits.
In this thesis, we propose novel profiling algorithms that automatically discover the three most popular types of complex metadata, namely INDs, UCCs, and FDs, which all describe different kinds of key dependencies. The task is to extract all valid occurrences from a given relational instance. The three algorithms build upon known techniques from related work and complement them with algorithmic paradigms, such as divide & conquer, hybrid search, progressivity, memory sensitivity, parallelization, and additional pruning, to greatly improve upon current limitations. Our experiments show that the proposed algorithms are orders of magnitude faster than related work. They are, in particular, now able to process datasets of real-world size, i.e., multiple gigabytes, with reasonable memory and time consumption.
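To make the discovery task concrete, a functional dependency X -> A is valid if no two tuples agree on X but differ on A. The exhaustive candidate enumeration below is a naive sketch without any of the pruning, hybrid search, or parallelization the thesis contributes; the example relation is invented:

```python
from itertools import combinations

def holds(rows, lhs, rhs):
    """Check whether the functional dependency lhs -> rhs holds in rows."""
    seen = {}
    for row in rows:
        key = tuple(row[c] for c in lhs)
        if seen.setdefault(key, row[rhs]) != row[rhs]:
            return False    # two tuples agree on lhs but differ on rhs
    return True

def discover_fds(rows, columns, max_lhs=2):
    """Naive FD discovery: test every candidate LHS combination against
    every remaining column (real algorithms prune this lattice heavily)."""
    fds = []
    for k in range(1, max_lhs + 1):
        for lhs in combinations(columns, k):
            for rhs in columns:
                if rhs not in lhs and holds(rows, lhs, rhs):
                    fds.append((lhs, rhs))
    return fds

rows = [
    {"zip": "14469", "city": "Potsdam", "street": "Main"},
    {"zip": "14469", "city": "Potsdam", "street": "Oak"},
    {"zip": "10115", "city": "Berlin",  "street": "Main"},
]
print(discover_fds(rows, ["zip", "city", "street"], max_lhs=1))
```

The exponential number of LHS candidates in this loop is exactly why the naive approach fails on real-world datasets.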
Due to the importance of data profiling in practice, industry has built various profiling tools to support data scientists in their quest for metadata. These tools provide good support for basic statistics, and they are also able to validate individual dependencies, but they lack real discovery features, even though some fundamental discovery techniques have been known for more than 15 years. To close this gap, we developed Metanome, an extensible profiling platform that incorporates not only our own algorithms but also many further algorithms from other researchers. With Metanome, we make our research accessible to all data scientists and IT professionals who are tasked with data profiling. Besides the actual metadata discovery, the platform also offers support for the ranking and visualization of metadata result sets.
Being able to discover the entire set of syntactically valid metadata naturally introduces the subsequent task of extracting only the semantically meaningful parts. This is a challenge, because the complete metadata results are surprisingly large (sometimes larger than the datasets themselves) and judging their use-case-dependent semantic relevance is difficult. To show that the completeness of these metadata sets is extremely valuable for their usage, we finally exemplify the efficient processing and effective assessment of functional dependencies for the use case of schema normalization.
Natural products and their derivatives have always been a source of drug leads. In particular, bacterial compounds have played an important role in drug development, for example in the field of antibiotics. A decrease in the discovery of novel leads from natural sources and the hope of finding new leads through the generation of large libraries of drug-like compounds by combinatorial chemistry aimed at specific molecular targets drove the pharmaceutical companies away from research on natural products. However, recent technological advances in genetics, bioinformatics and analytical chemistry have revived the interest in natural products. The ribosomally synthesized and post-translationally modified peptides (RiPPs) are a group of natural products generated by the action of post-translationally modifying enzymes on precursor peptides translated from mRNA by ribosomes. The great substrate promiscuity exhibited by many of the enzymes from RiPP biosynthetic pathways has led to the generation of hundreds of novel synthetic and semisynthetic variants, including variants carrying non-canonical amino acids (ncAAs). The microviridins are a family of RiPPs characterized by their atypical tricyclic structure composed of lactone and lactam rings, and by their activity as serine protease inhibitors. The generalities of their biosynthetic pathway have already been described; however, the lack of information on details such as the protease responsible for cleaving off the leader peptide from the cyclic core peptide has impeded the fast and cheap production of novel microviridin variants. In the present work, knowledge on leader peptide activation of enzymes from other RiPP families has been extrapolated to the microviridin family, making it possible to bypass the need for a leader peptide. This feature allowed for the exploitation of the microviridin biosynthetic machinery for the production of novel variants through the establishment of an efficient one-pot in vitro platform.
The relevance of this chemoenzymatic approach has been exemplified by the synthesis of novel potent serine protease inhibitors from both rationally-designed peptide libraries and bioinformatically predicted microviridins. Additionally, new structure-activity relationships (SARs) could be inferred by screening microviridin intermediates. The significance of this technique was further demonstrated by the simple incorporation of ncAAs into the microviridin scaffold.
Recognizing, understanding, and responding to quantities are important skills for human beings. We can easily communicate quantities, and we are extremely efficient at adapting our behavior to number-related tasks. One common task is to compare quantities. We also use symbols like digits in number-related tasks. To solve tasks involving digits, we must rely on our previously learned internal number representations.
This thesis elaborates on the process of number comparison with the use of noisy mental representations of numbers, the interaction of number and size representations and how we use mental number representations strategically. For this, three studies were carried out.
In the first study, participants had to decide which of two presented digits was numerically larger. They had to respond with a saccade in the direction of the anticipated answer. Using only a small set of meaningfully interpretable parameters, a variant of random walk models is described that accounts for response time, error rate, and variance of response time for the full matrix of 72 digit pairs. In addition, the random walk model predicts a numerical distance effect even for error response times, and this effect clearly occurs in the observed data. Error responses were systematically faster than the corresponding correct responses. However, in contrast to standard assumptions often made in random walk models, this account required the distributions of step sizes of the induced random walks to be asymmetric in order to account for this asymmetry between correct and incorrect responses.
Furthermore, the presented model provides a well-defined framework to investigate the nature and scale (e.g., linear vs. logarithmic) of the mapping of numerical magnitude onto its internal representation. Comparing the fits of the proposed models with linear and logarithmic mapping suggests that the logarithmic mapping is to be preferred.
Finally, we discuss how our findings can help interpret complex findings (e.g., conflicting speed vs. accuracy trends) in applied studies that use number comparison as a well-established diagnostic tool. Furthermore, a novel oculomotor effect is reported, namely the saccadic overshoot effect: the participants responded by saccadic eye movements, and the amplitude of these saccadic responses decreases with numerical distance.
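The core of such a random walk account can be simulated in a few lines. The thresholds, noise level, and the symmetric Gaussian step distribution below are illustrative simplifications (the thesis in fact argues for asymmetric step distributions); the sketch only shows how a logarithmic magnitude mapping produces the numerical distance effect on response times:

```python
import math, random

def compare_digits(a, b, threshold=10.0, noise=2.0, rng=None):
    """Random-walk sketch of digit comparison: each step adds noisy evidence
    proportional to the difference of logarithmically mapped magnitudes;
    a response is given once an evidence bound is crossed."""
    rng = rng or random.Random(1)
    drift = math.log(a) - math.log(b)   # logarithmic internal mapping
    evidence, steps = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + rng.gauss(0.0, noise)
        steps += 1
    return ("a" if evidence > 0 else "b"), steps

def mean_rt(a, b, n=2000):
    rng = random.Random(42)
    return sum(compare_digits(a, b, rng=rng)[1] for _ in range(n)) / n

# numerical distance effect: closer digits -> slower decisions on average
print(mean_rt(8, 9), "mean steps for 8 vs 9")
print(mean_rt(2, 9), "mean steps for 2 vs 9")
```

Because log(8) - log(9) is much smaller in magnitude than log(2) - log(9), the drift for close digit pairs is weaker and the walk takes longer to reach a bound.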
For the second study, an experimental design was developed that allows us to apply signal detection theory to a task in which participants had to decide whether a presented digit was physically smaller or larger. An open question is whether the benefit in congruent conditions (numerical magnitude matching physical size) stems from better perception than in incongruent conditions, or whether the number-size congruency effect is instead mediated by response biases due to number magnitude. Signal detection theory is a perfect tool to distinguish between these two alternatives. It provides two parameters, namely sensitivity and response bias. Changes in sensitivity reflect actual differences in task performance due to real differences in perceptual processes, whereas changes in the response bias merely reflect strategic influences, such as a stronger preparation (activation) of an anticipated answer. Our results clearly demonstrate that the number-size congruency effect cannot be reduced to mere response bias effects, and that genuine sensitivity gains for congruent number-size pairings contribute to the effect.
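The two parameters can be computed with the standard signal detection formulas, d' = z(hit rate) - z(false alarm rate) and c = -(z(hit rate) + z(false alarm rate))/2. The outcome counts below are invented for illustration; only the formulas are standard:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity d' and response bias (criterion c) from a
    2x2 outcome table, following standard signal detection theory."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# hypothetical counts: a genuine perceptual benefit in congruent trials
# shows up as a larger d', whereas a mere response bias would only shift c
d_cong, c_cong = sdt_measures(90, 10, 10, 90)
d_incong, c_incong = sdt_measures(70, 30, 30, 70)
print(round(d_cong, 2), round(d_incong, 2))
```

This separation is what lets the study attribute the congruency benefit to sensitivity rather than bias.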
Third, participants had to perform a SNARC task, deciding whether a presented digit was odd or even. The local transition probability of the irrelevant attribute (magnitude) was varied, while the local transition probability of the relevant attribute (parity) and the global occurrence probability of each stimulus were kept constant. Participants were quite sensitive in recognizing the underlying local transition probability of the irrelevant attribute. A gain in performance was observed for actual repetitions of the irrelevant attribute, relative to changes of the irrelevant attribute, in high-repetition conditions compared to low-repetition conditions. One interpretation of these findings is that information about the irrelevant attribute (magnitude) in the previous trial is used as an informative precue, so that participants can prepare early processing stages in the current trial, with the corresponding benefits and costs typical of standard cueing studies.
Finally, the results reported in this thesis are discussed in relation to recent studies in numerical cognition.
Proteins are molecules that are essential for life and carry out an enormous number of functions in organisms. To this end, they change their conformation and bind to other molecules. However, the interplay between conformational change and binding is not fully understood. In this work, this interplay is investigated with molecular dynamics (MD) simulations of the protein-peptide system Mdm2-PMI and by analysis of data from relaxation experiments.
The central task is to uncover the binding mechanism, which is described by the sequence of (partial) binding events and conformational change events including their probabilities. In the simplest case, the binding mechanism is described by a two-step model: binding followed by conformational change or conformational change followed by binding. In the general case, longer sequences with multiple conformational changes and partial binding events are possible, as well as parallel pathways that differ in their sequences of events. The theory of Markov state models (MSMs) provides the theoretical framework in which all these cases can be modeled. For this purpose, MSMs are estimated in this work from MD data, and rate equation models, which are related to MSMs, are inferred from experimental relaxation data.
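In its simplest form, an MSM is estimated by counting transitions between discrete states at a fixed lag time and row-normalizing the count matrix. The toy trajectory and state labels below are invented, and this maximum-likelihood sketch imposes no reversibility constraint, unlike the TRAM/TRAMMBAR estimators developed in the thesis:

```python
def estimate_msm(dtraj, n_states, lag=1):
    """Maximum-likelihood MSM sketch: count transitions at a given lag time
    and row-normalize the counts into a transition probability matrix."""
    counts = [[0] * n_states for _ in range(n_states)]
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i][j] += 1
    # assumes every state is visited at least once at this lag
    return [[c / sum(row) for c in row] for row in counts]

# toy discretized trajectory over 2 states: 0 = unbound, 1 = bound
dtraj = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0]
T = estimate_msm(dtraj, n_states=2)
print(T)   # row-stochastic transition matrix
```

From such a transition matrix, quantities like dissociation rates follow from the slow relaxation timescales of the model.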
The MD simulation and Markov modeling of the PMI-Mdm2 system shows that PMI and Mdm2 can bind via multiple pathways. A main result of this work is a dissociation rate on the order of one event per second, which was calculated using Markov modeling and is in agreement with experiment. So far, dissociation rates and transition rates of this magnitude have only been calculated with methods that speed up transitions by acting with time-dependent, external forces on the binding partners. The simulation technique developed in this work, in contrast, allows the estimation of dissociation rates from the combination of free energy calculation and direct MD simulation of the fast binding process. Two new statistical estimators TRAM and TRAMMBAR are developed to estimate a MSM from the joint data of both simulation types.
In addition, a new analysis technique for time-series data from chemical relaxation experiments is developed in this work. It makes it possible to identify which of the above-mentioned two-step mechanisms underlies the data. The new method is valid for a broader range of concentrations than previous methods and therefore allows the concentrations to be chosen such that the mechanism can be uniquely identified. It is successfully tested with data for the binding of recoverin to a rhodopsin kinase peptide.
Start-up incentives targeted at unemployed individuals have become an important tool of the Active Labor Market Policy (ALMP) to fight unemployment in many countries in recent years. In contrast to traditional ALMP instruments like training measures, wage subsidies, or job creation schemes, which are aimed at reintegrating unemployed individuals into dependent employment, start-up incentives are a fundamentally different approach to ALMP, in that they intend to encourage and help unemployed individuals to exit unemployment by entering self-employment and, thus, by creating their own jobs. In this sense, start-up incentives for unemployed individuals serve not only as employment and social policy to activate job seekers and combat unemployment but also as business policy to promote entrepreneurship. The corresponding empirical literature on this topic so far has been mainly focused on the individual labor market perspective, however. The main part of the thesis at hand examines the new start-up subsidy (“Gründungszuschuss”) in Germany and consists of four empirical analyses that extend the existing evidence on start-up incentives for unemployed individuals from multiple perspectives and in the following directions:
First, it provides the first impact evaluation of the new start-up subsidy in Germany. The results indicate that participation in the new start-up subsidy has significant positive and persistent effects on both reintegration into the labor market and the income profiles of participants, in line with previous evidence on comparable German and international programs, which emphasizes the general potential of start-up incentives as part of the broader ALMP toolset. Furthermore, a new, innovative sensitivity analysis of the applied propensity score matching approach integrates findings from entrepreneurship and labor market research about the key role of an individual’s personality for the start-up decision, business performance, and general labor market outcomes into the impact evaluation of start-up incentives. The sensitivity analysis with regard to the inclusion and exclusion of usually unobserved personality variables reveals that differences in the estimated treatment effects are small in magnitude and mostly insignificant. Consequently, concerns about potential overestimation of treatment effects in previous evaluation studies of similar start-up incentives due to usually unobservable personality variables are less justified, as long as the set of observed control variables is sufficiently informative (Chapter 2).
Second, the thesis expands our knowledge about the longer-term business performance and potential of subsidized businesses arising from the start-up subsidy program. In absolute terms, the analysis shows that a relatively high share of subsidized founders successfully survives in the market with their original businesses in the medium to long run. The subsidy also yields a “double dividend” to a certain extent in terms of additional job creation. Compared to “regular”, i.e., non-subsidized new businesses founded by non-unemployed individuals in the same quarter, however, the economic and growth-related impulses set by participants of the subsidy program are only limited with regard to employment growth, innovation activity, or investment. Further investigations of possible reasons for these differences show that differential business growth paths of subsidized founders in the longer run seem to be mainly limited by higher restrictions to access capital and by unobserved factors, such as less growth-oriented business strategies and intentions, as well as lower (subjective) entrepreneurial persistence. Taken together, the program has only limited potential as a business and entrepreneurship policy intended to induce innovation and economic growth (Chapters 3 and 4).
And third, an empirical analysis on the level of German regional labor markets yields that there is a high regional variation in subsidized start-up activity relative to overall new business formation. The positive correlation between regular start-up intensity and the share among all unemployed individuals who participate in the start-up subsidy program suggests that (nascent) unemployed founders also profit from the beneficial effects of regional entrepreneurship capital. Moreover, the analysis of potential deadweight and displacement effects from an aggregated regional perspective emphasizes that the start-up subsidy for unemployed individuals represents a market intervention into existing markets, which affects incumbents and potentially produces inefficiencies and market distortions. This macro perspective deserves more attention and research in the future (Chapter 5).
The Cauchy problem for the linearised Einstein equation and the Goursat problem for wave equations
(2017)
In this thesis, we study two initial value problems arising in general relativity. The first is the Cauchy problem for the linearised Einstein equation on general globally hyperbolic spacetimes, with smooth and distributional initial data. We extend well-known results by showing that given a solution to the linearised constraint equations of arbitrary real Sobolev regularity, there is a globally defined solution, which is unique up to addition of gauge solutions. Two solutions are considered equivalent if they differ by a gauge solution. Our main result is that the equivalence class of solutions depends continuously on the corresponding equivalence class of initial data. We also solve the linearised constraint equations in certain cases and show that there exist arbitrarily irregular (non-gauge) solutions to the linearised Einstein equation on Minkowski spacetime and Kasner spacetime.
In the second part, we study the Goursat problem (the characteristic Cauchy problem) for wave equations. We specify initial data on a smooth compact Cauchy horizon, which is a lightlike hypersurface. This problem has not been studied much, since it is an initial value problem on a non-globally hyperbolic spacetime. Our main result is that given a smooth function on a non-empty, smooth, compact, totally geodesic and non-degenerate Cauchy horizon and a so-called admissible linear wave equation, there exists a unique solution that is defined on the globally hyperbolic region and restricts to the given function on the Cauchy horizon. Moreover, the solution depends continuously on the initial data. A linear wave equation is called admissible if the first order part satisfies a certain condition on the Cauchy horizon, for example if it vanishes. Interestingly, both existence of solution and uniqueness are false for general wave equations, as examples show. If we drop the non-degeneracy assumption, examples show that existence of solution fails even for the simplest wave equation. The proof requires precise energy estimates for the wave equation close to the Cauchy horizon. In case the Ricci curvature vanishes on the Cauchy horizon, we show that the energy estimates are strong enough to prove local existence and uniqueness for a class of non-linear wave equations. Our results apply in particular to the Taub-NUT spacetime and the Misner spacetime. It has recently been shown that compact Cauchy horizons in spacetimes satisfying the null energy condition are necessarily smooth and totally geodesic. Our results therefore apply if the spacetime satisfies the null energy condition and the Cauchy horizon is compact and non-degenerate.
Self-adaptive data quality
(2017)
Carrying out business processes successfully is closely linked to the quality of an organization's data inventory. Deficiencies in data quality cause problems: Incorrect address data prevents (timely) shipments to customers. Erroneous orders lead to returns and thus to unnecessary effort. Incorrect pricing forces companies to forgo revenue or impairs customer satisfaction. If orders or customer records cannot be retrieved, complaint management takes longer. Due to erroneous inventories, too little or too much stock may be reordered.
A particular data quality problem, and the cause of many of the issues mentioned above, is duplicates in databases. Duplicates are different representations of the same real-world object in a dataset. Because these representations differ from each other, they are hard to match automatically. Moreover, the number of comparisons required to find duplicates grows quadratically with the dataset size. To cleanse the data, these duplicates must be detected and removed. Duplicate detection is a laborious process: to achieve satisfactory results, appropriate software must be created and configured (similarity measures, partitioning keys, thresholds, etc.). Both require considerable manual effort and experience.
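The quadratic blow-up and the role of similarity measures and thresholds can be sketched in a few lines. The measure, the threshold value, and the function names below are illustrative assumptions, not the configuration developed in the thesis:

```python
# Minimal sketch of naive duplicate detection: every record pair is
# compared with a string similarity measure and flagged as a duplicate
# candidate when the score exceeds a threshold.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Normalised edit-based similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_duplicates(records: list[str], threshold: float = 0.85) -> list[tuple[int, int]]:
    # O(n^2) pairs: exactly the cost that partitioning later avoids.
    return [(i, j)
            for (i, a), (j, b) in combinations(enumerate(records), 2)
            if similarity(a, b) >= threshold]

records = ["John Smith", "Jon Smith", "Jane Doe"]
print(find_duplicates(records))  # → [(0, 1)]
```

Note how both the choice of measure and the threshold are free parameters: shifting the threshold trades precision against recall, which is why configuring them traditionally requires experience.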
This thesis addresses the automation of parameter selection for duplicate detection and presents several novel approaches that eliminate the need for human expertise in parts of the duplicate detection process.
A pre-processing step is introduced that analyzes the datasets in question and classifies their attributes semantically. Not only do these annotations help in understanding the respective datasets, they also facilitate subsequent steps, for example by selecting appropriate similarity measures or by normalizing the data upfront. This approach works without schema information.
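One schema-less way to classify attributes semantically is to match a column's values against value patterns and label the column by majority vote. The labels and patterns below are illustrative assumptions, not the classifier developed in the thesis:

```python
# Hedged sketch of schema-less attribute classification: each value is
# matched against simple patterns and the column is labelled by the
# most frequent match.
import re
from collections import Counter

PATTERNS = {
    "email":  re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "date":   re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "number": re.compile(r"^\d+$"),
}

def classify_column(values: list[str]) -> str:
    votes = Counter()
    for v in values:
        for label, pattern in PATTERNS.items():
            if pattern.match(v):
                votes[label] += 1
                break
        else:
            votes["text"] += 1   # no pattern matched this value
    return votes.most_common(1)[0][0]

print(classify_column(["a@b.de", "x@y.com", "oops"]))  # → email
```

A label such as `email` then suggests, for instance, an exact or domain-aware similarity measure rather than a generic edit distance.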
Following that, we present a partitioning technique that greatly reduces the number of pair comparisons in the duplicate detection process. The approach automatically finds particularly suitable partitioning keys that simultaneously allow for effective and efficient duplicate retrieval. By means of a user study, we demonstrate that this technique finds partitioning keys that outperform expert suggestions and requires no manual configuration. Furthermore, the approach can be applied independently of the attribute types.
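The core idea of partitioning can be sketched as follows: records are grouped by a partitioning key, and only records sharing a key are compared. The key used here (first three letters of the surname) is an illustrative stand-in for the automatically selected keys described above:

```python
# Sketch of partitioning (blocking) for duplicate detection: only
# record pairs within the same partition become candidate pairs.
from collections import defaultdict
from itertools import combinations

def partitioning_key(record: dict) -> str:
    # Illustrative hand-picked key; the thesis selects keys automatically.
    return record["surname"][:3].lower()

def candidate_pairs(records: list[dict]):
    partitions = defaultdict(list)
    for i, r in enumerate(records):
        partitions[partitioning_key(r)].append(i)
    for ids in partitions.values():
        yield from combinations(ids, 2)   # compare only within a partition

records = [
    {"surname": "Smith"}, {"surname": "Smyth"},
    {"surname": "Doe"},   {"surname": "Smithe"},
]
print(list(candidate_pairs(records)))  # → [(0, 3)] instead of all 6 pairs
```

The example also shows why key quality matters: this key misses the true duplicate pair "Smith"/"Smyth", illustrating the effectiveness-versus-efficiency trade-off that automatic key selection must balance.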
To measure the success of a duplicate detection process and to execute the described partitioning approach, a gold standard is required that provides information about the actual duplicates in a training dataset. This thesis presents a technique that uses existing duplicate detection results and crowdsourcing to create a near-gold standard that can be used for these purposes. Another part of the thesis describes and evaluates strategies for reducing these crowdsourcing costs and reaching consensus with less effort.
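One generic way to cut crowdsourcing costs is early stopping: instead of always collecting a fixed number of worker votes per record pair, voting stops as soon as one answer can no longer be overtaken. This stopping rule is a common strategy sketched here for illustration, not necessarily the one evaluated in the thesis:

```python
# Sketch of majority voting with early stopping for crowd-sourced
# duplicate labels: stop as soon as the outcome is decided.
def consensus(votes: list[bool], max_votes: int = 5) -> tuple[bool, int]:
    """Return (is_duplicate, number of votes actually needed)."""
    yes = no = 0
    for used, v in enumerate(votes[:max_votes], start=1):
        yes, no = yes + v, no + (not v)
        remaining = max_votes - used
        if yes > no + remaining:   # "duplicate" can no longer lose
            return True, used
        if no > yes + remaining:   # "non-duplicate" can no longer lose
            return False, used
    return yes >= no, min(len(votes), max_votes)

print(consensus([True, True, True]))  # → (True, 3): two votes saved
```

With a budget of five votes per pair, three agreeing votes already decide the pair, so the remaining two paid judgments are never requested.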