Abstract
The commandment of procreation is one of the most important commandments in Judaism and in many ways defines the very essence of the religious Jew's existence. Religious couples suffering from infertility are often torn between their religious obligation and the social pressure of their community on the one hand, and their inability to bring children into the world naturally on the other. Since the mid-20th century, medical fertility technologies have become accessible to the general public in Israel and worldwide, driving a social revolution that enabled childless couples, for the first time, to realize their dream and become parents. This medical breakthrough did not escape the attention of the religious public across its various streams, and it shook the worldview of their leaders, who were called upon to provide an adequate halakhic response balancing the welfare of their communities against the preservation of the values and commandments of the Jewish religion. Against the background of these changes, an important and fascinating halakhic and academic discourse developed concerning the place of modern technology in a society governed by traditional codes. Nevertheless, the living dialogue between Jewish religious leaders and their communities, documented as collections of questions and answers (responsa) on contemporary halakhic problems in the field of fertility, has not been sufficiently studied.
עבודתנו עוסקת במחקר ספרות השו"ת ובגישת פוסקי ההלכה להסתייעות בטכנולוגיית ההזרעה המלאכותית ובטיפולי פוריות חדשניים נוספים שהתפתחו בעקבותיה כגון הפריה חוץ גופית, פונדקאות ותרומת ביציות. העבודה סוקרת את התגבשות ועיצוב ההלכה בשאלה זו, כפי שעולה מספרות השו"ת החל מתקופת הראשונים (המאות ה 11- עד ה- 16 ) ועד לפוסקים מתקופת האחרונים (המאה ה- 16 ואילך), ומתמקדת בעיקר בתקופה המודרנית של המאה האחרונה. במרכז העבודה יעמדו מעמדם ההלכתי של האם והוולד בעקבות שימוש בהזרעה והפריה מלאכותית וכן השלכותיהן ההלכתיות של השימוש הרפואי בזרע.
תרומתו של מחקר זה הינה בהצגת וניתוח התמודדות הפוסקים עם העימות הערכי וההלכתי שמעוררות תמורות במבנה המשפחה ובדפוסי הורות מסורתיים כתוצאה משימוש הולך וגובר בטכנולוגיות פריון מתקדמות בחברה המודרנית. יתר על כן, מחקר זה מרחיב את הדיון ההלכתי אל מעבר לעמדות הפוסקים ביהדות האורתודוקסית, הנצמדים לעקרונות ההלכה ההיסטורית, ומציג לראשונה אף את קשת הדעות של רבנים המשתייכים לזרמי היהדות המתקדמת והקונסרבטיבית, הדוגלים בפיתוח ההלכה והתאמתה למציאות החיים המשתנה. כך, מעניקה עבודת מחקר זו תמונה שלמה ומקיפה יותר באשר להתמודדות הרבנים עם שאלת ההזרעה המלאכותית ומציגה את רב גוניותה של ההלכה כפי שמשתקפת בזרמי היהדות השונים בישראל ובתפוצות.
העבודה מחולקת לשישה חלקים אשר יפורטו בקצרה להלן.
Part One - Introduction to the question of artificial insemination in the responsa literature
This part provides a theoretical and methodological introduction to the question of artificial insemination and aims to equip the reader with the historical background and research tools needed for the remainder of the work. The first chapter surveys religious commandments and cultural views concerning procreation in Jewish society, while the second chapter lays the methodological and historical foundations required for studying the question of artificial insemination in the responsa literature.
Chapter One - Religious commandments and cultural views concerning procreation in Jewish society
This chapter presents the ideological doctrine underlying the halakhic justification for permitting fertility treatments. It discusses the importance of the commandment of procreation as it emerges from early halakhic sources, and the historical and cultural background to the centrality of this commandment and related commandments in the Jewish religion. The chapter demonstrates how the importance of the commandment has shaped the pattern of the Jewish family throughout history and up to the present day, and concludes with a brief discussion of medical-ethical questions raised by the accessibility of advanced fertility technologies in the modern State of Israel.
Chapter Two - Methodological and historical foundations for studying the question of artificial insemination in the responsa literature
This chapter first presents the aims and research methods chosen for analysing the rulings on the question of artificial insemination. Since the work deals with an innovative medical question and its impact on the religious world, the research employs a multidimensional analysis combining several research methods, foremost among them the dogmatic-historical method, together with a critical analysis of the policy considerations underlying the decisors' reasoning according to the school of halakhic realism. The chapter then surveys and discusses the principal historical sources dealing with supernatural conception in a bath or through lying on sheets, which underlie the decisors' rulings on the implications of artificial insemination for the halakhic status of the woman and the newborn.
Part Two - The Orthodox responsa literature on artificial insemination from the husband
The second part of the research analyses the Orthodox responsa literature on the question of artificial insemination from the husband, examining the conceptual development of the prohibition on wasting seed and the decisors' positions on artificial insemination from the husband as discussed in the responsa literature.
Chapter One - The conceptual development of the prohibition on wasting seed
This chapter surveys the historical sources of the prohibition on wasting seed. The prohibition touches all areas of the religious couple's life, above all sexuality and fertility, and has had a substantial influence on the modesty restrictions subsequently imposed on male sexuality. Scholars dispute whether the prohibition derives from the ancient sources, such as the Talmud, or whether it intensified as a result of rabbinic stringencies. Our research suggests that the second possibility is the more plausible one. The Talmudic sages display a lenient approach to the prohibition, yet express revulsion at the wasteful emission of seed, whether out of pietistic thinking or out of the belief that engaging in sexual self-gratification might divert a person from fulfilling the commandment of procreation. By contrast, in the Kabbalistic literature, particularly the Zohar, the dimensions of the prohibition are markedly intensified, and this stringency later penetrated the world of halakha.
Chapter Two - Discussion of insemination from the husband in the responsa literature
This chapter discusses the rulings of prominent Orthodox rabbis on the question of insemination from the husband. In the Orthodox responsa literature, the influence of the prohibition on the wasteful emission of seed is evident in the decisors' considerations whether to permit artificial insemination from the husband. Lenient decisors such as Rabbis Feinstein, Nebenzahl and Yosef attribute greater weight to the importance of the commandment of procreation and accordingly narrow the scope of the prohibition. By contrast, stringent decisors, foremost among them Rabbis Tannenbaum, Kook, Uziel, Hadaya and Waldenberg, oppose fertility treatments for fear of wasting seed and because of the spiritual consequences of transgressing the prohibition. A further substantive question in the halakhic discourse, on which the rabbis were divided, is whether the commandment to be fruitful and multiply can be fulfilled through a pregnancy achieved with medical assistance. Decisors of the period of the Acharonim, such as Rabbis Tannenbaum, Segal and Azulai (the Chida), and in their wake prominent 20th- and 21st-century decisors such as Rabbis Uziel, Hadaya, Waldenberg and Kanievsky, held that the commandment can be fulfilled only naturally. By contrast, Rabbis Nebenzahl, Yosef, Weinberg and Auerbach attributed greater importance to the actor's intention and to the desired outcome of the pregnancy, which in their view realises the rationale underlying the commandment to be fruitful and multiply and the commandment to settle the world.
A further question arising in connection with insemination from the husband and discussed in this chapter is the problem of halakhic infertility. The rabbis' insistence on preserving the laws of niddah in their traditional form exposes religious women to uncertainty regarding their purity and subjects the sexual and reproductive lives of entire families to rabbinic authority.
Part Three - The Orthodox responsa literature on artificial insemination from a donor
The third part of the research deals with the Orthodox halakhic discussion of artificial insemination from a Jewish or non-Jewish donor to a married woman. Most decisors objected to sperm donation to a married woman, whether from a Jewish donor or from a non-Jew, on halakhic and ideological grounds.
Chapter One - The question of artificial insemination from a Jewish donor
Sperm donation from an unrelated Jewish donor raises numerous halakhic concerns regarding the woman's halakhic status, such as adultery and the fulfilment of the commandments of levirate marriage (yibbum) and chalitzah. Concerns were likewise raised regarding the lineage and halakhic fitness of the child, such as the fear of mamzerut (halakhic illegitimacy), incest between the donor's children, harm to priestly status, and problems of personal status arising from uncertainty about the donor's identity. In addition, the resolution of these questions bears on the child's entitlement to the social rights due from his father, such as the right to maintenance and inheritance. Halakhic rulings over the last century have adopted the position of Rabbi Feinstein, according to which only a pregnancy resulting from direct sexual contact between a married woman and another man, as opposed to artificial insemination, can harm the status of the woman and the child. Nevertheless, many decisors forbade artificial insemination from a Jewish man as a precaution, on account of these concerns.
Chapter Two - Artificial insemination from a non-Jewish donor to a married woman
Although in the case of sperm donation from a non-Jew the child bears no lineage to his non-Jewish father and his fitness is determined solely by the mother's status, many decisors object to the practice. The decisors cite numerous arguments for prohibiting sperm donation from a non-Jewish man, among them desecration of God's name, interference with the practice of levirate marriage and chalitzah, doubts regarding inheritance rights, the encouragement of sexual licentiousness, and harm to the stability of the traditional family unit. They further fear confusion about the identity of the child's father and harm to priestly status. During the 20th century, a tendency is evident among decisors engaged with Kabbalah to attribute negative traits to non-Jewish sperm, traits which pass to the child and thereby impair the sanctity of the Jewish people. Beyond the halakhic questions, this chapter raises the ethical difficulties posed by anonymous sperm donation, which prevents the child from tracing his biological roots.
Part Four - The attitude of Orthodox rulings to changes in the traditional family structure resulting from the use of fertility technologies
This part analyses the attitude of Orthodox halakhic rulings to changes in the traditional family structure resulting from the use of fertility technologies and comprises three chapters: artificial insemination for an unmarried woman, in-vitro fertilisation, and establishing an alternative family from an Orthodox halakhic perspective.
Chapter One - Artificial insemination for an unmarried woman
The halakhic consensus is that the child of an unmarried woman is not considered a mamzer, even if his biological father is Jewish. Nevertheless, certain decisors forbade artificial insemination in this case out of concern for the child's lineage, incest, and the usurping of the other siblings' inheritance. Some decisors call for prohibiting artificial insemination for an unmarried woman in the name of the child's welfare rather than for halakhic reasons. A further possibility whose halakhic implications are discussed in this chapter is artificial insemination of a single or widowed woman using the sperm of a deceased man. It should be noted that the use of this technology is controversial among the rabbis because of the importance Judaism attributes to the value of respect for the dead.
Chapter Two - In-vitro fertilisation
In-vitro fertilisation can be performed for a married couple with or without recourse to egg or sperm donation. Decisors who permitted the use of this technology expressed concern about its halakhic implications because of the difficulty of supervising a process that takes place outside the human body. Opponents of in-vitro fertilisation held that it interferes with the divine work of creation and may morally degrade humanity towards the design and cloning of human beings.
In-vitro fertilisation raises novel halakhic questions, foremost among them the question of defining motherhood, as in cases of surrogacy and egg donation, from which most of the leading decisors of the 20th century distanced themselves. The determination adopted in contemporary halakhic discourse holds that the baby's mother is the woman who gave birth to him, even if he was born from a donated egg. This ruling is implicit in the rationale of the laws regulating surrogacy in the State of Israel, which impose halakhic restrictions on the choice of surrogate in order to prevent concerns of mamzerut or questions regarding the child's Jewishness.
Chapter Three - Establishing an alternative family from an Orthodox halakhic perspective
A fundamental halakhic problem exists in institutionalising same-sex relationships in Orthodox Judaism. The Orthodox responsa literature does not explicitly address the use of artificial insemination for establishing a same-sex family. Nevertheless, much can be learned about the Orthodox approach from contemporary responsa dealing with homosexuality and with fulfilling the commandment of procreation in this case. The prevailing opinion among decisors today is that a same-sex relationship is forbidden, as are its institutionalisation and the fulfilment of the commandment to be fruitful and multiply within it. It may therefore be concluded that the rabbis would not permit the use of artificial insemination for fulfilling the commandment of procreation in a same-sex family.
Part Five - The position of Reform and Conservative rabbis on the question of artificial insemination
This part focuses on the Reform and Conservative streams of Judaism and presents the positions of their rabbis on artificial insemination, contrasting them with the positions of Orthodox decisors.
Chapter One - The position of the Reform rabbinate on the question of artificial insemination
The importance of the commandment of procreation is emphasised in the Reform responsa literature, but fertility treatments are not interpreted as the only way to fulfil it. So as not to harm the fabric of family life, adoption is frequently proposed as an alternative to exhausting fertility treatments, sperm donation, and recourse to surrogacy and egg donation. The question of wasting seed, which greatly occupies the rabbis of the Orthodox stream, is hardly discussed by Reform rabbis. As for questions of fitness and lineage, the prevailing opinion among prominent Reform rabbis such as Rabbis Homolka and Romain is that the status of mamzerut is no longer relevant today, so the child in any case faces no danger that his religious rights will be harmed.
The council of Reform rabbis permitted its members to act according to their own discretion regarding the conduct of a religious marriage ceremony between same-sex partners, while noting the internal debate within the movement and the displeasure of the other streams of Judaism.
Chapter Two - The position of the Conservative rabbinate on the question of artificial insemination
The position of Conservative rabbis holds that fertility treatments are permitted but not obligatory for couples who wish to fulfil the commandment to be fruitful and multiply and have difficulty doing so. Paralleling the abolition of the decree of mamzerut in the Reform stream, this status has recently been abolished in Conservative Judaism as well. Masturbation in the context of fertility treatments is permitted and is not defined as wasteful emission of seed; it is permitted even for the purpose of sperm donation by a Jew to a non-Jewish woman. As in the Reform stream, the option of adoption is likewise on the agenda.
The assembly of Conservative rabbis in the United States resolved, by a slim majority, to grant civil recognition to homosexuals, while leaving in place the prohibition on solemnising the marriage of such couples. This decision does not constitute halakhic recognition of same-sex partnership, but rather the social acceptance of such couples and their encouragement to integrate into Conservative communities.
Part Six - Summary and discussion of the implications of the research
The sixth and final part of this research summarises and discusses the ethical and global implications of the halakhic rulings on the question at hand.
As this research shows, Orthodox rabbis rely in their rulings mainly on halakhic considerations and less on ethical and universal ones. On the one hand, certain Orthodox decisors object to the misuse of fertility treatments; on the other hand, in the name of religious principles they are prepared to permit the use of these technologies in cases that do not require them on medical grounds. Particularly striking is the Orthodox decisors' lack of attention to social-moral considerations, as well as to the wishes and welfare of the woman undergoing fertility treatment. By contrast, Reform and Conservative rabbis express concern about the consequences of these treatments both for women who require such medical intervention in order to conceive and for those working in the fertility industry. It should be noted that the halakhic discussion addresses only a narrow aspect of the question of artificial insemination and does not touch on many of the complex questions raised by advanced fertility technologies. Nevertheless, the halakhic discussion is highly important and raises further questions about the danger that these technologies may become instruments of religious or political ideologies.
The present study deals with the career entry of teachers and examines relationships between the use of learning opportunities, social cooperation, individual determinants and self-assessments of competence, which have been examined for the first time in this combination.
The literature review made clear that career entry represents a particularly important phase for professional socialisation. The use of learning opportunities has rarely been studied empirically, especially not in connection with individual determinants, and cooperation has mostly been examined only with regard to its significance for the school.
A total of 223 beginning teachers from the general compulsory school sector in Lower Austria took part in the empirical study; they are required to attend in-service training courses during their first two years of service. This situation is unique in Austria, as Lower Austria has been the only federal state with a mandatory career-entry phase since 2011. From 2019, there will be an induction phase in Austria for all teachers entering the profession.
The relationships were examined using structural equation models and were found to exist in all areas. Finally, a new theoretical model of the use of learning opportunities was derived, which can be situated within theories of professional competence development and of learning in initial, in-service and continuing teacher education.
In the cognitive vulnerability-stress model of depression by A.T. Beck (1967, 1976), dysfunctional attitudes play a central role in the development of depression following experienced stress. This theory has shaped etiological research on depression for decades, yet the significance of dysfunctional attitudes in the process of developing depression, particularly in childhood and adolescence, remains unclear. The present work addresses several questions that have received little attention in previous research. These concern, among other things, the possibility of nonlinear effects of dysfunctional attitudes, the effects of sample selection, developmental effects, and the specificity of any associations for depressive symptoms.
To answer these questions, data from two measurement points of the PIER study, a large-scale longitudinal project on developmental risks in childhood and adolescence, were used. Children and adolescents aged 9 to 18 reported twice, about 20 months apart, via self-report on their dysfunctional attitudes, on symptoms from various disorder domains, and on life events that had occurred.
The results provide evidence for a threshold model in which dysfunctional attitudes, independent of age and gender, act as a vulnerability factor only in the higher range of expression, while in the lower range no associations with later depressiveness exist. Moreover, an effect as a vulnerability factor was observed only in the subgroup of children and adolescents who were initially largely symptom-free. The threshold model proved specific to depressive symptoms, but effects of dysfunctional attitudes (in part likewise nonlinear) also emerged on the development of eating disorder symptoms and aggressive behaviour. Among boys aged 9 to 13, dysfunctional attitudes were additionally associated with a tendency to generate stress in achievement contexts.
Together with the findings from the PIER study reported by Sahyazici-Knaak (2015), the results indicate that dysfunctional attitudes in childhood and adolescence can represent, depending on the subgroup considered, a cause, a symptom, and a consequence of depression. The nonlinear effects of dysfunctional attitudes and the effects of sample selection shown in the present work offer at least a partial explanation for the heterogeneity of earlier research findings. Overall, they suggest complex, and not exclusively negative, effects of dysfunctional attitudes. For an adequate assessment of the "dysfunctionality" of the attitudes so labelled by A.T. Beck, consideration of the group of persons examined, the absolute levels of expression, and the symptom groups in question therefore appears warranted.
The Arctic is warming faster than the rest of the Earth. The effects manifest themselves, among other things, in an intensified warming of the Arctic boundary layer. This work deals with interactions between synoptic cyclones and the Arctic atmosphere on local to supra-regional scales. Its starting point are measurement data and model simulations for the period of the N-ICE2015 expedition, which took place from early January to late June 2015 in the Arctic North Atlantic sector.
Radiosonde measurements show the effects of synoptic cyclones most clearly in winter, since by advecting warm and moist air masses into the Arctic they change the state of the atmosphere from radiatively clear to radiatively opaque. Although this sharp contrast exists only in winter, the analysis shows that integrated water vapour is a suitable indicator of the advection of air masses from low latitudes into the Arctic in spring as well. Besides the advection of air masses, the influence of the cyclones on static stability is characterised. A comparison of the N-ICE2015 observations with the SHEBA campaign (1997/1998), which took place over thicker ice, reveals similarities in the static stability of the atmosphere despite the different sea-ice regimes. The observed differences in stability can be traced back to differences in synoptic activity. This suggests that, on seasonal time scales, the thinner ice cover has only a minor influence on the thermodynamic structure of the Arctic troposphere as long as a thick snow layer covers it. A further comparison with the radiosondes launched at the AWIPEV station in Ny-Ålesund, Spitsbergen, in parallel with the N-ICE2015 campaign makes clear that, above the orography, synoptic cyclones determine the weather on seasonal time scales.
Furthermore, for February 2015 the effects of vertically varied nudging on cyclone development are investigated using the hydrostatic regional climate model HIRHAM5. The differences between the eight model simulations increase as the number of nudged levels decreases. The largest differences result primarily from the temporal offset in the development of synoptic cyclones. To correct the time offset of cyclone initiation, it already suffices to apply nudging in the lowest 250 m of the troposphere. In addition, the nudged HIRHAM5 simulations exhibit the same positive temperature bias relative to the in-situ measurements that ERA-Interim also has. The free-running HIRHAM, by contrast, reproduces the positive end of the N-ICE2015 temperature distribution well but has a strong negative bias, which very likely results from an underestimation of the moisture content. Using one cyclone as an example, it is shown that nudging influences the position of upper-level lows, which in turn affect cyclone development at the surface. Further, a variance measure suitable for small ensemble sizes is used to give a statistical assessment of the effect of nudging in the vertical. The similarity of the model simulations is found to be generally higher in the lower troposphere than above it, with a local minimum at 500 hPa.
In the last part of the analysis, the interaction of the upper and lower stratosphere is investigated for the previously considered cyclones using data from the ERA-Interim reanalysis. From early February 2015 onwards, the position and orientation of the polar vortex produced an unusually large meridional component of the tropopause jet, which favoured cyclone tracks into the central Arctic. Using one cyclone as an example, the agreement of the synoptic development with theoretical assumptions about the downward influence of the stratosphere on the troposphere is highlighted. The nonlinear interaction between the orography of Greenland, an intrusion of stratospheric air into the troposphere, and a Rossby wave propagating towards the Arctic plays a key role here. As an indicator of this interaction, horizontal signatures of alternately ascending and descending air within the troposphere are identified.
The functioning of the surface water-groundwater interface as a buffer, filter and reactive zone is important for water quality, ecological health and the resilience of streams and riparian ecosystems. Solute and heat exchange across this interface is driven by the advection of water. Characterizing the flow conditions in the streambed is challenging, as flow patterns are often complex and multidimensional, driven by surface hydraulic gradients and groundwater discharge. This thesis presents the results of an integrated approach, ranging from the acquisition of field data and the development of analytical and numerical approaches for analysing vertical temperature profiles to detailed, fully-integrated 3D numerical modelling of water and heat flux at the reach scale. All techniques were applied in order to characterize the exchange flux between stream and groundwater, hyporheic flow paths and temperature patterns.
The study was conducted at a reach-scale section of the lowland Selke River, characterized by distinctive pool-riffle sequences, fluvial islands and gravel bars. Continuous time series of hydraulic heads and temperatures were measured at different depths in the river bank, the hyporheic zone and within the river. The analysis of the measured diurnal temperature variation in riverbed sediments provided detailed information about the exchange flux between river and groundwater. Beyond the one-dimensional vertical water flow in the riverbed sediment, hyporheic and parafluvial flow patterns were identified. Subsurface flow direction and magnitude around the fluvial islands and gravel bars at the study site strongly depended on the position relative to the geomorphological structures and on the river stage. Horizontal water flux in the streambed substantially impacted temperature patterns in the streambed. At locations with substantial horizontal fluxes, the penetration depth of daily temperature fluctuations was reduced in comparison to purely vertical exchange conditions.
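The diurnal-temperature analysis summarised above rests on the fact that the daily temperature signal is damped as it travels into the sediment. One common way to exploit this (a generic sketch, not necessarily the method used in the thesis; all values below are illustrative) is to fit a 24-hour sinusoid to the record at two depths and compare the amplitudes:

```python
import numpy as np

def daily_amplitude(t_hours, temp):
    """Least-squares fit of a 24 h sinusoid; returns its amplitude.

    Model: temp ~ mean + a*cos(w t) + b*sin(w t), amplitude = sqrt(a^2 + b^2).
    """
    w = 2 * np.pi / 24.0
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    coeffs, *_ = np.linalg.lstsq(X, temp, rcond=None)
    return float(np.hypot(coeffs[1], coeffs[2]))

# Synthetic example: the daily signal is damped from 3.0 K near the
# streambed surface to 1.2 K at depth (hypothetical numbers).
t = np.arange(0, 72, 0.25)                              # three days, 15 min steps
shallow = 10 + 3.0 * np.sin(2 * np.pi * t / 24)
deep = 10 + 1.2 * np.sin(2 * np.pi * (t - 3) / 24)      # damped and lagged

ratio = daily_amplitude(t, deep) / daily_amplitude(t, shallow)
print(round(ratio, 2))  # prints 0.4
```

The amplitude ratio (here 0.4) is the quantity that analytical solutions of the conduction-advection equation relate to the vertical water flux; stronger damping at sites with lateral flow is what reduces the apparent penetration depth.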
The calibrated and validated 3D fully-integrated model of reach-scale water and heat fluxes across the river-groundwater interface was able to accurately represent the real system. The magnitude and variations of the simulated temperatures matched the observed ones, with an average mean absolute error of 0.7 °C and an average Nash Sutcliffe Efficiency of 0.87. The simulation results showed that the water and heat exchange at the surface water-groundwater interface is highly variable in space and time with zones of daily temperature oscillations penetrating deep into the sediment and spots of daily constant temperature following the average groundwater temperature. The average hyporheic flow path temperature was found to strongly correlate with the flow path residence time (flow path length) and the temperature gradient between river and groundwater. Despite the complexity of these processes, the simulation results allowed the derivation of a general empirical relationship between the hyporheic residence times and temperature patterns. The presented results improve our understanding of the complex spatial and temporal dynamics of water flux and thermal processes within the shallow streambed. Understanding these links provides a general basis from which to assess hyporheic temperature conditions in river reaches.
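The two goodness-of-fit measures quoted above (mean absolute error and Nash-Sutcliffe Efficiency) follow standard definitions; the sketch below illustrates them on hypothetical values and is not code from the thesis:

```python
import numpy as np

def mean_absolute_error(obs, sim):
    """Average absolute deviation between observed and simulated values."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.mean(np.abs(obs - sim)))

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).

    NSE = 1 is a perfect match; NSE <= 0 means the model predicts no
    better than the observed mean.
    """
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

# Illustrative temperature series (hypothetical values, in deg C)
obs = np.array([10.0, 12.0, 15.0, 13.0, 11.0])
sim = np.array([10.5, 11.5, 14.5, 13.5, 11.5])
print(mean_absolute_error(obs, sim))          # prints 0.5
print(round(nash_sutcliffe(obs, sim), 3))     # prints 0.916
```

An average MAE of 0.7 °C with an NSE of 0.87, as reported for the reach-scale model, thus indicates simulated temperatures that track both the level and the variability of the observations closely.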
Trunk loading and back pain
(2017)
An essential function of the trunk is the compensation of external forces and loads in order to guarantee stability. Stabilising the trunk during the sudden, repetitive loading of everyday tasks, as well as during athletic performance, is important in order to protect against injury. Hence, reduced trunk stability is accepted as a risk factor for the development of back pain (BP). Back pain patients (BPP) show an altered activity pattern, including extended response and activation times and increased co-contraction of the trunk muscles, as well as a reduced range of motion (ROM) and increased movement variability of the trunk. These differences from healthy controls (H) have been evaluated primarily in quasi-static test situations involving isolated loading applied directly to the trunk. Nevertheless, their transferability to everyday, dynamic situations is under debate. Therefore, the aim of this project is to analyse the three-dimensional motion and neuromuscular reflex activity of the trunk in response to dynamic trunk loading in healthy controls (H) and back pain patients (BPP).
A measurement tool consisting of dynamic test situations was developed to assess trunk stability. During these tests, loading of the trunk is generated by the upper and lower limbs, with and without additional perturbation; lifting of objects and stumbling while walking serve as adequate representatives of such loading. Neuromuscular activity of the muscles encompassing the trunk was assessed with a 12-lead EMG. In addition, three-dimensional trunk motion was analysed using a newly developed multi-segmental trunk model. The set-up was checked for reproducibility as well as validity. Afterwards, the defined measurement set-up was applied to compare trunk stability between healthy controls and back pain patients.
Clinically acceptable to excellent reliability could be shown for the methods (EMG/kinematics) used in the test situations. No changes in trunk motion pattern could be observed in healthy adults during continuous loading (lifting of objects) of different weights. In contrast, sudden loading of the trunk through perturbations to the lower limbs during walking led to increased neuromuscular activity and ROM of the trunk. Moreover, compared to healthy controls, BPP showed a delayed muscle response time and an extended duration until maximum neuromuscular activity in response to sudden walking perturbations. In addition, reduced lateral flexion of the trunk during perturbation could be shown in BPP.
It is concluded that perturbed gait is suitable to provoke higher demands on trunk stability in adults. The altered neuromuscular and kinematic compensation patterns in BPP can be interpreted as increased spine loading and reduced trunk stability in patients. This novel assessment of trunk stability is therefore suitable for identifying deficits in BPP. Assigning affected BPP to therapy interventions that focus on stabilising the trunk and aim to improve neuromuscular control in dynamic situations is indicated; sensorimotor training (SMT) to enhance trunk stability and the compensation of unexpected sudden loading should be preferred.
Trends in precipitation over Germany and the Rhine basin related to changes in weather patterns
(2017)
Precipitation, as a central meteorological variable for agriculture, water security, and human well-being, among others, has long received special attention. Lack of precipitation may have devastating effects such as crop failure and water scarcity. Abundance of precipitation, on the other hand, may likewise result in hazardous events such as flooding and, again, crop failure. Thus, great effort has been spent on tracking changes in precipitation and relating them to underlying processes. Particularly in the face of global warming, and given the link between temperature and atmospheric water-holding capacity, research is needed to understand the effect of climate change on precipitation.
The present work aims at understanding past changes in precipitation and other meteorological variables. Trends were detected for various time periods and related to associated changes in large-scale atmospheric circulation. The results derived in this thesis may be used as the foundation for attributing changes in floods to climate change. Assumptions needed for the downscaling of large-scale circulation model output to local climate stations are tested and verified here.
In a first step, changes in precipitation over Germany were detected, focussing not only on precipitation totals, but also on properties of the statistical distribution, transition probabilities as a measure for wet/dry spells, and extreme precipitation events.
Shifting the spatial focus to the Rhine catchment as one of the major water lifelines of Europe and the largest river basin in Germany, detected trends in precipitation and other meteorological variables were analysed in relation to the states of an "optimal" weather pattern classification. The weather pattern classification was developed by seeking the best skill in explaining the variance of local climate variables.
The last question addressed whether observed changes in local climate variables are attributable to changes in the frequency of weather patterns or rather to changes within the patterns themselves. A common assumption for a downscaling approach using weather patterns and a stochastic weather generator is that climate change is expressed only as a changed occurrence of patterns, with the pattern properties remaining constant. This assumption was validated, and the ability of the latest generation of general circulation models to reproduce the weather patterns was evaluated.
Paper 1
Precipitation changes in Germany in the period 1951-2006 can be summarised briefly as negative in summer and positive in all other seasons. Different precipitation characteristics confirm the trends in total precipitation: while winter mean and extreme precipitation have increased, wet spells tend to be longer as well (expressed as increased probability for a wet day followed by another wet day). For summer the opposite was observed: reduced total precipitation, supported by decreasing mean and extreme precipitation and reflected in an increasing length of dry spells.
Apart from this general summary for the whole of Germany, the spatial distribution within the country is much more differentiated. Increases in winter precipitation are most pronounced in the north-west and south-east of Germany, while precipitation increases are highest in the west for spring and in the south for autumn. Decreasing summer precipitation was observed in most regions of Germany, with particular focus on the south and west.
The seasonal picture, however, was again represented differently in the contributing months, e.g. increasing autumn precipitation in the south of Germany is formed by strong trends in the south-west in October and in the south-east in November. These results emphasise the high spatial and temporal organisation of precipitation changes.
Paper 2
The next step towards attributing precipitation trends to changes in large-scale atmospheric patterns was the derivation of a weather pattern classification that sufficiently stratifies the local climate variables under investigation. Considering temperature, radiation, and humidity in addition to precipitation, a classification based on mean sea level pressure, near-surface temperature, and specific humidity was found to have the best skill in explaining the variance of the local variables. A rather high number of 40 patterns was selected, allowing typical pressure patterns to be assigned to specific seasons through the associated temperature patterns. While the skill in explaining precipitation variance is rather low, better skill was achieved for radiation and, of course, temperature.
Most of the recent GCMs from the CMIP5 ensemble were found to reproduce these weather patterns sufficiently well in terms of frequency, seasonality, and persistence.
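The "skill in explaining the variance" of a local variable by a weather pattern classification can be expressed as one minus the ratio of within-pattern variance to total variance. The sketch below uses synthetic stand-in data and standard definitions; it is an illustration, not the thesis' actual evaluation code:

```python
import numpy as np

def explained_variance(values, labels):
    """Fraction of the variance of a local variable explained by a
    classification: 1 - (mean within-pattern variance / total variance).
    This is a standard stratification-skill score."""
    values = np.asarray(values, float)
    total = values.var()
    within = 0.0
    for lab in np.unique(labels):
        grp = values[labels == lab]
        within += ((grp - grp.mean()) ** 2).sum()
    return 1.0 - (within / len(values)) / total

# Hypothetical stand-in data: 1000 "days" assigned to 40 patterns; the
# local variable (say, a daily temperature anomaly) depends partly on
# the pattern and partly on unexplained day-to-day noise.
rng = np.random.default_rng(42)
labels = rng.integers(0, 40, 1000)
pattern_effect = rng.normal(0, 2, 40)   # pattern-specific mean anomaly
values = pattern_effect[labels] + rng.normal(0, 1, 1000)

print(f"skill: {explained_variance(values, labels):.2f}")
```

With a strong pattern signal relative to the noise, the score approaches 1; for a variable unrelated to the patterns it falls towards 0, which is why precipitation, being noisier at the daily scale, yields lower skill than temperature or radiation.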
Paper 3
Finally, the weather patterns were analysed for trends in pattern frequency, seasonality, persistence, and trends in pattern-specific precipitation and temperature. To overcome uncertainties in trend detection resulting from the selected time period, all possible periods in 1901-2010 with a minimum length of 31 years were considered. Thus, the assumption of a constant link between patterns and local weather was tested rigorously. This assumption was found to hold true only partly. While changes in temperature are mainly attributable to changes in pattern frequency, for precipitation a substantial amount of change was detected within individual patterns.
Magnitude and even sign of trends depend highly on the selected time period. The frequency of certain patterns is related to the long-term variability of large-scale circulation modes.
Changes in precipitation were found to be heterogeneous not only in space, but also in time - statements on trends are only valid for the specific time period under investigation. While some part of the trends can be attributed to changes in the large-scale circulation, distinct changes were found within single weather patterns as well.
The results emphasise the need to analyse multiple periods for thorough trend detection wherever possible and add some note of caution to the application of downscaling approaches based on weather patterns, as they might misinterpret the effect of climate change due to neglecting within-type trends.
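The multi-period trend analysis described above can be sketched as follows: fit a linear trend to every sub-period of at least 31 years and inspect how magnitude and sign vary with the chosen window. A minimal illustration on a synthetic precipitation series (hypothetical data, and without the significance testing a real analysis would require):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1901, 2011)
# Synthetic annual precipitation: weak linear trend plus decadal variability and noise.
precip = (600 + 0.3 * (years - years[0])
          + 40 * np.sin(years / 8.0)
          + rng.normal(0, 30, size=years.size))

min_len = 31
trends = {}  # (start_year, end_year) -> linear trend in mm per decade
for i in range(years.size):
    for j in range(i + min_len - 1, years.size):
        slope = np.polyfit(years[i:j + 1], precip[i:j + 1], 1)[0]
        trends[(years[i], years[j])] = 10.0 * slope

slopes = np.array(list(trends.values()))
print(f"{len(trends)} sub-periods analysed")
print(f"trend range: {slopes.min():.1f} to {slopes.max():.1f} mm/decade")
```

For a 110-year record this already yields 3240 sub-periods, and the spread (often including both signs) makes the period dependence of any single-window trend statement explicit.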
Translating innovation
(2017)
This doctoral thesis studies the process of innovation adoption in public administrations, addressing the research question of how an innovation is translated to a local context. The study empirically explores Design Thinking as a new problem-solving approach introduced by a federal government organisation in Singapore. With a focus on user-centeredness, collaboration and iteration, Design Thinking seems to offer a new way to engage recipients and other stakeholders of public services as well as to re-think the policy design process from a user’s point of view. Pioneered in the private sector, early adopters of the methodology include civil services in Australia, Denmark, the United Kingdom, the United States as well as Singapore. To date, there is little evidence on how and for which purposes Design Thinking is used in the public sector.
For the purpose of this study, innovation adoption is framed in an institutionalist perspective addressing how concepts are translated to local contexts. The study rejects simplistic views of the innovation adoption process, in which an idea diffuses to another setting without adaptation. The translation perspective is fruitful because it captures the multidimensionality and ‘messiness’ of innovation adoption. More specifically, the overall research question addressed in this study is: How has Design Thinking been translated to the local context of the public sector organisation under investigation? And from a theoretical point of view: What can we learn from translation theory about innovation adoption processes?
Moreover, there are only a few empirical studies of organisations adopting Design Thinking, and most of them focus on private organisations. We know very little about how Design Thinking is embedded in public sector organisations. This study therefore provides further empirical evidence of how Design Thinking is used in a public sector organisation, especially with regard to its application to policy work, which has so far been under-researched.
An exploratory single case study approach was chosen to provide an in-depth analysis of the innovation adoption process. Based on a purposive, theory-driven sampling approach, a Singaporean Ministry was selected because it represented an organisational setting in which Design Thinking had been embedded for several years, making it a relevant case with regard to the research question. Following a qualitative research design, 28 semi-structured interviews (45-100 minutes) with employees and managers were conducted. The interview data was triangulated with observations and documents collected during a field research stay in Singapore.
The empirical study of innovation adoption in a single organisation focused on the intra-organisational perspective, with the aim of capturing the variations of translation that occur during the adoption process. In so doing, this study opened the black box often assumed in implementation studies. Furthermore, this research advances translation studies not only by showing variance, but also by deriving explanatory factors. The main differences in the translation of Design Thinking occurred between service delivery and policy divisions, as well as between the first adopter and the rest of the organisation. For the intra-organisational translation of Design Thinking in the Singaporean Ministry, the following five factors played a role: task type, mode of adoption, type of expertise, sequence of adoption, and the adoption of similar practices.
Thermal cis-trans isomerization of azobenzene studied by path sampling and QM/MM stochastic dynamics
(2017)
Azobenzene-based molecular photoswitches have been applied extensively to biological systems, involving photo-control of peptides, lipids and nucleic acids. The isomerization between the stable trans and the metastable cis state of the azo moieties leads to pronounced changes in shape and other physico-chemical properties of the molecules into which they are incorporated. Fast switching can be induced via transitions to excited electronic states and fine-tuned by a large number of different substituents at the phenyl rings. But a rational design of tailor-made azo groups also requires control of their stability in the dark, i.e.\ the half-life of the cis isomer. In computational chemistry, thermally activated barrier crossing on the ground state Born-Oppenheimer surface can efficiently be estimated with Eyring’s transition state theory (TST) approach; the growing complexity of the azo moiety and a rather heterogeneous environment, however, may render some of the underlying simplifying assumptions problematic.
In this dissertation, a computational approach is established to remove two restrictions at once: the environment is modeled explicitly by employing a quantum mechanical/molecular mechanics (QM/MM) description; and the isomerization process is tracked by analyzing complete dynamical pathways between stable states. The suitability of this description is validated by using two test systems, pure azo benzene and a derivative with electron donating and electron withdrawing substituents (“push-pull” azobenzene). Each system is studied in the gas phase, in toluene and in polar DMSO solvent. The azo molecules are treated at the QM level using a very recent, semi-empirical approximation to density functional theory (density functional tight binding approximation). Reactive pathways are sampled by implementing a version of the so-called transition path sampling method (TPS), without introducing any bias into the system dynamics. By analyzing ensembles of reactive trajectories, the change in isomerization pathway from linear inversion to rotation in going from apolar to polar solvent, predicted by the TST approach, could be verified for the push-pull derivative. At the same time, the mere presence of explicit solvation is seen to broaden the distribution of isomerization pathways, an effect TST cannot account for.
Using likelihood maximization based on the TPS shooting history, an improved reaction coordinate was identified as a sine-cosine combination of the central bend angles and the rotation dihedral, r(ω,α,α′). A computational van’t Hoff analysis of the activation entropies was performed to gain further insight into the differential role of the solvent for the unsubstituted and the push-pull azobenzene. In agreement with experiment, it yielded positive activation entropies for azobenzene in DMSO but negative ones for the push-pull derivative, reflecting the induced ordering of solvent around the more dipolar transition state associated with the latter compound. In addition, dynamically corrected rate constants were evaluated using the reactive flux approach; for both azobenzene derivatives, an increase comparable to the experimental one was observed in the high-polarity medium.
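The van’t Hoff-type analysis of activation entropies rests on the Eyring relation k = (k_B T/h) exp(ΔS‡/R) exp(-ΔH‡/(R T)): plotting ln(k h/(k_B T)) against 1/T yields ΔH‡ from the slope and ΔS‡ from the intercept. A minimal sketch with made-up rate constants (not data or parameters from the thesis):

```python
import numpy as np

kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J s
R  = 8.314462618     # gas constant, J/(mol K)

# Hypothetical first-order rate constants for thermal cis->trans isomerization,
# generated from assumed activation parameters (dH = 95 kJ/mol, dS = -20 J/(mol K)).
dH_true, dS_true = 95e3, -20.0
T = np.array([290.0, 300.0, 310.0, 320.0, 330.0])
k = (kB * T / h) * np.exp(dS_true / R) * np.exp(-dH_true / (R * T))

# Eyring plot: ln(k h / (kB T)) = dS/R - dH/(R T), linear in 1/T.
y = np.log(k * h / (kB * T))
slope, intercept = np.polyfit(1.0 / T, y, 1)
dH = -slope * R       # activation enthalpy, J/mol
dS = intercept * R    # activation entropy, J/(mol K)
print(f"dH = {dH/1e3:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```

With noise-free synthetic input the fit recovers the assumed parameters exactly; for simulated or measured rate constants the same linearisation gives the sign of ΔS‡ discussed above.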
Introduction: Peanut allergy is among the most common food allergies in childhood. Even small amounts of peanut can trigger severe allergic reactions. Peanut is the most frequent trigger of life-threatening anaphylaxis in children and adolescents. In contrast to other early-childhood food allergies, patients with peanut allergy only rarely develop natural tolerance. For several years, research has therefore focused on causal treatment options for peanut-allergic patients, in particular on oral immunotherapy (OIT). Initial smaller studies on OIT for peanut allergy showed promising results. In the present work, the clinical efficacy and safety of this treatment option in children with peanut allergy are evaluated in detail within a randomised, double-blind, placebo-controlled trial with a larger number of cases. In addition, immunological changes as well as quality of life and the burden of treatment under OIT are examined.
Methods: Children aged 3-18 years with IgE-mediated peanut allergy were enrolled in the study. Before the start of OIT, an oral food challenge with peanut was performed. Patients were randomised 1:1 and assigned to the verum or placebo group. Treatment started with 2-120 mg peanut or placebo per day, depending on the reaction dose in the oral challenge. The daily OIT dose was first increased slowly every two weeks over about 14 months up to a maintenance dose of at least 500 mg peanut (= 125 mg peanut protein, ~1 small peanut) or placebo. The maximum dose reached was then administered daily at home for two months. Subsequently, another oral challenge with peanut was performed. The primary endpoint of the study was the number of patients in the verum and placebo groups who tolerated ≥1200 mg peanut in the oral challenge after OIT (= "partial desensitisation"). Both before and after OIT, a skin prick test with peanut was performed and peanut-specific IgE and IgG4 were determined in serum. In addition, basophil activation and the release of T-cell-specific cytokines after stimulation with peanut were measured in vitro. Quality of life before and after OIT as well as the burden of treatment during OIT were assessed using questionnaires.
Results: 62 patients were enrolled in the study and randomised. After about 16 months of OIT, 74.2% (23/31) of patients in the verum group but only 16.1% (5/31) of the placebo group showed "partial desensitisation" to peanut (p<0.001). In the challenge after OIT, patients in the verum group tolerated a median of 4000 mg peanut (~8 small peanuts), whereas patients in the placebo group tolerated only 80 mg peanut (~1/6 of a small peanut) (p<0.001). Almost half of the verum group (41.9%) tolerated the maximum dose of 18 g peanut in the challenge ("complete desensitisation"). With respect to objective side effects, the safety profile was comparable between verum and placebo OIT. However, subjective side effects such as oral itching or abdominal pain occurred significantly more often under verum OIT than under placebo (3.7% of verum doses vs. 0.5% of placebo doses, p<0.001). Three children in the verum group (9.7%) and seven children in the placebo group (22.6%) terminated the study early, two patients in each group because of side effects. In contrast to placebo, significant immunological changes were observed under verum OIT: a decrease in the peanut-specific wheal diameter in the skin prick test, an increase in peanut-specific serum IgG4 levels, and reduced peanut-specific cytokine secretion, in particular of the Th2-specific cytokines IL-4 and IL-5. Peanut-specific IgE levels and peanut-specific basophil activation, in contrast, did not change under OIT. Quality of life improved significantly after OIT in children of the verum group, but not in children of the placebo group. During OIT, the therapy was rated positively by almost all children (82%) and mothers (82%) (= low burden of treatment).
Discussion: Peanut OIT led to desensitisation and a markedly increased reaction threshold to peanut in the majority of peanut-allergic children. The children are thus protected against accidental reactions to peanut in everyday life, which clearly improves their quality of life. Under the controlled study conditions, an acceptable safety profile with predominantly mild symptoms was observed. Clinical desensitisation was accompanied by changes at the immunological level. However, long-term studies on peanut OIT are needed to investigate the clinical and immunological efficacy with regard to a possible long-term induction of oral tolerance, as well as the safety of long-term OIT, before this therapeutic concept can be transferred into practice.
The classical Navier-Stokes equations of hydrodynamics are usually written in terms of vector analysis. More promising is the formulation of these equations in the language of differential forms of degree one. In this way the study of the Navier-Stokes equations includes the analysis of the de Rham complex. In particular, the Hodge theory for the de Rham complex enables one to eliminate the pressure from the equations. The Navier-Stokes equations constitute a parabolic system with a nonlinear term which makes sense only for one-forms. A simpler model of the dynamics of an incompressible viscous fluid is given by Burgers' equation. This work is aimed at the study of the invariant structure of the Navier-Stokes equations, which is closely related to the algebraic structure of the de Rham complex at step 1. To this end we introduce Navier-Stokes equations related to any elliptic quasicomplex of first order differential operators. These equations are quite similar to the classical Navier-Stokes equations, including generalised velocity and pressure vectors. Elimination of the pressure from the generalised Navier-Stokes equations gives a good motivation for the study of the Neumann problem after Spencer for elliptic quasicomplexes. Such a study is also included in the work. We start this work with a discussion of the Lamé equations within the context of elliptic quasicomplexes on compact manifolds with boundary. The non-stationary Lamé equations form a hyperbolic system. However, the study of the first mixed problem for them provides good experience for attacking the linearised Navier-Stokes equations. On this basis we describe a class of non-linear perturbations of the Navier-Stokes equations for which the solvability results still hold.
In littoral zones of lakes, multiple processes determine lake ecology and water quality. Lacustrine groundwater discharge (LGD), most frequently taking place in littoral zones, can transport or mobilize nutrients from the sediments and thus contribute significantly to lake eutrophication. Furthermore, lake littoral zones are the habitat of benthic primary producers, namely submerged macrophytes and periphyton, which play a key role in lake food webs and influence lake water quality. Groundwater-mediated nutrient-influx can potentially affect the asymmetric competition between submerged macrophytes and periphyton for light and nutrients. While rooted macrophytes have superior access to sediment nutrients, periphyton can negatively affect macrophytes by shading. LGD may thus facilitate periphyton production at the expense of macrophyte production, although studies on this hypothesized effect are missing.
The research presented in this thesis is aimed at determining how LGD influences periphyton, macrophytes, and the interactions between these benthic producers. Laboratory experiments were combined with field experiments and measurements in an oligo-mesotrophic hard water lake.
In the first study, a general concept was developed based on a literature review of the existing knowledge regarding the potential effects of LGD on nutrients and inorganic and organic carbon loads to lakes, and the effect of these loads on periphyton and macrophytes. The second study includes a field survey and experiment examining the effects of LGD on periphyton in an oligotrophic, stratified hard water lake (Lake Stechlin). This study shows that LGD, by mobilizing phosphorus from the sediments, significantly promotes epiphyton growth, especially at the end of the summer season when epilimnetic phosphorus concentrations are low. The third study focuses on the potential effects of LGD on submerged macrophytes in Lake Stechlin. This study revealed that LGD may have contributed to an observed change in macrophyte community composition and abundance in the shallow littoral areas of the lake. Finally, a laboratory experiment was conducted which mimicked the conditions of a seepage lake. Groundwater circulation was shown to mobilize nutrients from the sediments, which significantly promoted periphyton growth. Macrophyte growth was negatively affected at high periphyton biomasses, confirming the initial hypothesis.
More generally, this thesis shows that groundwater flowing into nutrient-limited lakes may import or mobilize nutrients. These nutrients first promote periphyton, and subsequently provoke radical changes in macrophyte populations before finally having a possible influence on the lake’s trophic state. Hence, the eutrophying effect of groundwater is delayed and, at moderate nutrient loading rates, partly dampened by benthic primary producers. The present research emphasizes the importance and complexity of littoral processes, and the need to further investigate and monitor the benthic environment. As present and future global changes can significantly affect LGD, the understanding of these complex interactions is required for the sustainable management of lake water quality.
The Cauchy problem for the linearised Einstein equation and the Goursat problem for wave equations
(2017)
In this thesis, we study two initial value problems arising in general relativity. The first is the Cauchy problem for the linearised Einstein equation on general globally hyperbolic spacetimes, with smooth and distributional initial data. We extend well-known results by showing that given a solution to the linearised constraint equations of arbitrary real Sobolev regularity, there is a globally defined solution, which is unique up to addition of gauge solutions. Two solutions are considered equivalent if they differ by a gauge solution. Our main result is that the equivalence class of solutions depends continuously on the corresponding equivalence class of initial data. We also solve the linearised constraint equations in certain cases and show that there exist arbitrarily irregular (non-gauge) solutions to the linearised Einstein equation on Minkowski spacetime and Kasner spacetime.
In the second part, we study the Goursat problem (the characteristic Cauchy problem) for wave equations. We specify initial data on a smooth compact Cauchy horizon, which is a lightlike hypersurface. This problem has not been studied much, since it is an initial value problem on a non-globally hyperbolic spacetime. Our main result is that given a smooth function on a non-empty, smooth, compact, totally geodesic and non-degenerate Cauchy horizon and a so-called admissible linear wave equation, there exists a unique solution that is defined on the globally hyperbolic region and restricts to the given function on the Cauchy horizon. Moreover, the solution depends continuously on the initial data. A linear wave equation is called admissible if the first order part satisfies a certain condition on the Cauchy horizon, for example if it vanishes. Interestingly, both existence and uniqueness of solutions fail for general wave equations, as examples show. If we drop the non-degeneracy assumption, examples show that existence of solutions fails even for the simplest wave equation. The proof requires precise energy estimates for the wave equation close to the Cauchy horizon. If the Ricci curvature vanishes on the Cauchy horizon, we show that the energy estimates are strong enough to prove local existence and uniqueness for a class of non-linear wave equations. Our results apply in particular to the Taub-NUT spacetime and the Misner spacetime. It has recently been shown that compact Cauchy horizons in spacetimes satisfying the null energy condition are necessarily smooth and totally geodesic. Our results therefore apply if the spacetime satisfies the null energy condition and the Cauchy horizon is compact and non-degenerate.
Information on the contemporary in-situ stress state of the earth’s crust is essential for geotechnical applications and physics-based seismic hazard assessment. Yet, stress data records for a data point are incomplete and their availability is usually not dense enough to allow conclusive statements. This demands a thorough examination of the in-situ stress field, which is achieved by 3D geomechanical-numerical models. However, the models’ spatial resolution is limited and the resulting local stress state is subject to large uncertainties that confine the significance of the findings. In addition, temporal variations of the in-situ stress field are naturally or anthropogenically induced. In my thesis I address these challenges in three manuscripts that investigate (1) the current crustal stress field orientation, (2) the 3D geomechanical-numerical modelling of the in-situ stress state, and (3) the phenomenon of injection-induced temporal stress tensor rotations. In the first manuscript I present the first comprehensive stress data compilation for Iceland, with 495 data records. To this end, I analysed image logs from 57 boreholes in Iceland for indicators of the orientation of the maximum horizontal stress component. The study is the first stress survey based on different kinds of stress indicators in a geologically very young and tectonically active area of an onshore spreading ridge. It reveals a distinct stress field with a depth-independent stress orientation even very close to the spreading centre. In the second manuscript I present a calibrated 3D geomechanical-numerical modelling approach to the in-situ stress state of the Bavarian Molasse Basin that investigates the regional (70x70x10 km³) and local (10x10x10 km³) stress state. To link these two models I develop a multi-stage modelling approach that provides a reliable and efficient method to derive initial and boundary conditions for the smaller-scale model from the larger-scale model.
Furthermore, I quantify the uncertainties in the model results, which are inherent to geomechanical-numerical modelling in general and to the multi-stage approach in particular. I show that the significance of the model results is reduced mainly by the uncertainties in the material properties and by the low number of stress magnitude data records available for calibration. In the third manuscript I investigate the phenomenon of injection-induced temporal stress tensor rotation and its controlling factors. I conduct a sensitivity study with a 3D generic thermo-hydro-mechanical model. I show that the key controlling factors for the stress tensor rotation are the permeability as the decisive factor, the injection rate, and the initial differential stress. Large rotations of the stress tensor are indicated in particular for enhanced geothermal systems with low permeability. According to these findings, the initial differential stress in a reservoir can be estimated provided the permeability is known and the angle of stress rotation is observed. I propose that stress tensor rotations can be a key factor for the potential of induced seismicity on pre-existing faults, because the reorientation of the stress field changes the optimal orientation of faults.
Foreign direct investment (FDI) plays an important role in the industrialisation process of developing and emerging countries. It can increase the industrial output of the target country and act as a carrier of technological knowledge. New knowledge can flow to the recipient countries of FDI through spillover effects and technology transfers from foreign subsidiaries. This thesis aims to answer the questions of which mechanisms trigger spillover effects and technology transfers, and how developing and emerging countries can use this inflow of knowledge to accelerate their industrialisation process. To this end, a concept for promoting spillover effects is developed. Furthermore, a theoretical model is developed in which the technology transfer of foreign export platforms is, for the first time, analysed as a function of the share of intermediate inputs demanded in the host country. The case studies of Ireland and Malaysia illustrate the results of the theoretical model and of the concept developed.
Technological change confronts organisations with the challenge of putting innovations to productive use as quickly as possible in order to gain a competitive advantage. The success of technology introduction depends strongly on creating acceptance among employees. Existing approaches such as diffusion theory (Rogers, 2003) or the Technology Acceptance Model (Davis, 1989; Venkatesh and Davis, 1996; Venkatesh and Davis, 2000; Venkatesh, Morris et al., 2003) address the organisational context only marginally. Their models target the adoption of a technology by free choice and in a market context. Furthermore, they do not illuminate the resistance to innovations that can form under mandatory adoption. They are therefore only of limited use for studying technology introduction and acceptance formation processes in organisations.
The aim of this thesis is therefore to work out the specific influence of the organisational context on acceptance and usage behaviour. More concretely, the research question to be answered is what influence different organisation types have on the dynamics of acceptance and usage within organisations. To this end, existing models of acceptance research are extended and synthesised with organisation-specific attributes. The resulting model captures the dynamic development within the organisation and thus makes the change observable. The functioning of the developed model is demonstrated in a simulation experiment, and the effect of different organisational forms is illustrated.
The model therefore combines two perspectives: the personal perspective understands acceptance as a cognitive-psychological process at the individual level, based on the calculations and decisions of individual persons. Central here are the contributions of diffusion theory (Rogers, 2003) and the Technology Acceptance Model in its various further developments and modifications (Davis, 1989; Venkatesh and Davis, 1996; Venkatesh and Davis, 2000; Venkatesh, Morris et al., 2003). Individual factors from different fit theories (Goodhue and Thompson, 1995; Floyd, 1986; Liu, Lee and Chen, 2011; Parkes, 2013) are used to enrich these models. Besides the development of a positive, supportive attitude, however, rejection of and open opposition to the innovation must also be taken into account (Patsiotis, Hughes and Webber, 2012).
The organisational perspective, in contrast, sees acceptance decisions as embedded in the social context of the organisation. Mutual influence is based on observation of the environment and the internalisation of social pressure. In organisations, this is counterbalanced by intended influence in the form of steering. Both processes shape the acceptance and usage behaviour of employees. Starting from a systems-theoretical concept of the organisation, different steering media (Luhmann, 1997; Fischer, 2009) are presented. These can be deployed intentionally by steering actors (change agents, management) to shape the acceptance and usage process through interventions.
The effect of these media differs across organisation types. To analyse different organisation types, Mintzberg's (1979) configurations are used. These are characterised by different coordination mechanisms, which in turn rest on the use of steering media.
The functioning and analytical possibilities of the developed model are demonstrated in a simulation experiment using the simulation platform AnyLogic. The range of validity is examined by means of a sensitivity analysis.
In the simulation, specific patterns of usage and acceptance development can be demonstrated. Acceptance is characterised by an initial decline followed by damped growth. Usage, by contrast, is quickly enforced in the organisation and then remains at a stable level. Different effects were observed for the organisation types. The bureaucratic form of steering is suitable for increasing usage but fails to increase acceptance. Organisations that rely more on mutual adjustment for coordination increase acceptance, but not usage. Moreover, the development of acceptance in this organisation type is very uncertain and shows a high range of fluctuation.
In times of a rapidly changing and diverse energy market, carbon materials must be usable for a variety of requirements. This calls for flexibly synthesisable carbon materials, preferably from cheap and sustainable carbon sources. It is, however, not easy to identify precursor compounds that are suitable for different production processes and whose carbon products can, at the same time, be tuned in specific properties such as structure, nitrogen content, surface area, and pore size. In this context, natural polyphenols, such as surplus tannins from wine production, can open up a new world of highly functional and versatilely tunable carbon materials with high yields.
The main goal of this thesis was to synthesise and characterise new functional, tunable, and scalable nanostructured carbon materials from tannins (in particular tannic acid) for various electrochemical purposes. This was made possible by different synthetic approaches, such as polymer-directed structuring, ionothermal templating, and soft templating. Instead of the widely used but carcinogenic cross-linking agent formaldehyde, urea and thiourea were chosen for the syntheses presented, which at the same time allows variable doping of the synthesised carbon materials.
In the first part of the thesis, therefore, the interactions, reactions, and thermal behaviour of tannic acid and of mixtures of tannic acid with urea or thiourea were investigated in order to gain important insights for the various carbon syntheses.
By using the polymeric structuring agent Pluronic P123, sustainable and dopable carbon particles with diameters in the nanometre range were produced from tannic acid and urea in a first carbon synthesis. It was shown that, by modifying the various synthesis parameters, the carbon nanoparticles can be tuned with respect to their average particle diameter, BET surface area, composition, conductivity, and chemical stability. This opened up the possibility of using these carbon particles as an alternative and sustainable carbon black material.
Furthermore, ionothermal templating made it possible to synthesise porous, doped, and controllable carbon particles with high specific surface areas from the chosen precursor compounds, which are suitable for use in supercapacitors.
Auf diesen Erkenntnissen aufbauend konnten mittels der Rotationsbeschichtung poröse binderfreie und strukturierte Kohlenstofffilme synthetisiert werden, die eine spinodale Struktur aufwiesen. Anhand der Modifikation der Stammlösungskonzentration, der Rotationsgeschwindigkeit und der verwendeten Substrate konnten die Filmdicke (100-1000 nm), die Morphologie und Gesamtoberfläche gezielt beeinflusst werden. Die erweiterte elektrochemische Analyse zeigte außerdem ein sehr gut zugängliches Porensystem der porösen Kohlenstofffilme.
Allumfassend konnten demnach verschiedene Synthesewege für Kohlenstoffmaterialien aus Tanninen aufgezeigt werden, die verschiedenartig strukturiert und kontrolliert werden können und sich für diverse Anwendungsgebiete eignen.
In the present work side-chain polystyrenes were synthesized and characterized, in order to be applied in multilayer OLEDs fabricated by solution process techniques. Manufacture of optoelectronic devices by solution process techniques is meant to decrease significantly fabrication cost and allow large scale production of such devices.
This dissertation focuses on three series of materials, grouped into two material classes. The two classes differ in the type of charge transport they exhibit, either ambipolar transport or electron transport. All materials were applied in all-organic, solution-processed, green Ir-based devices.
In the first part, a series of ambipolar host materials was developed to transport both charge types, holes and electrons, and to serve especially as matrix for green Ir-based emitters. It was possible to increase device efficiency by modulating the predominant charge transport type. This was achieved by modifying the electron-transporting part of the molecules with more electron-deficient heterocycles or by extending the delocalization of the LUMO. Efficiencies up to 28.9 cd/A were observed for all-organic, solution-processed three-layer devices.
In the second part, the suitability of triarylboranes and tetraphenylsilanes as electron transport materials was studied. High triplet energies, up to 2.95 eV, were obtained by rational combination of both molecular structures. Although combining the two building blocks had little effect on the materials' electron transport properties, high efficiencies around 24 cd/A were obtained for the series in all-organic, solution-processed two-layer devices.
In the last part, benzene and pyridine were chosen as the electron-transport motifs of the series. By controlling the relative pyridine content (RPC), solubility in methanol was induced for polystyrenes with bulky side-chains. Materials with RPC ≥ 0.5 could be deposited orthogonally from solution without harming the underlying layers. To the best of our knowledge, this is the first time such materials have been applied in this architecture, showing moderate efficiencies around 10 cd/A in all-organic, solution-processed OLEDs.
Overall, the outcome of these studies will actively contribute to the current research on materials for all-solution processed OLEDs.
I. Ceric ammonium nitrate (CAN) mediated thiocyanate radical additions to glycals
In this dissertation, a facile entry was developed for the synthesis of 2-thiocarbohydrates and their transformations. Initially, CAN mediated thiocyanation of carbohydrates was carried out to obtain the basic building blocks (2-thiocyanates) for the entire studies. Subsequently, 2-thiocyanates were reduced to the corresponding thiols using appropriate reagents and reaction conditions. The screening of substrates, stereochemical outcome and the reaction mechanism are discussed briefly (Scheme I).
Scheme I. Synthesis of the 2-thiocyanates II and reductions to 2-thiols III & IV.
An interesting mechanism was proposed for the reduction of the 2-thiocyanates II to the 2-thiols III via formation of a disulfide intermediate. The water-soluble free thiols IV were obtained by cleaving the thiocyanate and benzyl groups in a single step. In the subsequent part of the studies, the synthetic potential of the 2-thiols was successfully expanded by simple synthetic transformations.
II. Transformations of the 2-thiocarbohydrates
The 2-thiols were utilized for convenient transformations including sulfa-Michael additions, nucleophilic substitutions, oxidation to disulfides and functionalization at the anomeric position. The diverse functionalization of the carbohydrates at the C-2 position by means of the sulfur linkage is the highlight of these studies. It creates an opportunity to expand the utility of 2-thiocarbohydrates for biological studies.
Reagents and conditions: a) I2, pyridine, THF, rt, 15 min; b) K2CO3, MeCN, rt, 1 h; c) MeI, K2CO3, DMF, 0 °C, 5 min; d) Ac2O, H2SO4 (1 drop), rt, 10 min; e) CAN, MeCN/H2O, NH4SCN, rt, 1 h; f) NaN3, ZnBr2, iPrOH/H2O, reflux, 15 h; g) NaOH (1 M), TBAI, benzene, rt, 2 h; h) ZnCl2, CHCl3, reflux, 3 h.
Scheme II. Functionalization of 2-thiocarbohydrates.
These transformations have enhanced the synthetic value of 2-thiocarbohydrates on the preparative scale. Worth mentioning are the Lewis acid catalyzed replacement of the methoxy group by other nucleophiles and the synthesis of the (2→1) thiodisaccharides, which were obtained with complete β-selectivity. Additionally, for the first time, a carbohydrate-linked thiotetrazole was synthesized by a (3 + 2) cycloaddition approach at the C-2 position.
III. Synthesis of thiodisaccharides by thiol-ene coupling.
In the final part of the studies, the synthesis of thiodisaccharides by classical photoinduced thiol-ene coupling was successfully achieved.
Reagents and conditions: 2,2-Dimethoxy-2-phenylacetophenone (DPAP), CH2Cl2/EtOH, hv, rt.
Scheme III. Thiol-ene coupling between 2-thiols and exo-glycals.
During the course of the investigations, it was found that steric hindrance plays an important role in the addition of bulky thiols to endo-glycals. Thus, we successfully screened suitable substrates for the addition of various thiols to sterically less hindered alkenes (Scheme III). The photochemical addition of 2-thiols to three different exo-glycals delivered excellent regio- and diastereoselectivities as well as yields, which underlines the synthetic potential of this convenient methodology.
Nowadays, the need to protect the environment has become more urgent than ever. In the field of chemistry, this translates into practices such as waste prevention, the use of renewable feedstocks, and catalysis, all concepts based on the principles of green chemistry. Polymers are an important product of the chemical industry and are also in the focus of these changes. In this thesis, more sustainable approaches to make two classes of polymers, polypeptoids and polyesters, are described.
Polypeptoids or poly(N-alkyl glycines) are isomers of polypeptides and are biocompatible, as well as degradable under biologically relevant conditions. In addition, they can have interesting properties such as lower critical solution temperature (LCST) behavior. They are usually synthesized by the ring-opening polymerization (ROP) of N-carboxyanhydrides (NCAs), which are produced with the use of toxic compounds (e.g. phosgene) and which are highly sensitive to humidity. In order to avoid the direct synthesis and isolation of the NCAs, N-phenoxycarbonyl-protected N-substituted glycines are prepared, which can yield the NCAs in situ. The conditions for the NCA synthesis and its direct polymerization are investigated and optimized for the simplest N-substituted glycine, sarcosine. The use of a tertiary amine in less than stoichiometric amounts relative to the N-phenoxycarbonyl-sarcosine seems to drastically accelerate the NCA formation and does not affect the efficiency of the polymerization. In fact, well-defined polysarcosines that comply with the monomer-to-initiator ratio can be produced by this method. This approach was also applied to other N-substituted glycines.
Dihydroxyacetone is a sustainable diol produced from glycerol and has already been used for the synthesis of polycarbonates. Here, it was used as a comonomer for the synthesis of polyesters. However, the polymerization of dihydroxyacetone presented difficulties, probably due to the insolubility of the macromolecular chains. To circumvent the problem, the dimethyl acetal protected dihydroxyacetone was polymerized with terephthaloyl chloride to yield a soluble polymer. When the carbonyl was recovered after deprotection, the product was insoluble in all solvents, showing that the carbonyl in the main chain hinders the dissolution of the polymers. The solubility issue can be avoided when a 1:1 mixture of dihydroxyacetone/ethylene glycol is used to yield a soluble copolyester.
Anthropogenically amplified erosion leads to increased fine-grained sediment input into the fluvial system of the 15,000 km² Kharaa River catchment in northern Mongolia and constitutes a major stress factor for the aquatic ecosystem. This study uniquely combines intensive monitoring, source fingerprinting and catchment modelling techniques, allowing the credibility and accuracy of each individual method to be compared. High-resolution discharge data were used in combination with daily suspended solid measurements to calculate the suspended sediment budget and to compare it with estimates from the sediment budget model SedNet. The comparison of both techniques showed that the development of an overall sediment budget with SedNet was possible, yielding results of the same order of magnitude (20.3 kt a⁻¹ and 16.2 kt a⁻¹).
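The budget calculation described above amounts to integrating discharge times suspended solid concentration over the monitoring period. A minimal sketch of that arithmetic, with purely illustrative constant values rather than the Kharaa monitoring data:

```python
# Sketch of a suspended sediment budget: discharge Q [m^3/s] times
# concentration C [mg/L] gives a mass flux in g/s (1 mg/L = 1 g/m^3),
# summed over the record and converted to kilotonnes (1 kt = 1e9 g).
# The constant daily values below are purely illustrative.

def suspended_sediment_load_kt(discharge_m3s, concentration_mgL, dt_s=86400.0):
    total_g = sum(q * c * dt_s
                  for q, c in zip(discharge_m3s, concentration_mgL))
    return total_g / 1e9

# One year at a constant 20 m^3/s and 25 mg/L:
load = suspended_sediment_load_kt([20.0] * 365, [25.0] * 365)  # ~15.8 kt
```

With these illustrative numbers the annual load lands in the same order of magnitude as the budgets reported above.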
Radionuclide sediment tracing using Be-7, Cs-137 and Pb-210 was applied to differentiate sediment sources for particles < 10 µm from hillslope and riverbank erosion; it showed that riverbank erosion generates 74.5% of the suspended sediment load, whereas surface erosion contributes 21.7% and gully erosion only 3.8%. The contribution of the individual sub-catchments of the Kharaa to the suspended sediment load was assessed based on their variation in geochemical composition (e.g. in Ti, Sn, Mo, Mn, As, Sr, B, U, Ca and Sb). These variations were used for sediment source discrimination with geochemical composite fingerprints based on Genetic Algorithm driven Discriminant Function Analysis, the Kruskal–Wallis H-test and Principal Component Analysis. The contributions of the individual sub-catchments varied from 6.4% to 36.2%, generally showing higher contributions from the sub-catchments in the middle, rather than the upstream, portions of the study area.
The results indicate that riverbank erosion generated by existing livestock grazing practices is the main cause of elevated fine sediment input. Actions towards the protection of the headwaters and the stabilization of the river banks within the middle reaches were identified as the highest priority. Deforestation by logging and forest fires should be prevented to avoid increased hillslope erosion in the mountainous areas. Mining activities are of minor importance for the overall catchment sediment load but can constitute locally important point sources for particular heavy metals in the fluvial system.
In this thesis, stochastic dynamics modelling the collective motion of populations, one of the most mysterious types of biological phenomena, are considered. For a system of N particle-like individuals, two kinds of asymptotic behaviour are studied: ergodicity and flocking properties in long time, and propagation of chaos when the number N of agents goes to infinity. Cucker and Smale's deterministic, mean-field kinetic model for a population without a hierarchical structure is the starting point of our journey: the first two chapters are dedicated to understanding the various stochastic dynamics it inspires, with random noise added in different ways. The third chapter, an attempt to improve those results, is built upon the cluster expansion method, a technique from statistical mechanics. Exponential ergodicity is obtained for a class of non-Markovian processes with non-regular drift. In the final part, the focus shifts to a stochastic system of interacting particles derived from Keller and Segel's 2-D parabolic-elliptic model for chemotaxis. Existence and weak uniqueness are proven.
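For illustration, a noisy variant of Cucker-Smale alignment dynamics can be simulated with a simple Euler-Maruyama scheme. The sketch below is a one-dimensional toy version; the parameter values and the communication rate psi(r) = K / (1 + r²)^beta are illustrative assumptions, not the specific dynamics analyzed in the thesis:

```python
import math
import random

# Toy stochastic Cucker-Smale model in one dimension, integrated with an
# Euler-Maruyama scheme: each agent's velocity relaxes toward a weighted
# mean of the others' velocities, perturbed by Gaussian noise.

def cucker_smale_step(x, v, dt=0.01, sigma=0.1, K=1.0, beta=0.5):
    """One Euler-Maruyama step: dv_i = (mean-field alignment) dt + sigma dW_i."""
    N = len(x)
    new_v = []
    for i in range(N):
        drift = sum(
            K / (1.0 + (x[j] - x[i]) ** 2) ** beta * (v[j] - v[i])
            for j in range(N)
        ) / N
        noise = sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        new_v.append(v[i] + drift * dt + noise)
    new_x = [x[i] + v[i] * dt for i in range(N)]
    return new_x, new_v

random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(20)]
v = [random.uniform(-1.0, 1.0) for _ in range(20)]
for _ in range(1000):
    x, v = cucker_smale_step(x, v)
# Without noise (sigma = 0) the velocity spread contracts toward a common
# value (flocking); with noise the velocities fluctuate around it.
```

With sigma = 0 this reduces to the deterministic Cucker-Smale dynamics, for which the velocity diameter is non-increasing.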
Understanding the distribution of species is fundamental for biodiversity conservation, ecosystem management, and increasingly also for climate impact assessment. The presence of a species in a given site depends on physiological limitations (abiotic factors), interactions with other species (biotic factors), migratory or dispersal processes (site accessibility) as well as the continuing
effects of past events, e.g. disturbances (site legacy). Existing approaches to predict species distributions either (i) correlate observed species occurrences with environmental variables describing abiotic limitations, thus ignoring biotic interactions, dispersal and legacy effects (statistical species distribution model, SDM); or (ii) mechanistically model the variety of processes determining species distributions (process-based model, PBM). SDMs are widely used due to their easy applicability and ability to handle varied data qualities. But they fail to reproduce the dynamic response of species distributions to changing conditions. PBMs are expected to be superior in this respect, but they need very specific data unavailable for many species, and are often more complex and require more computational effort. More recently, hybrid models link the two approaches to combine their respective strengths.
In this thesis, I apply and compare statistical and process-based approaches to predict species distributions, and I discuss their respective limitations, specifically for applications in changing environments. Detailed analyses of SDMs for boreal tree species in Finland reveal that non-climatic predictors (edaphic properties and biotic interactions) are important limitations at the treeline, contesting the assumption of unrestricted, climatically induced range expansion. While the estimated SDMs are successful within their training data range, spatial and temporal model transfer fails. Mapping and comparing sampled predictor space among data subsets identifies spurious extrapolation as the plausible explanation for limited model transferability. Using these findings, I analyze the limited success of an established PBM (LPJ-GUESS) applied to the same problem. Examination of process representation and parameterization in the PBM identifies implemented processes to adjust (competition between species, disturbance) and missing processes that are crucial in boreal forests (nutrient limitation, forest management). Based on climatic correlations shifting over time, I stress the restricted temporal transferability of bioclimatic limits used in LPJ-GUESS and similar PBMs. By critically assessing the performance of SDM and PBM in this application, I demonstrate the importance of understanding the limitations of the
applied methods.
As a potential solution, I add a novel approach to the repertoire of existing hybrid models. By simulation experiments with an individual-based PBM which reproduces community dynamics resulting from biotic factors, dispersal and legacy effects, I assess the resilience of coastal vegetation to abrupt hydrological changes. According to the results of the resilience analysis, I then modify temporal SDM predictions, thereby transferring relevant process detail from PBM to
SDM. The direction of knowledge transfer from PBM to SDM avoids disadvantages of current hybrid models and increases the applicability of the resulting model in long-term, large-scale applications. A further advantage of the proposed framework is its flexibility, as it is readily extended to other model types, disturbance definitions and response characteristics.
Concluding, I argue that we already have a diverse range of promising modelling tools at hand, which can be refined further. But most importantly, they need to be applied more thoughtfully. Bearing their limitations in mind, combining their strengths and openly reporting underlying assumptions and uncertainties is the way forward.
Start-up incentives targeted at unemployed individuals have become an important tool of the Active Labor Market Policy (ALMP) to fight unemployment in many countries in recent years. In contrast to traditional ALMP instruments like training measures, wage subsidies, or job creation schemes, which are aimed at reintegrating unemployed individuals into dependent employment, start-up incentives are a fundamentally different approach to ALMP, in that they intend to encourage and help unemployed individuals to exit unemployment by entering self-employment and, thus, by creating their own jobs. In this sense, start-up incentives for unemployed individuals serve not only as employment and social policy to activate job seekers and combat unemployment but also as business policy to promote entrepreneurship. The corresponding empirical literature on this topic so far has been mainly focused on the individual labor market perspective, however. The main part of the thesis at hand examines the new start-up subsidy (“Gründungszuschuss”) in Germany and consists of four empirical analyses that extend the existing evidence on start-up incentives for unemployed individuals from multiple perspectives and in the following directions:
First, it provides the first impact evaluation of the new start-up subsidy in Germany. The results indicate that participation in the new start-up subsidy has significant positive and persistent effects on both reintegration into the labor market and the income profiles of participants, in line with previous evidence on comparable German and international programs, which emphasizes the general potential of start-up incentives as part of the broader ALMP toolset. Furthermore, a new, innovative sensitivity analysis of the applied propensity score matching approach integrates findings from entrepreneurship and labor market research about the key role of an individual's personality for the start-up decision, business performance, and general labor market outcomes into the impact evaluation of start-up incentives. The sensitivity analysis with regard to the inclusion and exclusion of usually unobserved personality variables reveals that differences in the estimated treatment effects are small in magnitude and mostly insignificant. Consequently, concerns about potential overestimation of treatment effects in previous evaluation studies of similar start-up incentives due to usually unobservable personality variables are less justified, as long as the set of observed control variables is sufficiently informative (Chapter 2).
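To illustrate the matching step that underlies such an evaluation, here is a minimal sketch of 1:1 nearest-neighbour propensity score matching with replacement; the scores and outcomes are made-up illustration values and do not reflect the study data or its estimator:

```python
# Minimal sketch of 1:1 nearest-neighbour propensity score matching with
# replacement. Each treated unit is paired with the control unit whose
# propensity score is closest; the mean outcome difference estimates the
# average treatment effect on the treated (ATT).

def match_and_att(treated, controls):
    """treated, controls: lists of (propensity_score, outcome) tuples."""
    effects = []
    for p_t, y_t in treated:
        # Match each treated unit to the control closest in propensity score.
        p_c, y_c = min(controls, key=lambda c: abs(c[0] - p_t))
        effects.append(y_t - y_c)
    return sum(effects) / len(effects)

treated = [(0.8, 10.0), (0.6, 8.0)]
controls = [(0.78, 7.0), (0.55, 6.5), (0.2, 3.0)]
att = match_and_att(treated, controls)  # (10.0 - 7.0 + 8.0 - 6.5) / 2 = 2.25
```

The sensitivity analysis described above corresponds to re-running such an estimation with and without the personality variables in the score model and comparing the resulting effects.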
Second, the thesis expands our knowledge about the longer-term business performance and potential of subsidized businesses arising from the start-up subsidy program. In absolute terms, the analysis shows that a relatively high share of subsidized founders successfully survives in the market with their original businesses in the medium to long run. The subsidy also yields a “double dividend” to a certain extent in terms of additional job creation. Compared to “regular”, i.e., non-subsidized new businesses founded by non-unemployed individuals in the same quarter, however, the economic and growth-related impulses set by participants of the subsidy program are only limited with regard to employment growth, innovation activity, or investment. Further investigations of possible reasons for these differences show that differential business growth paths of subsidized founders in the longer run seem to be mainly limited by higher restrictions to access capital and by unobserved factors, such as less growth-oriented business strategies and intentions, as well as lower (subjective) entrepreneurial persistence. Taken together, the program has only limited potential as a business and entrepreneurship policy intended to induce innovation and economic growth (Chapters 3 and 4).
And third, an empirical analysis on the level of German regional labor markets yields that there is a high regional variation in subsidized start-up activity relative to overall new business formation. The positive correlation between regular start-up intensity and the share among all unemployed individuals who participate in the start-up subsidy program suggests that (nascent) unemployed founders also profit from the beneficial effects of regional entrepreneurship capital. Moreover, the analysis of potential deadweight and displacement effects from an aggregated regional perspective emphasizes that the start-up subsidy for unemployed individuals represents a market intervention into existing markets, which affects incumbents and potentially produces inefficiencies and market distortions. This macro perspective deserves more attention and research in the future (Chapter 5).
Galaxies evolve on cosmological timescales, and to study this evolution we can either study the stellar populations, tracing star formation and chemical enrichment, or the dynamics, tracing interactions and mergers of galaxies as well as accretion. In the last decades this field has become one of the most active research areas in modern astrophysics, and especially the use of integral field spectrographs has furthered our understanding. This work is based on data of NGC 5102 obtained with the panoramic integral field spectrograph MUSE. The data are analysed with two separate and complementary approaches: In the first part, standard methods are used to measure the kinematics and then model the gravitational potential using these exceptionally high-quality data. In the second part I develop the new method of surface brightness fluctuation spectroscopy and quantitatively explore its potential to investigate the bright evolved stellar population.
Measuring the kinematics of NGC 5102, I discover that this low-luminosity S0 galaxy hosts two counter-rotating discs. The more central stellar component co-rotates with the large amount of HI gas. Investigating the stellar populations, I find strong central age and metallicity gradients, with a younger and more metal-rich central population. The spectral resolution of MUSE does not allow these population gradients to be connected with the two counter-rotating discs.
The kinematic measurements are modelled with Jeans anisotropic models to infer the gravitational potential of NGC 5102. Under the self-consistent mass-follows-light assumption, none of the Jeans models is able to reproduce the observed kinematics. To my knowledge this is the strongest evidence for a dark-matter-dominated system obtained with this approach so far. Including a Navarro, Frenk & White dark matter halo immediately resolves the discrepancies. A very robust result is the logarithmic slope of the total matter density. For this low-mass galaxy I find a value of −1.75 ± 0.04, shallower than an isothermal halo and even shallower than published values for more massive galaxies. This confirms a tentative relation between total mass slope and stellar mass of galaxies.
The Surface Brightness Fluctuation (SBF) method is a well-established distance measure, but owing to its sensitivity to bright stars it is also used to study evolved stars in unresolved stellar populations. The wide-field spectrograph MUSE offers the possibility to apply this technique to spectroscopic data for the first time. In this thesis I develop the spectroscopic SBF technique and measure the first SBF spectrum of any galaxy. I discuss the challenges for measuring SBF spectra that arise from the complexity of integral field spectrographs compared to imaging instruments.
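The reason bright stars dominate the SBF signal follows from the classical SBF definition: the fluctuation luminosity is the ratio of the second to the first moment of the stellar luminosity function. A small sketch with illustrative star counts and luminosities (not values fitted to NGC 5102):

```python
# The SBF ("fluctuation") luminosity of a population is
#   Lbar = sum(n_i * L_i**2) / sum(n_i * L_i),
# i.e. a luminosity-weighted mean luminosity, which is why rare bright
# giants dominate the signal. Counts and luminosities are illustrative.

def sbf_luminosity(counts, luminosities):
    num = sum(n * L * L for n, L in zip(counts, luminosities))
    den = sum(n * L for n, L in zip(counts, luminosities))
    return num / den

# One million faint dwarfs vs. a thousand giants that are 1000x brighter:
counts = [1e6, 1e3]
lums = [1.0, 1e3]
lbar = sbf_luminosity(counts, lums)  # ~500.5: the rare giants dominate
```

Although the giants here are only 0.1% of the stars by number, the fluctuation luminosity is pulled almost entirely toward them, mirroring the dominance of giant-branch stars described below.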
For decades, stellar population models have indicated that SBFs in intermediate-to-old stellar systems are dominated by red giant branch and asymptotic giant branch stars. Especially the latter carry significant model uncertainties, making these stars a scientifically interesting target. Comparing the NGC 5102 SBF spectrum with stellar spectra, I show for the first time that M-type giants cause the fluctuations. Stellar evolution models suggest that carbon-rich thermally pulsating asymptotic giant branch stars should also leave a detectable signal in the SBF spectrum. I cannot detect a significant contribution from these stars in the NGC 5102 SBF spectrum.
I have written a stellar population synthesis tool that predicts for the first time SBF spectra. I compute two sets of population models: based on observed and on theoretical stellar spectra. In comparing the two models I find that the models based on observed spectra predict weaker molecular features. The comparison with the NGC 5102 spectrum reveals that these models are in better agreement with the data.
Although it has become common practice to build applications based on the reuse of existing components or services, technical complexity and semantic challenges constitute barriers to ensuring a successful and wide reuse of components and services. In the geospatial application domain, the barriers are self-evident due to heterogeneous geographic data, a lack of interoperability and complex analysis processes.
Constructing workflows manually and discovering proper services and data that match user intents and preferences is difficult and time-consuming especially for users who are not trained in software development. Furthermore, considering the multi-objective nature of environmental modeling for the assessment of climate change impacts and the various types of geospatial data (e.g., formats, scales, and georeferencing systems) increases the complexity challenges.
Automatic service composition approaches that provide semantics-based assistance in the process of workflow design have proven to be a solution to overcome these challenges and have become a frequent demand especially by end users who are not IT experts. In this light, the major contributions of this thesis are:
(i) Simplification of service reuse and workflow design of applications for climate impact analysis by following the eXtreme Model-Driven Development (XMDD) paradigm.
(ii) Design of a semantic domain model for climate impact analysis applications that comprises specifically designed services, ontologies that provide domain-specific vocabulary for referring to types and services, and the input/output annotation of the services using the terms defined in the ontologies.
(iii) Application of a constraint-driven method for the automatic composition of workflows for analyzing the impacts of sea-level rise. The application scenario demonstrates the impact of domain modeling decisions on the results and the performance of the synthesis algorithm.
The motivation of this work was to investigate the self-assembly of a block copolymer species that has received little attention so far, double hydrophilic block copolymers (DHBCs). DHBCs consist of two linear hydrophilic polymer blocks. The self-assembly of DHBCs into suprastructures such as particles and vesicles is driven by a strong difference in hydrophilicity between the corresponding blocks, leading to a microphase separation due to immiscibility. The benefits of DHBCs and the corresponding particles and vesicles, such as biocompatibility, high permeability towards water and hydrophilic compounds, as well as the large number of possible functionalizations of the block copolymers, make DHBC-based structures a viable choice in biomedicine. In order to establish a route towards self-assembled structures from DHBCs with the potential to act as cargo carriers for future applications, several block copolymers containing two hydrophilic polymer blocks were synthesized. Poly(ethylene oxide)-b-poly(N-vinylpyrrolidone) (PEO-b-PVP) and poly(ethylene oxide)-b-poly(N-vinylpyrrolidone-co-N-vinylimidazole) (PEO-b-P(VP-co-VIm)) block copolymers were synthesized via reversible deactivation radical polymerization (RDRP) techniques starting from a PEO macro chain transfer agent. The block copolymers displayed a concentration-dependent self-assembly behavior in water, which was determined via dynamic light scattering (DLS). It was possible to observe spherical particles via laser scanning confocal microscopy (LSCM) and cryogenic scanning electron microscopy (cryo SEM) in highly concentrated solutions of PEO-b-PVP. Furthermore, a crosslinking strategy for PEO-b-P(VP-co-VIm) was developed, applying the diiodo crosslinker diethylene glycol bis(2-iodoethyl) ether to form quaternary amines at the VIm units. The crosslinked structures proved stable upon dilution and upon transfer into organic solvents.
Moreover, self-assembly and crosslinking in DMF proved to be more advantageous, and the crosslinked structures could be successfully transferred to aqueous solution. The resulting spherical submicron particles could be visualized via LSCM, cryo SEM and cryo TEM.
Double hydrophilic pullulan-b-poly(acrylamide) block copolymers were synthesized via copper catalyzed alkyne azide cycloaddition (CuAAC) starting from suitable pullulan alkyne and azide functionalized poly(N,N-dimethylacrylamide) (PDMA) and poly(N-ethylacrylamide) (PEA) homopolymers. The conjugation reaction was confirmed via SEC and 1H-NMR measurements. The self-assembly of the block copolymers was monitored with DLS and static light scattering (SLS) measurements indicating the presence of hollow spherical structures. Cryo SEM measurements could confirm the presence of vesicular structures for Pull-b-PEA block copolymers. Solutions of Pull-b-PDMA displayed particles in cryo SEM. Moreover, an end group functionalization of Pull-b-PDMA with Rhodamine B allowed assessing the structure via LSCM and hollow spherical structures were observed indicating the presence of vesicles, too.
An exemplary pathway towards a DHBC-based drug delivery vehicle was demonstrated with the block copolymer Pull-b-PVP. The block copolymer was synthesized via RAFT/MADIX techniques starting from a pullulan chain transfer agent. Pull-b-PVP displayed a concentration-dependent self-assembly in water with an efficiency superior to the PEO-b-PVP system, which could be observed via DLS. Cryo SEM and LSCM microscopy displayed the presence of spherical structures. In order to apply a reversible crosslinking strategy to the synthesized block copolymer, the pullulan block was selectively oxidized to dialdehydes with NaIO4. The oxidation of the block copolymer was confirmed via SEC and 1H-NMR measurements. The self-assembled and oxidized structures were subsequently crosslinked with cystamine dihydrochloride, a pH- and redox-responsive crosslinker, resulting in crosslinked vesicles which were observed via cryo SEM. The vesicular structures of crosslinked Pull-b-PVP could be disassembled by acid treatment or by application of the redox agent tris(2-carboxyethyl)phosphine hydrochloride. The successful disassembly was monitored with DLS measurements.
To conclude, self-assembled structures from DHBCs such as particles and vesicles display a strong potential to generate an impact on biomedicine and nanotechnologies. The variety of DHBC compositions and functionalities are very promising features for future applications.
Self-adaptive data quality
(2017)
Carrying out business processes successfully is closely linked to the quality of the data inventory in an organization. Deficiencies in data quality lead to problems: Incorrect address data prevents (timely) shipments to customers. Erroneous orders lead to returns and thus to unnecessary effort. Wrong pricing forces companies to miss out on revenues or to impair customer satisfaction. If orders or customer records cannot be retrieved, complaint management takes longer. Due to erroneous inventories, too few or too many supplies might be reordered.
A special problem with data quality, and the reason for many of the issues mentioned above, are duplicates in databases. Duplicates are different representations of the same real-world objects in a dataset. These representations differ from each other and are therefore hard to match by a computer. Moreover, the number of comparisons required to find those duplicates grows with the square of the dataset size. To cleanse the data, these duplicates must be detected and removed. Duplicate detection is a very laborious process. To achieve satisfactory results, appropriate software must be created and configured (similarity measures, partitioning keys, thresholds, etc.). Both require much manual effort and experience.
This thesis addresses the automation of parameter selection for duplicate detection and presents several novel approaches that eliminate the need for human experience in parts of the duplicate detection process.
A pre-processing step is introduced that analyzes the datasets in question and classifies their attributes semantically. Not only do these annotations help in understanding the respective datasets, but they also facilitate subsequent steps, for example, by selecting appropriate similarity measures or normalizing the data upfront. This approach works without schema information.
Following that, we show a partitioning technique that strongly reduces the number of pair comparisons for the duplicate detection process. The approach automatically finds particularly suitable partitioning keys that simultaneously allow for effective and efficient duplicate retrieval. By means of a user study, we demonstrate that this technique finds partitioning keys that outperform expert suggestions and additionally does not need manual configuration. Furthermore, this approach can be applied independently of the attribute types.
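The scale of the problem and the effect of partitioning can be illustrated with a minimal Python sketch (the records, field names, and the prefix-based partitioning key are hypothetical examples, not taken from the thesis):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical toy records; real datasets have millions of rows.
records = [
    {"id": 1, "name": "Schmidt, Anna"},
    {"id": 2, "name": "Schmid, Anna"},   # likely duplicate of id 1
    {"id": 3, "name": "Meyer, Jonas"},
    {"id": 4, "name": "Meier, Jonas"},   # likely duplicate of id 3
]

def naive_pairs(recs):
    # Exhaustive comparison: n * (n - 1) / 2 candidate pairs.
    return list(combinations(recs, 2))

def blocked_pairs(recs, key=lambda r: r["name"][:3].lower()):
    # Partition the records by a key and compare only within partitions.
    blocks = defaultdict(list)
    for r in recs:
        blocks[key(r)].append(r)
    return [p for block in blocks.values() for p in combinations(block, 2)]

print(len(naive_pairs(records)))    # 6 candidate pairs for 4 records
print(len(blocked_pairs(records)))  # 1 pair: "Meyer"/"Meier" is missed,
                                    # illustrating why key choice matters
```

A poorly chosen key prunes true duplicates along with the workload, which is precisely why an automatic search for suitable partitioning keys matters.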
To measure the success of a duplicate detection process and to execute the described partitioning approach, a gold standard is required that provides information about the actual duplicates in a training dataset. This thesis presents a technique that uses existing duplicate detection results and crowdsourcing to create a near gold standard that can be used for the purposes above. Another part of the thesis describes and evaluates strategies for reducing these crowdsourcing costs and reaching a consensus with less effort.
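One common way to aggregate crowd judgments at reduced cost is to stop collecting votes on a candidate pair once a majority threshold is reached; a minimal sketch of this idea (the labels and the threshold are illustrative, not the thesis's actual strategy):

```python
from collections import Counter

def consensus(votes, threshold=0.66):
    """Return the majority label for one candidate duplicate pair once its
    vote share reaches the threshold; otherwise None (gather more votes)."""
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    return label if n / len(votes) >= threshold else None

print(consensus(["dup", "dup", "no-dup"]))  # 'dup' (2/3 of the votes)
print(consensus(["dup", "no-dup"]))         # None: no consensus yet
```

Stopping early on clear-cut pairs concentrates the remaining crowdsourcing budget on the contested ones.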
This project focused on generating ultrathin stimuli-responsive membranes with an embedded transmembrane protein acting as the pore. The membranes were formed by crosslinking of transmembrane protein-polymer conjugates. The conjugates were self-assembled at the air-water interface, and the polymer chains were crosslinked using a UV-crosslinkable comonomer to form the membrane. The protein used for the studies reported herein was one of the largest transmembrane channel proteins, the ferric hydroxamate uptake protein component A (FhuA), found in the outer membrane of Escherichia coli (E. coli). The wild-type protein and three genetic variants of FhuA were provided by the group of Prof. Schwaneberg in Aachen. The well-known thermoresponsive poly(N-isopropylacrylamide) (PNIPAAm) and the pH- and thermoresponsive polymer poly((2-dimethylamino)ethyl methacrylate) (PDMAEMA) were conjugated to FhuA and the genetic variants via controlled radical polymerization (CRP) using a grafting-from technique. These polymers were chosen because they provide stimuli handles in the resulting membranes. The reported polymerization was the first reported attempt to attach polymer chains onto a membrane protein using site-specific modification.
The conjugate synthesis was carried out in two steps: (a) FhuA was first converted into a macroinitiator by covalently linking a water-soluble functional CRP initiator to the lysine residues; (b) copper-mediated CRP was then carried out in pure buffer conditions, with and without sacrificial initiator, to generate the conjugates.
The challenge was carrying out the modifications on FhuA without denaturing it. FhuA, being a transmembrane protein, requires amphiphilic species to stabilize its highly hydrophobic transmembrane region. For the experiments reported in this thesis, the stabilizing agent was 2-methyl-2,4-pentanediol (MPD). Since the buffer containing MPD cannot be considered a purely aqueous system, and because MPD might interfere with the polymerization procedure, the reaction conditions were first optimized using a model globular protein, bovine serum albumin (BSA). The optimized conditions were then used for the generation of conjugates with FhuA.
The generated conjugates were shown to be highly interfacially active, and this property was exploited to let them self-assemble at polar-apolar interfaces. Emulsions stabilized by particles or conjugates are referred to as Pickering emulsions. Crosslinking the conjugates with a UV-crosslinkable comonomer afforded nanothin microcompartments. Interfacial self-assembly at the air-water interface and subsequent UV crosslinking also yielded nanothin, stimuli-responsive membranes, which were shown to be mechanically robust. Initial characterization of the flux and permeation of water through these membranes is also reported herein. The generated nanothin membranes with PNIPAAm showed reduced permeation at elevated temperatures owing to the resistance of the hydrophobic, and thus water-impermeable, polymer matrix, confirming the stimulus responsivity.
Additionally, as part of collaborative work with Dr. Changzhu Wu, TU Dresden, conjugates of three enzymes with current or potential industrial relevance (Candida antarctica lipase B, benzaldehyde lyase, and glucose oxidase) with stimuli-responsive polymers were synthesized. This work aims at carrying out cascade reactions in Pickering emulsions generated by the self-assembled enzyme-polymer conjugates.
In the present work, two different hybrid materials (HM) were successfully synthesized via the sol-gel method. The HM are monoliths with diameters of up to 4.5 cm. The first HM consists of titanium dioxide and Bombyx mori silk and is referred to as TS, while the second contains less silk and additionally polyethylene oxide (PEO) and is therefore abbreviated as TPS. Some of the HM were immersed in an aqueous tetrachloroauric acid solution after synthesis, whereby gold nanoparticles formed on their surface.
The materials were characterized by electron microscopy, energy-dispersive X-ray spectroscopy, Raman spectroscopy, and X-ray powder diffraction. The results show that both HM are composed of spherical titanium dioxide nanoparticles of about 5 nm, consisting primarily of anatase with a small fraction of brookite. The gold nanoparticles on TPS_Au were larger and more polydisperse than those on the TS_Au HM. Moreover, the gold nanoparticles penetrated deeper into the TS HM than into the TPS HM.
Further analysis of the HM by elemental analysis and thermogravimetric analysis revealed a lower fraction of organic components in TPS than in TS, although the same mass of organic material was used in both syntheses. It is assumed that part of the PEO is washed out of the material during synthesis. This hypothesis is consistent with the results of nitrogen sorption and mercury porosimetry, which indicated a higher surface area for the TPS HM than for the TS HM.
Varying several synthesis parameters, such as the amounts of silk and PEO or the composition of the titanium dioxide precursor solution, had a strong influence on the synthesized HM. While different amounts of PEO affected the size of the HM, no HM of comparable size could be produced without silk. The formation of the HM is strongly influenced by the composition of the titanium dioxide precursor solution; modifying it therefore only rarely led to the formation of a homogeneous HM.
The HM synthesized in this work were employed as photocatalysts for water splitting and for the degradation of methylene blue. For photocatalytic water splitting, the influence of different gold concentrations on the hydrogen yield was first investigated for the TPS HM. The best results were obtained with 2.5 mg of tetrachloroauric acid. Furthermore, it was shown that a considerably higher amount of hydrogen could be obtained with the TPS HM than with the TS HM. The lower activity of the latter is attributed to its smaller specific surface area, different pore structure, higher silk content and, in particular, to the smaller size and greater penetration depth of the gold nanoparticles. In addition, an increase in hydrogen yield was achieved with a higher UV fraction in the light source and by adding ethanol as a sacrificial reagent.
For the degradation of methylene blue, initially only adsorption of the methylene blue was observed for both HM. After the addition of hydrogen peroxide, an almost complete oxidation of the methylene blue under visible light was observed within 8 h. The slightly higher activity of TPS compared to TS is attributed to the different pore structure and the higher silk content of the TS HM. Overall, both HM show good photocatalytic activity for the degradation of methylene blue compared to values reported in the literature.
Non-destructive testing of structures using ultrasonic measurement methods has gained importance in recent years. Ultrasonic measurements can determine the geometry of structural components and detect flaws that are not visible from the outside, such as delaminations and honeycombing.
With novel ultrasonic transducers embedded in the concrete component, structures can now be permanently monitored for changes. For this purpose, ultrasonic signals are generated directly inside a component, which substantially extends the possibilities of conventional structural monitoring methods. An ultrasonic technique with embedded transducers could continuously and integrally monitor a concrete component and thus also register steadily progressing changes in the microstructure, such as microcracks.
Safety-relevant components that are inaccessible for measurements after installation, or that cannot be tested with ultrasound, for example due to additional surface coatings, can be monitored with embedded transducers. In existing structures, the ultrasonic transducers can also be integrated retroactively using boreholes and special grouting mortar. For precast components, embedded transducers lend themselves to production control and to monitoring of the construction process as a quality assurance tool. Rapid damage assessment of a structure after natural disasters, such as an earthquake or a flood, is also conceivable.
Thanks to their good coupling, these novel transducers enable the use of sensitive evaluation methods for signal analysis, such as cross-correlation, coda wave interferometry, or amplitude evaluation. With regular measurements, incipient damage to a structure can thus be detected at an early stage.
Since the damage state of a structure is not a directly measurable quantity, unambiguous damage detection generally requires the measurement of several physical quantities that are suitably combined. Such physical quantities include the ultrasonic travel time, the amplitude of the ultrasonic signal, and the ambient temperature. To this end, correlations between the condition of the structure, the environmental conditions, and the parameters of the measured ultrasonic signal must be investigated.
In this work, the novel transducers are presented. It is described how they can be installed both in existing concrete structures and in structures under construction. Experiments show that the transducers can be embedded in several planes, since their radiation pattern in concrete is nearly omnidirectional. The center frequency of around 62 kHz allows distances of at least 3 m, depending on the type of concrete and the SRV, between transducers acting as transmitter and receiver. The sensitivity of the embedded transducers to changes in the concrete is demonstrated in two laboratory experiments, a three-point bending test and a freeze-thaw cycling damage test. The results are compared with other non-destructive testing methods. It is shown that, by applying sensitive evaluation methods, the transducers detect cracks in the concrete before these pose a danger to the structure. Finally, examples of the installation of the novel ultrasonic transducers in real components, two bridges and a foundation, are presented, and, based on the first experience gained there, a concept for implementing long-term monitoring is proposed.
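As an illustration of the cross-correlation evaluation mentioned above, the following sketch estimates the travel-time shift between a baseline and a repeated ultrasonic signal; the 62 kHz pulse shape, the sampling rate, and the 5 µs delay are synthetic toy values, not measurement data:

```python
import numpy as np

fs = 1_000_000                      # sampling rate in Hz (toy value)
t = np.arange(0, 0.002, 1 / fs)     # 2 ms recording window

def pulse(delay):
    # Toy ultrasonic pulse: 62 kHz carrier in a Gaussian envelope.
    env = np.exp(-((t - 0.001 - delay) ** 2) / (2 * 1e-4 ** 2))
    return env * np.sin(2 * np.pi * 62_000 * (t - delay))

baseline = pulse(0.0)
repeat = pulse(5e-6)                # arrives 5 µs later, e.g. after damage

# Cross-correlate the two measurements and locate the best-matching lag.
corr = np.correlate(repeat, baseline, mode="full")
lag = (np.argmax(corr) - (len(t) - 1)) / fs
print(f"estimated delay: {lag * 1e6:.1f} microseconds")  # 5.0
```

Coda wave interferometry applies the same idea to short windows of the late, multiply scattered part of the signal, where small velocity changes accumulate into measurable delays.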
In this thesis, I develop a theoretical implementation of prosodic reconstruction and apply it to the empirical domain of German sentences in which part of a focus or contrastive topic is fronted.
Prosodic reconstruction refers to the idea that sentences involving syntactic movement show prosodic parallels with corresponding simpler structures without movement. I propose to model this recurrent observation by ordering syntax-prosody mapping before copy deletion.
In order to account for the partial fronting data, the idea is extended to the mapping between prosody and information structure. This assumption helps to explain why object-initial sentences containing a broad focus or broad contrastive topic show similar prosodic and interpretative restrictions as sentences with canonical word order.
The empirical adequacy of the model is tested against a set of gradient acceptability judgments.
This cumulative doctoral dissertation, based on three publications, is devoted to the investigation of several aspects of azobenzene molecular switches, with the aid of computational chemistry.
In the first paper, the isomerization rates of the thermal cis → trans isomerization of azobenzenes are calculated for species formed upon an integer electron transfer, i.e., with an added or removed electron, using Eyring's transition state theory and activation energy barriers computed by means of density functional theory. The obtained results are discussed in connection with an experimental study of the thermal cis → trans isomerization of azobenzene derivatives in the presence of gold nanoparticles, which is shown to be greatly accelerated in comparison to the same isomerization reaction in the absence of nanoparticles.
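The rate expression behind such estimates is the Eyring equation, k = (k_B·T/h)·exp(−ΔG‡/RT); a small sketch with hypothetical barrier values (not results from the paper):

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
R = 8.314462618       # molar gas constant, J/(mol*K)

def eyring_rate(delta_g_act, temperature=298.15):
    """Transition-state-theory rate constant (1/s) for a unimolecular
    reaction with a free-energy barrier delta_g_act in J/mol."""
    return (K_B * temperature / H) * math.exp(-delta_g_act / (R * temperature))

# A modest barrier reduction translates into a large acceleration:
print(eyring_rate(90e3) / eyring_rate(100e3))  # lowering the barrier by
                                               # 10 kJ/mol gives ~56x speedup
```

The exponential dependence on the barrier is why even a small barrier change induced by electron transfer can account for a strongly accelerated isomerization.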
The second paper is concerned with electronically excited states of (i) dimers, composed of two photoswitchable units placed closely side-by-side, as well as (ii) monomers and dimers adsorbed on a silicon cluster. A variety of quantum chemistry methods, capable of calculating molecular electronic absorption spectra, based on density functional and wave function theories, is employed to quantify changes in optical absorption upon dimerization and covalent grafting to a surface. Specifically, the exciton (Davydov) splitting between states of interest is determined from first-principles calculations with the help of natural transition orbital analysis, allowing for insight into the nature of excited states.
In the third paper, nonadiabatic molecular dynamics with trajectory surface hopping is applied to model the photoisomerization of azobenzene dimers, (i) in the isolated case (exhibiting the exciton coupling between two molecules) and (ii) in the constrained case (adding the van der Waals interaction with the environment to the exciton coupling between the two monomers). For the latter, additional azobenzene molecules surrounding the dimer are introduced, mimicking a densely packed self-assembled monolayer. From the obtained results it is concluded that the isolated dimer is capable of isomerization just like the monomer, whereas steric hindrance considerably suppresses the trans → cis photoisomerization.
Furthermore, the present dissertation comprises a general introduction describing the main features of the azobenzene photoswitch and the objectives of this work, the theoretical basis of the employed methods, and a discussion of the findings in the light of the existing literature. Additional results are also presented on (i) activation parameters of the thermal cis → trans isomerization of azobenzenes, (ii) an approximate scheme to account for the anharmonicity of molecular vibrations in the calculation of the activation entropy, and (iii) absorption spectra of photoswitch–silicon composites obtained from time-demanding wave-function-based methods.
Prosody is a rich source of information that heavily supports spoken language comprehension. In particular, prosodic phrase boundaries divide the continuous speech stream into chunks reflecting the semantic and syntactic structure of an utterance. This chunking or prosodic phrasing plays a critical role in both spoken language processing and language acquisition. Aiming at a better understanding of the underlying processing mechanisms and their acquisition, the present work investigates factors that influence prosodic phrase boundary perception in adults and infants. Using the event-related potential (ERP) technique, three experimental studies examined the role of prosodic context (i.e., phrase length) in German phrase boundary perception and the role of the main prosodic boundary cues, namely pitch change, final lengthening, and pause. With regard to the boundary cues, the dissertation focused on the questions of which cues or cue combinations are essential for the perception of a prosodic boundary, and whether and how this cue weighting develops during infancy.
Using ERPs is advantageous because the technique captures the immediate impact of (linguistic) information during online processing. Moreover, as it can be applied independently of specific task demands or an overt response, it can be used with both infants and adults. ERPs are particularly suitable for studying the time course and underlying mechanisms of boundary perception, because a specific ERP component, the Closure Positive Shift (CPS), is well established as a neurophysiological indicator of prosodic boundary perception in adults.
The results of the three experimental studies first underpin that the prosodic context plays an immediate role in the processing of prosodic boundary information. Moreover, the second study reveals that adult listeners perceive a prosodic boundary also on the basis of a subset of the boundary cues available in the speech signal. Both the ERP and the simultaneously collected behavioral data (i.e., prosodic judgements) suggest that the combination of pitch change and final lengthening triggers boundary perception; however, when presented as single cues, neither pitch change nor final lengthening was sufficient. Finally, testing six- and eight-month-old infants shows that the early sensitivity to prosodic information is reflected in a brain response resembling the adult CPS. For both age groups, brain responses to prosodic boundaries cued by pitch change and final lengthening revealed a positivity that can be interpreted as a CPS-like infant ERP component. In contrast, but comparable to the adults’ response pattern, pitch change as a single cue did not provoke an infant CPS. These results show that infant phrase boundary perception is not exclusively based on pause detection and hint at an early ability to exploit subtle, relational prosodic cues in speech perception.
This thesis investigates the processing of non-canonical word orders and whether non-canonical orders involving object topicalizations, midfield scrambling and particle verbs are treated the same by native (L1) and non-native (L2) speakers. The two languages investigated are Norwegian and German.
32 L1 speakers of Norwegian and 32 L1 German advanced learners of Norwegian were tested in two experiments on object topicalization in Norwegian. The results from the online self-paced reading task and the offline agent identification task show that both groups are able to identify the non-canonical word order and show a facilitatory effect of animate subjects in their reanalysis. Similarly high error rates in the agent identification task suggest that globally unambiguous object topicalizations are a challenging structure for L1 and L2 speakers alike.
The same participants were also tested in two experiments on particle placement in Norwegian, again using a self-paced reading task, this time combined with an acceptability rating task. In the acceptability rating L1 and L2 speakers show the same preference for the verb-adjacent placement of the particle over the non-adjacent placement after the direct object. However, this preference for adjacency is only found in the L1 group during online processing, whereas the L2 group shows no preference for either order.
Another set of experiments tested 33 L1 German and 39 L1 Slavic advanced learners of German on object scrambling in ditransitive clauses in German. The non-native speakers accept both object orders and show neither a preference for one of the orders nor a processing advantage for the canonical order. The L1 group, in contrast, shows a small but significant preference for the canonical dative-first order in both the judgment and the reading task.
The same participants were also tested in two experiments on the application of the split rule in German particle verbs. Advanced L2 speakers of German are able to identify particle verbs and can apply the split rule in V2 contexts in an acceptability judgment task in the same way as L1 speakers. However, unlike the L1 group, the L2 group is not sensitive to the grammaticality manipulation during online processing. They seem to be sensitive to the additional lexical information provided by the particle, but are unable to relate the split particle to the preceding verb and recognize the ungrammaticality in non-V2 contexts.
Taken together, my findings suggest that non-canonical word orders are not per se more difficult to identify for L2 speakers than L1 speakers and can trigger the same reanalysis processes as in L1 speakers. I argue that L2 speakers’ ability to identify a non-canonical word order depends on how the non-canonicity is signaled (case marking vs. surface word order), on the constituents involved (identical vs. different word types), and on the impact of the word order change on sentence meaning. Non-canonical word orders that are signaled by morphological case marking and cause no change to the sentence’s content are hard to detect for L2 speakers.
Prevalence and Predictors of Sexual Aggression Victimization and Perpetration in Chile and Turkey
(2017)
Background: Although sexual aggression is recognized as a serious issue worldwide, the current knowledge base is primarily built on evidence from Western countries, particularly the U.S. For the present doctoral research, Chile and Turkey were selected based on theoretical considerations to examine the prevalence as well as predictors of sexual aggression victimization and perpetration. The first aim of this research project was to systematically review the available evidence provided by past studies on this topic within each country. The second aim was to empirically study the prevalence of experiencing and engaging in sexual aggression since the age of consent among college students in Chile and Turkey. The third aim was to conduct cross-cultural analyses examining pathways to victimization and perpetration based on a two-wave longitudinal design.
Methods: This research adopted a gender-inclusive approach by considering men and women in both victim and perpetrator roles. For the systematic reviews, multiple-stage literature searches were performed, and based on a predefined set of eligibility criteria, 28 studies in Chile and 56 studies in Turkey were identified for inclusion. A two-wave longitudinal study was conducted to examine the prevalence and predictors of sexual aggression among male and female college students in Chile and Turkey. Self-reports of victimization and perpetration were assessed with a Chilean Spanish or Turkish version of the Sexual Aggression and Victimization Scale. Two path models were conceptualized in which participants’ risky sexual scripts for consensual sex, risky sexual behavior, sexual self-esteem, sexual assertiveness, and religiosity were assessed at T1 and used as predictors of sexual aggression victimization and perpetration at T2 in the following 12 months, mediated through past victimization or perpetration, respectively. The models differed in that sexual assertiveness was expected to serve different functions for victimization (refusal assertiveness negatively linked to victimization) and perpetration (initiation assertiveness positively linked to perpetration).
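The product-of-coefficients logic behind such mediation path models can be sketched on simulated toy data (variable names and effect sizes are purely illustrative; the actual models additionally control for the predictor in the outcome equation and include all T1 constructs):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Simulated mediation chain X -> M -> Y (toy data, not the thesis data):
x = rng.normal(size=n)               # e.g. risky sexual scripts at T1
m = 0.5 * x + rng.normal(size=n)     # mediator, e.g. risky sexual behavior
y = 0.4 * m + rng.normal(size=n)     # outcome, e.g. victimization at T2

def ols_slope(pred, out):
    # Least-squares slope of `out` on `pred`, with an intercept.
    design = np.column_stack([np.ones(len(pred)), pred])
    return np.linalg.lstsq(design, out, rcond=None)[0][1]

a = ols_slope(x, m)                  # path X -> M
b = ols_slope(m, y)                  # path M -> Y (simplified b-path)
print(round(a * b, 2))               # indirect effect, close to 0.5 * 0.4
```

The indirect effect a·b captures how a distal predictor raises risk only via the mediator, which is the pattern the prospective analyses report for risky sexual scripts operating through risky sexual behavior.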
Results: Both systematic reviews revealed that victimization was addressed by all included studies, but data on perpetration was severely limited. A great heterogeneity not only in victimization rates but also in predictors was found, which may be attributed to a lack of conceptual and methodological consistency across studies. The empirical analysis of the prevalence of sexual aggression in Chile revealed a victimization rate of 51.9% for women and 48.0% for men, and a perpetration rate of 26.8% for men and 16.5% for women. In the Turkish original data, victimization was reported by 77.6% of women and 65.5% of men, whereas, again, lower rates were found for perpetration, with 28.9% of men and 14.2% of women reporting at least one incident. The cross-cultural analyses showed, as expected, that risky sexual scripts informed risky sexual behavior, and thereby indirectly increased the likelihood of victimization and perpetration at T2 in both samples. More risky sexual scripts were also linked to lower levels of refusal assertiveness in both samples, indirectly increasing the vulnerability to victimization at T2. High sexual self-esteem decreased the probability of victimization at T2 through higher refusal assertiveness as well as through less risky sexual behavior also in both samples, whereas it increased the odds of perpetration at T2 via higher initiation assertiveness in the Turkish sample only. Furthermore, high religiosity decreased the odds of perpetration and victimization at T2 through less risky sexual scripts and less risky sexual behavior in both samples. It reduced the vulnerability to victimization through less risky sexual scripts and higher refusal assertiveness in the Chilean sample only. In the Turkish sample only, it increased the odds of perpetration and victimization through lower sexual self-esteem.
Conclusions: The findings showed that sexual aggression is a widespread problem in both Chile and Turkey, contributing cross-cultural evidence to the international knowledge base and indicating the clear need for implementing policy measures and prevention strategies in each country. Based on the results of the prospective analyses, concrete implications for intervention efforts are discussed.
Estuarine marshes are ecosystems situated at the transition zone between land and water and are thus controlled by physical and biological interactions. Marsh vegetation offers important ecosystem services by filtering solid and dissolved substances from the water and providing habitat. By buffering a large part of the arriving flow velocity, attenuating wave energy and serving as erosion control for riverbanks, tidal marshes furthermore reduce the destructive effects of storm surges and storm waves and thus contribute to ecosystem-based shore protection. However, in many estuaries, extensive embankments, artificial bank protection, river dredging and agriculture threaten tidal marshes. Global warming might entail additional risks, such as changes in water levels, an increase of the tidal amplitude and a resulting shift of the salinity zones. This can affect the dynamics of the shore and foreland vegetation, and vegetation belts can be narrowed or fragmented. Against this background, it is crucial to gain a better understanding of the processes underlying the spatio-temporal vegetation dynamics in brackish marshes. Furthermore, a better understanding of how plant-habitat relationships generate patterns in tidal marsh vegetation is vital to maintain ecosystem functions and assess the response of marshes to environmental change as well as the success of engineering and restoration projects.
For this purpose, three research objectives were addressed within this thesis: (1) to explore the possibility of vegetation serving as self-adaptive shore protection by quantifying the reduction of current velocity in the vegetation belt and the morphologic plasticity of a brackish marsh pioneer, (2) to disentangle the roles of abiotic factors and interspecific competition on species distribution and stand characteristics in brackish marshes, and (3) to develop a mechanistic vegetation model that helps analysing the influence of habitat conditions on the spatio-temporal dynamic of tidal marsh vegetation. These aspects were investigated using a combination of field studies and statistical as well as process-based modelling.
To explore the possibility of vegetation serving as self-adaptive coastal protection, in the first study, we measured current velocity with and without living vegetation, recorded ramet density and plant thickness during two growing periods at two locations in the Elbe estuary and assessed the adaptive value of a larger stem diameter of plants at locations with higher mechanical stress by biomechanical measurements. The results of this study show that under non-storm conditions, the vegetation belt of the marsh pioneer Bolboschoenus maritimus is able to buffer a large proportion of the flow velocity. We were furthermore able to show that morphological traits of plant species are adapted to hydrodynamic forces by demonstrating a positive correlation between ramet thickness and cross-shore current. In addition, our measurements revealed that thicker ramets growing at the front of the vegetation belt have a significantly higher stability than ramets inside the vegetation belt. This self-adaptive effect improves the ability of B. maritimus to grow and persist in the pioneer zone and could provide an adaptive value in habitats with high mechanical stress.
In the second study, we assessed the distribution of the two marsh species and a set of stand characteristics, namely aboveground and belowground biomass, ramet density, ramet height and the percentage of flowering ramets. Furthermore, we collected information on several abiotic habitat factors to test their effect on plant growth and zonation with generalised linear models (GLMs). Our results demonstrate that flow velocity is the main factor controlling the distribution of Bolboschoenus maritimus and Phragmites australis. Additionally, inundation height and duration, as well as intraspecific competition, affect distribution patterns. This study furthermore shows that cross-shore flow velocity not only directly influences the distribution of the two marsh species, but also alters the plants’ occurrence relative to inundation height and duration. This suggests an effect of cross-shore flow velocity on their tolerance to inundation. The analysis of the measured stand characteristics revealed a negative effect of total flow velocity on all measured parameters of B. maritimus and thus confirmed our expectation that flow velocity is a decisive stressor which influences the growth of this species.
To gain a better understanding of the processes and habitat factors influencing the spatio-temporal vegetation dynamics in brackish marshes, I built a spatially explicit, mechanistic model applying a pattern-oriented modelling approach. A sensitivity analysis of the parameters of this dynamic habitat-macrophyte model HaMac suggests that rhizome growth is the key process for the lateral dynamics of brackish marshes. Among the analysed habitat factors, P. australis patterns were mainly influenced by flow velocity. The competition with P. australis was of key importance for the belowground biomass of B. maritimus. Concerning vegetation dynamics, the model results emphasise that without the effect of flow velocity the B. maritimus vegetation belt would expand into the tidal flat at locations that currently show vegetation recession, suggesting that flow velocity is the main reason for vegetation recession at exposed locations.
Overall, the results of this thesis demonstrate that brackish marsh vegetation considerably contributes to flow reduction under average flow conditions and can hence be a valuable component of shore-protection schemes. At the same time, the distribution, growth and expansion of tidal marsh vegetation is substantially influenced by flow. Altogether, this thesis provides a clear step forward in understanding plant-habitat interactions in tidal marshes. Future research should integrate studies of vertical marsh accretion with research on the factors that control the lateral position of marshes.
The existence of diverse and active microbial ecosystems in the deep subsurface – a biosphere originally considered devoid of life – has been discovered in multiple microbiological studies. However, most of these studies are restricted to marine ecosystems, while our knowledge of the microbial communities in the deep subsurface of lake systems, and of their potential to adapt to changing environmental conditions, is still fragmentary. This doctoral thesis aims to build a unique data basis providing the first detailed high-throughput characterization of the deep biosphere of lacustrine sediments and to emphasize how important it is to differentiate between the living and the dead microbial community in deep-biosphere studies.
In this thesis, up to 3.6 Ma old sediments (up to 317 m deep) of the El’gygytgyn Crater Lake, which holds the oldest terrestrial climate record of the Arctic, were examined. Combining next-generation sequencing with detailed geochemical characteristics and other environmental parameters, the microbial community composition was analyzed with regard to the changing climatic conditions between 3.6 Ma and 1.0 Ma ago (Pliocene and Pleistocene). DNA was successfully extracted from all investigated sediments, revealing a surprisingly diverse (6,910 OTUs) and abundant microbial community in the El’gygytgyn deep sediments. The bacterial abundance (10³-10⁶ 16S rRNA copies g⁻¹ sediment) was up to two orders of magnitude higher than the archaeal abundance (10¹-10⁵) and fluctuated with the Pleistocene glacial/interglacial cyclicity. Interestingly, a strong increase in microbial diversity with depth was observed (approximately 2.5 times higher diversity in Pliocene sediments than in Pleistocene sediments). This increase in diversity with depth in Lake El’gygytgyn is most probably caused by higher sedimentary temperatures towards the deeper sediment layers, as well as by enhanced temperature-induced intra-lake bioproductivity and a higher input of allochthonous organic-rich material during Pliocene climatic conditions. Moreover, the microbial richness parameters follow the general trends of paleoclimatic parameters such as paleo-temperature and paleo-precipitation. The most abundant bacterial representatives in the El’gygytgyn deep biosphere are affiliated with the phyla Proteobacteria, Actinobacteria, Bacteroidetes, and Acidobacteria, which are also commonly distributed in the surrounding permafrost habitats. The predominant taxon was the halotolerant genus Halomonas (on average 60% of the total reads per sample).
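The reported depth trend in diversity can be illustrated with the Shannon index, a standard diversity measure for OTU tables. The OTU count vectors below are invented for this sketch; they merely mimic the contrast between a few-taxon-dominated (Pleistocene-like) and a more even (Pliocene-like) community.

```python
import math

# Hypothetical sketch: Shannon diversity H' for two invented OTU count
# vectors, illustrating the higher diversity reported for Pliocene
# versus Pleistocene sediments. The counts are made up for illustration.
def shannon(counts):
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

pleistocene = [900, 50, 30, 15, 5]                       # dominated by one taxon
pliocene = [120, 110, 100, 95, 90, 85, 80, 75, 70, 65]   # more even community

print(shannon(pleistocene), shannon(pliocene))
```

The more even community yields the higher index, which is the pattern the thesis attributes to the warmer Pliocene depositional conditions.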
Additionally, this doctoral thesis focuses on the live/dead differentiation of microbes in cultures and environmental samples. As established methods (e.g., fluorescence in situ hybridization, RNA analyses) are not applicable to the challenging El’gygytgyn sediments, two newer methods were adapted to distinguish between DNA from living cells and free (extracellular, dead) DNA: propidium monoazide (PMA) treatment and a cell separation adapted to low amounts of DNA. The applicability of the DNA-intercalating dye PMA to masking free DNA was successfully evaluated on different cultures of methanogenic archaea, which play a major role in the global carbon cycle. Moreover, an optimal procedure for treating bacteria and archaea simultaneously was developed, using 130 µM PMA and 5 min of photo-activation with blue LED light; it is also applicable to sandy environmental samples with a particle load of ≤ 200 mg mL⁻¹. It was demonstrated that soil texture has a strong influence on the PMA treatment of particle-rich samples and that silt- and clay-rich samples in particular (e.g., El’gygytgyn sediments) lead to insufficient shielding of free DNA by PMA. Therefore, a cell separation protocol was used to distinguish between DNA from living cells (intracellular DNA) and extracellular DNA in the El’gygytgyn sediments. When these two DNA pools were compared with a total DNA pool extracted with a commercial kit, significant differences in the microbial composition of all three pools (mean distance of relative abundance: 24.1%, mean distance of OTUs: 84.0%) were discovered. In particular, the total DNA pool covers significantly fewer taxa than the cell-separated DNA pools and only inadequately represents the living community. Moreover, individual redundancy analyses revealed that the microbial communities of the intra- and extracellular DNA pools are driven by different environmental factors.
The living community is mainly influenced by life-dependent parameters (e.g., sedimentary matrix, water availability), while the extracellular DNA depends on the biogenic silica content. The different community-shaping parameters, together with the fact that a redundancy analysis of the total DNA pool explains significantly less variance of the microbial community, indicate that the total DNA represents a mixture of signals from the living and the dead microbial community.
This work provides the first fundamental dataset on the diversity and distribution of microbial deep-biosphere communities of a lake system over several million years. Moreover, it demonstrates the substantial importance of extracellular DNA in old sediments. These findings may strongly influence future environmental community analyses, in which live/dead differentiation can prevent misinterpretations arising from a failure to capture the living microbial community or from an overestimation of past community diversity in total DNA extraction approaches.
In the present work, various polymers containing particular functional groups were synthesised. These groups are in part protected by alkyl chains and in part present in the polymer in unprotected form. With these polymers, studies were carried out on bone-like materials, so-called calcium phosphates. The influence of the various polymers on the formation of these bone-like substances was investigated, as well as their influence on the stability and dissolution behaviour of the calcium phosphates. Particular attention was paid to the functional groups, namely phosphonic acids and the esters that protect them. It turned out that, during the formation of the bone-like materials, the polymers with ester groups slightly promote calcium phosphate formation, whereas the unprotected polymers delay the formation of the “bone material” very strongly. This effect is amplified further when an additional component is added to the polymer, forming a copolymer: these copolymers accelerate or retard calcium phosphate formation even more strongly. If polymers with a different backbone but the same phosphonic acid esters in the side chains are used, the influence on calcium phosphate formation changes little. A comparison with polymers lacking such phosphonic acid groups reveals that it is less the phosphonic acid group itself that influences mineralisation; rather, the effect is a consequence of the acid in the polymer.
Regarding the stabilisation and dissolution of the bone-like substances, it is again the acids that exert the greatest effect. Here, however, the phosphonic acid groups do indeed appear to have a special effect, since of all the polymers investigated they show the strongest stabilisation of calcium phosphate as well as the greatest capacity to dissolve it.
The work also showed that the polymers and copolymers with phosphonic acid groups have a slightly positive effect on dental health. The number of bacteria on the tooth surface was reduced, and in the tooth-dissolution experiments a smoother tooth surface was obtained; however, with the investigated polymers the interior of the tooth was still attacked. Further studies could provide more detailed insight here. In addition, the polymers with the different backbone and phosphonic acid ester groups should also be investigated.
The latter polymers were used to produce firmer, “gel-like” polymer networks and to investigate their influence on calcium phosphate mineralisation. It turned out that, without the embedding of some calcium phosphate particles, no calcium phosphate formation was triggered at these materials; once the so-called hydrogels were seeded with calcium phosphate particles, however, clear further calcium phosphate growth was observed. The material can also be moulded into various shapes. After further studies of its compatibility with cells or tissues, the system could therefore represent a possible material for implants with which bone growth could be induced in a targeted manner.
Personal Big Data
(2017)
Many users of cloud-based services are concerned about questions of data privacy. At the same time, they want to benefit from smart data-driven services, which require insight into a person’s individual behaviour. The modus operandi of user modelling is that data is sent to a remote server where the model is constructed and merged with other users’ data. This thesis proposes selective cloud computing, an alternative approach, in which the user model is constructed on the client-side and only an abstracted generalised version of the model is shared with the remote services.
In order to demonstrate the applicability of this approach, the thesis builds an exemplary client-side user modelling technique. As this thesis is carried out in the area of Geoinformatics and spatio-temporal data is particularly sensitive, the application domain for this experiment is the analysis and prediction of a user’s spatio-temporal behaviour.
The user modelling technique is grounded in an innovative conceptual model, which builds upon spatial network theory combined with time-geography. The spatio-temporal constraints of time-geography are applied to the network structure in order to create individual spatio-temporal action spaces. This concept is translated into a novel algorithmic user modelling approach which is solely driven by the user’s own spatio-temporal trajectory data that is generated by the user’s smartphone.
While modern smartphones offer a rich variety of sensory data, this thesis only makes use of spatio-temporal trajectory data, enriched by activity classification, as the input and foundation for the algorithmic model. The algorithmic model consists of three basal components: locations (vertices), trips (edges), and clusters (neighbourhoods).
After preprocessing the incoming trajectory data in order to identify locations, user feedback is used to train an artificial neural network to learn temporal patterns for certain location types (e.g. work, home, bus stop, etc.). This Artificial Neural Network (ANN) is used to automatically detect future location types by their spatio-temporal patterns. The same is done in order to predict the duration of stay at a certain location. Experiments revealed that neural nets were the most successful statistical and machine learning tool to detect those patterns. The location type identification algorithm reached an accuracy of 87.69%, the duration prediction on binned data was less successful and deviated by an average of 0.69 bins. A challenge for the location type classification, as well as for the subsequent components, was the imbalance of trips and connections as well as the low accuracy of the trajectory data. The imbalance is grounded in the fact that most users exhibit strong habitual patterns (e.g. home > work), while other patterns are rather rare by comparison. The accuracy problem derives from the energy-saving location sampling mode, which creates less accurate results.
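The location-type classification step can be sketched with a tiny feed-forward network. This is an illustrative reimplementation only: the stay records (arrival hour, duration of stay), the three-class setup, the architecture, and all hyperparameters are assumptions for this sketch, not the thesis' actual model or data.

```python
import numpy as np

# Hypothetical sketch: a small MLP classifying invented stay records
# (arrival hour, duration) into location types, in the spirit of the
# thesis' ANN component. Everything here is assumed for illustration.
rng = np.random.default_rng(1)

def stays(n, hour_mu, dur_mu, label):
    hour = (rng.normal(hour_mu, 1.5, n) % 24) / 24.0   # arrival hour, scaled
    dur = np.clip(rng.normal(dur_mu, 0.5, n), 0.05, 12.0) / 12.0
    return np.column_stack([hour, dur]), np.full(n, label)

parts = [stays(100, 23, 8.0, 0),   # 'home': night arrival, long stay
         stays(100, 10, 7.0, 1),   # 'work': morning arrival, long stay
         stays(100, 8, 0.1, 2)]    # 'bus stop': morning arrival, short stay
X = np.vstack([p[0] for p in parts])
y = np.concatenate([p[1] for p in parts])
T = np.eye(3)[y]                                      # one-hot targets

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 3)); b2 = np.zeros(3)
for _ in range(3000):                                 # plain gradient descent
    H = np.maximum(0.0, X @ W1 + b1)                  # ReLU hidden layer
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(1, keepdims=True)); P /= P.sum(1, keepdims=True)
    G = (P - T) / len(X)                              # softmax/cross-entropy grad
    GH = (G @ W2.T) * (H > 0)
    W2 -= 0.5 * H.T @ G; b2 -= 0.5 * G.sum(0)
    W1 -= 0.5 * X.T @ GH; b1 -= 0.5 * GH.sum(0)

accuracy = (P.argmax(1) == y).mean()
print(accuracy)
```

On this cleanly separable synthetic data the network reaches high training accuracy; the thesis' reported 87.69% on real, noisy, imbalanced trajectories is of course a harder setting.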
Those locations are then used to build a network that represents the user’s spatio-temporal behaviour. An initial untrained ANN to predict movement on the network only reached 46% average accuracy. Only lowering the number of included edges, focusing on more common trips, increased the performance. In order to further improve the algorithm, the spatial trajectories were introduced into the predictions. To overcome the accuracy problem, trips between locations were clustered into so-called spatial corridors, which were intersected with the user’s current trajectory. The resulting intersected trips were ranked through a k-nearest-neighbour algorithm. This increased the performance to 56%. In a final step, a combination of a network and spatial clustering algorithm was built in order to create clusters, therein reducing the variety of possible trips. By only predicting the destination cluster instead of the exact location, it is possible to increase the performance to 75% including all classes.
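The corridor-matching idea behind the trip prediction can be sketched as follows. The corridors are represented here simply as point sets, and the user's current partial trajectory is ranked against each corridor by a nearest-neighbour distance; all names and coordinates are invented for illustration.

```python
import numpy as np

# Hypothetical sketch of ranking spatial corridors against the user's
# current partial trajectory. Corridor names and coordinates are invented.
corridors = {
    "home->work": np.array([[0, 0], [1, 1], [2, 2], [3, 3]], float),
    "home->gym":  np.array([[0, 0], [1, -1], [2, -2], [3, -3]], float),
}

def score(trajectory, corridor):
    # mean distance from each observed fix to its nearest corridor point;
    # lower scores mean the trajectory lies inside that corridor
    d = np.linalg.norm(trajectory[:, None, :] - corridor[None, :, :], axis=2)
    return d.min(axis=1).mean()

current = np.array([[0.1, 0.0], [0.9, 1.2]])   # noisy fixes heading "up"
best = min(corridors, key=lambda k: score(current, corridors[k]))
print(best)  # -> home->work
```

Intersecting the live trajectory with pre-clustered corridors in this way tolerates the inaccurate, energy-saving location fixes better than exact point matching, which is the motivation the thesis gives for the corridor construction.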
A final set of components shows in two exemplary ways how to deduce additional inferences from the underlying spatio-temporal data. The first example presents a novel concept for predicting the ‘potential memorisation index’ for a certain location. The index is based on a cognitive model which derives the index from the user’s activity data in that area. The second example embeds each location in its urban fabric and thereby enriches its cluster’s metadata by further describing the temporal-semantic activity in an area (e.g. going to restaurants at noon).
The success of the client-side classification and prediction approach, despite the challenges of inaccurate and imbalanced data, supports the claimed benefits of the client-side modelling concept. Since modern data-driven services at some point do need to receive user data, the thesis’ computational model concludes with a concept for applying generalisation to semantic, temporal, and spatial data before sharing it with the remote service in order to comply with the overall goal to improve data privacy. In this context, the potentials of ensemble training (in regards to ANNs) are discussed in order to highlight the potential of only sharing the trained ANN instead of the raw input data.
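The generalisation step before sharing can be illustrated with a minimal sketch. The concrete generalisation levels (a coarse coordinate grid, a four-way part-of-day binning, and a place category instead of a place name) are assumptions chosen for this example, not the thesis' actual scheme.

```python
# Hypothetical sketch of generalising a visit record along the semantic,
# temporal, and spatial dimensions before sharing it with a remote service.
# The generalisation levels are invented for illustration.
def generalise(visit):
    return {
        # spatial: snap the fix to a coarse grid cell instead of exact coords
        "cell": (round(visit["lat"], 1), round(visit["lon"], 1)),
        # temporal: report only the part of day, not the exact timestamp
        "daypart": ["night", "morning", "afternoon", "evening"][visit["hour"] // 6],
        # semantic: report the place category, never the concrete place
        "category": visit["type"],
    }

shared = generalise({"lat": 52.3931, "lon": 13.0645, "hour": 13, "type": "restaurant"})
print(shared)
```

Only the generalised record leaves the client; the precise trajectory stays in the local model, which is the privacy property the framework aims for.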
While the results of our evaluation support the assets of the proposed framework, there are two important downsides of our approach compared to server-side modelling, both rooted in the server's access to multiple users' data. First, this access allows a remote service to predict spatio-temporal behaviour even in situations that are not represented in the user-specific data. Second, the user-specific data is more strongly imbalanced: while minor classes will likely be minor classes in a bigger dataset as well, for each class there will still be more variety than in the user-specific dataset. The author emphasises that the approach presented in this work holds the potential to change the privacy paradigm in modern data-driven services. Finding combinations of client- and server-side modelling could prove a promising new path for data-driven innovation.
Beyond the technological perspective, throughout the thesis the author also offers a critical view on the data- and technology-driven development of this work. By introducing the client-side modelling with user-specific artificial neural networks, users generate their own algorithm. Those user-specific algorithms are influenced less by generalised biases or developers’ prejudices. Therefore, the user develops a more diverse and individual perspective through his or her user model. This concept picks up the idea of critical cartography, which questions the status quo of how space is perceived and represented.
This study was inspired by the desire to contribute to the literature on performance management from the context of a developing country. The guiding research questions were: How do managers use performance information in decision making? Why do managers use performance information the way they do? The study was based on the theoretical strands of neo-patrimonialism and new institutionalism. The nature of the inquiry informed the choice of a qualitative case-study research design. Data was assembled through face-to-face interviews, some observations, and the collection of documents from managers at the levels of the directorate, division, and section/unit. The managers who were the focus of this study are current or former staff members of the state departments in Kenya’s national Ministry of Agriculture, Livestock, and Fisheries, as well as of departments responsible for the coordination of performance-related reforms.
The findings of this study show that performance information is regularly produced but its use by managers varies. Examples of use include preparing reports to external bodies, making decisions for resource re-allocation, making recommendations for rewards and sanctions, and policy advisory. On categorizing the forms of use as passive, purposeful, political or perverse, evidence shows that they overlap and that some of the forms are so closely related that it is difficult to separate them empirically.
As to what can explain the forms of use established, four factors were investigated, namely political will and leadership, organizational capacity, administrative culture, and managers’ interests and attitudes. While acknowledging that these factors are interrelated and even overlapping, the study demonstrates that each has explanatory power, though with varying depth and scope. The study thus concludes that inconsistent political will and leadership for performance management reforms explain forms of use that are passive, political and perverse; low organizational capacity can best explain passive and some limited aspects of purposeful use; an informal, personal and competitive administrative culture is associated with purposeful use and mostly with political and perverse use; and limited interest and an apprehensive attitude are best associated with passive use.
The study contributes to the literature particularly in how institutions in a context of neo-patrimonialism shape performance information use. It recommends that further research is necessary to establish how neo-patrimonialism positively affects performance oriented reforms. This is interesting in particular given the emerging thinking on pockets of effectiveness and developmental patrimonialism. This is important since it is expected that performance related reforms will continue to be advocated in developing countries in the foreseeable future.
Approaching physical limits in the speed and size of today's magnetic storage and processing technologies demands new concepts for controlling magnetization and motivates research on optically induced magnetization dynamics. Studies on photoinduced magnetization dynamics and its underlying mechanisms have primarily been performed on ferromagnetic metals. Ferromagnetic dynamics is based on the transfer of the conserved angular momentum connected with the atomic magnetic moments out of the parallel-aligned magnetic system into other degrees of freedom.
In this thesis the so far rarely studied response of antiferromagnetic order in a metal to ultra-short optical laser pulses is investigated. The experiments were performed at the FemtoSpex slicing facility at the storage ring BESSY II, a unique source of ultra-short elliptically polarized x-ray pulses. Laser-induced changes of the 4f magnetic order parameter in ferro- and antiferromagnetic dysprosium (Dy) were studied by x-ray methods, which yield directly comparable quantities. The discovered fundamental differences in the temporal and spatial behavior of ferro- and antiferromagnetic dynamics are assigned to an additional channel for angular momentum transfer, which reduces the antiferromagnetic order by redistributing angular momentum within the non-parallel-aligned magnetic system and hence conserves the zero net magnetization. It is shown that antiferromagnetic dynamics proceeds considerably faster and more energy-efficiently than demagnetization in ferromagnets. By probing antiferromagnetic order in time and space, it is found to be affected along the whole depth of an in situ grown 73 nm thick Dy film. Interatomic transfer of angular momentum via fast diffusion of laser-excited 5d electrons is held responsible for this long-range effect. Ultrafast ferromagnetic dynamics can be expected to rest on the same origin, which however leads to demagnetization only in regions close to interfaces, caused by super-diffusive spin transport. Dynamics due to local scattering processes of excited but less mobile electrons occur in both magnetic alignments only in directly excited regions of the sample and on slower picosecond timescales. The thesis provides fundamental insights into photoinduced magnetic dynamics by directly comparing ferro- and antiferromagnetic dynamics in the same material and by considering the laser-induced magnetic depth profile.
Contemporary multi-core processors are parallel systems that also provide shared memory for programs running on them. Both the increasing number of cores in so-called many-core systems and the still growing computational power of the cores demand for memory systems that are able to deliver high bandwidths. Caches are essential components to satisfy this requirement. Nevertheless, hardware-based cache coherence in many-core chips faces practical limits to provide both coherence and high memory bandwidths. In addition, a shift away from global coherence can be observed. As a result, alternative architectures and suitable programming models need to be investigated.
This thesis focuses on fast communication for non-cache-coherent many-core architectures. Experiments are conducted on the Single-Chip Cloud Computer (SCC), a non-cache-coherent many-core processor with 48 mesh-connected cores. Although originally designed for message passing, the results of this thesis show that shared memory can be efficiently used for one-sided communication on this kind of architecture. One-sided communication enables data exchanges between processes where the receiver is not required to know the details of the performed communication. In the notion of the Message Passing Interface (MPI) standard, this type of communication allows access to the memory of remote processes. In order to support this communication scheme on non-cache-coherent architectures, both an efficient process synchronization and a communication scheme with software-managed cache coherence are designed and investigated.
The process synchronization realizes the concept of the general active target synchronization scheme from the MPI standard. An existing classification of implementation approaches is extended and used to identify an appropriate class for the non-cache-coherent shared memory platform. Based on this classification, existing implementations are surveyed in order to find beneficial concepts, which are then used to design a lightweight synchronization protocol for the SCC that uses shared memory and uncached memory accesses. The proposed scheme is not prone to process skew and also enables direct communication as soon as both communication partners are ready. Experimental results show very good scaling properties and up to five times lower synchronization latency compared to a tuned message-based MPI implementation for the SCC.
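The semantics of MPI's general active target synchronization that the protocol realizes can be sketched conceptually. This is a Python-threads illustration only, not the SCC implementation: simple event flags stand in for the uncached shared-memory flags, and the comments map each step to its MPI counterpart.

```python
import threading

# Conceptual sketch (Python threads, not the SCC) of MPI's general active
# target synchronization: the target posts an exposure epoch, the origin
# starts an access epoch, performs a one-sided put, completes, and the
# target waits. Event flags stand in for uncached shared-memory flags.
posted = threading.Event()     # target -> origin: window is exposed
completed = threading.Event()  # origin -> target: access epoch finished
window = {}                    # the exposed memory "window"

def target():
    posted.set()               # MPI_Win_post: expose the window
    completed.wait()           # MPI_Win_wait: epoch closed, data is visible
    print("target sees:", window)

def origin():
    posted.wait()              # MPI_Win_start: wait until the target posted
    window["payload"] = 42     # one-sided 'put' into the exposed window
    completed.set()            # MPI_Win_complete: close the access epoch

t = threading.Thread(target=target)
o = threading.Thread(target=origin)
t.start(); o.start(); t.join(); o.join()
```

The flag-based handshake is what lets communication start as soon as both partners are ready, independent of process skew, which is the property the SCC protocol above exploits.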
For the communication, SCOSCo, a shared memory approach with software-managed cache coherence, is presented. Corresponding requirements for coherence that fulfil MPI's separate memory model are formulated, and a lightweight implementation exploiting SCC hardware and software features is developed. Despite a discovered malfunction in the SCC's memory subsystem, the experimental evaluation of the design reveals up to five times better bandwidths and nearly four times lower latencies in micro-benchmarks compared to the SCC-tuned but message-based MPI library. For application benchmarks, like a parallel 3D fast Fourier transform, the runtime share of communication can be reduced by a factor of up to five. In addition, this thesis postulates beneficial hardware concepts that would support software-managed coherence for one-sided communication on future non-cache-coherent architectures where coherence might be available only in local subdomains but not at a global processor level.
To what extent cities can be made sustainable under the mega-trends of urbanization and climate change remains a matter of unresolved scientific debate. Our inability to answer this question lies partly in deficient knowledge regarding pivotal human-environment interactions. Regarded as the most well documented anthropogenic climate modification, the urban heat island (UHI) effect – the warmth of urban areas relative to the rural hinterland – has raised great public health concerns globally. Worse still, heat waves are being observed and are projected to increase in both frequency and intensity, which further impairs the well-being of urban dwellers. Despite a substantial increase in the number of publications on UHI in recent decades, the diverse urban-rural definitions applied in previous studies have remarkably hampered the general comparability of results achieved. In addition, few studies have attempted to combine land use data and thermal remote sensing to systematically assess UHI and its contributing factors.
Given these research gaps, this work presents a general framework to systematically quantify the UHI effect based on an automated algorithm, whereby cities are defined as clusters of maximum spatial continuity on the basis of land use data, with their rural hinterland being defined analogously. By combining land use data with spatially explicit surface skin temperatures from satellites, the surface UHI intensity can be calculated in a consistent and robust manner. This facilitates monitoring, benchmarking, and categorizing UHI intensities for cities across scales. In light of this innovation, the relationship between city size and UHI intensity has been investigated, as well as the contributions of urban form indicators to the UHI intensity.
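The core surface UHI computation can be sketched in a few lines. This is an invented miniature example: given a land-use mask that defines an urban cluster and its rural hinterland, the surface UHI intensity is the difference of the mean surface skin temperatures of the two zones.

```python
import numpy as np

# Minimal sketch of the surface UHI intensity calculation on an invented
# 3x3 land surface temperature (LST) grid with a land-use-derived urban mask.
lst = np.array([[30.0, 31.0, 25.0],
                [32.0, 33.0, 24.0],
                [26.0, 25.0, 24.0]])
urban = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [0, 0, 0]], bool)   # urban cluster from land use data
rural = ~urban                        # hinterland defined analogously

uhi = lst[urban].mean() - lst[rural].mean()
print(round(uhi, 2))  # -> 6.7
```

Because both the cluster and its hinterland come from the same land-use definition, the resulting intensity is comparable across cities, which is what enables the benchmarking and city-size scaling analyses described above.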
This work delivers manifold contributions to the understanding of the UHI, which have complemented and advanced a number of previous studies. Firstly, a log-linear relationship between surface UHI intensity and city size has been confirmed across 5,000 European cities. The relationship can be extended to a log-logistic one when taking a wider range of small-sized cities into account. Secondly, this work reveals a complex interplay between UHI intensity and urban form. City size is found to have the strongest influence on the UHI intensity, followed by the fractality and the anisometry. However, their relative contributions to the surface UHI intensity show pronounced regional heterogeneity, indicating the importance of considering spatial patterns of UHI while implementing UHI adaptation measures.
Lastly, this work presents a novel seasonality of the UHI intensity for individual clusters in the form of hysteresis-like curves, implying a phase shift between the time series of UHI intensity and background temperatures. Combining satellite observation and urban boundary layer simulation, the seasonal variations of UHI are assessed from both screen and skin levels. Taking London as an example, this work ascribes the discrepancies between the seasonality observed at different levels mainly to the peculiarities of surface skin temperatures associated with the incoming solar radiation. In addition, the efforts in classifying cities according to their UHI characteristics highlight the important role of regional climates in determining the UHI.
This work serves as one of the first studies conducted to systematically and statistically scrutinize the UHI. The outcomes of this work are of particular relevance for the overall spatial planning and regulation at meso- and macro levels in order to harness the benefits of rapid urbanization, while proactively minimizing its ensuing thermal stress.
Magnetotactic bacteria possess an intracellular structure called the magnetosome chain. Magnetosome chains contain nanoparticles of iron crystals enclosed by a membrane and aligned on a cytoskeletal filament. Due to the presence of the magnetosome chains, magnetotactic bacteria are able to orient themselves and swim along magnetic field lines. A detailed study of the structural properties of magnetosome chains in magnetotactic bacteria is of fundamental scientific interest, as it can provide more insight into the formation of the cytoskeleton in bacteria. In this thesis, we develop a new framework to study the structural properties of magnetosome chains in magnetotactic bacteria.
First, we address the bending stiffness of magnetosome chains, which results from two main contributions: the magnetic interactions of the magnetosome particles and the bending stiffness of the cytoskeletal filament to which the magnetosomes are anchored. Our analysis indicates that a linear configuration of magnetosome particles, without stabilisation by the cytoskeleton, may close into ring-like structures with no net magnetic moment, which thus cannot act as a compass in cellular navigation. We therefore conclude that one role of the filament is to stabilize the linear configuration against ring closure.
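The energy argument behind ring closure can be checked numerically. The sketch below is an illustration under stated assumptions (point dipoles, unit moments, reduced units with μ0 m²/4πa³ = 1, N = 10, unit nearest-neighbour spacing): it compares the total dipole-dipole energy of a straight chain with that of a closed ring of tangentially oriented dipoles and evaluates the ring's net moment.

```python
import numpy as np

# Hypothetical sketch: total dipole-dipole energy (reduced units) of N
# point dipoles in a straight chain versus a closed ring, both with
# nearest-neighbour spacing 1. Parameters are assumed for illustration.
def dipole_energy(pos, mom):
    E = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = pos[j] - pos[i]
            d = np.linalg.norm(r)
            rh = r / d
            # E_ij = [m_i.m_j - 3 (m_i.rh)(m_j.rh)] / d^3
            E += (mom[i] @ mom[j] - 3 * (mom[i] @ rh) * (mom[j] @ rh)) / d**3
    return E

N = 10
chain_pos = np.array([[i, 0.0] for i in range(N)])
chain_mom = np.array([[1.0, 0.0]] * N)               # moments along the chain

R = 1 / (2 * np.sin(np.pi / N))                      # ring radius for spacing 1
ang = 2 * np.pi * np.arange(N) / N
ring_pos = R * np.column_stack([np.cos(ang), np.sin(ang)])
ring_mom = np.column_stack([-np.sin(ang), np.cos(ang)])  # tangential moments

print(dipole_energy(chain_pos, chain_mom), dipole_energy(ring_pos, ring_mom))
print(np.linalg.norm(ring_mom.sum(axis=0)))          # net moment of the ring
```

The ring has the lower dipolar energy and a vanishing net moment, illustrating why a free chain tends toward magnetically useless rings and why an anchoring filament is needed to keep it linear.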
We then investigate the equilibrium configurations of magnetosome particles, including linear-chain and closed-ring structures. Notably, we observe that the formation of a stable linear structure on the cytoskeletal filament requires a binding energy. In the presence of external stimuli, the stability of the magnetosome chain is due to the internal dipole-dipole interactions, the stiffness, and the binding energy of the protein structure connecting the magnetosome particles to the filament. Our observations during and after the treatment of the magnetosome chain with an external magnetic field substantiate the stabilisation of magnetosome chains on the cytoskeletal filament by proteinous linkers and the dynamic nature of these structures.
Finally, we employ our model to study the FMR spectra of magnetosome chains in single cells of magnetotactic bacteria. We explore the effect of magnetocrystalline anisotropy in the three-fold symmetry observed in FMR spectra and the peculiarities of the different spectra arising from different mutants of these bacteria.
Natural and potentially hazardous events occur on the Earth’s surface every day. The most destructive of these processes must be monitored, because they may cause loss of lives, infrastructure, and natural resources, or have a negative effect on the environment. A variety of remote sensing technologies allow the recording of data to detect these processes in the first place, partly based on the diagnostic landforms that they form. To perform this effectively, automatic methods are desirable.
Universal detection of natural hazards is challenging due to their differences in spatial impacts, timing and longevity of consequences, and the spatial resolution of remote-sensing data. Previous studies have reported that topographic metrics such as roughness, which can be captured from digital elevation data, can reveal landforms diagnostic of natural hazards, such as gullies, dunes, lava fields, landslides and snow avalanches, as these landforms tend to be more heterogeneous than the surrounding landscape. A single roughness metric is often limited in such detections; however, a more complex approach that exploits the spatial relation and the location of objects, such as object-based image analysis (OBIA), is desirable.
In this thesis, I propose a topographic roughness measure derived from an airborne laser scanning (ALS) digital terrain model (DTM) and discuss its performance in detecting landforms principally diagnostic of natural hazards. I further develop OBIA-based algorithms for the detection of snow avalanches using near-infrared (NIR) aerial images, and the size (changes) of mountain lakes using LANDSAT satellite images. I quantitatively test and document how the level of difficulty in detecting these very challenging landforms depends on the input data resolution, the derivatives that could be evaluated from images and DTMs, the size, shape and complexity of landforms, and the capabilities of obtaining the information in the data. I demonstrate that surface roughness is a promising metric for detecting different landforms in diverse environments, and that OBIA assists significantly in detecting parts of lakes and snow avalanches that may not be correctly assigned by applying only the thresholding of spectral properties of data and their derivatives.
The curvature-based surface roughness parameter allows the detection of gullies, dunes, lava fields and landslides with a user’s accuracy of 0.63, 0.21, 0.53, and 0.45, respectively. The OBIA algorithms for detecting lakes and snow avalanches achieved user’s accuracies of 0.98 and 0.78, respectively. Most of the analysed landforms constituted only a small part of the entire dataset, and therefore the user’s accuracy is the most appropriate performance measure for such a classification: it states how many automatically extracted pixels in fact represent the object one wants to classify, and its calculation does not take the second (background) class into account. One advantage of the proposed roughness parameter is that it allows the extraction of surface heterogeneity without the need for data detrending. The OBIA approach is novel in that it allows the classification of lakes regardless of the physical state of their water, and also allows frozen lakes to be separated from glaciers, which have very similar water indices in purely optical remote sensing applications. The algorithm proposed for snow avalanches allows the detection of release zones, tracks, and deposition zones by verifying the snow heterogeneity based on a roughness metric evaluated from a water index, and by analysing the local relation of segments with their neighbouring objects. This algorithm consists of only a few steps, which allows the simultaneous classification of avalanches that occur on diverse mountain slopes and differ in size and shape.
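The user's accuracy reported above can be illustrated with a minimal sketch (not thesis code; the counts below are invented example values, not data from the study):

```python
# Illustrative only: user's accuracy as described above, computed from a
# binary classification result. Counts are made-up example values.

def users_accuracy(true_positives, false_positives):
    """Fraction of pixels classified as the target landform that truly are it.

    The background (second) class does not enter the calculation,
    which is why the metric suits rare-landform detection.
    """
    return true_positives / (true_positives + false_positives)

# Example: 630 of 1000 pixels classified as 'gully' are real gullies.
print(users_accuracy(630, 370))  # → 0.63
```

Because false negatives and true negatives do not appear in the formula, a high user's accuracy says nothing about how many objects were missed; the metric only measures how trustworthy the extracted pixels are.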
This thesis contributes to natural hazard research as it provides automatic solutions to tracking six different landforms that are diagnostic of natural hazards over large regions. This is a step toward delineating areas susceptible to the processes producing these landforms and the improvement of hazard maps.
Underground coal gasification (UCG) has the potential to increase worldwide coal reserves by developing coal resources that are currently not economically extractable by conventional mining methods. For that purpose, coal is combusted in situ to produce a high-calorific synthesis gas with different end-use options, including electricity generation as well as production of fuels and chemical feedstock. Apart from its high economic potential, UCG may induce site-specific environmental impacts, including ground surface subsidence and migration of UCG by-products into shallow freshwater aquifers. Sustainable and efficient UCG operation requires a thorough understanding of the coupled thermal, hydraulic and mechanical processes occurring in the UCG reactor vicinity. Because the development and infrastructure costs of UCG trials are very high, numerical simulations of the coupled processes are essential for assessing potential environmental impacts. The aim of the present study is therefore to assess UCG-induced permeability changes, potential hydraulic short-circuit formation and non-isothermal multiphase fluid flow dynamics by means of coupled numerical simulations. Simulation results on permeability changes in the UCG reactor vicinity demonstrate that temperature-dependent thermo-mechanical parameters have to be considered only in near-field assessments. Far-field simulations therefore do not become inaccurate, but benefit from increased computational efficiency when thermo-mechanical parameters are kept constant. Simulations of potential hydraulic short-circuit formation between single UCG reactors at regional scale emphasize that geologic faults may induce hydraulic connections and thus compromise efficient UCG operation.
In this context, the steam jacket surrounding high-temperature UCG reactors plays a vital role in preventing UCG by-products from escaping into freshwater aquifers and in minimizing the energy consumed by formation fluid evaporation. The steam jacket, a non-isothermal flow phenomenon, emerges in the close reactor vicinity due to the phase transition of formation water. Accounting for this complex multiphase flow behavior, an innovative conceptual modeling approach, validated against field data, enables the quantification and prediction of UCG reactor water balances. The findings of this doctoral thesis provide an important basis for the integration of thermo-hydro-mechanical simulations in UCG, required for the assessment and mitigation of its potential environmental impacts as well as the optimization of its efficiency.
The aim of this work was the synthesis and characterization of anisotropic gold nanoparticles in a suitable polyelectrolyte-modified template phase. The central point is the selection of a suitable template phase for the synthesis of uniform and reproducible anisotropic gold nanoparticles with the special properties that result from their shape. In synthesizing the anisotropic gold nanoparticles, the focus lay on the use of vesicles as the template phase, investigating the influence of different structure-forming polymers (strongly alternating maleamide copolymers PalH, PalPh, PalPhCarb and PalPhBisCarb with different conformations) and surfactants (the anionic surfactants SDS and AOT) under various synthesis and separation conditions.
In the first part of the work it was shown that, at pH 9, PalPhBisCarb fulfils the conditions of a tube former for a morphological transformation from a vesicular phase into a tubular network structure and can therefore be used as a template phase for the shape-controlled formation of nanoparticles.
The second part of the work demonstrated that the template phase PalPhBisCarb (pH 9, concentration 0.01 wt.%) with AOT as surfactant and PL90G as phospholipid (in a 1:1 ratio) is the most effective choice of template phase for the formation of anisotropic structures in a one-step process. At a constant synthesis temperature of 45 °C, the best results were obtained with a gold chloride concentration of 2 mM, a gold-to-template ratio of 3:1 and a synthesis time of 30 minutes. The yield of anisotropic structures was 52 % (with a 19 % share of triangular nanoplatelets). By raising the synthesis temperature, the yield could be increased to 56 % (29 %).
In the third part, time-dependent investigations showed that, in the presence of PalPhBisCarb, the formation of the energetically unfavoured platelet structures is initiated at room temperature and reaches an optimum at 45 °C.
Kinetic investigations showed that, with stepwise addition of the gold chloride precursor solution to the PalPhBisCarb-containing template phase, the formation of triangular nanoplatelets can be controlled via the dosing rate of the vesicular template phase. Conversely, when the template phase is added to the gold chloride precursor solution at 45 °C, a similar kinetically controlled process of nanotriangle formation takes place, with a maximum yield of triangular nanoplatelets of 29 %.
The final chapter presents first experiments on separating triangular nanoplatelets from the other geometries in the mixed nanoparticle solution by means of surfactant-induced depletion flocculation. Using AOT at a concentration of 0.015 M, a nanoplatelet yield of 99 % was achieved, of which 72 % had triangular geometry.
On the basis of various scientific findings, young athletes are advised against the use of dietary supplements. Against the background of goal systems theory, this dissertation pursues the goal of generating application-oriented practical knowledge from which intervention recommendations for reducing the prevalent consumption of dietary supplements in youth sports can be derived. A total of six studies were conducted. In all studies, participants completed a variant of the lexical decision task. This task served to operationalize automatically activated and retrievable supplement-related goal-means relations.
In a sample of sports students, dietary supplements were found to be associated with the goal of performance (Study 1). Taking supplement consumption into account, this result was replicated for young recreational athletes (Study 2). In addition, both studies demonstrated the relevance of these goal-means relations for behaviour. In the subsequent studies, specific mechanisms for changing the behaviour-guiding goal-means relation between performance and dietary supplements were first evaluated experimentally in sports students. Highlighting the lack of a performance-enhancing effect of supplements did not modify this goal association (Study 3). Emphasizing health-damaging consequences (Study 4) and accentuating a healthy diet (Study 5), by contrast, proved suitable for changing the goal-means relation. Highlighting a healthy diet also led, descriptively, to a modification of the goal association in young athletes (Study 6). Inferential-statistical confirmation of the results of this study is still pending owing to its low statistical power.
Overall, the results show that the behaviour-guiding association between supplement use and performance, which exists at the level of automatic cognitions, can be changed experimentally by accentuating health perspectives. Finally, the theoretical and practical significance of the generated practical knowledge for future intervention recommendations to reduce supplement use is discussed.
Mundos comunes
(2017)
Starting from the observation of a series of artistic movements and dialogues in the 1970s and 1980s, this doctoral thesis interrogates the notion of “the Latin American” in the field of art and culture. The research undertakes a “compilatory journey” through documents across various European cities, which makes it possible to problematize the place, the community and the cultural memory of the Latin American.
For several decades, researchers have tried to explain how speakers of more than one language (multilinguals) manage to keep their languages separate and to switch from one language to the other depending on the context. This ability of multilingual speakers to use the intended language, while avoiding interference from the other language(s) has recently been termed “language control”.
A multitude of studies showed that when bilinguals process one language, the other language is also activated and might compete for selection. According to the most influential model of language control developed over the last two decades, competition from the non-intended language is solved via inhibition. In particular, the Inhibitory Control (IC) model proposed by Green (1998) puts forward that the amount of inhibition applied to the non-relevant language depends on its dominance, in that the stronger the language the greater the strength of inhibition applied to it. Within this account, the cost required to reactivate a previously inhibited language depends on the amount of inhibition previously exerted on it, that is, reactivation costs are greater for a stronger compared to a weaker language. In a nutshell, according to the IC model, language control is determined by language dominance.
The goal of the present dissertation is to investigate the extent to which language control in multilinguals is affected by language dominance and whether and how other factors might influence this process. Three main factors are considered in this work: (i) the time speakers have to prepare for a certain language or PREPARATION TIME, (ii) the type of languages involved in the interactional context or LANGUAGE TYPOLOGY, and (iii) the PROCESSING MODALITY, that is, whether the way languages are controlled differs between reception and production.
The results obtained in the four manuscripts, either published or in revision, indicate that language dominance alone does not suffice to explain language switching patterns. In particular, the present thesis shows that language control is profoundly affected by each of the three variables described above. More generally, the findings obtained in the present dissertation indicate that language control in multilingual speakers is a much more dynamic system than previously believed and is not exclusively determined by language dominance, as predicted by the IC model (Green, 1998).
Background
For patients with severe aortic valve stenosis who carry a high surgical risk owing to their age or multimorbidity, transcatheter aortic valve implantation (TAVI) has been established as a promising alternative to cardiac surgery. Explicit data on multidisciplinary cardiac rehabilitation after TAVI are not yet available. The aim of this work was to examine the effect of cardiac rehabilitation on exercise capacity, emotional status, quality of life and frailty in patients after TAVI, and to identify predictors of changes in exercise capacity and quality of life.
Methods
Between 10/2013 and 07/2015, 136 patients (80.6 ± 5.0 years, 47.8 % men) undergoing inpatient follow-up rehabilitation after TAVI were enrolled at three cardiac rehabilitation clinics. To assess the effect of cardiac rehabilitation, the frailty index (a score comprising the Barthel Index, Instrumental Activities of Daily Living, Mini Mental State Exam, Mini Nutritional Assessment, Timed Up and Go, and subjective deterioration of mobility), quality of life on the Short-Form 12 (SF-12), functional exercise capacity in the 6-minute walk test (6MWT) and maximum exercise capacity in exercise ergometry were recorded at the beginning and end of rehabilitation. In addition, sociodemographic data (e.g. age and sex), comorbidities (e.g. chronic obstructive pulmonary disease, coronary artery disease and carcinoma), cardiovascular risk factors and NYHA class were documented. Predictors of changes in exercise capacity and quality of life were fitted using analyses of covariance.
Results
The maximum walking distance in the 6MWT increased by 56.3 ± 65.3 m (p < 0.001) and maximum exercise capacity in exercise ergometry by 8.0 ± 14.9 W (p < 0.001). Furthermore, the SF-12 improved both on the physical component summary, by 2.5 ± 8.7 points (p = 0.001), and on the mental component summary, by 3.4 ± 10.2 points (p = 0.003). In the multivariate analysis, higher age and higher education were significantly associated with a smaller gain in the 6MWT, whereas better cognitive performance and obesity had a positive predictive value. Greater independence and better nutritional status positively influenced the change in the SF-12 physical component summary, whereas better cognitive performance predicted a smaller change. Furthermore, the respective baseline values of the SF-12 physical and mental component summaries had an inverse influence on the changes in the same scale.
Conclusion
Multidisciplinary cardiac rehabilitation can improve exercise capacity and quality of life and reduce frailty in patients after transcatheter aortic valve implantation. Consequently, specific assessments for cardiac rehabilitation should be developed. Furthermore, individualized therapy programmes with particular attention to cognitive function and nutrition need to be initiated in order to maintain or restore the independence of very elderly patients and to delay their need for long-term care.
The timing and location of the two largest earthquakes of the 21st century (the 2004 Sumatra and 2011 Tohoku events) greatly surprised the scientific community, indicating that the deformation processes that precede and follow great megathrust earthquakes remain enigmatic. During the phases before and after an earthquake, a combination of complex multi-scale processes acts simultaneously: stresses built up by long-term tectonic motions are modified by sudden jerky deformations during earthquakes, before being restored by multiple ensuing relaxation processes.
This thesis details a cross-scale thermomechanical model developed with the aim of simulating the entire subduction process from the earthquake (1 minute) to the million-year time scale, excluding only rupture propagation. The model employs elasticity, non-linear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and, by using an adaptive time-step algorithm, recreates the deformation process as observed naturally over single and multiple seismic cycles. The model is thoroughly tested by comparing results to known high-resolution solutions of generic modeling setups widely used in modeling of rupture propagation. It is demonstrated that, while not modeling rupture propagation explicitly, the modeling procedure correctly recognizes the appearance of instability (earthquake) and correctly simulates the cumulative slip at a fault during a great earthquake by means of a quasi-dynamic approximation.
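The rate-and-state friction mentioned above is conventionally written as follows (a sketch of the standard aging-law form; the thesis may use a variant):

```latex
% Standard rate-and-state friction (aging law). Symbols: V slip rate,
% \theta state variable, D_c characteristic slip distance, a and b
% constitutive parameters, \mu_0 reference friction at reference rate V_0.
\mu(V,\theta) = \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c}.
```

Instability (an earthquake) can nucleate where $b - a > 0$ (velocity weakening), which is how such models produce spontaneous earthquake sequences.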
A set of 2D models is used to study the effects of non-linear transient rheology on the postseismic processes following great earthquakes. Our models predict that the viscosity in the mantle wedge drops by 3 to 4 orders of magnitude during a great earthquake with a magnitude above 9. This drop in viscosity results in spatial scales and timings of the relaxation processes following the earthquakes that are significantly different from previous estimates. These models replicate the centuries-long seismic cycles exhibited by the greatest earthquakes (such as the Great Chile 1960 Earthquake) and are consistent with the major features of postseismic surface displacements recorded after the Great Tohoku Earthquake.
The 2D models are also applied to study key factors controlling the maximum magnitudes of earthquakes in subduction zones. Even though methods of instrumentally observing earthquakes at subduction zones have rapidly improved in recent decades, the characteristic recurrence interval of giant earthquakes (Mw>8.5) is much larger than the currently available observational record, and therefore the necessary conditions for giant earthquakes are not clear. Statistical studies have recognized the importance of the slab shape and its surface roughness, the state of strain of the upper plate, and the thickness of sediments filling the trenches. In this thesis we attempt to explain these observations and to identify key controlling parameters. We test a set of 2D models representing great-earthquake seismic cycles at known subduction zones, with various known geometries, megathrust friction coefficients, and convergence rates implemented. We find that low-angle subduction (large effect) and thick sediments in the subduction channel (smaller effect) are the fundamental necessary conditions for generating giant earthquakes, whereas changing the subduction velocity from 10 to 3.5 cm/yr has a lower effect. Modeling results also suggest that thick sediments in the subduction channel cause low static friction, resulting in neutral or slightly compressive deformation in the overriding plate for low-angle subduction zones. These modeling results agree well with observations for the largest earthquakes. The model predicts the largest possible earthquakes for subduction zones of given dip angles; the predicted maximum magnitudes bound the magnitudes of all known giant earthquakes of the 20th and 21st centuries.
The clear limitation of most of the models developed in the thesis is their 2D nature. Development of 3D models with comparable resolution and complexity will require significant advances in numerical techniques. Nevertheless, we conducted a series of low-resolution 3D models to study the interaction between two large asperities at a subduction interface separated by an aseismic gap of varying width. The novelty of the model is that it considers behavior of the asperities during multiple seismic cycles. As expected, models show that an aseismic gap with a narrow width could not prevent rupture propagation from one asperity to another, and that rupture always crosses the entire model. When the gap becomes too wide, asperities do not interact anymore and rupture independently. However, an interesting mode of interaction was observed in the model with an intermediate width of the aseismic gap: In this model the asperities began to stably rupture in anti-phase following multiple seismic cycles. These 3D modeling results, while insightful, must be considered preliminary because of the limitations in resolution.
The technique developed in this thesis for cross-scale modeling of seismic cycles can be used to study the effects of multiple seismic cycles on the long-term deformation of the upper plate. The technique can be also extended to the case of continental transform faults and for the advanced 3D modeling of specific subduction zones. This will require further development of numerical techniques and adaptation of the existing advanced highly scalable parallel codes like LAMEM and ASPECT.
Proteins are molecules that are essential for life and carry out an enormous number of functions in organisms. To this end, they change their conformation and bind to other molecules. However, the interplay between conformational change and binding is not fully understood. In this work, this interplay is investigated with molecular dynamics (MD) simulations of the protein-peptide system Mdm2-PMI and by analysis of data from relaxation experiments.
The central task is to uncover the binding mechanism, which is described by the sequence of (partial) binding events and conformational-change events, including their probabilities. In the simplest case, the binding mechanism is described by a two-step model: binding followed by conformational change, or conformational change followed by binding. In the general case, longer sequences with multiple conformational changes and partial binding events are possible, as are parallel pathways that differ in their sequences of events. The theory of Markov state models (MSMs) provides the theoretical framework in which all these cases can be modeled. For this purpose, MSMs are estimated in this work from MD data, and rate-equation models, which are related to MSMs, are inferred from experimental relaxation data.
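The idea of an MSM estimated from simulation data can be illustrated with a minimal sketch (not the thesis method, which uses the TRAM/TRAMMBAR estimators; the three states and the count matrix below are invented for illustration):

```python
# Sketch: maximum-likelihood Markov state model from transition counts,
# illustrating the two-step binding schemes described above.
# Hypothetical states: 0 = unbound, 1 = bound intermediate, 2 = bound+folded.
import numpy as np

# Invented counts of transitions observed at lag time tau in a trajectory.
counts = np.array([[90.0, 10.0,  0.0],
                   [ 5.0, 80.0, 15.0],
                   [ 0.0, 10.0, 90.0]])

# Row-normalize: T[i, j] = P(state j at t+tau | state i at t).
T = counts / counts.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of T with eigenvalue 1,
# giving the equilibrium populations of the three states.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

print(T[0])  # transition probabilities out of the unbound state
print(pi)    # equilibrium populations
```

Rates such as the dissociation rate then follow from the transition probabilities together with the lag time; in the thesis this is done on much larger state spaces combining free-energy and direct-MD data.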
The MD simulation and Markov modeling of the PMI-Mdm2 system shows that PMI and Mdm2 can bind via multiple pathways. A main result of this work is a dissociation rate on the order of one event per second, which was calculated using Markov modeling and is in agreement with experiment. So far, dissociation rates and transition rates of this magnitude have only been calculated with methods that speed up transitions by acting with time-dependent, external forces on the binding partners. The simulation technique developed in this work, in contrast, allows the estimation of dissociation rates from the combination of free energy calculation and direct MD simulation of the fast binding process. Two new statistical estimators TRAM and TRAMMBAR are developed to estimate a MSM from the joint data of both simulation types.
In addition, a new analysis technique for time-series data from chemical relaxation experiments is developed in this work. It makes it possible to identify which of the above-mentioned two-step mechanisms underlies the data. The new method is valid for a broader range of concentrations than previous methods and therefore allows the concentrations to be chosen such that the mechanism can be uniquely identified. It is successfully tested with data for the binding of recoverin to a rhodopsin kinase peptide.
The ionosphere, which is strongly influenced by the Sun, is known to be also affected by meteorological processes. These processes, despite having their origin in the troposphere and stratosphere, interact with the upper atmosphere. Such an interaction between atmospheric layers is known as vertical coupling. During geomagnetically quiet times, when near-Earth space is not under the influence of solar storms, these processes become important drivers for ionospheric variability. Studying the link between these processes in the lower atmosphere and the ionospheric variability is important for our understanding of fundamental mechanisms in ionospheric and meteorological research.
A prominent example of vertical coupling between the stratosphere and the ionosphere is provided by the so-called stratospheric sudden warming (SSW) events, which usually occur during northern winters and result in an increase in the polar stratospheric temperature and a reversal of the circumpolar winds. While the phenomenon of SSW is confined to the northern polar stratosphere, its influence on the ionosphere can be observed even at equatorial latitudes. During SSW events, the connection between the polar stratosphere and the equatorial ionosphere is believed to act through the modulation of global atmospheric tides. These tides are fundamental for the ionospheric E-region wind dynamo that generates electric fields and currents in the ionosphere. Observations of ionospheric currents indicate a large enhancement of the semidiurnal lunar tide in response to SSW events. Thus, the semidiurnal lunar tide becomes an important driver of ionospheric variability during SSW events.
In this thesis, the ionospheric effect of SSW events is investigated in the equatorial region, where a narrow but intense E-region current known as the equatorial electrojet (EEJ) flows above the dip equator during daytime. The day-to-day variability of the EEJ can be determined from magnetic field records at geomagnetic observatories close to the dip equator. Such magnetic data are available for several decades and allow investigation of the impact of SSW events on the EEJ; even more importantly, they help in understanding the effects of SSW events on the equatorial ionosphere. An excellent long-term record of the equatorial geomagnetic field, from 1922 onwards, is available for the Huancayo observatory in Peru and is extensively utilized in this study.
The central subject of this thesis is the investigation of lunar tides in the EEJ during SSW events by analyzing long time series. This is done by estimating the lunar tidal amplitude in the EEJ from the magnetic records at Huancayo and by comparing it to measurements of the polar stratospheric wind and temperature, which led to the identification of the known SSW events from 1952 onwards. One goal of this thesis is to identify SSW events that predate 1952. To this end, superposed epoch analysis (SEA) is employed to establish a relationship between the lunar tidal power and the wind and temperature conditions in the lower atmosphere. A threshold value for the lunar tidal power is identified that is discriminative for the known SSW events. This threshold is then used to identify lunar tidal enhancements, which are indicative of historic SSW events prior to 1952. It can be shown that the number of lunar tidal enhancements, and thus the occurrence frequency of historic SSW events between 1926 and 1952, is similar to the occurrence frequency of the known SSW events from 1952 onwards.
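The superposed epoch analysis used here can be sketched as follows (synthetic data, not the Huancayo record; the event days and signal strength are invented): segments of a time series are aligned on event onsets and averaged, so a response common to the events survives while unrelated variability averages out.

```python
# Sketch of superposed epoch analysis (SEA) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(0.0, 1.0, 2000)      # daily tidal-power proxy (noise)
events = [300, 800, 1300, 1700]          # hypothetical SSW onset days
for day in events:                        # bury a common 30-day response
    series[day:day + 30] += 2.0

window = np.arange(-60, 91)               # days relative to event onset
stack = np.array([series[e + window] for e in events])
composite = stack.mean(axis=0)            # the superposed epoch mean

# The post-onset enhancement now stands out against the pre-onset background.
pre = composite[window < 0].mean()
post = composite[(window >= 0) & (window < 30)].mean()
print(pre, post)
```

Averaging over N events reduces the unrelated noise by a factor of roughly sqrt(N), which is why a discriminative threshold on the composite lunar tidal power becomes feasible even for weak signals.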
Next to the classic SSW definition, the concept of polar vortex weakening (PVW) is utilized in this thesis. PVW is defined for higher latitudes and altitudes (≈ 40 km) than the classical SSW definition (≈ 32 km). The correlation between the timing and magnitude of lunar tidal enhancements in the EEJ and the timing and magnitude of PVW is found to be better than for the classic SSW definition. This suggests that the lunar tidal enhancements in the EEJ are closely linked to the state of the middle atmosphere.
Geomagnetic observatories located in different longitudes at the dip equator allow investigating the longitudinally dependent variability of the EEJ during SSW events. For this purpose, the lunar tidal enhancements in the EEJ are determined for the Peruvian and Indian sectors during the major SSW events of the years 2006 and 2009. It is found that the lunar tidal amplitude shows similar enhancements in the Peruvian sector during both SSW events, while the enhancements are notably different for the two events in the Indian sector.
In summary, this thesis shows that lunar tidal enhancements in the EEJ are indeed correlated to the occurrence of SSW events and they should be considered a prominent driver of low latitude ionospheric variability. Secondly, lunar tidal enhancements are found to be longitudinally variable. This suggests that regional effects, such as ionospheric conductivity and the geometry and strength of the geomagnetic field, also play an important role and have to be considered when investigating the mechanisms behind vertical coupling.
Background: Consumption of whole-grain, coffee, and red meat were consistently related to the risk of developing type 2 diabetes in prospective cohort studies, but potentially underlying biological mechanisms are not well understood. Metabolomics profiles were shown to be sensitive to these dietary exposures, and at the same time to be informative with respect to the risk of type 2 diabetes. Moreover, graphical network-models were demonstrated to reflect the biological processes underlying high-dimensional metabolomics profiles.
Aim: The aim of this study was to infer hypotheses on the biological mechanisms that link consumption of whole-grain bread, coffee, and red meat, respectively, to the risk of developing type 2 diabetes. More specifically, network models of amino acid and lipid profiles were considered as potential mediators of these risk relations.
Study population: Analyses were conducted in the prospective EPIC-Potsdam cohort (n = 27,548), applying a nested case-cohort design (n = 2731, including 692 incident diabetes cases). Habitual diet was assessed with validated semiquantitative food-frequency questionnaires. Concentrations of 126 metabolites (acylcarnitines, phosphatidylcholines, sphingomyelins, amino acids) were determined in baseline serum samples. Incident type 2 diabetes cases were assessed and validated in an active follow-up procedure. The median follow-up time was 6.6 years.
Analytical design: The methodological approach was conceptually based on counterfactual causal inference theory. Observations on the network-encoded conditional independence structure restricted the space of possible causal explanations of observed metabolomics-data patterns. Given basic directionality assumptions (diet affects metabolism; metabolism affects future diabetes incidence), adjustment for a subset of direct neighbours was sufficient to consistently estimate network-independent direct effects. Further model specification, however, was limited by missing directionality information on the links between metabolites. Therefore, a multi-model approach was applied to infer the bounds of possible direct effects. All metabolite-exposure and metabolite-outcome links were classified into one of three categories: direct effect, ambiguous (some models indicated an effect, others did not), and no effect.
Cross-sectional and longitudinal relations were evaluated with multivariable-adjusted linear regression and Cox proportional hazards regression models, respectively. Models were comprehensively adjusted for age, sex, body mass index, prevalent hypertension, dietary and lifestyle factors, and medication.
Results: Consumption of whole-grain bread was related to lower levels of several lipid metabolites with saturated and monounsaturated fatty acids. Coffee was related to lower aromatic and branched-chain amino acids, and had potential effects on the fatty acid profile within lipid classes. Red meat was linked to lower glycine levels and was related to higher circulating concentrations of branched-chain amino acids. In addition, potential marked effects of red meat consumption on the fatty acid composition within the investigated lipid classes were identified.
Moreover, potential beneficial and adverse direct effects of metabolites on type 2 diabetes risk were detected. Aromatic amino acids and lipid metabolites with even-chain saturated (C14-C18) and with specific polyunsaturated fatty acids had adverse effects on type 2 diabetes risk. Glycine, glutamine, and lipid metabolites with monounsaturated fatty acids and with other species of polyunsaturated fatty acids were classified as having direct beneficial effects on type 2 diabetes risk.
Potential mediators of the diet-diabetes links were identified by graphically overlaying this information in network models. Mediation analyses revealed that effects on lipid metabolites could potentially explain about one fourth of the whole-grain bread effect on type 2 diabetes risk; and that effects of coffee and red meat consumption on amino acid and lipid profiles could potentially explain about two thirds of the altered type 2 diabetes risk linked to these dietary exposures.
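The "proportion explained" figures quoted above can be illustrated with a minimal sketch of the difference method on the log-hazard-ratio scale; the hazard ratios below are hypothetical placeholders for illustration only, not estimates from the EPIC-Potsdam data:

```python
import math

def proportion_mediated(hr_total: float, hr_direct: float) -> float:
    """Fraction of a total exposure effect explained by mediators,
    computed via the difference method on the log-hazard-ratio scale."""
    log_total = math.log(hr_total)
    log_direct = math.log(hr_direct)
    return (log_total - log_direct) / log_total

# Hypothetical values: total-effect HR = 0.70 for an exposure,
# direct-effect HR = 0.90 after adjusting for the metabolite mediators.
share = proportion_mediated(0.70, 0.90)
print(f"{share:.0%} of the effect attributable to the metabolite profile")
```

With these placeholder numbers, roughly 70 % of the effect would be attributed to the mediating metabolite profile; the actual mediated proportions in the study are those stated in the text.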
Conclusion: An algorithm was developed that is capable of integrating single external variables (continuous exposures, survival time) and high-dimensional metabolomics data in a joint graphical model. Application to the EPIC-Potsdam cohort study revealed that the observed conditional independence patterns were consistent with the a priori mediation hypothesis: early effects on lipid and amino acid metabolism had the potential to explain large parts of the link between three of the most widely discussed diabetes-related dietary exposures and the risk of developing type 2 diabetes.
Lithospheric plates move over the low-viscosity asthenosphere balancing several forces. The driving forces include basal shear stress exerted by mantle convection and plate boundary forces such as slab pull and ridge push, whereas the resisting forces include inter-plate friction, trench resistance, and cratonic root resistance. These generate plate motions, the lithospheric stress field, and dynamic topography, which are observed with different geophysical methods. The orientation and tectonic regime of the observed crustal/lithospheric stress field further contribute to our knowledge of different deformation processes occurring within the Earth's crust and lithosphere. Using numerical models, previous studies were able to identify the major forces generating stresses in the crust and lithosphere, which also contribute to the formation of topography and drive lithospheric plates. They showed that the first-order stress pattern, explaining about 80 % of the stress field, originates from a balance of forces acting at the base of the moving lithospheric plates due to convective flow in the underlying mantle. The remaining second-order stress pattern is due to lateral density variations in the crust and lithosphere in regions of pronounced topography and high gravitational potential, such as the Himalayas and mid-ocean ridges. By linking global lithosphere dynamics to deep mantle flow, this study seeks to evaluate the influence of shallow and deep density heterogeneities on plate motions, the lithospheric stress field, and dynamic topography, using the geoid as a major constraint for mantle rheology. We use the global 3D lithosphere-asthenosphere model SLIM3D with visco-elasto-plastic rheology, coupled at 300 km depth to a spectral model of mantle flow. The complexity of the lithosphere-asthenosphere component allows for the simulation of power-law rheology with creep parameters accounting for both diffusion and dislocation creep within the uppermost 300 km.
First we investigate the influence of intra-plate friction and asthenospheric viscosity on present-day plate motions. Previous modelling studies have suggested that small friction coefficients (µ < 0.1, yield stress ~100 MPa) can lead to plate tectonics in models of mantle convection. Here we show that, in order to match present-day plate motions and net rotation, the friction coefficient must be less than 0.05. We are able to obtain a good fit with the magnitude and orientation of observed plate velocities (NUVEL-1A) in a no-net-rotation (NNR) reference frame with µ < 0.04 and a minimum asthenosphere viscosity of ~5×10^19 to 10^20 Pa s. Our estimates of net rotation (NR) of the lithosphere suggest that amplitudes of ~0.1-0.2 °/Ma, similar to most observation-based estimates, can be obtained with asthenosphere viscosity cutoff values of ~10^19 to 5×10^19 Pa s and a friction coefficient µ < 0.05.
The second part of the study investigates further constraints on the shallow and deep mantle heterogeneities driving plate motion by predicting the lithospheric stress field and topography and validating them against observations. Lithosphere stresses and dynamic topography are computed using the modelling setup and rheological parameters for prescribed plate motions. We validate our results with the World Stress Map 2016 (WSM2016) and the observed residual topography. Here we tested a number of upper mantle thermal-density structures. The one used to calculate plate motions is considered the reference thermal-density structure; this model is derived from a heat flow model combined with a sea floor age model. In addition, we used three different thermal-density structures derived from global S-wave velocity models to show the influence of lateral density heterogeneities in the upper 300 km on model predictions. A large portion of the total dynamic force generating stresses in the crust/lithosphere has its origin in the deep mantle, while topography is largely influenced by shallow heterogeneities. For example, there is hardly any difference between the stress orientation patterns predicted with and without consideration of the heterogeneities in the upper mantle density structure across North America, Australia, and North Africa. In high-altitude areas, however, the crustal contribution dominates the stress orientation over the deep mantle contribution.
This study explores the sensitivity of all the considered surface observables with regard to model parameters, providing insights into the influence of asthenosphere and plate boundary rheology on plate motion as we test various thermal-density structures to predict stresses and topography.
Lignin valorization
(2017)
The topic of this project is the use of lignin as an alternative to fossil feedstocks as a source of aromatic building blocks and oligomers. Lignin is the most abundant aromatic polymer in nature and is isolated from the lignocellulosic component of plants by different possible extraction treatments. Both the biomass source and the extraction method affect the structure of the isolated lignin and therefore influence its further application. Lignin was extracted from beech wood by two different hydrothermal alkaline treatments, using NaOH and Ba(OH)2 as base, and by an acid-catalyzed organosolv process. Moreover, lignin was isolated from bamboo, beech wood, and coconut by soda treatment of the biomasses. The structural features of these isolated lignins were compared using a wide range of analytical methods. Alkaline lignins proved to be better candidates than organosolv lignin as carbon precursors and as macromonomers for polymer synthesis: they showed a higher residual mass after carbonization and a higher content of reactive hydroxyl functionalities. The lignin source, in contrast, affected the lignin hydroxyl content only slightly.
One of the most common lignin modifications is its deconstruction to obtain aromatic molecules, which can be used as starting materials for the synthesis of fine chemicals. Lignin deconstruction leads to a complex mixture of aromatic molecules. A gas chromatographic analytical method was developed to characterize the mixture of products obtained by lignin deconstruction via heterogeneous catalytic hydrogenolysis. The analytical protocol allowed the quantification of three main groups of molecules by means of calibration curves, internal standard and a preliminary silylation step of the sample. The analytical method was used to study the influence of the hydrogenolysis catalyst, temperature and system (flow and batch reactor) on the yield and selectivity of the aromatic compounds.
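The quantification via calibration curves with an internal standard described above can be sketched as a linear fit of peak-area ratios against known concentrations; all concentrations, area ratios, and the example analyte below are hypothetical illustrations, not data from this work:

```python
import numpy as np

# Hypothetical calibration data for one silylated aromatic analyte:
# known concentrations (mg/mL) and measured peak-area ratios analyte/internal standard.
conc = np.array([0.05, 0.10, 0.25, 0.50, 1.00])
area_ratio = np.array([0.11, 0.21, 0.52, 1.03, 2.05])

# Linear calibration curve: area_ratio = slope * conc + intercept
slope, intercept = np.polyfit(conc, area_ratio, 1)

def quantify(sample_ratio: float) -> float:
    """Back-calculate the concentration from a measured analyte/IS area ratio."""
    return (sample_ratio - intercept) / slope

print(f"slope = {slope:.3f}, c(sample) = {quantify(0.80):.3f} mg/mL")
```

Normalizing each analyte peak to the internal standard before fitting compensates for injection-volume and derivatization variability between GC runs.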
Lignin extracted from beech wood by a hydrothermal process using Ba(OH)2 as base was functionalized by aromatic nitration in order to add nitrogen functionalities, with the final goal of synthesizing a nitrogen-doped carbon. The nitrated lignin was also reduced to the amino form in order to compare the influence of different nitrogen functionalities on the porosity of the final carbon. The carbons were obtained by ionothermal treatment of the precursors in the presence of the eutectic salt mixture KCl/ZnCl2. The carbons thus synthesized showed micro-, macro- and mesoporosity and were tested for their electrocatalytic activity towards the oxygen reduction reaction. The mesoporous carbon derived from nitro lignin displayed the highest electrocatalytic activity.
Lignins isolated from coconut, beech wood, and bamboo were used as macromonomers for the synthesis of biobased polyesters. A condensation reaction was performed between lignin and a hyperbranched poly(ester-amine), previously obtained by condensation of triethanolamine and adipic acid. The influence of the lignin source and content on the thermochemical and mechanical properties of the final material was investigated. The prepolymer showed adhesive properties towards aluminum, and its shear strength was therefore measured. The adhesive performance of the synthesized glues turned out to be independent of the lignin source but was affected by the amount of lignin in the final material.
This work shows that, although still at laboratory scale, the valorization of lignin can overcome the critical issues of the variability and complexity of lignin's structure.
Cellular membranes constantly experience remodeling, as exemplified by morphological changes during endo- and exocytosis. Regulation of membrane morphology is essential for these processes. In this work, we attempt to establish a regulation path based on the use of photoswitches exhibiting conformational changes in model membranes, namely giant unilamellar vesicles (GUVs). The mechanism of the changes in GUV morphology caused by isomerization of the photosensitive molecules has been explored previously but still remains elusive. We examine the morphological reshaping of GUVs in the presence of the photoswitch o-tetrafluoroazobenzene (F-azo) and show that the mechanism behind the resulting morphological changes involves both an increase in the membrane area and the generation of a positive spontaneous curvature. First, we characterize the partitioning of F-azo in a single-component membrane using both experimental and computational approaches. The partition coefficient calculated from molecular dynamics simulations agrees with experimental data obtained with size-exclusion chromatography. Then, we implement the approach of vesicle electrodeformation in order to assess the increase in membrane area observed as a result of the conformational change of F-azo. Finally, the local and the effective membrane spontaneous curvatures were estimated from the observed shapes of vesicles exhibiting outward budding. We then extend the application of F-azo to multicomponent lipid membranes, which exhibit a coexistence of domains in different liquid phases due to a miscibility gap between the lipids. We perform initial experiments to investigate whether F-azo can be employed to modulate the lateral lipid packing and organization. We observe either complete mixing of the domains or the appearance of disordered domains within the more ordered domains.
The type of behavior observed in response to the photoisomerization of F-azo depended on the lipid composition used. We believe that the findings presented here will contribute to understanding and controlling both lipid phase modulation and the regulation of membrane morphology in membrane systems.
This cumulative dissertation focuses on primary school teachers in their role as diagnosticians. Two diagnostic challenges that primary school teachers must master are examined as examples: estimating task difficulty and identifying language support needs.
The present work comprises three empirical studies, integrated into a framing text that first outlines the theoretical background and the state of empirical research. It addresses teachers' diagnostic competence and the accuracy of teacher judgments. Furthermore, the use of standardized test instruments is characterized as an important part of teachers' diagnostic tasks, and central aspects of language assessment in primary school are presented.
Study 1 (Hoffmann & Böhme, 2014b) addresses the question of how accurately primary school teachers estimate the difficulty of German and mathematics tasks. In addition, it examines which factors are associated with over- or underestimation of task difficulty.
Study 2 (Hoffmann & Böhme, 2017) examines the extent to which the classification accuracy of decisions on language support needs covaries with the diagnostic information sources used for them. The focus here is primarily on the effects of using language assessment instruments.
Study 3 (Hoffmann, Böhme & Stanat, 2017), finally, investigates which diagnostic instruments are currently used nationwide at German primary schools to identify language support needs and whether these instruments meet, for instance, test-theoretical quality criteria.
The central results of the three studies are summarized in the framing text and discussed across studies. Methodological strengths and weaknesses of the three contributions are also addressed, along with implications for future research and school practice.
This dissertation hermeneutically examines lifeworld contexts of experience in 128 essays written by international students, often referred to as Bildungsausländer, as well as fundamental processes in the formation and change of collective attributions in the course of social identity formation.
Using interpretation results coded according to the empirical method of Grounded Theory, this qualitative longitudinal study draws comparisons, both within and across cases, in order to discover patterns, construct typologies, and generalize the respective (cultural) horizons found in the essays.
Besides Grounded Theory, the theory of the "everyday lifeworld" (Alfred Schütz) forms a basis for the approach to examining the essays. In this context, starting from the gradual division of foreignness into everyday, structural, and radical foreignness, and from Goffman's concept of identity, the question is pursued as to what extent the formation and change of social identities can be identified in the examined texts. Acculturation processes and processes of self-identification with respect to an assumed community, shaped by collective (cultural) schemata, are analyzed. In this connection, the dissertation can show that certain cultural schemata emerged in the engagement with the formerly new life in Germany, and that certain older experiences are repeatedly drawn upon to confirm these images or schemata of experience, becoming fossilized like an imprint in rock.
People with inflammatory bowel disease (IBD) suffer from a variety of physical and psychosocial impairments. As with other chronic diseases, patient education programs could improve their psychological well-being (e.g. De Ridder & Schreurs, 2001; Faller, Reusch & Meng, 2011a; Küver, Becker & Ludt, 2008; Schüssler, 1998; Warsi, Wang, LaValley, Avorn & Solomon, 2004). For IBD, however, only few evaluations of education programs exist (e.g. Bregenzer et al., 2005; Mussell, Böcker, Nagel, Olbrich & Singer, 2003; Oxelmark, Magnusson, Löfberg & Hillerås, 2007), and their validity is generally limited by methodological shortcomings. The value of education programs for people with IBD therefore remains an open question. Moreover, no education program leading to psychological improvements exists yet for the German-speaking countries. For this reason, a 1.5-day weekend seminar with medical and psychological content was designed, manualized, and evaluated in the present study.
For the summative evaluation, 181 outpatients with IBD took part in a prospective, multicenter, randomized, controlled trial with four measurement points: before (T1), two weeks after (T2), and three months after (T3) the seminar. A 12-month follow-up (T4EG) assessed the stability of the effects in the experimental group (EG; n = 86). The waiting-list control group (n = 95) initially received standard care, i.e. no patient education, and could take part in the seminar after the third data collection. Analyses of covariance (ANCOVAs) controlling for the respective baseline values were conducted. Further analyses suggested adjusting for disease activity at T1, which was therefore included as an additional covariate in the ANCOVAs. Disease-related fears and concerns (PS-CEDE total score at T3; Krebs, Kachel & Faller, 1998) served as the primary outcome. Secondary outcomes included fear of progression and coping with this fear (PA-F-KF and PA-F; Mehnert, Herschbach, Berg, Henrich & Koch, 2006, and Dankert et al., 2003; Herschbach et al., 2005) as well as the health competencies of a positive attitude, active engagement in life, and acquisition of skills and strategies (heiQ; Osborne, Elsworth & Whitfield, 2007; Schuler et al., 2013). Further secondary outcomes were health-related quality of life (SF-12; Bullinger & Kirchberger, 1998), symptoms of anxiety or depression (PHQ-4; Kroenke, Spitzer, Williams & Löwe, 2009; Löwe et al., 2010), knowledge, coping with the IBD and with the negative emotions it triggers, and participants' satisfaction with the seminar. It was also of interest whether sex, age, or type, duration, or activity of the disease before the training influenced the variables listed above, and whether differential efficacy effects existed for them. In addition, disease-related fears and concerns of untrained study participants were examined.
Two weeks and three months after the training, comparisons of the experimental and control groups showed significant medium to large effects on disease-related fears and concerns, fear of progression and coping with it, and a positive attitude toward the IBD (all p ≤ .001). At both measurement points there were also significant large intervention effects on the acquisition of skills and strategies for dealing with the disease, knowledge about it, and coping with it (all p < .001), as well as moderate effects on dealing with IBD-related negative emotions (T2: p = .001; T3: p = .008). All described effects remained stable after twelve months. No training effects were found for active engagement in life, health-related quality of life, or symptoms of anxiety and depression.
Additionally controlling for disease activity at T1 did not substantially change the results. In the subgroup analyses, too, disease activity had no relevant influence on the efficacy of the training. The same applies to sex, age, and type and duration of the IBD. With the exception of disease activity, this was already suggested by the baseline t-tests, in which overall only very few significant, at most moderate, differences between the individual subgroups emerged.
Both the formative and the summative evaluation also showed participants' high satisfaction with the training. Besides its acceptance, its feasibility was confirmed. The analysis of study participants' fears and concerns furthermore provided pointers for the development and modification of interventions for people with IBD.
In conclusion, the patient education program for IBD patients evaluated here proved effective and was rated very positively by the participants. It led to substantial short-, medium-, and long-term improvements in psychological distress, self-management skills, coping with the disease, and knowledge, and it was equally effective for patients differing in sex, age, or type, duration, or activity of their IBD.
Background: Established protein- and nucleic-acid-based methods for specific pathogen detection can only be performed under standardized laboratory conditions by trained personnel and are therefore associated with high time and cost requirements. In nucleic-acid-based diagnostics, the introduction of isothermal amplification provides a fast and inexpensive alternative to the polymerase chain reaction (PCR). Owing to its high amplification efficiency, loop-mediated isothermal amplification (LAMP) offers a variety of detection options suitable for both rapid tests and monitoring applications.
A major aim of this work was to improve the applicability of LAMP and to develop a new method for the simple, fast, and inexpensive detection of pathogens using alternative DNA- or pyrophosphate-dependent detection schemes. First, direct and indirect detection methods were investigated; building on this, a procedure was developed for identifying new metal-ion-dependent fluorescent dyes for the selective detection of pyrophosphate in LAMP and other enzymatic reactions. As an alternative to DNA-based detection in digital LAMP, the previously established dyes were to be tested for pyrophosphate detection in an emulsion. Finally, a new reaction mechanism for the efficient generation of high-molecular-weight DNA under isothermal conditions was developed as an alternative to LAMP.
Results: For the detection of RNA- and DNA-based phytopathogens, real-time and endpoint detection with various dyes was established in a closed system. Berberine was successfully used as a DNA-intercalating fluorescent dye in real-time LAMP with sensitivity comparable to SYBR Green and EvaGreen. An advantage of berberine over the other dyes is that the DNA polymerase tolerates it even at high dye concentrations; berberine can therefore also be used for endpoint detection in the closed LAMP reaction without additional adjustment of the reaction conditions. In addition, hydroxynaphthol blue (HNB), known for colorimetric endpoint detection, was employed for the first time for real-time fluorimetric detection of LAMP. Further metal-ion-dependent dyes for the indirect detection of LAMP via pyrophosphate were also identified. For this purpose, an iterative method was developed for selecting candidate dyes with respect to their enzyme compatibility and their spectral properties in the presence or absence of manganese ions. A combinatorial screening in microtiter plate format was used to examine the complex concentration dependence between the individual components of a fluorimetric displacement assay. By visualizing the signal-to-noise ratio as an intensity matrix (heat map), alizarin red S and tetracycline were first selected under simulated reaction conditions. In the subsequent enzymatic LAMP reaction, alizarin red S in particular was identified as an inexpensive, non-toxic, and robust fluorescent dye, showing a pyrophosphate-dependent increase in fluorescence intensity.
The previously established dyes (HNB, calcein, and alizarin red S) were then successfully applied for the indirect fluorimetric detection of pyrophosphate in a LAMP-optimized emulsion. The stability and homogeneity of the generated emulsion were improved by adding the emulsifier poloxamer 188. Fluorescence-microscopic analysis of the emulsion allowed a clear discrimination of positive and negative droplets, especially when calcein or alizarin red S was used. Because of the complex primer design and the high probability of unspecific amplification in LAMP, a new Bst DNA polymerase-dependent isothermal amplification reaction was developed. By integrating a specific linker structure (an abasic site or hexaethylene glycol) between two primer sequences, a bifunctional primer ensured efficient regeneration of the primer binding sites. After specific hybridization to the template, the new primer induces refolding into a hairpin structure and simultaneously blocks polymerase activity on the opposite strand, enabling autocyclic amplification despite a constant reaction temperature. The efficiency of the "hinge-initiated primer-dependent amplification" (HIP) was finally improved by shortening the distance between a modified hinge primer and a PCR-like primer.
Conclusion: Owing to its high robustness and efficiency, LAMP has developed into a powerful alternative to classical PCR in molecular diagnostics. Different detection schemes improve the performance of qualitative and quantitative LAMP for field applications and diagnostics, since the new DNA- and pyrophosphate-dependent detection methods can be used in a closed reaction and thus enable simple pathogen diagnostics. The methods shown can moreover reduce cost and save time compared with conventional methods. An attractive goal is the further development of HIP for pathogen detection as an alternative to LAMP; the new LAMP detection schemes can be applied here as well. The use of Bst DNA polymerase-dependent reactions furthermore allows the integration of robust isothermal amplification into microfluidic systems. By combining sample preparation, amplification, and detection, future applications with short analysis times and little instrumental effort become possible, particularly in pathogen diagnostics.
Direct anthropogenic influences on the Earth's subsurface during drilling, extraction, or injection activities can affect land stability by causing subsidence, uplift, or lateral displacements. These effects can occur locally, in uninhabited as well as inhabited regions, so the associated risks for humans, infrastructure, and the environment must be minimized. To achieve this, appropriate surveillance methods must be found that allow simultaneous monitoring during such activities. Multi-temporal synthetic aperture radar interferometry (MT-InSAR) methods such as Persistent Scatterer Interferometry (PSI) and the Small BAseline Subsets (SBAS) approach have been developed as standard techniques for satellite-based surface displacement monitoring. With the increasing spatial resolution and availability of SAR sensors in recent years, MT-InSAR can be valuable for the detection and mapping of even the smallest man-made displacements.
This doctoral thesis aims at investigating the capacities of the mentioned standard methods for this purpose, and comprises three main objectives against the backdrop of a user-friendly surveillance service:
(1) the spatial and temporal significance assessment against leveling, (2) the suitability evaluation of PSI and SBAS under different conditions, and (3) the analysis of the link between surface motion and subsurface processes.
Two prominent case studies of anthropogenically induced subsurface processes in Germany serve as the basis for this goal. The first is the distinct urban uplift, with severe damage, at Staufen im Breisgau, which since 2007 has been associated with a failed attempt to install a shallow geothermal energy supply for an individual building. The second case study considers the pilot project of geological carbon dioxide (CO2) storage at Ketzin, comprising borehole drilling and the injection of more than 67 kt of CO2 between 2008 and 2013. Leveling surveys at Staufen and comprehensive background knowledge of the underground processes, gained from different kinds of in-situ measurements at both locations, provide a suitable basis for this comparative study and the objectives stated above. The contrasting settings, i.e. the urban versus rural site character, were chosen to investigate the limits of the applicability of PSI and SBAS.
For the MT-InSAR analysis, X-band images from the German TerraSAR-X and TanDEM-X satellites were acquired in the standard Stripmap mode with about 3 m spatial resolution in azimuth and range. Data acquisition lasted over a period of five years for Staufen (2008-2013) and four years for Ketzin (2009-2013). For a first approximation of the subsurface source at Staufen, an inversion of the InSAR results was applied. At Ketzin, the modeled uplift based on complex hydromechanical simulations and a correlation analysis with bottomhole pressure data were used for comparison with the MT-InSAR measurements.
In response to the defined objectives of this thesis, a higher level of detail can be achieved in mapping surface displacements without in-situ effort by using MT-InSAR instead of leveling (1). A clear delineation of the elliptically shaped uplift border and its magnitudes at different parts was possible at Staufen, with the exception of a vegetated area in the northwest. Vegetation coverage and the associated temporal signal decorrelation are the main limitations of MT-InSAR, as clearly demonstrated at the Ketzin test site. They result in insufficient measurement point density and unwrapping issues. Therefore, spatial resolutions of one meter or better are recommended to achieve an adequate point density for local displacement analysis and to apply signal noise reduction. Leveling measurements can provide a complementary data source here, but require considerable personnel effort even at the local scale. Horizontal motions could be identified at Staufen only by comparing the temporal evolution of the 1D line of sight (LOS) InSAR measurements with the available leveling data. An exception was the independent LOS decomposition using ascending and descending data sets for the period 2012-2013. A full 3D displacement field representation failed due to the insufficient orbit-related north-south sensitivity of the satellite-based measurements. By using the dense temporal mapping capabilities of the TerraSAR-X/TanDEM-X satellites, with acquisitions every 11 days, the temporal displacement evolution could be captured as well as with leveling.
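The LOS decomposition from ascending and descending tracks mentioned above can be sketched numerically. Assuming a simplified 2D geometry that ignores the north-south component (to which the near-polar SAR orbits are largely insensitive), the two LOS measurements form a 2x2 linear system in the vertical and east-west displacements. All angles, sign conventions, and values below are illustrative assumptions, not taken from the Staufen data:

```python
import numpy as np

def decompose_los(d_asc, d_desc, theta_asc, theta_desc):
    """Recover vertical (dU) and east-west (dE) displacement from ascending
    and descending LOS measurements. Assumed sign convention: the east
    component enters with opposite sign on the two tracks."""
    A = np.array([
        [np.cos(theta_asc),  np.sin(theta_asc)],   # ascending geometry
        [np.cos(theta_desc), -np.sin(theta_desc)], # descending geometry
    ])
    dU, dE = np.linalg.solve(A, np.array([d_asc, d_desc]))
    return dU, dE

# synthetic check: 10 mm uplift, 3 mm eastward motion, 34 deg incidence
theta = np.deg2rad(34.0)
d_asc = 10.0 * np.cos(theta) + 3.0 * np.sin(theta)
d_desc = 10.0 * np.cos(theta) - 3.0 * np.sin(theta)
print(decompose_los(d_asc, d_desc, theta, theta))
```

The singular geometry of this system for purely north-south motion is exactly why the full 3D field could not be recovered.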
With respect to the tested methods, and from a general point of view, SBAS should be preferred over PSI (2). SBAS delivered a higher point density and was therefore less affected by phase unwrapping issues in both case studies. Linking surface motions with subsurface processes is possible when considering simplified geophysical models (3), but still requires intensive research to gain a deep understanding.
BACKGROUND: Aggressive behavior at an early age is linked to a broad range of psychosocial problems in later life. That is why risk factors for the occurrence and development of aggression have long been examined in psychological science. The present doctoral dissertation aims to expand this research by investigating risk factors in three intrapersonal domains, using the prominent social-information processing approach of Crick and Dodge (1994) as a framework model. Anger regulation was examined as an affective, theory of mind as a cognitive, and physical attractiveness as an appearance-related developmental factor of aggression in middle childhood. An additional goal of this work was to develop and validate a behavioral observation assessment of anger regulation, as past research lacked ecologically valid measures of anger regulation applicable to longitudinal studies.
METHODS: Three empirical studies address the aforementioned intrapersonal risk factors. Each study used data from the PIER project, a three-wave longitudinal study covering three years with a total sample of 1,657 children aged between 6 and 11 years (at the first measurement point). The central constructs were assessed via teacher reports (aggression), behavioral observation (anger regulation), computer tests (theory of mind), and independent ratings (physical attractiveness). The predictive value of each proposed risk factor for the development of aggressive behavior was examined via structural equation modeling.
RESULTS AND CONCLUSION: The newly developed behavioral observation measure was found to be a reliable and valid tool to assess anger regulation in middle childhood, but limited in capturing the full range of relevant regulation strategies. That might be the reason why maladaptive anger regulation was not found to function as a risk factor for subsequent aggressive behavior. However, children's deficits in theory of mind and a low level of physical attractiveness significantly predicted later aggression. Problematic peer relationships were identified as a mechanism underlying the link between low attractiveness and aggression. Thus, fostering children's skills in theory of mind and their ability to question existing beliefs about the nature of more versus less attractive individuals may be important starting points for the prevention of aggressive behavior in middle childhood.
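The mediation logic reported here (low attractiveness leading to peer problems, which in turn predict aggression) can be illustrated with a toy product-of-coefficients estimate. The coefficients and simulated data below are invented for illustration and do not reflect the PIER data or the structural equation models actually used in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# hypothetical full-mediation model: attractiveness -> peer problems -> aggression
attract = rng.normal(size=n)
peer_problems = -0.5 * attract + rng.normal(size=n)        # true path a = -0.5
aggression = 0.6 * peer_problems + rng.normal(size=n)      # true path b = 0.6

def ols(X, y):
    """Ordinary least squares with intercept; returns [intercept, coefs...]."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols([attract], peer_problems)[1]               # attractiveness -> mediator
b = ols([peer_problems, attract], aggression)[1]   # mediator -> outcome, controlling X
indirect = a * b                                   # indirect (mediated) effect
```

With these invented coefficients the indirect effect recovered from the regressions is close to the true product (-0.5)(0.6) = -0.30.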
In Germany, more than 200,000 persons die of cancer every year, which makes it the second most common cause of death. Chemotherapy and radiation therapy are often combined to exploit a supra-additive effect, as some chemotherapeutic agents like halogenated nucleobases sensitize the cancerous tissue to radiation. The radiosensitizing action of certain therapeutic agents can be at least partly assigned to their interaction with secondary low energy electrons (LEEs) that are generated along the track of the ionizing radiation. In cancer therapy, DNA is an important target, as severe DNA damage such as double strand breaks induces cell death. As only a limited number of radiosensitizing agents, often strongly cytotoxic, are in clinical practice, it would be beneficial to gain a deeper understanding of the interaction of less toxic potential radiosensitizers with secondary reactive species like LEEs. Beyond that, LEEs can be generated by laser-illuminated nanoparticles applied in photothermal therapy (PTT) of cancer, an approach that treats cancer by increasing the temperature in the cells. However, the application of halogenated nucleobases in PTT has not been considered so far. In this thesis, the interaction of the potential radiosensitizer 8-bromoadenine (8BrA) with LEEs was studied. In a first step, dissociative electron attachment (DEA) in the gas phase was studied in a crossed electron-molecular beam setup. The main fragmentation pathway was found to be the cleavage of the C-Br bond. The formation of a stable parent anion was observed for electron energies around 0 eV. Furthermore, DNA origami nanostructures were used as platforms to determine electron-induced strand break cross sections of 8BrA-sensitized oligonucleotides and the corresponding nonsensitized sequence as a function of the electron energy. In this way, the influence of the DEA resonances observed for the free molecules on the DNA strand breaks was examined.
As the surrounding medium influences the DEA, pulsed-laser-illuminated gold nanoparticles (AuNPs) were used as a nanoscale electron source in an aqueous environment. The dissociation of brominated and native nucleobases was tracked with UV-Vis absorption spectroscopy, and the generated fragments were identified with surface enhanced Raman scattering (SERS). Besides the electron-induced damage, nucleobase analogues were decomposed in the vicinity of the laser-illuminated nanoparticles due to the high temperatures. In order to gain a deeper understanding of the different dissociation mechanisms, the thermal decomposition of the nucleobases in these systems was studied and the influence of the adsorption kinetics of the molecules was elucidated. In addition to the pulsed laser experiments, a dissociative electron transfer from plasmonically generated "hot electrons" to 8BrA was observed under low-energy continuous wave laser illumination and tracked with SERS. The reaction was studied on AgNPs and AuNPs as a function of the laser intensity and wavelength. On dried samples, the dissociation of the molecule was described by fractal-like kinetics. In solution, the dissociative electron transfer was observed as well. It turned out that the timescales of the reactions were slightly shorter than typical integration times of Raman spectra. Consequently, such reactions need to be taken into account in the interpretation of SERS spectra of electrophilic molecules. The findings in this thesis help to understand the interaction of brominated nucleobases with plasmonically generated electrons and free electrons. This might help to evaluate the potential radiosensitizing action of such molecules in cancer radiation therapy and PTT.
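The strand break cross sections determined on the DNA origami platforms are commonly extracted from the slope of the strand-break fraction versus electron fluence in the low-exposure linear regime. The sketch below illustrates that generic analysis with made-up numbers; it is not the thesis' actual evaluation procedure or data:

```python
import numpy as np

def strand_break_cross_section(fluence_cm2, broken_fraction):
    """Cross section (cm^2) as the slope of strand-break fraction vs electron
    fluence, assuming the low-exposure linear regime."""
    slope, _ = np.polyfit(fluence_cm2, broken_fraction, 1)
    return slope

fluence = np.array([0.0, 1e12, 2e12, 3e12])   # electrons per cm^2 (synthetic)
broken = 4.2e-14 * fluence                     # synthetic linear response
sigma = strand_break_cross_section(fluence, broken)
```

Comparing such slopes for the 8BrA-sensitized and nonsensitized sequences at each electron energy yields the energy-dependent sensitization reported in the abstract.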
The goal of this thesis concerns the question of how plasmonic and photoswitching properties can be simultaneously introduced to, and combined in, different nano-objects. In this thesis I investigate the complexes between noble metal nanoparticles and cationic surfactants containing azobenzene units in their hydrophobic tail, employing absorption spectroscopy, surface zeta-potential measurements, and electron microscopy.
In the first part of the thesis, the formation of complexes between negatively charged laser-ablated spherical gold nanoparticles and cationic azobenzene surfactants in trans-conformation is explored. It is shown that the constitution of the complexes strongly depends on the surfactant-to-gold molar ratio. At certain molar ratios, particle self-assembly into nanochains and their aggregation were registered. At higher surfactant concentrations, the surface charge of the nanoparticles turned positive, attributed to the formation of a stabilizing double layer of azobenzene surfactants on the gold nanoparticle surfaces. These gold-surfactant complexes remained colloidally stable. UV light induced trans-cis isomerization of the azobenzene surfactant molecules and thus perturbed the stabilizing surfactant shell, causing nanoparticle aggregation. The results obtained with silver and silicon nanoparticles mimic those for the comprehensively studied gold nanoparticles, corroborating the proposed model of complex formation.
In the second part, the interaction between plasmonic metal nanoparticles (Au, Ag, Pd, alloy Au-Ag, Au-Pd), as well as silicon nanoparticles, and cis-isomers of azobenzene-containing compounds is addressed. Cis-trans thermal isomerization of azobenzenes was enhanced in the presence of gold, palladium, and alloy gold-palladium nanoparticles. The influence of the surfactant structure and nanoparticle material on the azobenzene isomerization rate is expounded. Gold nanoparticles showed superior catalytic activity for the thermal cis-trans isomerization of azobenzenes. In a joint project with theoretical chemists, we demonstrated that a possible physical origin of this phenomenon is electron transfer between azobenzene moieties and nanoparticle surfaces.
In the third part, complexes between gold nanorods and azobenzene surfactants with different tail lengths were exposed to UV and blue light, inducing trans-cis and cis-trans isomerization of the surfactant, respectively. At the same time, the position of the longitudinal plasmonic absorption maximum of the gold nanorods experienced a reversible shift responding to the changes in the local dielectric environment. The surface plasmon resonance condition allowed the estimation of the refractive index of azobenzene-containing surfactants in solution.
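The estimation of the refractive index from the plasmon shift described above is, in its simplest form, a linear sensor relation: the longitudinal resonance shifts proportionally to the change in local refractive index. The sketch below uses an assumed bulk sensitivity of 250 nm per refractive index unit (RIU), a typical order of magnitude for gold nanorods but not a value from this thesis:

```python
def refractive_index_change(shift_nm, sensitivity_nm_per_riu=250.0):
    """Linear LSPR approximation: d(lambda) = m * d(n), so d(n) = d(lambda)/m.
    The sensitivity m = 250 nm/RIU is an illustrative assumption."""
    return shift_nm / sensitivity_nm_per_riu

# e.g. a 5 nm red shift of the longitudinal peak
dn = refractive_index_change(5.0)
```

In practice the sensitivity itself is calibrated first, e.g. by recording the peak position in solvents of known refractive index.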
The present work is entitled: The Role of the Intellectual in Günter Grass's Works: "Die Plebejer proben den Aufstand" (1966), "Örtlich betäubt" (1969), "Aus dem Tagebuch einer Schnecke" (1972), and "Ein weites Feld" (1995).
The first chapter comprises three main sections in total.
II. The Intellectual
II.1 The general context
This part of the dissertation seeks answers to the following questions, among others: What is an intellectual? How did the term come about? Are there differences between intellectuals, and how are they classified?
II.2 The German context
Examining the Nazi system and its historical background conveys important lessons. But why are these lessons needed? Are there traces of National Socialism today? Where were the intellectuals when National Socialism took shape? Did National Socialism only emerge with Hitler? If earlier, in which phase did it take root in the consciousness of the Germans? Did theoretical or intellectual tendencies contribute to it?
II.3 The image of Grass as an intellectual
II.3.1 Positioning
A central thesis on Grass's intellectual positioning is derived from the connection between Grass's basic conception of socio-political intellectuality and the Gruppe 47. The discussion of Grass's image after the publication of his autobiographical work "Beim Häuten der Zwiebel" (2006) then serves to illuminate his intellectuality not only from its positive but also from its negative profile.
From the presentation of numerous views held by Günter Grass, five central themes are treated as concepts. Under each concept, specific proposals for social positioning are presented.
II.3.2 Grass's political characteristics
This concerns his intellectual character traits, which raise several questions: Does Günter Grass engage in social activities? Does he meet the prerequisites for them? What is the scope of his activities? Did the Gruppe 47 influence Grass's intellectual characteristics? Does Grass have a method of socio-political work at his disposal?
Then Günter Grass's political language and its effect on the recipient are examined. Next, Grass's understanding of revision is investigated, and whether it is consistent with his understanding of the Enlightenment. Subsequently, the function of revision in his literary work and in his socio-political activity is shown.
Finally, the arguments for his intellectuality are examined:
How did Grass's socio-political activity take the concrete political framework into account? To answer this question, the relationship between politics and morality must be clarified.
III. Historical context and content of the works
Under this heading, the historical context of the examined works is first outlined. Then, mostly on the basis of evidence from each work itself, not only the core of the work and its plot but also the method applied to it are presented.
IV. The relation of the examined works to concrete socio-political questions
IV.1 Ways in which the intellectual interacts with society, especially during changes in socio-political processes
The central concepts of the first work are: mediation, engagement, solidarity, and topicality as a standard. These are tied to revision through two concepts of the second work, the appeal to generations in times of transition and the principle of cohesion, and are elaborated through the treatment of the process of opinion formation in the fourth work.
IV.2 Thematic aspects for preventing a Nazi regime
From the thematic perspectives of the last three works emerges a varied collection of intellectual concepts that can be employed to counter Nazi advances.
V. Pedagogical strategies of the examined works
The pedagogical aspects of the examined works are meant to convey intellectual values that make a significant contribution to resolving socio-political problems and conflicts.
VI. Development of the literary and socio-political vision
Here, the line of development of the socio-political vision in the examined works is traced.
VII. On the reception of the four works
By engaging with the negative criticism, the aim is to demonstrate its subjectivity, so that the socio-political value of the four works is revealed.
During the drug discovery & development process, several phases encompassing a number of preclinical and clinical studies have to be successfully passed to demonstrate safety and efficacy of a new drug candidate. As part of these studies, the characterization of the drug's pharmacokinetics (PK) is an important aspect, since the PK is assumed to strongly impact safety and efficacy. To this end, drug concentrations are measured repeatedly over time in a study population. The objectives of such studies are to describe the typical PK time-course and the associated variability between subjects, and to identify the underlying sources significantly contributing to this variability, e.g. the use of comedication. The most commonly used statistical framework to analyse repeated measurement data is the nonlinear mixed effects (NLME) approach. At the same time, ample knowledge about the drug's properties already exists and accumulates during the discovery & development process: before any drug is tested in humans, detailed knowledge about the PK in different animal species has to be collected. This drug-specific knowledge and general knowledge about the species' physiology is exploited in mechanistic physiologically based PK (PBPK) modeling approaches; it is, however, ignored in the classical NLME modeling approach.
Mechanistic physiologically based models aim to incorporate the relevant and known physiological processes which contribute to the overall process of interest. In comparison to data-driven models they are usually more complex from a mathematical perspective. For example, in many situations the number of model parameters outnumbers the number of measurements, and thus reliable parameter estimation becomes more complex and partly impossible. As a consequence, the integration of powerful mathematical estimation approaches like the NLME modeling approach, which is widely used in data-driven modeling, with the mechanistic modeling approach is not well established; the observed data are rather used to confirm a model instead of informing and building it.
Another obstacle to an integrated approach is the inaccessibility of the details of the NLME methodology, which prevents these approaches from being adapted to the specifics and needs of mechanistic modeling. Although the NLME modeling approach has existed for several decades, the details of the mathematical methodology are scattered across a wide range of literature, and a comprehensive, rigorous derivation is lacking. Available literature usually covers only selected parts of the mathematical methodology. Sometimes, important steps are not described or are only heuristically motivated, e.g. the iterative algorithm that finally determines the parameter estimates.
Thus, in the present thesis the mathematical methodology of NLME modeling is systematically described and complemented into a comprehensive account, comprising the common theme from ideas and motivation to the final parameter estimation. Therein, new insights into the interpretation of the different approximation methods used in the context of the NLME modeling approach are given and illustrated; furthermore, similarities and differences between them are outlined. Based on these findings, an expectation-maximization (EM) algorithm for determining the estimates of an NLME model is described.
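The EM idea behind such algorithms can be illustrated on a much simpler stand-in for an NLME model: a one-way random-effects model y_ij = mu + b_i + eps_ij with b_i ~ N(0, vb) and eps_ij ~ N(0, ve). The E-step computes the posterior of each unobserved random effect b_i; the M-step re-estimates mu, vb, and ve from the completed data. This is a toy sketch of the general scheme, not the thesis' actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate grouped data: y_ij = mu + b_i + eps_ij (invented true values)
n_groups, n_per = 200, 8
mu_true, sd_b, sd_e = 1.0, 0.7, 0.5
b = rng.normal(0, sd_b, n_groups)
y = mu_true + b[:, None] + rng.normal(0, sd_e, (n_groups, n_per))

def em_random_effects(y, n_iter=200):
    """EM for the balanced one-way random-effects model."""
    n_g, n_p = y.shape
    mu, vb, ve = y.mean(), y.var(), y.var()   # crude starting values
    for _ in range(n_iter):
        # E-step: posterior mean/variance of each random effect b_i
        ybar = y.mean(axis=1)
        post_var = 1.0 / (1.0 / vb + n_p / ve)
        post_mean = post_var * n_p / ve * (ybar - mu)
        # M-step: maximize the expected complete-data log-likelihood
        mu = (y - post_mean[:, None]).mean()
        vb = np.mean(post_mean**2 + post_var)
        ve = np.mean((y - mu - post_mean[:, None])**2) + post_var
    return mu, vb, ve

mu_hat, vb_hat, ve_hat = em_random_effects(y)
```

In the NLME setting the E-step is no longer available in closed form, which is exactly where the approximation methods discussed in the thesis enter.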
Using the EM algorithm and the lumping methodology of Pilari (2010), a new approach for combining PBPK and NLME modeling is presented and exemplified for the antibiotic levofloxacin. The lumping identifies which processes are informed by the available data, and the respective model reduction improves the robustness of parameter estimation. Furthermore, it is shown how a priori known factors influencing the variability, as well as a priori known unexplained variability, are incorporated to further mechanistically drive the model development. In conclusion, correlations between parameters and between covariates are automatically accounted for due to the mechanistic derivation of the lumping and the covariate relationships.
A useful feature of PBPK models compared to classical data-driven PK models is the possibility of predicting drug concentrations within all organs and tissues of the body. Thus, the resulting PBPK model for levofloxacin is used to predict drug concentrations and their variability within soft tissues, the site of action of levofloxacin. These predictions are compared with data from muscle and adipose tissue obtained by microdialysis, an invasive technique that measures a proportion of drug in the tissue and thereby approximates concentrations in the interstitial fluid. Because a framework for comparing human in vivo tissue PK with PBPK predictions has not been established so far, a new conceptual framework is derived. The comparison of PBPK model predictions and microdialysis measurements shows adequate agreement and reveals further strengths of the presented new approach.
We demonstrated how mechanistic PBPK models, which are usually developed in the early stages of drug development, can be used as the basis for model building in the analysis of later stages, i.e. in clinical studies. As a consequence, the extensively collected and accumulated knowledge about species and drug is utilized and updated with specific volunteer or patient data. The NLME approach combined with mechanistic modeling reveals new insights for the mechanistic model, for example the identification and quantification of variability in mechanistic processes. This represents a further contribution to the learn & confirm paradigm across the different stages of drug development.
Finally, the applicability of mechanism-driven model development is demonstrated on an example from the field of Quantitative Psycholinguistics, analysing repeated eye movement data. Our approach gives new insight into the interpretation of these experiments and the processes behind them.
The aim of this work is to develop a heuristic frame of reference for explaining complexity in the context of Industrie 4.0 and demographic change from the perspective of structuration theory. With respect to the cognitive demands employees can expect in the future, two questions are essential: what challenges do companies face regarding the attitudes, behavior, and experiential knowledge of their employees, and which approaches have so far proven helpful in practice for dealing with these challenges?
Chapter 1 first describes the initial situation. The terms Industrie 4.0 and demographic change are discussed and placed in a theoretical context.
Chapter 2 provides the theoretical foundation of the work. A structuration-theoretical view of companies as socio-technical systems is adopted. This "non-deterministic" perspective creates a processual view of organizational change that makes it possible to include employees as actively acting agents, in the sense of "organizing", in explaining possible connections between Industrie 4.0 and demographic change. The socio-technical systems approach and structuration theory thus form the "core" of the heuristic frame of reference to be developed.
The substantive design of the theory-based heuristic frame of reference takes place in Chapters 3 and 4.
Chapter 3 describes selected aspects of future work requirements, identified through a systematic review of the current state of research on Industrie 4.0. They form the "design boundaries" within which, depending on the company-specific situation, different new or changed demands on employees in the implementation of Industrie 4.0 can be derived.
Chapter 4 describes selected aspects of human action, using older employees as an example, in the form of two focal points.
The first focal point concerns possible factors influencing the attitudes and behavior of older employees in the change process due to a prevailing image of age within the company. The basis for this is stigmatization theory as an interactionist approach of social theory.
The second focal point, selected action-theoretical aspects of aging research from developmental psychology, adopts a lifespan perspective. It systematizes the complexity-inducing factors that, from an action-theoretical perspective, concern the adaptation of older employees to changed external and personal living conditions.
Subsequently, a first theory-based frame of reference is derived from the preceding theoretical considerations.
Chapters 5 and 6 describe the empirical part of the work, the conduct of semi-structured interviews. The aim of the empirical investigation was, beyond the theoretical foundation, to substantiate and, where appropriate, supplement the theory-based heuristic frame of reference with practical experience. For this purpose, the experiential knowledge of 23 experts was elicited in personal interviews based on the theory-based heuristic frame of reference.
After Chapter 5 describes the procedure of the empirical investigation, Chapter 6 presents the results of the qualitative survey. Central factors influencing the design and implementation of Industrie 4.0 in the context of demographic change are analyzed from the personal interviews and clustered into the overarching categories of action competencies, attitudes/behavior, and experiential knowledge.
Subsequently, the theory-based heuristic frame of reference is substantiated and supplemented by the overarching categories and factors from the expert interviews.
In Chapter 7, implications for practice are derived by way of example on the basis of the heuristic frame of reference and the recommendations from the expert interviews. Possible interventions to support a positive readiness for change and positive change behavior in structural transformation are presented. These include adapting leadership behavior in the change process, dealing with the paradox of stability and flexibility, dealing with age stereotypes in companies, supporting strategies of selection, optimization, and compensation, and measures to align activities with the potential risks of employees.
A summary, conclusions, and an outlook follow in Chapter 8.
Import and decomposition of dissolved organic carbon in pre-dams of drinking water reservoirs
(2017)
Dissolved organic carbon (DOC) represents a key component of the aquatic carbon cycle as well as of drinking water production from surface waters. DOC concentrations have increased in water bodies of the northern hemisphere in recent decades, with ecological consequences and water quality problems. Within the pelagic zone of lakes and reservoirs, the DOC pool is greatly affected by biological activity, as DOC is simultaneously produced and decomposed. This thesis aimed for a conceptual understanding of organic carbon cycling and DOC quality changes under differing hydrological and trophic conditions. Further, the occurrence of aquatic priming was investigated, a proposed process potentially facilitating the microbial decomposition of stable allochthonous DOC within the pelagic zone.
To study organic carbon cycling under different hydrological conditions, quantitative and qualitative investigations were carried out in three pre-dams of drinking water reservoirs exhibiting a gradient in DOC concentrations and trophic states. All pre-dams were mainly autotrophic in their epilimnia. Discharge and temperature were identified as the key factors regulating net production and respiration in the upper water layers of the pre-dams. Considerably high autochthonous production was observed during the summer season under higher trophic status and base flow conditions: up to 30% of the total organic carbon gain was produced within the epilimnia. Consequently, this affected the DOC quality within the pre-dams over the year, and enhanced characteristics of algae-derived DOC were observed during base flow in summer. Allochthonously derived DOC dominated at high discharges and under oligotrophic conditions, when production and respiration were low. These results underline that even small impoundments with typically short water residence times are hotspots of carbon cycling, significantly altering water quality depending on discharge conditions, temperature, and trophic status. Further, they highlight that these factors need to be considered in future water management, as increasing temperatures and altered precipitation patterns are predicted in the context of climate change.
Under base flow conditions, heterotrophic bacteria preferentially utilized older DOC components with a conventional radiocarbon age of 195-395 years before present (i.e. before 1950). In contrast, younger carbon components (modern, i.e. produced after 1950) were mineralized following a storm flow event. This highlights that age and recalcitrance of DOC are independent of each other. To assess the ages of the microbially consumed DOC, a simplified method was developed to recover the respired CO2 from heterotrophic bacterioplankton for carbon isotope analyses (13C, 14C). The advantages of the method comprise the operation of replicate incubations at in-situ temperatures using standard laboratory equipment, enabling application under a broad range of conditions.
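The conventional radiocarbon ages quoted above map directly to the measured 14C content via the standard Libby convention, where the conventional age equals -8033 ln(F), with F the fraction modern. A minimal sketch of that conversion (standard radiocarbon convention, not a method specific to this thesis):

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional radiocarbon constant

def fraction_modern(conv_age_years):
    """Convert a conventional 14C age (years BP) to fraction modern:
    F = exp(-age / 8033). F > 1 indicates post-1950 ('modern') carbon."""
    return math.exp(-conv_age_years / LIBBY_MEAN_LIFE)

# the 195-395 yr BP range reported for base-flow respired DOC
f_young, f_old = fraction_modern(195), fraction_modern(395)
```

A conventional age of 195-395 yr BP thus corresponds to roughly 95-98% of the modern 14C activity, while the post-storm-flow carbon had F above 1.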
Aquatic priming was investigated in laboratory experiments during the microbial decomposition of two terrestrial DOC substrates (peat water and soil leachate). Natural phytoplankton served as a source of labile organic matter, and the total DOC pool increased throughout the experiments due to exudation and cell lysis of the growing phytoplankton. A priming effect for both terrestrial DOC substrates was revealed via carbon isotope analysis and mixing models, and was more pronounced for the peat water than for the soil leachate. This indicates that the DOC source and the amount of added labile organic matter might influence the magnitude of a priming effect. Additional analysis via high-resolution mass spectrometry revealed that oxidized, unsaturated compounds were more strongly decomposed under priming (i.e. in the presence of phytoplankton). Given the observed increase in DOC concentrations during the experiments, it can be concluded that aquatic priming is not easily detectable via net concentration changes alone and should be considered a qualitative effect.
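The mixing models mentioned above rest on a standard two-source isotope mass balance: with distinct isotopic signatures for the terrestrial and phytoplankton-derived end members, the terrestrial fraction of a respired carbon sample follows from linear mixing. A minimal sketch with invented δ13C values (the end-member signatures are illustrative, not the thesis' measurements):

```python
def terrestrial_fraction(delta_sample, delta_terr, delta_algal):
    """Two-source mixing model: fraction of terrestrial carbon in a sample,
    given the isotopic signatures of the two end members."""
    return (delta_sample - delta_algal) / (delta_terr - delta_algal)

# hypothetical δ13C values (per mil): terrestrial -28, algal -20, sample -25
f_terr = terrestrial_fraction(-25.0, -28.0, -20.0)
```

Tracking how f_terr changes with versus without phytoplankton present is what isolates the priming-driven extra decomposition of terrestrial DOC.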
The knowledge gained from this thesis contributes to the understanding of aquatic carbon cycling and demonstrated how DOC dynamics in freshwaters vary with hydrological, seasonal and trophic conditions. It further demonstrated that aquatic priming contributes to the microbial transformation of organic carbon and the observed decay of allochthonous DOC during transport in inland waters.
Background: Obesity is thought to be the consequence of unhealthy nutrition and a lack of physical activity. Although the resulting metabolic alterations such as impaired glucose homeostasis and insulin sensitivity can usually be improved by physical activity, some obese patients fail to enhance skeletal muscle metabolic health with exercise training. Since this non-response might be largely inherited, maternal nutrition during pregnancy and lactation is hypothesized to impair offspring skeletal muscle physiology.
Objectives: This PhD thesis aims to investigate the consequences of maternal high-fat diet (mHFD) consumption on offspring skeletal muscle physiology and exercise performance. We show that a maternal high-fat diet during gestation and lactation decreases the offspring's training efficiency and endurance performance by influencing the epigenetic profile of their skeletal muscle and altering the adaptation to an acute exercise bout, which, in the long term, increases the offspring's obesity susceptibility.
Experimental setup: To investigate this issue in detail, we conducted several studies with a similar maternal feeding regime. Dams (C57BL/6J) were fed either a low-fat diet (LFD; 10 energy% from fat) or a high-fat diet (HFD; 40 energy% from fat) during pregnancy and lactation. After weaning, male offspring of both maternal groups were switched to a LFD, on which they remained until sacrifice in week 6, 15, or 25. In one study, LFD feeding was followed by HFD provision from week 15 until week 25 to elucidate the effects on offspring obesity susceptibility. In week 7, all mice were randomly allocated to a sedentary group (without running wheel) or an exercised group (with running wheel for voluntary exercise training). Additionally, treadmill endurance tests were conducted to investigate training performance and efficiency. In order to uncover regulatory mechanisms, each study was combined with a specific analytical setup, such as whole-genome microarray analysis, gene and protein expression analysis, DNA methylation analysis, and enzyme activity assays.
Results: mHFD offspring displayed reduced training efficiency and endurance capacity. This was not due to an altered skeletal muscle phenotype, i.e., changes in fiber size, number, or type. DNA methylation measurements in 6-week-old offspring showed a hypomethylation of the Nr4a1 gene in mHFD offspring, leading to increased gene expression. Since Nr4a1 plays an important role in the regulation of skeletal muscle energy metabolism and early exercise adaptation, this could affect offspring training efficiency and exercise performance in later life.
Investigation of the acute response to exercise showed that mHFD offspring displayed reduced gene expression of vascularization markers (e.g., Hif1a, Vegfb), pointing towards reduced angiogenesis, which could contribute to their reduced endurance capacity. Furthermore, impaired skeletal muscle glucose handling during the acute exercise bout was evidenced by higher blood glucose levels, lower GLUT4 translocation, and diminished lactate dehydrogenase activity in mHFD offspring immediately after the endurance test. This points towards a disturbed use of glucose as a substrate during endurance exercise. Prolonged HFD feeding during adulthood increased fat mass gain in mHFD offspring compared with offspring of low-fat-fed mothers and also reduced their insulin sensitivity, pointing towards higher obesity and diabetes susceptibility despite exercise training. Consequently, mHFD reduces offspring responsiveness to the beneficial effects of voluntary exercise training.
Conclusion: The results of this PhD thesis demonstrate that mHFD consumption impairs the offspring's training efficiency and endurance capacity and reduces the beneficial effects of exercise on the development of diet-induced obesity and insulin resistance in the offspring.
This might be due to changes in the skeletal muscle epigenetic profile and/or impaired skeletal muscle angiogenesis and glucose utilization during an acute exercise bout, which could contribute to a disturbed adaptive response to exercise training.
The topic of this thesis is semantic search in the context of today's information management systems. These systems include intranets, Web 3.0 applications, and many web portals that contain information in heterogeneous formats and structures. On the one hand they hold data in structured form, and on the other hand documents whose content is related to this data. These documents, however, are usually only partially structured or entirely unstructured. Travel portals, for example, describe the period, the destination, and the price of a trip as structured data, while providing further information, such as descriptions of the hotel, the destination, and excursions, in unstructured form.
The focus of today's semantic search engines is on finding knowledge either in structured form, also called fact retrieval, or in semi- or unstructured form, commonly referred to as semantic document retrieval. A few search engines attempt to close the gap between these two approaches. Although they search structured and unstructured data simultaneously, they either evaluate them largely independently of one another or severely restrict the search capabilities, for example by supporting only certain question patterns. As a result, the information available in the system is not fully exploited, and connections between individual contents of the respective information systems, as well as complementary information, never reach the user.
To close this gap, this thesis develops and investigates a new hybrid semantic search approach that combines structured and semi- or unstructured content throughout the entire search process. This approach not only finds both facts and documents; it also exploits the relationships between the differently structured data in every phase of the search and incorporates them into the search results. If the answer to a query is not available entirely in structured form, as facts, or in unstructured form, as documents, the approach delivers a combination of the two. Considering differently structured content throughout the entire search process, however, places special demands on the search engine. It must be able to search facts and documents in dependence on one another, to combine them, and to rank the differently structured results appropriately. Furthermore, the complexity of the data must not be passed on to the end users. Rather, the presentation of the content must be understandable and easy to interpret, both when formulating queries and when presenting results.
The central research question of this thesis is whether, on a given data basis, a hybrid approach can answer search queries better than semantic document retrieval and fact retrieval on their own, or than a search that does not combine these approaches within the search process. The evaluations conducted from both a system and a user perspective show that the hybrid semantic search solution developed in this thesis delivers better answers than the aforementioned methods by combining structured and unstructured content in the search process, and thus offers advantages over previous approaches. A user survey makes clear that hybrid semantic search is perceived as understandable and is preferred for heterogeneously structured data sets.
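The core idea of combining fact retrieval and document retrieval in a single ranked result list can be illustrated with a minimal sketch. All data, names, and the linear score combination below are illustrative assumptions, not the system actually developed in the thesis:

```python
# Minimal sketch of hybrid semantic search: structured facts and
# unstructured documents are scored against the same query and merged
# into one ranked result list. All data and the scoring scheme are
# illustrative assumptions, not the thesis's actual implementation.

FACTS = [  # structured data as (subject, predicate, object) triples
    ("Hotel Sol", "located_in", "Mallorca"),
    ("Hotel Sol", "price_per_night", "120 EUR"),
]
DOCS = {  # unstructured descriptions related to the structured data
    "d1": "Hotel Sol offers sea-view rooms and excursions in Mallorca.",
    "d2": "Winter trips to the Alps with guided ski tours.",
}

def overlap_score(query_terms, text):
    """Fraction of query terms occurring in the text (toy relevance)."""
    text = text.lower()
    return sum(term in text for term in query_terms) / len(query_terms)

def hybrid_search(query, alpha=0.5):
    """Rank facts and documents together; alpha weights the fact side."""
    terms = query.lower().split()
    results = []
    for triple in FACTS:
        score = alpha * overlap_score(terms, " ".join(triple))
        if score > 0:
            results.append((score, "fact", triple))
    for doc_id, text in DOCS.items():
        score = (1 - alpha) * overlap_score(terms, text)
        if score > 0:
            results.append((score, "doc", doc_id))
    return sorted(results, reverse=True)
```

A query like `hybrid_search("hotel mallorca")` then yields one list containing both matching triples and the matching document, mirroring the point that an answer may be a combination of facts and documents.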
Introduction: Carbohydrate (CHO) and fat are the main substrates fuelling prolonged endurance exercise, with oxidation patterns regulated by several factors such as intensity, duration and mode of the activity, dietary intake pattern, muscle glycogen concentrations, gender, and training status. Exercising at intensities where fat oxidation rates are high has been shown to induce metabolic benefits in recreational and health-oriented sportsmen. The exercise intensity eliciting peak fat oxidation rates (Fatpeak) is therefore of particular interest when aiming to prescribe exercise for the purpose of fat oxidation and related metabolic effects. Although running and walking are feasible and popular among the target population, no reliable protocols are available to assess Fatpeak or the corresponding velocity (VPFO) during treadmill ergometry. Moreover, to date, it remains unclear how pre-exercise CHO availability modulates the oxidative regulation of substrates when exercise is conducted at the velocity (VIAT) corresponding to the individual anaerobic threshold (IAT), a metabolic marker representing the upper limit at which constant-load endurance exercise can be sustained, commonly used to guide athletic training and in performance diagnostics. The research objectives of the current thesis were therefore: 1) to assess the reliability and day-to-day variability of VPFO and Fatpeak during treadmill ergometry running; and 2) to assess the impact of high-CHO (HC) vs. low-CHO (LC) diets (on the LC day, the low-CHO diet was combined with a glycogen-depleting exercise) on the oxidative regulation of CHO and fat during exercise at VIAT. Methods: Research objective 1: Sixteen recreational athletes (f=7, m=9; 25 ± 3 y; 1.76 ± 0.09 m; 68.3 ± 13.7 kg; 23.1 ± 2.9 kg/m²) performed 2 different running protocols on 3 different days with standardized nutrition the day before testing.
On day 1, peak oxygen uptake (VO2peak) and the velocities at the aerobic threshold (VLT) and at a respiratory exchange ratio (RER) of 1.00 (VRER) were assessed. On days 2 and 3, subjects ran an identical submaximal incremental test (Fatpeak test) composed of a 10 min warm-up (70% VLT) followed by 5 stages of 6 min with equal increments (stage 1 = VLT, stage 5 = VRER). Breath-by-breath gas exchange data were measured continuously and used to determine fat oxidation rates. A third-order polynomial function was used to identify VPFO and subsequently Fatpeak. The reproducibility and variability of the variables were verified with the intraclass correlation coefficient (ICC), Pearson's correlation coefficient, the coefficient of variation (CV), and the mean differences (bias) ± 95% limits of agreement (LoA). Research objective 2: Sixteen recreational runners (m=8, f=8; 28 ± 3 y; 1.76 ± 0.09 m; 72 ± 13 kg; 23 ± 2 kg/m²) performed 3 different running protocols, each on a different day. On day 1, a maximal stepwise incremental test was implemented to assess the IAT and VIAT. On days 2 and 3, participants ran a constant-pace bout (30 min) at VIAT, combined with randomly assigned HC (7 g/kg/d) or LC (3 g/kg/d) diets for the 24 h before testing. Breath-by-breath gas exchange data were measured continuously and used to determine substrate oxidation. Dietary data and differences in substrate oxidation were analyzed with a paired t-test. A two-way ANOVA tested the diet × gender interaction (α = 0.05). Results: Research objective 1: ICC, Pearson's correlation, and CV for VPFO and Fatpeak were 0.98, 0.97, 5.0%; and 0.90, 0.81, 7.0%, respectively. Bias ± 95% LoA was -0.3 ± 0.9 km/h for VPFO and -2 ± 8% of VO2peak for Fatpeak. Research objective 2: Overall, the IAT and VIAT were 2.74 ± 0.39 mmol/l and 11.1 ± 1.4 km/h, respectively. CHO oxidation was 3.45 ± 0.08 and 2.90 ± 0.07 g/min during the HC and LC bouts, respectively (P < 0.05).
Likewise, fat oxidation was 0.13 ± 0.03 and 0.36 ± 0.03 g/min (P < 0.05). Females had 14% (P < 0.05) and 12% (P > 0.05) greater fat oxidation than males during the HC and LC bouts, respectively. Conclusions: Research objective 1: In summary, the relative and absolute reliability indicators for VPFO and Fatpeak were excellent. The observed LoA may now serve as a basis for future training prescriptions, although fat oxidation rates during prolonged exercise bouts at this intensity still need to be investigated. Research objective 2: Twenty-four hours of high CHO consumption results in concurrently higher CHO oxidation rates and overall utilization, whereas maintaining low systemic CHO availability significantly increases the contribution of fat to the overall energy metabolism. The observed gender differences underline the necessity of individualized dietary planning before exercising at such intensities. Ultimately, future research should establish how these findings can be extrapolated to training and competitive situations, providing trainers and nutritionists with improved data from which to derive training prescriptions.
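The third-order polynomial step used to locate VPFO can be sketched as follows. The velocity and fat-oxidation data points are invented for illustration; in the thesis they were derived from breath-by-breath gas exchange during the five test stages:

```python
import numpy as np

# Sketch of the Fatpeak-test evaluation: fit a third-order polynomial to
# fat oxidation rate vs. running velocity and take the velocity at the
# fitted maximum as VPFO. The data points below are invented.

velocity = np.array([7.0, 8.0, 9.0, 10.0, 11.0])    # km/h (stages 1-5)
fat_ox = np.array([0.28, 0.35, 0.38, 0.33, 0.20])   # g/min per stage

coeffs = np.polyfit(velocity, fat_ox, deg=3)        # third-order fit
poly = np.poly1d(coeffs)

# Evaluate the fitted curve on a fine grid inside the tested range and
# take the argmax; Fatpeak would then be the relative intensity
# (%VO2peak) measured at this velocity.
grid = np.linspace(velocity.min(), velocity.max(), 1001)
vpfo = grid[np.argmax(poly(grid))]
peak_rate = poly(vpfo)
```

Restricting the search to the tested velocity range avoids picking up the spurious extrema a cubic can have outside the measured stages.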
Obesity is associated with a multitude of serious secondary diseases. Weight reduction improves the metabolic consequences of obesity. It is known that the majority of obese individuals regain a large part of the lost weight in the months following weight reduction. Nevertheless, there is high variability in the long-term success of weight reduction. The successful maintenance of the reduced body weight by some individuals raises the question of which factors influence weight maintenance, with the aim of identifying a starting point for possible therapeutic strategies.
In the present work, a controlled, randomized study with 143 overweight participants investigated whether, after a three-month weight reduction, a twelve-month weight-stabilizing lifestyle intervention influences the changes in neuroendocrine regulatory circuits and thus long-term weight maintenance over a period of eighteen months.
Comparing the two treatment groups, the primary finding was that the multimodal lifestyle intervention led to weight stabilization over the twelve-month treatment phase, whereas the control group showed a moderate weight gain. Consequently, at the end of the intervention phase, the BMI of the participants in the control group was higher than that of the intervention group (34.1±6.0 kg·m⁻² vs. 32.4±5.7 kg·m⁻²; p<0.01).
During the follow-up period, the intervention group was characterized by significantly greater weight regain compared with the control group, so that as early as six months after the intervention there was no longer any difference in BMI between the two treatment groups.
Regarding the hormonal changes induced by weight reduction, a perturbation of the endocrine system was observed, as expected. However, no difference in the hormones investigated could be found between the two treatment groups.
Over the course of the weight loss and the subsequent study phases, three different patterns of hormonal change emerged. After additional adjustment for the BMI at the respective measurement time point, a change persisting over the study period was found for TSH levels (p<0.05), the thyroid hormones (p<0.001), and IGF-1 levels (p<0.001).
Finally, independent of treatment group, it was examined whether the hormone levels after weight reduction, or the relative hormonal change during weight reduction, are predictive of success in the weight maintenance phase. For the majority of the hormonal parameters, no effect on the long-term development of weight regain was found. However, it could be shown that a smaller decrease in 24-h urinary metanephrine excretion during the weight loss phase was associated with better weight maintenance over the eighteen-month study period (standardized beta = -0.365; r² = 0.133; p<0.01). The other hormonal axes showed no demonstrable effect.
Throughout the various socio-historical tensions undergone by the Latin American modernit(ies), both literary-historical production and reflection on the topic (regional, national, supranational, and/or continental) have been part of the critical and intellectual itinerary of very significant political and cultural projects, whose particular development allows the analysis of the socio-discursive dynamics fulfilled by literary historiography in the search for a historical consciousness and representation of the esthetic-literary processes.
In present Central American literary and cultural studies, academic thought on the development of literary historiography has given rise to works whose main objects of study involve a significant corpus of national literary histories published mainly in the 20th century, between the forties and the eighties. Although these studies differ greatly from the vast academic production undertaken by literary critics in the last two decades, the field of research on literary historiography in Central America has made a theoretical-methodological effort, from the eighties until now, to analyze local literary-historical production.
However, this effort was carried out more systematically in the last five years of the 20th century, within the Central American context of democratic transition and post-war reconstruction, when a national, supranational, and transnational model of literary history was promoted. This gave rise to the creation and launch of the project Hacia una Historia de las Literaturas Centroamericanas (HILCAS) at the beginning of the new millennium.
Given the ideological relevance that literary historiography has had in the process of the historical formation of the Hispano-American states, a philological tradition that has also had an impact on the various Central American nation states, the emergence of this historiographic project marks an important rupture with the national paradigms. It also manifests a movement of transition and tension with regard to the new cultural, comparative, and transareal dynamics, which seek to understand the geographical, transnational, medial, and transdisciplinary movements within which the esthetic-narrative processes and the idea and formation of a critical Central American subject take shape.
Taking this aspect into account, our study puts forward as its main hypothesis that the historiographic thought developed as a consequence of the project Hacia una Historia de las Literaturas Centroamericanas (HILCAS) constitutes a socio-discursive practice reflecting the formation of a historical-literary consciousness and of a critical intellectual subject, an emergence that takes place between the mid-nineties and the first decade of the 21st century.
In this respect, and on the basis of the general purpose of this investigation indicated above, the main justification for our object of study consists in making Central American historiographic reflection visible as part of the epistemological and cultural changes shown by Latin American historiographic thought, from which a new way of conceptualizing space, coexistence, and historical consciousness emerges with regard to esthetic-literary practices and processes.
Based on the field and hypothesis stated before, the general purpose of this research is framed by the socio-discursive dimension fulfilled by the Latin American literary historiography, and it aims to analyze the Central American historical-literary thought developed between the second half of the nineties and the beginning of the first decade of the 21st century.
The first main goal of this thesis is to develop a concept of approximate differentiability of higher order for subsets of Euclidean space that allows us to characterize higher order rectifiable sets, extending well-known facts for functions. We emphasize that for every subset A of Euclidean space and every integer k ≥ 2 we introduce the approximate differential of order k of A and prove that it is a Borel map whose domain is a (possibly empty) Borel set. This concept could be helpful in dealing with higher order rectifiable sets in applications.
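For orientation, the classical function-level notion that this set-level concept extends can be stated as follows (this is the standard definition of higher order approximate differentiability, not a formulation specific to the thesis):

```latex
% Approximate differentiability of order k for a function (classical notion).
% The thesis transfers this idea from functions f to subsets A of R^n.
A function $f\colon \mathbb{R}^n \to \mathbb{R}$ is approximately
differentiable of order $k$ at $a$ if there exists a polynomial $P$
with $\deg P \le k$ and $P(a) = f(a)$ such that
\[
  \operatorname*{ap\,lim}_{x \to a}
  \frac{\lvert f(x) - P(x) \rvert}{\lvert x - a \rvert^{k}} = 0,
\]
where the approximate limit means that for every $\varepsilon > 0$ the set
\[
  \{ x : \lvert f(x) - P(x) \rvert > \varepsilon \lvert x - a \rvert^{k} \}
\]
has Lebesgue density $0$ at $a$.
```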
The other goal is to extend to general closed sets a well-known theorem of Alberti on the second order rectifiability properties of the boundaries of convex bodies. Alberti's theorem provides a stratification of second order rectifiable subsets of the boundary of a convex body based on the dimension of the (convex) normal cone. Considering a suitable generalization of this normal cone for general closed subsets of Euclidean space and employing results from the first part, we prove that the same stratification exists for every closed set.
The valorization of carbohydrates is one of the most promising fields in green chemistry, as it enables the production of bulk chemicals and fuels from renewable and abundant resources instead of further exploiting fossil feedstocks. The focus of this thesis is the conversion of fructose using dehydration and hydrodeoxygenation reactions. The main goal is to find a simple continuous process, encompassing the dissolution of the sugar in a green solvent and its conversion over a solid acid as well as over a metal@tungsten carbide catalyst.
At the beginning of this thesis, solid acid catalysts are synthesized from carbohydrate materials such as glucose and starch at high temperatures (up to 600 °C). Additionally, a third carbon is synthesized using an activation method based on Ca(OH)2. After carbonization and subsequent sulfonation with fuming sulfuric acid, the three resulting catalysts are characterized together with sulfonated carbon black and Amberlyst 15 as references. To test all solid acid catalysts in reaction, a 250 mm x 4.6 mm stainless steel column is used as a continuous fixed-bed reactor. The temperature (110 °C to 250 °C) and residence time (2 to 30 minutes) are varied, and a direct relationship between contact time and selectivity is determined. Both the reaction mechanism and the product distribution show a dehydration step of fructose towards 5-hydroxymethylfurfural (HMF). These furan-ring molecules are considered “sleeping giants” because they can be used as fuels but also upgraded to chemicals such as terephthalic acid or p-xylene. Consecutive reactions produce levulinic acid as well as condensation products with ethanol and formic acid. The activated carbon additionally shows a 2 % yield of 2,5-dimethylfuran (DMF), pointing towards the extraordinary properties of this catalyst. Although hydrogenation reactions normally require a metal catalyst, a transfer hydrogenation (with formic acid as the hydrogen donor) is observed; the active catalyst is therefore the carbon itself, which activates hydrogen on its surface, a phenomenon that has only rarely been observed so far. Expensive noble metals are currently the materials of choice for hydrogenation reactions, and cheaper alternatives are needed.
Since Levy and Boudart postulated an electronic structure of tungsten carbide (WC) similar to that of platinum, research has focused on the replacement of Pt. The production of nano-sized tungsten carbide particles (7.5 ± 2.5 nm, 70 m2 g-1) is enabled by the so-called “urea glass route”, and their catalytic performance is compared to commercial material. It is shown that the activity depends strongly on the size of the particles as well as on the surface area. Nano-sized tungsten carbide shows activity for hydrogenation reactions under mild conditions (maximum 150 °C, 30 bar). This material therefore opens up new possibilities for replacing rare and expensive platinum with tungsten carbide based catalysts.
Additionally, different metal nanoparticles of palladium, copper, and nickel are deposited on top of WC to further promote its reactivity. The nickel nanoparticles are strongly bound to WC and show the best activity as well as selectivity for upgrading HMF via hydrodeoxygenation. Ni@WC does not leach and shows very good hydrodeoxygenation performance, with DMF yields of up to 90 percent. Copper@WC shows poor activity, and Palladium@WC enables undesired consecutive reactions that hydrogenate the furan ring system.
To enable the upgrade of fructose to DMF directly in a continuous system, the commercial H-Cube Pro™ hydrogenation system is customized with a second reaction column. A 250 mm x 4.6 mm stainless steel reactor column is connected upstream of the hydrogen insertion, enabling the dehydration of fructose to HMF derivatives before these products are pumped into the second column for hydrogenation. The overall residence time in the two-column reactor system is 14 minutes. The overall result is an almost full conversion, with a yield of 38.5 % DMF and 47 % ethyl levulinate (EL). The main disadvantage is the formation of higher-mass products, so-called humins, which deposit on top of the catalysts and block their active sites.
In general, a two-column system entails higher investment and maintenance costs than a single-column catalytic approach. The last part of the thesis therefore aims to develop a catalyst that can both dehydrate and hydrodeoxygenate the reactants. The activated carbon, however, already shows hydrodeoxygenation activity without any metal present and thus offers itself as an alternative that overcomes the temperature instability of Amberlyst 15 (max. 120 °C) for a combined DMF production directly from fructose. The DMF yield is thereby increased from 2 % to 12 % in a single mixed continuous column.
To scale up the entire one-column approach, an 800 mm x 28.5 mm inner diameter column was designed and manufactured. The assembled flow reactor system can be run at a maximum flow rate of 50 mL min-1, withstands pressures of up to 100 bar, and can be heated to around 500 °C. The tubing and connections, as well as the devices used, were planned to be safe and easy to use. The scaled-up approach offers a reaction column 120 times larger (510 mL) than the first extension of the commercial system. With pump rates ranging between 1 and 1000 mL min-1, this further extension makes it possible to use the approach in pilot plant applications.
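The column dimensions and residence times quoted above can be checked with a short plug-flow estimate. This sketch uses the empty-column volume and neglects the porosity of the packed bed, so real residence times would be somewhat shorter:

```python
import math

# Plug-flow residence time estimate for the continuous columns described
# above: tau = V / Q. Empty-column volume is used; bed porosity is
# neglected for simplicity, so this is an upper-bound illustration.

def column_volume_ml(length_mm: float, inner_diameter_mm: float) -> float:
    """Empty-column volume in mL from dimensions given in mm."""
    radius_cm = inner_diameter_mm / 10 / 2
    length_cm = length_mm / 10
    return math.pi * radius_cm ** 2 * length_cm  # 1 cm^3 = 1 mL

def residence_time_min(volume_ml: float, flow_ml_min: float) -> float:
    """Mean residence time in minutes at a given flow rate."""
    return volume_ml / flow_ml_min

v_small = column_volume_ml(250, 4.6)    # analytical-scale column, ~4.2 mL
v_large = column_volume_ml(800, 28.5)   # scaled-up column, ~510 mL
tau_max_flow = residence_time_min(v_large, 50)  # ~10 min at 50 mL/min
```

The computed 510 mL for the scaled-up column and the roughly 120-fold volume ratio match the figures stated in the abstract.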
Fabrication of Anisotropic Colloids via Template-Assisted Assembly and Contact Printing
(2017)
This thesis dealt with new concepts for producing anisotropic particle systems by arranging functionalized particles with the aid of established methods such as template-assisted particle assembly and microcontact printing.
The first subproject concerned the controlled fabrication of wrinkle structures in the micro- to nanometer range. Wrinkle structures arise from the relaxation of a system consisting of two superimposed layers of different elasticity. In this case, wrinkles were produced on an elastic PDMS substrate by generating an oxide layer on the substrate surface via plasma treatment. The thickness of the oxide layer, which could be adjusted via parameters such as treatment time, process power, partial pressure of the plasma-active gas, degree of crosslinking, deformation, and substrate thickness, determined the wavelength and amplitude of the wrinkles.
The second subproject aimed at creating complex colloidal structures based on supramolecular interactions. For this purpose, template-assisted particle assembly was employed at both solid-liquid and liquid-liquid interfaces. For the former, the wrinkle structures produced in the first subproject served as templates; for the latter, Pickering emulsions were used. In the first case, various modified silica particles and magnetite nanoparticles, varying in size and surface functionality (cyclodextrin, azobenzene, and arylazopyrazole groups), were arranged in wrinkle structures. The arrangement depended not only on the chosen method but also on factors such as the particle concentration, the surface charge, and the ratio of particle size to wrinkle geometry.
The combination of cyclodextrin (CD)- and arylazopyrazole-modified particles enabled, on the basis of host-guest interactions between the particle types and template-directed assembly, the formation of complex, structured shapes on the scale of several micrometers. This system can serve as a basis for the production of various Janus particles, and the directed crosslinking of two particle systems into larger aggregates lays the foundation for novel functional materials. Besides the assembly at solid-liquid interfaces, it was also demonstrated that azobenzene-functionalized silica particles are able to stabilize Pickering emulsions over several months. The stability and size of the emulsion phase can be controlled via parameters such as the volume ratio and the concentration. CD-functionalized silica particles, in contrast, showed no interfacial activity, whereas CD-based polymers were able to form stable emulsions by forming inclusion complexes with the hydrophobic molecules of the oil phase. The combination of two different particle systems, by contrast, showed no effect or a destabilizing effect on emulsion formation.
In the last subproject, the production of multivalent silica particles by microcontact printing was investigated. The wrinkle structures were used as stamps, which made it possible to control the patch geometry via the wavelength of the wrinkle structures. The ink was the positively charged polyelectrolyte polyethyleneimine (PEI), which adheres to unmodified silica particles via electrostatic interactions. In contrast to printing with flat stamps, it was first noticed that the ink did not spread evenly over the entire substrate surface of the wrinkled stamps but was located mainly in the wrinkle valleys. The printing process was thus ultimately no longer classical microcontact printing but gravure printing. With this gravure printing approach, however, it was possible to modify one or both particle hemispheres simultaneously and with different functionalities, thereby generating multivalent silica particles. Depending on the wavelength of the wrinkles, two to eight patches could be printed on one particle hemisphere. The patch geometry, i.e., the size and shape of the patches, also depended on the ink concentration on the stamp, the solvent used to detach the particles after printing, and the stamp hardness. Since the stamp hardness is not constant across different wavelengths, owing to the varying thickness of the oxide layer, casts of the wrinkled substrates were usually used for printing; this also ensured comparability at varying wavelengths. Besides the successful demonstration of modification by gravure printing, it was also shown that negatively charged nanoparticles can be deposited on the particle surface via complexation with PEI.
In the present work it could be shown that the two amphiphiles used, with cholesterol as the hydrophobic block, are good templates for the mineralization of calcium phosphate at the water/air interface. Using infrared reflection-absorption spectroscopy (IRRAS), X-ray photoelectron spectroscopy (XPS), energy-dispersive X-ray spectroscopy (EDXS), selected-area electron diffraction (SAED), and high-resolution transmission electron microscopy (HRTEM), the successful mineralization of calcium phosphate at the water/air interface was demonstrated for both amphiphiles. It could also be shown that the phase behavior of the two amphiphiles and the crystal phases formed during the mineralization of calcium phosphate are not identical. The two amphiphiles thus exert different influences on the course of the mineralization.
For CHOL-HEM, octacalcium phosphate (OCP) was detected as the only crystal phase by XPS, SAED, HRTEM, and EDXS after both 3 h and 5 h. A-CHOL, by contrast, initially showed after 1 h of calcium phosphate mineralization a precursor phase that could not be unambiguously identified as amorphous calcium phosphate, brushite (DCPD), or OCP. After 3 h and 5 h, this phase transformed into a mixture of OCP and a small amount of hydroxyapatite (HAP).
The conclusion is that CHOL-HEM is able to stabilize the OCP formed during the mineralization. This presumably occurs through preferential adsorption of the amphiphile on the OCP surface in [100] orientation, which suppresses cleavage along the c-axis and prevents hydrolysis to HAP.
A-CHOL, by contrast, is sterically more demanding and, probably because of its size, cannot adsorb on the OCP crystal surface as well as CHOL-HEM. CHOL-HEM can therefore suppress the hydrolysis of OCP to HAP better than A-CHOL. However, since only little HAP is found for A-CHOL even after a mineralization time of 5 h, a stabilizing effect on the OCP crystals is conceivable here as well. Additional control experiments are needed to make a definite statement. One possibility would be to run the mineralization experiments over a longer period. These could show whether CHOL-HEM completely suppresses the hydrolysis of OCP to HAP, and whether for A-CHOL the OCP is further converted to HAP or a mixture of both crystal phases persists.
To compare mineralization at the water/air interface with mineralization in bulk solution, additional mineralization experiments were carried out in bulk solution. Nitrilotriacetic acid (NTA) and ethylenediaminetetraacetic acid (EDTA) were used as mineralization additives, since NTA, among other things, resembles the structure of the hydrophilic head group of A-CHOL. It was shown that a comparison of mineralization at the interface with mineralization in bulk solution is not straightforward. In bulk solution, DCPD is formed at low pH values and HAP at higher pH values. This was demonstrated by X-ray powder diffraction measurements and corroborated by infrared spectroscopy. The formation of OCP, as observed at the water/air interface, was not observed.
It was also shown that the two additives NTA and EDTA influence the course of mineralization differently. The morphologies of the DCPD formed differ, and, for example, in the presence of 10 and 15 mM NTA, HAP was detected alongside DCPD at an initial pH of 7.
Since our focus lies specifically on the mineralization of calcium phosphate at the water/air interface, follow-up experiments such as GIXD measurements could be performed. These would make it possible to obtain an overview of the crystal phases formed after different reaction times directly on the trough.
It was further shown that even simple amphiphiles are able to control the mineralization of calcium phosphate. Amphiphiles with cholesterol as the hydrophobic block evidently form particularly stable monolayers at the water/air interface. An investigation of the influence of similar amphiphiles with different hydrophilic head groups on the mineralization behavior of calcium phosphate would therefore be of considerable interest.
Nanolenses are linear chains of differently sized metal nanoparticles, which can theoretically provide extremely high field enhancements. Their complex structure renders their synthesis challenging and has so far hampered closer analyses. Here, the DNA origami technique was used to self-assemble DNA-coated 10 nm, 20 nm, and 60 nm gold or silver nanoparticles into gold or silver nanolenses. Three different geometrical arrangements of gold nanolenses were assembled, and for each of the three, sets of single gold nanolenses were investigated in detail by atomic force microscopy, scanning electron microscopy, dark-field scattering, and Raman spectroscopy. The surface-enhanced Raman scattering (SERS) capabilities of the single nanolenses were assessed by selectively labelling the 10 nm gold nanoparticle with dye molecules. The experimental data were complemented by finite-difference time-domain simulations. For those gold nanolenses which showed the strongest field enhancement, SERS signals from the two internal gaps were compared by selectively placing probe dyes on the 20 nm or 60 nm gold particles. The highest enhancement was found for the gap between the 20 nm and 10 nm nanoparticles, which is indicative of a cascaded field enhancement. The protein streptavidin was labelled with alkyne groups and served as a biological model analyte, bound between the 20 nm and 10 nm particles of silver nanolenses. In this way, a SERS signal from a single streptavidin molecule could be detected. Background peaks observed in SERS measurements on single silver nanolenses could be attributed to amorphous carbon, which was shown to be generated in situ.
The subject of this dissertation is the presentation of fashion in dedicated special exhibitions, which have been staged in increasing numbers since the 1990s in museums and museum-like contexts. It examines how fashioned bodies and vestimentary artifacts are displayed in these exhibitions and which aesthetic experiences the respective constellation of vestimentary object and means of staging can afford the viewers. The focus lies on the tension between the visual imperative of museal display practices and the multisensory qualities of fashion, above all the tactile ones arising from the immediate contact between body and dress.
The central thesis is that the tactile dimension of fashion can reveal itself in many exhibition stagings despite the prohibition on touching. That is, contrary to frequently repeated claims, 'the body', wearing, and movement have not per se or completely vanished from garments once they are musealized and exhibited. Rather, the bodily and tactile aspects, such as touching, wearing, and moving, are shifted into visual forms of representation. Depending on the means of presentation used, tactile qualities of the vestimentary exhibits can be perceived by visitors in varying degrees, either visually or literally felt.
Using concrete examples, the study examines, on the one hand, the relationship between the exhibited garment and the means of presentation in the displays. The focus is on the following means by which the vestimentary exhibits are put on show: vitrines, pedestals, surrogate bodies such as mannequins, optical aids such as magnifying glasses, visual media, or (moving) installations. On the other hand, it analyzes which effects the arrangements can achieve or prevent with regard to possible aesthetic experiences, that is, to see or literally feel, as a visitor, the tactile, haptic, and kinesthetic qualities of the exhibits. Whether as identification, projection, or haptic vision, these are aesthetic experiences that draw on the viewers' fashion competences and in which the visual and the tactile often overlap.
The study discusses a neglected, if not unwanted, mode of reception, which actors in this specific debate dismiss, for instance, as consumptive viewing. The more differentiated perspective I propose is at the same time a critique of the existing discourse and its narrow, partly elitist understanding of museum, education, and knowledge, by which actors and institutions set themselves apart. The specialized discourse on musealized and exhibited fashion is, moreover, exemplary of the broader discussion of what the museum, understood as an institution, can and should be, and whether (and if so, how) it can still be clearly distinguished from other places and spaces at all.
According to the classical plume hypothesis, mantle plumes are localized upwellings of hot, buoyant material in the Earth’s mantle. They have a typical mushroom shape, consisting of a large plume head, which is associated with the formation of voluminous flood basalts (a Large Igneous Province), and a narrow plume tail, which generates a linear, age-progressive chain of volcanic edifices (a hotspot track) as the tectonic plate migrates over the relatively stationary plume. Both plume heads and tails reshape large areas of the Earth’s surface over many tens of millions of years.
However, not every plume has left an exemplary record that supports the classical hypothesis. The main objective of this thesis is therefore to study how specific hotspots have created the crustal thickness patterns attributed to their volcanic activities. Using regional geodynamic models, the main chapters of this thesis address the challenge of deciphering the three individual (and increasingly complex) Réunion, Iceland, and Kerguelen hotspot histories, focussing especially on the interactions between the respective plume and nearby spreading ridges.
For this purpose, the mantle convection code ASPECT is used to set up three-dimensional numerical models, which consider the specific local surroundings of each plume by prescribing time-dependent boundary conditions for temperature and mantle flow. Combining reconstructed plate boundaries and plate motions, large-scale global flow velocities and an inhomogeneous lithosphere thickness distribution together with a dehydration rheology represents a novel setup for regional convection models.
The model results show the crustal thickness pattern produced by the plume, which is compared to present-day topographic structures, crustal thickness estimates, and age determinations of volcanic provinces associated with hotspot activity. Altogether, the model results agree well with surface observations. Moreover, the dynamic development of the plumes in the models provides explanations for the generation of smaller, yet characteristic, volcanic features that were previously unexplained. Considering the present-day state of a model as a prediction for the current temperature distribution in the mantle, it can be compared not only to observations at the surface but also to structures in the Earth’s interior as imaged by seismic tomography.
More precisely, in the case of the Réunion hotspot, the model demonstrates how the distinctive gap between the Maldives and Chagos is generated by the combination of the ridge geometry and plume-ridge interaction. Further, the Rodrigues Ridge is formed as the surface expression of a long-distance sublithospheric flow channel between the upwelling plume and the closest ridge segment, confirming the long-standing hypothesis of Morgan (1978) for the first time in a dynamic context. The Réunion plume has been studied in connection with the seismological RHUM-RUM project, which has recently provided new seismic tomography images that yield an excellent match with the geodynamic model.
Regarding the Iceland plume, the numerical model shows how plume material may have accumulated in an east-west trending corridor of thin lithosphere across Greenland and resulted in simultaneous melt generation west and east of Greenland. This provides an explanation for the extremely widespread volcanic material attributed to magma production of the Iceland hotspot and demonstrates that the model setup is also able to explain more complicated hotspot histories. The Iceland model results also agree well with newly derived seismic tomographic images.
The Kerguelen hotspot has an extremely complex history, and previous studies concluded that the plume might be dismembered or influenced by solitary waves in its conduit to produce the reconstructed variable melt production rate. The geodynamic model, however, shows that a constant plume influx can result in a variable magma production rate if the plume interacts with nearby mid-ocean ridges. Moreover, the Ninetyeast Ridge in the model is created by on-ridge activities while the Kerguelen plume was located beneath the Australian plate. This also contrasts with earlier studies, which described the Ninetyeast Ridge as the result of the Indian plate passing over the plume. Furthermore, the Amsterdam-Saint Paul Plateau in the model is the result of plume material flowing from the upwelling toward the Southeast Indian Ridge, whereas previous geochemical studies attributed that volcanic province to a separate deep plume.
In summary, the three case studies presented in this thesis consistently highlight the importance of plume-ridge interaction in order to reconstruct the overall volcanic hotspot record as well as specific smaller features attributed to a certain hotspot. They also demonstrate that it is not necessary to attribute highly complicated properties to a specific plume in order to account for complex observations. Thus, this thesis contributes to the general understanding of plume dynamics and extends the very specific knowledge about the Réunion, Iceland, and Kerguelen mantle plumes.
The field of nanophotonics focuses on the interaction between electromagnetic radiation and matter on the nanometer scale. The elements of nanoscale photonic devices can transfer excitation energy non-radiatively from an excited donor molecule to an acceptor molecule by Förster resonance energy transfer (FRET). The efficiency of this energy transfer depends strongly on the donor-acceptor distance. Hence, in these nanoscale photonic devices it is of great importance to have good control over the spatial assembly of the fluorophores used. Based on molecular self-assembly processes, various nanostructures can be produced. Here, DNA nanotechnology, and especially the DNA origami technique, are promising self-assembly methods. Using DNA origami nanostructures, different fluorophores can be introduced with high local control to create a variety of nanoscale photonic objects. The applications of such nanostructures range from photonic wires and logic gates for molecular computing to artificial light harvesting systems for artificial photosynthesis.
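The distance dependence invoked above is usually expressed through the standard Förster relation; a minimal sketch follows, where the Förster radius R0 of 5 nm is an illustrative, fluorophore-pair-dependent value and not a quantity taken from this thesis.

```python
# Distance dependence of FRET efficiency via the standard Förster
# relation E = 1 / (1 + (r/R0)^6). R0 (the Förster radius, at which
# E = 0.5) is specific to each donor-acceptor pair; 5 nm is only a
# typical illustrative value.

def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Return the FRET efficiency for donor-acceptor distance r_nm."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (2.5, 5.0, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.3f}")
```

The sixth-power falloff is what makes FRET so sensitive to nanometer-scale placement: halving the distance below R0 pushes the efficiency close to 1, while doubling it drops the efficiency to a few percent.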
In the present cumulative doctoral thesis, different FRET systems on DNA origami structures have been designed and thoroughly analyzed. Firstly, the formation of guanine (G) quadruplex structures from G rich DNA sequences has been studied based on a two-color FRET system (Fluorescein (FAM)/Cyanine3 (Cy3)). Here, the influences of different cations (Na+ and K+), of the DNA origami structure and of the DNA sequence on the G-quadruplex formation have been analyzed. In this study, an ion-selective K+ sensing scheme based on the G-quadruplex formation on DNA origami structures has been developed. Subsequently, the reversibility of the G-quadruplex formation on DNA origami structures has been evaluated. This has been done for the simple two-color FRET system which has then been advanced to a switchable photonic wire by introducing additional fluorophores (FAM/Cy3/Cyanine5 (Cy5)/IRDye®700). In the last part, the emission intensity of the acceptor molecule (Cy5) in a three-color FRET cascade has been tuned by arranging multiple donor (FAM) and transmitter (Cy3) molecules around the central acceptor molecule. In such artificial light harvesting systems, the excitation energy is absorbed by several donor and transmitter molecules followed by an energy transfer to the acceptor leading to a brighter Cy5 emission. Furthermore, the range of possible excitation wavelengths is extended by using several different fluorophores (FAM/Cy3/Cy5). In this part of the thesis, the light harvesting efficiency (antenna effect) and the FRET efficiency of different donor/transmitter/acceptor assemblies have been analyzed and the artificial light harvesting complex has been optimized in this respect.
In this work, on the one hand, first oligospiro(thio)ketal (OS(T)K)-based model systems (molecular probes) for distance-dependent measurements by Förster resonance energy transfer (FRET) were prepared and, on the other hand, sensor fluorophores, based on a DBD fluorophore and BAPTA, for measuring the intracellular calcium concentration.
For the synthesis of molecular probes for distance-dependent measurements, a variety of singly and doubly labeled OS(T)K rods were developed and investigated spectroscopically. The OS(T)K rods, so-called molecular rods, served as rigid spacers between the fluorophores. Derivatives of 6,7-dihydroxy-4-methylcoumarin (donor) and acyl-DBD (acceptor), which together form a FRET pair, were used as fluorophores. The fluorophores were functionalized such that they could be attached to the OS(T)K rod either immobile ("rigid") or mobile ("flexible"). For the synthesis of the OS(T)K rods, a series of rod building blocks of different lengths was also synthesized. In this way, a large number of different singly and doubly labeled OS(T)K rods with both "rigidly" and "flexibly" bound fluorophores were obtained. The rods were investigated spectroscopically in a variety of solvents in order to subsequently assess their behavior in vesicles, which provide a biomimetic environment. It was found that the rods incorporated successfully into the vesicle membrane and exhibited high FRET efficiencies.
Furthermore, a FRET pair was prepared that can be excited by two-photon absorption in the NIR range. It was investigated in living cells by fluorescence lifetime imaging microscopy (FLIM).
To investigate intracellular calcium, two different DBD fluorophores were linked via a short linker to the calcium chelator BAPTA. The resulting fluorophores were tested for their calcium sensitivity both in vitro and in vivo. Using FLIM, the fluorescence lifetime distributions of the fluorophores following changes in calcium concentration were detected in living cells.
Functional nanoporous carbon-based materials derived from oxocarbon-metal coordination complexes (2017)
Nanoporous carbon-based materials are of particular interest for both science and industry due to their exceptional properties, such as a large surface area, high pore volume, high electrical conductivity, and high chemical and thermal stability. Benefiting from these advantageous properties, nanoporous carbons have proved useful in various energy- and environment-related applications, including energy storage and conversion, catalysis, gas sorption, and separation technologies. The synthesis of nanoporous carbons classically involves thermal carbonization of carbon precursors (e.g., phenolic resins, polyacrylonitrile, poly(vinyl alcohol), etc.) followed by an activation step, and/or it makes use of classical hard or soft templates to obtain well-defined porous structures. However, these synthesis strategies are complicated and costly and make use of hazardous chemicals, hindering their application in large-scale production. Furthermore, control over the carbon material properties is challenging owing to the relatively unpredictable processes at the high carbonization temperatures.
In the present thesis, nanoporous carbon-based materials are prepared by the direct heat treatment of crystalline precursor materials with pre-defined properties. This synthesis strategy does not require any additional carbon sources or classical hard or soft templates. The highly stable and porous crystalline precursors are based on coordination compounds of the squarate and croconate ions with various divalent metal ions, including Zn2+, Cu2+, Ni2+, and Co2+. Here, the structural properties of the crystals can be controlled by the choice of appropriate synthesis conditions, such as the crystal aging temperature, the ligand/metal molar ratio, the metal ion, and the organic ligand system. In this context, the coordination of squarate ions to Zn2+ yields porous 3D cubic crystalline particles. The morphology of the cubes can be tuned from densely packed cubes with a smooth surface to cubes with intriguing micrometer-sized openings and voids, which evolve at the centers of the low-index faces as the crystal aging temperature is raised. By varying the molar ratio, the particle shape can be changed from truncated cubes to perfect cubes with right-angled edges.
These crystalline precursors can easily be transformed into the respective carbon-based materials by heat treatment at elevated temperatures in a nitrogen atmosphere, followed by a facile washing step. The resulting carbons are obtained in good yields and possess a hierarchical pore structure with well-organized and interconnected micro-, meso-, and macropores. Moreover, high surface areas and large pore volumes of up to 1957 m2 g-1 and 2.31 cm3 g-1, respectively, are achieved, while the macroscopic structure of the precursors is preserved throughout the entire synthesis procedure.
Owing to these advantageous properties, the resulting carbon based materials represent promising supercapacitor electrode materials for energy storage applications. This is exemplarily demonstrated by employing the 3D hierarchical porous carbon cubes derived from squarate-zinc coordination compounds as electrode material showing a specific capacitance of 133 F g-1 in H2SO4 at a scan rate of 5 mV s-1 and retaining 67% of this specific capacitance when the scan rate is increased to 200 mV s-1.
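As a rough plausibility check of the magnitudes quoted above, an ideal-capacitor approximation relates specific capacitance, scan rate, and current (i = C_sp · m · v). The sketch below uses this textbook idealization only; it is not the evaluation procedure applied in the thesis.

```python
# Ideal-capacitor approximation for cyclic voltammetry: the average
# current per gram of electrode material equals the specific
# capacitance times the scan rate. This is a back-of-the-envelope
# relation, not the thesis's data-evaluation method.

def avg_current_per_gram(c_sp_f_per_g: float, scan_rate_v_per_s: float) -> float:
    """Return current density in A/g for an ideal capacitor under a CV sweep."""
    return c_sp_f_per_g * scan_rate_v_per_s

print(avg_current_per_gram(133, 0.005))        # 133 F/g at 5 mV/s
print(avg_current_per_gram(133 * 0.67, 0.2))   # 67 % retention at 200 mV/s
```

The comparison illustrates why capacitance retention at high scan rates is a meaningful figure of merit: the demanded current grows linearly with the scan rate.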
In a further application, the porous carbon cubes derived from squarate-zinc coordination compounds are used as high surface area support material and decorated with nickel nanoparticles via an incipient wetness impregnation. The resulting composite material combines a high surface area, a hierarchical pore structure with high functionality and well-accessible pores. Moreover, owing to their regular micro-cube shape, they allow for a good packing of a fixed-bed flow reactor along with high column efficiency and a minimized pressure drop throughout the packed reactor. Therefore, the composite is employed as heterogeneous catalyst in the selective hydrogenation of 5-hydroxymethylfurfural to 2,5-dimethylfuran showing good catalytic performance and overcoming the conventional problem of column blocking.
Thinking about the rational design of 3D carbon geometries, the functions and properties of the resulting carbon-based materials can be further expanded by the rational introduction of heteroatoms (e.g. N, B, S, P, etc.) into the carbon structures in order to alter properties such as wettability, surface polarity as well as the electrochemical landscape. In this context, the use of crystalline materials based on oxocarbon-metal ion complexes can open a platform of highly functional materials for all processes that involve surface processes.
With recent advances in the area of information extraction, automatically extracting structured information from vast amounts of unstructured textual data has become an important task; capturing all of this information manually is infeasible for humans. Named entities (e.g., persons, organizations, and locations), which are crucial components of texts, are usually the subjects of structured information extracted from textual documents. Therefore, the task of named entity mining receives much attention. It consists of three major subtasks: named entity recognition, named entity linking, and relation extraction.
These three tasks build up the entire pipeline of a named entity mining system, where each of them has its own challenges and can be employed for further applications. As a fundamental task in the natural language processing domain, studies on named entity recognition have a long history, and many existing approaches produce reliable results. The task aims to extract mentions of named entities in text and to identify their types. Named entity linking has recently received much attention with the development of knowledge bases that contain rich information about entities. The goal is to disambiguate mentions of named entities and to link them to the corresponding entries in a knowledge base. Relation extraction, as the final step of named entity mining, is a highly challenging task that aims to extract semantic relations between named entities, e.g., the ownership relation between two companies.
In this thesis, we review the state-of-the-art of named entity mining domain in detail, including valuable features, techniques, evaluation methodologies, and so on. Furthermore, we present two of our approaches that focus on the named entity linking and relation extraction tasks separately.
To solve the named entity linking task, we propose the entity linking technique, BEL, which operates on a textual range of relevant terms and aggregates decisions from an ensemble of simple classifiers. Each of the classifiers operates on a randomly sampled subset of the above range. In extensive experiments on hand-labeled and benchmark datasets, our approach outperformed state-of-the-art entity linking techniques, both in terms of quality and efficiency.
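The ensemble idea described above can be sketched roughly as follows. The candidate entity descriptions, context terms, and overlap-based scoring are invented toy stand-ins, not BEL's actual features or classifiers; only the structure (many simple voters, each seeing a random subset of the relevant terms) mirrors the text.

```python
import random
from collections import Counter

# Toy sketch of an ensemble entity linker: each simple "classifier"
# sees a random subset of the context terms around a mention and votes
# for the candidate whose (invented) description overlaps most with
# that subset; the ensemble aggregates the votes.

def vote(context_terms, candidates, rng):
    sample = rng.sample(context_terms, k=max(1, len(context_terms) // 2))
    return max(candidates, key=lambda c: len(set(sample) & candidates[c]))

def link(context_terms, candidates, n_classifiers=25, seed=0):
    rng = random.Random(seed)
    votes = Counter(vote(context_terms, candidates, rng)
                    for _ in range(n_classifiers))
    return votes.most_common(1)[0][0]

# Hypothetical candidate entries with toy description terms.
candidates = {
    "Apple_Inc.": {"iphone", "company", "cupertino", "technology"},
    "Apple_(fruit)": {"fruit", "tree", "orchard", "pie"},
}
context = ["company", "technology", "iphone", "quarterly", "revenue"]
print(link(context, candidates))  # -> Apple_Inc.
```

Aggregating many weak decisions over random term subsets makes the final link robust to individual noisy context words, which is the intuition behind the ensemble design.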
For the task of relation extraction, we focus on extracting a specific group of difficult relation types: business relations between companies. These relations can be used to gain valuable insight into the interactions between companies and to perform complex analytics, such as predicting risk or valuing companies. Our semi-supervised strategy can extract business relations between companies based on only a few user-provided seed company pairs. In doing so, we also provide a solution to the problem of determining the direction of asymmetric relations, such as the ownership_of relation. We improve the reliability of the extraction process by using a holistic pattern identification method, which classifies the generated extraction patterns. Our experiments show that we can accurately and reliably extract new entity pairs occurring in the target relation by using as few as five labeled seed pairs.
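A generic Snowball-style version of such seed-based bootstrapping might look as follows. The corpus, the seed pair, and the single-token matching are deliberate toy simplifications, not the pipeline developed in the thesis; they only illustrate the two-step loop of inducing patterns from seeds and applying them to find new directed pairs.

```python
import re

# Toy seed-based bootstrapping: (1) learn textual patterns from
# sentences containing a seed pair, (2) apply the patterns to the
# corpus to extract new directed (acquirer, acquired) pairs.

corpus = [
    "Alphabet acquired DeepMind in 2014.",
    "Facebook acquired Instagram for $1bn.",
    "Microsoft acquired LinkedIn last year.",
]
seeds = [("Alphabet", "DeepMind")]

# Step 1: induce the text between the two seed entities as a pattern.
patterns = set()
for a, b in seeds:
    for s in corpus:
        if a in s and b in s:
            between = s.split(a, 1)[1].split(b, 1)[0].strip()
            patterns.add(between)  # e.g. "acquired"

# Step 2: match each pattern against the corpus to harvest new pairs.
found = set()
for p in patterns:
    for s in corpus:
        m = re.match(rf"(\w+) {re.escape(p)} (\w+)", s)
        if m:
            found.add((m.group(1), m.group(2)))
print(sorted(found))
```

Note that because the pattern encodes the order of the two arguments, the extracted pairs come out directed, which is the essence of resolving asymmetric relations such as ownership_of.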
Modern welfare states aim at designing unemployment insurance (UI) schemes which minimize the length of unemployment spells. A variety of institutions and incentives, which are embedded in UI schemes across OECD countries, reflect this attempt. For instance, job seekers entering UI are often provided with personal support through a caseworker. They also face the requirement to regularly submit a minimum number of job applications, which is typically enforced through benefit cuts in the case of non-compliance. Moreover, job seekers may systematically receive information on their re-employment prospects. As a consequence, UI design has become a complex task. Policy makers need to define not only the amount and duration of benefit payments, but also several other choice parameters. These include the intensity and quality of personal support through caseworkers, the level of job search requirements, the strictness of enforcement, and the information provided to unemployed individuals. Causal estimates of how these parameters affect re-employment outcomes are thus central inputs to the design of modern UI systems: how much do individual caseworkers influence the transition out of unemployment? Does the requirement of an additional job application translate into increased job finding? Do individuals behave differently when facing a strict versus a mild enforcement system? And how does information on re-employment prospects influence the job search decision? This dissertation proposes four novel research designs to answer these questions. Chapters one to three elaborate quasi-experimental identification strategies, which are applied to large-scale administrative data from Switzerland. They measure, respectively, how personal interactions with caseworkers (chapter one), the level of job search requirements (chapter two), and the strictness of enforcement (chapter three) affect re-employment outcomes.
Chapter four proposes a structural estimation approach, based on linked survey and administrative data from Germany. It studies how over-optimism on future wage offers affects the decision to search for work, and how the provision of information changes this decision.
Steep mountain channels are an important component of the fluvial system. On geological timescales, they shape mountain belts and counteract tectonic uplift by erosion. These channels are strongly coupled to hillslopes and are often the main source of sediment transported downstream to low-gradient rivers and to alluvial fans, where settlements in mountainous areas are commonly located. Hence, mountain streams are the source of one of the main natural hazards in these regions. Owing to climate change and the continued settlement of mountainous regions, the attention given to this threat is growing. Although quantitative studies on sediment transport have significantly advanced our knowledge of measuring and calibration techniques, we still lack studies of the processes within mountain catchments. Studies examining the mechanisms of energy and mass exchange on small temporal and spatial scales in steep streams remain sparse in comparison to low-gradient alluvial channels.
At the beginning of this doctoral project, a vast amount of experience and knowledge of a steep stream in the Swiss Prealps had to be consolidated in order to shape the principal aim of this research effort. It became obvious that observations from within the catchment are underrepresented compared with experiments performed at the catchment’s outlet, which measure fluxes and the effects of the transported material. To counteract this imbalance, this thesis set out to examine mass fluxes within the catchment at the process scale. It is therefore heavily based on direct field observations, which, in these environments, are generally rare in both quantity and quality. The first objective was to investigate the coupling of the channel with the surrounding hillslopes, the major sources of sediment. This research, which involved monitoring the channel and adjacent hillslopes, revealed that alluvial channel steps play a key role in the coupling of channel and hillslopes. The observations showed that hillslope stability is strongly associated with the presence of steps; an understanding of step morphology and stability is therefore crucial for understanding sediment mobilization. This finding refined the way we think about sediment dynamics in steep channels and motivated continued research on step dynamics. However, it soon became obvious that the technological basis for developing field tests and analyzing the high-resolution geometry measured in the field was not available. Moreover, clear definitions and scientific standards were lacking for many geometrical quantities in mountain channels. For example, these streams are characterized by a high spatial variability of the channel banks, preventing straightforward calculations of the channel width without a defined reference. Thus, the second and unavoidable part of this thesis became the development and evaluation of scientific tools to investigate the geometrical content of the study reach thoroughly.
The developed framework allowed the derivation of various metrics of step and channel geometry, which facilitated research on a large data set of observations of channel steps. In the third part, innovative, physically based metrics were developed and compared to current knowledge on step formation as suggested in the literature. With these analyses it could be demonstrated that the formation of channel steps follows a wide range of hydraulic controls. Owing to the wide range of tested parameters, channel steps observed in a natural stream could be attributed to different mechanisms of step formation, including those based on jamming and those based on key stones. This study extended our knowledge of step formation in a steep stream and harmonized different, often seen as competing, processes of step formation. It was based on observations collected at one point in time. In the fourth part of this project, the findings of these snapshot observations were extended in the temporal dimension, and the derived concepts were utilized to investigate reach-scale step patterns in response to large, exceptional flood events. The preliminary results of this work, based on long-term analyses of seven years of long-profile surveys, showed that the previously observed channel-hillslope mechanism is responsible for the short-term response of step formation.
The findings of the long-term analyses of step patterns connected back to the initial observations of a channel-hillslope system and allowed us to join the dots in the dynamics of steep streams. Thus, in this thesis a broad approach has been chosen to gain insights into the complex system of steep mountain rivers. The effort includes in situ field observations (article I), the development of quantitative scientific tools (article II), reach-scale analyses of step-pool morphology (article III), and its temporal evolution (article IV). With this work, our view of the processes within the catchment has been advanced towards a better mechanistic understanding of these fluvial systems, which is relevant for improving applied scientific work.
Background and objectives: Age-related losses of lower extremity muscle strength/power and deficits in static and particularly dynamic balance are associated with impaired functional performance and the occurrence of falls. It has been shown that balance and resistance training have the potential to improve balance and muscle strength in healthy older adults. However, it is still open to debate how the effectiveness of balance and resistance training in older adults is influenced by different factors. This includes the role of trunk muscle strength, the comprehensive effects of combined balance and resistance training, and the role of exercise supervision. Therefore, the primary objectives of this doctoral thesis are to investigate the relationship between trunk muscle strength and balance performance and to examine the effects of an expert-based balance and resistance training protocol on various measures of balance and lower extremity muscle strength/power in older adults. Furthermore, the impact of supervised versus unsupervised balance and/or resistance training interventions in the elderly will be evaluated.
Methods: Healthy older adults aged 63-80 years were included in a cross-sectional study, a longitudinal study, and a meta-analysis (range of group means in the meta-analysis: 65.3-81.1 years) assessing balance and muscle strength/power performance. Different measures of balance (i.e., static/dynamic steady-state, proactive, reactive) were examined using clinical (e.g., Romberg test) and instrumented tests (e.g., 10-meter walk test on a sensor-equipped walkway). Isometric strength of the trunk muscles was assessed using an instrumented trunk muscle strength apparatus, and dynamic lower extremity muscle strength/power was examined using clinical tests (e.g., Chair Stand Test). Furthermore, a combined balance and resistance training protocol was applied to examine training-induced effects on balance and muscle strength/power as well as the role of supervision in older adults.
Results: Findings revealed that measures of trunk muscle strength and static steady-state balance as well as specific measures of dynamic steady-state balance were significantly associated in the elderly (0.42 ≤ r ≤ 0.57). Combined balance and resistance training significantly improved older adults' static/dynamic steady-state (e.g., Romberg test; habitual gait speed), proactive (e.g., Timed Up and Go Test), and reactive balance (e.g., Push and Release Test) as well as muscle strength/power (e.g., Chair Stand Test) (0.62 ≤ Cohen’s d ≤ 2.86; all p < 0.05). Supervised compared to unsupervised balance and/or resistance training was superior in enhancing older adults' balance and muscle strength/power performance across all observed outcome categories [longitudinal study: effects for the supervised group 0.26 ≤ d ≤ 2.86, effects for the unsupervised group 0.06 ≤ d ≤ 2.30; meta-analysis: all between-subject standardized mean differences (SMDbs) in favor of the supervised training programs, 0.24-0.53]. The meta-analysis additionally showed larger effects in favor of supervised interventions when compared to completely unsupervised interventions (0.28 ≤ SMDbs ≤ 1.24). These effects in favor of the supervised programs faded when compared with studies that implemented a small number of supervised sessions within their unsupervised interventions (−0.06 ≤ SMDbs ≤ 0.41).
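The within- and between-group effect sizes reported above (Cohen's d, SMDbs) are standardized mean differences. A minimal sketch of the computation, using entirely hypothetical change scores (the function names `pooled_sd` and `smd` and the numbers are illustrative, not taken from the thesis):

```python
import statistics

def pooled_sd(a, b):
    """Pooled standard deviation of two independent samples."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5

def smd(a, b):
    """Standardized mean difference (Cohen's d): difference of sample
    means divided by the pooled standard deviation."""
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd(a, b)

# Hypothetical gait-speed improvements (m/s) for a supervised and an
# unsupervised training group; SMDbs > 0 favors the supervised group.
supervised = [0.12, 0.18, 0.10, 0.16, 0.14]
unsupervised = [0.08, 0.11, 0.05, 0.10, 0.06]
smd_bs = smd(supervised, unsupervised)
```

The same formula yields within-group d when applied to pre- and post-test scores of one group.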
Conclusions: Trunk muscle strength is associated with steady-state balance performance and may therefore be integrated in fall-preventive exercise interventions for older adults. The examined positive effects on a large number of important intrinsic fall risk factors (e.g., balance deficits, muscle weakness) imply that particularly the combination of balance and resistance training appears to be a feasible and effective exercise intervention for fall prevention. Owing to the beneficial effects of supervised compared to unsupervised interventions, supervised sessions should be integrated in fall-preventive balance and/or resistance training programs for older adults.
Persistently high unemployment rates are a major threat to social cohesion in many societies. To moderate the consequences of unemployment, industrialized countries spend substantial shares of their GDP on labor market policies, and in recent years there has been a shift from passive measures, such as transfer payments, towards more activating elements that aim to promote reintegration into the labor market. Although there is a wide range of evidence on the effects of traditional active labor market policies (ALMP) on participants’ subsequent labor market outcomes, a deeper understanding of the impact of these programs on job search behavior and of the interplay with long-term labor market outcomes is necessary. Such an understanding allows policy makers to improve the design of labor market policies and the allocation of unemployed workers to specific programs. Moreover, previous studies have shown that many traditional ALMP programs, like public employment or training schemes, do not achieve the desired results. This underlines the importance of understanding the effect mechanisms, but also the need to develop innovative programs that are more effective. This thesis extends the existing literature in several dimensions.
First, it analyzes the impact of job seekers’ beliefs about upcoming ALMP programs on the effectiveness of treatments realized later during the unemployment spell. This provides important insights into the job search process and relates potential anticipation effects (on job seekers' behavior before entering a program) to the vast literature evaluating the impact of participating in an ALMP program on subsequent outcomes. The empirical results show that training programs are more effective if participants expect participation ex ante, while expected treatment effects are unrelated to the actual labor market outcomes of participants. A subsequent analysis of the effect mechanisms shows that job seekers who expect to participate also receive more information from their caseworkers and show a higher willingness to adjust their search behavior in anticipation of an upcoming ALMP program. The findings suggest that the effectiveness of training programs can be improved by providing more detailed information about the possibility of a future treatment early in the unemployment spell.
Second, the thesis investigates the effects of a relatively new class of programs that aim to improve the geographical mobility of unemployed workers with respect to job search behavior, subsequent job-finding prospects, and the returns to labor market mobility. To estimate the causal impact of these programs, the analysis exploits the fact that local employment agencies have a degree of autonomy in deciding on the region-specific policy mix. The findings show that the policy style of the employment agency indeed affects the job search behavior of unemployed workers. Job seekers who are assigned to agencies with stronger preferences for mobility programs increase their search radius without changing the total number of job applications. This shift of search effort towards distant regions leads to a higher probability of finding a regular job and to higher wages. Moreover, it is shown that participants in one of the subsidy programs who move to a geographically distant region earn significantly higher wages, end up in more stable jobs, and face a higher long-run employment probability compared to non-participants.
Third, the thesis offers an empirical assessment of the unconfoundedness assumption with respect to the relevance of variables that are usually unobserved in studies evaluating ALMP programs. A unique dataset that combines administrative records and survey data allows us to observe detailed information on typical covariates as well as on usually unobserved variables, including personality traits, attitudes, expectations, intergenerational information, and indicators of social networks and labor market flexibility. The findings show that, although our set of usually unobserved variables indeed has a significant effect on the selection into ALMP programs, the overall impact when estimating treatment effects is rather small.
Finally, the thesis examines the importance of gender differences in reservation wages, which allows assessing the relevance of special ALMP programs targeting women. In particular, when reservation wages are included in a wage decomposition exercise, the gender gap in realized wages becomes small and statistically insignificant. The strong connection between gender differences in reservation wages and realized wages raises the question of how these differences in reservation wages arise in the first place. Since traditional covariates cannot sufficiently explain the gender gap in reservation wages, we perform subgroup analyses to better understand its driving forces.
Magnetic iron oxide nanoparticles have long been used successfully as MRI contrast agents in clinical imaging. By optimizing the magnetic properties of the nanoparticles, the informative value of MR images can be improved and thus the diagnostic value of an MR application further increased. In addition to the improvement of existing methods, imaging diagnostics is also being advanced by the development of new techniques such as Magnetic Particle Imaging (MPI). Since in MPI the measurement signal is generated by the magnetic nanoparticles themselves, MPI offers an enormous advantage in terms of sensitivity, combined with high temporal and spatial resolution. However, as no commercially available MPI tracer suitable for in vivo use currently exists, there is an urgent need for suitable, innovative tracer materials. This motivated the present work: to develop biocompatible, superparamagnetic iron oxide nanoparticles for use as an in vivo diagnostic agent, particularly in Magnetic Particle Imaging. Although the focus was on tracer development for MPI, the MR performance was also evaluated, since suitable particles could alternatively or additionally be used as MR contrast agents with improved contrast properties.
The iron oxide nanoparticles were synthesized via the partial oxidation of precipitated iron(II) hydroxide and green rust, as well as via a diffusion-controlled coprecipitation in a hydrogel.
Partial oxidation of iron(II) hydroxide and green rust successfully yielded biocompatible iron oxide nanoparticles that remained stable over long periods. In addition, suitable methods for formulation and sterilization were established, thereby fulfilling numerous prerequisites for application as an in vivo diagnostic agent. Furthermore, based on their MPS performance, these particles can be expected to be excellently suited as MPI tracers, which could substantially advance the further development of MPI technology. Determination of the NMR relaxivities and an initial in vivo experiment also demonstrated the great potential of the formulated nanoparticle suspensions as MRI contrast agents. Modification of the particle surface furthermore enables the production of targeted nanoparticles and the labelling of cells, considerably broadening the potential range of applications.
In the second part, particles were synthesized by diffusion-controlled coprecipitation in a hydrogel, a bioinspired modification of classical coprecipitation, which generated particles with an average crystallite size of 24 nm. Determination of the MPS and MR performance of electrostatically stabilized particles yielded promising results. In preparation for the development of an in vivo diagnostic agent, the particles were then successfully stabilized sterically, so that their colloidal state in MilliQ water could be maintained over long periods. By centrifugation, the particles could furthermore be separated into different size fractions, which made it possible to determine the ideal aggregate size of this particle system with respect to MPS performance.
Natural products and their derivatives have always been a source of drug leads. In particular, bacterial compounds have played an important role in drug development, for example in the field of antibiotics. A decrease in the discovery of novel leads from natural sources, together with the hope of finding new leads through large libraries of drug-like compounds generated by combinatorial chemistry and aimed at specific molecular targets, drove pharmaceutical companies away from research on natural products. However, recent technological advances in genetics, bioinformatics, and analytical chemistry have revived the interest in natural products. The ribosomally synthesized and post-translationally modified peptides (RiPPs) are a group of natural products generated by the action of post-translationally modifying enzymes on precursor peptides translated from mRNA by ribosomes. The great substrate promiscuity exhibited by many of the enzymes from RiPP biosynthetic pathways has led to the generation of hundreds of novel synthetic and semisynthetic variants, including variants carrying non-canonical amino acids (ncAAs). The microviridins are a family of RiPPs characterized by their atypical tricyclic structure, composed of lactone and lactam rings, and by their activity as serine protease inhibitors. The generalities of their biosynthetic pathway have already been described; however, the lack of information on details such as the protease responsible for cleaving the leader peptide off the cyclic core peptide has impeded the fast and cheap production of novel microviridin variants. In the present work, knowledge on leader peptide activation of enzymes from other RiPP families has been extrapolated to the microviridin family, making it possible to bypass the need for a leader peptide. This allowed the microviridin biosynthetic machinery to be exploited for the production of novel variants through the establishment of an efficient one-pot in vitro platform.
The relevance of this chemoenzymatic approach has been exemplified by the synthesis of novel, potent serine protease inhibitors from both rationally designed peptide libraries and bioinformatically predicted microviridins. Additionally, new structure-activity relationships (SARs) could be inferred by screening microviridin intermediates. The significance of the technique was further demonstrated by the simple incorporation of ncAAs into the microviridin scaffold.
Electrospray ionization (ESI) is one of the most widely used ionization techniques for liquid samples in mass and ion mobility (IM) spectrometry. Owing to its soft ionization, ESI is mainly used for sensitive, complex molecules in biology and medicine, but it is applicable to a very broad range of substance classes. IM spectrometry was originally developed for the detection of gaseous samples, which are mostly ionized by radioactive sources. It is the only analytical method in which isomers can be separated in real time and directly identified via their characteristic ion mobility. ESI was introduced into IM spectrometry in the 1990s by the Hill group. So far, however, the combination is used by only a few groups and therefore still holds considerable development potential. One promising field of application is its use in high-performance liquid chromatography (HPLC) for multidimensional separation. Today, HPLC is the standard method for separating complex samples in routine analysis. HPLC separations are, however, often time-consuming, and the use of different eluents, high flow rates, buffers, and eluent gradients places high demands on the detectors. ESI-IM spectrometry has already been employed as an HPLC detector in a few studies, but has so far been limited to flow-rate splitting or low eluent flow rates.
In this cumulative doctoral thesis, an ESI IM spectrometer was therefore developed for the first time as an HPLC detector for the flow-rate range of 200-1500 μl/min. In five publications, (1) the suitability of the spectrometer as an HPLC detector was established through a comprehensive characterization, (2) selected complex separations were presented, (3) its application to reaction monitoring was demonstrated, and (4, 5) possible further developments were shown.
With the in-house-developed ESI IM spectrometer, typical HPLC conditions such as water contents in the eluent of up to 90%, buffer concentrations of up to 10 mM, and detection limits down to 50 nM were successfully handled. Furthermore, the complex separations (24 pesticides / 18 amino acids) showed that HPLC and IM spectrometry possess high orthogonality; an effective peak capacity of 240 was realized. Substances coeluting from the HPLC column could be separated via their drift times and identified via their ion mobilities, so that total separation times could be reduced considerably. The applicability of the ESI IM spectrometer to the monitoring of chemical syntheses was demonstrated for a three-step reaction: the most important reactants, intermediates, and products of all steps were identified. Quantitative analysis was possible both via a short HPLC pre-separation and, without HPLC, through the development of a dedicated calibration procedure that accounts for charge competition in ESI. In the second part of the thesis, two further developments of the spectrometer are presented. One option is reducing the pressure to the intermediate range (300-1000 mbar) with the aim of lowering the required voltages. Using scattered-light images and current-voltage curves, a reduced release of analyte ions from the droplets was observed at lower pressures. These losses could, however, be compensated by higher electric field strengths, so that the same detection limits were reached at 500 mbar as at 1 bar. The second development is a novel ion gate with pulsed switching, which doubled the resolution to values of R > 100 at equal sensitivity. A conceivable application in peptide analysis was demonstrated with considerable peptide resolutions of R = 90.
The present thesis, entitled "A Question of Time: How Effects of Individual Characteristics on Women's Income Are Mediated by Their Family Obligations", investigates the heterogeneity of women's income outcomes. It focuses on individual investments in family work as an explanatory factor and asks why some women take on many domestic obligations while others take on fewer. For this purpose, women's individual human capital, their value orientations, and their individual occupational motivations in adolescence and adulthood are considered. The analysed data (from the LifE study) represent a long-term perspective from age 16 to age 45 of the women surveyed. In summary, the results show that the effect of family obligations on women's income outcomes in early and middle adulthood is mediated, as a time effect, via the hours invested in paid work. The relevance of private routine work for the career success of women, and of mothers in particular, is thus a question of time. Furthermore, regarding individual influences on women's income, it is shown that higher time investments in paid work by women with high educational attainment can be explained, as an indirect-only mediation, solely via the redistribution of domestic work. Women are thus winners of the educational expansion; yet the educational expansion is also the story of the emergence of a work-family conflict for these very women, because the persistent inertia regarding the family obligations ascribed to women collides with their increased occupational expectations and opportunities.
Through its analytical results, the thesis makes an important contribution to explaining women's heterogeneous investments in paid work, and their income outcomes, from the private sphere.
In the arable soil landscape of hummocky ground moraines, an erosion-affected spatial differentiation of soils can be observed. Man-made erosion leads to soil profile modifications along slopes, with changed solum thickness and modified properties of soil horizons due to water erosion in combination with tillage operations. Soil erosion thereby creates spatial patterns of soil properties (e.g., texture and organic matter content) and differences in crop development. However, little is known about the manner in which water fluxes are affected by soil-crop interactions depending on the contrasting properties of differently developed soil horizons, and how water fluxes influence carbon transport in an eroded landscape. To identify such feedbacks between erosion-induced soil profile modifications and the 1D water and solute balance, high-precision weighing lysimeters equipped with a wide range of sensor technology were filled with undisturbed soil monoliths that differed in the degree of past soil erosion. Furthermore, lysimeter effluent concentrations were analyzed for dissolved carbon fractions at bi-weekly intervals.
The water balance components measured by the high-precision lysimeters varied between the most eroded and the least eroded monolith by up to 83% (deep drainage) over a 3-year period, primarily due to varying amounts of precipitation and evapotranspiration. Here, interactions between crop development and contrasting rainfall interception by above-ground biomass could explain differences in the water balance components. Concentrations of dissolved carbon in soil water samples were relatively constant in time, suggesting that carbon leaching was mainly controlled by water fluxes in this observation period. For the lysimeter-based water balance analysis, a filtering scheme was developed that takes temporal autocorrelation into account. The minute-based autocorrelation analysis of mass changes from the lysimeter time series revealed characteristic autocorrelation lengths ranging from 23 to 76 minutes, and the temporal autocorrelation provided an optimal approximation of precipitation quantities. However, the usable temporal resolution of the lysimeter time series is restricted by these autocorrelation lengths.
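The autocorrelation length underlying such a filtering scheme can be sketched as follows; the AR(1) surrogate series and the 1/e cut-off are illustrative assumptions, not the actual filter used in the thesis:

```python
import numpy as np

def autocorrelation(x):
    """Normalized autocorrelation function of a 1-D series (lags 0..n-1)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

def autocorrelation_length(x, threshold=1.0 / np.e):
    """First lag (in samples) at which the ACF drops below `threshold`."""
    acf = autocorrelation(x)
    below = np.where(acf < threshold)[0]
    return int(below[0]) if below.size else len(acf)

# Surrogate "minute-based lysimeter mass" series: an AR(1) process whose
# theoretical autocorrelation length is about -1/ln(0.9) ≈ 9.5 samples.
rng = np.random.default_rng(0)
x = np.empty(5000)
x[0] = rng.standard_normal()
for i in range(1, x.size):
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()
acl = autocorrelation_length(x)
```

Averaging the raw signal over windows of roughly this length is one simple way to suppress noise without discarding resolvable precipitation events.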
Erosion-induced but also gradual changes in soil properties were reflected by the dynamics of soil water retention properties in the lysimeter soils. Short-term and long-term hysteretic water retention data suggested that seasonal wettability problems of the soils increasingly limited the rewetting of previously dried pore regions. Differences in water retention were attributed to soil tillage operations and to the erosion history at different slope positions. The three-dimensional spatial pattern of soil types resulting from erosional soil profile modifications was also reflected in differences in crop root development at different landscape positions. Contrasting root densities revealed positive relations between root and above-ground plant characteristics. Differences in spatially distributed root growth between differently eroded soil types indicated that root development was affected by erosion-induced soil evolution processes.
Overall, the current thesis corroborates the hypothesis that erosion-induced soil profile modifications affect the soil water balance, carbon leaching, and soil hydraulic properties, and that the crop root system is likewise influenced by erosion-induced spatial patterns of soil properties in the arable hummocky post-glacial soil landscape. The results will help to improve model predictions of water and solute movement in arable soils and to understand interactions between soil erosion and carbon pathways with regard to sink-or-source terms in landscapes.
This is a cumulative dissertation comprising three original studies (one published, one in revision, one submitted; effective December 2017) that investigate how reptile species in arid Australia respond to various climatic parameters at different spatial scales, and that analyse the two main potential underlying mechanisms: thermoregulatory behaviour and species interactions. The dissertation combines extensive individual-based field data across trophic levels, selected field experiments, statistical analyses, and predictive modelling techniques. The mechanisms and processes detected here can now be used to predict potential future changes in the community of arid-zone lizards. This knowledge will help improve our fundamental understanding of the consequences of global change and thereby prevent biodiversity loss in a vulnerable ecosystem.
In this era of high-speed informatization and globalization, online education is no longer an esoteric concept confined to the ivory tower, but a rapidly developing industry closely connected to people's daily lives. Numerous lectures are recorded as multimedia data, uploaded to the Internet, and made publicly accessible from anywhere in the world; such lectures are generally referred to as e-lectures. In recent years, a new popular form of e-lecture, the Massive Open Online Course (MOOC), has boosted the growth of the online education industry and turned "learning online" into something of a fashion.
For an e-learning provider, besides continuously improving the quality of e-lecture content, providing a better learning environment for online learners is also a highly important task. This task can be pursued in various ways, one of which is to enhance and upgrade the learning materials provided: e-lectures could be more than videos. Moreover, this enhancement should happen automatically, without placing an extra burden on the lecturers or teaching teams, and this is the aim of this thesis.
The first part of this thesis is an integrated framework for multilingual subtitle production, which helps online learners overcome the language barrier. The framework consists of Automatic Speech Recognition (ASR), Sentence Boundary Detection (SBD), and Machine Translation (MT), among which the proposed SBD solution is the major technical contribution; it builds on a Deep Neural Network (DNN) and Word Vectors (WV) and achieves state-of-the-art performance. In addition, a quantitative evaluation with dozens of volunteers measures how these auto-generated subtitles actually help in the context of e-lectures.
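The word-vector-plus-network idea behind such an SBD component can be sketched structurally as follows. Everything here is a stand-in: `word_vec` substitutes a hashed pseudo-random embedding for real pretrained vectors, and `TinySBD` is an untrained toy scorer illustrating the architecture, not the thesis' model:

```python
import zlib
import numpy as np

DIM = 16  # embedding dimension of the stand-in word vectors

def word_vec(token):
    """Stand-in for pretrained word vectors: a deterministic pseudo-random
    embedding seeded by a CRC of the token (illustration only)."""
    rng = np.random.default_rng(zlib.crc32(token.encode("utf-8")))
    return rng.standard_normal(DIM)

def boundary_features(tokens, i, window=2):
    """Features for the candidate sentence boundary after tokens[i]:
    mean word vector of the left and right context windows, concatenated."""
    left = tokens[max(0, i - window + 1): i + 1]
    right = tokens[i + 1: i + 1 + window]
    mean = lambda ws: np.mean([word_vec(w) for w in ws], axis=0) if ws else np.zeros(DIM)
    return np.concatenate([mean(left), mean(right)])

class TinySBD:
    """Minimal feed-forward boundary scorer (random, untrained weights)."""
    def __init__(self, d_in=2 * DIM, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((d_in, hidden)) / np.sqrt(d_in)
        self.w2 = rng.standard_normal(hidden) / np.sqrt(hidden)

    def score(self, feats):
        h = np.tanh(feats @ self.w1)                    # hidden layer
        return 1.0 / (1.0 + np.exp(-(h @ self.w2)))     # boundary probability

tokens = "so we apply the chain rule here now consider the next example".split()
model = TinySBD()
scores = [model.score(boundary_features(tokens, i)) for i in range(len(tokens) - 1)]
```

In a real system the scorer would be trained on punctuated transcripts and a threshold on the score would insert subtitle boundaries.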
Secondly, a technical solution called TOG (Tree-Structured Outline Generation) is proposed to extract textual content from the slides shown in the video and reorganize it into a hierarchical lecture outline, which can serve multiple functions such as preview, navigation, and retrieval. TOG runs adaptively and can be roughly divided into an intra-slide and an inter-slide phase; table detection and lecture video segmentation are implemented as sub-tasks or post-processing steps in these two phases, respectively. Evaluation on diverse e-lectures shows that the generated outlines, tables, and segments are reliably accurate.
Based on the previously created subtitles and outlines, lecture videos can be further split into sentence units and slide-based segment units. A lecture highlighting process is then applied to these units in order to capture and mark the most important parts of the corresponding lecture, much as people do with a pen when reading paper books. Sentence-level highlighting relies on acoustic analysis of the audio track, while segment-level highlighting explores clues from statistical information in the related transcripts and slide content. Both objective and subjective evaluations show that the proposed lecture highlighting solution achieves good precision and is well received by users.
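The "statistical information from transcripts" used for segment-level highlighting can be illustrated with a crude tf-idf proxy; this is a simplification for illustration, and the segments below are hypothetical, not the thesis' actual scoring scheme:

```python
import math
from collections import Counter

def tfidf_scores(segments):
    """Score each segment (a list of tokens) by the summed tf-idf of its
    tokens, averaged over its distinct tokens: a crude importance proxy.
    Segments that repeat terms rare in other segments score highest."""
    n = len(segments)
    df = Counter()                       # document frequency per token
    for seg in segments:
        df.update(set(seg))
    scores = []
    for seg in segments:
        tf = Counter(seg)
        total = sum((tf[w] / len(seg)) * math.log(n / df[w]) for w in tf)
        scores.append(total / len(tf))
    return scores

# Three hypothetical slide-based transcript segments; the middle one
# repeats distinctive terms and should be highlighted.
segments = [
    "the the course overview the".split(),
    "gradient descent gradient descent update rule".split(),
    "the overview course the the".split(),
]
scores = tfidf_scores(segments)
```

Ranking segments by such scores and marking the top fraction is one simple way to turn transcript statistics into highlights.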
All of the enhanced e-lecture materials described above have either been deployed in actual use or been made available for deployment through convenient interfaces.
The work done during the PhD studies focused on measurements of distribution functions of rotating galaxies using integral field spectroscopy observations.
Throughout the main body of the research presented here, we use stellar velocity fields from the CALIFA (Calar Alto Legacy Integral Field Area) survey to obtain robust measurements of circular velocities for rotating galaxies of all morphological types. A crucial part of the work was enabled by the well-defined CALIFA sample selection criteria, which made it possible to reconstruct sample-independent distributions of galaxy properties.
In Chapter 2, we measure the distribution in absolute magnitude - circular velocity space for a well-defined sample of 199 rotating CALIFA galaxies using their stellar kinematics. Our aim in this analysis is to avoid subjective selection criteria and to take volume and large-scale structure factors into account. Using stellar velocity fields instead of gas emission line kinematics allows including rapidly rotating early type galaxies. Our initial sample contains 277 galaxies with available stellar velocity fields and growth curve r-band photometry. After rejecting 51 velocity fields that could not be modelled due to the low number of bins, foreground contamination, or significant interaction, we perform Markov Chain Monte Carlo (MCMC) modelling of the velocity fields, obtaining the rotation curve and kinematic parameters together with realistic uncertainties. We perform an extinction correction and calculate the circular velocity v_circ, accounting for the pressure support of each galaxy. The resulting galaxy distribution on the M_r - v_circ plane is then modelled as a mixture of two distinct populations, allowing robust and reproducible rejection of outliers, a significant fraction of which are slow rotators. The selection effects are understood well enough that the incompleteness of the sample can be corrected, and the 199 galaxies can be weighted by volume and large-scale structure factors, enabling us to fit a volume-corrected Tully-Fisher relation (TFR). More importantly, we also provide the volume-corrected distribution of galaxies in the M_r - v_circ plane, which can be compared with cosmological simulations. The joint distribution of the luminosity and circular velocity space densities, representative over the range of -20 > M_r > -22 mag, can place more stringent constraints on galaxy formation and evolution scenarios than linear TFR fit parameters or the luminosity function alone.
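The MCMC step can be illustrated with a toy random-walk Metropolis fit of an arctan rotation curve to mock velocities. The parameterization, mock numbers, and sampler settings are illustrative assumptions, not the thesis' actual velocity-field model:

```python
import numpy as np

def v_model(r, v_flat, r_turn):
    """Illustrative arctan rotation-curve parameterization."""
    return v_flat * (2.0 / np.pi) * np.arctan(r / r_turn)

def log_like(params, r, v_obs, sigma):
    """Gaussian log-likelihood with flat positivity priors."""
    v_flat, r_turn = params
    if v_flat <= 0 or r_turn <= 0:
        return -np.inf
    resid = v_obs - v_model(r, v_flat, r_turn)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(r, v_obs, sigma, start, step, n_steps=20000, seed=1):
    """Random-walk Metropolis sampler over (v_flat, r_turn)."""
    rng = np.random.default_rng(seed)
    step = np.asarray(step, dtype=float)
    cur = np.asarray(start, dtype=float)
    cur_ll = log_like(cur, r, v_obs, sigma)
    chain = np.empty((n_steps, 2))
    for i in range(n_steps):
        prop = cur + step * rng.standard_normal(2)
        prop_ll = log_like(prop, r, v_obs, sigma)
        if np.log(rng.random()) < prop_ll - cur_ll:   # accept/reject
            cur, cur_ll = prop, prop_ll
        chain[i] = cur
    return chain

# Mock data: a 220 km/s rotator sampled at 30 radii with 10 km/s errors.
rng = np.random.default_rng(0)
r = np.linspace(0.5, 15.0, 30)                         # kpc
v_obs = v_model(r, 220.0, 2.0) + rng.normal(0.0, 10.0, r.size)
chain = metropolis(r, v_obs, 10.0, start=(150.0, 1.0), step=(3.0, 0.2))
v_flat_fit = chain[5000:, 0].mean()                    # mean after burn-in
```

The posterior spread of the chain gives the "realistic uncertainties" mentioned above; real 2D velocity-field fits add inclination, position angle, and centre as further parameters.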
In Chapter 3, we measure one of the marginal distributions of the M_r - v_circ distribution: the circular velocity function of rotating galaxies. The velocity function is a fundamental observable statistic of the galaxy population, of similar importance to the luminosity function but much more difficult to measure. We present the first directly measured circular velocity function that is representative between 60 < v_circ < 320 km s^-1 for galaxies of all morphological types at a given rotation velocity. For the low mass galaxy population 60 < v_circ < 170 km s^-1, we use the HIPASS velocity function. For the massive galaxy population 170 < v_circ < 320 km s^-1, we use stellar circular velocities from CALIFA. The CALIFA velocity function includes homogeneous velocity measurements of both late and early-type rotation-supported galaxies. It has the crucial advantage of not missing gas-poor massive ellipticals that HI surveys are blind to. We show that both velocity functions can be combined in a seamless manner, as their ranges of validity overlap. The resulting observed velocity function is compared to velocity functions derived from cosmological simulations of the z = 0 galaxy population. We find that dark matter-only simulations show a strong mismatch with the observed velocity function. Hydrodynamic Illustris simulations fare better, but still do not fully reproduce the observations.
In Chapter 4, we present some other work done during the PhD studies, namely a method that improves the precision of specific angular momentum measurements by combining simultaneous Markov Chain Monte Carlo modelling of ionised-gas 2D velocity fields and HI linewidths. To test the method, we use a sample of 25 galaxies from the Sydney-AAO Multi-object Integral field (SAMI) survey that have matching ALFALFA HI linewidths. The method constrains the rotation curve both in the inner regions of a galaxy and in its outskirts, leading to increased precision of specific angular momentum measurements. It could be used to further constrain the observed relation between galaxy mass, specific angular momentum and morphology (Obreschkow & Glazebrook 2014).
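Why the outer rotation curve matters so much here can be seen from the definition of the mass-weighted specific angular momentum, j = ∫ Σ(r) v(r) r² dr / ∫ Σ(r) r dr: the r² weighting means the outskirts dominate. A minimal numerical sketch, assuming an exponential disc with a flat rotation curve (for which j = 2 v_c R_d analytically; the disc parameters below are illustrative):

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (avoids depending on a specific NumPy version)."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def specific_j(r, v, sigma):
    """j = ∫ Σ v r² dr / ∫ Σ r dr: mass-weighted specific angular momentum."""
    return trapz(sigma * v * r**2, r) / trapz(sigma * r, r)

# Exponential disc with a flat rotation curve: analytically j = 2 * v_c * R_d
R_d, v_c = 3.0, 200.0                  # kpc, km/s (illustrative values)
r = np.linspace(0, 60, 5000)           # integrate far into the outskirts
sigma = np.exp(-r / R_d)               # exponential surface-density profile
v = np.full_like(r, v_c)               # flat rotation curve
j = specific_j(r, v, sigma)
print(j)                               # ≈ 2 * 200 * 3 = 1200 kpc km/s
```

Truncating the integral at small radii biases j low, which is why adding an HI linewidth constraint on the outer rotation velocity improves the measurement.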
Mathematical and computational methods are presented in the appendices.
Galaxies are among the most complex systems that can currently be modelled with a computer. A realistic simulation must take into account cosmology and gravitation as well as effects of plasma, nuclear, and particle physics that occur on very different time, length, and energy scales. The Milky Way is the ideal test bench for such simulations, because we can observe millions of its individual stars whose kinematics and chemical composition are records of the evolution of our Galaxy. Thanks to the advent of multi-object spectroscopic surveys, we can systematically study stellar populations in a much larger volume of the Milky Way. While the wealth of new data will certainly revolutionise our picture of the formation and evolution of our Galaxy and galaxies in general, the big-data era of Galactic astronomy also confronts us with new observational, theoretical, and computational challenges.
This thesis aims to find new observational constraints for testing Milky-Way models, primarily based on infrared spectroscopy from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and asteroseismic data from the CoRoT mission. We compare our findings with chemical-evolution models and more sophisticated chemodynamical simulations. In particular, we use the powerful new technique of combining asteroseismic and spectroscopic observations, which allows us to test the time dimension of such models for the first time. With CoRoT and APOGEE (CoRoGEE) we can infer much more precise ages for distant field red-giant stars, opening up a new window for Galactic archaeology.
Another important aspect of this work is the forward-simulation approach that we pursued when interpreting these complex datasets and comparing them to chemodynamical models.
The first part of the thesis contains the first chemodynamical study conducted with the APOGEE survey. Our sample comprises more than 20,000 red-giant stars located within 6 kpc of the Sun, and thus greatly enlarges the Galactic volume covered by high-resolution spectroscopic observations. Because APOGEE observes in the near-infrared and is therefore much less affected by interstellar dust extinction, the sample covers the disc regions very close to the Galactic plane that are typically avoided by optical surveys. This allows us to investigate the chemo-kinematic properties of the Milky Way's thin disc outside the solar vicinity. We measure, for the first time with high-resolution data, the radial metallicity gradient of the disc as a function of distance from the Galactic plane, demonstrating that the gradient flattens and even changes sign at mid-plane distances greater than 1 kpc.
Furthermore, we detect a gap between the high- and low-[$\alpha$/Fe] sequences in the chemical-abundance diagram (associated with the thin and thick disc) which, unlike in previous surveys, can hardly be explained by selection effects. Using 6D kinematic information, we also present chemical-abundance diagrams cleaned of stars on kinematically hot orbits. The data allow us to confirm beyond doubt that the scale length of the (chemically-defined) thick disc is significantly shorter than that of the thin disc.
In the second part, we present our results of the first combination of asteroseismic and spectroscopic data in the context of Galactic archaeology. We analyse APOGEE follow-up observations of 606 solar-like oscillating red giants in two CoRoT fields close to the Galactic plane. These stars cover a large radial range of the Galactic disc (4.5 kpc $\lesssim R_{\rm Gal}\lesssim15$ kpc) and a large age baseline (0.5 Gyr $\lesssim \tau\lesssim$ 13 Gyr), allowing us to study the age- and radius-dependence of the [$\alpha$/Fe] vs. [Fe/H] distributions. We find that the age distribution of the high-[$\alpha$/Fe] sequence is broader than expected for a monolithically-formed old thick disc that stopped forming stars 10 Gyr ago. In particular, we discover a significant population of apparently young, [$\alpha$/Fe]-rich stars in the CoRoGEE data whose existence cannot be explained by standard chemical-evolution models. These peculiar stars are much more abundant in the inner-disc CoRoT field LRc01 than in the outer-disc field LRa01, suggesting that at least part of this population has a chemical-evolution rather than a stellar-evolution origin, possibly due to a peculiar chemical-enrichment history of the inner disc. We also find that strong radial migration is needed to explain the abundance of super-metal-rich stars in the outer disc.
Finally, we use the CoRoGEE sample to study the time evolution of the radial metallicity gradient in the thin disc, an observable that has been the subject of observational and theoretical debate for more than 20 years. By dividing the CoRoGEE dataset into six age bins, performing a careful statistical analysis of the radial [Fe/H], [O/H], and [Mg/Fe] distributions, and accounting for the biases introduced by the observation strategy, we obtain reliable gradient measurements. The slope of the radial [Fe/H] gradient of the young red-giant population ($-0.058\pm0.008$ [stat.] $\pm0.003$ [syst.] dex/kpc) is consistent with recent Cepheid data. For the age range of $1-4$ Gyr, the gradient steepens slightly ($-0.066\pm0.007\pm0.002$ dex/kpc), before flattening again to reach a value of $\sim-0.03$ dex/kpc for stars with ages between 6 and 10 Gyr. This age dependence of the [Fe/H] gradient can be explained by a nearly constant negative [Fe/H] gradient of $\sim-0.07$ dex/kpc in the interstellar medium over the past 10 Gyr, together with stellar heating and migration. Radial migration also offers a new explanation for the puzzling observation that intermediate-age open clusters in the solar vicinity (unlike field stars) tend to have higher metallicities than their younger counterparts. We suggest that non-migrating clusters are more likely to be kinematically disrupted, which creates a bias towards high-metallicity migrators from the inner disc and may even steepen the intermediate-age cluster abundance gradient.
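The core of the gradient measurement just described, i.e. a linear [Fe/H] vs. R_Gal fit in each age bin with a statistical uncertainty from resampling, can be sketched on mock data. The sample sizes, scatter, and input slopes below are illustrative assumptions (loosely inspired by the values quoted above), not the CoRoGEE data, and the bootstrap stands in for the thesis's fuller statistical treatment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock red-giant sample: Galactocentric radius [kpc], age [Gyr], [Fe/H] [dex].
n = 3000
R   = rng.uniform(4.5, 15.0, n)
age = rng.uniform(0.0, 10.0, n)
true_slope = np.where(age < 1, -0.058, np.where(age < 4, -0.066, -0.03))
feh = true_slope * (R - 8.0) + rng.normal(0, 0.15, n)   # 0.15 dex scatter

def gradient(Rb, fb, n_boot=500):
    """Linear [Fe/H] gradient with a bootstrap uncertainty estimate."""
    slope = np.polyfit(Rb, fb, 1)[0]
    boots = []
    for _ in range(n_boot):
        i = rng.integers(0, Rb.size, Rb.size)           # resample with replacement
        boots.append(np.polyfit(Rb[i], fb[i], 1)[0])
    return slope, np.std(boots)

results = []
for lo, hi in [(0, 1), (1, 4), (6, 10)]:
    m = (age >= lo) & (age < hi)
    s, e = gradient(R[m], feh[m])
    results.append((lo, hi, s, e))
    print(f"{lo}-{hi} Gyr: d[Fe/H]/dR = {s:+.3f} +/- {e:.3f} dex/kpc")
```

The fit recovers the input slopes within the bootstrap uncertainties, showing that with a few hundred stars per bin the age dependence of the gradient is detectable despite the intrinsic abundance scatter.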