Refine
Has Fulltext
- yes (110)
Year of publication
- 2013 (110)
Document Type
- Doctoral Thesis (110)
Is part of the Bibliography
- yes (110)
Keywords
- remote sensing (3)
- Arctic (2)
- Escherichia coli (2)
- Fernerkundung (2)
- GIS (2)
- HCI (2)
- Hochwasser (2)
- Morphologie (2)
- Populationsdynamik (2)
- Syntax (2)
- Vorhersage (2)
- carbon (2)
- carbon dioxide (2)
- climate change (2)
- eye movements (2)
- floods (2)
- microbial communities (2)
- population dynamics (2)
- prosody (2)
- syntax (2)
- thermoresponsive (2)
- 19. Jahrhundert (1)
- 3-D Modellierung (1)
- 3-D outcrop modeling (1)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- 3D numerical models (1)
- 3D numerische Modelle (1)
- AGN (1)
- Abbildende Spektroskopie (1)
- Adelbert von Chamisso (1)
- Adhäsion (1)
- Affiliationsnetzwerke (1)
- African states (1)
- Afrikanische Staaten (1)
- Akan (1)
- Aktiven Galaxienkerne (1)
- Alaunschiefer (1)
- Altersunterschiede (1)
- Altruismus (1)
- Alum shale (1)
- Ambiguität (1)
- Anisotroper Kuwahara Filter (1)
- Anpassung (1)
- Antarctica (1)
- Antarktis (1)
- Antibiotika-Toleranz (1)
- Antibiotikaresistenz (1)
- Antimikrobielle Peptide (1)
- Arabian Plate (1)
- Arabische Platte (1)
- Arbeitsgedächtniskapazität (1)
- Arctic tundra (1)
- Arktis (1)
- Astroteilchenphysik (1)
- Aufmerksamkeitskontrolle (1)
- Aufschluss-Modellierung (1)
- Augenbewegungen (1)
- Ausbreitung (1)
- Ausschüsse (1)
- Automobil (1)
- Automobildesign (1)
- BMI (1)
- BRDF (1)
- Bayes'sche Netze (1)
- Bayesian networks (1)
- Berufsbildungswissenschaften (1)
- Berührungseingaben (1)
- Besteuerung (1)
- Biaryle (1)
- Biaryles (1)
- Bifidobacterium (1)
- Bilddatenanalyse (1)
- Bildung (1)
- Bildungsrenditen (1)
- Biodiversity (1)
- Biodiversität (1)
- Bittergeschmack (1)
- Blickbewegungen (1)
- Boden (1)
- Bodenfeuchte (1)
- Bodenhydrologie (1)
- Bodenparameter (1)
- Bohrlochmessungen (1)
- Brandenburg (1)
- Brazil (1)
- Breast cancer (1)
- Brustkrebs (1)
- Bundling of PES (1)
- Bündelung von PES (1)
- CO₂ (1)
- CRS (1)
- Cambodia (1)
- Carbonate (1)
- Cartography (1)
- Cell proliferation (1)
- Chamisso (1)
- Chile (1)
- Chlorogensäure (1)
- Clusteranalyse (1)
- Colonkanzerogenese (1)
- Corruption (1)
- Cue-Gewichtung (1)
- Darmbakterien (1)
- Darmlänge (1)
- Databases (1)
- Datamodell (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenanalyse (1)
- Datenbanken (1)
- Datenintegration (1)
- Datenmodell (1)
- Dekomposition (1)
- Design (1)
- Design Management (1)
- Design Thinking (1)
- Design Thinking Diskurse (1)
- Deutsche Einheit (1)
- Deutschland (1)
- Deutschland und Ägypten Kulturvergleich (1)
- Dezentralisierung (1)
- Dielectric elastomer actuators (1)
- Dielektrische Elastomer Aktuatoren (1)
- Differenz von Gauss Filtern (1)
- Diversität (1)
- Downstep (1)
- Durchmusterung (1)
- ETAS (1)
- Ecology (1)
- Ecotoxicology (1)
- Eingabegenauigkeit (1)
- Einstein manifolds (1)
- Einstein-Hilbert action (1)
- Einstein-Hilbert-Wirkung (1)
- Einstein-Mannigfaltigkeiten (1)
- Einzelzellanalyse (1)
- Einzugsgebietsklassifizierung (1)
- Elektroaktive Polymere (1)
- Elementarteilchen (1)
- Eltern (1)
- Eltern-Kind-Assoziation (1)
- EnMAP (1)
- Enterolignanen (1)
- Enterolignans (1)
- Entwicklung des Projektunterrichts in der BRD (1)
- Entwicklungsökonomik (1)
- Entzündung (1)
- Erdbeben (1)
- Erdrutsch (1)
- Ereignisdokumentation (1)
- Ernährungsfaktoren (1)
- Erosion (1)
- Erzgebirge (1)
- European Union (1)
- European Union research policy (1)
- European expansion (1)
- Europäische Forschungspolitik (1)
- Europäische Union (1)
- Evaluationsnutzung (1)
- Evaluationsverwendung (1)
- Evolution (1)
- ExPEC (1)
- Expansion Europas (1)
- Expertise (1)
- Exportplattform (1)
- Extension (1)
- Fehlerquellen der Modellierung (1)
- Feld (1)
- Film (1)
- Flagellenbewegung (1)
- Fluktuations-Dissipations-Theorem (1)
- Fluoreszenzbildgebung (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus+Context Visualization (1)
- Fokus-&-Kontext Visualisierung (1)
- Forstwirtschaft (1)
- Fragmentierung (1)
- Frauen (1)
- Freiheit (1)
- GABA (1)
- Galaxienhaufen (1)
- Galaxy Struktur (1)
- Gartenkultur (1)
- Gefahrenanalyse (1)
- Gehirn (1)
- Georgia (1)
- Georgien (1)
- Geothermie (1)
- German (1)
- German past participles (1)
- Germany (1)
- Germany and Egypt culture comparison (1)
- Geschmack (1)
- Gletschervorfeld (1)
- Globalisierung (1)
- Glutathionperoxidase (1)
- Glycopeptoid (1)
- Glykogen (1)
- Grauliteratur (1)
- Gruppenfreistellungsverordnung (1)
- Gärten (1)
- Hemmung (1)
- Hochenergiephysik (1)
- Humankapital (1)
- Hydrogel (1)
- Hydrogenase (1)
- Hydrologie (1)
- Hyperschnellläufersterne (1)
- Hämolyse (1)
- IBD (1)
- Image (1)
- Imaging spectroscopy (1)
- Immediate-early-Gen (1)
- Impakt (1)
- InSAR (1)
- InSAR Datenanalyse (1)
- Index (1)
- Index Structures (1)
- Indexstrukturen (1)
- Informationsflüsse (1)
- Inklusionsabhängigkeit (1)
- Innovativität (1)
- Insekt (1)
- Institutionalisierte Evaluationsverfahren (1)
- Integralfeld-Spektroskopie (1)
- Interactive Rendering (1)
- Interaktives Rendering (1)
- Internalin J (1)
- Johann Friedrich Eschscholtz (1)
- Jugendalter (1)
- Kaffeeproteine (1)
- Kambodscha (1)
- Karbonat (1)
- Kartographie (1)
- Kfz (1)
- Kinder (1)
- Klimawandel (1)
- Kohlenstoff (1)
- Korrektursakkaden (1)
- Korruption (1)
- Kugelsternhaufen (1)
- Kultivierung (1)
- Kulturwissenschaft (1)
- Kurzkettige Fettsäuren (1)
- Körperbautyp (1)
- Körperfett (1)
- Körperunzufriedenheit (1)
- Lafora disease (1)
- Landepositionsfehler (1)
- Landnutzungswandel (1)
- Landschaftseffekte (1)
- Landslide (1)
- Lebensstil (1)
- Leistungsbewertung von Projekten (1)
- Leistungssport im interkulturellen Vergleich (1)
- Lesen (1)
- Lexikon (1)
- Lignan-converting bacteria (1)
- Lignan-umwandelnde Bakterien (1)
- Link-Entdeckung (1)
- Lipide (1)
- Lithosphäre (1)
- Louis Choris (1)
- Lusatia (1)
- MEO2MA (1)
- Marketing (1)
- Medicago truncatula (1)
- Methan (1)
- Methodik der Projektarbeit (1)
- Mikrobiologie (1)
- Mikrobiota (1)
- Mikrosakkaden (1)
- Milchstrassenmasse (1)
- Minderheiten (1)
- Mischmodelle (1)
- Mobilgeräte (1)
- Modell (1)
- Modellierung (1)
- Molkenproteine (1)
- Monoschichten (1)
- Morton Wormskiold (1)
- Motilität (1)
- Motivational and Volitional aspects of competitive sports (1)
- Motivationale und Volitionale Aspekte (1)
- Multivariate Analyse (1)
- Multivariate statistic (1)
- Musikrhythmus (1)
- Muttergalaxien (1)
- Mütter (1)
- NCA (1)
- NMR (1)
- NW Himalaja (1)
- NW Himalaya (1)
- Nachbeben (1)
- Naphthalenophane (1)
- Naphthalenophanes (1)
- Nation Branding (1)
- Naturgefahren (1)
- Naturgeschichte (1)
- Naturkunde (1)
- Negation (1)
- Neotektonik (1)
- Netzwerkanalyse (1)
- Nicht-photorealistisches Rendering (1)
- Nichtlineare Mikroskopie (1)
- Niedrigwasser (1)
- Nordostdeutsches Becken (1)
- Northeast German Basin (1)
- Nucleus tractus solitarii (1)
- OEGMA (1)
- OT-Modellierung (1)
- Oberflächenwärmefluß (1)
- Oberstufenzentren (1)
- Objektidentifikation (1)
- Optical sensor (1)
- Optische Sensoren (1)
- Organisationskultur (1)
- Otto von Kotzebue (1)
- OxyR (1)
- PES (1)
- PRM/Alf Maus (1)
- PRM/Alf mouse (1)
- PSF Analyse (1)
- PSF fitting (1)
- Paläo-Strain-Berechnung (1)
- Peers (1)
- Peptid-Membran-Wechselwirkung (1)
- Permafrost (1)
- Peter Bieri (1)
- Pflanzengemeinschaften (1)
- Pflanzliches Lignan (1)
- Pharmakologie (1)
- Phonetik (1)
- Phosphorylierung (1)
- Photonischer Kristall (1)
- Physik schwarzer Löcher (1)
- Phänotypische Heterogenität (1)
- Planetare Geologie (1)
- Planetary Geology (1)
- Plant lignan (1)
- Politikevaluation (1)
- Polyamine (1)
- Polyglycin (1)
- Polythiophen (1)
- Probenahmestrategie (1)
- Produkterleben (1)
- Projektdidaktik (1)
- Projektorganisation und –kultur (1)
- Prosodie (1)
- Protein (1)
- Protein-Wechselwirkungen (1)
- Proteinfaltung (1)
- Proteinmodifizierung (1)
- Proteom (1)
- Prozesskette (1)
- Prozessmodellsuche (1)
- Präsentation (1)
- Pseudobeobachtungen (1)
- Pseudomonas putida (1)
- Public Diplomacy (1)
- Qualitätsbewertung (1)
- Quantenfeldtheorie (1)
- Quantitative Daten (1)
- Query (1)
- RAVE (1)
- REDDplus (1)
- Random-Walk-Theorie (1)
- Raumzeitgeometrie (1)
- Reaktionszeitmethoden (1)
- Redox (1)
- Reform der öffentlichen Verwaltung (1)
- Regressionsanalyse (1)
- Reisebeschreibung (1)
- Reisen um 1800 (1)
- Reputation (1)
- Resonanzfluoreszenz (1)
- Rhizophagus irregularis (1)
- Rhizosphere (1)
- Ricci flow (1)
- Ricci-Fluss (1)
- Ringspannung (1)
- Romanzoff (1)
- Romanzoff Expedition (1)
- Romanzow (1)
- Rotationsbarriere (1)
- Rotationskurven (1)
- Rurik (1)
- Röntgenastronomie (1)
- Satzgefüge (1)
- Sauerstoff (1)
- Saxo-Thuringia (1)
- Schema-Entdeckung (1)
- Schwimmende Mikroorganismen (1)
- Search Algorithms (1)
- Seismische Geschwindigkeiten (1)
- Seismische Interferometrie (1)
- Seismische Tomographie (1)
- Sekundärsakkaden (1)
- Selbstwirksamkeit (1)
- Selen (1)
- Semantik (1)
- Service-orientierte Systeme (1)
- Silikonelastomere (1)
- Similarity Measures (1)
- Similarity Search (1)
- Skala (1)
- Skeletal robustness (1)
- Skelettrobustizität (1)
- Soil hydrology (1)
- Sozialdarwinismus (1)
- Sozialer Druck (1)
- Speicheldrüse (1)
- Sportringen (1)
- Sports Wrestling (1)
- Spracherwerb (1)
- Sprachrhythmus (1)
- Sterndynamik (1)
- Stochastischer Algorithmus (1)
- Strahlung Mechanismen (1)
- Stärke (1)
- Stärkemetabolismus (1)
- Subsurface Biosphere (1)
- Suchverfahren (1)
- Suigetsu (1)
- Synbiotika (1)
- Systems of Systems (1)
- Säuglingsnahrung (1)
- TAS2R (1)
- Temperaturfeld (1)
- Thermal-conductivity (1)
- Thermoresponsiv (1)
- Tien Shan (1)
- Tonsprache (1)
- Transdisziplinarität (1)
- Transformationsprozess (1)
- Transkriptom Sequenzierung (1)
- Tritium Assay (1)
- Tritium Versuchsanordnung (1)
- Tropen (1)
- Unification Treaty (1)
- Unruh effect (1)
- Unruh-Effekt (1)
- Unsicherheiten (1)
- Untergrunduntersuchung der Biosphäre (1)
- Unternehmerische Universitäten (1)
- Variationsstabilität (1)
- Verantwortung (1)
- Verifikation (1)
- Victorian (1)
- Vietnamese (1)
- Vietnamesen (1)
- Vulnerabilität (1)
- W-Fragen (1)
- Waldbewirtschaftung (1)
- Web of Data (1)
- Well-log analysis (1)
- Weltreisen (1)
- Weltumsegelung (1)
- Wertesystem der Beschäftigten des öffentlichen Dienstes (1)
- Wertorientierungen (1)
- Wissens- und Technologietransfer (1)
- Wissenschaftsgeschichte (1)
- Wärmeleitfähigkeit (1)
- X-ray astronomy (1)
- Zellimmobilisierung (1)
- Zellmembranen (1)
- Zellproliferation (1)
- Zuckertransporter (1)
- Zweizustandsmodell (1)
- activity-regulated cytoskeleton-associated protein (1)
- adaptation (1)
- adhesion (1)
- adolescence (1)
- aesthetic preferences (1)
- aesthetic user requirements (1)
- affiliation networks (1)
- aftershock (1)
- age differences (1)
- altruism (1)
- an intercultural comparison (1)
- anisotropic Kuwahara filter (1)
- antibiotic resistance (1)
- antimicrobial peptides (1)
- apoptosis (1)
- arbuscular mycorrhizal symbiosis (1)
- arbuskuläre Mykorrhizasymbiose (1)
- arktische Tundra (1)
- astroparticle physics (1)
- asymmetric synthesis (1)
- asymmetrische Synthese (1)
- attentional control (1)
- ausländische Direktinvestitionen (1)
- automated planning (1)
- automotive (1)
- automotive design (1)
- barrier of rotation energy (1)
- behavioral specification (1)
- berufliche Interessen (1)
- bifidobacterium (1)
- biomaterials (1)
- biopolymers (1)
- biorelevant (1)
- bitter (1)
- bitter taste (1)
- black hole physics (1)
- block exemption (1)
- body dissatisfaction (1)
- brain (1)
- car (1)
- case ambiguity (1)
- catFISH (1)
- catchment classification (1)
- cell immobilization (1)
- cell tracking (1)
- child language (1)
- childhood abuse (1)
- children (1)
- chlorogenic acid (1)
- chronic and acute inflammation (1)
- circumnavigation (1)
- clause linkage (1)
- climate (1)
- clustering (1)
- clusters of galaxies (1)
- coffee proteins (1)
- coherence-enhancing filtering (1)
- colon carcinogenesis (1)
- commensal (1)
- committee governance (1)
- corrective saccades (1)
- cosmic-ray (1)
- coupled fluid and heat transport (1)
- cultivation (1)
- cultural resources (1)
- cultural studies (1)
- cultural types (1)
- cytokines (1)
- dark matter (1)
- data analysis (1)
- data integration (1)
- decentralization (1)
- decomposition (1)
- dependency discovery (1)
- design (1)
- design management (1)
- design thinking (1)
- design thinking discourse (1)
- deutsche Partizipien (1)
- development economics (1)
- development of project-based-learning (1)
- developpement de l’enseignement des projets en Allemagne fédérale (1)
- didactics of project-based-learning (1)
- didactique des projets (1)
- dietary factors (1)
- difference of Gaussians (1)
- dispersal (1)
- displacement (1)
- disturbed eating (1)
- diversity (1)
- downstep (1)
- drug tolerance (1)
- dunkle Materie (1)
- earthquake (1)
- eco-hydrological modelling (1)
- ecological modelling (1)
- economies of scope (1)
- education (1)
- effective discourse (1)
- electroactive polymers (1)
- elementary particles (1)
- energy density (1)
- entity alignment (1)
- entrepreneurial mission (1)
- entrepreneurial university (1)
- erosion (1)
- evaluation use (1)
- evaluation utilization (1)
- event documentation (1)
- evidence-based policy (1)
- evolution (1)
- expedition (1)
- expertise (1)
- export platforms (1)
- extension (1)
- faults (1)
- film (1)
- flagellar filaments (1)
- flood events (1)
- flow-based bilateral filter (1)
- fluctuation dissipation theorem (1)
- fluorescence imaging (1)
- forecast (1)
- foreign direct investments (1)
- forest management (1)
- forestry (1)
- formal cognitive models (1)
- formale kognitive Modelle (1)
- fragmentation (1)
- freedom (1)
- functional diversity (1)
- funktionelle Variabilität (1)
- galactic structure (1)
- galaxy structure (1)
- garden cultures (1)
- gardening (1)
- gekoppelter Fluid-und Wärmetransport (1)
- genetic enhancement (1)
- geologische Störungen (1)
- geothermics (1)
- gesture (1)
- gestörtes Essverhalten (1)
- glacier forefield (1)
- globalization (1)
- globular clusters (1)
- glutathione peroxidase (1)
- glycogen (1)
- glycopeptoid (1)
- goblet cells (1)
- good governance (1)
- graph clustering (1)
- grey literature (1)
- groEL (1)
- gut length (1)
- gut microbiota (1)
- hazard assessments (1)
- hemolysis (1)
- heteroatom (1)
- heterologe Expression (1)
- heterologous expression (1)
- high energy astrophysics (1)
- high energy physics (1)
- high-pressure incubation system (1)
- history of science (1)
- hochenergetische Astrophysik (1)
- host galaxies (1)
- human capital (1)
- human-computer interaction (1)
- hydrogel (1)
- hydrogels (1)
- hydrological flow paths (1)
- hydrologische Fließpfade (1)
- hydrology (1)
- hypervelocity stars (1)
- image (1)
- image data analysis (1)
- immediate early gene (1)
- immune response (1)
- impact (1)
- inclusion dependency (1)
- index (1)
- indigene Völker (1)
- indigenous peoples (1)
- infection (1)
- inflammation (1)
- information flow (1)
- inhibition (1)
- innovativeness (1)
- input accuracy (1)
- insect (1)
- institutional leadership (1)
- integral field spectroscopy (1)
- interaction (1)
- interactive simulation (1)
- interface (1)
- international trade (1)
- internationaler Handel (1)
- intestinal microbiota (1)
- intestinale Mikrobiota (1)
- journey around the world (1)
- judging of projects (1)
- kindliche Sprachverarbeitung (1)
- knowledge- and technology transfer (1)
- kosmische Neutronenstrahlung (1)
- kulturelle Ressourcen (1)
- kulturelle Verhaltensformen (1)
- körperliche Bewegung (1)
- lamprophyre (1)
- land-use change (1)
- landscape effects (1)
- landscape hydrology (1)
- language acquisition (1)
- leucine-rich repeat protein (1)
- leucinreiches repeat-Protein (1)
- lexicon (1)
- liberal eugenics (1)
- liberale Eugenik (1)
- life-style analysis (1)
- lineare spektrale Entmischung (1)
- link discovery (1)
- lipid biomarkers (1)
- lipids (1)
- lithosphere (1)
- low flow (1)
- low molecular weight organic acids (1)
- machine learning (1)
- map/reduce (1)
- marketing (1)
- maschinelles Lernen (1)
- membranes (1)
- methane (1)
- methods of project-based-learning (1)
- microbial activity (1)
- microbiology (1)
- microbiota (1)
- microsaccades (1)
- mikrobielle Gemeinschaften (1)
- minorities (1)
- mixture models (1)
- mobile (1)
- mobile devices (1)
- model (1)
- model-based prototyping (1)
- modelling (1)
- modelling error sources (1)
- molecular doping (1)
- molekulares Dotieren (1)
- monitoring (1)
- monolayer (1)
- morphological processing (1)
- morphology (1)
- mothers (1)
- motility (1)
- mucus (1)
- multivariate Statistik (1)
- multivariate statistics (1)
- musical rhythm (1)
- méthodes des projets (1)
- nation branding (1)
- natural hazards (1)
- natural history (1)
- natural language generation (1)
- natural sciences (1)
- natural terrestrial landforms (1)
- naturalistic research (1)
- natürliche terrestrische Oberflächenformen (1)
- negation (1)
- neoinstitutional organizational theory (1)
- neoinstitutionale Organisationstheorie (1)
- neotectonics (1)
- network analysis (1)
- neutron field (1)
- nicht-Markovsche Dynamik (1)
- nicht-kanonische Nebensätze (1)
- non-Markovian dynamics (1)
- non-canonical clauses (1)
- non-linear microscopy (1)
- non-photorealistic rendering (1)
- nucleus of the solitary tract (1)
- object identification (1)
- offenes Quantensystem (1)
- open quantum system (1)
- organic dipoles (1)
- organic electronics (1)
- organic semiconductor (1)
- organisation and culture of project-based-learning (1)
- organisation et culture des projets (1)
- organische Dipole (1)
- organische Elektronik (1)
- organischer Halbleiter (1)
- organischer Kohlenstoff (1)
- organizational culture (1)
- orogenic evolution (1)
- overweight (1)
- oxygen (1)
- pH (1)
- paleo-strain calculation (1)
- parafoveal processing (1)
- parafoveale Verarbeitung (1)
- parent-child-association (1)
- parents (1)
- pathogen (1)
- peers (1)
- peptide-membrane-interaction (1)
- percentage of body fat (1)
- pharmacology (1)
- phenotypic heterogeneity (1)
- phonetics (1)
- phosphorylation (1)
- photonic crystal (1)
- physical activity (1)
- plant communities (1)
- policy-evaluation (1)
- political economics (1)
- politische Ökonomik (1)
- polyamines (1)
- polyglycine (1)
- polythiophene (1)
- pornography (1)
- prediction (1)
- presentation (1)
- probiotics (1)
- process chain (1)
- process model search (1)
- product experience (1)
- prosodisch (1)
- protein (1)
- protein folding (1)
- protein interactions (1)
- protein modification (1)
- proteome (1)
- proximity-concentration trade-off (1)
- pseudomonas putida (1)
- public administration reform (1)
- public diplomacy (1)
- public employees’ value system (1)
- quality assessment framework (1)
- quantitative data (1)
- quantum field theory (1)
- querying (1)
- radiation mechanisms (1)
- radiocarbon (1)
- random walk (1)
- rapid prototyping (1)
- reaction time methods (1)
- reading (1)
- redox (1)
- regional labor market (1)
- regionale Arbeitsmärkte (1)
- regionale Hydrologie (1)
- regression analysis (1)
- reproduktive Selbstbestimmung (1)
- reputation (1)
- requirements engineering (1)
- resonance fluorescence (1)
- responsibility (1)
- returns to education (1)
- rhizosphere (1)
- rotation curves (1)
- räumliche Kalibrierung (1)
- saccadic error (1)
- salivary gland (1)
- scale (1)
- schema discovery (1)
- science of vocational training (1)
- secondary saccades (1)
- sedimentary organic matter (1)
- seismic interferometry (1)
- seismic tomography (1)
- seismic velocities (1)
- selenium (1)
- self-efficacy expectations (1)
- semantics (1)
- service-oriented systems (1)
- sexual aggression (1)
- sexual scripts (1)
- short chain fatty acids (1)
- silicone elastomers (1)
- similarity (1)
- single cell analysis (1)
- situated context (1)
- social pressure (1)
- soil (1)
- soil constituents mapping (1)
- soil moisture (1)
- soil organic carbon (1)
- sol-gel (1)
- somatotype (1)
- spacetime geometry (1)
- spatial calibration (1)
- spectral unmixing (1)
- spectro-directional (1)
- speech rhythm (1)
- spektro-direktional (1)
- starch (1)
- starch metabolism (1)
- starter formula (1)
- stellar dynamics (1)
- stochastic algorithms (1)
- strain energy (1)
- sugar transporter (1)
- supercapacitor (1)
- surface heat flow (1)
- survey (1)
- synbiotics (1)
- syntactic economy (MP) (1)
- syntactic processing (1)
- syntaktische Ambiguität (1)
- systems of systems (1)
- taste (1)
- taxation (1)
- tectonics (1)
- temperature field analysis (1)
- thermal model (1)
- thermisches Modell (1)
- thermochronology (1)
- thermodynamic stability (1)
- thermodynamische Stabilität (1)
- thermoresponsiv (1)
- thermosensitive (1)
- tone language (1)
- topics (1)
- touch input (1)
- transcriptome sequencing (1)
- transdisciplinarity (1)
- transformation process (1)
- travelling around 1800 (1)
- travelogue (1)
- tropics (1)
- two-state model (1)
- uncertainties (1)
- unternehmerische Mission (1)
- values (1)
- variational stability (1)
- vehicle (1)
- verification (1)
- virulence-associated genes (1)
- virulenzassoziierte Gene (1)
- vocational interests (1)
- vocational training and education research (1)
- voyage around the world (1)
- vulnerability (1)
- wh-questions (1)
- whey proteins (1)
- women writers (1)
- working memory capacity (1)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
- Ökologie (1)
- Ökonomieprinzipien (MP) (1)
- Ökotoxikologie (1)
- Übergewicht (1)
- ästhetische Nutzeranforderungen (1)
- ästhetische Präferenzen (1)
- évaluation des projets (1)
- ökohydrologische Modellierung (1)
- ökologische Modellierung (1)
Institute
- Institut für Geowissenschaften (25)
- Institut für Physik und Astronomie (12)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (11)
- Institut für Biochemie und Biologie (9)
- Institut für Ernährungswissenschaft (9)
- Department Psychologie (8)
- Wirtschaftswissenschaften (8)
- Institut für Chemie (6)
- Department Linguistik (5)
- Institut für Informatik und Computational Science (5)
The nutrient exchange between plant and fungus is the key element of the arbuscular mycorrhizal (AM) symbiosis. The fungus improves the plant’s uptake of mineral nutrients, mainly phosphate, and water, while the plant provides the fungus with photosynthetically assimilated carbohydrates. Still, knowledge about the mechanisms of the nutrient exchange between the symbiotic partners is very limited. Therefore, transport processes of both the plant and the fungal partner are investigated in this study. In order to enhance the understanding of the molecular basis underlying this tight interaction between the roots of Medicago truncatula and the AM fungus Rhizophagus irregularis, genes involved in transport processes of both symbiotic partners are analysed here. The AM-specific regulation and cell-specific expression of potential M. truncatula transporter genes, found to be specifically regulated in arbuscule-containing and in non-arbusculated cells of mycorrhizal roots, were confirmed. A model for carbon allocation in mycorrhizal roots is suggested, in which carbohydrates are mobilized in non-arbusculated cells and symplastically provided to the arbuscule-containing cells. New insights into the mechanisms of carbohydrate allocation were gained by the analysis of the hexose/H+ symporter MtHxt1, which is regulated in distinct cells of mycorrhizal roots. Metabolite profiling of leaves and roots of a knock-out mutant, hxt1, showed that it indeed has an impact on the carbohydrate balance in the course of the symbiosis throughout the whole plant, and on the interaction with the fungal partner. The primary metabolite profile of M. truncatula was shown to be altered significantly in response to mycorrhizal colonization. Additionally, molecular mechanisms determining the progress of the interaction in the fungal partner of the AM symbiosis were investigated. The R. 
irregularis transcriptome in planta and in extraradical tissues gave new insight into genes that are differentially expressed in these two fungal tissues. Over 3200 fungal transcripts with a significantly altered expression level in laser capture microdissection-collected arbuscules compared to extraradical tissues were identified. Among them, six previously unknown, specifically regulated potential transporter genes were found. These are likely to play a role in the nutrient exchange between plant and fungus. While the substrates of three potential MFS transporters are as yet unknown, two potential sugar transporters might play a role in the carbohydrate flow towards the fungal partner. In summary, this study provides new insights into transport processes between plant and fungus in the course of the AM symbiosis, analysing M. truncatula at the transcript and metabolite level, and provides a dataset of the R. irregularis transcriptome in planta, offering a wealth of new information for future work.
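As an aside, the kind of fold-change screening behind such a differential-expression count can be sketched in a few lines. The transcript names, counts, and the log2 fold-change cutoff below are invented for illustration and are not taken from the thesis' actual analysis pipeline (which would also involve count normalization and a statistical significance test):

```python
import math

def log2_fold_change(arbuscule_count, extraradical_count, pseudocount=1.0):
    """log2 ratio of normalized counts; the pseudocount avoids division by zero."""
    return math.log2((arbuscule_count + pseudocount) /
                     (extraradical_count + pseudocount))

def differentially_expressed(counts, threshold=2.0):
    """Return transcript ids whose |log2 fold change| meets the threshold."""
    return {tid for tid, (arb, ext) in counts.items()
            if abs(log2_fold_change(arb, ext)) >= threshold}

# Toy data: transcript id -> (arbuscule sample, extraradical sample)
counts = {
    "sugar_transporter_1": (120.0, 4.0),   # strongly arbuscule-enriched
    "mfs_transporter_2": (3.0, 96.0),      # extraradical-enriched
    "housekeeping_3": (50.0, 55.0),        # essentially unchanged
}
print(sorted(differentially_expressed(counts)))
```

With a cutoff of |log2 FC| ≥ 2 (a four-fold change), only the first two toy transcripts would be flagged.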
A detailed description of the characteristics of antimicrobial peptides (AMPs) is in high demand, since resistance to traditional antibiotics is an emerging problem in medicine. AMPs are part of the innate immune system in every organism, and they are very efficient in the protection against bacteria, viruses, fungi and even cancer cells. Their advantage is that their target is the cell membrane, in contrast to antibiotics, which disturb the metabolism of the respective cell type. This allows AMPs to act faster and more effectively. The lack of an efficient therapy for some cancer types and the development of resistance to existing antitumor agents make AMPs promising in cancer therapy, besides being an alternative to traditional antibiotics. The aim of this work was the physicochemical characterization of two fragments of LL-37, a human antimicrobial peptide from the cathelicidin family. The fragments LL-32 and LL-20 exhibited contrary behavior in biological experiments concerning their activity against bacterial cells, human cells and human cancer cells: LL-32 had an even higher activity than LL-37, while LL-20 had almost no effect. The interaction of the two fragments with model membranes was systematically studied in this work to understand their mode of action. Planar lipid films were mainly applied as model systems, in combination with IR spectroscopy and X-ray scattering methods. Circular dichroism spectroscopy in bulk systems completed the results. In the first approach, the structure of the peptides was determined in aqueous solution and compared to the structure of the peptides at the air/water interface. In bulk, both peptides are in an unstructured conformation. Adsorbed and confined at the air/water interface, the peptides differ drastically in their surface activity as well as in their secondary structure. While LL-32 transforms into an α-helix lying flat at the water surface, LL-20 stays partly unstructured. 
This is in good agreement with the high antimicrobial activity of LL-32. In the second approach, experiments with lipid monolayers as biomimetic models for the cell membrane were performed. It could be shown that the peptides fluidize condensed monolayers of negatively charged DPPG, which can be related to the thinning of a bacterial cell membrane. An interaction of the peptides with zwitterionic PCs, as models for mammalian cells, was not clearly observed, even though LL-32 is haemolytic. In the third approach, the lipid monolayers were further adapted to the composition of human erythrocyte membranes by incorporating sphingomyelin (SM) into the PC monolayers. Physicochemical properties of the lipid films were determined and the influence of the peptides on them was studied. It could be shown that the interaction of the more active LL-32 is strongly increased for heterogeneous lipid films containing both gel and fluid phases, while the interaction of LL-20 with the monolayers was unaffected. The results indicate an interaction of LL-32 with the membrane in a detergent-like way. Additionally, the peptide interaction with cancer cells was modelled by incorporating some negatively charged lipids into the PC/SM monolayers, but the increased charge had no effect on the interaction of LL-32. It was concluded that the high anti-cancer activity of the peptide originates from the changed fluidity of the cell membrane rather than from an increased surface charge. Furthermore, similarities to the physicochemical properties of melittin, an AMP from bee venom, were demonstrated.
Diet is a major force influencing the intestinal microbiota. This is obvious from drastic changes in microbiota composition after a dietary alteration. Due to the complexity of the commensal microbiota and the high inter-individual variability, little is known about the bacterial response at the cellular level. The objective of this work was to identify mechanisms that enable gut bacteria to adapt to dietary factors. For this purpose, germ-free mice monoassociated with the commensal Escherichia coli K-12 strain MG1655 were fed three different diets over three weeks: a diet rich in starch, a diet rich in non-digestible lactose and a diet rich in casein. Two-dimensional gel electrophoresis and electrospray tandem mass spectrometry were applied to identify differentially expressed proteins of E. coli recovered from the small intestine and caecum of mice fed the lactose or casein diets, in comparison with those of mice fed the starch diet. Selected differentially expressed bacterial proteins were characterised in vitro for their possible roles in bacterial adaptation to the various diets. Proteins belonging to the OxyR oxidative stress regulon, such as alkyl hydroperoxide reductase subunit F (AhpF), DNA protection during starvation protein (Dps) and ferric uptake regulatory protein (Fur), which are required for E. coli’s oxidative stress response, were upregulated in E. coli of mice fed the lactose-rich diet. Reporter gene analysis revealed that not only oxidative stress but also carbohydrate-induced osmotic stress led to the OxyR-dependent expression of ahpCF and dps. Moreover, the growth of E. coli mutants lacking the ahpCF or oxyR genes was impaired in the presence of non-digestible sucrose. This indicates that some OxyR-dependent proteins are crucial for the adaptation of E. coli to osmotic stress conditions. In addition, the function of two so far poorly characterised E. 
coli proteins was analysed: 2-deoxy-D-gluconate 3-dehydrogenase (KduD) was upregulated in intestinal E. coli of mice fed the lactose-rich diet, and both this enzyme and 5-keto-4-deoxyuronate isomerase (KduI) were downregulated on the casein-rich diet. Reporter gene analysis identified galacturonate and glucuronate as inducers of kduD and kduI gene expression. Moreover, KduI was shown to facilitate the breakdown of these hexuronates, which are normally degraded by uronate isomerase (UxaC), altronate oxidoreductase (UxaB), altronate dehydratase (UxaA), mannonate oxidoreductase (UxuB) and mannonate dehydratase (UxuA), whose expression was repressed by osmotic stress. The growth of kduID-deficient E. coli on galacturonate or glucuronate was impaired in the presence of osmotic stress, suggesting that KduI and KduD compensate for the function of the regular hexuronate-degrading enzymes under such conditions. This indicates a novel function of KduI and KduD in E. coli’s hexuronate metabolism. Promotion of the intracellular formation of hexuronates by lactose connects these in vitro observations with the induction of KduD on the lactose-rich diet. Taken together, this study demonstrates the crucial influence of osmotic stress on the gene expression of E. coli enzymes involved in stress response and metabolic processes. Therefore, the adaptation to diet-induced osmotic stress is a possible key factor for bacterial colonisation of the intestinal environment.
This dissertation describes the preparation of ring-shaped compounds (naphthalenophanes) by means of the dehydro-Diels-Alder reaction, which always yields pairs of enantiomers. The diastereoselective construction of naphthalenophanes and the enantiomerically pure construction of biaryls are investigated. Furthermore, the physical properties of the compounds obtained, such as phosphorescence, the separability of the resulting enantiomers, and ring strain, are described.
Information flows in EU policy-making are heavily dependent on personal networks, both within the Brussels sphere and reaching beyond the narrow limits of the Belgian capital. These networks develop, for example, in the course of formal and informal meetings or at the sidelines of such meetings. A plethora of committees at European, transnational and regional level provides the basis for the establishment of pan-European networks. By studying affiliation to those committees, basic network structures can be uncovered. These affiliation network structures can then be used to predict EU information flows, assuming that certain positions within the network are advantageous for tapping into streams of information while others are too remote and peripheral to provide access to information early enough. This study has tested those assumptions for the case of the reform of the Common Fisheries Policy for the time after 2012. Through the analysis of an affiliation network based on participation in 10 different fisheries policy committees over two years (2009 and 2010), network data for an EU-wide network of about 1,300 fisheries interest group representatives and more than 200 events were collected. The structure of this network showed a number of interesting patterns, such as an unsurprisingly central role of Brussels-based committees, but also close relations of very specific interests to the Brussels cluster and stronger relations between geographically closer maritime regions. The analysis of information flows then focused on access to draft EU Commission documents containing the upcoming proposal for a new basic regulation of the Common Fisheries Policy. It was first documented that it would have been impossible to officially obtain this document and that personal networks were thus the most likely sources for fisheries policy actors to obtain access to these “leaks” in early 2011. 
A survey of a sample of 65 actors from the initial network supported these findings: only a very small group had accessed the draft directly from the Commission. Most respondents who obtained access to the draft had received it from other actors, highlighting the networked flow of informal information in EU politics. Furthermore, the testing of the hypotheses connecting network positions and the level of informedness indicated that presence in or connections to the Brussels sphere conferred advantages both for overall access to the draft document and with regard to timing. Methodologically, the challenges of both the network analysis and the analysis of information flows, as well as their relevance for the study of EU politics, have been documented. In summary, this study has laid the foundation for a different way to study EU policy-making by connecting topical and methodological elements, such as affiliation network analysis and EU committee governance, which so far have not been considered together, thereby contributing in various ways to political science and EU studies.
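The core step of such an affiliation analysis, projecting a two-mode actor-committee network onto a one-mode actor network, can be sketched in a few lines of Python. The representative and committee names below are invented placeholders, not data from the study:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical affiliation data: representatives mapped to the fisheries
# committees whose meetings they attended (names are illustrative only).
affiliations = {
    "rep_A": {"brussels_acfa", "baltic_rac"},
    "rep_B": {"brussels_acfa", "brussels_expert"},
    "rep_C": {"baltic_rac"},
    "rep_D": {"brussels_expert", "medit_rac"},
}

# One-mode projection: two representatives are tied if they attended at
# least one committee together; the tie weight counts shared committees.
ties = defaultdict(int)
for a, b in combinations(sorted(affiliations), 2):
    shared = affiliations[a] & affiliations[b]
    if shared:
        ties[(a, b)] = len(shared)

# Degree in the projected network as a simple proxy for how well an
# actor is positioned to tap into information flows.
degree = defaultdict(int)
for (a, b), w in ties.items():
    degree[a] += 1
    degree[b] += 1
```

In the study itself, positional measures on the projected network would then be related to the survey data on who obtained the leaked draft and how early.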
The subject of this thesis is so-called non-canonical or unintegrated subordinate clauses. These clauses are characterised by the fact that they cannot be clearly described as coordinated or subordinated using common criteria (constituent status, verb-final word order). The phenomenon of non-canonical subordinate clauses has been discussed in linguistics in general since the late 1970s (Davison 1979) and, at the latest with Fabricius-Hansen (1992), has also arrived in German linguistics. Besides the mere identification of non-canonical clause complexes, a widely discussed issue has been the construction of a classification that captures at least some non-canonical complexes, as can be seen in Fabricius-Hansen (1992) and Reis (1997). The aim of this study is to provide an exhaustive classification of the clause types in question. To this end, all potential markers of subordination are first examined in detail with the help of corpus data, since most previous studies on the topic take the same set of markers for granted. It will turn out that only a small number of markers are truly suited to provide unambiguous evidence about the quality of clause linkage. The taxonomy of German subordinate clauses established subsequently will make do with postulating a single class of non-canonical subordinate clauses. It is, moreover, able to capture the numerous exceptional cases. Concretely, this means that subordinate clauses which behave partly idiosyncratically due to certain properties can simply be incorporated into the proposed classification. In this context, I will further show how a classification of subordinate clauses can also do justice to so-called secondary markers of subordination, even though these do not behave uniformly across the individual clause classes. Finally, I will present a theoretical modelling of the previously postulated taxonomy which, on the basis of HPSG and by means of feature inheritance, is able to capture all possible types of subordinate clauses.
Logging and large earthquakes are disturbances that may significantly affect hydrological and erosional processes and process rates, although in decisively different ways. Despite numerous studies that have documented the impacts of both deforestation and earthquakes on water and sediment fluxes, a number of details regarding the timing and type of de- and reforestation, seismic impacts on subsurface water fluxes, or the overall geomorphic work involved have remained unresolved. The main objective of this thesis is to address these shortcomings and to better understand and compare the hydrological and erosional process responses to such natural and man-made disturbances. To this end, south-central Chile provides an excellent natural laboratory owing to its high seismicity and the ongoing conversion of land into highly productive plantation forests. In this dissertation I combine paired catchment experiments, data analysis techniques, and physics-based modelling to investigate: 1) the effect of plantation forests on water resources, 2) the source and sink behavior of timber harvest areas in terms of overland flow generation and sediment fluxes, 3) geomorphic work and its efficiency as a function of seasonal logging, 4) possible hydrologic responses of the saturated zone to the 2010 Maule earthquake, and 5) responses of the vadose zone to this earthquake. Re 1) In order to quantify the hydrologic impact of plantation forests, it is fundamental to first establish their water balances. I show that tree species is not significant in this regard, i.e. Pinus radiata and Eucalyptus globulus do not trigger decisively different hydrologic responses. Instead, water consumption is more sensitive to soil-water supply under the local hydro-climatic conditions. Re 2) Contradictory opinions exist about whether timber harvest areas (THAs) generate or capture overland flow and sediment. 
Although THAs contribute significantly to hydrology and sediment transport because of their spatial extent, little is known about the hydrological and erosional processes occurring on them. I show that THAs may act as both sources and sinks for overland flow, which in turn intensifies surface erosion. Above a rainfall intensity of ~20 mm/h, which corresponds to <10% of all rainfall, THAs may generate runoff, whereas below that threshold they remain sinks. The overall contribution of Hortonian runoff is thus secondary considering the local rainfall regime. The bulk of both runoff and sediment is generated by Dunne (saturation-excess) overland flow. I also show that logging may increase infiltrability on THAs, which may cause an initial decrease in streamflow followed by an increase after the groundwater storage has been refilled. Re 3) I present changes in frequency-magnitude distributions following seasonal logging by applying Quantile Regression Forests in hitherto unprecedented detail. It is clearly the season that controls the hydro-geomorphic work efficiency of clear cutting. Logging, particularly dry-season logging, caused a shift of work efficiency away from flashy events towards more frequent moderate rainfall-runoff events. The sediment transport is dominated by Dunne overland flow, which is consistent with physics-based modelling using WASA-SED. Re 4) It is well accepted that earthquakes may affect hydrological processes in the saturated zone. Assuming such flow conditions, consolidation of saturated saprolitic material is one possible response. Consolidation raises the hydraulic gradients, which may explain the observed increase in discharge following earthquakes. The squeezed-out water saturates the soil, which in turn increases the water accessible for plant transpiration. Post-seismic enhanced transpiration is reflected in the intensification of diurnal cycling. 
Re 5) Assuming unsaturated conditions, I present the first evidence that the vadose zone may also respond to seismic waves by releasing pore water, which in turn feeds groundwater reservoirs. In this way, water tables along the valley bottoms are raised, providing additional water resources to the riparian vegetation. By inverse modelling, the transient increase in transpiration is found to be 30-60%. Based on the available data, neither hypothesis can be conclusively tested. Finally, when comparing the hydrological and erosional effects of the Maule earthquake with the impact of planting exotic plantation forests, the overall observed earthquake effects are comparatively small and limited to short time scales.
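The source-sink threshold behaviour described in Re 2) can be illustrated with a minimal sketch, treating the ~20 mm/h value as a constant infiltration capacity (the numbers are illustrative assumptions, not calibrated values from the thesis):

```python
# Hortonian (infiltration-excess) partitioning on a timber harvest area:
# below the threshold intensity the area acts as a sink (all rain
# infiltrates); above it, the excess becomes overland flow.
THRESHOLD_MM_PER_H = 20.0  # illustrative infiltration capacity

def hortonian_runoff(intensity_mm_h, duration_h, threshold=THRESHOLD_MM_PER_H):
    """Return (runoff_mm, infiltrated_mm) for a block of constant rainfall."""
    depth = intensity_mm_h * duration_h
    if intensity_mm_h <= threshold:
        return 0.0, depth              # sink: everything infiltrates
    runoff = (intensity_mm_h - threshold) * duration_h
    return runoff, depth - runoff      # source: excess becomes overland flow

# A storm described as blocks of (intensity in mm/h, duration in h);
# only the 30 mm/h block exceeds the threshold and produces runoff.
storm = [(5, 2.0), (30, 0.5), (12, 1.0)]
totals = [hortonian_runoff(i, d) for i, d in storm]
total_runoff = sum(r for r, _ in totals)
```

Because most rainfall falls below the threshold, the sketch reproduces the abstract's point that Hortonian runoff is secondary under the local rainfall regime.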
In the presence of a solid-liquid or liquid-air interface, bacteria can choose between a planktonic and a sessile lifestyle. Depending on environmental conditions, cells swimming in close proximity to the interface can irreversibly attach to the surface and grow into three-dimensional aggregates in which the majority of cells are sessile and embedded in an extracellular polymer matrix (biofilm). We used microfluidic tools and time-lapse microscopy to perform experiments with the polarly flagellated soil bacterium Pseudomonas putida (P. putida), a bacterial species that is able to form biofilms. We analyzed individual trajectories of swimming cells, both in the bulk fluid and in close proximity to a glass-liquid interface. Additionally, surface-related growth during the early phase of biofilm formation was investigated. In the bulk fluid, P. putida shows a typical bacterial swimming pattern of alternating periods of persistent displacement along a line (runs) and fast reorientation events (turns), and cells swim with an average speed of around 24 micrometers per second. We found that the distribution of turning angles is bimodal with a dominating peak around 180 degrees. In approximately six out of ten turning events, the cell reverses its swimming direction. In addition, our analysis revealed that upon a reversal, the cell systematically changes its swimming speed by a factor of two on average. Based on the experimentally observed values of mean run time and rotational diffusion, we present a model that describes the spreading of a population of cells by a run-reverse random walker with alternating speeds. We successfully recover the mean square displacement and, with an extended version of the model, also the negative dip in the directional autocorrelation function observed in the experiments. 
The analytical solution of the model demonstrates that alternating speeds enhance a cell's ability to explore its environment as compared to a bacterium moving at a constant intermediate speed. Compared to the bulk fluid, for cells swimming near a solid boundary we observed an increase in swimming speed at distances below d = 5 micrometers and an increase in average angular velocity at distances below d = 4 micrometers. While the average speed was maximal, with an increase of around 15%, at a distance of d = 3 micrometers, the angular velocity was highest in closest proximity to the boundary at d = 1 micrometer, with an increase of around 90% compared to the bulk fluid. To investigate the swimming behavior in a confinement between two solid boundaries, we developed an experimental setup to acquire three-dimensional trajectories using a piezo-driven objective mount coupled to a high-speed camera. Results on speed and angular velocity were consistent with motility statistics in the presence of a single boundary. Additionally, an analysis of the probability density revealed that a majority of cells accumulated near the upper and lower boundaries of the microchannel. The increase in angular velocity is consistent with previous studies, in which bacteria near a solid boundary were shown to swim on circular trajectories, an effect that can be attributed to a wall-induced torque. The increase in speed at a distance of several times the size of the cell body, however, cannot be explained by existing theories, which either consider the increased drag on the cell body and flagellum near a boundary (resistive force theory) or model the swimming microorganism by a multipole expansion to account for the flow field interaction between cell and boundary. An accumulation of swimming bacteria near solid boundaries has been observed in similar experiments. 
Our results confirm that collisions with the surface play an important role and that hydrodynamic interactions alone cannot explain the steady-state accumulation of cells near the channel walls. Furthermore, we monitored the number growth of cells in the microchannel under nutrient-rich conditions. We observed that, after a lag time, initially isolated cells at the surface started to grow by division into colonies of increasing size, while coexisting with a comparably smaller number of swimming cells. After 5 hours and 50 minutes, we observed a sudden jump in the number of swimming cells, which was accompanied by a breakup of bigger clusters on the surface. After approximately 30 minutes, during which planktonic cells dominated in the microchannel, individual swimming cells reattached to the surface. We interpret this process as an emigration and recolonization event. A number of complementary experiments were performed to investigate the influence of collective effects or a depletion of the growth medium on the transition. Similar to earlier observations on another bacterium from the same family, we found that the release of cells into the swimming phase is most likely the result of an individual adaptation process, in which the synthesis of proteins for flagellar motility is upregulated after a number of division cycles at the surface.
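The run-reverse random walk with alternating speeds can be illustrated with a minimal 2D simulation. This is a simplified sketch, not the thesis model: every turn is assumed to be a perfect reversal, rotational diffusion is omitted, and the parameter values are illustrative (chosen so that the mean speed matches the reported ~24 micrometers per second):

```python
import math
import random

random.seed(1)
V_FAST, V_SLOW = 32.0, 16.0   # um/s; speeds alternate by a factor of two
MEAN_RUN = 1.0                # s; run durations drawn exponentially
DT_TOTAL = 20.0               # s; total trajectory duration

def trajectory():
    """One run-reverse walker: straight runs, 180-degree turns,
    speed toggling between V_FAST and V_SLOW after each reversal."""
    x = y = 0.0
    angle = random.uniform(0, 2 * math.pi)
    speed, t = V_FAST, 0.0
    while t < DT_TOTAL:
        run = min(random.expovariate(1 / MEAN_RUN), DT_TOTAL - t)
        x += speed * run * math.cos(angle)
        y += speed * run * math.sin(angle)
        t += run
        angle += math.pi                               # reverse direction
        speed = V_FAST if speed == V_SLOW else V_SLOW  # alternate speed
    return x, y

# Ensemble-averaged mean square displacement at time DT_TOTAL.
n = 2000
msd = sum(x * x + y * y for x, y in (trajectory() for _ in range(n))) / n
```

Comparing `msd` against a walker moving at the constant intermediate speed of 24 um/s would reproduce, qualitatively, the model's prediction that alternating speeds enhance spreading.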
Requirements engineers have to elicit, document, and validate how stakeholders act and interact to achieve their common goals in collaborative scenarios. Only after gathering all information concerning who interacts with whom to do what and why, can a software system be designed and realized which supports the stakeholders to do their work. To capture and structure requirements of different (groups of) stakeholders, scenario-based approaches have been widely used and investigated. Still, the elicitation and validation of requirements covering collaborative scenarios remains complicated, since the required information is highly intertwined, fragmented, and distributed over several stakeholders. Hence, it can only be elicited and validated collaboratively. In times of globally distributed companies, scheduling and conducting workshops with groups of stakeholders is usually not feasible due to budget and time constraints. Talking to individual stakeholders, on the other hand, is feasible but leads to fragmented and incomplete stakeholder scenarios. Going back and forth between different individual stakeholders to resolve this fragmentation and explore uncovered alternatives is an error-prone, time-consuming, and expensive task for the requirements engineers. While formal modeling methods can be employed to automatically check and ensure consistency of stakeholder scenarios, such methods introduce additional overhead since their formal notations have to be explained in each interaction between stakeholders and requirements engineers. Tangible prototypes as they are used in other disciplines such as design, on the other hand, allow designers to feasibly validate and iterate concepts and requirements with stakeholders. This thesis proposes a model-based approach for prototyping formal behavioral specifications of stakeholders who are involved in collaborative scenarios. 
By simulating and animating such specifications in a remote domain-specific visualization, stakeholders can experience and validate the scenarios captured so far, i.e., how other stakeholders act and react. This interactive scenario simulation is referred to as a model-based virtual prototype. Moreover, through observing how stakeholders interact with a virtual prototype of their collaborative scenarios, formal behavioral specifications can be automatically derived which complete the otherwise fragmented scenarios. This, in turn, enables requirements engineers to elicit and validate collaborative scenarios in individual stakeholder sessions – decoupled, since stakeholders can participate remotely and are not forced to be available for a joint session at the same time. This thesis discusses and evaluates the feasibility, understandability, and modifiability of model-based virtual prototypes. Similarly to how physical prototypes are perceived, the presented approach brings behavioral models closer to being tangible for stakeholders and, moreover, combines the advantages of joint stakeholder sessions and decoupled sessions.
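The idea of simulating a formal behavioral specification so that stakeholders can step through a scenario can be sketched as a labelled transition system; the states and events below are invented for illustration and are not taken from the thesis:

```python
# A toy behavioral specification: (current state, stakeholder action)
# pairs mapped to successor states. Names are hypothetical.
SPEC = {
    ("idle", "request_review"): "waiting",
    ("waiting", "approve"): "approved",
    ("waiting", "reject"): "idle",
}

def simulate(spec, state, events):
    """Replay a sequence of stakeholder actions against the specification
    and return the visited states. Actions with no defined transition
    leave the state unchanged; in a prototype session such gaps would be
    flagged and elicited from the stakeholder."""
    trace = [state]
    for event in events:
        state = spec.get((state, event), state)
        trace.append(state)
    return trace

# One stakeholder session: the scenario is experienced step by step.
trace = simulate(SPEC, "idle", ["request_review", "approve"])
```

In the approach described above, such a simulation would drive a domain-specific visualization, and observed stakeholder interactions would be fed back into the specification.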
The object of this thesis is to determine the discourses associated with the term “Design Thinking” and to work out their themes, concepts, and references. This objective arises from the multiple contradictions and ambiguities that characterise current uses of the term and impede its coherent use in academia and industry. The thesis is intended to contribute to a fundamental understanding of “Design Thinking” in its different discourse contexts and to provide a solid basis of argumentation for future uses of the term.
In this work, thermosensitive hydrogels with tunable thermo-mechanical properties were synthesized. Generally, the thermal transition of thermosensitive hydrogels is based on either a lower critical solution temperature (LCST) or a critical micelle concentration/temperature (CMC/CMT). The temperature-dependent transition from sol to gel with a large volume change may be seen in the former type of thermosensitive hydrogels and is negligible in CMC/CMT-dependent systems. The change in volume leads to the exclusion of water molecules, resulting in shrinking and stiffening of the system above the transition temperature. The volume change can be undesirable when cells are to be incorporated into the system. The gelation in the latter case is mainly driven by micelle formation above the transition temperature and further colloidal packing of micelles around the gelation temperature. As the gelation mainly depends on the concentration of the polymer, such a system can undergo fast dissolution upon addition of solvent. Here, it was envisioned to realize a thermosensitive gel based on two components: one responsible for a change in mechanical properties by the formation of reversible netpoints upon heating without volume change, and a second component conferring degradability on demand. As the first component, an ABA triblock copolymer with thermosensitive properties (here: poly(ethylene glycol)-b-poly(propylene glycol)-b-poly(ethylene glycol), PEPE), whose sol-gel transition on the molecular level is based on micellization and colloidal jamming of the formed micelles, was chosen, while biopolymers were employed as the additional macromolecular component crosslinking the formed micelles. The synthesis of the hydrogels was performed in two ways, either by physical mixing of compounds showing electrostatic interactions, or by covalent coupling of the components. 
Biopolymers (here: the polysaccharides hyaluronic acid (HA), chondroitin sulphate, or pectin, as well as the protein gelatin) were employed as additional macromolecular crosslinkers to simultaneously incorporate an enzyme responsiveness into the systems. In order to have strong ionic/electrostatic interactions between PEPE and the polysaccharides, PEPE was aminated to yield predominantly mono- or di-substituted PEPEs. The systems based on aminated PEPE physically mixed with HA showed an enhancement in mechanical properties such as the elastic modulus (G′) and viscous modulus (G′′), and a decrease of the gelation temperature (Tgel) compared to PEPE at the same concentration. Furthermore, by varying the amount of aminated PEPE in the composition, the Tgel of the system could be tailored to 27-36 °C. The physical mixtures of HA with di-amino PEPE (HA·di-PEPE) showed higher elastic moduli G′ and greater stability towards dissolution than the physical mixtures of HA with mono-amino PEPE (HA·mono-PEPE). This indicates a strong influence of the electrostatic interaction between the –COOH groups of HA and the –NH2 groups of PEPE. The physical properties of HA with di-amino PEPE (HA·di-PEPE) compare favourably with the physical properties of the human vitreous body: the systems are highly transparent and have a comparable refractive index and viscosity. Therefore, this material was tested for a potential biological application and was shown to be non-cytotoxic in eluate and direct contact tests. The materials will be investigated in further studies as vitreous body substitutes. In addition, enzymatic degradation of these hydrogels was performed using hyaluronidase to specifically degrade the HA. During the degradation of these hydrogels, an increase in the Tgel was observed along with a decrease in the mechanical properties. The aminated PEPEs were further utilised in the covalent coupling to pectin and chondroitin sulphate using EDC as a coupling agent. 
Here, it was possible to adjust the Tgel (28-33 °C) by varying the grafting density of PEPE on the biopolymer. The grafting of PEPE onto pectin enhanced the thermal stability of the hydrogel. The Pec-g-PEPE hydrogels were degradable by enzymes, with a slight increase in Tgel and a decrease in G′ over the degradation time. The covalent coupling of aminated PEPE to HA was performed with DMTMM as a coupling agent. This coupling method was observed to be more efficient than EDC-mediated coupling. Moreover, the purification of the final product was performed by ultrafiltration, which efficiently removed the unreacted PEPE from the final product; this was not sufficiently achieved by dialysis. Interestingly, the final products of these reactions were in a gel state and showed an enhancement in the mechanical properties at very low concentrations (2.5 wt%) near body temperature. In these hydrogels, the resulting increase in mechanical properties was due to the combined effect of micelle packing (physical interactions) by PEPE and covalent netpoints between PEPE and HA. PEPE alone or physical mixtures of the same components did not show thermosensitive behavior at concentrations below 16 wt%. These thermosensitive hydrogels also showed on-demand solubilisation by enzymatic degradation. The concept of thermosensitivity was extended to 3D architectured porous hydrogels by covalently grafting PEPE to gelatin and crosslinking with LDI. Here, the grafted PEPE resulted in a decrease in helix formation in the gelatin chains, and after fixing the gelatin chains by crosslinking, the system showed an enhancement in the mechanical properties upon heating (34-42 °C), which was reversible upon cooling. A possible explanation for the reversible changes in mechanical properties is the strong physical interaction between micelles formed by PEPE covalently linked to gelatin. 
Above the transition temperature, the local properties were evaluated by AFM indentation of pore walls, in which an increase in elastic modulus (E) at higher temperature (37 °C) was observed. The water uptake of these thermosensitive architectured porous hydrogels was also influenced by PEPE and temperature (25 °C and 37 °C), showing lower water uptake at higher temperature and vice versa. In addition, due to the lower water uptake at high temperature, the rate of hydrolytic degradation of these systems was found to be decreased compared to pure gelatin architectured porous hydrogels. Such temperature-sensitive architectured porous hydrogels could be important for, e.g., stem cell culturing, cell differentiation, and guided cell migration. Altogether, it was possible to demonstrate that the crosslinking of micelles by a macromolecular crosslinker increased the shear moduli, viscosity, and stability towards dissolution of CMC-based gels. This effect could likewise be realized by covalent or non-covalent mechanisms such as micelle interactions, physical interactions of gelatin chains, and physical interactions between gelatin chains and micelles. Moreover, the covalent grafting of PEPE creates additional netpoints, which also influence the mechanical properties of thermosensitive architectured porous hydrogels. Overall, the combination of chemical netpoints and reversible physical interactions in such thermosensitive architectured porous hydrogels gave control over the mechanical properties of this complex system. Hydrogels showing a change of mechanical properties without a sol-gel transition or volume change are especially interesting for further studies of cell proliferation and differentiation.
In recent decades, intensive research has led to a very detailed characterisation of the mammalian taste system. Nevertheless, important questions have remained unanswered with the methods employed so far. One of these questions concerns the discrimination of bitter compounds. The number of substances that taste bitter to humans and elicit innate aversive behaviour in animals runs into the thousands. These substances differ greatly both in their chemical structure and in their effect on the organism. While many bitter compounds are potent toxins, others are harmless in the amounts ingested with food or even have beneficial effects on the body. Being able to distinguish between these groups would be advantageous for an animal; however, no such mechanism is known in mammals. The aim of this work was to investigate the processing of taste information in the first relay station of the gustatory pathway in the mouse brain, the nucleus tractus solitarii (NTS), with particular attention to the question of the discrimination of different bitter compounds. For this purpose, a new method for investigating the taste system was established that avoids the drawbacks of existing methods and combines their advantages. The Arc-catFISH method (cellular compartment analysis of temporal activity by fluorescent in situ hybridization), which allows the characterisation of the response of large groups of neurons to two stimuli, was applied to investigate taste-processing cells in the NTS. In the course of this project, stimulus-induced Arc expression in the NTS was shown for the first time. The initial results revealed that Arc expression in the NTS occurs specifically after stimulation with bitter compounds and that the Arc-expressing neurons are located predominantly in the gustatory part of the NTS. This indicates that Arc expression is a marker for bitter-processing gustatory neurons in the NTS. After two successive stimulations with bitter substances, overlapping but distinct populations of neurons were observed that responded differentially to the three bitter substances used: cycloheximide, quinine hydrochloride and cucurbitacin I. These neurons are presumably involved in the control of protective reflexes and could thus form the basis for divergent behaviour towards different bitter compounds.
An important strand of research has investigated the question of how children acquire a morphological system using offline data from spontaneous or elicited child language. Most of these studies have found dissociations in how children apply regular and irregular inflection (Marcus et al. 1992, Weyerts & Clahsen 1994, Rothweiler & Clahsen 1993). These studies have considerably deepened our understanding of how linguistic knowledge is acquired and organised in the human mind. Their methodological procedures, however, do not involve measurements of how children process morphologically complex forms in real time. To date, little is known about how children process inflected word forms. The aim of this study is to investigate children’s processing of inflected words in a series of on-line reaction time experiments. We used a cross-modal priming experiment to test for decompositional effects at the central level. We used a speeded production task and a lexical decision task to test for frequency effects at the access level in production and recognition. Children’s behaviour was compared to adults’ behaviour for three participle types (-t participles, e.g. getanzt ‘danced’, vs. -n participles with stem change, e.g. gebrochen ‘broken’, vs. -n participles without stem change, e.g. geschlafen ‘slept’). For the central level, the results indicate that -t participles but not -n participles have decomposed representations. For the access level, the results indicate that -t participles are represented according to their morphemes and additionally as full forms, at least from the age of nine years onwards (Pinker 1999 and Clahsen et al. 2004). Further evidence suggested that -n participles are represented as full-form entries at the access level and that -n participles without stem change may encode morphological structure (cf. Clahsen et al. 2003). Our data also suggest that processing strategies for -t participles are applied differently in recognition and production. 
These results provide evidence that children (within the age range tested) employ the same mechanisms for processing participles as adults. The child lexicon grows as children form additional full-form representations for -t participles on access level and elaborate their full-form lexical representations of -n participles on central level. These results are consistent with processing as explained in dual-system theories.
Landslides are one of the biggest natural hazards in Georgia, a mountainous country in the Caucasus. So far, no systematic monitoring and analysis of the dynamics of landslides in Georgia has been carried out. Especially as landslides are triggered by extrinsic processes, analysing landslides together with precipitation and earthquakes is challenging. In this thesis I describe the advantages and limits of remote sensing for detecting and better understanding the nature of landslides in Georgia. The thesis is written in a cumulative form, comprising a general introduction, three manuscripts, and a summary and outlook chapter. In the present work, I measure the surface displacement due to active landslides with different interferometric synthetic aperture radar (InSAR) methods. Slow landslides (several cm per year) are well detectable with two-pass interferometry. At the same time, extremely slow landslides (several mm per year) could be detected only with time-series InSAR techniques. I exemplify the success of InSAR techniques by showing hitherto unknown landslides located in the central part of Georgia. Both the landslide extent and the displacement rate are quantified. Further, to determine the possible depth and position of potential sliding planes, inverse models were developed. Inverse modeling searches for source parameters that can reproduce the observed displacement distribution. I also empirically estimate the volume of the investigated landslide using displacement distributions derived from InSAR combined with morphology from aerial photography. I adapted a volume formula for our case and also combined available seismicity and precipitation data to analyze potential triggering factors. A governing question was: what causes the landslide acceleration observed in the InSAR data? The investigated area (central Georgia) is seismically highly active. 
As an additional product of the InSAR data analysis, a deformation area associated with the 7 September Mw=6.0 earthquake was found. Evidence of surface ruptures directly associated with the earthquake could not be found in the field; however, during and after the earthquake new landslides were observed. The thesis highlights that deformation measured with InSAR may help to map areas prone to earthquake-triggered landslides, potentially providing a technique of relevance for country-wide landslide monitoring, especially as new satellite sensors will emerge in the coming years.
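The inverse-modeling step described above can be sketched in miniature. The following is a hedged illustration, not the thesis' actual inversion code: it assumes a simple point-source (Mogi-type) vertical displacement model and recovers the source depth from synthetic "observed" displacements by grid search, with the source strength fitted by linear least squares at each candidate depth.

```python
import numpy as np

def vertical_displacement(r, depth, strength):
    # Point-source (Mogi-type) vertical surface displacement at radial
    # distance r from the source epicenter (constants absorbed in `strength`)
    return strength * depth / (depth**2 + r**2) ** 1.5

# Hypothetical "observed" InSAR displacements from a source at 300 m depth
r_obs = np.linspace(0.0, 2000.0, 50)
true_depth, true_strength = 300.0, 1.0e7
observed = vertical_displacement(r_obs, true_depth, true_strength)

# Grid search over candidate depths; best-fit strength via least squares
best = None
for cand_depth in np.arange(50.0, 1000.0, 10.0):
    basis = vertical_displacement(r_obs, cand_depth, 1.0)
    cand_strength = float(basis @ observed) / float(basis @ basis)
    misfit = float(np.sum((observed - cand_strength * basis) ** 2))
    if best is None or misfit < best[0]:
        best = (misfit, cand_depth, cand_strength)

misfit, depth, strength = best
print(f"best-fit depth: {depth:.0f} m")
```

A real inversion would additionally estimate horizontal source position, account for the radar line of sight, and propagate data uncertainties; the grid search merely illustrates the "find source parameters that reproduce the observed displacement distribution" logic.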
This thesis addresses the changes in the German science and higher-education system, focusing on the "entrepreneurial mission" of universities. It examines the task area of knowledge and technology transfer (KTT) and uses it to trace the changes that have taken place within the German university system in recent years. Expectations placed on universities have shifted, with economic perspectives gaining ever greater weight. The thesis builds on the premises of neo-institutionalist organization theory, which is used to show how the expectations of external stakeholders enter universities and shape their organizational design. The study follows an exploratory, qualitative research design: a case study examines two universities as illustrative cases. The investigation answers the questions of how KTT is implemented as a task area at German universities, which structures have emerged, and to what extent KTT has become institutionalized at universities. Several data-collection instruments are combined in a triangulation, with expert interviews serving as the main analytical instrument. Beyond answering the research questions, the study aims to generate hypotheses that can inform further research. In addition, it offers recommendations for implementing KTT at German higher-education institutions. The thesis is addressed to researchers as well as practitioners in the field of knowledge and technology transfer.
One of the central questions in psycholinguistics is whether and how prosodic phrase boundaries are used to resolve syntactic ambiguities in sentence processing. The present work addressed two aims: first, the effects of φ- and ι-boundaries on syntactic ambiguity resolution, and second, how the prosodic correlates of the auditory input feed the phonetics-phonology mapping in order to attain a meaningful sentence interpretation.
With regard to the first aim, we investigated locally ambiguous syntactic structures involving either φ- or ι-phrase boundaries in German and the structural preference that listeners have, based on the prosodic content. The experiments described in this work show that German listeners exploit both types of prosodic phrase boundaries to resolve local syntactic ambiguities, but that their disambiguation is altered by the presence or absence of prosodic cues correlated with the corresponding boundary. Specifically, the perception data revealed that the phonetically measured prosodic correlates of each prosodic boundary, such as pitch accents, boundary tones, deaccentuation and durational properties, do not contribute to ambiguity resolution in equal measure. Rather, listeners rely primarily on prefinal lengthening as a correlate of phrasing in the vicinity of φ-phrase boundaries, while at the level of the ι-phrase boundary, boundary tones serve as phrasal cues. In this way, the results of the present work supply the previously missing information on the individual contributions of prosodic correlates to listeners' disambiguation of syntactically ambiguous sentences in German. They further imply that the question of how German listeners resolve syntactic ambiguities cannot simply be attributed to the presence or absence of prosodic correlates. The interpretation of the phrasal structure rather depends on a more general picture of cohesion between prosodic correlates and prosodic boundary sizes.
With respect to the second aim, the processing models proposed in the present work describe a specific phonetics-phonology mapping in the vicinity of both phrase boundaries. It is assumed that auditory sentence processing proceeds in several successively organized steps, during which listeners transform overt phonetic forms into language-specific abstract surface forms; this process is referred to as the phonetics-phonology mapping in the present work. Perceptual evidence from the experiments of the present work suggests that the phonetics-phonology mapping is guided by the above-mentioned boundary-related prosodic correlates. The resulting abstract phonological structure is, in turn, subjected to the syntax-prosody mapping. The outcomes of the presented perception experiments are modeled in an Optimality-Theoretic framework. The proposed OT models are grounded in the assumption that individual prosodic correlates are used by listeners as a signal to syntax in sentence processing. This is in line with studies arguing that the prosodic phrase structure determines the syntactic parse (Cutler et al., 1997; Warren et al., 1995; Pynte & Prieur, 1996; Snedeker & Trueswell, 2003; Kjelgaard & Speer, 1999), to name just a few.
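The core mechanics of an Optimality-Theoretic evaluation can be sketched generically. The constraint names and candidate parses below are illustrative assumptions, not the specific analysis of the thesis: the winner is simply the candidate whose violation profile is best under a strict constraint ranking.

```python
# Ranked constraints, highest first (names are illustrative assumptions)
RANKING = ["ALIGN-IP-BOUNDARYTONE", "ALIGN-PHP-LENGTHENING", "*STRUCTURE"]

# Candidate parses with violation counts per constraint (hypothetical values)
candidates = {
    "early-closure": {"ALIGN-IP-BOUNDARYTONE": 0, "ALIGN-PHP-LENGTHENING": 1, "*STRUCTURE": 2},
    "late-closure":  {"ALIGN-IP-BOUNDARYTONE": 1, "ALIGN-PHP-LENGTHENING": 0, "*STRUCTURE": 1},
}

def evaluate(cands, ranking):
    # Lexicographic comparison of violation profiles: one violation of a
    # higher-ranked constraint outweighs any number of lower-ranked violations.
    return min(cands, key=lambda name: tuple(cands[name][c] for c in ranking))

print(evaluate(candidates, RANKING))  # the candidate satisfying the top constraint wins
```

Reversing the ranking changes the winner, which is exactly how such models encode that different boundary cues dominate at different prosodic levels.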
Does a liberal canon of values really entail generative self-determination, i.e. far-reaching parental freedom of action in eugenic measures, as proponents of a "liberal eugenics" assert? This thesis discusses the role of the state and the scope of parental action in the genetic shaping of offspring within a liberal understanding of values.
The focus of the discussion lies on measures of genetic enhancement.
In addition, the relationship of "liberal eugenics" to "authoritarian eugenics" is re-examined.
The investigation begins with an analysis of central liberal values and norms, such as freedom, autonomy and justice, and of their functions in "liberal eugenics". Strictly speaking, one can hardly speak of "the" liberal eugenics, but rather of variants of a "liberal eugenics".
Furthermore, the thesis examines and compares the historical development of "liberal" and "authoritarian eugenics", in particular social Darwinism, especially with regard to liberal values and norms and to generative self-determination.
The core of the thesis is the comparison of "liberal eugenics" with "liberal education", since this is where the fundamental tasks of parents, but also of the state, are analyzed and their relationship is discussed.
It turns out that no far-reaching generative self-determination can be derived from a liberal understanding of values; rather, what can be justified are narrow, state-controlled limits on eugenic measures for the good of the future person.
Moreover, the path to authoritarian eugenics was not paved by the abandonment of generative self-determination, but rather by transferring the idea of progress to human beings themselves. Generative self-determination thereby also loses its function as a firewall against authoritarian eugenics. It is not the loss of generative self-determination but rather the idea of perfecting human beings that must be viewed critically and ultimately rejected.
Without generative self-determination and without a perfection of human beings, only a basic eugenics remains, one that safeguards a person's capacity for development but not his or her enhancement.
Beyond this, the future person's possibility of development must also be considered, i.e. a minimal potential for social integration must be given. Only if society genuinely has no means of integrating a person and offering him or her a possibility of development would eugenic measures be acceptable as a last resort.
This cumulative dissertation on project didactics is entitled "Von der Konzeption zur Praxis: Zur Entwicklung der Projektdidaktik am Oberstufen-Kolleg Bielefeld und ihre Impulsgebung und Modellbildung für das deutsche Regelschulwesen" ("From conception to practice: on the development of project didactics at the Oberstufen-Kolleg Bielefeld and its impulses and model-building for the regular German school system"). The dissertation presents an exemplary implementation of project didactics for the regular school system. On the basis of 22 previously published papers and one monograph, and using five methodological approaches (educational history, thick description, action research, an empirical study at regular schools, and implementation research; see chapter 1), the systematic first part presents in seven chapters (2-8) the development of project-based teaching in the Federal Republic of Germany; the concept of the project and the further development of the approach; the methodology, assessment and organization of project-based teaching at the Oberstufen-Kolleg, the experimental school of the state of North Rhine-Westphalia, in dialogue with general project didactics; and the forms and procedures of its tested implementation in the regular school system.
A concluding chapter (9) summarizes the results. The extensive appendix contains various publications on aspects of project didactics to which the systematic part refers.
The educational-history analysis (chapter 2) examines the relationship between pedagogical theory and school practice, which are insufficiently connected in both the literature and practice. After reviewing the well-researched conceptual history of pedagogical theory following Dewey and Kilpatrick, a first analysis of the "practical history" of project-based teaching points to a research desideratum, also in order to relate the project practice at the Oberstufen-Kolleg to that of the regular schools. Six lines of development since 1975 were identified: start, crisis and its resolution through opening up and networking (1975-1990); didactic-methodological differentiation and the need for professionalization (from 1990); and school development and institutionalization (since the late 1990s).
Project-based teaching has existed at the Oberstufen-Kolleg since its founding in 1974 as a firmly established form of instruction (since 2002, two weeks twice a year), with the aim of testing and further developing project didactics for the regular school system. Important practice-oriented goals included a workable concept of the project, its educational value, and competences distinct from those of the conventional course (e.g. action- and application-oriented competences), as well as determining its relationship to subject teaching (chapter 3). The latter was developed using the subject of history as an example and is presented in exemplary forms of interlinking (chapter 6).
For the methodological dimension, too, the aim was to develop general project didactics further by distinguishing it from other methods of opening up school and instruction (chapter 4). Action orientation was identified as the central methodological principle, and seven phases with their respective action steps were defined. Planning and role change in particular require special attention in order to achieve self-directed activity among project participants. Various methodological "études" (e.g. group work, doing research, acting in public), action-oriented preliminary forms, and project-oriented work are intended to help prepare for the full form of project-based teaching.
The assessment of projects (chapter 5) makes different demands than the conventional course because it comprises several levels of assessment (e.g. the significance of the process, evaluation of the product, group assessment). For this purpose, assessment formats other than numerical grades have been developed at the Oberstufen-Kolleg: for example, a "reflection report" as individual feedback from students and teachers, and a "certificate" for special achievements in a project.
Central to the development of project-based teaching, however, is the question of organization (chapter 7). This requires a project organization group that supervises the form of instruction didactically and advises the registered projects in a hearing. The Oberstufen-Kolleg has thus given organizational form to a developed "project culture". For an empirical study at six regular schools in East Westphalia, an ideal-typical list of characteristics of a school "project culture" was developed as a research instrument, which can at the same time serve as a guideline for school development in the area of project learning at regular schools. For this implementation (chapter 8), concepts and experiences from the Oberstufen-Kolleg were developed into in-school and external teacher-training formats as well as an exemplary training unit. In numerous teacher-training courses, the experimental school was thus able to provide impulses for the regular school system.
Several mechanisms are proposed to be part of the earthquake triggering process, including static stress interactions and dynamic stress transfer. Significant differences between these mechanisms are expected particularly in the spatial distribution of aftershocks. However, testing the different hypotheses is challenging because it requires considering the large uncertainties involved in stress calculations as well as appropriately accounting for secondary aftershock triggering, which is related to stress changes induced by smaller pre- and aftershocks. In order to evaluate the forecast capability of the different mechanisms, I take the effect of smaller-magnitude earthquakes into account by using the epidemic-type aftershock sequence (ETAS) model, in which the spatial probability distribution of direct aftershocks, if available, is correlated to alternative source information and mechanisms. Surface shaking, rupture geometry, and slip distributions are tested. As an approximation of the shaking level, ShakeMaps are used, which are available in near real time after a mainshock and could thus be used for first-order forecasts of the spatial aftershock distribution. Alternatively, the use of empirical decay laws related to the minimum fault distance is tested, as well as Coulomb stress change calculations based on published and random slip models. For comparison, the likelihood values of the different model combinations are analyzed for several well-known aftershock sequences (1992 Landers, 1999 Hector Mine, 2004 Parkfield). The tests show that the fault geometry is the most valuable information for improving aftershock forecasts. Furthermore, they reveal that static stress maps can additionally improve forecasts of off-fault aftershock locations, while the integration of ground-shaking data did not improve the results significantly. In the second part of this work, I focus on a procedure to test the information content of inverted slip models.
This allows quantifying the information gain when this kind of data is included in aftershock forecasts. For this purpose, the ETAS model based on static stress changes introduced in part one is applied. The forecast ability of the models is systematically tested for several earthquake sequences and compared to models using random slip distributions. The influence of subfault resolution and of segment strike and dip is tested. Some of the tested slip models perform very well; in these cases almost no random slip model performs better. In contrast, for some of the published slip models almost all random slip models perform better. Choosing a different subfault resolution hardly influences the result, as long as the general slip pattern remains reproducible. Different strike and dip values, however, strongly influence the results, depending on the standard deviation applied in the process of randomly selecting the strike and dip values.
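The temporal core of the ETAS model used throughout both parts can be sketched compactly. The parameter values below are illustrative assumptions, not fitted values from the thesis: each past event contributes an Omori-type decaying aftershock rate on top of a constant background rate, scaled exponentially by its magnitude.

```python
import math

# Illustrative ETAS parameters: background rate, productivity, magnitude
# scaling, Omori c and p, and magnitude cutoff (all assumed values)
MU, K, ALPHA, C, P, M0 = 0.02, 0.05, 1.0, 0.01, 1.2, 3.0

def etas_rate(t, catalog):
    """Conditional earthquake rate at time t (days) given past events."""
    rate = MU  # background seismicity
    for t_i, m_i in catalog:
        if t_i < t:
            # Omori-type decay, scaled by the triggering event's magnitude
            rate += K * math.exp(ALPHA * (m_i - M0)) * (t - t_i + C) ** (-P)
    return rate

catalog = [(0.0, 6.0), (0.5, 4.2)]   # a mainshock plus one early aftershock
print(etas_rate(1.0, catalog))       # elevated rate one day after the mainshock
```

The thesis' contribution sits in the spatial part omitted here: replacing a generic spatial kernel with distributions derived from fault geometry, Coulomb stress changes, or ShakeMap data, and comparing the resulting likelihoods.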
To serve foreign markets, firms use different supply modes. The proximity-concentration trade-off literature considers the choice between exporting and foreign production and explains the emergence of international trade and horizontal foreign direct investment. The standard model by Brainard (1993) integrates foreign production as an alternative supply mode to trade into a general equilibrium model with two countries, monopolistic competition, increasing returns to scale, and transport costs. In equilibrium, firms serve foreign markets either through exports or through foreign production. The empirically observed co-existence of international trade and foreign direct investment at the firm level cannot be explained by this model. In this thesis, the export platform (EP) is invoked as a possible answer to this phenomenon. An export platform is a foreign production facility that serves not only the local foreign market but also third countries. In the model-theoretic part of this thesis, a partial-equilibrium EP model is formulated that builds on Brainard (1993). Her model is extended to a multi-country world with a heterogeneous distribution structure, and the supply alternative of EP exports is integrated following the example of Neary (2002). The analytical solution of the partial equilibrium reveals the substitutive relationship between home exports, foreign production, and EP exports. Furthermore, the effect of supply costs on the choice of supply mode can be analyzed. In addition to the analytical description of the model, particular attention is paid to the determination of the equilibria and to their existence. Building on the analytically derived hypotheses, the EP model is then subjected to an empirical significance test.
Using non-linear regression methods, the choice between EP exports and foreign production, between EP and home exports, and between EP exports and EP production is estimated separately. The analysis draws on data from the automotive industry comprising the regional passenger-car production and sales figures of all car manufacturers in Eastern Europe, Asia, and Oceania.
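The supply-mode choice at the heart of the model can be illustrated with a stylized cost comparison. All cost figures below are illustrative assumptions, not calibrated to the automotive data used in the thesis: for each destination, the firm picks the cheapest of home exports, a local plant, or exports from an export-platform plant in a nearby third country.

```python
FIXED_PLANT = 100.0                      # per-period fixed cost of a foreign plant

def cost_home_export(q, freight):        # no extra plant, but long-haul freight
    return q * freight

def cost_local_plant(q):                 # plant in the destination market itself
    return FIXED_PLANT

def cost_platform_export(q, freight_ep, platform_built):
    # Exports from a platform plant: short-haul freight, plus the fixed cost
    # only if the platform still has to be built
    return q * freight_ep + (0.0 if platform_built else FIXED_PLANT)

def choose(q, freight_home, freight_ep, platform_built):
    options = {
        "home export":     cost_home_export(q, freight_home),
        "local plant":     cost_local_plant(q),
        "platform export": cost_platform_export(q, freight_ep, platform_built),
    }
    return min(options, key=options.get)

# Small market near an existing platform: platform exports win
print(choose(q=20, freight_home=4.0, freight_ep=1.0, platform_built=True))
```

The trade-off mirrors the model's logic: large markets justify the fixed cost of a local plant, small distant markets are served from home, and intermediate cases favor the export platform once it exists.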
This thesis deals with Einstein metrics and the Ricci flow on compact manifolds. We study the second variation of the Einstein-Hilbert functional on Einstein metrics. In the first part of the work, we find curvature conditions which ensure the stability of Einstein manifolds with respect to the Einstein-Hilbert functional, i.e. that the second variation of the Einstein-Hilbert functional at the metric is nonpositive in the direction of transverse-traceless tensors. The second part of the work is devoted to the study of the Ricci flow and how its behaviour close to Einstein metrics is influenced by the variational behaviour of the Einstein-Hilbert functional. We find conditions which imply that Einstein metrics are dynamically stable or unstable with respect to the Ricci flow and we express these conditions in terms of stability properties of the metric with respect to the Einstein-Hilbert functional and properties of the Laplacian spectrum.
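The stability notion used here can be made explicit. Under one common sign convention (cf. Besse, "Einstein Manifolds"; normalizations vary across the literature), the second variation of the Einstein-Hilbert functional at an Einstein metric $g$, in the direction of a transverse-traceless tensor $h$, reads:

```latex
% Second variation of the Einstein--Hilbert functional S at an Einstein
% metric g, for a transverse-traceless tensor h (one common convention):
S''_g(h,h) \;=\; -\frac{1}{2}\int_M
  \big\langle \nabla^*\nabla h - 2\mathring{R}h,\; h \big\rangle \, dV_g .
% Stability in the sense above means S''_g(h,h) \le 0 for all such h,
% i.e. nonnegativity of the operator \nabla^*\nabla - 2\mathring{R}
% on transverse-traceless tensors.
```

Curvature conditions ensuring stability thus amount to spectral lower bounds for the operator $\nabla^*\nabla - 2\mathring{R}$ on transverse-traceless tensors.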
Internalin J (InlJ) belongs to the class of bacterial, cysteine-containing leucine-rich repeat (LRR) proteins. The internalins are mostly invasion-associated proteins of Listeria. The LRR domain of InlJ is built from 15 regularly recurring, highly conserved sequence units (repeats of 21 amino acids each). An interesting detail of this internalin is the strongly conserved cysteine within the repeats, which gives rise to an unusual arrangement of 12 cysteines in a stack. The abundance of cysteines in InlJ is exceptional for an extracellular protein of L. monocytogenes, which makes the question of their function all the more pressing. Compared to the ubiquitous occurrence of so-called repeat proteins in nature, studies of their stability and folding are underrepresented. The central property of repeat proteins is their modular architecture, which is characterized by a simple topology and based on short-range interactions. This topology makes repeat proteins ideal model proteins for separating and assigning the interactions relevant for stability. In the present work, the folding and unfolding of InlJ was comprehensively characterized and the relevance of the cysteines examined more closely. Spectroscopic characterization of InlJ showed that its folding state is readily accessible to fluorescence spectroscopy via two tryptophans at the N- and C-termini. The thermodynamic stability was determined by fluorescence-detected, guanidinium chloride-induced equilibrium experiments. To capture the kinetic properties of InlJ, both the folding and the unfolding reaction were investigated spectroscopically. Identifying the productive folding reaction was possible only by applying the reverse double-jump experiment. The data were analyzed according to the two-state model, in which folding follows an "all-or-none" principle.
The validity of this assumption was confirmed by the kinetic characterization. A high free energy of stabilization was found both in the equilibrium experiments and in the kinetic data. The high stability of InlJ is accompanied by high cooperativity, and the kinetic data show that this high cooperativity stems mainly from the folding reaction. The Tanford value of 0.93 implies that most of the change in solvent-accessible surface area has already occurred before the transition state is formed. Direct structural information about the transition state was obtained by means of mutational studies. For this purpose, 12 of the 14 cysteines were each exchanged for an alanine. Repeats 1 to 11 of InlJ each contain one cysteine, arranged in a ladder. Their substitutions have a comparably destabilizing effect on InlJ of 4.8 kJ/mol on average. The slowing of folding indicates that the interactions of repeats 5 to 11 are already fully formed in the transition state; InlJ thus possesses a central folding nucleus. In the course of this doctoral work, high stability and strongly cooperative behavior were observed for the extracellular protein InlJ. These findings could make important contributions to the design of artificial repeat proteins, whose use is steadily expanding.
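The two quantities central to this analysis, the folding free energy and the Tanford value, follow from simple relations. The numbers below are illustrative assumptions, not the measured values for InlJ: a two-state stability computed from extrapolated folding and unfolding rate constants, and a Tanford beta value computed from the kinetic m-values.

```python
import math

R, T = 8.314e-3, 298.0          # gas constant in kJ/(mol*K), temperature in K

k_f0, k_u0 = 1.0e3, 1.0e-6      # assumed rate constants in water (1/s)
m_f, m_u = 3.5, 0.26            # assumed kinetic m-values, kJ/(mol*M)

# Two-state stability from the rate constants: dG = -RT ln(k_u0 / k_f0)
dG = -R * T * math.log(k_u0 / k_f0)

# Tanford value: fraction of the surface-area change completed at the
# transition state, from the relative magnitudes of the kinetic m-values
beta_T = m_f / (m_f + m_u)

print(f"dG = {dG:.1f} kJ/mol, beta_T = {beta_T:.2f}")
```

With these assumed inputs the sketch reproduces the qualitative picture reported in the abstract: a high stabilization free energy together with a Tanford value near 0.93, i.e. a transition state in which most of the surface burial is already complete.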
This thesis presents novel ideas and research findings for the Web of Data – a global data space spanning many so-called Linked Open Data sources. Linked Open Data adheres to a set of simple principles to allow easy access and reuse for data published on the Web. Linked Open Data is by now an established concept, and many (mostly academic) publishers have adopted the principles, building a powerful web of structured knowledge available to everybody. However, so far, Linked Open Data does not yet play a significant role among the common web technologies that currently facilitate a high-standard Web experience. In this work, we thoroughly discuss the state of the art for Linked Open Data and highlight several shortcomings – some of which we tackle in the main part of this work. First, we propose a novel type of data source meta-information, namely the topics of a dataset. This information could be published with dataset descriptions and support a variety of use cases, such as data source exploration and selection. For the topic retrieval, we present an approach coined Annotated Pattern Percolation (APP), which we evaluate with respect to topics extracted from Wikipedia portals. Second, we contribute to entity linking research by presenting an optimization model for joint entity linking, showing its hardness, and proposing three heuristics implemented in the LINked Data Alignment (LINDA) system. Our first solution can exploit multi-core machines, whereas the second and third approaches are designed to run in a distributed shared-nothing environment. We discuss and evaluate the properties of our approaches, leading to recommendations on which algorithm to use in a specific scenario. The distributed algorithms are among the first of their kind, i.e., approaches for joint entity linking in a distributed fashion.
Also, we illustrate that we can tackle the entity linking problem at very large scale, with data comprising more than 100 million entity representations from very many sources. Finally, we approach a sub-problem of entity linking, namely the alignment of concepts. Again, we target a method that looks at the data in its entirety and does not neglect existing relations. This concept alignment method must also execute very fast in order to serve as preprocessing for further computations. Our approach, called Holistic Concept Matching (HCM), achieves the required speed by grouping the input through comparison of so-called knowledge representations. Within the groups, we perform complex similarity computations, draw relation conclusions, and detect semantic contradictions. The quality of our results is again evaluated on a large and heterogeneous dataset from the real Web. In summary, this work contributes a set of techniques for enhancing the current state of the Web of Data. All approaches have been tested on large and heterogeneous real-world input.
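The grouping-before-matching idea behind HCM can be sketched generically. The keys, token sets, and threshold below are illustrative assumptions, not the actual HCM knowledge representations: a cheap blocking key first partitions the concepts, and the expensive pairwise similarity is computed only within each group.

```python
from itertools import combinations

# Hypothetical concepts from different sources, each described by a token set
concepts = {
    "db:Berlin": {"berlin", "city", "germany"},
    "fb:berlin": {"berlin", "capital", "germany"},
    "db:Paris":  {"paris", "city", "france"},
}

def blocking_key(tokens):
    # Cheap grouping key (here: the alphabetically first token); only
    # concepts sharing a key are compared pairwise afterwards
    return min(tokens)

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Group concepts by key, then run the expensive comparison inside each group
groups = {}
for name, tokens in concepts.items():
    groups.setdefault(blocking_key(tokens), []).append(name)

matches = []
for members in groups.values():
    for x, y in combinations(members, 2):
        if jaccard(concepts[x], concepts[y]) >= 0.4:  # assumed threshold
            matches.append((x, y))

print(matches)  # only the two Berlin concepts share a group and match
```

The speed gain comes from replacing the quadratic all-pairs comparison with small within-group comparisons; HCM additionally draws relation conclusions and checks contradictions within the groups, which this sketch omits.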
The Arctic is considered a focal region in the ongoing climate change debate. The currently observed and predicted climate warming is particularly pronounced in the high northern latitudes. Rising temperatures in the Arctic cause a progressive deepening and lengthening of permafrost thaw during the arctic summer, creating an 'active layer' with high bioavailability of nutrients and labile carbon for microbial consumption. The microbial mineralization of permafrost carbon creates large amounts of greenhouse gases, including carbon dioxide and methane, which can be released to the atmosphere, creating a positive feedback to global warming. However, to date, the microbial communities that drive the overall carbon cycle and specifically methane production in the Arctic are poorly constrained. To assess how these microbial communities will respond to the predicted climate changes, such as an increase in atmospheric and soil temperatures causing increased bioavailability of organic carbon, it is necessary to investigate the current status of this environment, but also how these microbial communities reacted to climate changes in the past. This PhD thesis investigated three records from two different study sites in the Russian Arctic, including permafrost, lake shore and lake deposits from Siberia and Chukotka. A combined stratigraphic approach of microbial and molecular organic geochemical techniques was used to identify and quantify characteristic microbial gene and lipid biomarkers. Based on these data it was possible to characterize and identify the climate response of microbial communities involved in past carbon cycling during the Middle Pleistocene and the Late Pleistocene to Holocene. It is shown that previous warmer periods were associated with an expansion of bacterial and archaeal communities throughout the Russian Arctic, similar to present-day conditions.
In contrast to this situation, past glacial and stadial periods experienced a substantial decrease in the abundance of Bacteria and Archaea. This trend can also be confirmed for the community of methanogenic archaea, which were highly abundant and diverse during warm and particularly wet conditions. For the terrestrial permafrost, a direct effect of temperature on the microbial communities is likely. In contrast, it is suggested that the temperature rise in the course of the glacial-interglacial climate variations led to an increase of the primary production in the Arctic lake setting, as can be seen in the corresponding biogenic silica distribution. The availability of this algae-derived carbon is suggested to be a driver of the observed pattern in microbial abundance. This work demonstrates the effect of climate changes on the community composition of methanogenic archaea. Methanosarcina-related species were abundant throughout the Russian Arctic and were able to adapt to changing environmental conditions. In contrast, members of Methanocellales and Methanomicrobiales were not able to adapt to past climate changes. This PhD thesis provides first evidence that past climatic warming led to an increased abundance of microbial communities in the Arctic, closely linked to the cycling of carbon and methane production. With the predicted climate warming, it may therefore be anticipated that microbial communities will expand substantially. Increasing temperatures in the Arctic will affect the temperature-sensitive parts of the current microbial communities, possibly leading to a suppression of cold-adapted species and the prevalence of methanogenic archaea that tolerate or adapt to increasing temperatures. These changes in the composition of methanogenic archaea will likely increase the methane production potential of high-latitude terrestrial regions, changing the Arctic from a carbon sink to a source.
African states are often called corrupt, indicating that the political system in Africa differs from the one prevalent in the economically advanced democracies. This, however, does not give us any insight into what makes corruption the ruling norm of African statehood. Thus we must turn to the largely neglected theoretical work on the political economy of Africa in order to determine how the poverty of governance in Africa is firmly anchored both in Africa's domestic socioeconomic reality and in the region's role in the international economic order. Instead of focusing on increased monitoring, enforcement and formal democratic procedures, this book integrates economic analysis with political theory in order to arrive at a better understanding of the political-economic roots of corruption in Sub-Saharan Africa.
Phenolic compounds are food components that represent the largest group of secondary metabolites in plant foods. Phenolic compounds such as chlorogenic acid (CQA) are susceptible to oxidation by enzymes, especially polyphenol oxidase (PPO), and under alkaline conditions. Both enzymatic and non-enzymatic oxidation occur in the presence of oxygen and produce quinones, which normally react further with other quinones to produce colored compounds (dimers) and are also capable of undergoing nucleophilic addition to proteins. The interactions of proteins with phenolic compounds have received considerable attention in recent years, as plant phenolic compounds have drawn increasing interest due to their antioxidant properties and their noticeable effects in the prevention of various diseases associated with oxidative stress. Green coffee beans are one of the richest sources of chlorogenic acids; a green coffee extract therefore provides a suitable food-relevant source of phenolic compounds for the modification of proteins. The interaction between 5-CQA and the amino acid lysine showed a decrease in both free CQA and free amino groups, and only a slight effect on the antioxidative capacity, depending on the reaction time, was found. Furthermore, this interaction produced a large number of intermediary substances of low intensity. The reaction of lysine with 5-CQA in a model system initially leads to the formation of 3-CQA and 4-CQA (both isomers of 5-CQA); oxidation gives rise to the formation of a dimer, which subsequently forms an adduct with lysine to finally yield a benzacridine derivative, as reported and confirmed with the aid of HPLC coupled with ESI-MSn. The benzacridine derivative, containing a trihydroxy structural element, was found to be yellow and very reactive with oxygen, yielding semiquinone- and quinone-type products with characteristic green colors.
Finally, the optimal conditions for this interaction, as assessed by both the loss of CQA and of free amino groups of lysine, are pH 7 and 25 °C, with the extent of interaction increasing with incubation time and depending on the amount of tyrosinase present. Green coffee beans have a high diversity and content of phenolics; besides the CQA isomers and their esters, other conjugates such as feruloylquinic acids were also identified, documenting differences in the phenolic profiles of the two coffee types (Coffea arabica and Coffea robusta). Coffee proteins are modified by interactions with phenolic compounds during extraction, with those from C. arabica being more susceptible to these interactions than those from C. robusta; polyphenol oxidase activity seems to be a crucial factor for the formation of these addition products. Moreover, in-gel digestion combined with MALDI-TOF-MS revealed that the protein fractions most reactive and susceptible to covalent reactions are the α-chains of the 11S storage protein. Based on these results and those supplied by other research groups, a tentative list of possible adduct structures was derived. The diversity of the different CQA derivatives present in green coffee beans complicates the series of reactions occurring, providing a broad palette of reaction products. These interactions influence the properties of the proteins, causing changes in their solubility and hydrophobicity compared to faba bean proteins (as a control). Modification of milk whey protein products (primarily β-lactoglobulin) with coffee-specific phenolics and commercial CQA under enzymatic and alkaline conditions affects their chemical, structural and functional properties; both modifications led to a reduced content of free amino groups, thiol groups and tryptophan. We propose that the disulfide-thiol exchange at the C-terminus of β-lactoglobulin may be initiated by the redox conditions arising in the presence of CQA.
The structure of β-lactoglobulin thereupon becomes more disordered, as simulated by molecular dynamics calculations. This unfolding process may additionally be supported by the reaction of CQA at the proposed modification sites, the ε-amino groups of lysine (K77, K91, K138, K47) and the thiol group of cysteine (C121). These covalent modifications also decreased the solubility and hydrophobicity of β-lactoglobulin; moreover, the modified protein samples have a high antioxidative power, are thermally more stable (as reflected by a higher Td), require less energy to unfold and, when used to emulsify lutein esters, confer higher stability against UV light. The MALDI-TOF and SDS-PAGE results revealed that proteins treated under alkaline conditions were more strongly modified than those treated under enzymatic conditions. Finally, the results showed only a slight change in the emulsifying properties of the modified proteins.
Black shales are sedimentary rocks with a high content of organic carbon, which gives them a dark grayish to black color. Due to their potential to contain oil or gas, black shales are of great interest for the worldwide energy supply. An integrated seismic investigation of the Lower Palaeozoic black shales was carried out on the Danish island of Bornholm to locate the shallow-lying Alum Shale layer and its surrounding formations and to characterize its potential as a source rock. To this end, two seismic experiments along a total of three crossing profiles were carried out in October 2010 and June 2012 in the southern part of the island. Two different active measurements were conducted with either a weight-drop source or a minivibrator. Additionally, the ambient noise field was recorded at the study location over a time interval of about one day, and a laboratory analysis of borehole samples was carried out. The seismic profiles were positioned as close as possible to two scientific boreholes, which were used for comparative purposes. The seismic field data were analyzed with traveltime tomography, surface wave inversion and seismic interferometry to obtain P-wave and S-wave velocity models of the subsurface. The P-wave velocity models determined for all three profiles clearly locate the Alum Shale layer between the Komstad Limestone on top and the Læså Sandstone Formation at the base. The black shale layer has P-wave velocities around 3 km/s, which are low compared to the adjacent formations. The very good agreement between the sonic log and the vertical velocity profiles of the two seismic lines that directly cross the borehole where the sonic log was conducted proves the reliability of the traveltime tomography. Correlating the seismic velocities with the content of organic carbon is an important task for characterizing the reservoir properties of a black shale formation.
This is not possible without calibration, but in combination with a full 2D tomographic image of the subsurface it yields the subsurface distribution of the organic material. The S-wave model obtained by surface wave inversion of the vibroseis data of one of the profiles also images the Alum Shale layer very well, with S-wave velocities around 2 km/s. Although individual 1D velocity models were determined for each of the source positions, the subsurface S-wave velocity distribution is very uniform, with a good match between the single models. A novel aspect of this work is the application of seismic interferometry to a very small study area and a short time interval. Also new is the selective procedure of using only the time windows with the best crosscorrelation signals to compute the final interferograms. Due to the small scale of the interferometry, even P-wave signals can be observed in the final crosscorrelations. In the laboratory measurements, the seismic body waves were recorded at different pressure and temperature stages. For this purpose, samples from different depths of the Alum Shale were available from one of the scientific boreholes at the study location. The measured velocities vary strongly with changing pressure or temperature. Recordings with wave propagation both parallel and perpendicular to the bedding of the samples reveal a large degree of anisotropy for the P-wave velocity, whereas the S-wave velocity is almost independent of the propagation direction. The calculated velocity ratio is also highly anisotropic, with very low values for the perpendicular samples and very high values for the parallel ones. Interestingly, the laboratory velocities of the perpendicular samples are comparable to the velocities of the field experiments, indicating that the field measurements are sensitive to wave propagation in the vertical direction. The velocity ratio was also calculated from the P-wave and S-wave velocity models of the field experiments.
Again, the Alum Shale can be clearly separated from the adjacent formations because it shows overall very low vP/vS ratios around 1.4. This very low velocity ratio indicates gas content in the black shale formation. By combining all the different methods described here, a comprehensive interpretation of the seismic response of the black shale layer can be made and its hydrocarbon source rock potential can be estimated.
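The velocity-ratio step lends itself to a short numerical sketch. The grids below are hypothetical stand-ins for the tomography models (the actual field values are not reproduced here); only the element-wise ratio and the low-ratio criterion reflect the procedure described above:

```python
import numpy as np

# Hypothetical 2D velocity models (m/s) on a common grid; rows are depth
# levels, columns are horizontal positions. The middle row mimics the
# Alum Shale layer between faster adjacent formations.
vp = np.array([[4500, 4400], [3000, 2900], [4800, 4700]], dtype=float)  # P-wave
vs = np.array([[2600, 2500], [2100, 2050], [2750, 2700]], dtype=float)  # S-wave

ratio = vp / vs  # element-wise vP/vS

# A low vP/vS (around 1.4, well below the ~1.7 typical of water-saturated
# rock) flags the potentially gas-bearing cells; 1.5 is an assumed cutoff.
gas_indicator = ratio < 1.5

print(ratio.round(2))
print(gas_indicator)
```

The threshold of 1.5 is an illustrative choice, not a value taken from the thesis; in practice such a cutoff would be calibrated against the laboratory measurements.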
Interactive generation of effective discourse in situated context : a planning-based approach
(2013)
As modern built structures become increasingly complex, carrying out basic tasks such as identifying points or objects of interest in our surroundings can consume considerable time and cognitive resources. In this thesis, we present a computational approach to converting contextual information about a person's physical environment into natural language, with the aim of helping this person identify given task-related entities in their environment. Using efficient methods from automated planning (the field of artificial intelligence concerned with finding courses of action that achieve a goal), we generate discourse that interactively guides a hearer through completing their task. Our approach addresses the challenges of controlling, adapting to, and monitoring the situated context. To this end, we develop a natural language generation system that plans how to manipulate the non-linguistic context of a scene in order to make it more favorable for references to task-related objects. This strategy distributes a hearer's cognitive load of interpreting a reference over multiple utterances rather than one long referring expression. Further, to optimize the system's linguistic choices in a given context, we learn how to distinguish speaker behavior according to its helpfulness to hearers in a certain situation, and we model the behavior of human speakers that has proven helpful. The resulting system combines symbolic with statistical reasoning, and tackles the problem of making non-trivial referential choices in rich context. Finally, we complement our approach with a mechanism for preventing potential misunderstandings after a reference has been generated. Employing remote eye-tracking technology, we monitor the hearer's gaze and find that it provides a reliable index of online referential understanding, even in dynamically changing scenes.
We thus present a system that exploits hearer gaze to generate rapid feedback on a per-utterance basis, further enhancing its effectiveness. Though we evaluate our approach in virtual environments, the efficiency of our planning-based model suggests that this work could be a step towards effective conversational human-computer interaction situated in the real world.
This thesis addresses a classical yet still central and topical question of evaluation research: the use and effectiveness of evaluation procedures. Against the background of the strong increase in institutionalized policy evaluation procedures since the late 1990s, especially in Europe, and the simultaneously growing criticism of these procedures in academia and practice, the thesis examines this effectiveness using the research policy of the European Union as a case study. Building on a review of the state of research on evaluation use and an introduction to the chosen policy field and its specific evaluation practice, the central evaluation recommendations are systematically compared with the development of the policy field over the past 15 years. The thesis finds a (surprisingly) high degree of correspondence between the evaluation recommendations and the policy development in the case examined. On the basis of this case study, and drawing on further empirical contributions in the literature, the claim that institutionalized evaluation has no effect on policy-making must therefore be clearly rejected. A further discussion of the case-study results suggests, moreover, that several specific factors and conditions appear to have positively influenced the effectiveness of the evaluation procedures in the case examined: the character and form of the evaluation recommendations, the specific institutional environment of the evaluation, and the specific 'political climate'. On the other hand, the results also imply that, particularly with regard to the problem of acceptance, increased efforts by all parties involved to raise awareness of the effectiveness of evaluation seem advisable. 
The thesis concludes with a number of suggestions and ideas for improving this awareness.
The main aim of this PhD project was to create a varve chronology for the 'Suigetsu Varves 2006' (SG06) composite profile from Lake Suigetsu (Japan) by thin section microscopy. The chronology was intended not only to provide an age scale for the various palaeo-environmental proxies analysed within the SG06 project, but first and foremost to contribute, in combination with the SG06 14C chronology, to the international atmospheric radiocarbon calibration curve (IntCal). The SG06 14C data are based on terrestrial leaf fossils and therefore record atmospheric 14C values directly, avoiding the reservoir-age corrections necessary for the marine datasets currently used beyond the tree-ring limit in the IntCal09 dataset (Reimer et al., 2009). The SG06 project is a follow-up to the SG93 project (Kitagawa & van der Plicht, 2000), which also aimed to produce an atmospheric calibration dataset but suffered from incomplete core recovery and varve count uncertainties. For the SG06 project, the complete Lake Suigetsu sediment sequence was recovered continuously, leaving the task of producing an improved varve count. Varve counting was carried out using a dual-method approach combining thin section microscopy and micro X-ray fluorescence (µXRF); the latter was carried out by Dr. Michael Marshall in cooperation with the PhD candidate. The varve count covers 19 m of composite core, corresponding to the time frame from ≈10 to ≈40 kyr BP. The count showed that seasonal layers did not form in every year; hence, the varve counts from either method were incomplete. This rather common problem in varve counting is usually solved by manual varve interpolation, but manual interpolation often suffers from subjectivity. Furthermore, sedimentation rate estimates (the basis for interpolation) are generally derived from neighbouring, well varved intervals.
This assumes that the sedimentation rates in neighbouring intervals are identical to those in the incompletely varved section, which is not necessarily true. To overcome these problems, a novel interpolation method was devised. It is computer-based and automated (i.e. it avoids subjectivity and ensures reproducibility) and derives the sedimentation rate estimate directly from the incompletely varved interval by statistically analysing the distances between successive seasonal layers. The interpolation approach is therefore also suitable for sediments that do not contain well varved intervals. Another benefit of the novel method is that it provides objective interpolation error estimates. The interpolation results from the two counting methods were combined and the resulting chronology compared to the 14C chronology from Lake Suigetsu, calibrated with the tree-ring-derived section of IntCal09 (which is considered accurate). The varve and 14C chronologies showed a high degree of similarity, demonstrating that the novel interpolation method produces reliable results. In order to constrain the uncertainties of the varve chronology, especially the cumulative error estimates, U-Th-dated speleothem data were used: linking the low-frequency 14C signal of Lake Suigetsu to that of the speleothems increased the accuracy and precision of the Suigetsu calibration dataset. The resulting chronology also represents the age scale for the various palaeo-environmental proxies analysed in the SG06 project. One proxy analysed within the PhD project was the distribution of event layers, which often represent past floods or earthquakes. A detailed microfacies analysis revealed three different types of event layers, two of which are described here for the first time for the Suigetsu sediment: matrix-supported layers produced by subaqueous slope failures, turbidites produced by landslides, and turbidites produced by flood events.
The former two are likely to have been triggered by earthquakes. The vast majority of event layers was related to floods (362 out of 369), which allowed the construction of a flood chronology for the last 40 kyr. Flood frequencies were highly variable, reaching their greatest values during the global sea-level low-stand of the Glacial and their lowest values during Heinrich Event 1. Typhoons affecting the region represent the most likely control on the flood frequency, especially during the Glacial; however, local, non-climatic controls are also suggested by the data. In summary, the work presented here expands and revises knowledge of the Lake Suigetsu sediment and enables the construction of a far more precise varve chronology. The 14C calibration dataset is the first derived from lacustrine sediments to be included in the (next) IntCal dataset. 
References: 
Kitagawa & van der Plicht, 2000, Radiocarbon, Vol. 42(3), 370-381. 
Reimer et al., 2009, Radiocarbon, Vol. 51(4), 1111-1150.
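The core idea of the automated interpolation, deriving the expected varve thickness statistically from the incompletely varved interval itself and assigning extra years to anomalously large layer spacings, can be sketched as follows. All depth values and the simple median-based thresholding rule are illustrative assumptions, not the thesis's actual statistics:

```python
import numpy as np

# Depth positions (mm) of detected seasonal layers; the large spacings
# near the end of the record are assumed to contain uncounted varves.
# All numbers are illustrative, not actual Lake Suigetsu data.
layer_depths = np.array([10.0, 12.1, 14.0, 15.9, 18.1, 20.0, 30.0, 42.0])

spacings = np.diff(layer_depths)

# Characterise the "normal" varve thickness directly from the interval
# itself: spacings far above the median are treated as gaps.
median_thickness = np.median(spacings)
is_gap = spacings > 2 * median_thickness

# Interpolate: each gap is assigned round(gap / median_thickness) years,
# each normal spacing one year.
years = np.where(is_gap, np.round(spacings / median_thickness), 1).sum()
print(int(years))  # prints 16
```

Because the thickness estimate comes from the interval under study rather than from a neighbouring well-varved section, the same logic applies even when no well-varved reference interval exists; a real implementation would also propagate an error estimate for each interpolated gap.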
This thesis examines the socio-cultural and institutional environment of public-sector organizations in Mongolia, which significantly influences the current reform efforts in public administration. The study draws on theories of culture and values. Rule-conforming behavior, a community-favoring strict hierarchy, the fatalistic acceptance of authority as inevitable and uncontrollable, and an individualism striving for independent decision-making and opinion-forming are the widespread cultural behavior patterns in Mongolia's public-sector organizations. Accordingly, public-service employees selflessly strive for the well-being of the population, compliance with public rules, consensual relationships among people, and the security and sustainability of life. Certain values of self-determination, such as a personal mindset, independent action and creativity, are very important to them. This socio-cultural context strongly affects the working behavior of public-service employees and their activities in implementing public administration reforms. Institutional leadership that promotes and protects value systems is therefore indispensable for implementing reforms in these institutions.
In the debate on freedom and responsibility, neuroscience advances the thesis that we are determined and that it is our brain that thinks and decides. For this reason, so the argument goes, we cannot be held responsible for our decisions and actions. In this debate, philosophy tries to clarify whether we are responsible for our decisions and judgments despite being determined, or whether freedom and determinism are fundamentally incompatible. This study on freedom and responsibility does not pose these questions. It presupposes a certain measure of freedom, because this assumption is the first step towards our freedom. The thesis is concerned with the connection between freedom and responsibility and with what this connection means for our humanity and our life with one another. The aim is to show that we can appropriate additional freedom, that education is necessary for our freedom, and that freedom is not possible without responsibility. The study follows Peter Bieri's theses that the appropriation of freedom and education, understood as neither schooling nor vocational training, are possible and necessary in order to decide and act responsibly, but rejects his thesis that conditional freedom is the precondition of our freedom. Moreover, the thesis goes beyond Peter Bieri by offering a possible solution for our freedom and the responsibility connected with it. The proposed solution is an education that shows us our connectedness with and dependence on others and that lets us acknowledge the rights and needs of others as much as our own. It is an education that comprises not only knowledge but also certain rational and emotional competencies, and that is regarded as learnable and teachable. 
In order to convey this education as a necessity for our freedom and our responsibility towards ourselves and others, it is important to understand ourselves in our nature. This thesis therefore presents factors that act upon us and that make us who we are as human beings, factors that influence our freedom and responsibility by enabling or constraining our decisions, our judgment and, in this sense, also our actions. The presentation of these factors points us to the possibilities that allow us to shape our lives in responsibility for ourselves and towards others. The study shows that freedom is not possible without responsibility, and that when we relinquish our responsibility, we lose our freedom.
Water management and environmental protection are vulnerable to extreme low flows during streamflow droughts. During the last decades, summer runoff and low flows have decreased in most rivers of Central Europe. Discharge projections agree that a future decrease in runoff is likely for catchments in Brandenburg, Germany. Depending on the first-order controls on low flows, different adaptation measures are expected to be appropriate. Small catchments were analyzed because they are expected to be more vulnerable to a changing climate than larger rivers: they are mainly headwater catchments with smaller groundwater storage, and local characteristics are more important at this scale and can increase vulnerability. This thesis jointly evaluates potential adaptation measures to sustain minimum runoff in small catchments of Brandenburg, Germany, and the similarities of these catchments regarding low flows. The following guiding questions are addressed: (i) Which first-order controls on low flows and related time scales exist? (ii) What are the differences between small catchments regarding low flow vulnerability? (iii) Which adaptation measures to sustain minimum runoff in small catchments of Brandenburg are appropriate considering regional low flow patterns? Potential adaptation measures to sustain minimum runoff during periods of low flows can be classified into three categories: (i) increasing groundwater recharge and subsequent baseflow by land use change, land management and artificial groundwater recharge; (ii) increasing water storage with regulated outflow by reservoirs, lakes and wetland water management; and (iii) considering regional low flow patterns during the planning of measures with multiple purposes (urban water management, waste water recycling and inter-basin water transfer). The question remained whether water management of areas with shallow groundwater tables can efficiently sustain minimum runoff.
As an example, water management scenarios of a ditch-irrigated area were evaluated using the model Hydrus-2D. Increasing antecedent water levels and stopping ditch irrigation during periods of low flows increased fluxes from the pasture to the stream, but storage was depleted faster during the summer months due to higher evapotranspiration. Fluxes from this approx. 1 km long pasture with an area of approx. 13 ha ranged from 0.3 to 0.7 l/s depending on the scenario. This demonstrates that numerous such small decentralized measures are necessary to sustain minimum runoff in meso-scale catchments. Differences in the low flow risk of catchments and meteorological low flow predictors were analyzed. A principal component analysis was applied to the daily discharge of 37 catchments between 1991 and 2006. Flows decreased more in southeast Brandenburg, in line with the meteorological forcing. Low flow risk was highest in a region east of Berlin because of the intersection of a more continental climate and the specific geohydrology. In these catchments, flows decreased faster during summer and the low flow period was prolonged. A non-linear support vector machine regression was applied to iteratively select meteorological predictors for the annual 30-day minimum runoff in 16 catchments between 1965 and 2006. The potential evapotranspiration sum of the previous 48 months was the most important predictor (r² = 0.28). The potential evapotranspiration of the previous 3 months and the precipitation of the previous 3 months and of the last year increased model performance (r² = 0.49, including all four predictors). Model performance was higher for catchments with low yield and more damped runoff. In catchments with high low flow risk, the explanatory power of long-term potential evapotranspiration was high. Catchments with a high low flow risk, as well as catchments with a considerable decrease in flows in southeast Brandenburg, have the highest demand for adaptation.
Measures increasing groundwater recharge are to be preferred. Catchments with high low flow risk showed relatively deep and decreasing groundwater heads, allowing increased groundwater recharge in recharge areas at higher altitude away from the streams. Low flows are expected to stay low or decrease even further, because long-term potential evapotranspiration was the most important low flow predictor and is projected to increase under climate change. Differences in low flow risk and runoff dynamics between catchments have to be considered in the management and planning of measures that serve purposes beyond sustaining minimum runoff.
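As a sketch of the predictor-based regression step, the snippet below fits a non-linear support vector regression to synthetic stand-ins for the four meteorological predictors named above. It uses scikit-learn's SVR as an assumed implementation; the data, coefficients and hyperparameters are invented for illustration and do not reproduce the thesis's results:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 42  # one value per year, mimicking a 1965-2006 record

# Synthetic predictors (the thesis uses PET sums over 48 and 3 months and
# precipitation sums; values here are random stand-ins, not real data).
X = np.column_stack([
    rng.normal(2400, 150, n),  # PET sum, previous 48 months (mm)
    rng.normal(180, 30, n),    # PET sum, previous 3 months (mm)
    rng.normal(150, 40, n),    # precipitation, previous 3 months (mm)
    rng.normal(550, 80, n),    # precipitation, previous year (mm)
])
# Assumed relationship: low flows decrease with long-term PET and
# increase with short-term precipitation, plus noise.
y = 20 - 0.005 * X[:, 0] + 0.01 * X[:, 2] + rng.normal(0, 0.5, n)

# Non-linear (RBF kernel) support vector regression; scaling matters
# because the predictors have very different magnitudes.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print(f"r² on training data: {model.score(X, y):.2f}")
```

The iterative predictor selection described in the thesis would wrap such a fit in a loop that adds one candidate predictor at a time and keeps it only if r² improves.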
The life of microorganisms is characterized by two main tasks: rapid growth under conditions permitting growth, and survival under stressful conditions. The environments in which microorganisms dwell vary in space and time, and microorganisms have evolved diverse strategies to readily adapt to these fluctuating environments. Phenotypic heterogeneity is one such strategy, in which an isogenic population splits into subpopulations that respond differently to identical environments. Bacterial persistence is a prime example of such phenotypic heterogeneity, whereby a population survives an antibiotic attack by keeping a fraction of the population in a drug-tolerant state, the persister state. Specifically, persister cells grow more slowly than normal cells under growth conditions, but survive longer under stress conditions such as antibiotic treatment. Bacterial persistence is identified experimentally by examining the population's survival upon an antibiotic treatment and its resuscitation in a growth medium. The underlying population dynamics is explained with a two-state model of reversible phenotype switching in the cells of the population. We study this existing model with a new theoretical approach and present analytical expressions for the time scales observed in population growth and resuscitation, which can easily be used to extract the underlying model parameters of bacterial persistence. In addition, we recapitulate previously known results on the evolution of such a structured population under periodically fluctuating environments using our simple approximation method. Using our analysis, we determine model parameters for Staphylococcus aureus populations under several antibiotics and interpret the outcome of cross-drug treatments. Next, we consider the expansion of a population exhibiting phenotype switching in a spatially structured environment consisting of two growth-permitting patches separated by an antibiotic patch.
The dynamic interplay of growth, death and migration of cells in the different patches leads to distinct regimes of population propagation speed as a function of the migration rate. We map out the region in the parameter space of phenotype switching and migration rate in which persistence is beneficial. Furthermore, we present an extended model that allows mutation from the two phenotypic states to a resistant state. We find that the presence of persister cells may enhance the probability of a resistant mutation arising in a population. Using this model, we explain experimental results showing the emergence of antibiotic resistance in a Staphylococcus aureus population upon tobramycin treatment. In summary, we identify several roles of bacterial persistence, such as aiding spatial expansion, developing multidrug tolerance and promoting the emergence of antibiotic resistance. Our study provides a theoretical perspective on the dynamics of bacterial persistence under different environmental conditions. These results can be utilized to design further experiments and to develop novel strategies to eradicate persistent infections.
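The two-state model underlying these population dynamics can be sketched numerically: normal cells switch into and out of the persister state while both subpopulations are killed at very different rates under antibiotics. The switching and kill rates below are illustrative, not the fitted S. aureus parameters from the thesis:

```python
import numpy as np

# Two-state model of bacterial persistence (normal cells n, persisters p)
# under antibiotic treatment. All rates are illustrative (per hour).
a, b = 1e-3, 1e-1      # switching rates: n -> p and p -> n
k_n, k_p = 2.0, 0.05   # kill rates of normal cells and persisters

def kill_curve(n0, p0, t_end=10.0, dt=1e-3):
    """Euler integration of dn/dt = -(k_n + a) n + b p,
    dp/dt = a n - (k_p + b) p; returns times and total population."""
    n, p = n0, p0
    times = np.arange(0.0, t_end, dt)
    total = np.empty_like(times)
    for i in range(len(times)):
        total[i] = n + p
        dn = (-(k_n + a) * n + b * p) * dt
        dp = (a * n - (k_p + b) * p) * dt
        n, p = n + dn, p + dp
    return times, total

t, survivors = kill_curve(n0=1e6, p0=1e3)
# Biphasic decay: fast killing of the normal subpopulation first, then
# the slowly dying persister tail dominates long-time survival.
print(survivors[0], survivors[-1])
```

The characteristic biphasic kill curve is exactly the experimental signature by which persistence is identified; fitting its two slopes and the breakpoint is one way to recover the model parameters.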
Initiation and perpetuation of inflammatory bowel diseases (IBD) may result from an exaggerated mucosal immune response to the luminal microbiota in a susceptible host. We proposed that this may be caused either 1) by an abnormal microbial composition or 2) by a weakening of the protective mucus layer due to excessive mucus degradation, which may give luminal antigens easy access to the host mucosa, triggering inflammation. We tested whether the probiotic Enterococcus faecium NCIMB 10415 (NCIMB) is capable of reducing chronic gut inflammation by changing the existing gut microbiota composition, and aimed to identify mechanisms involved in possible beneficial effects of the probiotic. To identify health-promoting mechanisms of the strain, we used interleukin (IL)-10-deficient mice, which spontaneously develop gut inflammation, and fed these mice a diet containing NCIMB (10⁶ cells g⁻¹) for 3, 8 and 24 weeks, respectively. Control mice were fed an identically composed diet without the probiotic strain. No clear-cut differences between the animals were observed in pro-inflammatory cytokine gene expression or in intestinal microbiota composition after probiotic supplementation. However, we observed a low abundance of the mucin-degrading bacterium Akkermansia muciniphila in the mice fed NCIMB for 8 weeks. These low cell numbers were associated with significantly lower interferon gamma (IFN-γ) and IFN-γ-inducible protein (IP-10) mRNA levels compared to the NCIMB-treated mice that were killed after 3 and 24 weeks of intervention. In conclusion, NCIMB was not capable of reducing gut inflammation in the IL-10-/- mouse model. To further identify the exact role of A. muciniphila and uncover a possible interaction between this bacterium, NCIMB and the host in relation to inflammation, we performed in vitro studies using HT-29 colon cancer cells. The HT-29 cells were treated with bacterial conditioned media obtained by growing either A.
muciniphila (AM-CM) or NCIMB (NCIMB-CM) or both together (COMB-CM) in Dulbecco's Modified Eagle Medium (DMEM) for 2 h at 37 °C, followed by bacterial cell removal. HT-29 cells treated with COMB-CM displayed reduced cell viability after 18 h (p<0.01), and no viable cells were detected after 24 h of treatment, in contrast to the other groups or heated COMB-CM. Detection of activated caspase-3 in the COMB-CM-treated groups indicated that the death of the HT-29 cells was brought about by apoptosis. It was concluded that either NCIMB or A. muciniphila produces, during their concomitant presence, a soluble and heat-sensitive factor that influences cell viability in an in vitro system. We currently hypothesize that this factor is a protein, which has not yet been identified. Based on the potential effect of A. muciniphila on inflammation (in vivo) and cell viability (in vitro) in the presence of NCIMB, we investigated how the presence of A. muciniphila affects the severity of an intestinal Salmonella enterica Typhimurium (STm)-induced gut inflammation using gnotobiotic C3H mice with a background microbiota of eight bacterial species (SIHUMI, referred to as simplified human intestinal microbiota). The presence of A. muciniphila in STm-infected SIHUMI (SIHUMI-AS) mice caused significantly increased histopathology scores and elevated mRNA levels of IFN-γ, IP-10, tumor necrosis factor alpha (TNF-α), IL-12, IL-17 and IL-6 in cecal and colonic tissue. The number of mucin-filled goblet cells was 2- to 3-fold lower in the cecal tissue of SIHUMI-AS mice than in SIHUMI mice associated with STm (SIHUMI-S) or A. muciniphila (SIHUMI-A) or SIHUMI mice. Reduced goblet cell numbers correlated significantly with increased IFN-γ (r = −0.86, P < 0.001) in all infected mice. In addition, a loss of cecal mucin sulphation was observed in SIHUMI-AS mice. The concomitant presence of A. muciniphila and STm resulted in a drastic change in the microbiota composition of the SIHUMI consortium.
The proportion of Bacteroides thetaiotaomicron in SIHUMI, SIHUMI-A and SIHUMI-S mice amounted to 80-90% of total bacteria, but in SIHUMI-AS mice it was almost completely displaced by STm, which contributed 94% of total bacteria. These results suggest that A. muciniphila exacerbates STm-induced intestinal inflammation through its ability to disturb host mucus homeostasis. In conclusion, an abnormal microbiota composition together with excessive mucus degradation contributes to severe intestinal inflammation in a susceptible host.
The fragmentation of natural habitat caused by anthropogenic land use changes is one of the main drivers of the current rapid loss of biodiversity. In the face of this threat, ecological research needs to provide predictions of communities' responses to fragmentation as a prerequisite for the effective mitigation of further biodiversity loss. Such predictions, however, require a thorough understanding of ecological processes such as species dispersal and persistence. This thesis therefore seeks an improved understanding of community dynamics in fragmented landscapes. To approach this overall aim, I identified key questions on the response of plant diversity and plant functional traits to variations in species' dispersal capability, habitat fragmentation and local environmental conditions. All questions were addressed using spatially explicit simulations or statistical models. In chapter 2, I addressed scale-dependent relationships between dispersal capability and species diversity using a grid-based neutral model. I found that the ratio of survey area to landscape size is an important determinant of scale-dependent dispersal-diversity relationships. With small ratios, the model predicted increasing dispersal-diversity relationships, while decreasing dispersal-diversity relationships emerged when the ratio approached one, i.e. when the survey area approached the landscape size. For intermediate ratios, I found a U-shaped pattern that had not been reported before. With this study, I unified and extended previous work on dispersal-diversity relationships. In chapter 3, I assessed the type of regional plant community dynamics for the study area in the Southern Judean Lowlands (SJL). For this purpose, I parameterised a multi-species incidence-function model (IFM) with vegetation data using approximate Bayesian computation (ABC). 
I found that the type of regional plant community dynamics in the SJL is best characterized as a set of isolated “island communities” with very low connectivity between local communities. Model predictions indicated a significant extinction debt, with 33-60% of all species going extinct within 1000 years. In general, this study introduces a novel approach for combining a spatially explicit simulation model with field data from species-rich communities. In chapter 4, I first analysed whether plant functional traits in the SJL indicate trait convergence through habitat filtering and trait divergence through interspecific competition, as predicted by community assembly theory. Second, I assessed the interactive effects of fragmentation and the south-north precipitation gradient in the SJL on community-mean plant traits. I found clear evidence for trait convergence, but the evidence for trait divergence fundamentally depended on the chosen null model. All community-mean traits were significantly associated with the precipitation gradient in the SJL. The trait associations with fragmentation indices (patch size and connectivity) were generally weaker, but statistically significant for all traits. Specific leaf area (SLA) and plant height were consistently associated with fragmentation indices along the precipitation gradient. In contrast, seed mass and seed number were interactively influenced by fragmentation and precipitation. In general, this study provides the first analysis of the interactive effects of climate and fragmentation on plant functional traits. Overall, I conclude that the spatially explicit perspective adopted in this thesis is crucial for a thorough understanding of plant community dynamics in fragmented landscapes. 
The finding of contrasting responses of local diversity to variations in dispersal capability stresses the importance of considering the diversity and composition of the metacommunity, prior to implementing conservation measures that aim at increased habitat connectivity. The model predictions derived with the IFM highlight the importance of additional natural habitat for the mitigation of future species extinctions. In general, the approach of combining a spatially explicit IFM with extensive species occupancy data provides a novel and promising tool to assess the consequences of different management scenarios. The analysis of plant functional traits in the SJL points to important knowledge gaps in community assembly theory with respect to the simultaneous consequences of habitat filtering and competition. In particular, it demonstrates the importance of investigating the synergistic consequences of fragmentation, climate change and land use change on plant communities. I suggest that the integration of plant functional traits and of species interactions into spatially explicit, dynamic simulation models offers a promising approach, which will further improve our understanding of plant communities and our ability to predict their dynamics in fragmented and changing landscapes.
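The grid-based neutral model of chapter 2 can be illustrated with a drastically simplified sketch (our own toy version, not the thesis model): every grid cell holds one individual, and at each step a random individual dies and is replaced by a copy of a neighbour drawn within a dispersal radius. Species richness can then be compared between a small survey window and the full landscape. All parameter values below are arbitrary.

```python
import random

def neutral_model(size=20, n_species=50, dispersal=2, steps=20000, seed=1):
    """Toy zero-sum neutral model on a torus: death + local recolonization."""
    rng = random.Random(seed)
    grid = [[rng.randrange(n_species) for _ in range(size)] for _ in range(size)]
    for _ in range(steps):
        x, y = rng.randrange(size), rng.randrange(size)
        dx = rng.randint(-dispersal, dispersal)
        dy = rng.randint(-dispersal, dispersal)
        # the dead individual is replaced by a neighbour's offspring
        grid[x][y] = grid[(x + dx) % size][(y + dy) % size]
    return grid

def richness(grid, x0, y0, w):
    """Number of distinct species inside a w-by-w survey window."""
    return len({grid[x][y] for x in range(x0, x0 + w) for y in range(y0, y0 + w)})

grid = neutral_model()
survey = richness(grid, 0, 0, 5)      # small survey window
landscape = richness(grid, 0, 0, 20)  # whole landscape
print(survey, landscape)
```

Varying `dispersal` and the survey-window size in such a toy model reproduces the qualitative point of chapter 2: the ratio of survey area to landscape size shapes the dispersal-diversity relationship.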
Lamprophyres are porphyritic rocks formed from mantle melts that usually occur as dykes. They are distinguished by conspicuous and characteristic textural, chemical and mineralogical features. As former mantle melts, they provide information both on the conditions of melt formation in the mantle and on the geodynamic processes that led to metasomatic alteration of the mantle. In the Saxothuringian zone of Central Europe, at the northern margin of the Bohemian Massif, there are numerous lamprophyre occurrences, which serve here to characterize the evolution of the mantle during the Variscan orogeny. The present thesis deals with the mineralogical, geochemical and isotopic (Sr-Nd-Pb) signatures of late-Variscan calc-alkaline lamprophyres, of post-Variscan ultramafic lamprophyres, of alkali basalts of Lusatia and, for comparison, of pre-Variscan gabbros. In addition, the thesis uses lithium isotope signatures combined with Sr-Nd-Pb isotope data of late-Variscan calc-alkaline lamprophyres from three Variscan domains (Erzgebirge, Lusatia, Sudetes) to explore the local mantle overprinting during the Variscan orogeny.
Automated object identification is a modern tool in the geoinformation sciences (BLASCHKE et al., 2012). To obtain mutually comparable results in thematic mapping, means of object identification should, from a geoinformatics perspective, be employed. Instead of field work, this thesis therefore uses multispectral remote sensing data as primary data. Specific natural objects are identified and characterized from the primary data in a GIS-supported, automated manner over large areas and high object densities. Within this thesis, an automated processing chain for object identification is designed. New approaches and concepts for the object-based identification of natural, isolated terrestrial surface forms are developed and implemented. The processing chain rests on a concept built around a generic approach to automated object identification. It can be adapted and implemented by means of characteristic quantitative parameters, which makes the object-identification concept modular and scalable. The module-based architecture allows individual modules to be used on their own as well as in combination, and permits possible extensions. The applied object-identification methodology and the subsequent characterization of (geo)morphometric and morphological parameters are supported by statistical methods, which make object parameters from different samples comparable. Relationships between object parameters are examined using regression and variance analysis. Functional dependencies of the parameters are analysed in order to describe the objects qualitatively. This makes it possible to capture automatically computed measures and indices of the objects as quantitative data and information and to apply them to different samples. 
In this thesis, thermokarst lakes form the basis for these developments and serve as the example and data basis for constructing the algorithm and for the analysis. Geovisualization of the multivariate natural objects is employed to develop a better understanding of their spatial relations. At the core of the geovisualization is the linking of visualization methods with map-like representations.
The influence of education is gaining importance both socially and politically. In the scientific field, this is reflected in a wide-ranging discussion of the influence of education on income. This thesis uncovers and discusses national and regional disparities in the monetary valuation of general human capital. To this end, various estimation procedures are discussed and, based on them, intervals for the mean returns to education are determined. The first section provides a theoretical foundation for the topic via two different model approaches and discusses them critically. A presentation of the current state of empirical research follows. The main part of the thesis begins with a description of the data set used and a critical examination of its representativeness. A closer description of the variables, with descriptive analysis, serves to explain the quantities used. Subsequently, existing procedures for estimating returns to education are discussed. When only the employed are considered, the 3SLS procedure shows the best properties. If, however, the entire labour force is included in the analysis, the Heckman procedure proves very suitable. The analysis, conducted first at the national level, largely confirms the existing findings in the literature. Separating the data set into different age clusters, full- and part-time employees, and employees in the private and public sectors reveals no significant differences in the level of the average returns to education paid. The regional analysis is a different matter. First, East and West Germany are considered separately. For this first analysis, 95% confidence intervals reveal clear differences in the level of the returns to education. Building on these results, the analysis is deepened. 
A separation at the level of the federal states and a further comparison of the confidence intervals follow. For better statistical comparability of the results, in addition to the 3SLS procedure applied to the separated data sets, a model without the need for separation is chosen, in which the variation across regions is captured via interaction terms. This regression model is applied with the OLS and the Heckman procedures. The advantage here is that the coefficients can be tested for equality. Clearly different returns to education emerge for Mecklenburg-Vorpommern, but also for Saxony-Anhalt and Thuringia, compared with the remaining German federal states. These states are characterized by a particularly high annual return on general human capital. A discussion of possible causes of the regionally different returns to education follows. It turns out that in the federal states with high returns, the mean income level and also the average price level tend to be lower. Furthermore, higher relative deviations of average incomes are associated with higher returns. Migration patterns also differ by qualification. When unemployment rates are additionally taken into account, the states with high returns tend to show higher unemployment. The concluding summary of the thesis appraises the findings. It should be noted that this contribution provides a starting point for state-level analysis and invites continuation, for example towards a multi-period perspective.
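The interaction-term idea above can be made concrete with a toy calculation: fitting a Mincer-type wage regression, log(wage) = a + r * years_of_schooling, separately per region is equivalent to a pooled model with a region-by-schooling interaction, and the difference between the regional slopes is exactly what a coefficient-equality test examines. This is a minimal pure-Python sketch with made-up numbers, not the thesis's 3SLS or Heckman estimation.

```python
def ols_slope(x, y):
    """Simple-regression slope via least squares (pure Python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

# Synthetic log-wage data: region A pays a 10% return per year of
# schooling, region B only 5% (illustrative numbers, not thesis results).
school = [8, 10, 12, 13, 16, 18]
logw_a = [2.0 + 0.10 * s for s in school]
logw_b = [2.0 + 0.05 * s for s in school]

r_a = ols_slope(school, logw_a)  # regional return, region A (about 0.10)
r_b = ols_slope(school, logw_b)  # regional return, region B (about 0.05)
print(round(r_a, 3), round(r_b, 3))
```

With real, noisy data one would additionally need the standard error of r_a - r_b to test equality, which is where the pooled interaction model of the thesis is more convenient than per-region fits.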
Data integration aims to combine data from different sources and to provide users with a unified view of these data. This task is as challenging as it is valuable. In this thesis we propose algorithms for dependency discovery that provide information necessary for data integration. We focus on inclusion dependencies (INDs) in general and on a special form named conditional inclusion dependencies (CINDs): (i) INDs enable the discovery of structure in a given schema; (ii) INDs and CINDs support the discovery of cross-references or links between schemas. An IND “A in B” simply states that all values of attribute A are included in the set of values of attribute B. We propose an algorithm that discovers all inclusion dependencies in a relational data source. The challenge of this task lies in the complexity of testing all attribute pairs and, further, of comparing all values of each attribute pair. The complexity of existing approaches depends on the number of attribute pairs, while ours depends only on the number of attributes. Our algorithm thus makes it possible to profile entirely unknown data sources with large schemas by discovering all INDs. Further, we provide an approach to extract foreign keys from the identified INDs. We extend our IND discovery algorithm to also find three special types of INDs: (i) composite INDs, such as “AB in CD”; (ii) approximate INDs, which allow a certain number of values of A to be missing from B; and (iii) prefix and suffix INDs, which represent special cross-references between schemas. Conditional inclusion dependencies are inclusion dependencies with a limited scope defined by conditions over several attributes; only the matching part of the instance must adhere to the dependency. We generalize the definition of CINDs by distinguishing covering and completeness conditions, and define quality measures for conditions. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. 
The challenge for this task is twofold: (i) which (and how many) attributes should be used for the conditions? (ii) which attribute values should be chosen for the conditions? Previous approaches rely on pre-selected condition attributes or can only discover conditions that meet a quality threshold of 100%. Our approaches were motivated by two application domains: data integration in the life sciences and link discovery for linked open data. We show the efficiency and the benefits of our approaches for use cases in these domains.
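The statement “A in B” above can be made concrete with a naive sketch: an IND holds when the value set of A is a subset of the value set of B. Real discovery algorithms, as in the thesis, avoid testing every attribute pair value-by-value; this toy version only illustrates the dependency itself, and the table/attribute names are made up for illustration.

```python
def holds_ind(values_a, values_b):
    """True iff the unary IND 'A in B' holds, i.e. set(A) is a subset of set(B)."""
    return set(values_a) <= set(values_b)

def discover_unary_inds(columns):
    """Naively test all attribute pairs over a dict {attribute: values}."""
    inds = []
    for a, va in columns.items():
        for b, vb in columns.items():
            if a != b and holds_ind(va, vb):
                inds.append((a, b))
    return inds

# Hypothetical columns; the discovered IND hints at a foreign-key candidate.
data = {
    "orders.customer_id": [1, 2, 2, 3],
    "customers.id": [1, 2, 3, 4],
}
print(discover_unary_inds(data))  # [('orders.customer_id', 'customers.id')]
```

The quadratic pair-testing shown here is exactly the cost that the thesis's algorithm avoids by scaling with the number of attributes rather than attribute pairs.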
Correlation between the genetic and the functional diversity of human bitter taste receptors
(2013)
Humans possess ~25 functional bitter taste receptors (TAS2Rs), which are responsible for the perception of potentially toxic substances in food. Owing to the large genetic variability of the TAS2R genes, there could be a multitude of functionally distinct TAS2R haplotypes leading to differences in bitter taste perception. This has already been shown for individual bitter receptors in functional analyses and sensory studies. In this work, the most frequent haplotypes of all 25 bitter receptors across different ethnic groups were functionally characterized. The aim was to arrive at a comprehensive assessment of the functional diversity of the TAS2Rs, which forms the molecular basis of individual bitter taste perception. Missing variants were cloned from genomic DNA or generated by site-directed mutagenesis of existing TAS2R constructs. Functional analysis was carried out by expressing the TAS2R haplotypes in HEK293TG16gust44 cells, followed by calcium-imaging experiments with two known agonists. The haplotypes of the five orphan TAS2Rs were stimulated with more than one hundred bitter compounds. With the successful deorphanization of TAS2R41 in this work, 36 functionally distinct haplotypes were identified for the 21 activatable TAS2Rs. The actual functional diversity, however, remained well below the genetic variability of the TAS2Rs. Nine bitter receptors showed functionally homogeneous haplotypes or possessed only one globally predominant variant. Functionally heterogeneous haplotypes were identified for twelve TAS2Rs. Inactive variants of the receptors TAS2R9, TAS2R38 and TAS2R46 are expected to influence the perception of bitter compounds such as ofloxacin, cnicin, hydrocortisone, limonin, parthenolide or strychnine. 
Variants of differing sensitivity, especially of the receptors TAS2R47 and TAS2R49, should likewise lead to phenotypic differences for agonists such as absinthin, amarogentin or cromolyn. As already shown for TAS2R16, haplotypes of the functionally heterogeneous TAS2R7 and TAS2R41 occurred in an ethnicity-specific manner, which may point to local adaptation and distinct phenotypes. As a next step, the functionally variable TAS2Rs must be analysed in sensory tests to verify their phenotypic relevance. Analysis of the function-modulating amino acid positions, e.g. of TAS2R44, TAS2R47 or TAS2R49, could further contribute to a better understanding of receptor-ligand and receptor-G protein interactions.
Weight- and eating-disorder-related abnormalities are already widespread in childhood. Besides genetic factors, the familial transmission of disturbed eating behaviour is also a plausible etiological factor. From the age of ten onward, there is a broad empirical basis for the link between disturbed eating behaviour of mothers and their children. For ages below ten, little reliable knowledge exists so far. Investigating the specific effect of maternal disturbed eating behaviour on that of the child is, however, important for approaches to preventing childhood weight and eating disorders at this age. In this thesis, disturbed eating behaviour of mothers and of children aged between one and ten years, and the relationship between maternal and child disturbed eating behaviour, were analysed in two studies. The first study aimed to analyse disturbed eating behaviour of mothers and children, and their relationship, in the context of maternal overweight; 219 mothers of children aged three to six years were surveyed. The second study additionally focused on the role of maternal eating disorder symptoms alongside maternal overweight in the analyses of disturbed eating behaviour of children aged one to ten years. The sample comprised 506 mothers and their children. In both studies, mothers completed a questionnaire package with instruments on maternal disturbed eating behaviour (emotional, external and restrained eating) and the child's disturbed eating behaviour (emotional and external eating as well as food craving). In the second study, the mother's primary eating disorder symptoms (drive for thinness, body dissatisfaction and bulimic eating behaviour) and the children's pathological eating behaviour were additionally assessed. 
Overweight mothers reported not only higher levels of emotional and external eating but also more drive for thinness, body dissatisfaction and bulimic eating behaviour than normal- and underweight mothers. In total, 26% of the mothers surveyed in the second study reported relevant eating disorder symptoms; 62% of these were overweight. For the children, no sex differences in eating behaviour were found. At primary school age, emotional and pathological eating behaviour were more pronounced than in younger children. Childhood overweight was associated with more emotional and external eating, food craving and pathological eating behaviour. The presence of maternal overweight and of maternal eating disorder symptoms was associated with higher levels of, above all, the child's emotional eating. The highest levels of emotional eating were shown by children whose mothers had reported overweight and comorbid eating disorder symptoms. Moreover, maternal disturbed eating behaviours explained a relevant share of the variance in the child's emotional and external eating beyond general and weight-specific aspects, and emotional and external eating of mother and child were specifically linked to each other. In the first study, a mediation model showed that the relationship between maternal BMI and the child's emotional eating was fully mediated by the mother's emotional eating. In the second study, the child's age moderated the relationship between emotional eating of mothers and their children, with a significant association emerging from a child age of 5.4 years onward. 
This thesis provides clear evidence of the link between maternal weight- and eating-disorder-related characteristics and disturbed eating behaviour in children. The findings suggest that emotional eating should be considered a specific transmission pathway of weight- and eating-related disorders between mothers and children and should be taken into account in prevention approaches.
Modification of silicone elastomers with organic dipoles for dielectric elastomer actuators
(2013)
A dielectric elastomer actuator (DEA) is a stretchable capacitor consisting of an elastomer film sandwiched between two flexible electrodes. When a voltage is applied, the electrodes attract each other due to electrostatic interactions, compressing the elastomer in the z-direction and correspondingly expanding it in the x-y plane. This produces actuation movements that can be controlled very precisely via the voltage. In addition, DEAs are inexpensive, lightweight and actuate silently. DEAs can be used, for example, in medical products or optical components; such devices can also be used to generate electricity. The greatest obstacle to a broad implementation of these materials lies in the high voltages required to produce the actuation movement, which tend to lie in the kilovolt range. This makes the electronics expensive and the devices unsafe for users. To achieve lower operating voltages for DEAs, significant material improvements, in particular of the elastomer used, are required. To this end, the dielectric properties (permittivity) of the elastomers can be increased and/or their stiffness (Young's modulus) reduced. In the present work, the actuation performance of silicone films was considerably improved by the addition of organic dipoles. For this purpose, a procedure was established to bind functionalized dipoles covalently to the polymer network. This approach, termed the "one-step procedure", is easy to carry out and yields homogeneous films. The dipole addition was tested on various silicones differing in their mechanical properties. At maximum dipole content, the permittivity of all silicones investigated doubled and the films became distinctly softer. 
It was found that the network structure of the silicones used has a considerable influence on the achieved actuation strain. Depending on the network, an enormous increase in actuation performance, in the range from 100% up to 4000%, was obtained. As a result, the operating voltages of DEAs can be lowered considerably, so that they can potentially be operated at voltages below one kilovolt.
Within a research project on future sustainable water management options in the Elbe River basin, quasi-natural discharge scenarios had to be provided. The semi-distributed eco-hydrological model SWIM was utilised for this task. According to scenario simulations driven by the stochastic climate model STAR, the region would become distinctly drier. This thesis, however, focuses on the challenge of meeting the requirement of high model fidelity even for smaller sub-basins; usually, the quality of the simulations is lower at inner points than at the outlet. Four research-paper chapters and the discussion chapter deal with the reasons for local model deviations and with the problem of optimal spatial calibration. Among other assessments, the Markov chain Monte Carlo method is applied to show whether evapotranspiration or precipitation should be corrected to minimise runoff deviations, principal component analysis is used in an unusual way to evaluate local precipitation alterations caused by land cover changes, and remotely sensed surface temperatures allow an independent view of the evapotranspiration landscape. The overall insight is that spatially explicit hydrological modelling of such a large river basin requires a great deal of local knowledge, and obtaining that knowledge probably takes more time than is usually allotted to hydrological modelling studies.
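The Markov chain Monte Carlo idea mentioned above can be sketched generically (this is an illustrative Metropolis sampler, not the thesis implementation): the thesis used MCMC to infer whether evapotranspiration or precipitation corrections best explain runoff deviations; here we merely sample a one-dimensional correction factor under a hypothetical Gaussian misfit likelihood, with all numbers invented for illustration.

```python
import math
import random

def metropolis(logp, start, step, n, seed=0):
    """Random-walk Metropolis: propose a Gaussian step, accept with prob min(1, p'/p)."""
    rng = random.Random(seed)
    x, lp = start, logp(start)
    samples = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lpc = logp(cand)
        if math.log(rng.random()) < lpc - lp:  # accept/reject in log space
            x, lp = cand, lpc
        samples.append(x)
    return samples

# Hypothetical target: a correction factor centred at 0.9 with sd 0.05.
logp = lambda f: -0.5 * ((f - 0.9) / 0.05) ** 2
chain = metropolis(logp, start=1.0, step=0.05, n=5000)
burned = chain[1000:]  # discard burn-in
print(round(sum(burned) / len(burned), 2))  # posterior mean, roughly 0.9
```

In the thesis setting, the sampled quantity would instead be a correction applied to evapotranspiration or precipitation, with the log-likelihood measuring the resulting runoff misfit.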
Physical attractiveness and good looks play a decisive role in today's society, shaping the attitudes and body perception even of children and adolescents from an early age. Concerns about one's own body are considered a normative problem among adolescents and frequently carry the risk of health-endangering behaviour and mental disorders. In the search for causes, sociocultural factors, in particular the influence of media-transmitted beauty ideals, have moved into the focus of research in recent years. It remains unclear, however, why not all adolescents react in the same way to the ubiquitous media pressure. An obvious explanation is that those adolescents are particularly at risk whose immediate social environment directly or indirectly conveys and reinforces the prevailing beauty ideal. The understanding of the role of social pressure has so far been limited by numerous substantive and methodological issues (e.g. limitations in operationalization, insufficient consideration of sex-specific mechanisms, lack of longitudinal evidence). This thesis therefore addresses the significance of appearance-related social pressure in the development of body dissatisfaction in adolescence in three consecutive research steps. Starting with the development of a comprehensive and reliable assessment instrument, the thesis aims to contrast different aspects of social pressure and to compare them with regard to their prevalence and risk effect. The research project was carried out in various student samples from grades 7 to 9 of different academic and comprehensive secondary schools (main sample N = 1112, mean age = 13.4 ± 0.8 years). Both cross-sectional and longitudinal analyses were conducted. 
In addition, clinical samples with eating and weight disorders were used to test the questionnaire. For the detailed assessment of different forms of appearance-related social pressure, the first step was the development of the appearance-related social pressure questionnaire (FASD), which reliably and validly captures eight different forms of appearance-related social pressure from parents and peers. The instrument proved equally suitable for boys and girls and for adolescents of different weight status. Its psychometric quality was demonstrated both for population-based samples and for clinical samples with eating and weight disorders, making broad use in research and practice conceivable. The second step examined the prevalence of appearance-related social pressure with particular attention to sex, age and weight-group differences. Girls proved to be more strongly affected by appearance-related peer pressure than boys. Furthermore, the results suggest that overweight, irrespective of sex, is associated with increased appearance-related devaluation and experiences of exclusion. The age effects of the study also indicate that the transition from early to middle adolescence, as well as school changes, represent particularly critical periods for the establishment of appearance-related influences. Finally, the thesis addressed the longitudinal risk effect of different aspects of appearance-related social pressure on the development of body dissatisfaction. Appearance-related influences from friends longitudinally intensified body concerns in both girls and boys. Moreover, the experience of exclusion by peers emerged as a decisive risk factor for weight-related body concerns among boys. 
The most important parental influence proved to be encouragement to watch one's figure; such encouragement intensified weight-related body concerns equally for girls and boys. This thesis set out to further clarify the role of appearance-related social influences. The comprehensive instrument presented here enabled a differentiated view of the prevalence and effect of different forms of social pressure. The results thus not only point to important sex-specific mechanisms but also contribute to a deeper understanding of the risk effect of social pressure. These findings provide concrete starting points for prevention and intervention and also allow a further concretization of established sociocultural models.
The selenoprotein glutathione peroxidase 2 (GPx2) is an epithelium-specific, hydroperoxide-reducing enzyme expressed in the intestinal epithelium, primarily in the proliferating cells of the crypt base. The maintenance of GPx2 expression in the crypt base even at subadequate selenium status may indicate that it performs particularly important functions there. Indeed, GPx2 knockout (KO) mice show an increased apoptosis rate in the crypt base. One aim of this work was therefore to investigate the physiological function of GPx2 more closely. In crypt-base epithelial cells from the colon of selenium-poor GPx2 KO mice, increased caspase 3/7 activity was found compared with the wild type (WT). These cells also showed increased susceptibility to oxidative stress. GPx2 thus ensures the protection of the proliferating crypt-base cells even under subadequate selenium supply. Furthermore, in the colon of selenium-poor (-Se) and selenium-adequate (+Se) GPx2 KO mice, increased tumor necrosis factor α expression and increased macrophage infiltration were found compared with the WT. Feeding a selenium-supplemented diet (++Se) prevented this. GPx2 KO mice therefore already exhibit basal low-grade inflammation. This underlines that GPx2 has, above all, an important anti-inflammatory function in the intestinal epithelium. Protective functions in colon carcinogenesis are attributed to the micronutrient selenium. In a mouse model of colitis-associated colon carcinogenesis, GPx2 acted anti-inflammatorily and thus inhibited tumor development. On the other hand, pro-carcinogenic properties of GPx2 have also been uncovered. This work therefore investigated the effect of a GPx2 knockout in a model of sporadic colon carcinogenesis induced by azoxymethane (AOM). 
In the WT, preneoplastic lesions frequently showed increased GPx2 expression compared with normal intestinal mucosa. A comparable increase in GPx2 expression has also been described in human colon carcinogenesis. The absence of GPx2 resulted in a reduced development of tumors (-Se and ++Se) and preneoplastic lesions (-Se and +Se). GPx2 thus promoted tumor development in the AOM model. Eight hours after AOM administration, an increased apoptosis rate was observed in the mid-crypt region of the GPx2 KO colon compared with the WT (-Se, +Se), but not at the crypt base or in the ++Se group. GPx2 may have acted procarcinogenically by preventing the efficient elimination of damaged cells during the tumor initiation phase. A similar effect is also conceivable for the increased GPx2 expression during the promotion phase: GPx2 could protect proliferating preneoplastic cells against oxidative stress, apoptosis, or antitumor immunity. This could be reinforced by cooperation with other selenoproteins such as GPx1 and thioredoxin reductases, for which procarcinogenic functions have likewise been described. Modulation of the redox status in tumor cells could play an important role here. Varying the dietary selenium content had a roughly U-shaped effect in the WT: the -Se and ++Se groups tended to develop more and larger tumors than the +Se group. In summary, GPx2 protects the proliferating cells of the crypt base. However, it may also protect proliferating transformed cells and thereby promote sporadic, AOM-induced colon carcinogenesis. In a model of colitis-associated colon carcinogenesis, GPx2 had the opposite effect owing to its anti-inflammatory action and inhibited tumor development.
The role of GPx2 in colon carcinogenesis therefore depends on the underlying mechanism and is decisively determined by the involvement of inflammation.
The Arctic tundra, covering approx. 5.5 % of the Earth’s land surface, is one of the last ecosystems remaining closest to its untouched condition. Remote sensing can provide information at regular time intervals and large spatial scales on the structure and function of Arctic ecosystems. However, almost all natural surfaces exhibit individual anisotropic reflectance behavior, which can be described by the bidirectional reflectance distribution function (BRDF). This effect can cause significant changes in the measured surface reflectance depending on solar illumination and sensor viewing geometries. The aim of this thesis is the hyperspectral and spectro-directional reflectance characterization of important Arctic tundra vegetation communities at representative Siberian and Alaskan tundra sites as a basis for the extraction of vegetation parameters and the normalization of BRDF effects in off-nadir and multi-temporal remote sensing data. Moreover, in preparation for the upcoming German EnMAP (Environmental Mapping and Analysis Program) satellite mission, understanding BRDF effects in Arctic tundra is essential for the retrieval of high-quality, consistent, and therefore comparable datasets. The research in this doctoral thesis is based on field spectroscopic and field spectro-goniometric investigations of representative Siberian and Alaskan measurement grids. The first objective of this thesis was the development of a lightweight, transportable, and easily managed field spectro-goniometer system that nevertheless provides reliable spectro-directional data. I developed the Manual Transportable Instrument platform for ground-based Spectro-directional observations (ManTIS).
The outcomes of the field spectro-radiometric measurements at the Low Arctic study sites along important environmental gradients (regional climate, soil pH, toposequence, and soil moisture) show that the different plant communities can be distinguished by their nadir-view reflectance spectra. The results in particular reveal possibilities for separating the different tundra vegetation communities in the visible (VIS) blue and red wavelength regions. Additionally, the near-infrared (NIR) shoulder and NIR reflectance plateau, despite their relatively low values due to the low structure of tundra vegetation, are still valuable information sources and can separate communities according to their biomass and vegetation structure. In general, all tundra plant communities show: (i) low maximum NIR reflectance; (ii) a weak or nonexistent green reflectance peak in the VIS spectrum; (iii) a narrow “red-edge” region between the red and NIR wavelength regions; and (iv) no distinct NIR reflectance plateau. These common nadir-view reflectance characteristics are essential for understanding the variability of BRDF effects in Arctic tundra. None of the analyzed tundra communities showed even an approximately isotropic reflectance behavior. In general, tundra vegetation communities: (i) usually show the strongest BRDF effects in the solar principal plane; (ii) usually show the reflectance maximum in the backward viewing directions and the reflectance minimum in the nadir to forward viewing directions; (iii) usually have a higher degree of reflectance anisotropy in the VIS wavelength region than in the NIR wavelength region; and (iv) show a more bowl-shaped reflectance distribution in longer wavelength bands (>700 nm).
The results of the analysis of the influence of high sun zenith angles on the reflectance anisotropy show that with increasing sun zenith angles, the reflectance anisotropy changes to azimuthally symmetrical, bowl-shaped reflectance distributions with the lowest reflectance values in the nadir view position. The spectro-directional analyses also show that remote sensing products such as the NDVI or relative absorption depth products are strongly influenced by BRDF effects, and that the anisotropic characteristics of the remote sensing products can differ significantly from the observed BRDF effects in the original reflectance data. However, the results further show that the NDVI can minimize view-angle effects due to the contrary spectro-directional effects in the red and NIR bands. For the studied tundra plant communities, the overall difference of the off-nadir NDVI values compared to the nadir value increases with increasing sensor viewing angles but on average never exceeds 10 %. In conclusion, this study shows that changes in the illumination-target-viewing geometry directly alter the reflectance spectra of Arctic tundra communities according to their object-specific BRDFs. Since the different tundra communities show only small but nonetheless significant differences in surface reflectance, it is important to include spectro-directional reflectance characteristics in algorithm development for remote sensing products.
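The partial cancellation of view-angle effects in the NDVI can be illustrated with a minimal numerical sketch. The reflectance values below are hypothetical and not taken from the thesis; the point is only that when red and NIR reflectances shift in the same direction under off-nadir viewing, the normalized difference largely cancels the common-mode change.

```python
# Minimal sketch (hypothetical values, not thesis data): why a normalized
# difference index dampens view-angle effects that act on both bands.

def ndvi(red, nir):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

# Hypothetical nadir reflectances for a tundra plant community
red_nadir, nir_nadir = 0.05, 0.30

# Hypothetical off-nadir values: both bands brightened by backscatter,
# the VIS (red) band more strongly than the NIR band, as reported above
red_off, nir_off = 0.05 * 1.30, 0.30 * 1.15

rel_change = abs(ndvi(red_off, nir_off) - ndvi(red_nadir, nir_nadir)) \
             / ndvi(red_nadir, nir_nadir)

print(f"nadir NDVI     = {ndvi(red_nadir, nir_nadir):.3f}")
print(f"off-nadir NDVI = {ndvi(red_off, nir_off):.3f}")
print(f"relative change = {rel_change:.1%}")
```

In this toy setting the single-band reflectances change by 15-30 %, yet the NDVI shifts by only about 4 %, consistent with the sub-10 % differences reported above.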
Small eye movements during fixation : the case of postsaccadic fixation and preparatory influences
(2013)
Describing human eye movement behavior as an alternating sequence of saccades and fixations turns out to be an oversimplification because the eyes continue to move during fixation. Small-amplitude saccades (e.g., microsaccades) are typically observed 1-2 times per second during fixation. Research on microsaccades came in two waves. Early studies were dominated by the question whether microsaccades affect visual perception and by studies on the role of microsaccades in the process of fixation control. The lack of evidence for a unique role of microsaccades led to a very critical view of their importance. Over the last years, microsaccades have moved into focus again, revealing many interactions with perception, oculomotor control, and cognition, as well as intriguing new insights into the neurophysiological implementation of microsaccades. In contrast to early studies, recent findings on microsaccades were accompanied by the development of models of microsaccade generation. While the exact generating mechanisms vary between the models, they share the assumption that microsaccades are generated in a topographically organized saccade motor map that includes a representation for small-amplitude saccades in the center of the map (with its neurophysiological implementation in the rostral pole of the superior colliculus). In the present thesis I criticize that models of microsaccade generation are based exclusively on results obtained during prolonged presaccadic fixation. I argue that microsaccades should also be studied in a more natural situation, namely the fixation following large saccadic eye movements. Studying postsaccadic fixation offers a new window to falsify models that aim to account for the generation of small eye movements. I demonstrate that error signals (visual and extra-retinal), as well as non-error signals like target eccentricity, influence the characteristics of small-amplitude eye movements.
These findings require a modification of the model introduced by Rolfs, Kliegl and Engbert (2008) in order to account for the generation of small-amplitude saccades during postsaccadic fixation. Moreover, I present a promising type of survival analysis that allowed me to examine time-dependent influences on postsaccadic eye movements. In addition, I examined the interplay of postsaccadic eye movements and postsaccadic location judgments, highlighting the need to include postsaccadic eye movements as a covariate in the analyses of location judgments in the presented paradigm. As a second goal, I tested model predictions concerning preparatory influences on microsaccade generation during presaccadic fixation. The observation that the preparatory set significantly influenced microsaccade rate supports the critical model assumption that increased fixation-related activity results in a larger number of microsaccades. In the present thesis I thus identify important influences on the generation of small-amplitude saccades during fixation. These eye movements constitute a rich oculomotor behavior that still poses many research questions. Certainly, small-amplitude saccades represent an interesting source of information and will continue to influence future studies on perception and cognition.
Organic semiconductors combine the benefits of organic materials, i.e., low-cost production, mechanical flexibility, light weight, and robustness, with the fundamental semiconductor properties of light absorption, emission, and electrical conductivity. This class of material has several advantages over conventional inorganic semiconductors that have led, for instance, to the commercialization of organic light-emitting diodes, which can nowadays be found in the displays of TVs and smartphones. Moreover, organic semiconductors will possibly lead to new electronic applications which rely on the unique mechanical and electrical properties of these materials. In order to push the development and success of organic semiconductors forward, it is essential to understand the fundamental processes in these materials. This thesis concentrates on understanding how the charge transport in thiophene-based semiconductor layers depends on the layer morphology and how the charge transport properties can be intentionally modified by doping these layers with a strong electron acceptor. By means of optical spectroscopy, the layer morphologies of poly(3-hexylthiophene), P3HT, P3HT-fullerene bulk heterojunction blends, and oligomeric polyquaterthiophene, oligo-PQT-12, are studied as a function of temperature, molecular weight, and processing conditions. The analyses rely on decomposing the absorption into contributions from the ordered and the disordered parts of the layers. The ordered-phase spectra are analyzed using Spano’s model. It turns out that the fraction of aggregated chains and the interconnectivity of these domains are fundamental to a high charge carrier mobility. In P3HT layers, such structures can be grown with high-molecular-weight, long P3HT chains.
Low- and medium-molecular-weight P3HT layers also contain a significant amount of chain aggregates with high intragrain mobility; however, intergranular connectivity and, therefore, efficient macroscopic charge transport are absent. In P3HT-fullerene blend layers, a highly crystalline morphology that favors hole transport and solar cell efficiency can be induced by annealing procedures and the choice of a high-boiling-point processing solvent. Based on scanning near-field and polarization optical microscopy, the morphology of oligo-PQT-12 layers is found to be highly crystalline, which explains the rather high field-effect mobility in this material compared to low-molecular-weight polythiophene fractions. On the other hand, crystalline dislocations and grain boundaries are identified which clearly limit the charge carrier mobility in oligo-PQT-12 layers. The charge transport properties of organic semiconductors can be widely tuned by molecular doping. Indeed, molecular doping is a key to highly efficient organic light-emitting diodes and solar cells. Despite this vital role, it is still not understood how mobile charge carriers are induced into the bulk semiconductor by the doping process. This thesis contains a detailed study of the doping mechanism and the electrical properties of P3HT layers p-doped with the strong molecular acceptor tetrafluorotetracyanoquinodimethane, F4TCNQ. The density of doping-induced mobile holes, their mobility, and the electrical conductivity are characterized over a broad range of acceptor concentrations. A long-standing debate on the nature of the charge transfer between P3HT and F4TCNQ is resolved by showing that almost every F4TCNQ acceptor undergoes a full-electron charge transfer with a P3HT site. However, only 5% of these charge-transfer pairs can dissociate and induce a mobile hole into P3HT which contributes to electrical conduction.
Moreover, it is shown that the left-behind F4TCNQ ions broaden the density-of-states distribution for the doping-induced mobile holes, which is due to the long-range Coulomb attraction in low-permittivity organic semiconductors.
Systems of Systems (SoS) have received a lot of attention recently. In this thesis we focus on SoS that are built atop the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. For this thesis we understand SoS as ensembles of single autonomous systems that are integrated into a larger system, the SoS. The interesting fact about these systems is that the previously isolated systems are still maintained, improved, and developed on their own. Structural dynamics is an issue in SoS, as systems can join and leave the ensemble at any point in time. This, and the fact that the cooperation among the constituent systems is not necessarily observable, means that we consider these systems as open systems. Of course, the system has a clear boundary at each point in time, but this boundary can only be identified by halting the complete SoS; halting a system of that size, however, is practically impossible. Often SoS are combinations of software systems and physical systems, so a failure in the software system can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and essentially relies on collaborations and roles as an abstraction layer above the components. This allows us to describe SoS at an architectural level. We also give a formal semantics for our modelling approach which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme that can cope with the complexity constraints implied by the SoS' structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level, i.e., the adding and removing of components and services, is inadequate. Therefore our approach directly supports the modelling and verification of evolution.
Even though quite different in occurrence and consequences, from a modeling perspective many natural hazards share similar properties and challenges. Their complex nature as well as lacking knowledge about their driving forces and potential effects make their analysis demanding: uncertainty about the modeling framework, inaccurate or incomplete event observations, and the intrinsic randomness of the natural phenomenon add up to different interacting layers of uncertainty, which require careful handling. Nevertheless, deterministic approaches are still widely used in natural hazard assessments, carrying the risk of underestimating the hazard with disastrous effects. The all-round probabilistic framework of Bayesian networks constitutes an attractive alternative. In contrast to deterministic approaches, it treats response variables as well as explanatory variables as random variables, making no distinction between input and output variables. Using a graphical representation, Bayesian networks encode the dependency relations between the variables in a directed acyclic graph: variables are represented as nodes and (in-)dependencies between variables as (missing) edges between the nodes. The joint distribution of all variables can thus be described by decomposing it, according to the depicted independences, into a product of local conditional probability distributions, which are defined by the parameters of the Bayesian network. In the framework of this thesis the Bayesian network approach is applied to different natural hazard domains (i.e., seismic hazard, flood damage, and landslide assessments). Learning the network structure and parameters from data, Bayesian networks reveal relevant dependency relations between the included variables and help to gain knowledge about the underlying processes.
The problem of Bayesian network learning is cast in a Bayesian framework, considering the network structure and parameters as random variables themselves and searching for the most likely combination of both, which corresponds to the maximum a posteriori (MAP score) of their joint distribution given the observed data. Although well studied in theory, the learning of Bayesian networks from real-world data is usually not straightforward and requires an adaptation of existing algorithms. Typical problems are the handling of continuous variables, incomplete observations, and the interaction of both. Working with continuous distributions requires assumptions about the allowed families of distributions. To "let the data speak" and avoid wrong assumptions, continuous variables are instead discretized here, thus allowing for a completely data-driven and distribution-free learning. An extension of the MAP score, considering the discretization as a random variable as well, is developed for an automatic multivariate discretization that takes interactions between the variables into account. The discretization process is nested into the network learning and requires several iterations. When incomplete observations must be handled on top of this, the computational burden grows: iterative procedures for missing value estimation quickly become infeasible. A more efficient, albeit approximate, method is used instead, estimating the missing values based only on the observations of variables directly interacting with the missing variable. Moreover, natural hazard assessments often have a primary interest in a certain target variable. The discretization learned for this variable does not always have the resolution required for a good prediction performance. Finer resolutions for (conditional) continuous distributions are achieved with continuous approximations subsequent to the Bayesian network learning, using kernel density estimations or mixtures of truncated exponential functions.
All our procedures are completely data-driven. We thus avoid assumptions that require expert knowledge and instead provide domain-independent solutions that are applicable not only in other natural hazard assessments but in a variety of domains struggling with uncertainties.
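The factorization of the joint distribution into local conditional probability distributions can be sketched with a toy example. This is an illustrative sketch only: the binary hazard variables and all probabilities below are invented for the illustration and are not the thesis' models or data.

```python
# Toy Bayesian network sketch (hypothetical variables and probabilities):
# structure R -> L, R -> D, L -> D, with
#   R = heavy rain, L = levee failure, D = flood damage.
# The joint distribution factorizes as P(R, L, D) = P(R) P(L|R) P(D|R, L).

P_R = {True: 0.1, False: 0.9}
P_L_given_R = {
    True:  {True: 0.30, False: 0.70},
    False: {True: 0.01, False: 0.99},
}
P_D_given_RL = {
    (True,  True):  {True: 0.90, False: 0.10},
    (True,  False): {True: 0.40, False: 0.60},
    (False, True):  {True: 0.50, False: 0.50},
    (False, False): {True: 0.05, False: 0.95},
}

def joint(r, l, d):
    """P(R=r, L=l, D=d) as the product of the local CPTs."""
    return P_R[r] * P_L_given_R[r][l] * P_D_given_RL[(r, l)][d]

bools = (True, False)

# Sanity check: the factorized joint sums to 1 over all assignments
total = sum(joint(r, l, d) for r in bools for l in bools for d in bools)

# Inference by enumeration: marginal probability of damage, summing out R and L
p_damage = sum(joint(r, l, True) for r in bools for l in bools)

print(f"sum over all assignments = {total:.3f}")
print(f"P(damage) = {p_damage:.5f}")
```

Structure learning, as described above, amounts to searching over such DAGs and their parameters for the combination maximizing the MAP score given the data; the factorization shown here is what makes that search and the subsequent inference tractable.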
Enterolignans (enterodiol and enterolactone) exhibit structural similarity to estradiol and have therefore been hypothesized to modulate hormone-related cancers such as breast cancer. The bioactivation of the plant lignan secoisolariciresinol diglucoside (SDG) requires transformation by intestinal bacteria, including the deglycosylation of SDG to secoisolariciresinol (SECO) followed by demethylation and dehydroxylation of SECO to enterodiol (ED). Finally, ED is dehydrogenated to enterolactone (EL). It is unclear whether the bacterial activation of SDG to ED and EL is crucial for the cancer-preventing effects of dietary lignans. The possible protective effect of bacterial lignan transformation on 7,12-dimethylbenz(a)anthracene (DMBA)-induced breast cancer in gnotobiotic rats was investigated. Germ-free rats were associated with a defined lignan-converting consortium (Clostridium saccharogumia, Blautia producta, Eggerthella lenta, and Lactonifactor longoviformis). The rats colonized with the lignan-converting consortium (LCC) were fed a lignan-rich flaxseed diet, and breast cancer was chemically induced. Identically treated germ-free rats served as controls. All bacteria of the consortium successfully colonized the intestine of the LCC rats. The plant lignan SDG was converted into the enterolignans ED and EL in the LCC rats but not in the germ-free rats. This transformation did not influence cancer incidence but significantly decreased the number of tumors per tumor-bearing rat and the tumor size. Cell proliferation was significantly inhibited and apoptosis significantly induced in LCC rats. No differences between LCC and control rats were observed in the expression of the genes encoding the estrogen receptors (ERα and ERβ) and G protein-coupled receptor 30 (GPR30). Similar findings were observed for the insulin-like growth factor 1 (IGF-1) and epidermal growth factor receptor (EGFR) genes involved in tumor growth.
Proteome analysis revealed that 24 proteins were differentially expressed in tumor tissue from LCC and germ-free rats. RanBP-type and C3HC4-type zinc finger-containing protein 1 (RBCK1) and poly(rC)-binding protein 1 (PCBP1) were down-regulated by 3.2- and 2.0-fold, respectively. These proteins are associated with cell proliferation. The activity of selected enzymes involved in the degradation of oxidants in plasma and liver was significantly increased in the LCC rats. However, plasma and liver concentrations of reduced glutathione (a non-enzymatic antioxidant) and malondialdehyde (an oxidative stress marker) did not differ between the groups. In conclusion, the bacterial conversion of plant lignans to enterolignans beneficially influences their anti-cancer effects. However, the mechanisms involved in these effects remain elusive.
The mapping of planetary bodies is an essential means of the spaceflight-based exploration of celestial bodies. Currently, geographic information systems (GIS) are used to create planetary maps. The aim of this work is to design a GIS-oriented processing chain, the Planetary Mapping System (PMS), with the focus on producing geological and geomorphological maps of planetary surfaces in a uniform manner and making them sustainably accessible.
Permafrost-affected ecosystems, including peat wetlands, are among the most obvious regions in which current microbial controls on organic matter decomposition are likely to change as a result of global warming. Wet tundra ecosystems in particular are ideal sites for increased methane production because of the waterlogged, anoxic conditions that prevail in seasonally increasing thawed layers. This doctoral research project focused on investigating the abundance and distribution of the methane-cycling microbial communities in four different polygons on Herschel Island and the Yukon Coast. Despite the relevance of the Canadian Western Arctic for the global methane budget, the permafrost microbial communities there have thus far remained insufficiently characterized. Through the study of methanogenic and methanotrophic microbial communities involved in the decomposition of permafrost organic matter and their potential reaction to rising environmental temperatures, the overarching goal of this thesis is to fill the current gap in understanding the fate of the organic carbon currently stored in Arctic environments and its implications for the methane cycle in permafrost environments. To attain this goal, a multiproxy approach including community fingerprinting analysis, cloning, quantitative PCR, and next-generation sequencing was used to describe the bacterial and archaeal community present in the active layer of four polygons and to scrutinize the diversity and distribution of methane-cycling microorganisms at different depths. These methods were combined with soil property analyses in order to identify the main physico-chemical variables shaping these communities.
In addition, a climate warming simulation experiment was carried out on intact active-layer cores retrieved from Herschel Island in order to investigate the changes in the methane-cycling communities associated with an increase in soil temperature and to help better predict future methane fluxes from polygonal wet tundra environments in the context of climate change. Results showed that the microbial community found in the water-saturated and carbon-rich polygons on Herschel Island and the Yukon Coast was diverse and showed a similar distribution with depth in all four polygons sampled. Specifically, the methanogenic community identified resembled the communities found at other similar Arctic study sites and showed comparable potential methane production rates, whereas the methane-oxidizing bacterial community differed from what has been found so far, being dominated by type-II rather than type-I methanotrophs. After being subjected to strong increases in soil temperature, the active-layer microbial community demonstrated the ability to adapt quickly, and as a result shifts in community composition could be observed. These results contribute to the understanding of carbon dynamics in Arctic permafrost regions and allow an assessment of the potential impact of climate change on methane-cycling microbial communities. This thesis constitutes the first in-depth study of methane-cycling communities in the Canadian Western Arctic, striving to advance our understanding of these communities in degrading permafrost environments by establishing an important new observatory in the Circum-Arctic.
This work deals with the synthesis and characterization of thermoresponsive polymers and their immobilization on solid surfaces as nanoscale thin films. Thermoresponsive polymers of the lower critical solution temperature (LCST) type were used. They are readily soluble in the solvent at lower temperatures and become insoluble upon heating above a certain critical temperature; i.e., they exhibit a phase transition at a specific temperature. As base material, various thermoresponsive and biocompatible polymers based on di(ethylene glycol) methyl ether methacrylate (MEO2MA) and oligo(ethylene glycol) methyl ether methacrylate (OEGMA475, Mn = 475 g/mol) were synthesized via free-radical copolymerization. The thermoresponsive phase transition of the copolymers was observed in aqueous solution and in swollen cross-linked thin films. It was also investigated to what extent selective protein binding to suitably functionalized copolymers influences the phase transition temperature. The thermoresponsive copolymers were immobilized on solid surfaces via photo-cross-linkable groups. The required light-sensitive cross-linker units were incorporated into the copolymer using the polymerizable benzophenone derivative 2-(4-benzoylphenoxy)ethyl methacrylate (BPEM). Thin films of the copolymers with a thickness of approx. 100 nm were spin-coated onto silicon wafers and subsequently cross-linked and immobilized on the surface by UV irradiation. The films are more stable the higher the cross-linker content and the higher the molar mass of the copolymers. During a washing step after cross-linking, for example, more non-cross-linked copolymer is washed out of a film with moderate molar mass and low cross-linker content than out of a film of a higher-molar-mass copolymer with high cross-linker content.
The swellability of the polymer films was investigated by ellipsometry. It is larger the lower the cross-linker content of the copolymers. Films of thermoresponsive OEG copolymers show a volume phase transition of the LCST type. The thermoresponsive collapse of the films is completely reversible, and the collapse temperature can be tuned via the composition of the copolymers. For a comparison of these properties with the well-characterized and currently most widely studied thermoresponsive polymer, poly(N-isopropylacrylamide) (PNIPAM), photo-cross-linked PNIPAM films were additionally prepared and likewise measured by ellipsometry. Compared with PNIPAM, the phase transition of the films made of copolymers with oligo(ethylene glycol) side chains (OEG copolymers) extends over a broader temperature range. The photosensitive benzophenone groups were selectively excited with light of wavelengths > 300 nm. When shorter wavelengths were used, the copolymer films cross-linked even without the light-sensitive benzophenone groups. This effect could be exploited for the controlled immobilization and cross-linking of the OEG copolymers. As a further immobilization method, attachment via amide bonds was investigated. For this purpose, OEG copolymers containing the carboxyl-bearing 2-succinyloxyethyl methacrylate (MES) were spin-coated onto silicon wafers silanized with 3-aminopropyldimethylethoxysilane (APDMSi) and cross-linked with the oligomeric α,ω-diamine Jeffamine® ED-900. The cross-linking reaction proceeded without further additives by heating the samples. The resulting hydrogel films were stable and showed pH-responsive in addition to thermoresponsive behavior.
To investigate whether the phase transition temperature can be influenced by protein binding, a polymerizable biotin derivative, 2-biotinylaminoethyl methacrylate (BAEMA), was incorporated into the thermoresponsive copolymer. The influence of the biotin-binding protein avidin on the thermoresponsive behavior of the copolymer in solution was examined. The specific binding of avidin to the biotinylated copolymer shifted the transition temperature markedly toward higher temperatures. Control experiments showed that this behavior is due to selective protein binding. Thermoresponsive OEG copolymers with photo-cross-linkable groups from BPEM and biotin groups from BAEMA were spin-coated onto gold and silicon surfaces and cross-linked by UV irradiation. The specific binding of avidin to the copolymer film was investigated by surface plasmon resonance and ellipsometry. The binding capacity of the films increased with decreasing cross-linker content, i.e., with increasing mesh size of the network. The swellability of the films was increased by avidin binding. In highly swollen systems, however, multivalent binding of the tetravalent avidin caused additional cross-linking of the polymer network. This effect counteracts the increased swellability caused by avidin binding and makes the polymer networks shrink.
Given a large set of records in a database and a query record, similarity search aims to find all records sufficiently similar to the query record. To solve this problem, two main aspects need to be considered: First, to perform effective search, the set of relevant records is defined using a similarity measure. Second, an efficient access method is to be found that performs only a few database accesses and comparisons using the similarity measure. This thesis solves both aspects with an emphasis on the latter. In the first part of this thesis, a frequency-aware similarity measure is introduced. Compared record pairs are partitioned according to frequencies of attribute values. For each partition, a different similarity measure is created: machine learning techniques combine a set of base similarity measures into an overall similarity measure. After that, a similarity index for string attributes is proposed, the State Set Index (SSI), which is based on a trie (prefix tree) that is interpreted as a nondeterministic finite automaton. For processing range queries, the notion of query plans is introduced in this thesis to describe which similarity indexes to access and which thresholds to apply. The query result should be as complete as possible under some cost threshold. Two query planning variants are introduced: (1) Static planning selects a plan at compile time that is used for all queries. (2) Query-specific planning selects a different plan for each query. For answering top-k queries, the Bulk Sorted Access Algorithm (BSA) is introduced, which retrieves large chunks of records from the similarity indexes using fixed thresholds, and which focuses its efforts on records that are ranked high in more than one attribute and are thus promising candidates. The described components form a complete similarity search system.
Based on prototypical implementations, this thesis shows comparative evaluation results for all proposed approaches on different real-world data sets, one of which is a large person data set from a German credit rating agency.
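The trie-based range query underlying indexes like the SSI can be illustrated with a classic sketch: walking a trie while maintaining an edit-distance DP row per node, and pruning any subtree whose minimum row value already exceeds the threshold. This is a simplified, hypothetical illustration of the general technique, not the SSI itself (the SSI's state-set compression and NFA interpretation are omitted):

```python
class TrieNode:
    """A plain trie node; `word` is set at nodes that terminate an indexed string."""
    def __init__(self):
        self.children = {}
        self.word = None

def build_trie(words):
    root = TrieNode()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, TrieNode())
        node.word = w
    return root

def range_query(root, query, max_dist):
    """Return all indexed words within edit distance max_dist of query."""
    results = []
    first_row = list(range(len(query) + 1))  # DP row for the empty prefix
    for ch, child in root.children.items():
        _search(child, ch, query, first_row, max_dist, results)
    return results

def _search(node, ch, query, prev_row, max_dist, results):
    # Extend the Levenshtein DP by one trie character.
    row = [prev_row[0] + 1]
    for i in range(1, len(query) + 1):
        cost = 0 if query[i - 1] == ch else 1
        row.append(min(row[i - 1] + 1, prev_row[i] + 1, prev_row[i - 1] + cost))
    if node.word is not None and row[-1] <= max_dist:
        results.append(node.word)
    # Prune: no descendant can fall below the row minimum.
    if min(row) <= max_dist:
        for c, child in node.children.items():
            _search(child, c, query, row, max_dist, results)
```

The pruning step is what gives tries their advantage over a linear scan: shared prefixes are evaluated once, and whole subtrees are skipped as soon as the threshold is provably unreachable.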
In the early days of computer graphics, research was mainly driven by the goal to create realistic synthetic imagery. By contrast, non-photorealistic computer graphics, established as its own branch of computer graphics in the early 1990s, is mainly motivated by concepts and principles found in traditional art forms, such as painting, illustration, and graphic design, and it investigates concepts and techniques that abstract from reality using expressive, stylized, or illustrative rendering techniques. This thesis focuses on the artistic stylization of two-dimensional content and presents several novel automatic techniques for the creation of simplified stylistic illustrations from color images, video, and 3D renderings. The primary innovation of these techniques is that they utilize the smoothed structure tensor as a simple and efficient way to obtain information about the local structure of an image. More specifically, this thesis contributes to knowledge in this field in the following ways. First, a comprehensive review of the structure tensor is provided. In particular, different methods for integrating the minor eigenvector field of the smoothed structure tensor are developed, and the superiority of the smoothed structure tensor over the popular edge tangent flow is demonstrated. Second, separable implementations of the popular bilateral and difference of Gaussians filters that adapt to the local structure are presented. These filters avoid artifacts while being computationally highly efficient. Taken together, both provide an effective way to create a cartoon-style effect. Third, a generalization of the Kuwahara filter is presented that avoids artifacts by adapting the shape, scale, and orientation of the filter to the local structure. This causes directional image features to be better preserved and emphasized, resulting in overall sharper edges and a more feature-abiding painterly effect.
In addition to the single-scale variant, a multi-scale variant is presented, which is capable of performing a highly aggressive abstraction. Fourth, a technique that builds upon the idea of combining flow-guided smoothing with shock filtering is presented, allowing for an aggressive exaggeration and an emphasis of directional image features. All presented techniques are suitable for temporally coherent per-frame filtering of video or dynamic 3D renderings, without requiring expensive extra processing, such as optical flow. Moreover, they can be efficiently implemented to process content in real-time on a GPU.
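The central building block named above, the smoothed structure tensor and its minor eigenvector (the local edge tangent), can be sketched in a few lines of NumPy. This is a minimal illustration under assumed simplifications (a small binomial kernel standing in for Gaussian smoothing), not the thesis's implementation:

```python
import numpy as np

def smoothed_structure_tensor(img, kernel=None):
    """Per-pixel 2x2 structure tensor (Jxx, Jxy, Jyy), smoothed separably."""
    if kernel is None:
        kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
        kernel /= kernel.sum()  # small binomial filter, approximates a Gaussian
    gy, gx = np.gradient(img.astype(float))

    def smooth(a):
        a = np.apply_along_axis(np.convolve, 0, a, kernel, mode='same')
        return np.apply_along_axis(np.convolve, 1, a, kernel, mode='same')

    return smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)

def edge_tangent_orientation(jxx, jxy, jyy):
    """Orientation (radians, mod pi) of the minor eigenvector: the edge tangent."""
    # Angle of the major eigenvector (dominant gradient direction);
    # the tangent is orthogonal to it.
    phi = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    return (phi + np.pi / 2.0) % np.pi
```

Smoothing the tensor components, rather than the gradient field itself, is what avoids the sign-flipping problem of raw gradients: opposite gradient vectors contribute identically to the tensor, so the resulting orientation field is stable along edges.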
Interactive rendering techniques for focus+context visualization of 3D geovirtual environments
(2013)
This thesis introduces a collection of new real-time rendering techniques and applications for focus+context visualization of interactive 3D geovirtual environments such as virtual 3D city and landscape models. These environments are generally characterized by a large number of objects and are of high complexity with respect to geometry and textures. For these reasons, their interactive 3D rendering represents a major challenge. Their 3D depiction implies a number of weaknesses such as occlusions, cluttered image contents, and partial screen-space usage. To overcome these limitations and, thus, to facilitate the effective communication of geo-information, principles of focus+context visualization can be used for the design of real-time 3D rendering techniques for 3D geovirtual environments (see Figure). In general, detailed views of a 3D geovirtual environment are combined seamlessly with abstracted views of the context within a single image. To perform the real-time image synthesis required for interactive visualization, dedicated parallel processors (GPUs) for rasterization of computer graphics primitives are used. For this purpose, the design and implementation of appropriate data structures and rendering pipelines are necessary. The contribution of this work comprises the following five real-time rendering methods: • The rendering technique for 3D generalization lenses enables the combination of different 3D city geometries (e.g., generalized versions of a 3D city model) in a single image in real time. The method is based on a generalized and fragment-precise clipping approach, which uses a compressible, raster-based data structure. It enables the combination of detailed views in the focus area with the representation of abstracted variants in the context area. • The rendering technique for the interactive visualization of dynamic raster data in 3D geovirtual environments facilitates the rendering of 2D surface lenses. 
It enables a flexible combination of different raster layers (e.g., aerial images or videos) using projective texturing for decoupling image and geometry data. Thus, various overlapping and nested 2D surface lenses of different contents can be visualized interactively. • The interactive rendering technique for image-based deformation of 3D geovirtual environments enables the real-time image synthesis of non-planar projections, such as cylindrical and spherical projections, as well as multi-focal 3D fisheye-lenses and the combination of planar and non-planar projections. • The rendering technique for view-dependent multi-perspective views of 3D geovirtual environments, based on the application of global deformations to the 3D scene geometry, can be used for synthesizing interactive panorama maps to combine detailed views close to the camera (focus) with abstract views in the background (context). This approach reduces occlusions, increases the usage of the available screen space, and reduces the overload of image contents. • The object-based and image-based rendering techniques for highlighting objects and focus areas inside and outside the view frustum facilitate preattentive perception. The concepts and implementations of interactive image synthesis for focus+context visualization and their selected applications enable a more effective communication of spatial information, and provide building blocks for the design and development of new applications and systems in the field of 3D geovirtual environments.
Climatic variations and human activity now and increasingly in the future cause land cover changes and introduce perturbations in the terrestrial carbon reservoirs in vegetation, soil and detritus. Optical remote sensing and in particular Imaging Spectroscopy has shown the potential to quantify land surface parameters over large areas, which is accomplished by taking advantage of the characteristic interactions of incident radiation and the physico-chemical properties of a material. The objective of this thesis is to quantify key soil parameters, including soil organic carbon, using field and Imaging Spectroscopy. Organic carbon, iron oxides and clay content are selected to be analyzed to provide indicators for ecosystem function in relation to land degradation, and additionally to facilitate a quantification of carbon inventories in semiarid soils. The semiarid Albany Thicket Biome in the Eastern Cape Province of South Africa is chosen as the study site. It provides a regional example for a semiarid ecosystem that currently undergoes land changes due to unadapted management practices and furthermore has to face climate-change-induced land changes in the future. The thesis is divided into three methodological steps. Based on reflectance spectra measured in the field and chemically determined constituents of the upper topsoil, physically based models are developed to quantify soil organic carbon, iron oxides and clay content. Taking into account the benefits and limitations of existing methods, the approach is based on the direct application of known diagnostic spectral features and their combination with multivariate statistical approaches. It benefits from the collinearity of several diagnostic features and a number of their properties to reduce signal disturbances by influences of other spectral features. In a following step, the acquired hyperspectral image data are prepared for an analysis of soil constituents.
The data show a large spatial heterogeneity that is caused by the patchiness of the natural vegetation in the study area, which is inherent to most semiarid landscapes. Spectral mixture analysis is performed and used to deconvolve non-homogeneous pixels into their constituent components. For soil-dominated pixels, the subpixel information is used to remove the spectral influence of vegetation and to approximate the pure spectral signature coming from the soil. This step is integral when working in natural non-agricultural areas where pure bare soil pixels are rare. It is identified as the largest benefit within the multi-stage methodology, providing the basis for a successful and unbiased prediction of soil constituents from hyperspectral imagery. With the proposed approach it is possible (1) to significantly increase the spatial extent of derived information on soil constituents to areas with about 40 % vegetation coverage and (2) to reduce the influence of materials such as vegetation on the quantification of soil constituents to a minimum. Subsequently, soil parameter quantities are predicted by the application of the feature-based soil prediction models to the maps of locally approximated soil signatures. Thematic maps showing the spatial distribution of the three considered soil parameters in October 2009 are produced for the Albany Thicket Biome of South Africa. The maps are evaluated for their potential to detect erosion-affected areas as effects of land changes and to identify degradation hot spots in order to support local restoration efforts. A regional validation, carried out using available ground truth sites, suggests remaining factors disturbing the correlation of spectral characteristics and chemical soil constituents. The approach is developed for semiarid areas in general and not adapted to specific conditions in the study area.
All processing steps of the developed methodology are implemented in software modules, where crucial steps of the workflow are fully automated. The transferability of the methodology is shown for simulated data of the future EnMAP hyperspectral satellite. Soil parameters are successfully predicted from these data despite intense spectral mixing within the lower-spatial-resolution EnMAP pixels. This study shows an innovative approach to using Imaging Spectroscopy for mapping key soil constituents, including soil organic carbon, over large areas in a non-agricultural ecosystem while accounting for partial vegetation cover. It can contribute to a better assessment of soil constituents that describe ecosystem processes relevant to detecting and monitoring land changes. The maps further provide an assessment of the current carbon inventory in soils, valuable for carbon balances and carbon mitigation products.
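The core of the spectral mixture analysis described above, estimating per-pixel endmember fractions and then removing the vegetation contribution to approximate a pure soil signature, can be sketched with a least-squares linear unmixing. This is a hedged illustration with an assumed soft sum-to-one constraint and synthetic endmembers, not the thesis's actual pipeline:

```python
import numpy as np

def unmix(pixel, endmembers, weight=100.0):
    """Linear spectral unmixing: solve pixel ≈ endmembers @ fractions.
    A heavily weighted extra row softly enforces sum-to-one fractions."""
    E = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    p = np.append(pixel, weight)
    f, *_ = np.linalg.lstsq(E, p, rcond=None)
    return np.clip(f, 0.0, 1.0)  # crude non-negativity / physical bound

def approx_soil_signature(pixel, fractions, endmembers, soil_idx):
    """Subtract all non-soil endmember contributions, then rescale by the
    soil fraction to approximate the pure soil reflectance."""
    residual = pixel - endmembers @ fractions \
        + fractions[soil_idx] * endmembers[:, soil_idx]
    return residual / max(fractions[soil_idx], 1e-6)
```

In practice a fully constrained solver (non-negative least squares) would replace the clipping, and endmembers would come from image or library spectra; the sketch only shows why soil-dominated pixels with up to roughly 40 % vegetation can still yield a usable soil signature.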
Inhibition, attentional control, and causes of forgetting in working memory: a formal approach
(2013)
In many cognitive activities, the temporary maintenance and manipulation of mental objects is a necessary step in order to reach a cognitive goal. Working memory has been regarded as the process responsible for those cognitive activities. This thesis addresses the question of what limits working-memory capacity (WMC), a question that remains controversial (Barrouillet & Camos, 2009; Lewandowsky, Oberauer, & Brown, 2009). This study attempted to answer it by proposing that the dynamics between the causes of forgetting and the processes supporting the maintenance and manipulation of the memoranda are the key to understanding the limits of WMC.
Chapter 1 introduced key constructs and the strategy to examine the dynamics between inhibition, attentional control, and the causes of forgetting in working memory.
The study in Chapter 2 tested the performance of children, young adults, and old adults in a working-memory updating task with two conditions: one condition included only go steps, and the other included both go and no-go steps. The interference model (IM; Oberauer & Kliegl, 2006), a model proposing interference-related mechanisms as the main cause of forgetting, was used to simultaneously fit the data of these age groups. In addition to the interference-related parameters reflecting interference by feature overwriting and interference by confusion, and in addition to the parameters reflecting the speed of processing, the study included a new parameter that captured the time for switching between go steps and no-go steps. The study indicated that children and young adults were less susceptible than old adults to interference by feature overwriting; children were the most susceptible to interference by confusion, followed by old adults and then by young adults; young adults showed the highest rate of processing, followed by children and then by old adults; and young adults were the fastest group switching from go steps to no-go steps.
Chapter 3 examined the dynamics between causes of forgetting and the inhibition of a prepotent response in the context of three formal models of the limits of WMC: a resources model, a decay-based model, and three versions of the IM. The resources model was built on the assumption that a limited and shared source of activation for the maintenance and manipulation of the objects underlies the limits of WMC. The decay model assumes that memory traces of the working-memory objects decay over time if they are not reactivated via different mechanisms of maintenance. The IM, already described, proposes that interference-related mechanisms explain the limits of WMC. In two experiments and in a reanalysis of the data of the second experiment, one version of the IM received more statistical support from the data. This version of the IM proposes that interference by feature overwriting and interference by confusion are the main factors underlying the limits of WMC. In addition, the model suggests that experimental conditions involving the inhibition of a prepotent response reduce the speed of processing and promote the involuntary activation of irrelevant information in working memory.
Chapter 4 summarized Chapters 2 and 3, discussed their findings, and presented how this thesis has provided evidence of interference-related mechanisms as the main cause of forgetting and has attempted to clarify the role of inhibition and attentional control in working memory. With the implementation of formal models and experimental manipulations in the framework of nonlinear mixed models, the data offered explanations of the causes of forgetting and the role of inhibition in WMC at different levels: developmental effects, aging effects, effects related to experimental manipulations, and individual differences in these effects. Thus, the present approach afforded a comprehensive view of a large number of factors limiting WMC.
Leuchtkäfer & Orgelkoralle
(2013)
Glowing beetles and medusae, phosphorescent ocean waves, and corals turning to stone fascinated the naturalist Adelbert von Chamisso (1781–1838), who has hitherto been portrayed primarily as a poet. Even more intensively than to zoological and geological phenomena, he devoted himself to the scientia amabilis, the amiable science of plants. This versatile talent wrote his Reise um die Welt (1836), which to this day is considered one of the most stylistically accomplished and readable travel accounts. This study is devoted specifically to Chamisso's natural-history work in the context of the three-year Rurik expedition and to the associated texts. Drawing on a comprehensive corpus of texts and materials, it poses questions from literary and cultural studies as well as from the history of science and answers them productively. Previously unnoticed source material is brought to light for research on travel literature, prevailing theses are refuted, and sources from other crew members are examined comparatively. The study places Chamisso the naturalist in focus without losing sight of the poet, and addresses questions of how natural-history knowledge was generated, networked, and represented in the texts, illustrations, and materials of the expedition. Overall, it is as innovative for literary studies and history as it is for the interdisciplinary history of knowledge.
The non-proteinogenic amino acid GABA (γ-aminobutyric acid) is regarded as the most important inhibitory neurotransmitter in the central nervous system of vertebrates and invertebrates, and mediates its effects in part via the metabotropic GABAB receptors. In insects, these receptors have so far been studied only rudimentarily. For the American cockroach, an established model organism, a modulatory role of GABAB receptors in the formation of primary saliva has been demonstrated pharmacologically. The aim of this work was a comprehensive characterization of GABAB receptor subtypes 1 and 2 of Periplaneta americana. Using a variety of cloning strategies, and in cooperation with the group of Prof. Dr. T. Miura (Hokkaido, Japan), which provided access to a P. americana EST database established there, two receptor cDNAs were cloned. Analysis of the deduced amino acid sequences for GB-specific domains and conserved amino acid residues, together with comparison to known GB sequences of other species, indicates that the isolated sequences represent GABAB receptor subtypes 1 and 2 (PeaGB1 and PeaGB2). For the functional and pharmacological characterization of the heteromer of PeaGB1 and PeaGB2, expression constructs were generated for transfection into HEK-flp™ cells. The PeaGB1/PeaGB2 heteromer inhibits cAMP production with increasing GABA concentrations. The compounds SKF97541 and 3-APPA were identified as agonists; CGP55845 and CGP54626 act as full antagonists. The pharmacological profile determined in vitro, compared with the pharmacology of the isolated gland, confirms that the GABA effect in the salivary gland is indeed mediated by GBs. For immunohistochemical characterization, a specific polyclonal antibody against extracellular loop 2 of PeaGB1 was generated.
A further antibody directed against PeaGB2, however, proved insufficiently specific. Western blot analyses confirm the presence of both subtypes in the central nervous system of P. americana. In addition, PeaGB1 is expressed in the salivary gland and in the reproductive glands of male cockroaches. Immunohistochemical analyses reveal PeaGB1-like labeling in the GABAergic fibers of the salivary gland; PeaGB1 thus functions there as an autoreceptor. Furthermore, PeaGB1-like labeling was detected in nearly all brain neuropils. The male accessory glands, the mushroom-shaped gland and the phallic gland, are also PeaGB1-immunoreactive.
This thesis rests on two large Active Galactic Nuclei (AGN) surveys. The first survey deals with galaxies that host low-level AGNs (LLAGNs) and aims at identifying such galaxies by quantifying their variability. While numerous studies have shown that AGNs can be variable at all wavelengths, the nature of the variability is still not well understood. Studying the properties of LLAGNs may help to better understand galaxy evolution and how AGNs transition between active and inactive states. In this thesis, we develop a method to extract variability properties of AGNs. Using multi-epoch deep photometric observations, we subtract the contribution of the host galaxy at each epoch to extract variability and estimate AGN accretion rates. This pipeline will be a powerful tool in connection with future deep surveys such as Pan-STARRS. The second study in this thesis describes a survey of X-ray selected AGN hosts at redshifts z>1.5 and compares them to quiescent galaxies. This survey aims at studying environments, sizes and morphologies of star-forming high-redshift AGN hosts in the COSMOS Survey at the epoch of peak AGN activity. Between redshifts 1.5<z<3.8, the COSMOS HST/ACS imaging probes the UV regime, where separating the AGN flux from its host galaxy is very challenging. Nevertheless, we successfully derived the structural properties of 249 AGN hosts using two-dimensional surface-brightness profile fitting with the GALFIT package. This is the largest sample of AGN hosts at redshift z>1.5 to date. We analyzed the evolution of structural parameters of AGN and non-AGN host galaxies with redshift, and compared their disturbance rates to identify the more probable AGN triggering mechanism in the 43.5<log_10 L_X<45 luminosity range. We also conducted mock observations of AGN and quiescent galaxies to determine errors and corrections for the derived parameters.
We find that the size-absolute magnitude relations of AGN hosts and non-AGN galaxies are very similar, with estimated mean sizes in both samples decreasing by ~50% between redshifts z=1.5 and z=3.5. Morphological classification of both active and quiescent galaxies shows that the majority of the AGN host galaxies are disc-dominated, with disturbance rates that are significantly lower than among the non-AGN galaxies. Such a finding suggests that Major Mergers are probably not responsible for triggering AGN accretion in most of these galaxies. Other secular mechanisms should therefore be responsible.
Challenging Khmer citizenship: minorities, the state, and the international community in Cambodia
(2013)
The idea of a distinctly ‘liberal’ form of multiculturalism has emerged in the theory and practice of Western democracies and the international community has become actively engaged in its global dissemination via international norms and organizations. This thesis investigates the internationalization of minority rights, by exploring state-minority relations in Cambodia, in light of Will Kymlicka’s theory of multicultural citizenship. Based on extensive empirical research, the analysis explores the situation and aspirations of Cambodia’s ethnic Vietnamese, highland peoples, Muslim Cham, ethnic Chinese and Lao and the relationships between these groups and the state. All Cambodian regimes since independence have defined citizenship with reference to the ethnicity of the Khmer majority and have - often violently - enforced this conception through the assimilation of highland peoples and the Cham and the exclusion of ethnic Vietnamese and Chinese. Cambodia’s current constitution, too, defines citizenship ethnically. State-sponsored Khmerization systematically privileges members of the majority culture and marginalizes minority members politically, economically and socially. The thesis investigates various international initiatives aimed at promoting application of minority rights norms in Cambodia. It demonstrates that these initiatives have largely failed to accomplish a greater degree of compliance with international norms in practice. This failure can be explained by a number of factors, among them Cambodia’s neo-patrimonial political system, the geo-political fears of a ‘minoritized’ Khmer majority, the absence of effective regional security institutions, the lack of minority access to political decision-making, the significant differences between international and Cambodian conceptions of modern statehood and citizenship and the emergence of China as Cambodia’s most important bilateral donor and investor. 
Based on this analysis, the dissertation develops recommendations for a sequenced approach to minority rights promotion, with pragmatic, less ambitious shorter-term measures that work progressively towards achievement of international norms in the longer-term.
When we read a text, we obtain information at different levels of representation from abstract symbols. A reader's ultimate aim is the extraction of the meaning of the words and the text. The research on eye movements in reading covers a broad range of psychological systems, ranging from low-level perceptual and motor processes to high-level cognition. Reading by skilled readers proceeds highly automatically, but is at the same time a complex phenomenon of interacting subprocesses. The study of eye movements during reading offers the possibility to investigate cognition via behavioral measures during the exercise of an everyday task. The process of reading is not limited to the directly fixated (or foveal) word but also extends to surrounding (or parafoveal) words, particularly the word to the right of the gaze position. This process may be unconscious, but parafoveal information is necessary for efficient reading. There is an ongoing debate on whether processing of the upcoming word encompasses word meaning (or semantics) or only superficial features. To increase the knowledge about how the meaning of one word helps processing another word, seven experiments were conducted. In these studies, words were exchanged during reading. The degree of relatedness between the word to the right of the currently fixated one and the word subsequently fixated was experimentally manipulated. Furthermore, the time course of the parafoveal extraction of meaning was investigated with two different approaches, an experimental one and a statistical one. As a major finding, fixation times were consistently lower if a semantically related word was presented compared to the presence of an unrelated word. Introducing an experimental technique that allows controlling the duration for which words are available, the time course of processing and integrating meaning was evaluated. Results indicated both facilitation and inhibition due to relatedness between the meanings of words.
In a more natural reading situation, the effectiveness of the processing of parafoveal words was sometimes time-dependent and substantially increased with shorter distances between the gaze position and the word. Findings are discussed with respect to theories of eye-movement control. In summary, the results are more compatible with models of distributed word processing. The discussions moreover extend to language differences and technical issues of reading research.
As design experts, automobile designers have the task of translating the identity, and thus the values, of a brand into forms that appeal to a wide range of customers (Giannini & Monti, 2003; Karjalainen, 2002). For this translation process it is useful to know customers' aesthetic needs, since the quality of a design solution also depends on how well the designer has grasped customer needs, and thus the design problem (Ulrich, 2006). One basis for this is a successful designer-user interaction and the development of shared contextual knowledge (Lee, Popovich, Blackler & Lee, 2009). However, there is often no direct exchange between designers and customers (Zeisel, 2006). Moreover, findings from research on art and product aesthetics show that the acquisition of design knowledge, and thus the development of aesthetic expertise, is accompanied by changes in the cognitive processing of aesthetic objects that manifest themselves in perception, evaluation, and behavior. It is therefore to be expected that the preference judgments of designers and customers in the aesthetic evaluation of design will not always converge. The aim of the present work was therefore the systematic investigation of these expertise-related differences in perception and evaluation between design-trained and untrained individuals viewing automobile design. The perception, processing, and evaluation of automobile design by design-untrained individuals was thereby to be made more transparent and compared with the processing of design-trained individuals, in order to contribute to a shared knowledge base and thus to a successful designer-user interaction.
The theoretical framing of the work was based on the model of aesthetic experience and aesthetic judgment by Leder, Belke, Oeberst and Augustin (2004), which offers concrete assumptions about differences between experts and laypeople in the processing of aesthetic objects that, however, had not yet been comprehensively tested. The first focus of this work was the investigation of differences between designers and design-untrained viewers in describing and evaluating vehicle designs available on the market. It was also to be tested whether a lexical link can be established between the descriptive attributes used by vehicle viewers and the brand values postulated by automobile brands. This first research question was pursued in two studies: Study I served to elicit descriptive attributes by means of triadic comparison following Kelly (1955). It was tested whether design-trained participants verbalize more productively, generate proportionally more symbol-related than form-related attributes, and use the same attributes within their group more often than design-untrained participants. To this end, 20 design-trained and 20 design-untrained participants described the differences between four presented vehicles with adjectives of their own choosing. Contrary to the assumptions, the groups used very similar attributes and thus also did not differ in their use of symbol-related and form-related attributes. Using the prototype approach (Amelang & Zielinski, 2002), the generated attributes were assigned to the identified and subsequently categorized brand values of 10 automobile manufacturers, yielding six scales for measuring the aesthetic impression of vehicles.
In Study II, a questionnaire comprising these six scales was tested for scale consistency in an online survey with a sample of 83 designers and design students and 98 participants without design training. In addition, initial assumptions derived from the model of Leder et al. (2004) were tested by comparing the two participant groups with respect to their evaluation of the four presented vehicle models on the scales with good internal consistency (attractiveness, dynamism, progressiveness, quality), as well as with respect to an overall aesthetic judgment, the time needed for evaluation, and automobile affinity. Design students, and trained designers even more so, gave more radical evaluations than design laypeople, needed more time for evaluation, and had a higher automobile affinity than the untrained survey participants. The second focus of the work was a conceptual integration of the assumptions of the model of Leder et al. (2004) with the postulates on the effect of object properties on aesthetic judgments (Berlyne, 1971; Martindale, 1988; Silvia, 2005b). Specifically, it was to be tested what influence market-relevant object properties, such as the degree of innovativeness, have on the expertise-moderated evaluation of design. For this purpose, in Studies III and IV, line models of vehicles systematically graded in innovativeness and balance were presented. In Study III, the models were evaluated for attractiveness, innovativeness, and balance in an online survey by 18 design students and 20 students of automotive engineering. Consistent with the assumptions, it was shown that very novel design is evaluated as less attractive by the design-untrained participants than by viewers from a design degree program.
In Study IV, in addition to the aesthetic ratings, the participants' gaze behaviour and affective state were recorded in a repeated-measures design with an intermediate phase of elaborated design evaluation, in which the questionnaire tested in Study II was used. Eleven designers, eleven engineers, and eleven humanities scholars took part in the laboratory study. Again, innovative design was rated as less attractive by the groups without design training. However, this difference decreased after repeated evaluation of the models. No manifestation of expertise-related gaze behaviour was observed, nor were the more positive mood or higher satisfaction in the expert group that an assumed better mastery of the task would entail. Together with the findings from Studies II and III, it became clear that design education and, even more pronouncedly, design expertise lead not only to higher attractiveness ratings of innovative design but also to a more differentiated assessment of innovativeness. This was interpreted as an extension of the mental schema for vehicles through engagement with a wide variety of model variants already during design studies. Indications of a style-related, more elaborated processing of vehicle design by design-trained viewers were observed, as well as an autonomy of aesthetic judgements accompanying expertise, as an expression of a high stage of aesthetic development (Parsons, 1987). These stable, expertise-related rating differences, observed across different samples, provide a well-founded basis for the demanded sensitization to customers' aesthetic needs in the design process.
The questionnaire developed in this work can be used for an elaborated measurement of vehicle design preferences, for comparing the aesthetic impression with the intended brand values, and for discussing user impressions. The results of the present work thus contribute to extending and refining the theoretical understanding of aesthetic evaluations and can at the same time be transferred to the practice of design education and the design process.
Functional metabolism of storage carbohydrates is vital to plants and animals. The water-soluble glycogen in animal cells and the amylopectin that is the major component of the water-insoluble starch granules residing in plant plastids are chemically similar, as both consist of α-1,6-branched α-1,4-glucan chains. Synthesis and degradation of transitory starch and of glycogen are accomplished by a set of enzymatic activities that to some extent are also similar in plants and animals. Chain elongation, branching, and debranching are achieved by synthases, branching enzymes, and debranching enzymes, respectively. Similarly, both types of polyglucans contain low amounts of phosphate esters whose abundance varies depending on species and organ. Starch is selectively phosphorylated by at least two dikinases (GWD and PWD) at the glucosyl carbons C6 and C3 and dephosphorylated by the phosphatase SEX4 and SEX4-like enzymes. In Arabidopsis, insufficient starch phosphorylation or dephosphorylation results in largely impaired starch turnover, starch accumulation, and often in retarded growth. In humans, the progressive neurodegenerative epilepsy Lafora disease is the result of a defective enzyme (laforin) that is functionally equivalent to the starch phosphatase SEX4 and capable of glycogen dephosphorylation. Patients lacking laforin progressively accumulate unphysiologically structured, insoluble glycogen-derived particles (Lafora bodies) in many tissues, including the brain. Previous results concerning the carbon position of glycogen phosphate are contradictory. Currently it is believed that glycogen is esterified exclusively at the carbon positions C2 and C3 and that the monophosphate esters, incorporated via a side reaction of glycogen synthase (GS), lack any specific function and are rather an enzymatic error that needs to be corrected.
In this study, a versatile and highly sensitive enzymatic cycling assay was established that enables the quantification of very small glucose-6-phosphate (G6P) amounts in the presence of high concentrations of non-target compounds, as present in hydrolysates of polysaccharides such as starch, glycogen, or cytosolic heteroglycans of plants. Following validation of the G6P determination by analyzing previously characterized starches, G6P was quantified in hydrolysates of various glycogen samples and in plant heteroglycans. Interestingly, glucosyl C6 phosphate is present in all glycogen preparations examined, with the abundance varying between glycogens from different sources. Additionally, it was shown that carbon C6 is severely hyperphosphorylated in glycogen of a Lafora disease mouse model and that laforin is capable of removing C6 phosphate from glycogen. After enrichment of phosphoglucans from amylolytically degraded glycogen, several two-dimensional NMR techniques were applied that independently proved the existence of 6-phosphoglucosyl residues in glycogen and confirmed the recently described phosphorylation sites C2 and C3. C6 phosphate is neither Lafora disease-, species-, nor organ-specific, as it was demonstrated in liver glycogen from laforin-deficient mice and in skeletal muscle glycogen from wild-type rabbit. The distribution of 6-phosphoglucosyl residues within glycogen molecules was analyzed and found to be uneven. Gradual degradation experiments revealed that C6 phosphate is more abundant in the central parts of glycogen molecules and in molecules possessing longer glucan chains. Glycogen of Lafora disease mice consistently contains a higher proportion of longer chains, while most short chains are reduced compared to the wild type.
Together with recently published results (Nitschke et al., 2013), the findings of this work refute the hypothesis of GS-mediated phosphate incorporation, as the respective reaction mechanism excludes phosphorylation of this glucosyl carbon and as an uneven distribution of C6 phosphate is difficult to explain by a stochastic event. Instead, the results point to a specific function of 6-phosphoglucosyl residues in the metabolism of polysaccharides, as they are present in starch, glycogen, and, as described in this study, in heteroglycans of Arabidopsis. In the latter the function of the phosphate remains unclear, but this study provides evidence that in starch and glycogen it is related to branching. Moreover, a role of C6 phosphate in the early stages of glycogen synthesis is suggested. By rejecting the current view of glycogen phosphate as a stochastic biochemical error, the results permit a wider view on putative roles of glycogen phosphate and on alternative biochemical routes of glycogen phosphorylation, which for many reasons are likely to be mediated by distinct phosphorylating enzymes, as realized in the starch metabolism of plants. A better understanding of the enzymology underlying glycogen phosphorylation opens new possibilities for the treatment of Lafora disease.
Crowded field spectroscopy and the search for intermediate-mass black holes in globular clusters
(2013)
Globular clusters are dense and massive star clusters that are an integral part of any major galaxy. Careful studies of their stars (a single cluster may contain several million of them) have revealed that the ages of many globular clusters are comparable to the age of the Universe. These remarkable ages make them valuable probes for the exploration of structure formation in the early universe or the assembly of our own galaxy, the Milky Way. A topic of current research is the question whether globular clusters harbour massive black holes in their centres. Such black holes would bridge the gap from stellar-mass black holes, which represent the final stage in the evolution of massive stars, to the supermassive ones that reside in the centres of galaxies. For this reason, they are referred to as intermediate-mass black holes. The most reliable method to detect and to weigh a black hole is to study the motion of stars inside its sphere of influence. The measurement of Doppler shifts via spectroscopy allows one to carry out such dynamical studies. However, spectroscopic observations in dense stellar fields such as Galactic globular clusters are challenging. As a consequence of diffraction in the atmosphere and the finite resolution of a telescope, observed stars have a finite width characterized by the point spread function (PSF); hence they appear blended in crowded stellar fields. Classical spectroscopy does not preserve any spatial information, so it is impossible to separate the spectra of blended stars and to measure their velocities. Yet methods have been developed to perform imaging spectroscopy. One of these methods is integral field spectroscopy. In the course of this work, the first systematic study on the potential of integral field spectroscopy for the analysis of dense stellar fields is carried out.
To this end, a method is developed to reconstruct the PSF from the observed data and to use this information to extract the stellar spectra. Based on dedicated simulations, predictions are made on the number of stellar spectra that can be extracted from a given data set and on the quality of those spectra. Furthermore, the influence of uncertainties in the recovered PSF on the extracted spectra is quantified. The results clearly show that, compared to traditional approaches, this method makes a significantly larger number of stars accessible to spectroscopic analysis. This systematic study goes hand in hand with the development of a software package that automates the individual steps of the data analysis. It is applied to data of three Galactic globular clusters, M3, M13, and M92. The data were observed with the PMAS integral field spectrograph at the Calar Alto observatory with the aim of constraining the presence of intermediate-mass black holes in the centres of the clusters. The application of the new analysis method yields samples of about 80 stars per cluster. These are by far the largest spectroscopic samples obtained so far in the centre of any of the three clusters. In the further analysis, Jeans models are calculated for each cluster that predict the velocity dispersion based on an assumed mass distribution inside the cluster. The comparison with the observed stellar velocities shows that in none of the three clusters is a massive black hole required to explain the observed kinematics. Instead, the observations rule out any black hole in M13 with a mass higher than 13000 solar masses at the 99.7% level. For the other two clusters, this limit lies at significantly lower masses, namely 2500 solar masses in M3 and 2000 solar masses in M92. In M92, it is possible to lower this limit even further by a combined analysis of the extracted stars and the unresolved stellar component.
This component consists of the numerous stars in the cluster that appear unresolved in the integral field data. The final limit of 1300 solar masses is the lowest obtained so far for a massive globular cluster.
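The PSF-based extraction described above amounts, at each wavelength, to a simultaneous linear fit of all stellar PSFs to the data. A minimal sketch of that idea follows, assuming an idealized Gaussian PSF and known stellar positions (the thesis reconstructs the PSF from the data itself, which is not reproduced here):

```python
import numpy as np

def gaussian_psf(shape, x0, y0, fwhm):
    """Evaluate a normalized 2-D Gaussian PSF (an assumed model) on a pixel grid."""
    sigma = fwhm / 2.3548
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    psf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def extract_spectra(cube, positions, fwhm):
    """Recover one spectrum per star by fitting all stellar PSFs
    simultaneously to every wavelength slice via linear least squares."""
    n_wave, ny, nx = cube.shape
    # Design matrix: one column per star, one row per spaxel.
    A = np.column_stack([gaussian_psf((ny, nx), x, y, fwhm).ravel()
                         for x, y in positions])
    spectra = np.empty((len(positions), n_wave))
    for k in range(n_wave):
        coeffs, *_ = np.linalg.lstsq(A, cube[k].ravel(), rcond=None)
        spectra[:, k] = coeffs
    return spectra

# Demo: two heavily blended stars with known input spectra.
true_spectra = np.array([[10., 9., 8., 7., 6.],
                         [ 3., 4., 5., 6., 7.]])  # (star, wavelength)
pos = [(4.0, 4.0), (6.5, 5.5)]
A_demo = np.column_stack([gaussian_psf((12, 12), x, y, 2.5).ravel() for x, y in pos])
cube = np.array([(A_demo @ true_spectra[:, k]).reshape(12, 12) for k in range(5)])
recovered = extract_spectra(cube, pos, 2.5)  # matches true_spectra in the noiseless case
```

In the noiseless demo the blended spectra are recovered exactly; with real data, the quality of the recovery depends on the accuracy of the reconstructed PSF, which is precisely what the simulations described above quantify.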
Multi-messenger constraints and pressure from dark matter annihilation into electron-positron pairs
(2013)
Despite striking evidence for the existence of dark matter from astrophysical observations, dark matter has escaped any direct or indirect detection until today. A proof of its existence and the revelation of its nature therefore remain among the most intriguing challenges of present-day cosmology and particle physics. The present work investigates the nature of dark matter through indirect signatures of dark matter annihilation into electron-positron pairs in two different ways: pressure from dark matter annihilation, and multi-messenger constraints on the dark matter annihilation cross-section. We focus on dark matter annihilation into electron-positron pairs and adopt a model-independent approach in which all electrons and positrons are injected with the same initial energy E_0 ~ m_dm*c^2. The propagation of these particles is determined by solving the diffusion-loss equation, considering inverse Compton scattering, synchrotron radiation, Coulomb collisions, bremsstrahlung, and ionization. The first part of this work, focusing on pressure from dark matter annihilation, demonstrates that dark matter annihilation into electron-positron pairs may affect the observed rotation curve by a significant amount. The injection rate in this calculation is constrained by INTEGRAL, Fermi, and H.E.S.S. data. The pressure of the relativistic electron-positron gas is computed from the energy spectrum predicted by the diffusion-loss equation. For values of the gas density and magnetic field that are representative of the Milky Way, it is estimated that the pressure gradients are strong enough to balance gravity in the central parts if E_0 < 1 GeV. The exact value depends somewhat on the astrophysical parameters, and it changes dramatically with the slope of the dark matter density profile. For very steep slopes, such as those expected from adiabatic contraction, the rotation curves of spiral galaxies would be affected on kiloparsec scales for most values of E_0.
By comparing the predicted rotation curves with observations of dwarf and low surface brightness galaxies, we show that the pressure from dark matter annihilation may improve the agreement between theory and observations in some cases, but it also imposes severe constraints on the model parameters (most notably, the inner slope of the halo density profile, as well as the mass and the annihilation cross-section of dark matter particles into electron-positron pairs). In the second part, upper limits on the dark matter annihilation cross-section into electron-positron pairs are obtained by combining observed data at different wavelengths (from the Haslam, WMAP, and Fermi all-sky intensity maps) with recent measurements of the electron and positron spectra in the solar neighbourhood by PAMELA, Fermi, and H.E.S.S. We consider synchrotron emission in the radio and microwave bands, as well as inverse Compton scattering and final-state radiation at gamma-ray energies. For most values of the model parameters, the tightest constraints are imposed by the local positron spectrum and by synchrotron emission from the central regions of the Galaxy. According to our results, the annihilation cross-section should not be higher than the canonical value for a thermal relic if the mass of the dark matter candidate is smaller than a few GeV. In addition, we derive a stringent upper limit on the inner logarithmic slope α of the density profile of the Milky Way dark matter halo (α < 1 if m_dm < 5 GeV, α < 1.3 if m_dm < 100 GeV, and α < 1.5 if m_dm < 2 TeV) assuming a dark matter annihilation cross-section into electron-positron pairs of (σv) = 3*10^−26 cm^3 s^−1, as predicted for thermal relics from the big bang.
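The propagation step referred to above is, in its standard textbook form, the transport equation for the electron/positron number density n(E, x, t); the notation below is the conventional one and not necessarily that of the thesis:

```latex
\frac{\partial n}{\partial t}
  = \nabla \cdot \left[ K(E,\mathbf{x}) \, \nabla n \right]
  + \frac{\partial}{\partial E} \left[ b(E,\mathbf{x}) \, n \right]
  + Q(E,\mathbf{x})
```

Here K is the spatial diffusion coefficient, b(E, x) = −dE/dt is the total energy-loss rate (the sum of the inverse Compton, synchrotron, Coulomb, bremsstrahlung, and ionization losses listed above), and Q is the source term, which for the monoenergetic injection adopted in this work is proportional to δ(E − E_0) times the local annihilation rate.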
Galaxy clusters are the largest known gravitationally bound objects; their study is important both for an intrinsic understanding of these systems and for the investigation of the large-scale structure of the universe. The multi-component nature of galaxy clusters offers multiple observable signals across the electromagnetic spectrum. At X-ray wavelengths, galaxy clusters are identified simply as X-ray luminous, spatially extended, extragalactic sources. X-ray observations offer the most powerful technique for constructing cluster catalogues. The main advantages of X-ray cluster surveys are their excellent purity and completeness, and the fact that the X-ray observables are tightly correlated with mass, which is indeed the most fundamental parameter of clusters. In my thesis I have conducted the 2XMMi/SDSS galaxy cluster survey, a serendipitous search for galaxy clusters based on the X-ray extended sources in the XMM-Newton Serendipitous Source Catalogue (2XMMi-DR3). The main aims of the survey are to identify new X-ray galaxy clusters, investigate their X-ray scaling relations, identify distant cluster candidates, and study the correlation of the X-ray and optical properties. The survey is restricted to those extended sources that lie in the footprint of the Sloan Digital Sky Survey (SDSS) in order to identify the optical counterparts and to measure the redshifts that are mandatory for deriving the physical properties. The overlap area between the XMM-Newton fields and the SDSS-DR7 imaging, the latest SDSS data release at the start of the survey, is 210 deg^2. The survey comprises 1180 X-ray cluster candidates with at least 80 background-subtracted photon counts that passed the quality control process.
To measure the optical redshifts of the X-ray cluster candidates, I used three procedures: (i) cross-matching the candidates with the most recent and largest optically selected cluster catalogues in the literature, which yielded photometric redshifts for about a quarter of the X-ray cluster candidates; (ii) a finding algorithm I developed to search for overdensities of galaxies in photometric-redshift space at the positions of the X-ray cluster candidates and to measure their redshifts from the SDSS-DR8 data, which provided photometric redshifts for 530 groups/clusters; (iii) an algorithm I developed to identify cluster candidates associated with spectroscopically targeted Luminous Red Galaxies (LRGs) in the SDSS-DR9 and to measure the cluster spectroscopic redshift, which provided 324 groups and clusters with spectroscopic confirmation based on the spectroscopic redshift of at least one LRG. In total, the optically confirmed cluster sample comprises 574 groups and clusters with redshifts 0.03 ≤ z ≤ 0.77, the largest X-ray selected cluster catalogue to date based on observations from the current X-ray observatories (XMM-Newton, Chandra, Suzaku, and Swift/XRT). Of the cluster sample, about 75 percent are newly X-ray discovered groups/clusters and 40 percent are systems new to the literature. To determine the X-ray properties of the optically confirmed cluster sample, I reduced and analysed their X-ray data in an automated way following the standard pipelines for processing XMM-Newton data. In this analysis, I extracted the cluster spectra from the EPIC (PN, MOS1, MOS2) images within an optimal aperture chosen to maximise the signal-to-noise ratio. The spectral fitting procedure provided X-ray temperatures kT (0.5 - 7.5 keV) for 345 systems with good-quality X-ray data.
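The galaxy-overdensity search in photometric-redshift space (procedure ii) can be illustrated with a toy version; the window width, matching radius, and significance definition below are placeholders for illustration, not the calibrated values of the survey:

```python
import numpy as np

def find_overdensity(z_phot, sep_arcmin, z_grid, dz=0.04, r_max=2.0):
    """Slide a redshift window over the galaxies near an X-ray position and
    return the slice with the strongest excess over the mean slice count.
    dz (half-width of the photo-z slice) and r_max (matching radius in
    arcmin) are illustrative placeholders."""
    counts = np.array([np.sum((np.abs(z_phot - z) < dz) & (sep_arcmin < r_max))
                       for z in z_grid])
    background = counts.mean()
    best = int(counts.argmax())
    significance = (counts[best] - background) / max(np.sqrt(background), 1.0)
    return z_grid[best], significance

# Demo: a synthetic cluster at z = 0.30 on top of a uniform galaxy field.
rng = np.random.default_rng(1)
z_field = rng.uniform(0.05, 0.8, 200)        # background photo-z values
sep_field = rng.uniform(0.0, 10.0, 200)      # background separations (arcmin)
z_cl = rng.normal(0.30, 0.01, 30)            # cluster members
sep_cl = rng.uniform(0.0, 1.0, 30)
z_best, sig = find_overdensity(np.concatenate([z_field, z_cl]),
                               np.concatenate([sep_field, sep_cl]),
                               z_grid=np.arange(0.05, 0.80, 0.05))
```

In this synthetic example the algorithm locks onto the injected cluster redshift with a clear excess over the field; the real implementation additionally has to handle photo-z errors that grow with redshift and a radius scaled to the cluster size.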
For the full optically confirmed cluster sample, I measured the physical properties L500 (0.5 x 10^42 - 1.2 x 10^45 erg s^-1) and M500 (1.1 x 10^13 - 4.9 x 10^14 M⊙) from an iterative procedure using published scaling relations. Thanks to the XMM-Newton sensitivity and the available XMM-Newton deep fields, the X-ray detected groups and clusters are in the low and intermediate luminosity regimes, apart from a few luminous systems. The optically confirmed cluster sample with measurements of redshift and X-ray properties can be used for various astrophysical applications. As a first application, I investigated the LX - T relation, for the first time based on a large cluster sample of 345 systems with X-ray spectroscopic parameters drawn from a single survey. The current sample includes groups and clusters with wide ranges of redshifts, temperatures, and luminosities. The slope of the relation is consistent with published slopes for nearby clusters with higher temperatures and luminosities. The derived relation is still much steeper than that predicted by self-similar evolution. I also investigated the evolution of the slope and the scatter of the LX - T relation with cluster redshift. After excluding the low-luminosity groups, I found no significant changes of the slope and the intrinsic scatter of the relation with redshift when dividing the sample into three redshift bins. When including the low-luminosity groups in the low-redshift subsample, I found that its LX - T relation deviates from the relation of the intermediate- and high-redshift subsamples. As a second application of the optically confirmed cluster sample from our ongoing survey, I investigated the correlation between the cluster X-ray and optical parameters, which were determined in a homogeneous way. Firstly, I investigated the correlations between the BCG properties (absolute magnitude and optical luminosity) and the cluster global properties (redshift and mass).
Secondly, I computed the richness and the optical luminosity within R500 for a nearby subsample (z ≤ 0.42, with complete membership detection from the SDSS data) with X-ray temperatures measured in our survey. The relation between the estimated optical luminosity and richness is also presented. Finally, the correlation between the cluster optical properties (richness and luminosity) and the cluster global properties (X-ray luminosity, temperature, mass) is investigated.
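Scaling relations such as the LX - T relation discussed above are conventionally fit as power laws in log-log space. A minimal sketch with synthetic data follows; real analyses, including survey work of this kind, typically use regression methods that account for measurement errors and intrinsic scatter (e.g. BCES), and the numbers below are invented for illustration:

```python
import numpy as np

def fit_power_law(T, L):
    """Fit L = N * T**alpha by ordinary least squares in log10-log10 space
    and return the slope, normalization, and raw scatter in dex."""
    logT, logL = np.log10(T), np.log10(L)
    alpha, logN = np.polyfit(logT, logL, 1)
    scatter_dex = np.std(logL - (alpha * logT + logN))
    return alpha, 10 ** logN, scatter_dex

# Synthetic sample loosely mimicking the quoted temperature range
# (0.5-7.5 keV); the true slope of 3 and the 0.1 dex scatter are
# arbitrary choices, not results of the survey.
rng = np.random.default_rng(0)
T = rng.uniform(0.5, 7.5, 345)                          # keV
L = 1e43 * T ** 3.0 * 10 ** rng.normal(0.0, 0.1, 345)   # erg/s
alpha, N, scatter = fit_power_law(T, L)
```

A self-similar relation would have a flatter slope (LX ∝ T^2); recovering a steeper slope from data, as this survey does, is what signals the departure from self-similarity mentioned above.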
Numerical simulations of galaxy formation and observational Galactic astronomy are two fields of research that study the same objects from different perspectives. Simulations try to understand galaxies like our Milky Way from an evolutionary point of view, while observers try to disentangle the current structure and the building blocks of our Galaxy. Thanks to great advances in computational power as well as in massive stellar surveys, we are now able to compare resolved stellar populations in simulations and in observations. In this thesis we use a number of approaches to relate the results of the two fields to each other. The major observational data set we refer to in this work comes from the Radial Velocity Experiment (RAVE), a massive spectroscopic stellar survey that has observed almost half a million stars in the Galaxy. In a first study we use three different models of the Galaxy to generate synthetic stellar surveys that can be directly compared to the RAVE data. To do this we evaluate the RAVE selection function in great detail. Among the Galaxy models is the widely used Besancon model, which performs well when individual parameter distributions are considered but fails when we study chemodynamic correlations. The other two models are based on distributions of mass particles instead of analytic distribution functions. This is the first time that such models have been converted to the space of observables and compared to a stellar survey. We show that these models can be competitive with, and in some aspects superior to, analytic models because of their self-consistent dynamical history. In the case of a full cosmological simulation of disk galaxy formation, we recover features in the synthetic survey that relate to the known issues of the model and hence prove that our technique is sensitive to the global structure of the model. We argue that the next generation of cosmological galaxy formation simulations will deliver valuable models for our Galaxy.
Testing these models with our approach will provide a direct connection between stellar Galactic astronomy and physical cosmology. In the second part of the thesis we use a sample of high-velocity halo stars from the RAVE data to estimate the Galactic escape speed and the virial mass of the Milky Way. In this study, too, cosmological simulations of galaxy formation play a crucial role: we use them to calibrate and extensively test our analysis technique. We find the local Galactic escape speed to be 533 (+54/-41) km/s (90% confidence). Combining this result with a simple mass model of the Galaxy, we then construct an estimate of the virial mass of the Galaxy. For the mass profile of the dark matter halo we use two extreme models, a pure Navarro, Frenk & White (NFW) profile and an adiabatically contracted NFW profile. When we use statistics on the concentration parameters of these profiles taken from large dissipationless cosmological simulations, we obtain an estimate of the virial mass that is almost independent of the choice of the halo profile. For the mass M_340 enclosed within R_340 = 180 kpc we find 1.3 (+0.4/-0.3) x 10^12 M_sun. This value is in very good agreement with a number of other mass estimates in the literature that are based on independent data sets and analysis techniques. In the last part of this thesis we investigate a new possible channel for generating the population of hypervelocity stars (HVSs) that is observed in the stellar halo. Commonly, it is assumed that the velocities of these stars originate from an interaction with the super-massive black hole in the Galactic center. It was recently suggested that stars stripped off a disrupted satellite galaxy could reach similar velocities and leave the Galaxy. Here we study in detail the kinematics of tidal debris stars to investigate the probability that the observed sample of HVSs could partly originate from such a galaxy collision.
We use a suite of N-body simulations following the encounter of a satellite galaxy with its Milky Way-type host galaxy. We quantify the typical pattern formed in angular and phase space by the debris stars and develop a simple model that predicts the kinematics of stripped-off stars. We show that the distribution of orbital energies in the tidal debris has a typical form that can be described quite accurately by a simple function. The main parameter determining the maximum energy kick a tidal debris star can receive is the initial mass of the satellite; its orbit matters only to a lesser extent. The main contributors to an unbound stellar population created in this way are massive satellites (M_sat > 10^9 M_sun). In the light of our results, the probability that the observed HVS population is significantly contaminated by tidal debris stars appears small.
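Returning to the escape-speed measurement above: the link between a local escape speed and a mass estimate can be illustrated with a deliberately crude bound (the thesis itself uses NFW-based halo models and simulation-calibrated statistics, none of which is reproduced here). Since v_esc^2(r) = 2|Phi(r)| >= 2 G M(<r) / r for any extended mass distribution, the escape speed yields an upper bound on the mass enclosed within r:

```python
G = 4.301e-6  # Newton's constant in kpc * (km/s)**2 / M_sun

def enclosed_mass_bound(v_esc_kms, r_kpc):
    """Upper bound on the mass enclosed within r implied by the local
    escape speed: v_esc**2 = 2*|Phi(r)| >= 2*G*M(<r)/r."""
    return v_esc_kms ** 2 * r_kpc / (2.0 * G)

# Local escape speed from the thesis (533 km/s), evaluated at an assumed
# solar galactocentric radius of 8.3 kpc (an assumption for illustration,
# not a value quoted in the abstract).
M_bound = enclosed_mass_bound(533.0, 8.3)  # roughly 2.7e11 M_sun
```

The resulting bound of roughly 2.7 x 10^11 M_sun on the mass inside the solar circle sits far below the virial mass of 1.3 x 10^12 M_sun quoted above, which refers to the mass within 180 kpc; this gap is exactly why turning an escape-speed measurement into a virial-mass estimate requires an explicit halo profile, as done in the thesis.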
The supercapacitor is one of the most important energy storage devices, as its construction addresses many of the drawbacks of batteries; however, the low energy density of current systems is a major issue. In this doctoral dissertation, with a view to attaining supercapacitor systems with energy densities comparable to those of batteries, new heteroatom-containing carbons in the form of particles and three-dimensional films were investigated. A nitrogen-containing material, acrodam, was chosen as the carbon precursor owing to its low cost, high carbonization yield, and oligomerizability, among other properties. Carbon particles were prepared from acrodam together with caesium acetate as a meltable flux agent and showed excellent properties in hydroquinone-loaded sulphuric acid electrolyte, with high energy densities (up to 133.0 Wh kg^-1) and sufficient cycle stability. These properties are already comparable to those of batteries. In addition, conductive three-dimensional carbon films were fabricated from acrodam oligomer as the precursor by the inexpensive spin-coating method. The films were found to be homogeneous, flat, and free of voids and cracks, and high conductivities (up to 334 S cm^-1) were obtained at a carbonization temperature of 1000 °C. Furthermore, a porous three-dimensional carbon film could be formed using an organic template at the first attempt. This finding demonstrates the films' potential for various applications such as supercapacitor electrodes; the essential absence of contact resistance within the network should contribute to effective electron transport within the electrode. The progress made in this dissertation opens a new route to further enhancing the energy density of supercapacitors, as well as to other applications, beyond current performance.
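For context, the gravimetric energy density of an ideal capacitor follows E = 1/2 C V^2. A small conversion helper makes the unit arithmetic explicit; the capacitance and voltage below are invented round numbers for illustration, not measurements from this work:

```python
def energy_density_wh_per_kg(c_farad_per_gram, v_volt):
    """Ideal-capacitor energy density E = 1/2 * C * V**2, converted from
    J/g to Wh/kg (1 Wh = 3600 J, 1 kg = 1000 g)."""
    joules_per_gram = 0.5 * c_farad_per_gram * v_volt ** 2
    return joules_per_gram * 1000.0 / 3600.0

# Hypothetical example: 300 F/g of specific capacitance at a 1.2 V window.
e_density = energy_density_wh_per_kg(300.0, 1.2)  # about 60 Wh/kg
```

The quadratic dependence on voltage shows why reaching energy densities like the 133 Wh kg^-1 reported above requires raising the effective capacitance and/or the usable voltage window, consistent with the role of the redox-active hydroquinone-loaded electrolyte described in this work.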
Various synthetic approaches were explored towards the preparation of poly(N-substituted glycine) homo- and co-polymers (a.k.a. polypeptoids). In particular, monomers that would facilitate the preparation of bio-relevant polymers via either chain- or step-growth polymerization were targeted. A 3-step synthetic approach towards N-substituted glycine N-carboxyanhydrides (NNCA) was implemented or developed, and optimized, allowing for an efficient gram-scale preparation of the aforementioned monomer (chain-growth). After exploring several solvents and various conditions, a reproducible and efficient ring-opening polymerization (ROP) of NNCAs was developed in benzonitrile (PhCN). However, achieving molecular weights greater than 7 kDa required longer reaction times (>4 weeks) and subsequently allowed undesirable competing side reactions to occur (e.g. zwitterionic monomer mechanisms). A bulk-polymerization strategy provided molecular weights up to 11 kDa within 24 hours but suffered from low monomer conversions (ca. 25%). Likewise, a preliminary study of alcohol-promoted ROP of NNCAs suffered from impurities and a suspected alternative activated monomer mechanism (AAMM), giving poor inclusion of the initiator and leading to multimodal dispersed polymeric systems. The post-modification of poly(N-allyl glycine) via thiol-ene photo-addition was observed to be quantitative when photo-initiators were used, and enabled the first glycopeptoid prepared under environmentally benign conditions. Furthermore, poly(N-allyl glycine) demonstrated thermoresponsive behavior and could be prepared as a semi-crystalline bio-relevant polymer from solution (i.e. by annealing). Initial efforts to prepare these polymers via standard polycondensation protocols were unsuccessful (step-growth).
However, a thermally induced side-product, diallyl diketopiperazine (DKP), afforded the opportunity to explore photo-induced thiol-ene and acyclic diene metathesis (ADMET) polymerizations. Thiol-ene polymerization readily led to low molecular weight polymers (<2.5 kDa) that were insoluble in most solvents except heated amide solvents (e.g. DMF), whereas ADMET polymerization with diallyl DKP was unsuccessful, presumably due to a six-membered complexation/deactivation state of the catalyst. This understanding prompted the preparation of elongated DKPs, most notably dibutenyl DKP. SEC data support this interpretation, but further optimization studies are required, both in the preparation of the DKP monomers and in the subsequent ADMET polymerization. This work was supported by NMR, GC-MS, FT-IR, SEC-IR, and MALDI-ToF MS characterization. Polymer properties were measured by UV-Vis, TGA, and DSC.
As a commensal bacterium, Escherichia (E.) coli is an important member of the mammalian microbiome, yet it is also the most frequent infectious agent in humans. Depending on the site of infection, intestinal (InPEC) and extraintestinal pathogenic E. coli (ExPEC) are distinguished. The pathogenesis of E. coli infections is determined by virulence factors, which are encoded by specific virulence-associated genes (inVAGs and exVAGs). exVAGs are also frequently detected in E. coli isolates from the intestine of healthy hosts, which led to the hypothesis that exVAGs support intestinal colonization of the host by E. coli. The main aim of this work was to extend our knowledge of the influence of exVAGs on colonization, and thus on the adhesion of E. coli to epithelial cells of the intestinal tract. Conducting such a comprehensive E. coli population study required the establishment of new screening methods. For genotypic characterization, microbead-based multiplex PCR assays for the detection of 44 VAGs and the phylogenetic group were established; for phenotypic characterization, adhesion and cytotoxicity assays were established. The screening methods are based on VideoScan technology, an automated image-based multifluorescence detection system. A total of 398 E. coli isolates from 13 wild mammal species and 5 wild bird species, as well as from healthy humans and domestic pigs and from humans and pigs with urinary tract disease, were characterized. The adhesion assays aimed to determine both the adhesion rates and the adhesion patterns of the 317 non-haemolytic isolates on 5 epithelial cell lines. The cytotoxicity of the 81 haemolytic isolates was tested on 4 epithelial cell lines as a function of incubation time. A range of VAGs was detected in the E. coli isolates. Potential InPEC, in particular Shiga toxin-producing and enteropathogenic E. coli, were isolated from humans, domestic pigs and wild animals, above all from roe deer and European hares. exVAGs were detected with strongly varying prevalence in isolates from all species. The largest number and the broadest spectrum of exVAGs were found in isolates from the urine of humans with urinary tract disease, followed by isolates from badgers and roe deer. More exVAGs were detected in isolates of phylogenetic group B2 than in isolates of phylogenetic groups A, B1 and D. The results of the adhesion assays showed that most isolates adhered in a cell line-, tissue- or host-specific manner. One third of the isolates did not adhere to any cell line, and only two isolates adhered strongly to all cell lines. In general, more isolates adhered to human and to intestinal cell lines. Isolates from red squirrels and blackbirds, as well as from the urine of humans and domestic pigs with urinary tract disease, were particularly capable of strong adhesion. The adhesion patterns formed by the isolates comprised diffuse adhesion, microcolonies, chains and agglomerations. Statistical analyses revealed associations between exVAGs and a high adhesion rate: for example, the presence of afa/dra was associated with a higher adhesion rate on Caco-2 and 5637 cells, and that of sfa/foc on IPEC-J2 cells. The results of the cytotoxicity assays showed a very strong, time-dependent destruction of the monolayers of all epithelial cell lines by the α-haemolysin-positive isolates. The high toxicity of haemolytic wildlife isolates towards the human cell lines was striking. The screening methods developed in this work made it possible to characterize large numbers of bacteria and provided an overview of the distribution of VAGs in E. coli from different hosts.
Wild animals in particular were identified as reservoirs of pathogenic E. coli, based on the detection of VAGs in the corresponding isolates combined with their adhesion capacity and pronounced cytotoxicity. A cell line-specific adhesion of isolates carrying certain exVAGs also became apparent, confirming the possible influence of exVAGs on intestinal colonization. In follow-up work, however, expression and functional analyses of the corresponding proteins are indispensable. Based on the microcolony formation of commensal E. coli, it is suggested that adhesion patterns, and hence colonization strategies, that have so far been attributed to pathogenic E. coli should rather be regarded as general colonization strategies. E. coli α-haemolysin generally acts cytotoxically on epithelial cells; an adhesion-supporting mechanism of this toxin discussed in the literature therefore appears questionable. This work demonstrated that the developed screening methods enable comprehensive analyses of a large number of E. coli isolates.
This cumulative dissertation explored the detection of the natural background of fast neutrons, the so-called cosmic-ray neutron sensing (CRS) approach, to measure field-scale soil moisture in cropped fields. Primary cosmic rays penetrate the top of the atmosphere and interact with atmospheric particles. These interactions result in a cascade of high-energy neutrons, which continues travelling through the atmospheric column. Finally, neutrons penetrate the soil surface and a second cascade is produced, the so-called secondary cosmic-ray neutrons (fast neutrons). Fast neutrons are partly absorbed by hydrogen (soil moisture). The remaining neutrons scatter back to the atmosphere, where their flux is inversely correlated with the soil moisture content, thereby allowing a non-invasive, indirect measurement of soil moisture. The CRS methodology is mainly evaluated on the basis of a field study carried out on a farmland in Potsdam (Brandenburg, Germany) over three crop seasons with corn, sunflower and winter rye, a bare-soil period, and two winter periods. In addition, field monitoring was carried out in the Schaefertal catchment (Harz, Germany) for long-term testing of CRS against ancillary data. At the first experimental site, the CRS method was calibrated and validated using different approaches to soil moisture measurement. During a period with corn, local-scale soil moisture was measured near the surface only; in the subsequent periods (sunflower and winter rye), sensors were placed at three depths (5 cm, 20 cm and 40 cm). The direct transfer of CRS calibration parameters between two vegetation periods led to a large overestimation of soil moisture by the CRS. Part of this overestimation was attributed to an underestimation of the CRS observation depth during the corn period (5-10 cm), which was later recalculated to values between 20 and 40 cm in the other crop periods (sunflower and winter rye). 
The results from these monitoring periods with different crops showed that vegetation played an important role in the CRS measurements. Water contained in crop biomass, above and below ground, produces substantial neutron moderation. This effect was accounted for by a simple model for neutron corrections due to vegetation; it followed crop development and reduced the overall CRS soil moisture error for the sunflower and winter rye periods. At the Potsdam farmland, soil hydraulic parameters were also estimated inversely at the field scale, using CRS soil moisture from the sunflower period, within a modelling framework coupling HYDRUS-1D and PEST. Subsequently, the field-scale soil hydraulic properties were compared against local-scale soil properties (modelling and measurements). Good agreement was obtained despite the large difference in support volume. This simple modelling framework points to future research directions in which CRS soil moisture is used to parameterize field-scale models. In the Schaefertal catchment, the CRS measurements were verified using precipitation and evapotranspiration data. At monthly resolution, CRS soil water storage was well correlated with these two weather variables. The water balance, however, could not be closed, owing to missing information on other compartments such as groundwater and catchment discharge. In the catchment, the influence of snow on natural neutrons was also evaluated. As also observed at the Potsdam farmland, the CRS signal was strongly influenced by snowfall and snow accumulation. A simple strategy for measuring snow was presented for the Schaefertal case. 
In conclusion, this dissertation showed that (a) cosmic-ray neutron sensing (CRS) has strong potential to provide feasible measurements of mean soil moisture at the field scale in cropped fields; (b) CRS soil moisture is strongly influenced by other environmental water pools such as vegetation and snow, which should therefore be considered in the analysis; (c) CRS water storage can be used in soil hydrological modelling to determine soil hydraulic parameters; and (d) the CRS approach has strong potential for long-term monitoring of soil moisture and for water balance studies.
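The conversion from a corrected neutron count rate to volumetric soil moisture is commonly done with the calibration function of Desilets et al. (2010). The sketch below is a minimal illustration of that standard function, not the site-specific calibration of this thesis; the shape parameters a0, a1, a2 are the usual literature values, and the bulk density and count rates in the example are illustrative numbers.

```python
def crs_soil_moisture(n_counts, n0, bulk_density=1.4,
                      a0=0.0808, a1=0.372, a2=0.115):
    """Standard CRS calibration function (Desilets et al., 2010).

    n_counts : corrected neutron count rate
    n0       : count rate over dry soil (the site calibration parameter)
    Returns volumetric soil moisture (m^3 m^-3).
    """
    gravimetric = a0 / (n_counts / n0 - a1) - a2
    return bulk_density * gravimetric  # convert to volumetric via bulk density
```

The inverse relation is visible directly: a lower count rate (more neutrons moderated by hydrogen) maps to a wetter soil.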
In soils and sediments there is a strong coupling between local biogeochemical processes and the distribution of water, electron acceptors, acids and nutrients. The two sides are closely related and affect each other from small to larger scales. Soil structures such as aggregates, roots, layers or macropores enhance the patchiness of these distributions. At the same time, it is difficult to access the spatial distribution and temporal dynamics of these parameters; non-invasive imaging techniques with high spatial and temporal resolution overcome these limitations. New non-invasive techniques are needed to study the dynamic interaction of plant roots with the surrounding soil, as well as the complex physical and chemical processes in structured soils. In this study we developed an efficient, non-destructive in-situ method to determine biogeochemical parameters relevant to plant roots growing in soil: a quantitative fluorescence imaging method suitable for visualizing the spatial and temporal pH changes around roots. We adapted the fluorescence imaging set-up and coupled it with neutron radiography to study simultaneously root growth, oxygen depletion by respiration activity, and root water uptake. The combined set-up was subsequently applied to a structured soil system to map the patchy structure of oxic and anoxic zones induced by a chemical oxygen consumption reaction at spatially varying water contents. Moreover, results from a similar fluorescence imaging technique for nitrate detection were complemented by a numerical modelling study in which we used the imaging data to simulate biodegradation under anaerobic, nitrate-reducing conditions.
Famously, Einstein read off the geometry of spacetime from Maxwell's equations. Today, we take this geometry so seriously that our fundamental theory of matter, the standard model of particle physics, is based on it. However, there seems to be a gap in our understanding when it comes to physics outside the solar system. Independent surveys show that we need concepts like dark matter and dark energy to make our models fit the observations, but these concepts do not fit into the standard model of particle physics. To overcome this problem we have to be open, at the very least, to matter fields with kinematics and dynamics beyond the standard model, and such matter fields may well correspond to different spacetime geometries. This is the basis of this thesis: it studies the underlying spacetime geometries and ventures into the quantization of those matter fields independently of any background geometry. In the first part of this thesis, conditions are identified that a general tensorial geometry must fulfil to serve as a viable spacetime structure. Kinematics of massless and massive point particles on such geometries are introduced, and the physical implications are investigated. Additionally, field equations for massive matter fields are constructed, for example a modified Dirac equation. In the second part, a background-independent formulation of quantum field theory, the general boundary formulation, is reviewed. The general boundary formulation is then applied to the Unruh effect as a testing ground, and first attempts are made to quantize massive matter fields on tensorial spacetimes.
The problem under consideration in this thesis is a two-level atom in a photonic crystal driven by a pumping laser. The photonic crystal provides an environment for the atom that modifies the decay of the excited state, especially if the atomic frequency is close to the band gap. The population inversion is investigated as well as the emission spectrum. The dynamics is analysed in the context of open quantum systems. Due to the multiple reflections in the photonic crystal, the system has a finite memory that precludes the Markovian approximation. In the Heisenberg picture, the equations of motion for the system variables form an infinite hierarchy of integro-differential equations; to obtain a closed system, approximations such as a weak-coupling approximation are needed. The thesis starts with a simple photonic crystal that is amenable to analytic calculations: a one-dimensional photonic crystal consisting of alternating layers. The Bloch modes inside and the vacuum modes outside a finite crystal are linked by a transformation matrix that is interpreted as a transfer matrix. Formulas for the band structure, the reflection from a semi-infinite crystal, and the local density of states in absorbing crystals are derived; defect modes and negative refraction are discussed. The quantum optics part of the work starts with the discussion of three problems related to the full resonance fluorescence problem: a pure dephasing model, the driven atom, and resonance fluorescence in free space. In the lowest order of the system-environment coupling, the one-time expectation values for the full problem are calculated analytically, and the stationary states are discussed for certain cases. For the calculation of the two-time correlation functions and spectra, the additional problem of correlations between the two times appears. In the Markovian case, the quantum regression theorem is valid; in the general case, the fluctuation-dissipation theorem can be used instead. 
The two-time correlation functions are calculated by both methods, and within the chosen approximations both deliver the same result. Several plots show the dependence of the spectrum on the parameters, and some examples of squeezing spectra are shown for different approximations. A projection operator method is used to establish two kinds of Markovian expansion, with and without time convolution. The lowest order is identical to the lowest order of the system-environment coupling, but higher orders give different results.
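For a lossless bilayer crystal at normal incidence, the transfer-matrix approach yields the textbook Bloch dispersion relation cos(KΛ) = cos(k1 d1) cos(k2 d2) − ((n1² + n2²)/(2 n1 n2)) sin(k1 d1) sin(k2 d2), with ki = ni ω/c; a band gap opens wherever the right-hand side exceeds 1 in magnitude, so the Bloch wavenumber K becomes complex and the mode is evanescent. The sketch below illustrates this standard relation only; it is not the thesis's own formalism, and the refractive indices and layer thicknesses used in the example are illustrative.

```python
import math

def bloch_dispersion_rhs(omega, n1, d1, n2, d2, c=1.0):
    """Right-hand side of cos(K * Lambda) for a lossless two-layer unit cell
    at normal incidence (transfer-matrix result)."""
    k1, k2 = n1 * omega / c, n2 * omega / c
    return (math.cos(k1 * d1) * math.cos(k2 * d2)
            - 0.5 * (n1 / n2 + n2 / n1) * math.sin(k1 * d1) * math.sin(k2 * d2))

def in_band_gap(omega, n1, d1, n2, d2, c=1.0):
    """True if no propagating Bloch mode exists at this frequency."""
    return abs(bloch_dispersion_rhs(omega, n1, d1, n2, d2, c)) > 1.0
```

For a quarter-wave stack (n1 d1 = n2 d2 = πc/(2ω0)) the mid-gap frequency ω0 gives cos(KΛ) = −(n1² + n2²)/(2 n1 n2) < −1, i.e. a gap centred at ω0, while the long-wavelength limit ω → 0 is always propagating.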
This dissertation is about the factors that contribute to the surface forms of tones in connected speech in Akan, an African tone language spoken in Ghana. Akan has two level tones (low and high) as well as automatic and non-automatic downstep. Downstep is the major factor that influences the surface forms of tones. The thesis shows that downstep is caused by declination. It is argued that declination is an intonational property of Akan, which serves to signal coherence. A phonological representation using a high and a low register tone, associating with the left and right edge of an intonational phrase (IP), respectively, is proposed. Declination/downstep is modelled using a (phonetic) pitch implementation algorithm (Liberman & Pierrehumbert, 1984). An innovative application of the algorithm is presented, which naturally captures the relation between declination and downstep in Akan. Another important factor is the prosodic manifestation of sentence-level pragmatic meanings, such as sentence mode and focus. Regarding the former, the thesis shows that a post-lexical low tone, which associates with the right edge of an IP, signals interrogativity. Additionally, lexical tones in yes-no questions are realized in a higher pitch register, which does not lead to a reduction of declination. It is claimed that the higher register is not part of the phonological representation in Akan, but that it emerges at the phonetic level to compensate for the 'unnatural' form of the question morpheme and to satisfy the frequency code (Gussenhoven, 2002; 2004). An extension of Rialland's (2007) typology by a new category called 'low tense' question prosody is proposed. Concerning focus marking, it is argued that the use of the morpho-syntactic focus marking strategy is related to extra-grammatical factors, such as hearer expectation, discourse expectability (Zimmermann, 2007) and emphasis (Hartmann, 2008). 
If a speaker of Akan wants to highlight a particular element in a sentence in situ, i.e. by means of prosody, the default prosodic structure is modified in such a way that the focused element forms its own phonological phrase (pP). If it is already contained in a pP, the boundary delimiting the focused element is enhanced (Féry, 2012). This restructuring/enhancement is accompanied by an interruption of the otherwise continuous melody due to the insertion of a pause and/or a glottal stop. Besides declination and intonation, raising of H tones applies in Akan. H raising is analyzed as a local anticipatory planning effect, employed at the phonetic level, which enhances the perceptual distance between low and high tones. Low tones are raised if they are wedged between two high tones; this L raising is argued to be a local carryover effect (co-articulation). Further, it is demonstrated that global anticipatory raising takes place: Akan speakers anticipate the length of an IP. Preplanning (anticipatory raising) is argued to be an important process at the level of pitch implementation. It serves to ensure that declination can be maintained throughout the IP, which prevents pitch resetting.
The melody of an Akan sentence is largely determined by the choice of words. The inventory of post-lexical tones is small: it consists of post-lexical register tones, which trigger declination, and post-lexical intonational tones, which signal sentence type. The overall melodic shape is falling. At the local level, H raising and L raising occur. At the global level, initial low and high tones are realized higher if they occur in a long and/or complex sentence. This dissertation shows that many factors, emerging at different levels of the tone production process, contribute to the surface form of tones in Akan.
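In pitch implementation models of the Liberman & Pierrehumbert (1984) type, each downstep scales the pitch register by a constant factor toward an asymptotic reference line, so successive H targets fall exponentially. The sketch below is a generic illustration of that iterative scaling, not the thesis's specific algorithm for Akan; the downstep factor, reference line and starting F0 are made-up values.

```python
def downstep_targets(f0_start, n_tones, d=0.8, reference=80.0):
    """Generate F0 targets (Hz) for a sequence of downstepped H tones:
    each successive target is pulled toward the asymptotic reference
    line by the constant downstep factor d (0 < d < 1)."""
    targets = []
    f = f0_start
    for _ in range(n_tones):
        targets.append(round(f, 1))
        f = reference + d * (f - reference)  # iterative register scaling
    return targets
```

For example, downstep_targets(200.0, 4) yields a strictly falling series that approaches, but never reaches, the 80 Hz reference line, mirroring how declination bottoms out rather than continuing indefinitely.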
Rhythm is a temporal and systematic organization of acoustic events in terms of prominence, timing and grouping, helping to structure our most basic experiences, such as body movement, music and speech. In speech, rhythm groups auditory events, e.g., sounds and pauses, into words, making their boundaries acoustically prominent and aiding word segmentation and recognition by the hearer. After word recognition, the hearer is able to retrieve word meaning from his mental lexicon, integrating it with information from other linguistic domains, such as semantics, syntax and pragmatics, until comprehension is achieved. The importance of speech rhythm, however, is not restricted to word segmentation and recognition. Beyond the word level, rhythm continues to operate as an organization device, interacting with different linguistic domains, such as syntax and semantics, and grouping words into larger prosodic constituents, organized in a prosodic hierarchy. This dissertation investigates the function of speech rhythm as a sentence segmentation device during syntactic ambiguity processing, possible limitations on its use, for instance in the context of second language processing, and its transferability as a cognitive skill to the music domain.
The European automobile sector : taxation, market liberalization and its contribution to CO₂ reduction
(2013)
The automobile sector is currently one of the most important industries in Europe. About 2.2 million employees work in it directly and a further 9.8 million indirectly (six percent of all employees in Europe), generating, with an annual turnover of roughly € 780 billion, a substantial share of the European gross domestic product (GDP). From the consumers' perspective, too, the car has become indispensable in daily life for the 80% of European households that own one. The European states derive about € 380 billion of their tax revenue from the automobile industry. States, consumers and the automobile industry thus all have a strong interest in a flourishing sector. The dark side of the automobile industry is its CO₂ emissions, which, at 20% of all CO₂ emissions in Europe (up from 12% in 1970), contribute substantially to climate change, and the trend is rising. Over the past years, more and more states have therefore aligned their passenger car taxation, explicitly or implicitly, with environmental standards, with the aim of making driving more expensive or restricting it, and/or of promoting the use of low-emission cars. Besides climate protection, the European Union (EU) pursues the goal of creating a single European internal market. Owing to the lack of binding EU directives on charges, the laws, taxes and levies at the national and regional level have reached a barely manageable scale, since under the subsidiarity principle the member states continue to design their tax systems largely autonomously, as long as this complies with certain Europe-wide requirements. This results in a very heterogeneous tax system, which, particularly in the transport sector, entails significant market effects (e.g., higher registration numbers for diesel-powered cars or shorter holding periods). 
Only the rules on restraints of competition are uniform across Europe. Many examples of market distortions and of the heterogeneity of the tax systems in Europe can still be found: market distortions and restraints of competition show up in vehicle prices, which vary strongly across Europe. In Denmark, for example, cars cost up to 37% less than in Germany. This thesis examines and compares the charges on the purchase, ownership and use of passenger cars in the EU countries as well as Norway and Switzerland, both systematically and quantitatively. For the quantitative analysis, a database with the information needed for the tax calculations was compiled, including, for example, tax rates and tariffs, vehicle data, fuel prices, maintenance costs, insurance premiums, exchange rates, and the average depreciation of the vehicles. Based on this and on certain assumptions, the absolute charges for representative vehicles in the various countries were calculated. Particular attention is paid to CO₂-oriented taxation, which has been introduced (at least partially) in 17 countries. The thesis also considers other, i.e., non-fiscal, measures of the European Union for CO₂ reduction, compares them with alternative instruments, and analyses the effect of differing taxation on the internal car market, e.g., the influence of charges on European car prices and thus on arbitrage effects. It turns out that taxation in Europe is very heterogeneous, both in the level of charges and conceptually in the multitude of tax bases and tariffs, and that it contributes substantially to the very different total costs of car use. The relative tax burden in the high-income countries of Western Europe is not high enough to reduce fuel consumption noticeably. 
For the same reason, the CO₂-oriented amendment of the German motor vehicle tax cannot be expected to provide a sufficient purchase incentive in favour of more efficient vehicles. The instruments introduced by the European Union in the past to reduce CO₂ emissions from road transport did not achieve the desired emission reductions. The European Union's most recent measure, prescribing emission limits to car manufacturers, is neither effective nor efficient. Over the last decade, car prices in Europe have converged somewhat. This is due less to a convergence in taxation than to the gradual liberalization of the European car market and the amendments of the block exemption regulation.
This thesis contributes to the study of the transformation process from the school-based vocational education of the GDR to the Oberstufenzentren (upper-level vocational school centres) in the state of Brandenburg in the 1990s. The triad of factors analysed comprises the transfer of institutions, the transfer of personnel and know-how, and the transfer of funds. The complete restructuring is the transformation's most remarkable outcome from the perspective of vocational school pedagogy. The thesis works out which conclusions can be drawn from the transformation process for reform processes in vocational school policy. It pursues the question of whether the adoption of the West German vocational school system inevitably amounted to the use of a blueprint, or whether there were, or could in principle have been, divergent paths. The transformation process is examined not at the level of individual schools, the curricular and concrete design of the various educational programmes, or the deployment of teachers, but at the level of the development of the framework conditions and structures of the Oberstufenzentren, i.e., at the meta level of policy as the substantive dimension of politics. The focus is thus on the 'Oberstufenzentrum' as a system. Examining this system excludes the examination of the actors: the lower school supervisory authorities, the OSZ management, the teachers and, not least, the students. The analysis serves to understand and interpret what could emerge both through federal requirements, such as the Unification Treaty, and through the actions of the actors involved in vocational education and vocational school policy. The substantive dimension of the field of vocational education and vocational school policy within the politico-administrative system in the narrower sense is examined mainly with reference to the KMK (Standing Conference of the Ministers of Education and Cultural Affairs), as are the external actors and their role. 
The thesis works out which conclusions can be drawn from the transformation process for reform processes in vocational education and vocational school policy. The objects of the study are selected contributions to various aspects of the development of the Oberstufenzentren in the state of Brandenburg: selected legal foundations, specialist articles, etc., as well as aspects of the debate on vocational school and vocational education policy during the Wende period and subsequently, after the founding of the state of Brandenburg, as documented in working papers. By comparing the initial legal situations in the five East German states, the thesis analyses whether there were constitutional or school-law particularities that led, or had to lead, to different ministerial action. The reproduced contributions to the development of the Oberstufenzentren are examined and commented on with regard to the relevance of KMK resolutions and framework agreements. The institutional transfer consisted of the accession of the East German states to the rules of procedure of the KMK and the resulting adoption of the school types and educational programmes, including the legal consequences through the creation of school law in the states. The transfer of personnel and know-how took place at the level of the administrative staff seconded from the ministries of education and the intermediate authorities. With the present study, aspects of vocational school and vocational education policy, and in a broader sense of the science of vocational education, are reflected upon for the first time for the transformation process during the period under examination. In this respect, the study can be regarded as a contribution to historical research on vocational education.
The surface heat flow (qs) is paramount for modelling the thermal structure of the lithosphere. Changes in qs over a distinct lithospheric unit normally directly reflect changes in the crustal composition and therewith the radiogenic heat budget (e.g., Rudnick et al., 1998; Förster and Förster, 2000; Mareschal and Jaupart, 2004; Perry et al., 2006; Hasterok and Chapman, 2011, and references therein) or, less commonly, changes in the mantle heat flow (e.g., Pollack and Chapman, 1977). Knowledge of this physical property is therefore of great interest for both academic research and the energy industry. The present study focuses on the qs of central and southern Israel as part of the Sinai Microplate (SM). Having formed during Oligocene to Miocene rifting and break-up of the African and Arabian plates, the SM is characterized by a young and complex tectonic history. Because thermal diffusion needs on the order of several tens of millions of years to pass through the lithosphere (e.g., Fowler, 1990), qs values of the area reflect conditions of pre-Oligocene times. The thermal structure of the lithosphere beneath the SM in general, and south-central Israel in particular, has remained poorly understood. To address this problem, the two parameters needed for the qs determination were investigated: temperature measurements were made in ten pre-existing oil and water exploration wells, and the thermal conductivity of 240 drill core and outcrop samples was measured in the laboratory. Thermal conductivity is the sensitive parameter in this determination. Laboratory measurements were performed on both dry and water-saturated samples, which is labour- and time-consuming. An alternative is to measure thermal conductivity in the dry state and convert it to a saturated value using mean model approaches. 
The availability of a voluminous and diverse thermal conductivity dataset in this study allowed (1) the calculation, in connection with the temperature gradient, of new reliable qs values and their use in modelling the thermal pattern of the crust in south-central Israel prior to the young tectonic events, and (2) in connection with comparable datasets, an assessment of the quality of different mean model approaches for the indirect determination of the bulk thermal conductivity (BTC) of rocks. The reliability of numerically derived BTC values appears to vary between the different mean models and also depends strongly on sample lithology. Yet correction algorithms may significantly reduce the mismatch between measured and calculated conductivity values based on the different mean models. Furthermore, the dataset allowed the derivation of lithotype-specific conversion equations to calculate the water-saturated BTC directly from dry-measured BTC and porosity data (e.g., well-log-derived porosity) without using any mean model, thus providing a suitable tool for the fast analysis of large datasets. The results of the study indicate that qs in the study area is significantly higher than previously assumed. The newly presented qs values range between 50 and 62 mW m⁻². A weak trend of decreasing heat flow can be identified from east to west (55-50 mW m⁻²), and an increase from the Dead Sea Basin to the south (55-62 mW m⁻²). The observed range can be explained by variations in the composition (heat production) of the upper crust, accompanied by more systematic spatial changes in its thickness. The new qs data can then be used, in conjunction with petrophysical data and information on the structure and composition of the lithosphere, to adjust a model of the pre-Oligocene thermal state of the crust in south-central Israel. A 2-D steady-state temperature model was calculated along an E-W traverse based on the DESIRE seismic profile (Mechie et al., 2009). 
The model comprises the entire lithosphere down to the lithosphere-asthenosphere boundary (LAB), incorporating the most recent knowledge of the lithosphere in pre-Oligocene time, i.e., prior to the onset of rifting and plume-related lithospheric thermal perturbations. The adjustment of modelled and measured qs allows conclusions about the pre-Oligocene LAB depth: the best fit yields a most likely depth of 150 km, which is consistent with estimates made in comparable regions of the Arabian Shield. This is the first modelled pre-Oligocene LAB depth, and it provides important clues to the thermal state of the lithosphere before rifting. This, in turn, is vital for a better understanding of the (thermo-)dynamic processes associated with lithosphere extension and continental break-up.
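The two quantities combined in a qs determination are related through Fourier's law, qs = λ · dT/dz, and the dry-to-saturated conductivity conversion discussed above can be illustrated with the widely used geometric mean model, in which swapping the pore fluid from air to water gives λ_sat = λ_dry · (λ_water/λ_air)^φ. The sketch below is a generic illustration of these textbook relations, not the lithotype-specific conversion equations derived in the thesis; the conductivity, gradient and porosity values in the example are illustrative.

```python
def surface_heat_flow(conductivity, temp_gradient):
    """Fourier's law: q_s [W m^-2] = lambda [W m^-1 K^-1] * dT/dz [K m^-1]."""
    return conductivity * temp_gradient

def saturated_btc_geometric(lam_dry, porosity, lam_water=0.6, lam_air=0.026):
    """Convert a dry-measured bulk thermal conductivity (BTC) to its
    water-saturated value under the geometric mean model: the pore-fluid
    conductivity (air -> water) enters weighted by the porosity phi."""
    return lam_dry * (lam_water / lam_air) ** porosity
```

With λ = 2.5 W m⁻¹ K⁻¹ and a gradient of 22 K km⁻¹ (0.022 K m⁻¹), this gives qs = 0.055 W m⁻² = 55 mW m⁻², within the 50-62 mW m⁻² range reported above.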
Learning a model of the relationship between the attributes and the annotated labels of data examples serves two purposes. Firstly, it enables the prediction of labels for examples without annotation. Secondly, the parameters of the model can provide useful insights into the structure of the data. If the data has an inherent partitioned structure, it is natural to mirror this structure in the model. Such mixture models predict by combining the individual predictions generated by the mixture components, which correspond to the partitions in the data. Often the partitioned structure is latent and has to be inferred when learning the mixture model. Directly evaluating the accuracy of the inferred partition structure is in many cases impossible, because the ground truth cannot be obtained for comparison. However, it can be assessed indirectly by measuring the prediction accuracy of the mixture model that arises from it. This thesis addresses the interplay between improving predictive accuracy by uncovering latent cluster structure in data and validating the estimated structure by measuring the accuracy of the resulting predictive model. In the application of filtering unsolicited emails, the emails in the training set are latently clustered into advertisement campaigns. Uncovering this latent structure allows filtering of future emails with very low false positive rates. In order to model the cluster structure, a Bayesian clustering model for dependent binary features is developed in this thesis. Knowing the clustering of emails into campaigns can also help uncover which emails have been sent on behalf of the same network of captured hosts, a so-called botnet. This association of emails with networks is another layer of latent clustering. Uncovering it allows service providers to further increase the accuracy of email filtering and to defend effectively against distributed denial-of-service attacks. 
To this end, a discriminative clustering model is derived in this thesis that is based on the graph of observed emails. The partitionings inferred using this model are evaluated through their capacity to predict the campaigns of new emails. Furthermore, when classifying the content of emails, statistical information about the sending server can be valuable. Learning a model that is able to make use of it requires training data that includes server statistics. In order to also use training data where the server statistics are missing, a model that is a mixture over potentially all substitutions thereof is developed. Another application is to predict the navigation behavior of the users of a website. Here, there is no a priori partitioning of the users into clusters, but to understand different usage scenarios and design different layouts for them, imposing a partitioning is necessary. The presented approach simultaneously optimizes the discriminative as well as the predictive power of the clusters. Each model is evaluated on real-world data and compared to baseline methods. The results show that explicitly modeling the assumptions about the latent cluster structure leads to improved predictions compared to the baselines. It is beneficial to incorporate a small number of hyperparameters that can be tuned to yield the best predictions in cases where the prediction accuracy cannot be optimized directly.
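The core idea of the mixture models described above, combining per-component predictions weighted by component membership, can be illustrated with a minimal sketch. All names, weights, and component functions here are hypothetical and are not taken from the thesis:

```python
# Hypothetical sketch of mixture-model prediction: each latent cluster
# contributes its own predictor, and the overall prediction is the
# membership-weighted combination of the component predictions.

def mixture_predict(x, components):
    """components: list of (weight, predict_fn) pairs; weights sum to 1."""
    return sum(w * predict(x) for w, predict in components)

# Two toy components with fixed mixing weights (assumed values).
components = [
    (0.7, lambda x: 2.0 * x),   # predictor for one latent cluster
    (0.3, lambda x: -1.0 * x),  # predictor for the other cluster
]

result = mixture_predict(1.0, components)  # 0.7*2.0 + 0.3*(-1.0), approx 1.1
```

In a learned mixture model the weights would be the inferred posterior probabilities of cluster membership for `x`, rather than fixed constants.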
The professionalized communication of complex entities such as states and nations, which shifts its treatment of political questions into the spheres of image and influence, cannot bypass the significance of reputation against the backdrop of growing competition. Beyond its economic importance, reputation, understood as medium- or long-term public standing that grants its bearers the power to define and persuade, legitimizes positions of power and authority. In a mediatized society, communication with the public grows in importance both for acquiring and maintaining reputation and for its revocation. Increasing scandalization, a characteristic of the media society, plays a role here: it renders reputation more fragile and is considered the most effective mechanism for revoking reputation, as the example of the Danish "cartoon affair" illustrates. In a communicatively fast-moving world of increasingly freely available information, terms such as public diplomacy, nation branding, country branding and place branding haunt the foreign ministries of the global village; their common denominator is, first of all, outward-directed communication. But even the question of sender and addressee, of actor and recipient, of messages and target groups points to the complexity of a country's communication. The aim of this study is to identify, within countries' communication efforts, factors that can sustainably influence image and reputation. The following questions are at the centre of the investigation: How are countries perceived and how is reputation formed? Can a country's reference groups be described by stakeholder theory (Freeman 1984), and if so, what consequences does such a consideration of stakeholder and addressee groups have for a country's communication? 
Which aspects of the theoretical approaches can be regarded as central to the communication of countries? And finally: can reflection through practice, using Switzerland as a case study, confirm the relevance of the aspects identified or supplement them with further ones, and can success factors be identified and suitable instruments for reputation communication be demonstrated on this basis?
Antarctic glacier forefields are extreme environments and pioneer sites for ecological succession. The Antarctic continent serves as a natural laboratory for studying microbial community development because of its special environment, geographic isolation and little anthropogenic influence. Increasing temperatures due to global warming lead to enhanced deglaciation processes in cold-affected habitats, and new terrain is becoming exposed to soil formation and accessible for microbial colonisation. This study aims to understand the structure and development of glacier forefield bacterial communities, especially how soil parameters impact the microorganisms and how these are adapted to the extreme conditions of the habitat. To this effect, a combination of cultivation experiments and molecular, geophysical and geochemical analyses was applied to examine two glacier forefields of the Larsemann Hills, East Antarctica. Culture-independent molecular tools such as terminal restriction fragment length polymorphism (T-RFLP), clone libraries and quantitative real-time PCR (qPCR) were used to determine bacterial diversity and distribution. Cultivation of yet unknown species was carried out to gain insights into the physiology and adaptation of the microorganisms. Adaptation strategies of the microorganisms were studied by determining changes in the cell membrane phospholipid fatty acid (PLFA) inventory of an isolated bacterium in response to temperature and pH fluctuations and by measuring enzyme activity at low temperature in environmental soil samples. The two studied glacier forefields are extreme habitats characterised by low temperatures, low water availability and small oligotrophic nutrient pools, and represent sites of different bacterial succession in relation to soil parameters. The investigated sites showed microbial succession at an early step of soil formation near the ice tongue in comparison to closely located but rather older and more developed soil from the forefield. 
At the early step the succession is influenced by a deglaciation-dependent areal shift of soil parameters, followed by a variable and prevalently depth-related distribution of the soil parameters that is driven by the extreme Antarctic conditions. The dominant taxa in the glacier forefields are Actinobacteria, Acidobacteria, Proteobacteria, Bacteroidetes, Cyanobacteria and Chloroflexi. Connecting soil characteristics with bacterial community structure showed that soil parameters and soil formation along the glacier forefield influence the distribution of certain phyla. In the early step of succession the relatively undifferentiated bacterial diversity reflects the undifferentiated soil development and has a high potential to shift according to past and present environmental conditions. With progressing development, environmental constraints such as water or carbon limitation have a greater influence. Adapting the culturing conditions to the cold and oligotrophic environment, the number of culturable heterotrophic bacteria reached up to 10^8 colony-forming units per gram soil, and 148 isolates were obtained. Two new psychrotolerant bacteria, Herbaspirillum psychrotolerans PB1T and Chryseobacterium frigidisoli PB4T, were characterised in detail and described as novel species in the families Oxalobacteraceae and Flavobacteriaceae, respectively. The isolates are able to grow at low temperatures, tolerate temperature fluctuations and are not specialised to a certain substrate; they are therefore well adapted to the cold and oligotrophic environment. The adaptation strategies of the microorganisms were analysed in environmental samples and cultures, focussing on extracellular enzyme activity at low temperature and PLFA analyses. 
Extracellular phosphatase (pH 11 and pH 6.5), β-glucosidase, invertase and urease activity was detected in the glacier forefield soils at low temperature (14°C). These enzymes catalyse the conversion of various compounds, providing necessary substrates, and may further play a role in the soil formation and total carbon turnover of the habitat. The PLFA analysis of the newly isolated species C. frigidisoli showed that the cold-adapted strain develops different strategies to maintain cell membrane function under changing environmental conditions by altering the PLFA inventory at different temperatures and pH values. A newly discovered fatty acid, which has not been found in any other microorganism so far, significantly increased at decreasing temperature and low pH and thus plays an important role in the adaptation of C. frigidisoli. This work gives insights into the diversity, distribution and adaptation mechanisms of microbial communities in oligotrophic cold-affected soils and shows that Antarctic glacier forefields are suitable model systems to study bacterial colonisation in connection with soil formation.
Large areas in the humid tropics are currently undergoing land-use change. The decrease of tropical rainforest, which is felled for land clearing and timber production, is countered by increasing areas of tree plantations and secondary forests. These changes are known to affect the regional water cycle as a result of plant-specific water demand and by influencing key soil properties which determine hydrological flow paths. One of these key properties sensitive to land-use change is the saturated hydraulic conductivity (Ks) as it governs vertical percolation of water within the soil profile. Low values of Ks in a certain soil depth can form an impeding layer and lead to perched water tables and the development of predominantly lateral flow paths such as overland flow. These processes can induce nutrient redistribution, erosion and soil degradation and thus affect ecosystem services and human livelihoods. Due to its sensitivity to land-use change, Ks is commonly used to assess the associated changes in hydrological flow paths. The objective of this dissertation was to assess the effect of land-use change on hydrological flow paths by analysing Ks as indicator variable. Sources of Ks variability, their implications for Ks monitoring and the relationship between Ks and near-surface hydrological flow paths in the context of land-use change were studied. The research area was located in central Panama, a country widely experiencing the abovementioned changes in land use. Ks is dependent on both static, soil-inherent properties such as particle size and clay mineralogy and dynamic, land use-dependent properties such as organic carbon content. By conducting a pair of studies with one of these influences held constant in each, the importance of static and dynamic properties for Ks was assessed. 
Applying a space-for-time approach to sample Ks under secondary forests of different age classes on comparable soils, a recovery of Ks from the former pasture use was shown to require more than eight years. The process was limited to the 0−6 cm sampling depth and showed large variability among replicates. A wavelet analysis of a Ks transect crossing different soil map units under comparable land cover, old-growth tropical rainforest, showed large small-scale variability, which was attributed to biotic influences, as well as a possible but non-conclusive influence of soil types. The two results highlight the importance of dynamic, land use-dependent influences on Ks. Monitoring studies can help to quantify land use-induced change of Ks, but there is a variety of sampling designs which differ in efficiency of estimating mean Ks. A comparative study of four designs and their suitability for Ks monitoring is used to give recommendations about designing a Ks monitoring scheme. Quantifying changes in spatial means of Ks for small catchments with a rotational stratified sampling design did not prove to be more efficient than Simple Random Sampling. The lack of large-scale spatial structure prevented benefits of stratification, and large small-scale variability resulting from local biotic processes and artificial effects of destructive sampling caused a lack of temporal consistency in the re-sampling of locations, which is part of the rotational design. The relationship between Ks and near-surface hydrological flow paths is of critical importance when assessing the consequences of land-use change in the humid tropics. The last part of this dissertation aimed at disclosing spatial relationships between Ks and overland flow as influenced by different land cover types. The effects of Ks on overland-flow generation were spatially variable, different between planar plots and incised flowlines and strongly influenced by land-cover characteristics. 
A simple comparison of Ks values and rainfall intensities was insufficient to describe the observed pattern of overland flow. Likewise, event flow in the stream was apparently not directly related to overland-flow response patterns within the catchments. The study emphasises the importance of combining pedological, hydrological, meteorological and botanical measurements to comprehensively understand the land use-driven change in hydrological flow paths. In summary, Ks proved to be a suitable parameter for assessing the influence of land-use change on soils and hydrological processes. The results illustrated the importance of land cover and of the spatial variability of Ks for decisions on sampling designs and for interpreting overland-flow generation. As relationships between Ks and overland flow were shown to be complex and dependent on land cover, an interdisciplinary approach is required to comprehensively understand the effects of land-use change on soils and near-surface hydrological flow paths in the humid tropics.
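The "simple comparison" that the dissertation finds insufficient is the classic infiltration-excess (Hortonian) criterion: overland flow is expected wherever rainfall intensity exceeds Ks. A minimal sketch of that naive check, with hypothetical values, makes clear what the more comprehensive approach has to go beyond:

```python
# Naive Hortonian criterion (the baseline the study shows to be
# insufficient on its own): infiltration-excess overland flow is
# predicted where rainfall intensity exceeds the saturated hydraulic
# conductivity Ks of the topsoil. All values below are hypothetical.

def expects_overland_flow(rain_mm_per_h, ks_mm_per_h):
    """True if rainfall intensity exceeds Ks (infiltration excess)."""
    return rain_mm_per_h > ks_mm_per_h

storm = 40.0                                  # assumed storm intensity, mm/h
low_ks = expects_overland_flow(storm, 25.0)   # True: compacted pasture soil
high_ks = expects_overland_flow(storm, 120.0) # False: permeable forest soil
```

The study's point is that observed overland-flow patterns deviated from this prediction, which is why land cover, plot geometry and additional measurements had to be taken into account.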
This dissertation examines advice literature on the domestic garden written by women authors (Louisa Johnson, Jane Loudon, Maria Theresa Earle, Gertrude Jekyll, Elizabeth von Arnim) for a female readership, with the claim to practical gardening activity, in the period from 1839 to 1900. The gender perspective is accordingly at the centre of the present study. The focus on the bourgeois middle class follows from the authors' perspective and the readership addressed. The treatment of the garden is subjected to an analysis that asks about the female view of the garden and a specifically female self-conception of women interested in, or engaged in, gardening. In their engagement with the garden, these women contribute to the conception of masculine and feminine and to the evaluation and negotiation of gender norms. Writing and reading about the garden, as well as the actions resulting from them, were linked to the construction of female identity. In their liberating conception of the garden, these women's voices on notions of femininity stand apart from other socially ascribed spheres of activity. Role expectations imposed on the bourgeois woman are neither affirmatively confirmed nor openly and subversively questioned in these works. Rather, they are subtly undermined by offering fields of action that accommodated the desire for self-realization and self-determination. In the garden, a supposedly small, home-bound and restrictive context, the women take on new roles and vary them. Engagement with the garden thus has a proto-feminist character before the onset of the first women's movement, so that one can speak of a garden feminism as an instrument of female self-awareness.
Business processes are fundamental to the operations of a company. Each product manufactured and every service provided is the result of a series of actions that constitute a business process. Business process management is an organizational principle that makes the processes of a company explicit and offers capabilities to implement procedures, control their execution, analyze their performance, and improve them. Therefore, business processes are documented as process models that capture these actions and their execution ordering, and make them accessible to stakeholders. As these models are an essential knowledge asset, they need to be managed effectively. In particular, the discovery and reuse of existing knowledge becomes challenging in the light of companies maintaining hundreds and thousands of process models. In practice, searching process models has been solved only superficially by means of free-text search of process names and their descriptions. Scientific contributions are limited in their scope, as they either present measures for process similarity or elaborate on query languages to search for particular aspects. However, they fall short in addressing efficient search, the presentation of search results, and the support to reuse discovered models. This thesis presents a novel search method, where a query is expressed by an exemplary business process model that describes the behavior of a possible answer. This method builds upon a formal framework that captures and compares the behavior of process models by the execution ordering of actions. The framework contributes a conceptual notion of behavioral distance that quantifies commonalities and differences of a pair of process models, and enables process model search. Based on behavioral distances, a set of measures is proposed that evaluate the quality of a particular search result to guide the user in assessing the returned matches. 
A projection of behavioral aspects to a process model enables highlighting relevant fragments that led to a match and facilitates its reuse. The thesis further elaborates on two search techniques that provide concrete behavioral distance functions as an instantiation of the formal framework. Querying enables search with a notion of behavioral inclusion with regard to the query. In contrast, similarity search obtains process models that are similar to a query, even if the query is not precisely matched. For both techniques, indexes are presented that enable efficient search. Methods to evaluate the quality and performance of process model search are introduced and applied to the techniques of this thesis. They show good results with regard to human assessment and scalability in a practical setting.
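The behavioral distance framework summarized above compares process models by the execution ordering of their actions. As a simplified, hypothetical stand-in for the thesis's actual measures, commonalities between two models can be sketched as set overlap over ordered action pairs (all model data below is invented for illustration):

```python
# Hypothetical sketch: represent each process model's behavior as a set
# of ordered action pairs (a, b) meaning "a can occur before b", and
# quantify behavioral commonality as Jaccard similarity of these sets.
# This is a simplified stand-in, not the thesis's concrete distance.

def behavioral_similarity(pairs_a, pairs_b):
    """Jaccard similarity of two behavioral profiles (sets of pairs)."""
    union = pairs_a | pairs_b
    if not union:
        return 1.0  # two empty behaviors are trivially identical
    return len(pairs_a & pairs_b) / len(union)

# Toy behavioral profiles for a stored model and a query model.
model = {("receive", "check"), ("check", "ship"), ("check", "reject")}
query = {("receive", "check"), ("check", "ship")}

sim = behavioral_similarity(model, query)  # 2 shared of 3 total pairs
```

Under this sketch, querying with behavioral inclusion would instead test `query <= model` (every ordering of the query is matched), while similarity search ranks models by the score even when inclusion fails.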
Introduction: Intestinal bacteria influence gut morphology by affecting epithelial cell proliferation, development of the lamina propria, villus length and crypt depth [1]. Gut microbiota-derived factors have been proposed to also play a role in the development of a 30 % longer intestine, which is characteristic of PRM/Alf mice compared to other mouse strains [2, 3]. Polyamines and SCFAs produced by gut bacteria are important growth factors, which possibly influence mucosal morphology, in particular villus length and crypt depth, and play a role in gut lengthening in the PRM/Alf mouse. However, experimental evidence is lacking. Aim: The objective of this work was to clarify the role of bacterially produced polyamines on crypt depth, mucosa thickness and epithelial cell proliferation. For this purpose, C3H mice associated with a simplified human microbiota (SIHUMI) were compared with mice colonized with SIHUMI complemented by the polyamine-producing Fusobacterium varium (SIHUMI + Fv). In addition, the microbial impact on gut lengthening in PRM/Alf mice was characterized and the contribution of SCFAs and polyamines to this phenotype was examined. Results: SIHUMI + Fv mice exhibited an up to 1.7-fold higher intestinal polyamine concentration compared to SIHUMI mice, which was mainly due to increased putrescine concentrations. However, no differences were observed in crypt depth, mucosa thickness and epithelial proliferation. In PRM/Alf mice, the intestine of conventional mice was 8.5 % longer compared to germfree mice. In contrast, intestinal lengths of C3H mice were similar, independent of the colonization status. The comparison of PRM/Alf and C3H mice, both associated with SIHUMI + Fv, demonstrated that PRM/Alf mice had a 35.9 % longer intestine than C3H mice. However, intestinal SCFA and polyamine concentrations of PRM/Alf mice were similar or even lower, except for N-acetylcadaverine, which was 3.1-fold higher in PRM/Alf mice. 
When germfree PRM/Alf mice were associated with a complex PRM/Alf microbiota, the intestine was one quarter longer compared to PRM/Alf mice colonized with a C3H microbiota. This gut elongation correlated with levels of the polyamine N-acetylspermine. Conclusion: The intestinal microbiota is able to influence intestinal length depending on microbial composition and on the mouse genotype. Although SCFAs do not contribute to gut elongation, an influence of the polyamines N-acetylcadaverine and N-acetylspermine is conceivable. In addition, the study clearly demonstrated that bacterial putrescine does not influence gut morphology in C3H mice.
Background: Breastfed infants have a lower incidence of gastrointestinal infections and atopic diseases than non-breastfed infants. The health-promoting effect of breast milk is assumed to be mediated in part by the intestinal microbiota, which in breastfed infants is characterized by low diversity and a high proportion of bifidobacteria. Recent approaches to improving industrially produced infant formula aim to promote an intestinal microbiota resembling that of breastfed infants. Supplementing infant formula with probiotics (live microorganisms) or prebiotics (indigestible carbohydrates that serve as an energy substrate for probiotic bacteria) could mimic the bifidogenic, antipathogenic and also immunomodulating effects of breast milk. Because pro- and prebiotics interact differently with the gut microbiota and the immune system, their simultaneous administration (synbiotics) targets a synergistic effect. Objective and study design: A randomized controlled clinical trial investigated whether, in the first three months of life of healthy term infants, an infant formula containing a synbiotic establishes an intestinal microbiota resembling that of breastfed infants. The synbiotic consisted of Bifidobacterium animalis ssp. lactis CNCM I-3446 (formerly designated B. lactis BB-12) and bovine milk oligosaccharides. The study comprised two groups of infants who received formula with (SYN group, n=21) or without the supplement (KON group, n=18). Breastfed infants served as reference (REF group, n=23). 
To comprehensively characterize the diversity of bifidobacteria at the species level, quantitative real-time PCR (qPCR) assays based on the single-copy groEL gene as a phylogenetic target were developed and validated for the specific quantification of twelve Bifidobacterium species in human faeces. Results: The supplemented infant formula was well tolerated and supported healthy development, with comparable anthropometric data in the SYN and REF groups. The synbiotic selectively stimulated the growth of lactobacilli and bifidobacteria. The lactobacilli cell count of the SYN group was equivalent to that of the REF group (9.07±0.32 versus 9.90±0.27 log10 cells/g faecal dry matter [mean±SEM]; p<0.0019; equivalence margin of 1 log10 cells/g faecal dry matter) and higher than that of the KON group (8.27±0.31 log10 cells/g faecal dry matter [mean±SEM]). The bifidobacteria cell count was highest in the SYN group (11.54±0.05 versus 11.00±0.17 [REF group] and 10.54±0.24 [KON group] log10 cells/g faecal dry matter [mean±SEM]). The largest number of Bifidobacterium species was detected in the SYN group (167 with [128 without] B. animalis in 56 faecal samples, versus 98 and 93 in 51 faecal samples each of the REF and KON groups). Besides infant-typical species such as B. bifidum and B. breve, species characteristic of adults (B. adolescentis) were also detected more frequently in the SYN group than in the comparison groups. The faecal pH of infants in the SYN group was lower than that of the KON group (6.07±0.20 versus 6.45±0.17 [mean±SEM]) and closer to that of breastfed infants at 5.29±0.12 (mean±SEM). Conclusion: Supplementing an infant formula with the synbiotic of CNCM I-3446 and bovine milk oligosaccharides brought the composition of the intestinal microbiota and the faecal pH closer to those of breastfed infants. 
The groEL-based qPCR assays developed in this work allowed a specific and accurate analysis of the bifidobacterial population under the influence of a synbiotic.
Only slowly do the shock waves seem to subside that, triggered by the results of the PISA surveys, have been traversing Germany's "education republic" for more than a decade and have put large parts of society into a state of veritable education panic. At the threshold of the 21st century, a series of studies documented for reunified Germany a dependence of educational success on social origin that is particularly pronounced in OECD comparison. As one consequence, access to tertiary education has to date been markedly characterized by social inequality. Against this background, this dissertation makes a substantial contribution to the causal explanation of patterns of social selectivity that become visible at the junctions between secondary and post-secondary educational pathways. In an innovative way, the work combines a contemporary action-theoretical model with a complex lifestyle analysis. The analysis is based on survey data collected between January and April 2010 at more than 30 secondary schools in the federal state of Brandenburg. The research interest centres, on the one hand, on identifying socio-cognitive determinants that substantially pre-structure the level and direction of post-secondary educational aspirations and, on the other hand, on locating these determinants within the context of adolescent lifestyles. The complex analytical design proves empirically fruitful: the work provides empirical evidence that the specific configurations of the confirmed psychosocial predictors not only vary in statistically significant ways between adolescent lifestyle patterns, but that more successful types can be distinguished from less successful ones in this respect.
Culture gives people orientation; within it they have quite specific experiences, from which motivational orientations also develop. Athletes from different cultures can therefore develop different motivation and volition. More collectivist cultures tend to be avoidance-motivated, while more individualist cultures tend to be success-oriented. In collectivism, achievement motivation appears more under a social aspect, namely as engagement with a standard of excellence that is set from outside rather than an exclusively personal one. Compared to Germany, Egypt proves to be a rather collectivist culture. The following differences result: there is a significant difference between German and Egyptian wrestlers in competition orientation and in win orientation, with the Egyptian wrestlers scoring higher than the Germans. They also show a somewhat higher goal orientation than the Germans. Contrary to expectations, however, no significant differences between the Egyptian and German wrestlers emerged in the variable win orientation. Both fear of failure and hope of success are higher among the Egyptians than among the Germans. With regard to the modes of action control, the German wrestlers score higher on all three components: they show higher action orientation after failure, higher action planning and higher action execution. This culturally contrastive study of psychological aspects in the areas of achievement motivation and action control can be very useful for the sport of wrestling, since it is important for recognizing characteristics of athletic superiority and weakness. It also reflects the upbeat mood in developed states or the malaise in other states. 
From the intercultural differences in motivation and volition, various measures for sport-psychological interventions can thus be developed. For teams whose members come from different cultural backgrounds, particular care should be taken that culturally conditioned differences are taken into account in everyday training.
Under suitable growth conditions, algal cultures often exhibit a higher cell productivity than is observed in higher plants. Chlamydomonas reinhardtii cells are comparatively small; during the vegetative cell cycle the cell volume is about 50–3500 µm³. Compared to higher plants, however, the biomass concentration in an algal suspension is low: 1 ml at typical concentrations contains between 10^6 and 10^7 algal cells. Quantifications of metabolites or macromolecules used for modelling cellular processes are usually performed on the cell ensemble. In reality, however, each algal cell undergoes an individual development, which complicates the identification of characteristic, generally valid system parameters. The aim of this work was to identify and quantify biochemically relevant measurands in vivo and in vitro by means of optical methods. The first part of the work presents a pulse-amplitude modulation (PAM) fluorometry setup for measuring the chlorophyll fluorescence of single cells as it varies with external influences. The use of a commercial microscope, the implementation of sensitive detection electronics and a suitable immobilization method made it possible to achieve a signal-to-noise ratio at which fluorescence signals of individual living Chlamydomonas cells could be measured. In particular, the cell volume and the chlorophyll fluorescence parameter Fv/Fm, regarded as a measure of the efficiency of the photosynthetic apparatus and of cell fitness, were determined, and a high degree of heterogeneity of these cellular parameters was found across different developmental stages of the synchronized Chlamydomonas cells. 
In the second part of the work, laser scanning microscopy imaging and subsequent image data analysis were applied for the quantitative determination of growth-dependent cellular parameters. A commercial confocal microscope was extended with nonlinear microscopy, which has the advantage of localized excitation, and hence higher spatial resolution and lower overall sample stress. Furthermore, in addition to signal generation by fluorescence excitation, second-harmonic generation (SHG) at biophotonic structures such as cellular starch is possible. Using the measured distribution functions together with model-theoretical approaches, it was possible to determine cellular parameters that are not directly accessible by measurement. The morphological information in the image data allowed the determination of cell volumes and the volumes of subcellular structures such as nuclei, extranuclear DNA or starch granules. Furthermore, the number of subcellular structures within a cell or cell cluster could be determined. Analysis of the signal intensities contained in the image data formed the basis for a relative concentration determination of cellular components such as DNA and starch. With the method of nonlinear microscopy and subsequent image data analysis presented here, the distribution of cellular starch content in a Chlamydomonas population could be followed for the first time during growth and after induced starch degradation. The method was subsequently also applied to cryosections of higher plants such as Arabidopsis thaliana. As a result, it was shown that many cellular parameters, such as the volume, the cellular DNA and starch content, and the number of starch granules, are described by a lognormal distribution with growth-dependent parametrization. 
Cellular parameters such as substance concentration and cell volume show no significant correlation with each other, from which it must be concluded that there is a high degree of heterogeneity of cellular parameters within synchronized Chlamydomonas populations. This holds both for synchronized cultures of Chlamydomonas reinhardtii, regarded as the most homogeneous form, and for the cellular parameters measured in the intact cell assemblies of higher plants. This result is particularly relevant for model-theoretical considerations that rely on empirical data or cellular parameters measured on the cell ensemble, which therefore do not necessarily represent the cellular status of an individual cell.
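The lognormal description of single-cell parameters reported above can be illustrated with a short sketch. The following Python example fits a lognormal to simulated per-cell values; the numbers and parameters are invented for illustration and are not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative per-cell starch signals (arbitrary units); in the thesis such
# values would come from SHG image analysis of single Chlamydomonas cells.
cell_starch = rng.lognormal(mean=2.0, sigma=0.5, size=1000)

# Fitting a lognormal is equivalent to fitting a normal to the log-values.
log_vals = np.log(cell_starch)
mu, sigma = log_vals.mean(), log_vals.std(ddof=1)

# For a lognormal the median equals exp(mu), while the mean is larger,
# exp(mu + sigma^2 / 2) -- a simple consistency check on the fit.
median_pred = np.exp(mu)
mean_pred = np.exp(mu + sigma**2 / 2)
print(f"mu={mu:.2f} sigma={sigma:.2f} "
      f"median={np.median(cell_starch):.2f} (pred {median_pred:.2f}) "
      f"mean={cell_starch.mean():.2f} (pred {mean_pred:.2f})")
```

The mean/median gap is characteristic of lognormal data and is one reason why ensemble averages can misrepresent the status of a typical single cell.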
In recent years children's way of life, nutrition and recreation have changed, and body composition has shifted as a consequence. Overweight is an established global problem. In addition, German children exhibit a less robust skeleton than ten years ago. These developments may elevate the risk of cardiovascular diseases and skeletal modifications. Heredity and environmental factors such as nutrition, socioeconomic status, physical activity and inactivity influence fat accumulation and the skeletal system. Against this background, this study investigated the associations between body shape type, skeletal measures and physical activity; the relations between external skeletal robustness, physical activity and inactivity, BMI and body fat; and the development of body composition, in particular external skeletal robustness, in Russian compared with German children. In a cross-sectional study, 691 German boys and girls aged 6 to 10 years were examined. Anthropometric measurements were taken, and parents answered questionnaires about physical activity and inactivity. Additionally, pedometers were worn to determine the children's physical activity. To compare body composition in Russian and German children, data from the years 2000 and 2010 were used. The study showed that pyknomorphic individuals exhibit the highest external skeletal robustness and leptomorphic ones the lowest. Leptomorphic children may thus have a higher risk of bone diseases in adulthood. Pyknomorphic boys tend to be more physically active. This is assessed as positive because pyknomorphic types display the highest BMI and body fat. Results showed that physical activity may reduce BMI and body fat. In contrast, physical inactivity may lead to an increase in BMI and body fat and may rise with increasing age. Physical activity additionally encourages a robust skeleton. 
Furthermore, external skeletal robustness is associated with BMI, so that BMI as a measure of overweight should be considered critically. The international 10-year comparison showed an increase in BMI in Russian children and in German boys. Currently, Russian children exhibit a higher external skeletal robustness than German children. However, the skeleton of Russian boys is less robust than ten years ago. This trend should be observed in the future, in other countries as well. All in all, several measures should be used to describe the health situation of children and adults. Furthermore, it is essential to support physical activity in children in order to reduce the risk of obesity and to maintain a robust skeleton. In this way, diseases in adulthood can be prevented.
Despite the increased attention devoted to sexual aggression among young people in the international scientific literature, Brazil has little research on the subject focusing exclusively on this group. There is evidence that sexual aggression and victimization may start early. Identifying the magnitude of the problem and the factors that increase the chance of the onset and persistence of sexual victimization is a first step for prevention efforts in this group. Using both cross-sectional and prospective analyses, this study examined the prevalence of, and vulnerability factors for, sexual aggression and victimization in female and male college students (N = 742; M = 20.1 years) in Brazil, of whom a subgroup (n = 354) took part in two measurements six months apart. At Time 1, a Portuguese version of the Short Form of the Sexual Experiences Survey (Koss et al., 2007) was administered to collect information from men and women as both victims and perpetrators of sexual aggression since the age of 14. The students were also asked to provide information on their cognitive representations (sexual scripts) of a consensual sexual encounter, their actual sexual behavior, use of pornography, and experiences of child abuse. At Time 2, the same items from the SES were presented again to assess the incidence of sexual aggression in the 6-month period since T1. The overall prevalence rate of victimization was 27% among men and 29% among women. In contrast, perpetration rates were significantly higher among men (33.7%) than among women (3%). Confirming the hypotheses, cognitive (i.e., risky sexual scripts, normative beliefs), behavioral (i.e., pornography use, sexual behavior patterns) and biographical (i.e., childhood abuse) risk factors were linked to male sexual aggression and to male and female victimization both cross-sectionally and longitudinally, with the path model analyses demonstrating good fit to the data. 
The results supported: a) the role of the sexual script for a first consensual sexual encounter as an underlying factor of actual sexual behavior and of sexual victimization or perpetration; b) the role of pornography as “input” to sexual scripts, indirectly increasing the risk of victimization and both directly and indirectly the risk of perpetration; c) the direct and indirect link, mediated by sexual behavior, between childhood experiences of (sexual) abuse and male sexual aggression and victimization; and d) the direct link between child sexual abuse and sexual victimization among women. Few gender differences were found in the victimization model. The findings challenge societal beliefs that sexual aggression is restricted to groups with low socio-economic status and that men are unlikely to be sexually coerced. The disparity between male victimization and female perpetration rates is discussed in the light of traditional gender roles in Brazil. This study is also the first prospective investigation of risk factors for sexual aggression and victimization in Brazil, demonstrating the role of behavioral, cognitive and biographical factors that increase vulnerability among college students.
In the context of ecological risk assessment of chemicals, individual-based population models hold great potential to increase the ecological realism of current regulatory risk assessment procedures. However, developing and parameterizing such models is time-consuming and often ad hoc. Using standardized, tested submodels of individual organisms would make individual-based modelling more efficient and coherent. In this thesis, I explored whether Dynamic Energy Budget (DEB) theory is suitable for use as a standard submodel in individual-based models, both for ecological risk assessment and for theoretical population ecology. First, I developed a generic implementation of DEB theory in an individual-based modelling (IBM) context: DEB-IBM. Using the DEB-IBM framework, I tested the ability of DEB theory to predict population-level dynamics from the properties of individuals. We used Daphnia magna as a model species, for which data at the individual level were available to parameterize the model, and population-level predictions were compared against independent data from controlled population experiments. We found that DEB theory successfully predicted population growth rates and peak densities of experimental Daphnia populations in multiple experimental settings, but failed to capture the decline phase, when the available food per Daphnia was low. Further assumptions on food-dependent mortality of juveniles were needed to capture the population dynamics after the initial population peak. The resulting model then predicted, without further calibration, characteristic switches between small- and large-amplitude cycles, which have been observed for Daphnia. We conclude that cross-level tests help detect gaps in current individual-level theories and will ultimately lead to theory development and the establishment of a generic basis for individual-based models and ecology. 
In addition to these theoretical explorations, we tested the potential of DEB theory combined with IBMs to extrapolate effects of chemical stress from the individual to the population level. For this we used individual-level information on the effect of 3,4-dichloroaniline on Daphnia. The individual data suggested direct effects on reproduction but no significant effects on growth. Assuming such direct effects on reproduction, the model was able to accurately predict the population response to increasing concentrations of 3,4-dichloroaniline. We conclude that DEB theory combined with IBMs holds great potential for standardized ecological risk assessment based on ecological models.
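The general idea of extrapolating a direct effect on reproduction from the individual to the population level can be sketched in a deliberately simplified individual-based model. This is not the DEB-IBM of the thesis: all rules and parameters below are invented for illustration.

```python
import random

random.seed(1)

# Toy individual-based population sketch: individuals feed on a shared
# resource, accumulate energy, and reproduce; a stressor parameter scales
# reproduction down, mimicking a direct effect on reproduction such as that
# reported for 3,4-dichloroaniline. All values are arbitrary.
FOOD_INPUT = 50.0      # food added per time step (arbitrary units)
INTAKE = 1.0           # maximum food intake per individual per step
REPRO_COST = 5.0       # assimilated energy needed per offspring
MORTALITY = 0.05       # background mortality probability per step

def simulate(steps=100, stress=0.0):
    food = 100.0
    pop = [0.0] * 10          # energy buffers of 10 founder individuals
    history = []
    for _ in range(steps):
        food += FOOD_INPUT
        survivors, offspring = [], 0
        for energy in pop:
            if random.random() < MORTALITY:
                continue                       # individual dies
            eaten = min(INTAKE, food / max(len(pop), 1))
            food -= eaten
            energy += eaten
            while energy >= REPRO_COST:        # invest energy in offspring
                energy -= REPRO_COST
                if random.random() < (1.0 - stress):
                    offspring += 1             # stressor reduces births
            survivors.append(energy)
        pop = survivors + [0.0] * offspring
        history.append(len(pop))
    return history

control = simulate(stress=0.0)
stressed = simulate(stress=0.5)
print(control[-1], stressed[-1])
```

Because food input is fixed, both populations become food-limited; the stressed population settles at a lower density, which is the kind of population-level response the DEB-IBM predicts from individual-level effect data.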
Imaginary Interfaces
(2013)
The size of a mobile device is primarily determined by the size of the touchscreen. As such, researchers have found that the way to achieve ultimate mobility is to abandon the screen altogether. These wearable devices are operated using hand gestures, voice commands or a small number of physical buttons. By abandoning the screen these devices also abandon the currently dominant spatial interaction style (such as tapping on buttons), because, seemingly, there is nothing to tap on. Unfortunately this design prevents users from transferring their learned interaction knowledge gained from traditional touchscreen-based devices. In this dissertation, I present Imaginary Interfaces, which return spatial interaction to screenless mobile devices. With these interfaces, users point and draw in the empty space in front of them or on the palm of their hands. While they cannot see the results of their interaction, they obtain some visual and tactile feedback by watching and feeling their hands interact. After introducing the concept of Imaginary Interfaces, I present two hardware prototypes that showcase two different forms of interaction with an imaginary interface, each with its own advantages: mid-air imaginary interfaces can be large and expressive, while palm-based imaginary interfaces offer an abundance of tactile features that encourage learning. Given that imaginary interfaces offer no visual output, one of the key challenges is to enable users to discover the interface's layout. This dissertation offers three main solutions: offline learning with coordinates, browsing with audio feedback and learning by transfer. The latter I demonstrate with the Imaginary Phone, a palm-based imaginary interface that mimics the layout of a physical mobile phone that users are already familiar with. Although these designs enable interaction with Imaginary Interfaces, they tell us little about why this interaction is possible. 
In the final part of this dissertation, I present an exploration into which human perceptual abilities are used when interacting with a palm-based imaginary interface and how much each accounts for performance with the interface. These findings deepen our understanding of Imaginary Interfaces and suggest that palm-based Imaginary Interfaces can enable stand-alone eyes-free use for many applications, including interfaces for visually impaired users.
3D from 2D touch
(2013)
While interaction with computers used to be dominated by mice and keyboards, new types of sensors now allow users to interact through touch, speech, or using their whole body in 3D space. These new interaction modalities are often referred to as "natural user interfaces" or "NUIs." While 2D NUIs have experienced major success on billions of mobile touch devices sold, 3D NUI systems have so far been unable to deliver a mobile form factor, mainly due to their use of cameras. The fact that cameras require a certain distance from the capture volume has prevented 3D NUI systems from reaching the flat form factor mobile users expect. In this dissertation, we address this issue by sensing 3D input using flat 2D sensors. The systems we present observe the input from 3D objects as 2D imprints upon physical contact. By sampling these imprints at very high resolutions, we obtain the objects' textures. In some cases, a texture uniquely identifies a biometric feature, such as the user's fingerprint. In other cases, an imprint stems from the user's clothing, such as when walking on multitouch floors. By analyzing from which part of the 3D object the 2D imprint results, we reconstruct the object's pose in 3D space. While our main contribution is a general approach to sensing 3D input on 2D sensors upon physical contact, we also demonstrate three applications of our approach. (1) We present high-accuracy touch devices that allow users to reliably touch targets that are a third of the size of those on current touch devices. We show that different users and 3D finger poses systematically affect touch sensing, which current devices perceive as random input noise. We introduce a model for touch that compensates for this systematic effect by deriving the 3D finger pose and the user's identity from each touch imprint. We then investigate this systematic effect in detail and explore how users conceptually touch targets. 
Our findings indicate that users aim by aligning visual features of their fingers with the target. We present a visual model for touch input that eliminates virtually all systematic effects on touch accuracy. (2) From each touch, we identify users biometrically by analyzing their fingerprints. Our prototype Fiberio integrates fingerprint scanning and a display into the same flat surface, solving a long-standing problem in human-computer interaction: secure authentication on touchscreens. Sensing 3D input and authenticating users upon touch allows Fiberio to implement a variety of applications that traditionally require the bulky setups of current 3D NUI systems. (3) To demonstrate the versatility of 3D reconstruction on larger touch surfaces, we present a high-resolution pressure-sensitive floor that resolves the texture of objects upon touch. Using the same principles as before, our system GravitySpace analyzes all imprints and identifies users based on their shoe soles, detects furniture, and enables accurate touch input using feet. By classifying all imprints, GravitySpace detects the users' body parts that are in contact with the floor and then reconstructs their 3D body poses using inverse kinematics. GravitySpace thus enables a range of applications for future 3D NUI systems based on a flat sensor, such as smart rooms in future homes. We conclude this dissertation by projecting into the future of mobile devices. Focusing on the mobility aspect of our work, we explore how NUI devices may one day augment users directly in the form of implanted devices.
Large Central European flood events of the past have demonstrated that flooding can affect several river basins at the same time, leading to catastrophic economic and humanitarian losses that can stretch emergency resources beyond planned levels of service. For Germany, the spatial coherence of flooding, the contributing processes and the role of trans-basin floods for a national risk assessment are largely unknown, and analysis is limited by a lack of systematic data, information and knowledge on past events. This study investigates the frequency and intensity of trans-basin flood events in Germany. It evaluates the data and information basis on which knowledge about trans-basin floods can be generated in order to improve any future flood risk assessment. In particular, the study assesses whether flood documentations and related reports can provide a valuable data source for understanding trans-basin floods. An adaptive algorithm was developed that systematically captures trans-basin floods using series of mean daily discharge at a large number of sites with equal time-series length (1952-2002). It identifies the simultaneous occurrence of flood peaks based on the exceedance of an initial threshold of a 10-year flood at one location and consecutively pools all causally related, spatially and temporally lagged peak recordings at the other locations. A weighted cumulative index was developed that accounts for the spatial extent and the individual flood magnitudes within an event and allows quantifying the overall event severity. The parameters of the method were tested in a sensitivity analysis. An intensive study on sources and ways of information dissemination of flood-relevant publications in Germany was conducted. Based on the method of systematic reviews, a strategic search approach was developed to identify relevant documentations for each of the 40 strongest trans-basin flood events. 
A novel framework for assessing the quality of event specific flood reports from a user’s perspective was developed and validated by independent peers. The framework was designed to be generally applicable for any natural hazard type and assesses the quality of a document addressing accessibility as well as representational, contextual, and intrinsic dimensions of quality. The analysis of time-series of mean daily discharge resulted in the identification of 80 trans-basin flood events within the period 1952-2002 in Germany. The set is dominated by events that were recorded in the hydrological winter (64%); 36% occurred during the summer months. The occurrence of floods is characterised by a distinct clustering in time. Dividing the study period into two sub-periods, we find an increase in the percentage of winter events from 58% in the first to 70.5% in the second sub-period. Accordingly, we find a significant increase in the number of extreme trans-basin floods in the second sub-period. A large body of 186 flood relevant documentations was identified. For 87.5% of the 40 strongest trans-basin floods in Germany at least one report has been found and for the most severe floods a substantial amount of documentation could be obtained. 80% of the material can be considered grey literature (i.e. literature not controlled by commercial publishers). The results of the quality assessment show that the majority of flood event specific reports are of a good quality, i.e. they are well enough drafted, largely accurate and objective, and contain a substantial amount of information on the sources, pathways and receptors/consequences of the floods. The inclusion of this information in the process of knowledge building for flood risk assessment is recommended. Both the results as well as the data produced in this study are openly accessible and can be used for further research. The results of this study contribute to an improved spatial risk assessment in Germany. 
The identified set of trans-basin floods provides the basis for an assessment of the chance that flooding occurs simultaneously at a number of sites. The information obtained from flood event documentation can usefully supplement the analysis of the processes that govern flood risk.
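The peak-pooling idea described above can be sketched in a few lines. This is a hedged illustration, not the thesis algorithm: the synthetic discharge data, the stand-in for the 10-year-flood thresholds, the lag window and the severity weighting are all invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily discharge at several gauges; one large trans-basin flood
# wave is injected around day 200 so that all sites peak almost together.
n_sites, n_days = 5, 365
discharge = rng.gamma(shape=2.0, scale=10.0, size=(n_sites, n_days))
discharge[:, 200:205] += 500.0
q10 = np.quantile(discharge, 0.99, axis=1)  # stand-in for 10-yr thresholds
WINDOW = 3                                   # max lag between pooled peaks

def pool_events(q, thresholds, window):
    """Group threshold exceedances at different sites into joint events."""
    exceed = [(t, s) for s in range(q.shape[0])
              for t in np.flatnonzero(q[s] > thresholds[s])]
    exceed.sort()                            # chronological order
    events, current = [], []
    for t, s in exceed:
        if current and t - current[-1][0] > window:
            events.append(current)           # gap too large: new event
            current = []
        current.append((t, s))
    if current:
        events.append(current)
    return events

def severity(event, q, thresholds):
    """Toy weighted cumulative index: relative exceedances times extent."""
    sites = {s for _, s in event}
    excess = sum(q[s, t] / thresholds[s] - 1.0 for t, s in event)
    return len(sites) * excess

events = pool_events(discharge, q10, WINDOW)
biggest = max(events, key=lambda e: severity(e, discharge, q10))
print(len(events), len({s for _, s in biggest}))
```

The real method additionally requires causal relation between the pooled peaks and was subjected to a sensitivity analysis of its parameters; here the time window alone stands in for that step.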