Keywords
- Curriculum Framework (17)
- European values education (17)
- Europäische Werteerziehung (17)
- Familie (17)
- Family (17)
- Lehrevaluation (17)
- Studierendenaustausch (17)
- Unterrichtseinheiten (17)
- curriculum framework (17)
- lesson evaluation (17)
- student exchange (17)
- teaching units (17)
- Information Structure (4)
- Modellierung (3)
- ligand (3)
- morphology (3)
- prosody (3)
- remote sensing (3)
- Arctic (2)
- BMI (2)
- Bayes'sche Netze (2)
- Bayesian networks (2)
- CSCW (2)
- Cloud Computing (2)
- Evolution (2)
- Fernerkundung (2)
- Forschungsprojekte (2)
- Future SOC Lab (2)
- Germany (2)
- HCI (2)
- Hochwasser (2)
- In-Memory Technologie (2)
- Kontext (2)
- Markov chain (2)
- Modeling (2)
- Morphologie (2)
- Multicore Architekturen (2)
- Nanopartikel (2)
- Neuroenhancement (2)
- Populationsdynamik (2)
- Proteom (2)
- Sun: activity (2)
- Verifikation (2)
- Vorhersage (2)
- additive particle (2)
- adsorption (2)
- anomalous diffusion (2)
- aqueous-solution (2)
- arbuscular mycorrhizal symbiosis (2)
- biomaterials (2)
- carbon (2)
- carbon-dioxide (2)
- carbon-dioxide capture (2)
- climate change (2)
- cluster expansion (2)
- data analysis (2)
- eye movements (2)
- floods (2)
- international comparison (2)
- ionic liquid (2)
- metal-organic frameworks (2)
- microbial communities (2)
- models (2)
- molecular motor (2)
- phase-transitions (2)
- phosphorylation (2)
- photoisomerization (2)
- population dynamics (2)
- prediction (2)
- probiotics (2)
- protein interactions (2)
- random walk (2)
- sensor (2)
- topics (2)
- zinc (2)
- 1830 (1)
- 3-D Modellierung (1)
- 3-D outcrop modeling (1)
- 3D Computer Grafik (1)
- 3D Computer Graphics (1)
- AGN (1)
- Abbildende Spektroskopie (1)
- Adaptive hypermedia (1)
- Affiliationsnetzwerke (1)
- African states (1)
- Afrikanische Staaten (1)
- Akan (1)
- Aktiven Galaxienkerne (1)
- Aktuatoren (1)
- Alaunschiefer (1)
- Albania (1)
- Allozymes (1)
- Altersunterschiede (1)
- Alum shale (1)
- Ambiguität (1)
- Anaphora (1)
- Anisotroper Kuwahara Filter (1)
- Anomalien (1)
- Anoxie (1)
- Anpassung (1)
- Antarctica (1)
- Antarktis (1)
- Antibiotika-Toleranz (1)
- Antibiotikaresistenz (1)
- Antimikrobielle Peptide (1)
- Approximate Bayesian Computation (1)
- Arabian Plate (1)
- Arabidopsis thaliana (1)
- Arabische Platte (1)
- Arbeitsgedächtniskapazität (1)
- Arctic tundra (1)
- Arktis (1)
- Arnos Padiri (1)
- Aspect-Oriented Programming (1)
- Aspektorientierte Programmierung (1)
- Astroteilchenphysik (1)
- Attribut-Merge-Prozess (1)
- Attribute Merge Process (1)
- Aufmerksamkeitskontrolle (1)
- Aufschluss-Modellierung (1)
- Augenbewegungen (1)
- Ausbreitung (1)
- Ausführung von Modellen (1)
- Ausführungsgeschichte (1)
- Ausschüsse (1)
- Averaging principle (1)
- BPMN (1)
- BRDF (1)
- Backgrounding (1)
- Bankenregulierung (1)
- Bat rabies (1)
- Beltrami equation (1)
- Berlin (1)
- Berührungseingaben (1)
- Beschränkungen und Abhängigkeiten (1)
- Bimolecular Reaction (1)
- Binding Theory (1)
- Biochromophore (1)
- Blickbewegungen (1)
- Boden (1)
- Bodenfeuchte (1)
- Bodenhydrologie (1)
- Bodenparameter (1)
- Bohrlochmessungen (1)
- Brazil (1)
- Breast cancer (1)
- Brownian bridge (1)
- Brownification (1)
- Bruck-Reilly extension (1)
- Brustkrebs (1)
- Bulk-mediated diffusion; (1)
- CRS (1)
- CS Ed Research (1)
- CS at school (1)
- CS curriculum (1)
- Cambodia (1)
- Capture into resonance (1)
- Carbide (1)
- Carbides (1)
- Carbon Cycling (1)
- Carbon cycling (1)
- Carbonate (1)
- Cauchy data spaces (1)
- Cell proliferation (1)
- Cellulose (1)
- Centering Theory (1)
- Chile (1)
- Chlorogensäure (1)
- Cloud computing (1)
- Clusteranalyse (1)
- Cobalt (1)
- Comparing programming environments (1)
- Composites (1)
- Constraints (1)
- Contracts (1)
- Contrast (1)
- Correction (1)
- Corruption (1)
- Course of Study (1)
- Cue-Gewichtung (1)
- DLR equation (1)
- DNA cleavage (1)
- Darmbakterien (1)
- Darmlänge (1)
- Data Modeling (1)
- Data Privacy (1)
- Database Cost Model (1)
- Databases (1)
- Datenabhängigkeiten-Entdeckung (1)
- Datenanalyse (1)
- Datenbank-Kostenmodell (1)
- Datenbanken (1)
- Datenintegration (1)
- Datenmodellierung (1)
- Datenschutz (1)
- Dekomposition (1)
- Design Thinking (1)
- Deutschland (1)
- Dezentralisierung (1)
- Differenz von Gauss Filtern (1)
- Digitale Whiteboards (1)
- Diglossie (1)
- Dirichlet to Neumann operator (1)
- Disambiguierung (1)
- Diskursgegebenheit (1)
- Diversität (1)
- Downstep (1)
- Durchmusterung (1)
- Duria Antiquio (1)
- E-Learning (1)
- E. coli (1)
- ETAS (1)
- EU (1)
- Ecology (1)
- Ecotoxicology (1)
- Ego-depletion (1)
- Eingabegenauigkeit (1)
- Einstein manifolds (1)
- Einstein-Hilbert action (1)
- Einstein-Hilbert-Wirkung (1)
- Einstein-Mannigfaltigkeiten (1)
- Einzugsgebietsklassifizierung (1)
- Elektronendynamik (1)
- Elementarteilchen (1)
- EnMAP (1)
- Energie (1)
- English as a seond language (1)
- Enterolignanen (1)
- Enterolignans (1)
- Entwicklungsökonomik (1)
- Erdbeben (1)
- Erdgeschichte (1)
- Erdrutsch (1)
- Ereignisdokumentation (1)
- Ernährungsfaktoren (1)
- Erosion (1)
- Escherichia coli (1)
- Euler equations (1)
- Euler-Lagrange equations (1)
- Euro (1)
- Euro-Financial-Crisis (1)
- Euro-Finanzkrise (1)
- European Union (1)
- Europäische Union (1)
- Extension (1)
- Extremal problem (1)
- Fano Factor (1)
- Fehlerbeseitigung (1)
- Fehlerquellen der Modellierung (1)
- Feld (1)
- Finanzmärkte (1)
- Flagellenbewegung (1)
- Fluktuations-Dissipations-Theorem (1)
- Fluoreszenz (1)
- Fluoreszenzbildgebung (1)
- Flussgesteuerter Bilateraler Filter (1)
- Focus (1)
- Focus+Context Visualization (1)
- Focus-sensitivity (1)
- Fokus (1)
- Fokus-&-Kontext Visualisierung (1)
- Fokussensitivität (1)
- Forstwirtschaft (1)
- Fragmentierung (1)
- Francophonie (1)
- Frankophonie (1)
- Fredholm property (1)
- Galaxienhaufen (1)
- Galaxy Struktur (1)
- Gas (1)
- Gas Sorption (1)
- Gefahrenanalyse (1)
- Gene Regulatory Network (1)
- General subject “Information” (1)
- Geologie (1)
- Georgia (1)
- Georgien (1)
- German (1)
- German morphology (1)
- German past participles (1)
- Gibbs perturbation (1)
- Girsanov formula (1)
- Gitterdynamik (1)
- Givenness (1)
- Gletschervorfeld (1)
- Glycopeptoid (1)
- Glykogen (1)
- Grammatica (1)
- Grandonica (1)
- Grauliteratur (1)
- Greece (1)
- Hanxleden (1)
- Hauptspeicherdatenbank (1)
- Hemmung (1)
- Henry De La Beche (1)
- Hochenergiephysik (1)
- Hydrogenase (1)
- Hydrologie (1)
- Hydrothermale Karbonisierung (1)
- Hyperschnellläufersterne (1)
- IBD (1)
- ICT (1)
- ICT curriculum (1)
- ISSEP (1)
- Imaging spectroscopy (1)
- Impakt (1)
- In-Memory Database (1)
- In-Memory technology (1)
- InSAR (1)
- InSAR Datenanalyse (1)
- Index (1)
- Index Structures (1)
- Indexstrukturen (1)
- Infinite State (1)
- Informatics Education (1)
- Information Ethics (1)
- Informationsflüsse (1)
- Informationsstruktur (1)
- Inklusionsabhängigkeit (1)
- Integralfeld-Spektroskopie (1)
- Interactive Rendering (1)
- Interaktives Rendering (1)
- Internet applications (1)
- Internetanwendungen (1)
- Invarianten (1)
- Invariants (1)
- Ionic Liquid (1)
- Ishkashimi (1)
- Ivy (1)
- Java Security Framework (1)
- Kaffeeproteine (1)
- Kambodscha (1)
- Karbonat (1)
- Katalyse (1)
- Klassifikator (1)
- Klimawandel (1)
- Kohlenstoff (1)
- Kohlenstoffe auf Biomasse-Basis (1)
- Komposite (1)
- Konjugierten polyelektrolyt (1)
- Kooperation (1)
- Koreferenz (1)
- Korrektursakkaden (1)
- Korruption (1)
- Kugelsternhaufen (1)
- Kultivierung (1)
- Kurzkettige Fettsäuren (1)
- Kwa languages (1)
- Körperbautyp (1)
- Körperfett (1)
- Lafora disease (1)
- Landepositionsfehler (1)
- Landnutzungswandel (1)
- Landschaftseffekte (1)
- Landslide (1)
- Laplace-Beltrami operator (1)
- Laserpulskontrolle (1)
- Laufzeitmodelle (1)
- Leadership (1)
- Lefschetz number (1)
- Leistungsfähigkeit (1)
- Leopold von Buch (1)
- Lesen (1)
- Levy flights (1)
- Levy measure (1)
- Lexikon (1)
- Lignan-converting bacteria (1)
- Lignan-umwandelnde Bakterien (1)
- Liguistisch (1)
- Link-Entdeckung (1)
- Lipide (1)
- Lithosphäre (1)
- Lively Kernel (1)
- Lévy diffusion approximation (1)
- Lévy diffusions on manifolds (1)
- MHD (1)
- MOOCs (1)
- Markedness (1)
- Markov processes (1)
- Medicago truncatula (1)
- Mediterranean Sea (1)
- Megamodell (1)
- Megamodels (1)
- Metal-organic framework (1)
- Methan (1)
- Microsatellites (1)
- Mikrobiologie (1)
- Mikrobiota (1)
- Mikrosakkaden (1)
- Milchstrassenmasse (1)
- Minderheiten (1)
- Mischmodelle (1)
- Mitochondrial DNA (1)
- Mobile Application Development (1)
- Mobilgeräte (1)
- Model Execution (1)
- Model-Driven Engineering (1)
- Modeling Languages (1)
- Modell (1)
- Modellgetriebene Softwareentwicklung (1)
- Modellierungssprachen (1)
- Models at Runtime (1)
- Molkenproteine (1)
- Monoschichten (1)
- Motilität (1)
- Multicore architectures (1)
- Multivariate Analyse (1)
- Multivariate statistic (1)
- Musikrhythmus (1)
- Muttergalaxien (1)
- NCA (1)
- NMR (1)
- NW Himalaja (1)
- NW Himalaya (1)
- Nachbeben (1)
- Nanoparticles (1)
- Nanostruktur (1)
- Naturgefahren (1)
- Naturgeschichte (1)
- Navigation (1)
- Neotektonik (1)
- Netzwerkanalyse (1)
- Nicht-photorealistisches Rendering (1)
- Niedrigwasser (1)
- Nonlinear Laplace operator (1)
- Nordostdeutsches Becken (1)
- Northeast German Basin (1)
- OCP-Place (1)
- OT-Modellierung (1)
- OTDR (1)
- Oberflächenwärmefluß (1)
- Object Constraint Programming (1)
- Objekt-orientiertes Programmieren mit Constraints (1)
- Online Course (1)
- Online-Learning (1)
- Online-Lernen (1)
- Onlinekurs (1)
- Optical sensor (1)
- Optische Sensoren (1)
- Orchestia montagui (1)
- Owner-Retained Access Control (ORAC) (1)
- OxyR (1)
- PF Interface (1)
- POL (1)
- PRM/Alf Maus (1)
- PRM/Alf mouse (1)
- PSF Analyse (1)
- PSF fitting (1)
- Paläo-Strain-Berechnung (1)
- Peptid-Membran-Wechselwirkung (1)
- Performance (1)
- Performance Information Use (1)
- Permafrost (1)
- Perowskit (1)
- Person-Organization Fit (1)
- Perturbed complexes (1)
- Pflanzengemeinschaften (1)
- Pflanzliches Lignan (1)
- Phonetik (1)
- Phosphorylierung (1)
- Photonischer Kristall (1)
- Phylogeography (1)
- Physik schwarzer Löcher (1)
- Phänotypische Heterogenität (1)
- Plant lignan (1)
- Poisson bridge (1)
- Policy Languages (1)
- Policy Sprachen (1)
- Polyamine (1)
- Polyglycin (1)
- Polythiophen (1)
- Prevention (1)
- Primary informatics (1)
- Primärproduktion (1)
- Principal agent relation (1)
- Probenahmestrategie (1)
- Probiotika (1)
- Problem solving (1)
- Problem solving strategies (1)
- Process Enactment (1)
- Process Mining (1)
- Process Modeling (1)
- Programming environments for children (1)
- Programming learning (1)
- Pronomen (1)
- Pronouns (1)
- Prosodie (1)
- Prosody (1)
- Protein-Wechselwirkungen (1)
- Proteinmodifizierung (1)
- Prozessausführung (1)
- Prozessmodellierung (1)
- Prozessmodellsuche (1)
- Präsentation (1)
- Pseudobeobachtungen (1)
- Pseudomonas putida (1)
- Psycholinguistik (1)
- Public Service Motivation (1)
- Pytho n (1)
- Qualitätsbewertung (1)
- Quantenfeldtheorie (1)
- Quantitative Daten (1)
- Quasiconformal mapping (1)
- Query (1)
- RAVE (1)
- Random-Walk-Theorie (1)
- Raumzeitgeometrie (1)
- Reaction Rate Constant (1)
- Reaktionszeitmethoden (1)
- Reciprocal process (1)
- Regressionsanalyse (1)
- Relevanz (1)
- Research Projects (1)
- Resonanzfluoreszenz (1)
- Responsive Polymere (1)
- Rhizophagus irregularis (1)
- Rhizosphere (1)
- Ricci flow (1)
- Ricci-Fluss (1)
- Rohstoffe (1)
- Rotationskurven (1)
- Runtime Binding (1)
- Russia (1)
- Russian Scrambling (1)
- Russian Sign Language (1)
- Russland (1)
- Röntgenastronomie (1)
- SQL (1)
- Sanskrit (1)
- Sauerstoff (1)
- Scalability (1)
- Schema-Entdeckung (1)
- Schwimmende Mikroorganismen (1)
- Scientific understanding of Information (1)
- Search Algorithms (1)
- Second order elliptic equations (1)
- Seismische Geschwindigkeiten (1)
- Seismische Interferometrie (1)
- Seismische Tomographie (1)
- Sekundärsakkaden (1)
- Selektion (1)
- Self-Adaptive Software (1)
- Self-control (1)
- Semantik (1)
- Semantische Analyse (1)
- Sensor (1)
- Service-Oriented Architecture (1)
- Service-Orientierte Architekturen (1)
- Service-orientierte Systeme (1)
- Sign Language of the Netherlands (1)
- Similarity Measures (1)
- Similarity Search (1)
- Skala (1)
- Skalierbarkeit (1)
- Skeletal robustness (1)
- Skelettrobustizität (1)
- Skorokhod' s invariance principle (1)
- SoaML (1)
- Softwaretest (1)
- Soil hydrology (1)
- Sorption (1)
- Sprachkontakt (1)
- Sprachpolitik (1)
- Sprachrhythmus (1)
- Sterndynamik (1)
- Stochastic Simulation (1)
- Stochastic differential equations (1)
- Stochastischer Algorithmus (1)
- Strahlung Mechanismen (1)
- Stärke (1)
- Subsurface Biosphere (1)
- Suchverfahren (1)
- Suigetsu (1)
- Sun: coronal mass ejections (CMEs) (1)
- Systems of Systems (1)
- Talitrids (1)
- Teaching problem solving strategies (1)
- Tele-Lab (1)
- Tele-Teaching (1)
- Temperaturfeld (1)
- Test-getriebene Fehlernavigation (1)
- Theorembeweisen (1)
- Thermal-conductivity (1)
- Tien Shan (1)
- Toeplitz operators (1)
- Tonsprache (1)
- Topic (1)
- Topik (1)
- Trajectories (1)
- Transformational Leadership (1)
- Transitionmetals (1)
- Transkriptionsfaktoren (1)
- Transkriptom Sequenzierung (1)
- Transkriptomanalyse (1)
- Transmutation (1)
- Tritium Assay (1)
- Tritium Versuchsanordnung (1)
- Tropen (1)
- Two-level interacting process (1)
- Unbegrenzter Zustandsraum (1)
- Unifikation (1)
- Unruh effect (1)
- Unruh-Effekt (1)
- Unsicherheiten (1)
- Untergrunduntersuchung der Biosphäre (1)
- Variationsstabilität (1)
- Verification (1)
- Verteiltes Arbeiten (1)
- Videoanalyse (1)
- Videometadaten (1)
- Vietnamese (1)
- Vietnamesen (1)
- Vortrag (1)
- Vulnerabilität (1)
- Waldbewirtschaftung (1)
- Web applications (1)
- Web of Data (1)
- Well-log analysis (1)
- Wärmeleitfähigkeit (1)
- X-ray astronomy (1)
- Zeitgenossen (1)
- Zellmembranen (1)
- Zellproliferation (1)
- Zelltyp-spezifisch (1)
- Zellulose (1)
- Zellulärmaterialien (1)
- Zinc (1)
- Zuckertransporter (1)
- absorption (1)
- academic entrepreneurship (1)
- academic self-concept (1)
- actuating materials (1)
- adamantane (1)
- adaptation (1)
- adolescence (1)
- affiliation networks (1)
- aftershock (1)
- age (1)
- age differences (1)
- aggressive cognitions (1)
- air-water-interface (1)
- alignment (1)
- alkynol cycloisomerization (1)
- amorphous polymers (1)
- amp (1)
- anaphora (1)
- animal personalities (1)
- anionic polymerizations (1)
- anisotropic Kuwahara filter (1)
- anomalies (1)
- anomalous Brownian motion (1)
- anoxia (1)
- answer set programming (1)
- antibiotic resistance (1)
- antibodies (1)
- antimicrobial peptides (1)
- apoptosis (1)
- arbuskuläre Mykorrhiza-Symbiose (1)
- arbuskuläre Mykorrhizasymbiose (1)
- architectured materials (1)
- arenediazonium salts (1)
- arktische Tundra (1)
- artificial language learning (1)
- arylboronic acids (1)
- aryldiazonium salts (1)
- aspect-ratio (1)
- assemblies (1)
- associative networks (1)
- astroparticle physics (1)
- atropisomerism (1)
- attentional control (1)
- automated planning (1)
- back-in-time (1)
- bank regulation (1)
- behavior (1)
- behavioral specification (1)
- behavioural adaptations (1)
- bilingual processing (1)
- binding (1)
- biochromophores (1)
- biodiversity conservation (1)
- biomass-derived carbons (1)
- biopolymers (1)
- biorelevant (1)
- birth-death-mutation-competition point process (1)
- black hole physics (1)
- block-copolymers (1)
- boldness (1)
- boronic acid (1)
- boundary value problem (1)
- bracketing paradox (1)
- brownification (1)
- building-blocks (1)
- burrow system (1)
- candidates (1)
- canonical Marcus integration (1)
- capacity (1)
- carbohydrate-based oxepines (1)
- carbon flow (1)
- carbonyl-compounds (1)
- carbothermal (1)
- carbothermisch (1)
- cardiovascular magnetic resonance (1)
- case ambiguity (1)
- catalysis (1)
- catalyzed cross metathesis (1)
- catalyzed redox isomerization (1)
- catchment classification (1)
- cationic surfactants (1)
- cell tracking (1)
- cell type-specific (1)
- cellular materials (1)
- chain azobenzene polymers (1)
- chemical-synthesis (1)
- chemoattractant (1)
- chemotaxis (1)
- child language (1)
- childhood abuse (1)
- chiral recognition (1)
- chiral switches (1)
- chirale Erkennung (1)
- chirale Schalter (1)
- chlorogenic acid (1)
- chronic and acute inflammation (1)
- chronisch-entzündliche Darmerkrankungen (1)
- circuits (1)
- classifier (1)
- climate (1)
- cloud computing (1)
- clustering (1)
- clusters (1)
- clusters of galaxies (1)
- cobalt (1)
- codon usage (1)
- coffee proteins (1)
- cognitive activation (1)
- coherence-enhancing filtering (1)
- cohesive ends (1)
- collaboration (1)
- commensal (1)
- committee governance (1)
- common vole (1)
- community dynamics (1)
- compact groups (1)
- competence (1)
- complex (1)
- complexes (1)
- compounds (1)
- computational thinking (1)
- computer science (1)
- computing science education (1)
- concept of algorithm (1)
- conflict management (1)
- conjugated polyelectrolyte (1)
- constructionism (1)
- context (1)
- context awareness (1)
- continuous-flow (1)
- contrast (1)
- cooperation (1)
- coordination polymer (1)
- copper(II) halide salts (1)
- coreference (1)
- coronal mass ejections (CMEs) (1)
- coronary angiography (1)
- coronary artery disease (1)
- corrective saccades (1)
- cosmic-ray (1)
- cost-effectiveness (1)
- coupling methods (1)
- course timetabling (1)
- covalent organic framework (1)
- cross-cultural differences (1)
- crystals (1)
- cscw (1)
- cultivation (1)
- curvature (1)
- cyclic-gmp (1)
- cytokines (1)
- cytoskeleton (1)
- dark matter (1)
- data integration (1)
- data-storage (1)
- de-novo synthesis (1)
- debugging (1)
- decay dynamics (1)
- decentralization (1)
- decomposition (1)
- degree of givenness (1)
- delivery (1)
- dependency discovery (1)
- derivational morphology (1)
- design thinking (1)
- deutsche Partizipien (1)
- development economics (1)
- dictyostelium-discoideum (1)
- diels-alder reaction (1)
- dietary factors (1)
- difference of Gaussians (1)
- digital whiteboard (1)
- diglossia (1)
- dilute aqueous-solutions (1)
- discourse referent (1)
- discourse-givenness (1)
- disordered media (1)
- dispersal (1)
- displacement (1)
- diversity (1)
- doping (1)
- doubling (1)
- downstep (1)
- drug tolerance (1)
- duality formula (1)
- dunkle Materie (1)
- dye (1)
- dyes (1)
- dynamic HPLC (1)
- dynamic NMR (1)
- e-government (1)
- earthquake (1)
- eco-hydrological modelling (1)
- ecological modelling (1)
- educational timetabling (1)
- effective discourse (1)
- efficient (1)
- electron dynamics (1)
- electron-spin resonance (1)
- elementary particles (1)
- empathy (1)
- ena/vasp proteins (1)
- energy (1)
- energy density (1)
- engaged computing (1)
- english past tense (1)
- entity alignment (1)
- entrepreneurial motivation (1)
- entrepreneurial scientists (1)
- entrepreneurial types (1)
- entrepreneurship (1)
- epidemiology (1)
- equivalence (1)
- erosion (1)
- ether methacrylates (1)
- event documentation (1)
- evolution (1)
- evolutionary economics (1)
- exchange (1)
- extension (1)
- faecal corticosterone metabolites (1)
- fall risk assessment (1)
- fehlende Daten (1)
- filaments (1)
- financial markets (1)
- firm behaviour (1)
- first language acquisition (1)
- fixed point formula (1)
- flagellar filaments (1)
- flexibility (1)
- flood events (1)
- flow-based bilateral filter (1)
- fluctuation dissipation theorem (1)
- fluorescence (1)
- fluorescence imaging (1)
- focus (1)
- focus particle (1)
- foliated diffusion (1)
- forecast (1)
- foregrounding (1)
- forest management (1)
- forestry (1)
- formal cognitive models (1)
- formale kognitive Modelle (1)
- fractional dynamics (1)
- fragmentation (1)
- frame compliance (1)
- fulgides (1)
- functional annotation (1)
- galactic structure (1)
- galaxy structure (1)
- gas (1)
- gas sorption (1)
- gases (1)
- gender (1)
- gene family (1)
- gene ontology (1)
- general learning model (1)
- general secondary education (1)
- generalized Bruck-Reilly ∗-extension (1)
- generalized Langevin equation (1)
- genetic variation (1)
- genetic vectors (1)
- geomagnetic field (1)
- geomagnetic storm (1)
- gesture (1)
- glacier forefield (1)
- glass-transition temperature (1)
- globular clusters (1)
- glycogen (1)
- glycopeptoid (1)
- goblet cells (1)
- good governance (1)
- government-formation (1)
- grammatical judgments (1)
- grammaticalization (1)
- graph clustering (1)
- green chemistry (1)
- grey literature (1)
- growth strategy (1)
- grüne Chemie (1)
- gut length (1)
- gut microbiota (1)
- hard core interaction (1)
- hazard assessments (1)
- helping (1)
- heteroatom (1)
- heterogeneous catalysis (1)
- high energy astrophysics (1)
- high energy physics (1)
- hochenergetische Astrophysik (1)
- holographic diffraction gratings (1)
- host galaxies (1)
- human-computer interaction (1)
- hydrogel (1)
- hydrogels (1)
- hydrogen storage (1)
- hydrological flow paths (1)
- hydrologische Fließpfade (1)
- hydrology (1)
- hydrothermal carbonization (1)
- hypervelocity stars (1)
- immune response (1)
- impact (1)
- impairment (1)
- implementation (1)
- in-memory technology (1)
- in-situ (1)
- inclusion dependency (1)
- index (1)
- indigene Völker (1)
- indigenous peoples (1)
- individual based modeling (1)
- infants (1)
- infection (1)
- infection pathway (1)
- infinite-dimensional diffusion (1)
- inflammatory bowel disease (1)
- inflectional morphology (1)
- informatics curricula (1)
- informatics education (1)
- informatics in upper secondary education (1)
- information flow (1)
- information status (1)
- informed consent (1)
- inhibition (1)
- inhomogeneous-media (1)
- innovation (1)
- innovation systems (1)
- input accuracy (1)
- instabilities (1)
- instruction (1)
- integral field spectroscopy (1)
- interaction (1)
- interactive simulation (1)
- interface (1)
- international study (1)
- interspecific interactions (1)
- intracellular-transport (1)
- intramolecular charge-transfer (1)
- intrinsic microporosity (1)
- invariant (1)
- ionic liquids (1)
- ionothermal synthesis (1)
- kindliche Sprachverarbeitung (1)
- kosmische Neutronenstrahlung (1)
- körperliche Bewegung (1)
- land-use change (1)
- landmark visibility (1)
- landscape effects (1)
- landscape genetics (1)
- landscape hydrology (1)
- langevin equation (1)
- language contact (1)
- language policy (1)
- laser pulse control (1)
- late bilinguals (1)
- lattice dynamics (1)
- learning (1)
- lesson (1)
- lexicon (1)
- life history (1)
- light (1)
- lineare spektrale Entmischung (1)
- linguistic (1)
- link discovery (1)
- lipid biomarkers (1)
- lipids (1)
- liquid-crystal precursors (1)
- lithosphere (1)
- living cells (1)
- local time (1)
- logic programming (1)
- long distance movement (1)
- low flow (1)
- lyssavirus (1)
- mRNA structure (1)
- machine learning (1)
- magnetic fields (1)
- magnetic-properties (1)
- magnetosphere (1)
- mandatory computer science foundations (1)
- map/reduce (1)
- maschinelles Lernen (1)
- masked priming (1)
- mass media (1)
- matching of asymptotic expansions (1)
- media violence (1)
- membranes (1)
- memory effects (1)
- metabolism (1)
- metal nanoparticles (1)
- metal-organic framework (1)
- methane (1)
- methods: numerical (1)
- microbiology (1)
- microbiota (1)
- microsaccades (1)
- migrant integration (1)
- mikrobielle Gemeinschaften (1)
- mineralization beneath (1)
- minorities (1)
- misconceptions (1)
- missing data (1)
- mixed problems (1)
- mixed-matrix membranes (1)
- mixture models (1)
- mixtures (1)
- mobile (1)
- mobile devices (1)
- mobile links (1)
- model (1)
- model-based prototyping (1)
- modelling (1)
- modelling error sources (1)
- modified primers (1)
- modulation (1)
- molecular doping (1)
- molecular methods (1)
- molecular oxygen (1)
- molecular-dynamics (1)
- molecular-reorientation (1)
- molecular-structure (1)
- molecules (1)
- molekulares Dotieren (1)
- monitoring (1)
- monolayer (1)
- morphological priming (1)
- morphological processing (1)
- morphology processing (1)
- motility (1)
- mucus (1)
- multicore architectures (1)
- multiscale analysis (1)
- multivariate Statistik (1)
- multivariate statistics (1)
- musical rhythm (1)
- n-heterocyclic carbenes (1)
- n-isopropylacrylamide (1)
- nanoparticles (1)
- nanostructure (1)
- naphthalenophanes (1)
- natural hazards (1)
- natural language generation (1)
- neotectonics (1)
- nest predation (1)
- network analysis (1)
- neutron field (1)
- nicht-Markovsche Dynamik (1)
- non-Markov drift (1)
- non-Markovian dynamics (1)
- non-coercive boundary conditions (1)
- non-linear integro-differential equations (1)
- non-photorealistic rendering (1)
- nonword repetition (1)
- normal reflection (1)
- nucleation (1)
- o bond formation (1)
- offenes Quantensystem (1)
- olefin-metathesis (1)
- oligo(ethylene glycol) methacrylate (1)
- opal (1)
- open quantum system (1)
- openHPI (1)
- optical sensing (1)
- optische Anregung (1)
- organic electronics (1)
- organic semiconductor (1)
- organische Elektronik (1)
- organischer Halbleiter (1)
- organischer Kohlenstoff (1)
- orogenic evolution (1)
- orthophosphates (1)
- oscillations (1)
- osmotic-stress (1)
- oxygen (1)
- p-Laplace operator (1)
- pH (1)
- pace-of-life (1)
- paleo-strain calculation (1)
- palladium catalyst (1)
- parafoveal processing (1)
- parafoveale Verarbeitung (1)
- paramagnetic-resonance (1)
- parental pressure (1)
- past tense (1)
- pathogen (1)
- pedestrian navigation (1)
- peer pressure (1)
- peptide-membrane-interaction (1)
- percentage of body fat (1)
- perception (1)
- perovskite (1)
- phenanthrenes (1)
- phenotypic heterogeneity (1)
- phenotypic plasticity (1)
- phonetics (1)
- phonotactic probability (1)
- phonotactics (1)
- photo-dehydro-Diels-Alder reaction (1)
- photochemical synthesis (1)
- photoexcitation (1)
- photoinduced nonadiabatic dynamics (1)
- photonic crystal (1)
- physical activity (1)
- pitch accent (1)
- planning (1)
- plant communities (1)
- policy (1)
- political economics (1)
- politische Ökonomik (1)
- polyamines (1)
- polycationic monolayer (1)
- polyglycine (1)
- polymerase chain reaction (1)
- polysemy (1)
- polythiohene (1)
- pornography (1)
- postural stability (1)
- pre-lexical processing (1)
- presentation (1)
- pressure (1)
- primary production (1)
- primary school (1)
- process mining (1)
- process model search (1)
- prominence (1)
- prosocial behavior (1)
- prosocial media (1)
- prosodisch (1)
- protein (1)
- protein modification (1)
- protein-kinase inhibitors (1)
- proteome (1)
- proteomics (1)
- proving (1)
- pseudo-differential operators (1)
- pseudomonas putida (1)
- psycholinguistics (1)
- public policy (1)
- qualitative pathway interpretation (1)
- quality assessment framework (1)
- quantitative data (1)
- quantum field theory (1)
- querying (1)
- radiation mechanisms (1)
- radiocarbon (1)
- random-walks (1)
- rapid prototyping (1)
- ratchet transport (1)
- reaction time methods (1)
- reading (1)
- reciprocal processes (1)
- recombinant Escherichia coli (1)
- reconfigurable matter (1)
- reduction (1)
- reference database (1)
- reference groups (1)
- reference proteomes (1)
- regional development (1)
- regional identity (1)
- regionale Hydrologie (1)
- regionale Identität (1)
- regression analysis (1)
- regular monoid (1)
- regulatory environment (1)
- relevance (1)
- remote collaboration (1)
- requirements engineering (1)
- research ethics (1)
- research projects (1)
- resistance training (1)
- resonance fluorescence (1)
- resources (1)
- responsive (1)
- responsive polymer (1)
- responsive polymers (1)
- restriction enzymes (1)
- resultative sentences (1)
- reversible measure (1)
- rhizosphere (1)
- ring-closing metathesis (1)
- ring-closure (1)
- root functions (1)
- rotation curves (1)
- ruthenium carbene (1)
- räumliche Kalibrierung (1)
- saccadic error (1)
- scale (1)
- schema discovery (1)
- school (1)
- science (1)
- scintigraphy (1)
- second language (1)
- secondary saccades (1)
- seismic interferometry (1)
- seismic tomography (1)
- seismic velocities (1)
- selection (1)
- selectivity (1)
- semantic analysis (1)
- semantic change (1)
- semantics (1)
- semiempirical methods (1)
- seniors (1)
- sensitivity (1)
- sentence repetition (1)
- separation (1)
- service-oriented systems (1)
- sets (1)
- sexual aggression (1)
- sexual scripts (1)
- shallow structure hypothesis (1)
- shape (1)
- short chain fatty acids (1)
- shrews (1)
- silver nanowires (1)
- similarity (1)
- single-particle tracking (1)
- situated context (1)
- situated learning (1)
- small mammals (1)
- small parameter (1)
- social behavior (1)
- social comparison (1)
- social media analytics (1)
- social networking (1)
- soil (1)
- soil constituents mapping (1)
- soil moisture (1)
- soil organic carbon (1)
- sol-gel (1)
- solvatochromic fluorophore (1)
- solvent (1)
- somatotype (1)
- sorting (1)
- spacetime geometry (1)
- spatial calibration (1)
- speaking children (1)
- species coexistence (1)
- spectral exponent (1)
- spectral unmixing (1)
- spectro-directional (1)
- spectroscopy (1)
- speech (1)
- speech rhythm (1)
- speech segmentation (1)
- spektro-direktional (1)
- spreadsheets (1)
- square planar (1)
- stable isotope tracing (1)
- starch (1)
- starch synthases (1)
- state (1)
- states (1)
- statistical model selection (1)
- stellar dynamics (1)
- step process (1)
- stereoselective-synthesis (1)
- stochastic (1)
- stochastic Petri nets (1)
- stochastic algorithms (1)
- stochastic processes (1)
- stochastische Petri Netze (1)
- stress (1)
- stress response (1)
- subjectification (1)
- substituted stilbenes (1)
- sucrose (1)
- sugar transporter (1)
- sulfoxides (1)
- supercapacitor (1)
- surface (1)
- surface heat flow (1)
- surface-plasmon resonance (1)
- surveillance (1)
- survey (1)
- sustainable development (1)
- swelling (1)
- syntactic processing (1)
- syntaktische Ambiguität (1)
- system (1)
- systems biology (1)
- systems of systems (1)
- teacher (1)
- teacher education (1)
- teacher training (1)
- teaching material (1)
- tectonics (1)
- tele-TASK (1)
- temperature (1)
- temperature field analysis (1)
- temperature phase (1)
- test items (1)
- test-driven fault navigation (1)
- testing (1)
- tetrachlorocuprate(II) salts (1)
- tf-idf (1)
- the English progressive construction (1)
- theorem (1)
- thermal model (1)
- thermisches Modell (1)
- thermochemistry (1)
- thermochronology (1)
- thermometer (1)
- thermoresponsiv (1)
- thermoresponsive (1)
- thermoresponsive polymers (1)
- thermosensitive (1)
- thin-films (1)
- time duality (1)
- time symmetry (1)
- to-coil transition (1)
- tone language (1)
- touch input (1)
- tracking (1)
- trans-stilbenes (1)
- transaction costs (1)
- transcript level (1)
- transcription factors (1)
- transcriptome analysis (1)
- transcriptome sequencing (1)
- transduction (1)
- transformation (1)
- transition metal (1)
- transition path theory (1)
- translation (1)
- triangular-[4] phenylene (1)
- tropics (1)
- ultracontractivity (1)
- ultrafast X-ray diffraction (1)
- ultraschnelle Röntgendiffraktion (1)
- uncertainties (1)
- university spin-offs (1)
- usability testing (1)
- user-centred design (1)
- variational stability (1)
- verb classes (1)
- verification (1)
- video analysis (1)
- video metadata (1)
- violent media (1)
- viscoelasticity (1)
- voles (1)
- vulnerability (1)
- weighted spaces (1)
- whey proteins (1)
- word sense disambiguation (1)
- words (1)
- working memory capacity (1)
- zeolitic imidazolate frameworks (1)
- Ähnlichkeit (1)
- Ähnlichkeitsmaße (1)
- Ähnlichkeitssuche (1)
- Ökologie (1)
- Ökotoxikologie (1)
- Übergangsmetall (1)
- Übergangsmetalle (1)
- ökohydrologische Modellierung (1)
- ökologische Modellierung (1)
- π-inverse monoid (1)
Institute
- Institut für Chemie (29)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (26)
- Extern (24)
- Institut für Geowissenschaften (22)
- Wirtschaftswissenschaften (20)
- Institut für Mathematik (19)
- Institut für Umweltwissenschaften und Geographie (19)
- Institut für Biochemie und Biologie (15)
- Institut für Physik und Astronomie (15)
- Mathematisch-Naturwissenschaftliche Fakultät (14)
.NET Gadgeteer Workshop
(2013)
The study of outcrop modeling is located at the interface between two fields of expertise, Sedimentology and Computing Geoscience, which respectively investigate and simulate the geological heterogeneity observed in the sedimentary record. Over the past years, modeling tools and techniques have been constantly improved. In parallel, the study of Phanerozoic carbonate deposits has emphasized the common occurrence of a random facies distribution along a single depositional domain. Although both fields of expertise are intrinsically linked during outcrop simulation, their respective advances have not been combined in the literature to enhance carbonate modeling studies. The present study re-examines the modeling strategy adapted to the simulation of shallow-water carbonate systems, based on a close relationship between field sedimentology and modeling capabilities. In the present study, three commonly used algorithms, Truncated Gaussian Simulation (TGSim), Sequential Indicator Simulation (SISim), and Indicator Kriging (IK), were evaluated for the first time using visual and quantitative comparisons on an ideally suited carbonate outcrop. The results show that the heterogeneity of carbonate rocks cannot be fully simulated using one single algorithm. The operating mode of each algorithm involves capabilities as well as drawbacks that cannot match all field observations carried out across the modeling area. Two end members in the spectrum of carbonate depositional settings, a low-angle Jurassic ramp (High Atlas, Morocco) and a Triassic isolated platform (Dolomites, Italy), were investigated to obtain a complete overview of the geological heterogeneity in shallow-water carbonate systems. Field sedimentology and statistical analysis performed on the type, morphology, distribution, and association of carbonate bodies, combined with palaeodepositional reconstructions, emphasize similar results.
At the basin scale (x 1 km), facies associations, composed of facies recording similar depositional conditions, display linear and ordered transitions between depositional domains. In contrast, at the bedding scale (x 0.1 km), individual lithofacies types show a mosaic-like distribution consisting of an arrangement of spatially independent lithofacies bodies along the depositional profile. The increase of spatial disorder from the basin to the bedding scale results from the influence of autocyclic factors on the transport and deposition of carbonate sediments. Scale-dependent types of carbonate heterogeneity are linked with the evaluation of algorithms in order to establish a modeling strategy that considers both the sedimentary characteristics of the outcrop and the modeling capabilities. A surface-based modeling approach was used to model depositional sequences. Facies associations were populated using TGSim to preserve ordered trends between depositional domains. At the lithofacies scale, a fully stochastic approach with SISim was applied to simulate a mosaic-like lithofacies distribution. This new workflow is designed to improve the simulation of carbonate rocks by modeling each scale of heterogeneity individually. In contrast to simulation methods applied in the literature, the present study considers that the use of one single simulation technique is unlikely to correctly model the natural patterns and variability of carbonate rocks. The implementation of different techniques customized for each level of the stratigraphic hierarchy provides the essential computing flexibility to model carbonate systems. Closer feedback between advances in the fields of Sedimentology and Computing Geoscience should be promoted in future outcrop simulations to enhance 3-D geological models.
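The mosaic-like stochastic behaviour at the lithofacies scale can be pictured with a toy one-dimensional sequential indicator simulation. The following Python sketch is an illustration only, not the TGSim/SISim implementations used in the study: it visits grid cells along a random path and replaces full indicator kriging with simple inverse-distance weighting of already-simulated neighbours.

```python
import random

def sequential_indicator_sim(n_cells=50, facies=(0, 1), radius=5, seed=0):
    """Toy 1-D sequential indicator simulation.

    Cells are visited along a random path; at each cell the facies
    probability is estimated from already-simulated neighbours within
    a search radius (inverse-distance weighting stands in for full
    indicator kriging), then a facies value is drawn from it.
    """
    rng = random.Random(seed)
    grid = [None] * n_cells
    path = list(range(n_cells))
    rng.shuffle(path)
    for cell in path:
        weights = {f: 0.0 for f in facies}
        lo, hi = max(0, cell - radius), min(n_cells, cell + radius + 1)
        for other in range(lo, hi):
            if other != cell and grid[other] is not None:
                weights[grid[other]] += 1.0 / abs(other - cell)
        total = sum(weights.values())
        if total == 0.0:
            # no informed neighbours yet: draw from a uniform prior
            grid[cell] = rng.choice(facies)
        else:
            # draw a facies proportional to the neighbour-derived weights
            r = rng.random() * total
            acc = 0.0
            for f in facies:
                acc += weights[f]
                if r <= acc:
                    grid[cell] = f
                    break
    return grid
```

Because every simulated cell conditions the cells visited after it, short-range clusters emerge while the large-scale arrangement stays random, mimicking a mosaic-like lithofacies distribution.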
3D from 2D touch
(2013)
While interaction with computers used to be dominated by mice and keyboards, new types of sensors now allow users to interact through touch, speech, or using their whole body in 3D space. These new interaction modalities are often referred to as "natural user interfaces" or "NUIs." While 2D NUIs have experienced major success on billions of mobile touch devices sold, 3D NUI systems have so far been unable to deliver a mobile form factor, mainly due to their use of cameras. The fact that cameras require a certain distance from the capture volume has prevented 3D NUI systems from reaching the flat form factor mobile users expect. In this dissertation, we address this issue by sensing 3D input using flat 2D sensors. The systems we present observe the input from 3D objects as 2D imprints upon physical contact. By sampling these imprints at very high resolutions, we obtain the objects' textures. In some cases, a texture uniquely identifies a biometric feature, such as the user's fingerprint. In other cases, an imprint stems from the user's clothing, such as when walking on multitouch floors. By analyzing from which part of the 3D object the 2D imprint results, we reconstruct the object's pose in 3D space. While our main contribution is a general approach to sensing 3D input on 2D sensors upon physical contact, we also demonstrate three applications of our approach. (1) We present high-accuracy touch devices that allow users to reliably touch targets that are a third of the size of those on current touch devices. We show that different users and 3D finger poses systematically affect touch sensing, which current devices perceive as random input noise. We introduce a model for touch that compensates for this systematic effect by deriving the 3D finger pose and the user's identity from each touch imprint. We then investigate this systematic effect in detail and explore how users conceptually touch targets. 
Our findings indicate that users aim by aligning visual features of their fingers with the target. We present a visual model for touch input that eliminates virtually all systematic effects on touch accuracy. (2) From each touch, we identify users biometrically by analyzing their fingerprints. Our prototype Fiberio integrates fingerprint scanning and a display into the same flat surface, solving a long-standing problem in human-computer interaction: secure authentication on touchscreens. Sensing 3D input and authenticating users upon touch allows Fiberio to implement a variety of applications that traditionally require the bulky setups of current 3D NUI systems. (3) To demonstrate the versatility of 3D reconstruction on larger touch surfaces, we present a high-resolution pressure-sensitive floor that resolves the texture of objects upon touch. Using the same principles as before, our system GravitySpace analyzes all imprints and identifies users based on their shoe soles, detects furniture, and enables accurate touch input using feet. By classifying all imprints, GravitySpace detects the users' body parts that are in contact with the floor and then reconstructs their 3D body poses using inverse kinematics. GravitySpace thus enables a range of applications for future 3D NUI systems based on a flat sensor, such as smart rooms in future homes. We conclude this dissertation by projecting into the future of mobile devices. Focusing on the mobility aspect of our work, we explore how NUI devices may one day augment users directly in the form of implanted devices.
Background: With increasing age neuromuscular deficits (e.g., sarcopenia) may result in impaired physical performance and an increased risk for falls. Prominent intrinsic fall-risk factors are age-related decreases in balance and strength / power performance as well as cognitive decline. Additional studies are needed to develop specifically tailored exercise programs for older adults that can easily be implemented into clinical practice. Thus, the objective of the present trial is to assess the effects of a fall prevention program that was developed by an interdisciplinary expert panel on measures of balance, strength / power, body composition, cognition, psychosocial well-being, and falls self-efficacy in healthy older adults. Additionally, the time-related effects of detraining are tested.
Methods/Design: Healthy older people (n = 54) between the ages of 65 and 80 years will participate in this trial. The testing protocol comprises tests for the assessment of static / dynamic steady-state balance (i.e., Sharpened Romberg Test, instrumented gait analysis), proactive balance (i.e., Functional Reach Test; Timed Up and Go Test), reactive balance (i.e., perturbation test during bipedal stance; Push and Release Test), strength (i.e., hand grip strength test; Chair Stand Test), and power (i.e., Stair Climb Power Test; countermovement jump). Further, body composition will be analysed using a bioelectrical impedance analysis system. In addition, questionnaires for the assessment of psychosocial (i.e., World Health Organisation Quality of Life Assessment-Bref), cognitive (i.e., Mini Mental State Examination), and fall risk determinants (i.e., Fall Efficacy Scale - International) will be included in the study protocol. Participants will be randomized into two intervention groups or the control / waiting group. After baseline measures, participants in the intervention groups will conduct a 12-week balance and strength / power exercise intervention 3 times per week, with each training session lasting 30 min (actual training time). One intervention group will complete an extensive supervised training program, while the other intervention group will complete a short version ('3 times 3') that is home-based and controlled by weekly phone calls. Post-tests will be conducted right after the intervention period. Additionally, detraining effects will be measured 12 weeks after program cessation. The control / waiting group will not participate in any specific intervention during the experimental period, but will receive the extensive supervised program after the experimental period.
Discussion: It is expected that particularly the supervised combination of balance and strength / power training will improve performance in variables of balance, strength / power, body composition, cognitive function, psychosocial well-being, and falls self-efficacy of older adults. In addition, information regarding fall risk assessment, dose-response-relations, detraining effects, and supervision of training will be provided. Further, training-induced health-relevant changes, such as improved performance in activities of daily living, cognitive function, and quality of life, as well as a reduced risk for falls may help to lower costs in the health care system. Finally, practitioners, therapists, and instructors will be provided with a scientifically evaluated feasible, safe, and easy-to-administer exercise program for fall prevention.
A comparison of current trends within computer science teaching in school in Germany and the UK
(2013)
In the last two years, CS as a school subject has gained a lot of attention worldwide, although different countries have differing approaches to and experiences of introducing CS in schools. This paper reports on a study comparing current trends in CS at school, with a major focus on two countries, Germany and the UK. A survey of a number of teaching professionals and experts from the UK and Germany was carried out with regard to the content and delivery of CS in school. An analysis of the quantitative data reveals a difference in foci in the two countries; putting this into the context of curricular developments, we are able to offer interpretations of these trends and suggest ways in which curricula in CS at school should be moving forward.
In a recent paper with N. Tarkhanov, the Lefschetz number for endomorphisms (modulo trace class operators) of sequences of trace class curvature was introduced. We show that this is a well defined, canonical extension of the classical Lefschetz number and establish the homotopy invariance of this number. Moreover, we apply the results to show that the Lefschetz fixed point formula holds for geometric quasiendomorphisms of elliptic quasicomplexes.
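For orientation, the classical Lefschetz number that this quantity extends is, for a continuous self-map f of a compact polyhedron X, the alternating sum of traces on rational homology:

```latex
L(f) \;=\; \sum_{k \ge 0} (-1)^k \,\operatorname{tr}\bigl(f_* \colon H_k(X;\mathbb{Q}) \to H_k(X;\mathbb{Q})\bigr)
```

The classical fixed point theorem states that L(f) ≠ 0 forces f to have a fixed point; the result summarized above establishes an analogue of this formula for geometric quasiendomorphisms of elliptic quasicomplexes.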
Informatics as a school subject has been virtually absent from bilingual education programs in German secondary schools. Most bilingual programs in German secondary education started out by focusing on subjects from the field of social sciences. Teachers and bilingual curriculum experts alike have been regarding those as the most suitable subjects for bilingual instruction – largely due to the intercultural perspective that a bilingual approach provides. And though one cannot deny the gain that ensues from an intercultural perspective on subjects such as history or geography, this benefit is certainly not limited to social science subjects. In consequence, bilingual curriculum designers have already begun to include other subjects such as physics or chemistry in bilingual school programs. It only seems a small step to extend this to informatics. This paper will start out by addressing potential benefits of adding informatics to the range of subjects taught as part of English-language bilingual programs in German secondary education. In a second step it will sketch out a methodological (= didactical) model for teaching informatics to German learners through English. It will then provide two items of hands-on and tested teaching material in accordance with this model. The discussion will conclude with a brief outlook on the chances and prerequisites of firmly establishing informatics as part of bilingual school curricula in Germany.
In soils and sediments there is a strong coupling between local biogeochemical processes and the distribution of water, electron acceptors, acids and nutrients. Both sides are closely related and affect each other from small to larger scales. Soil structures such as aggregates, roots, layers or macropores enhance the patchiness of these distributions. At the same time it is difficult to access the spatial distribution and temporal dynamics of these parameters. Non-invasive imaging techniques with high spatial and temporal resolution overcome these limitations, and new non-invasive techniques are needed to study the dynamic interaction of plant roots with the surrounding soil, but also the complex physical and chemical processes in structured soils. In this study we developed an efficient non-destructive in-situ method to determine biogeochemical parameters relevant to plant roots growing in soil. This is a quantitative fluorescence imaging method suitable for visualizing the spatial and temporal pH changes around roots. We adapted the fluorescence imaging set-up and coupled it with neutron radiography to study simultaneously root growth, oxygen depletion by respiration activity and root water uptake. The combined set-up was subsequently applied to a structured soil system to map the patchy structure of oxic and anoxic zones induced by a chemical oxygen consumption reaction for spatially varying water contents. Moreover, results from a similar fluorescence imaging technique for nitrate detection were complemented by a numerical modeling study in which we used the imaging data to simulate biodegradation under anaerobic, nitrate-reducing conditions.
A polymer analogous reaction for the formation of imidazolium and NHC based porous polymer networks
(2013)
A polymer analogous reaction was carried out to generate a porous polymeric network with N-heterocyclic carbenes (NHC) in the polymer backbone. Using a stepwise approach, first a polyimine network is formed by polymerization of the tetrafunctional amine tetrakis(4-aminophenyl)methane. This polyimine network is converted in the second step into polyimidazolium chloride and finally to a polyNHC network. Furthermore a porous Cu(II)-coordinated polyNHC network can be generated. Supercritical drying generates polymer networks with high permanent surface areas and porosities which can be applied for different catalytic reactions. The catalytic properties were demonstrated for example in the activation of CO2 or in the deoxygenation of sulfoxides to the corresponding sulfides.
Transport molecules play a crucial role in cell viability. Amongst others, linear motors transport cargos along rope-like structures from one location of the cell to another in a stochastic fashion. Each step of the motor, either forwards or backwards, bridges a fixed distance. While moving along the rope the motor can also detach and is lost. Here we give a mathematical formalization of such dynamics as a random process that extends random walks by an absorbing state, which models the detachment of the motor from the rope. We derive particular properties of such processes that have not been available before. Our results include a description of the maximal distance reached from the starting point and of the position from which detachment takes place. Finally, we apply our theoretical results to a concrete established model of the transport molecule Kinesin V.
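The process described above, a random walk with fixed step size and an absorbing "detached" state, can be sketched numerically. The Python snippet below is a Monte Carlo illustration of the model, not the paper's analysis; the step and detachment probabilities are assumed values for demonstration.

```python
import random

def simulate_motor(p_fwd=0.6, p_bwd=0.3, p_detach=0.1, seed=0):
    """Simulate one motor trajectory as a random walk with absorption.

    At each step the motor moves +1 with probability p_fwd, -1 with
    probability p_bwd, or detaches (absorbing state) with probability
    p_detach. Returns the maximal distance reached from the start and
    the position at which detachment occurred.
    """
    rng = random.Random(seed)
    position, max_dist = 0, 0
    while True:
        u = rng.random()
        if u < p_detach:
            return max_dist, position  # absorbed: motor leaves the rope
        elif u < p_detach + p_fwd:
            position += 1
        else:
            position -= 1
        max_dist = max(max_dist, position)

# Monte Carlo estimates of the two quantities studied in the paper
runs = [simulate_motor(seed=s) for s in range(2000)]
mean_max_distance = sum(m for m, _ in runs) / len(runs)
mean_detach_position = sum(d for _, d in runs) / len(runs)
```

Averaging over many trajectories approximates the expected maximal excursion and the expected detachment position, the very quantities the paper characterizes analytically.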
A water soluble fluorescent polymer as a dual colour sensor for temperature and a specific protein
(2013)
We present two thermoresponsive water soluble copolymers prepared via free radical statistical copolymerization of N-isopropylacrylamide (NIPAm) and of oligo(ethylene glycol) methacrylates (OEGMAs), respectively, with a solvatochromic 7-(diethylamino)-3-carboxy-coumarin (DEAC)- functionalized monomer. In aqueous solutions, the NIPAm-based copolymer exhibits characteristic changes in its fluorescence profile in response to a change in solution temperature as well as to the presence of a specific protein, namely an anti-DEAC antibody. This polymer emits only weakly at low temperatures, but exhibits a marked fluorescence enhancement accompanied by a change in its emission colour when heated above its cloud point. Such drastic changes in the fluorescence and absorbance spectra are observed also upon injection of the anti-DEAC antibody, attributed to the specific binding of the antibody to DEAC moieties. Importantly, protein binding occurs exclusively when the polymer is in the well hydrated state below the cloud point, enabling a temperature control on the molecular recognition event. On the other hand, heating of the polymer–antibody complexes releases a fraction of the bound antibody. In the presence of the DEAC-functionalized monomer in this mixture, the released antibody competitively binds to the monomer and the antibody-free chains of the polymer undergo a more effective collapse and inter-aggregation. In contrast, the emission properties of the OEGMA-based analogous copolymer are rather insensitive to the thermally induced phase transition or to antibody binding. These opposite behaviours underline the need for a carefully tailored molecular design of responsive polymers aimed at specific applications, such as biosensing.
Academic entrepreneurship
(2013)
Research on entrepreneurial motivation of university scientists is often conducted with quantitative methods without taking context-related influences into account. According to different studies, entrepreneurial scientists found a spin-off company due to motives like independence, market opportunity, money or the risk of unemployment (short-term contracts). To give a comprehensive explanation, it is important to use a qualitative research view that considers the academic rank, norms and values of university scientists. The author spoke with 35 natural scientists and asked professors and research fellows about their entrepreneurial motivation. The results of this study are used to develop a typology of entrepreneurial and non-entrepreneurial scientists within German universities. This paper presents the key findings of the study (Sass 2011).
As Albania is accelerating its preparations towards European Union candidate status, numerous areas of public policy and practice are undergoing intensive development processes. Regional development policy is a very new area of public policy in Albania and needs research and development. This study focuses on the process of sustainable development in Albania by analyzing and comparing the regional development of the regions of Tirana, Shkodra and Kukes. The methodology used consists of a literature/desk review; an analytical and comparative approach; qualitative interviews; quantitative data collection; and analysis. The research is organized in five chapters. The first chapter provides an overview of the study framework. The second outlines the theory and scientific framework for sustainable and regional development in relation to geography. The third chapter presents the picture of regional development in Albania, analyzing the disparities and regional development in the light of EU requirements and the NUTS division. Chapter 4 continues by analyzing and comparing the regional development of the regions: Tirana – driver for change, Shkodra – the North in development, and Kukes – the "shrinking" region. Chapter 5 presents the conclusions and recommendations. This research concludes that if growth in Albania is to be increased and sustained, a regional development policy needs to be established.
Derivatization of fullerene (C60) with branched aliphatic chains softens C60-based materials and enables the formation of thermotropic liquid crystals and room temperature nonvolatile liquids. This work demonstrates that by carefully tuning parameters such as type, number and substituent position of the branched chains, liquid crystalline C60 materials with mesophase temperatures suited for photovoltaic cell fabrication and room temperature nonvolatile liquid fullerenes with tunable viscosity can be obtained. In particular, compound 1, with branched chains, exhibits a smectic liquid crystalline phase extending from 84 °C to room temperature. Analysis of bulk heterojunction (BHJ) organic solar cells with a ca. 100 nm active layer of compound 1 and poly(3-hexylthiophene) (P3HT) as an electron acceptor and an electron donor, respectively, reveals an improved performance (power conversion efficiency, PCE: 1.6 ± 0.1%) in comparison with another compound, 10 (PCE: 0.5 ± 0.1%). The latter, in contrast to 1, carries linear aliphatic chains and thus forms a highly ordered solid lamellar phase at room temperature. The solar cell performance of 1 blended with P3HT approaches that of PCBM/P3HT for the same active layer thickness. This indicates that C60 derivatives bearing branched tails are a promising class of electron acceptors in soft (flexible) photovoltaic devices.
This study presents results from a cross-modal priming experiment investigating inflected verb forms of German. A group of late learners of German with Russian as their native language (L1) was compared to a control group of German L1 speakers. The experiment showed different priming patterns for the two participant groups. The L1 German data yielded a stem-priming effect for inflected forms involving regular affixation and a partial priming effect for irregular forms irrespective of stem allomorphy. By contrast, the data from the late bilinguals showed reduced priming effects for both regular and irregular forms. We argue that late learners rely more on lexically stored inflected word forms during word recognition and less on morphological parsing than native speakers.
We launched an original large-scale experiment concerning informatics learning in French high schools. We are using the France-IOI platform to federate resources and share observations for research. The first step is the implementation of an adaptive hypermedia based on very fine-grain epistemic modules for Python programming learning. We define the necessary traces to be collected in order to study the navigation trajectories the pupils draw across this hypermedia. It may be browsed by pupils either as a course support or as extra help to solve the list of exercises (mainly for algorithmics discovery). By leaving the locus of control to the learner, we want to observe the different trajectories they finally draw through our system. These trajectories may be abstracted and interpreted as strategies and then compared for their relative efficiency. Our hypothesis is that learners have different profiles and may use the appropriate strategy accordingly. This paper presents the research questions, the method and the expected results.
The habilitation thesis covers theoretical investigations on light-induced processes in molecules. The study is focussed on changes of the molecular electronic structure and geometry, caused either by photoexcitation in the event of a spectroscopic analysis, or by a selective control with shaped laser pulses. The applied and developed methods are predominantly based on quantum chemistry as well as on electron and nuclear quantum dynamics, and in parts on molecular dynamics. The studied scientific problems deal with stereoisomerism and the question of how to either switch or distinguish chiral molecules using laser pulses, and with the essentials for the simulation of the spectroscopic response of biochromophores, in order to unravel their photophysics. The accomplished findings not only explain experimental results and extend existing approaches, but also contribute significantly to the basic understanding of the investigated light-driven molecular processes. The main achievements can be divided in three parts: First, a quantum theory for an enantio- and diastereoselective or, in general, stereoselective laser pulse control was developed and successfully applied to influence the chirality of molecular switches. The proposed axially chiral molecules possess different numbers of "switchable" stable chiral conformations, with one particular switch featuring even a true achiral "off"-state which allows to enantioselectively "turn on" its chirality. Furthermore, surface mounted chiral molecular switches with several well-defined orientations were treated, where a newly devised highly flexible stochastic pulse optimization technique provides high stereoselectivity and efficiency at the same time, even for coupled chirality-changing degrees of freedom. 
Despite the model character of these studies, the proposed types of chiral molecular switches and, all the more, the developed basic concepts are generally applicable to design laser pulse controlled catalysts for asymmetric synthesis, or to achieve selective changes in the chirality of liquid crystals or in chiroptical nanodevices, implementable in information processing or as data storage. Second, laser-driven electron wavepacket dynamics based on ab initio calculations, namely time-dependent configuration interaction, was extended by the explicit inclusion of magnetic field-magnetic dipole interactions for the simulation of the qualitative and quantitative distinction of enantiomers in mass spectrometry by means of circularly polarized ultrashort laser pulses. The developed approach not only allows to explain the origin of the experimentally observed influence of the pulse duration on the detected circular dichroism in the ion yield, but also to predict laser pulse parameters for an optimal distinction of enantiomers by ultrashort shaped laser pulses. Moreover, these investigations in combination with the previous ones provide a fundamental understanding of the relevance of electric and magnetic interactions between linearly or non-linearly polarized laser pulses and (pro-)chiral molecules for either control by enantioselective excitation or distinction by enantiospecific excitation. Third, for selected light-sensitive biological systems of central importance, like e.g. antenna complexes of photosynthesis, simulations of processes which take place during and after photoexcitation of their chromophores were performed, in order to explain experimental (spectroscopic) findings as well as to understand the underlying photophysical and photochemical principles. 
In particular, aspects of normal mode mixing due to geometrical changes upon photoexcitation and their impact on (time-dependent) vibronic and resonance Raman spectra, as well as on intramolecular energy redistribution were addressed. In order to explain unresolved experimental findings, a simulation program for the calculation of vibronic and resonance Raman spectra, accounting for changes in both vibrational frequencies and normal modes, was created based on a time-dependent formalism. In addition, the influence of the biochemical environment on the electronic structure of the chromophores was studied by electrostatic interactions and mechanical embedding using hybrid quantum-classical methods. Environmental effects were found to be of importance, in particular, for the excitonic coupling of chromophores in light-harvesting complex II. Although the simulations for such highly complex systems are still restricted by various approximations, the improved approaches and obtained results have proven to be important contributions for a better understanding of light-induced processes in biosystems which also adds to efforts of their artificial reproduction.
Water management and environmental protection is vulnerable to extreme low flows during streamflow droughts. During the last decades, summer runoff and low flows have decreased in most rivers of Central Europe. Discharge projections agree that a future decrease in runoff is likely for catchments in Brandenburg, Germany. Depending on the first-order controls on low flows, different adaptation measures are expected to be appropriate. Small catchments were analyzed because they are expected to be more vulnerable to a changing climate than larger rivers. They are mainly headwater catchments with smaller groundwater storage. Local characteristics are more important at this scale and can increase vulnerability. This thesis mutually evaluates potential adaptation measures to sustain minimum runoff in small catchments of Brandenburg, Germany, and similarities of these catchments regarding low flows. The following guiding questions are addressed: (i) Which first-order controls on low flows and related time scales exist? (ii) Which are the differences between small catchments regarding low flow vulnerability? (iii) Which adaptation measures to sustain minimum runoff in small catchments of Brandenburg are appropriate considering regional low flow patterns? Potential adaptation measures to sustain minimum runoff during periods of low flows can be classified into three categories: (i) increase of groundwater recharge and subsequent baseflow by land use change, land management and artificial groundwater recharge, (ii) increase of water storage with regulated outflow by reservoirs, lakes and wetland water management and (iii) regional low flow patterns have to be considered during planning of measures with multiple purposes (urban water management, waste water recycling and inter-basin water transfer). The question remained whether water management of areas with shallow groundwater tables can efficiently sustain minimum runoff.
As an example, water management scenarios of a ditch-irrigated area were evaluated using the model Hydrus-2D. Increasing antecedent water levels and stopping ditch irrigation during periods of low flows increased fluxes from the pasture to the stream, but storage was depleted faster during the summer months due to higher evapotranspiration. Fluxes from this approx. 1 km long pasture with an area of approx. 13 ha ranged from 0.3 to 0.7 l/s depending on the scenario. This demonstrates that numerous such small decentralized measures are necessary to sustain minimum runoff in meso-scale catchments. Differences in the low flow risk of catchments and meteorological low flow predictors were analyzed. A principal component analysis was applied to daily discharge of 37 catchments between 1991 and 2006. Flows decreased more in Southeast Brandenburg according to meteorological forcing. Low flow risk was highest in a region east of Berlin because of the intersection of a more continental climate and the specific geohydrology. In these catchments, flows decreased faster during summer and the low flow period was prolonged. A non-linear support vector machine regression was applied to iteratively select meteorological predictors for annual 30-day minimum runoff in 16 catchments between 1965 and 2006. The potential evapotranspiration sum of the previous 48 months was the most important predictor (r²=0.28). The potential evapotranspiration of the previous 3 months and the precipitation of the previous 3 months and the last year increased model performance (r²=0.49, including all four predictors). Model performance was higher for catchments with low yield and more damped runoff. In catchments with high low flow risk, the explanatory power of long-term potential evapotranspiration was high. Catchments with a high low flow risk as well as catchments with a considerable decrease in flows in southeast Brandenburg have the highest demand for adaptation.
Measures increasing groundwater recharge are to be preferred. Catchments with high low flow risk showed relatively deep and decreasing groundwater heads, allowing increased groundwater recharge at recharge areas at higher altitudes away from the streams. Low flows are expected to stay low or decrease even further, because long-term potential evapotranspiration was the most important low flow predictor and is projected to increase under climate change. Differences in low flow risk and runoff dynamics between catchments have to be considered in the management and planning of measures that serve purposes beyond sustaining minimum runoff.
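The predictor-selection idea described above can be illustrated with a much simpler stand-in: ranking candidate meteorological predictors of annual 30-day minimum runoff by the r² of a one-predictor linear regression. This is only a sketch — the thesis itself used a non-linear support vector machine regression, and the predictor names and toy data below are invented for illustration.

```python
# Illustrative sketch (not the thesis code): rank candidate meteorological
# predictors of annual 30-day minimum runoff by the r-squared of a simple
# one-predictor linear fit. All names and data are hypothetical.

def r_squared(x, y):
    """r² of the least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    if sxx == 0 or syy == 0:
        return 0.0
    return (sxy ** 2) / (sxx * syy)

def rank_predictors(candidates, y):
    """Return (name, r²) pairs sorted by explanatory power."""
    scores = [(name, r_squared(x, y)) for name, x in candidates.items()]
    return sorted(scores, key=lambda t: t[1], reverse=True)

# Toy data in which low flow responds mainly to long-term PET.
pet_48m = [100, 110, 120, 130, 140, 150]     # hypothetical 48-month PET sums
precip_3m = [80, 95, 70, 100, 85, 90]        # hypothetical 3-month precipitation
min_flow = [9.1, 8.0, 7.2, 6.1, 5.0, 4.2]    # hypothetical annual minima
ranking = rank_predictors({"PET_48m": pet_48m, "P_3m": precip_3m}, min_flow)
```

A real analysis would of course use the non-linear SVR and iterative selection described in the abstract; the sketch only shows why ranking predictors by explained variance identifies long-term potential evapotranspiration as dominant in such data.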
For the first time, the transcriptional reprogramming of distinct root cortex cells during the arbuscular mycorrhizal (AM) symbiosis was investigated by combining Laser Capture Microdissection and Affymetrix GeneChip® Medicago genome array hybridization. The establishment of cryosections facilitated the isolation of high-quality RNA in sufficient amounts from three different cortical cell types. The transcript profiles of arbuscule-containing cells (arb cells) and non-arbuscule-containing cells (nac cells) of Rhizophagus irregularis-inoculated Medicago truncatula roots and cortex cells of non-inoculated roots (cor) were successfully explored. The data gave new insights into the symbiosis-related cellular reorganization processes and indicated that nac cells already seem to be prepared for the upcoming fungal colonization. The mycorrhiza- and phosphate-dependent transcription of a GRAS TF family member (MtGras8) was detected in arb cells and mycorrhizal roots. MtGRAS8 shares high sequence similarity with a GRAS TF suggested to be involved in the fungal colonization process (MtRAM1). The function of MtGras8 was unraveled by RNA interference- (RNAi-) mediated gene silencing. AM symbiosis-dependent expression of an RNAi construct (MtPt4pro::gras8-RNAi) revealed successful gene silencing of MtGras8, leading to a reduced arbuscule abundance and a higher proportion of deformed arbuscules in roots with reduced transcript levels. Accordingly, MtGras8 might control arbuscule development and lifetime. The targeting of MtGras8 by the phosphate-dependently regulated miRNA5204* was discovered previously (Devers et al., 2011). Since miRNA5204* is known to be affected by phosphate, this posttranscriptional regulation might represent a link between phosphate signaling and arbuscule development. In this work, the posttranscriptional regulation was confirmed by mis-expression of miRNA5204* in M. truncatula roots.
The miRNA-mediated gene silencing affects MtGras8 transcript abundance only in the first two weeks of the AM symbiosis, and the mis-expression lines seem to mimic the phenotype of the MtGras8-RNAi lines. Additionally, MtGRAS8 seems to form heterodimers with NSP2 and RAM1, which are known to be key regulators of the fungal colonization process (Hirsch et al., 2009; Gobbato et al., 2012). These data indicate that MtGras8 and miRNA5204* are linked to the sym pathway and regulate arbuscule development in a phosphate-dependent manner.
The course timetabling problem can be generally defined as the task of assigning a number of lectures to a limited set of timeslots and rooms, subject to a given set of hard and soft constraints. The modeling language for course timetabling is required to be expressive enough to specify a wide variety of soft constraints and objective functions. Furthermore, the resulting encoding is required to be extensible for capturing new constraints and for switching them between hard and soft, and to be flexible enough to deal with different formulations. In this paper, we propose to make effective use of ASP as a modeling language for course timetabling. We show that our ASP-based approach can naturally satisfy the above requirements, through an ASP encoding of the curriculum-based course timetabling problem proposed in the third track of the second international timetabling competition (ITC-2007). Our encoding is compact and human-readable, since each constraint is individually expressed by either one or two rules. Each hard constraint is expressed by using integrity constraints and aggregates of ASP. Each soft constraint S is expressed by rules in which the head has the form penalty(S,V,C), where a violation V and its penalty cost C are detected and calculated, respectively, in the body. We carried out experiments on four different benchmark sets with five different formulations. Compared with the previous best known bounds, we succeeded in either improving or matching the bounds for many combinations of problem instances and formulations.
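The penalty(S,V,C) scheme described above can be made concrete with a small stand-in, re-expressed in Python rather than ASP. The soft constraint "RoomCapacity" used here is a hypothetical example chosen for illustration: each lecture assigned to a room smaller than its enrolment yields one violation V with penalty cost C equal to the number of unseated students.

```python
# Sketch of the penalty(S, V, C) idea for a hypothetical soft constraint
# "RoomCapacity" (not the actual ASP encoding from the paper).

def room_capacity_penalties(assignments, enrolment, capacity):
    """assignments: {lecture: room} -> list of penalty(S, V, C) triples."""
    penalties = []
    for lecture, room in assignments.items():
        overflow = enrolment[lecture] - capacity[room]
        if overflow > 0:
            # S = constraint name, V = the violating assignment, C = cost
            penalties.append(("RoomCapacity", (lecture, room), overflow))
    return penalties

def total_cost(penalties):
    """Objective value: sum of all penalty costs."""
    return sum(cost for _, _, cost in penalties)

# Hypothetical instance: c2 overflows room rB by 5 students.
assignments = {"c1": "rA", "c2": "rB"}
enrolment = {"c1": 30, "c2": 50}
capacity = {"rA": 40, "rB": 45}
pens = room_capacity_penalties(assignments, enrolment, capacity)
```

In the ASP encoding itself, the body of a rule detects the violation and computes the cost, and a minimize statement over the penalty atoms plays the role of `total_cost`.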
Even though quite different in occurrence and consequences, from a modeling perspective many natural hazards share similar properties and challenges. Their complex nature as well as a lack of knowledge about their driving forces and potential effects make their analysis demanding: uncertainty about the modeling framework, inaccurate or incomplete event observations and the intrinsic randomness of the natural phenomenon add up to different interacting layers of uncertainty, which require careful handling. Nevertheless, deterministic approaches are still widely used in natural hazard assessments, risking an underestimation of the hazard, with potentially disastrous effects. The all-round probabilistic framework of Bayesian networks constitutes an attractive alternative. In contrast to deterministic proceedings, it treats response variables as well as explanatory variables as random variables, making no distinction between input and output variables. Using a graphical representation, Bayesian networks encode the dependency relations between the variables in a directed acyclic graph: variables are represented as nodes and (in-)dependencies between variables as (missing) edges between the nodes. The joint distribution of all variables can thus be described by decomposing it, according to the depicted independencies, into a product of local conditional probability distributions, which are defined by the parameters of the Bayesian network. In the framework of this thesis the Bayesian network approach is applied to different natural hazard domains (i.e. seismic hazard, flood damage and landslide assessments). Learning the network structure and parameters from data, Bayesian networks reveal relevant dependency relations between the included variables and help to gain knowledge about the underlying processes.
The problem of Bayesian network learning is cast in a Bayesian framework, considering the network structure and parameters as random variables themselves and searching for the most likely combination of both, which corresponds to the maximum a posteriori (MAP) score of their joint distribution given the observed data. Although well studied in theory, the learning of Bayesian networks from real-world data is usually not straightforward and requires an adaptation of existing algorithms. Typical problems are the handling of continuous variables, incomplete observations and the interaction of both. Working with continuous distributions requires assumptions about the allowed families of distributions. To "let the data speak" and avoid wrong assumptions, continuous variables are instead discretized here, thus allowing for a completely data-driven and distribution-free learning. An extension of the MAP score, which considers the discretization as a random variable as well, is developed for an automatic multivariate discretization that takes interactions between the variables into account. The discretization process is nested into the network learning and requires several iterations. With incomplete observations on top of this, the computational burden grows: iterative procedures for missing value estimation quickly become infeasible. A more efficient, albeit approximate, method is used instead, estimating the missing values based only on the observations of variables directly interacting with the missing variable. Moreover, natural hazard assessments often have a primary interest in a certain target variable. The discretization learned for this variable does not always have the resolution required for a good prediction performance. Finer resolutions for (conditional) continuous distributions are achieved with continuous approximations subsequent to the Bayesian network learning, using kernel density estimates or mixtures of truncated exponential functions.
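The factorization that a Bayesian network encodes — the joint distribution as a product of local conditional probabilities, one per node given its parents — can be sketched with a minimal two-node example. The network (Rain → WetGround) and its probability tables are invented here for illustration only.

```python
# Minimal sketch of the chain-rule decomposition a Bayesian network encodes.
# Hypothetical two-node network: Rain -> WetGround.

p_rain = {True: 0.2, False: 0.8}                       # P(Rain)
p_wet_given_rain = {True: {True: 0.9, False: 0.1},     # P(WetGround | Rain)
                    False: {True: 0.05, False: 0.95}}

def joint(rain, wet):
    """P(Rain = rain, WetGround = wet) as a product of local conditionals."""
    return p_rain[rain] * p_wet_given_rain[rain][wet]

# Sanity check: the decomposition must sum to one over all joint states.
total = sum(joint(r, w) for r in (True, False) for w in (True, False))
```

In the thesis's setting the local tables are not given but learned from (discretized) data, and structure, parameters and discretization are scored jointly; the factorization itself is what makes that search tractable.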
All our procedures are completely data-driven. We thus avoid assumptions that require expert knowledge and instead provide domain-independent solutions that are applicable not only in other natural hazard assessments, but in a variety of domains struggling with uncertainties.
Systems of Systems (SoS) have received a lot of attention recently. In this thesis we focus on SoS that are built atop the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. For this thesis we understand SoS as ensembles of single autonomous systems that are integrated into a larger system, the SoS. The interesting fact about these systems is that the previously isolated systems are still maintained, improved and developed on their own. Structural dynamics is an issue in SoS, as systems can join and leave the ensemble at any point in time. This, and the fact that the cooperation among the constituent systems is not necessarily observable, means that we consider these systems as open systems. Of course, the system has a clear boundary at each point in time, but this boundary can only be identified by halting the complete SoS. However, halting a system of that size is practically impossible. Often SoS are combinations of software systems and physical systems. Hence a failure in the software system can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and essentially relies on collaborations and roles as an abstraction layer above the components. This allows us to describe SoS at an architectural level. We also give a formal semantics for our modelling approach, which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme that is able to cope with the complexity constraints implied by the SoS's structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level --- i.e. the adding and removing of components and services --- is inadequate. Therefore our approach directly supports the modelling and verification of evolution.
We consider an SDE driven by a Lévy noise on a foliated manifold, whose trajectories stay on compact leaves. We determine the effective behavior of the system subject to a small smooth transversal perturbation of positive order epsilon. More precisely, we show that the average of the transversal component of the SDE converges to the solution of a deterministic ODE, determined by the average of the perturbing vector field with respect to the invariant measures on the leaves (of the unperturbed system), as epsilon goes to 0. In particular, we give upper bounds for the rates of convergence. The main results, which are proved for pure jump Lévy processes, complement the result by Gargate and Ruffino for Stratonovich SDEs by extending it to Lévy-driven SDEs of Marcus type.
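In schematic notation (the symbols here are chosen for illustration and are not the abstract's exact formulation), the averaging statement described above reads:

```latex
% Schematic form of the averaging principle; the precise Marcus-SDE
% formulation and hypotheses are those of the work itself.
\begin{align*}
  \mathrm{d}X^{\varepsilon}_t
    &= F(X^{\varepsilon}_t)\,\diamond\,\mathrm{d}L_t
       + \varepsilon\, K(X^{\varepsilon}_t)\,\mathrm{d}t
    && \text{(perturbed Marcus SDE on the foliated manifold)} \\
  \frac{\mathrm{d}\bar{y}}{\mathrm{d}t}(t)
    &= \int_{\mathcal{L}_{\bar{y}(t)}} K^{\perp}(x)\,
       \mu_{\bar{y}(t)}(\mathrm{d}x)
    && \text{(limiting ODE for the transversal component)}
\end{align*}
```

where $\mu_{y}$ denotes the invariant measure on the leaf $\mathcal{L}_{y}$ of the unperturbed system, $K^{\perp}$ the transversal part of the perturbing vector field, and the transversal component of $X^{\varepsilon}_{t/\varepsilon}$ converges to $\bar{y}(t)$ as $\varepsilon \to 0$.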
Constraints allow developers to specify desired properties of systems in a number of domains, and have those properties be maintained automatically. This results in compact, declarative code, avoiding scattered code to check and imperatively re-satisfy invariants. Despite these advantages, constraint programming is not yet widespread, with standard imperative programming still the norm. There is a long history of research on integrating constraint programming with the imperative paradigm. However, this integration typically does not unify the constructs for encapsulation and abstraction from both paradigms. This impedes re-use of modules, as client code written in one paradigm can only use modules written to support that paradigm. Modules require redundant definitions if they are to be used in both paradigms. We present a language – Babelsberg – that unifies the constructs for encapsulation and abstraction by using only object-oriented method definitions for both declarative and imperative code. Our prototype – Babelsberg/R – is an extension to Ruby, and continues to support Ruby’s object-oriented semantics. It allows programmers to add constraints to existing Ruby programs in incremental steps by placing them on the results of normal object-oriented message sends. It is implemented by modifying a state-of-the-art Ruby virtual machine. The performance of standard object-oriented code without constraints is only modestly impacted, with typically less than 10% overhead compared with the unmodified virtual machine. Furthermore, our architecture for adding multiple constraint solvers allows Babelsberg to deal with constraints in a variety of domains. We argue that our approach provides a useful step toward making constraint solving a generic tool for object-oriented programmers.
We also provide example applications, written in our Ruby-based implementation, which use constraints in a variety of application domains, including interactive graphics, circuit simulations, data streaming with both hard and soft constraints on performance, and configuration file management.
Antarctic glacier forefields are extreme environments and pioneer sites for ecological succession. The Antarctic continent serves as a natural laboratory for studying microbial community development because of its special environment, geographic isolation and minimal anthropogenic influence. Increasing temperatures due to global warming lead to enhanced deglaciation processes in cold-affected habitats, and new terrain is becoming exposed to soil formation and accessible for microbial colonisation. This study aims to understand the structure and development of glacier forefield bacterial communities, especially how soil parameters impact the microorganisms and how those are adapted to the extreme conditions of the habitat. To this end, a combination of cultivation experiments and molecular, geophysical and geochemical analyses was applied to examine two glacier forefields of the Larsemann Hills, East Antarctica. Culture-independent molecular tools such as terminal restriction fragment length polymorphism (T-RFLP), clone libraries and quantitative real-time PCR (qPCR) were used to determine bacterial diversity and distribution. Cultivation of as yet unknown species was carried out to gain insights into the physiology and adaptation of the microorganisms. Adaptation strategies of the microorganisms were studied by determining changes in the cell membrane phospholipid fatty acid (PLFA) inventory of an isolated bacterium in response to temperature and pH fluctuations, and by measuring enzyme activity at low temperature in environmental soil samples. The two studied glacier forefields are extreme habitats characterised by low temperatures, low water availability and small oligotrophic nutrient pools, and represent sites of different bacterial succession in relation to soil parameters. The investigated sites showed microbial succession at an early step of soil formation near the ice tongue in comparison to closely located but older and more developed soil from the forefield.
At this early step, the succession is influenced by a deglaciation-dependent areal shift of soil parameters, followed by a variable and prevalently depth-related distribution of the soil parameters that is driven by the extreme Antarctic conditions. The dominant taxa in the glacier forefields are Actinobacteria, Acidobacteria, Proteobacteria, Bacteroidetes, Cyanobacteria and Chloroflexi. Connecting soil characteristics with bacterial community structure showed that soil parameters and soil formation along the glacier forefield influence the distribution of certain phyla. In the early step of succession, the relatively undifferentiated bacterial diversity reflects the undifferentiated soil development and has a high potential to shift according to past and present environmental conditions. With progressing development, environmental constraints such as water or carbon limitation have a greater influence. By adapting the culturing conditions to the cold and oligotrophic environment, the number of culturable heterotrophic bacteria reached up to 10^8 colony-forming units per gram of soil, and 148 isolates were obtained. Two new psychrotolerant bacteria, Herbaspirillum psychrotolerans PB1T and Chryseobacterium frigidisoli PB4T, were characterised in detail and described as novel species in the families Oxalobacteraceae and Flavobacteriaceae, respectively. The isolates are able to grow at low temperatures, tolerate temperature fluctuations and are not specialised to a certain substrate; they are therefore well adapted to the cold and oligotrophic environment. The adaptation strategies of the microorganisms were analysed in environmental samples and cultures, focussing on extracellular enzyme activity at low temperature and PLFA analyses.
Extracellular phosphatase (pH 11 and pH 6.5), β-glucosidase, invertase and urease activities were detected in the glacier forefield soils at low temperature (14°C), catalysing the conversion of various compounds and providing necessary substrates; these enzymes may further play a role in the soil formation and total carbon turnover of the habitat. The PLFA analysis of the newly isolated species C. frigidisoli showed that the cold-adapted strain develops different strategies to maintain the cell membrane function under changing environmental conditions by altering the PLFA inventory at different temperatures and pH values. A newly discovered fatty acid, which has so far not been found in any other microorganism, significantly increased at decreasing temperature and low pH and thus plays an important role in the adaptation of C. frigidisoli. This work gives insights into the diversity, distribution and adaptation mechanisms of microbial communities in oligotrophic cold-affected soils and shows that Antarctic glacier forefields are suitable model systems to study bacterial colonisation in connection with soil formation.
In the presence of a solid-liquid or liquid-air interface, bacteria can choose between a planktonic and a sessile lifestyle. Depending on environmental conditions, cells swimming in close proximity to the interface can irreversibly attach to the surface and grow into three-dimensional aggregates in which the majority of cells is sessile and embedded in an extracellular polymer matrix (biofilm). We used microfluidic tools and time-lapse microscopy to perform experiments with the polarly flagellated soil bacterium Pseudomonas putida (P. putida), a bacterial species that is able to form biofilms. We analyzed individual trajectories of swimming cells, both in the bulk fluid and in close proximity to a glass-liquid interface. Additionally, surface-related growth during the early phase of biofilm formation was investigated. In the bulk fluid, P. putida shows a typical bacterial swimming pattern of alternating periods of persistent displacement along a line (runs) and fast reorientation events (turns), and cells swim with an average speed of around 24 micrometers per second. We found that the distribution of turning angles is bimodal, with a dominating peak around 180 degrees: in approximately six out of ten turning events, the cell reverses its swimming direction. In addition, our analysis revealed that upon a reversal, the cell systematically changes its swimming speed, by a factor of two on average. Based on the experimentally observed values of mean runtime and rotational diffusion, we present a model that describes the spreading of a population of cells by a run-reverse random walker with alternating speeds. We successfully recover the mean square displacement and, by an extended version of the model, also the negative dip in the directional autocorrelation function observed in the experiments.
The analytical solution of the model demonstrates that alternating speeds enhance a cell's ability to explore its environment as compared to a bacterium moving at a constant intermediate speed. Compared to the bulk fluid, for cells swimming near a solid boundary we observed an increase in swimming speed at distances below d = 5 micrometers and an increase in average angular velocity at distances below d = 4 micrometers. While the average speed was maximal, with an increase of around 15%, at a distance of d = 3 micrometers, the angular velocity was highest in closest proximity to the boundary, at d = 1 micrometer, with an increase of around 90% compared to the bulk fluid. To investigate the swimming behavior in a confinement between two solid boundaries, we developed an experimental setup to acquire three-dimensional trajectories using a piezo-driven objective mount coupled to a high-speed camera. Results on speed and angular velocity were consistent with the motility statistics in the presence of a single boundary. Additionally, an analysis of the probability density revealed that a majority of cells accumulated near the upper and lower boundaries of the microchannel. The increase in angular velocity is consistent with previous studies, where bacteria near a solid boundary were shown to swim on circular trajectories, an effect which can be attributed to a wall-induced torque. The increase in speed at a distance of several times the size of the cell body, however, cannot be explained by existing theories, which either consider the drag increase on cell body and flagellum near a boundary (resistive force theory) or model the swimming microorganism by a multipole expansion to account for the flow field interaction between cell and boundary. An accumulation of swimming bacteria near solid boundaries has been observed in similar experiments.
Our results confirm that collisions with the surface play an important role and that hydrodynamic interactions alone cannot explain the steady-state accumulation of cells near the channel walls. Furthermore, we monitored the growth in cell numbers in the microchannel under nutrient-rich medium conditions. We observed that, after a lag time, initially isolated cells at the surface started to grow by division into colonies of increasing size, while coexisting with a comparably smaller number of swimming cells. After 5 hours and 50 minutes, we observed a sudden jump in the number of swimming cells, which was accompanied by a breakup of bigger clusters on the surface. After approximately 30 minutes in which planktonic cells dominated in the microchannel, individual swimming cells reattached to the surface. We interpret this process as an emigration and recolonization event. A number of complementary experiments were performed to investigate the influence of collective effects or a depletion of the growth medium on the transition. Similar to earlier observations on another bacterium from the same family, we found that the release of cells into the swimming phase is most likely the result of an individual adaptation process, in which the synthesis of proteins for flagellar motility is upregulated after a number of division cycles at the surface.
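The run-reverse-with-alternating-speeds picture described above lends itself to a toy simulation. The sketch below is not the thesis model verbatim: run durations, speeds, and the angular jitter at reversals are illustrative parameter choices, but the structure — exponentially distributed runs, near-180-degree reversals, and a speed that alternates between a fast and a slow value — matches the description.

```python
import math
import random

# Toy 2D run-reverse random walker with alternating speeds.
# All parameter values are hypothetical illustrations.

def simulate(n_runs=200, mean_runtime=1.0, v_fast=32.0, v_slow=16.0,
             jitter=0.2, seed=1):
    rng = random.Random(seed)
    x = y = 0.0
    angle = 0.0
    fast = True
    for _ in range(n_runs):
        tau = rng.expovariate(1.0 / mean_runtime)   # exponential run duration
        v = v_fast if fast else v_slow
        x += v * tau * math.cos(angle)
        y += v * tau * math.sin(angle)
        angle += math.pi + rng.gauss(0.0, jitter)   # reversal with jitter
        fast = not fast                             # factor-two speed change
    return x, y

x, y = simulate()
displacement = math.hypot(x, y)
```

Averaging such trajectories over many seeds yields the mean square displacement whose analytical form, per the abstract, shows that alternating speeds outperform a constant intermediate speed.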
Banking System in Russia
(2013)
1. Introduction
2. Analysis of the implementation of Basel III in China
  2.1 Implementation of capital adequacy rules
  2.2 Implementation of leverage ratio rules
  2.3 Implementation of liquidity management rules
3. Suggestions for further development of China’s banking industry
  3.1 Promoting capital structure adjustment and broadening capital supplement channels
  3.2 Transforming business models and developing intermediary and off-balance business
  3.3 Increasing the intensity of risk management and refining its standards
In recent years, children's way of life, nutrition and recreation have changed, and as a consequence body composition has shifted as well. It is established that overweight is a global problem. In addition, German children exhibit a less robust skeleton than ten years ago. These developments may elevate the risk of cardiovascular diseases and skeletal modifications. Heredity and environmental factors such as nutrition, socioeconomic status, physical activity and inactivity influence fat accumulation and the skeletal system. Against the background of these negative developments, the following were investigated: associations between type of body shape, skeletal measures and physical activity; relations between external skeletal robustness, physical activity and inactivity, BMI and body fat; and the development of body composition, especially external skeletal robustness, in Russian compared with German children. In a cross-sectional study, 691 German boys and girls aged 6 to 10 years were examined. Anthropometric measurements were taken, and questionnaires about physical activity and inactivity were answered by the parents. Additionally, pedometers were worn to determine the physical activity of the children. To compare body composition in Russian and German children, data from the years 2000 and 2010 were used. The study has shown that pyknomorphic individuals exhibit the highest external skeletal robustness and leptomorphic ones the lowest. Leptomorphic children may have a higher risk for bone diseases in adulthood. Pyknomorphic boys tend to be more physically active. This is considered positive, because pyknomorphic types display the highest BMI and body fat. The results showed that physical activity may reduce BMI and body fat. In contrast, physical inactivity may lead to an increase of BMI and body fat and may rise with increasing age. Physical activity additionally encourages a robust skeleton.
Furthermore, external skeletal robustness is associated with BMI, so that BMI as a measure of overweight should be considered critically. The international 10-year comparison has shown an increase of BMI in Russian children and German boys. Currently, Russian children exhibit a higher external skeletal robustness than the German ones. However, the skeleton of Russian boys is less robust than ten years ago. This trend should be observed in the future, in other countries as well. All in all, several measures should be used to describe the health situation in children and adults. Furthermore, it is essential to support physical activity in children in order to reduce the risk of obesity and to maintain a robust skeleton. In this way, diseases in adulthood can be prevented.
A method is presented for conveying the principles of three sorting algorithms through the development of interactive applications in Excel.
In various biological systems and small-scale technological applications, particles transiently bind to a cylindrical surface. Upon unbinding, the particles diffuse in the vicinal bulk before rebinding to the surface. Such bulk-mediated excursions give rise to an effective surface translation, for which we here derive and discuss the dynamic equations, including additional surface diffusion. We discuss the time evolution of the number of surface-bound particles, the effective surface mean squared displacement, and the surface propagator. In particular, we observe sub- and superdiffusive regimes. A plateau of the surface mean squared displacement reflects a stalling of the surface diffusion at longer times. Finally, the corresponding first passage problem for the cylindrical geometry is analysed.
Business processes are instrumental in managing work in organisations. To study the interdependencies between business processes, Business Process Architectures (BPAs) have been introduced. These express trigger and message flow relations between business processes. When we investigate real-world Business Process Architectures, we find complex interdependencies involving multiple process instances. These aspects have not been studied in detail so far, especially concerning correctness properties. In this paper, we propose a modular transformation of BPAs to open nets for the analysis of behavior involving multiple business processes with multiplicities. For this purpose we introduce intermediary nets to capture the semantics of multiplicity specifications. We evaluate our approach on a use case from the public sector.
Cost models are an essential part of database systems, as they are the basis of query performance optimization. Based on the predictions made by cost models, the fastest query execution plan can be chosen and executed, or algorithms can be tuned and optimised. In-memory databases shift the focus from disk to main memory accesses and CPU costs, compared to disk-based systems, where input and output costs dominate the overall costs and other processing costs are often neglected. However, modelling memory accesses is fundamentally different, and common models no longer apply. This work presents a detailed parameter evaluation for the plan operators scan with equality selection, scan with range selection, positional lookup and insert in in-memory column stores. Based on this evaluation, a cost model based on cache misses is developed for estimating the runtime of the considered plan operators using different data structures. Considered are uncompressed columns, bit-compressed columns, and dictionary-encoded columns with sorted and unsorted dictionaries. Furthermore, tree indices on the columns and dictionaries are discussed. Finally, partitioned columns consisting of one partition with a sorted and one with an unsorted dictionary are investigated. New values are inserted into the unsorted dictionary partition and moved periodically to the sorted partition by a merge process. An efficient attribute merge algorithm is described, supporting the update performance required to run enterprise applications on read-optimised databases. Further, a memory-traffic-based cost model for the merge process is provided.
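The flavor of a cache-miss-based cost model can be conveyed with a back-of-the-envelope sketch for a full-column scan. This is not the model from the work described above: the latency constants, the one-miss-per-cache-line assumption, and the function below are all simplifying illustrations.

```python
# Back-of-the-envelope sketch of a cache-miss based cost model for a
# sequential column scan. All constants are hypothetical, not measured
# parameters from the work above.

CACHE_LINE_BYTES = 64
MISS_LATENCY_NS = 100      # assumed cost of one last-level cache miss
HIT_LATENCY_NS = 1         # assumed cost of a cache-resident access

def scan_cost_ns(num_rows, value_bytes):
    """Estimate scan runtime: sequential access, one miss per cache line."""
    total_bytes = num_rows * value_bytes
    misses = (total_bytes + CACHE_LINE_BYTES - 1) // CACHE_LINE_BYTES
    hits = num_rows - misses
    return misses * MISS_LATENCY_NS + max(hits, 0) * HIT_LATENCY_NS

# A 2-byte dictionary-encoded column packs 32 values per cache line,
# so it scans far cheaper than an 8-byte uncompressed column.
cost_compressed = scan_cost_ns(1_000_000, 2)
cost_uncompressed = scan_cost_ns(1_000_000, 8)
```

Even this crude model reproduces the qualitative point that motivates dictionary encoding in column stores: narrower values mean fewer cache lines touched, and hence fewer misses, per scanned column.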
Lakes are increasingly being recognized as an important component of the global carbon cycle, yet anthropogenic activities that alter their community structure may change the way they transport and process carbon. This research focuses on the relationship between carbon cycling and the community structure of primary producers in small, shallow lakes, which are the most abundant lake type in the world and are furthermore subject to intense terrestrial-aquatic coupling due to their high perimeter:area ratio. Shifts between macrophyte and phytoplankton dominance are widespread and common in shallow lakes, with potentially large consequences for regional carbon cycling. I thus compared a lake with clear-water conditions and a submerged macrophyte community to a turbid, phytoplankton-dominated lake, describing differences in the availability, processing, and export of organic and inorganic carbon. I furthermore examined the effects of increasing terrestrial carbon inputs on internal carbon cycling processes. Pelagic diel (24-hour) oxygen curves and independent fluorometric approaches of individual primary producers together indicated that the presence of a submerged macrophyte community facilitated higher annual rates of gross primary production than could be supported in a phytoplankton-dominated lake at similar nutrient concentrations. A simple model constructed from the empirical data suggested that this difference between regime types could be common in moderately eutrophic lakes with mean depths under three to four meters, where benthic primary production is a potentially major contributor to the whole-lake primary production. It thus appears likely that a regime shift from macrophyte to phytoplankton dominance in shallow lakes would typically decrease the quantity of autochthonous organic carbon available to lake food webs.
Sediment core analyses indicated that a regime shift from macrophyte to phytoplankton dominance was associated with a four-fold increase in carbon burial rates, signalling a major change in lake carbon cycling dynamics. Carbon mass balances suggested that increasing carbon burial rates were not due to an increase in primary production or allochthonous loading, but instead were due to a higher carbon burial efficiency (carbon burial / carbon deposition). This, in turn, was associated with diminished benthic mineralization rates and an increase in calcite precipitation, together resulting in lower surface carbon dioxide emissions. Finally, a period of unusually high precipitation led to rising water levels, resulting in a feedback loop linking increasing concentrations of dissolved organic carbon (DOC) to severely anoxic conditions in the phytoplankton-dominated system. High water levels and DOC concentrations diminished benthic primary production (via shading) and boosted pelagic respiration rates, diminishing the hypolimnetic oxygen supply. The resulting anoxia created redox conditions which led to a major release of nutrients, DOC, and iron from the sediments. This further transformed the lake metabolism, providing a prolonged summertime anoxia below a water depth of 1 m, and leading to the near-complete loss of fish and macroinvertebrates. Pelagic pH levels also decreased significantly, increasing surface carbon dioxide emissions by an order of magnitude compared to previous years. Altogether, this thesis adds an important body of knowledge to our understanding of the significance of the benthic zone to carbon cycling in shallow lakes. The contribution of the benthic zone towards whole-lake primary production was quantified, and was identified as an important but vulnerable site for primary production. 
Benthic mineralization rates were furthermore found to influence carbon burial and surface emission rates, and benthic primary productivity played an important role in determining hypolimnetic oxygen availability, thus controlling the internal sediment loading of nutrients and carbon. This thesis also uniquely demonstrates that the ecological community structure (i.e. stable regime) of a eutrophic, shallow lake can significantly influence carbon availability and processing. By changing carbon cycling pathways, regime shifts in shallow lakes may significantly alter the role of these ecosystems with respect to the global carbon cycle.
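The depth threshold for benthic primary production mentioned above can be made plausible with a standard Beer-Lambert light-attenuation argument. This is only an illustrative sketch, not the model constructed in the thesis; the attenuation coefficients `k_d` are assumed values typical of moderately eutrophic lakes:

```python
import math

def euphotic_depth(k_d, light_fraction=0.01):
    """Depth (m) at which irradiance falls to `light_fraction` of the surface
    value, from Beer-Lambert attenuation I(z) = I0 * exp(-k_d * z)."""
    return -math.log(light_fraction) / k_d

# Illustrative attenuation coefficients (1/m) for moderately eutrophic water;
# benthic production is commonly taken to require ~1% of surface light.
for k_d in (1.0, 1.5, 2.0):
    print(f"k_d = {k_d:.1f} 1/m -> 1% light depth = {euphotic_depth(k_d):.1f} m")
```

With these assumed coefficients the 1% light level lies near 2.3 to 4.6 m, consistent with the finding that benthic production matters most in lakes with mean depths under three to four meters.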
Cellulose is the most abundant biopolymer on earth. In this work it has been used, in various forms ranging from wood to fully processed laboratory-grade microcrystalline cellulose, to synthesise a variety of metal and metal carbide nanoparticles and to establish structuring and patterning methodologies that produce highly functional nano-hybrids. To achieve this, the mechanisms governing the catalytic processes that bring about graphitised carbons in the presence of iron were investigated. It was found that, when infusing cellulose with an aqueous iron salt solution and heating this mixture under inert atmosphere to 640 °C and above, a liquid eutectic mixture of iron and carbon with an atom ratio of approximately 1:1 forms. The eutectic droplets were monitored with in-situ TEM at the reaction temperature, where they could be seen dissolving amorphous carbon and leaving behind a trail of graphitised carbon sheets and subsequently iron carbide nanoparticles. These transformations turned ordinary cellulose into a conductive and porous matrix that is well suited for catalytic applications. Despite these significant changes on the nanometre scale, the shape of the matrix as a whole was retained with remarkable precision. This was exemplified by folding a sheet of cellulose paper into origami cranes and converting them via the temperature treatment into magnetic facsimiles of those cranes. The study showed that the catalytic mechanisms derived from controlled systems and described in the literature can be transferred to synthetic concepts beyond the lab without loss of generality. Once the processes determining the transformation of cellulose into functional materials were understood, the concept could be extended to other metals and metal combinations. Firstly, the procedure was utilised to produce different ternary iron carbides of the form MxFeyC (M = W, Mn), none of which had thus far been produced in nanoparticle form. 
The next part of this work encompassed combinations of iron with cobalt, nickel, palladium and copper. Each of these metals was also probed alone in combination with cellulose. This produced elemental metal and metal alloy particles of low polydispersity and high stability, features that are typically not associated with high-temperature syntheses and that allow good size control to be combined with a scalable process. Each of the probed reactions resulted in phase-pure, single-crystalline, stable materials. After showing that cellulose is a good stabilising and separating agent for all the investigated types of nanoparticles, the focus of the work at hand shifts towards probing the limits of the structuring and patterning capabilities of cellulose. Moreover, possible post-processing techniques to further broaden the applicability of the materials are evaluated. It was shown that, by choosing an appropriate paper, products ranging from stiff, self-sustaining monoliths to ultra-thin and very flexible cloths can be obtained after high-temperature treatment. Furthermore, cellulose has been demonstrated to be a very good substrate for many structuring and patterning techniques, from origami folding to ink-jet printing. The resulting products have been employed as electrodes, which was exemplified by electrodepositing copper onto them. Via ink-jet printing they have additionally been patterned, and the resulting electrodes have also been post-functionalised by electrodeposition of copper onto the graphitised (printed) parts of the samples. Lastly, a preliminary test successfully demonstrated the possibility of printing several metals simultaneously and thereby producing finely tuneable gradients from one metal to another. Starting from these concepts, future experiments were outlined. 
The last chapter of this thesis concerned itself with alternative synthesis methods for the iron-carbon composite, thereby testing the robustness of the developed reactions. By performing the synthesis with partly dissolved scrap metal and with pieces of raw, dry wood, progress towards further use of the general synthesis technique was made. For example, by using wood instead of processed cellulose, all the established shaping techniques available for wooden objects, such as CNC milling or 3D prototyping, become accessible to the synthesis path. Moreover, the intrinsic, well-defined porosity of wood and the fact that large monoliths are obtained help to expand the prospects of using the composite. It was also demonstrated in this chapter that the resulting material can be applied to the environmentally important issue of waste water cleansing. In addition to being made from renewable resources via a cheap and easy one-pot synthesis, the material is recyclable, since the pollutants can be recovered by washing with ethanol. Most importantly, this chapter covered experiments in which the reaction was performed in a crude, home-built glass vessel, fuelled only by direct concentrated sunlight focused with a Fresnel lens. This concept carries the synthetic procedures presented thus far from common laboratory syntheses towards real-world application. Based on cellulose, transition metals and simple equipment, this work enabled the easy one-pot synthesis of nano-ceramic and metal nanoparticle composites that are otherwise not readily accessible. Furthermore, structuring and patterning techniques as well as synthesis routes involving only renewable resources and environmentally benign procedures were established here. The work has thereby laid the foundation for a multitude of applications and pointed towards several future projects, ranging from fundamental research to application-focused research; even an industry-relevant engineering project was envisioned.
Within a research project on future sustainable water management options in the Elbe River basin, quasi-natural discharge scenarios had to be provided. The semi-distributed eco-hydrological model SWIM was utilised for this task. According to scenario simulations driven by the stochastic climate model STAR, the region would become distinctly drier. This thesis, however, focuses on the challenge of meeting the requirement of high model fidelity even for smaller sub-basins, since the quality of the simulations is usually lower at inner points than at the outlet. Four research paper chapters and the discussion chapter deal with the reasons for local model deviations and the problem of optimal spatial calibration. Among other assessments, the Markov chain Monte Carlo method is applied to show whether evapotranspiration or precipitation should be corrected to minimise runoff deviations, principal component analysis is used in an unusual way to evaluate local precipitation alterations caused by land cover changes, and remotely sensed surface temperatures allow for an independent view of the evapotranspiration landscape. The overall insight is that spatially explicit hydrological modelling of such a large river basin requires a great deal of local knowledge, and obtaining such knowledge probably takes more time than is usually allotted to hydrological modelling studies.
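The Markov chain Monte Carlo idea mentioned above, estimating a forcing-correction factor from runoff deviations, can be sketched with a minimal Metropolis sampler. All numbers are hypothetical and this is not the SWIM calibration itself, only an illustration of the technique:

```python
import math
import random

random.seed(1)

# Toy data: observed vs simulated runoff (mm); hypothetical numbers
obs = [12.0, 9.5, 14.2, 11.1, 10.3]
sim = [10.1, 8.0, 12.5, 9.4, 8.9]

def log_likelihood(beta, sigma=1.0):
    """Gaussian log-likelihood of residuals after scaling simulated runoff by beta."""
    return sum(-0.5 * ((o - beta * s) / sigma) ** 2 for o, s in zip(obs, sim))

# Metropolis random walk over the correction factor beta
beta, samples = 1.0, []
for _ in range(20000):
    prop = beta + random.gauss(0.0, 0.05)
    if math.log(random.random()) < log_likelihood(prop) - log_likelihood(beta):
        beta = prop  # accept the proposal
    samples.append(beta)

# Discard burn-in and summarize the posterior
posterior_mean = sum(samples[5000:]) / len(samples[5000:])
print(f"posterior mean correction factor: {posterior_mean:.2f}")
```

In the same spirit, one could place the correction on precipitation or evapotranspiration inputs and compare the resulting posterior fits, which is the kind of question the thesis addresses.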
Challenging Khmer citizenship : minorities, the state, and the international community in Cambodia
(2013)
The idea of a distinctly ‘liberal’ form of multiculturalism has emerged in the theory and practice of Western democracies and the international community has become actively engaged in its global dissemination via international norms and organizations. This thesis investigates the internationalization of minority rights, by exploring state-minority relations in Cambodia, in light of Will Kymlicka’s theory of multicultural citizenship. Based on extensive empirical research, the analysis explores the situation and aspirations of Cambodia’s ethnic Vietnamese, highland peoples, Muslim Cham, ethnic Chinese and Lao and the relationships between these groups and the state. All Cambodian regimes since independence have defined citizenship with reference to the ethnicity of the Khmer majority and have - often violently - enforced this conception through the assimilation of highland peoples and the Cham and the exclusion of ethnic Vietnamese and Chinese. Cambodia’s current constitution, too, defines citizenship ethnically. State-sponsored Khmerization systematically privileges members of the majority culture and marginalizes minority members politically, economically and socially. The thesis investigates various international initiatives aimed at promoting application of minority rights norms in Cambodia. It demonstrates that these initiatives have largely failed to accomplish a greater degree of compliance with international norms in practice. This failure can be explained by a number of factors, among them Cambodia’s neo-patrimonial political system, the geo-political fears of a ‘minoritized’ Khmer majority, the absence of effective regional security institutions, the lack of minority access to political decision-making, the significant differences between international and Cambodian conceptions of modern statehood and citizenship and the emergence of China as Cambodia’s most important bilateral donor and investor. 
Based on this analysis, the dissertation develops recommendations for a sequenced approach to minority rights promotion, with pragmatic, less ambitious shorter-term measures that work progressively towards achievement of international norms in the longer-term.
When playing violent video games, aggressive actions are performed against the background of an originally neutral environment, and associations are formed between cues related to violence and contextual features. This experiment examined the hypothesis that neutral contextual features of a virtual environment become associated with aggressive meaning and acquire the function of primes for aggressive cognitions. Seventy-six participants were assigned to one of two violent video game conditions that varied in context (ship vs. city environment) or a control condition. Afterwards, they completed a Lexical Decision Task to measure the accessibility of aggressive cognitions in which they were primed either with ship-related or city-related words. As predicted, participants who had played the violent game in the ship environment had shorter reaction times for aggressive words following the ship primes than the city primes, whereas participants in the city condition responded faster to the aggressive words following the city primes compared to the ship primes. No parallel effect was observed for the non-aggressive targets. The findings indicate that the associations between violent and neutral cognitions learned during violent game play facilitate the accessibility of aggressive cognitions.
1. Introduction of China’s bank reform
1.1 Stage 1 (1978–1993): Rebuilding the financial system
1.2 Stage 2 (1994–1997): Regulating the financial system
1.3 Stage 3 (1998–2002): Deepening reform of state-owned commercial banks
1.4 Stage 4 (2003–present): Public listing of state-owned banks
2. The roles of SWF in China’s bank reform
3. Future challenges
Deep into the second half of the twentieth century the traditionalist definition of India as a country of villages remained dominant in official political rhetoric as well as cultural production. In the past two decades or so, this ruralist paradigm has been effectively superseded by a metropolitan imaginary in which the modern, globalised megacity increasingly functions as representative of India as a whole. Has the village, then, entirely vanished from the cultural imaginary in contemporary India? Addressing economic practices from upper-class consumerism to working-class family support strategies, this paper attempts to trace how ‘the village’ resurfaces or survives as a cultural reference point in the midst of the urban.
Adapting sectors to changed climatic conditions requires an understanding of regional vulnerabilities. Vulnerability is defined as a function of sensitivity and exposure, which together represent the potential impacts of climate change, and of the adaptive capacity of systems. Vulnerability studies that quantify these components have become an important tool in climate science. From a scientific perspective, however, there is disagreement about how this definition should be implemented in studies. This conflict gives rise to many challenges, above all regarding the quantification and aggregation of the individual components and their appropriate levels of complexity. This dissertation therefore aims to advance the applicability of the vulnerability concept by translating it into a systematic structure. The structure covers all components and proposes, for each climate impact (e.g. flash floods), a description of the vulnerable system (e.g. settlements) that is directly linked to a specific direction of a relevant climatic stimulus (e.g. stronger impacts with an increasing number of heavy-rainfall days). Regarding the challenging procedure of aggregation, two alternative methods that enable a cross-sectoral overview are presented and their advantages and disadvantages are discussed. The developed structure of a vulnerability study is then applied, using an indicator-based and deductive approach, to the municipalities of North Rhine-Westphalia in Germany as an example; a transfer to other regions is nevertheless possible. The quantification for the municipalities relies on information from the literature. 
Since suitable indicators were lacking for many sectors, new indicators are developed and applied in this work, for example for the forestry and health sectors. However, missing empirical data on relevant thresholds constitute a gap, for example regarding which magnitude of climate change causes a significant impact. As a consequence, the study can only make relative statements about the degree of vulnerability of each municipality compared to the rest of the federal state. To fill this gap, the present and future windthrow hazard of forests is calculated for the forestry sector as an example. For this purpose, the properties of the forests are linked to empirical damage data from a past storm event, and the resulting sensitivity value is then combined with the wind conditions. Cross-sectoral vulnerability studies require considerable resources, which often hampers their applicability. In a next step, the potential for reducing complexity is therefore examined using two sectoral examples. Numerous meteorological indices of widely varying complexity are available for predicting the occurrence of forest fires. With respect to the number of monthly forest fires, relative humidity has better predictive power than more complex indices for most German federal states, even though it is itself used as an input variable for those more complex indices. Forest fire hazard in German regions can thus be expressed with sufficient accuracy by this single meteorological factor, which increases the resource efficiency of studies. Methodological complexity is examined in a similar way for the application of the eco-hydrological model SWIM to the Brandenburg region. 
The interannual soil water values simulated by this model can only insufficiently be reproduced by a simpler statistical model built on the same input data. Over a time horizon of decades, however, the statistical approach reproduces soil water satisfactorily and shows a dominance of the soil property field capacity. This suggests that the complexity, in terms of the number of input variables, can be reduced for long-term calculations, although the conclusions are limited by the lack of observed soil water values for validation. The present studies on vulnerability and its components have shown that their application is still scientifically challenging. Following the vulnerability definition used here, numerous problems arise during implementation in regional studies. This dissertation has made progress on the gaps identified in previous studies by developing a systematic structure for describing and aggregating vulnerability components. Several approaches were discussed for this purpose, each with advantages and disadvantages that future studies should likewise weigh carefully before application. Moreover, it became apparent that some approaches can potentially be simplified, although further investigation is needed. Overall, the dissertation has strengthened the application of vulnerability studies as a tool to support adaptation measures.
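The component structure of a vulnerability assessment, potential impact from exposure and sensitivity, damped by adaptive capacity, can be sketched with normalized indicators. The indicator values, the equal weighting and the multiplicative damping below are hypothetical illustrations of one common aggregation scheme, not the specific method of the dissertation:

```python
def minmax(values):
    """Normalize a list of indicator values to the 0..1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical indicator values for three municipalities A, B, C
exposure    = minmax([40, 70, 55])     # e.g. heavy-rainfall days per year
sensitivity = minmax([0.2, 0.8, 0.5])  # e.g. settlement share in flood-prone areas
capacity    = minmax([0.9, 0.3, 0.6])  # e.g. relative adaptive capacity

results = {}
for i, name in enumerate(["A", "B", "C"]):
    impact = (exposure[i] + sensitivity[i]) / 2  # potential impact of the stimulus
    results[name] = impact * (1 - capacity[i])   # damped by adaptive capacity
    print(f"municipality {name}: vulnerability index = {results[name]:.2f}")
```

Because the min-max normalization is relative to the sample, such an index only ranks municipalities against each other, which mirrors the dissertation's point that missing threshold data restrict the study to relative statements.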
Black shales are sedimentary rocks with a high content of organic carbon, which gives them a dark grayish to black color. Due to their potential to contain oil or gas, black shales are of great interest for supporting the worldwide energy supply. An integrated seismic investigation of the Lower Palaeozoic black shales was carried out on the Danish island of Bornholm to locate the shallow-lying Alum Shale layer and its surrounding formations and to characterize its potential as a source rock. To this end, two seismic experiments along a total of three crossing profiles were conducted in October 2010 and in June 2012 in the southern part of the island. Two different active measurements were performed with either a weight-drop source or a minivibrator. Additionally, the ambient noise field was recorded at the study location over a time interval of about one day, and a laboratory analysis of borehole samples was carried out. The seismic profiles were positioned as close as possible to two scientific boreholes, which were used for comparative purposes. The seismic field data were analyzed with traveltime tomography, surface wave inversion and seismic interferometry to obtain P-wave and S-wave velocity models of the subsurface. The P-wave velocity models determined for all three profiles clearly locate the Alum Shale layer between the Komstad Limestone layer on top and the Læså Sandstone Formation at the base of the models. The black shale layer has P-wave velocities around 3 km/s, which are low compared to the adjacent formations. The very good agreement between the sonic log and the vertical velocity profiles of the two seismic lines that directly cross the borehole where the sonic log was conducted demonstrates the reliability of the traveltime tomography. Correlating the seismic velocities with the content of organic carbon is an important task for characterizing the reservoir properties of a black shale formation. 
This correlation is not possible without calibration, but in combination with a full 2D tomographic image of the subsurface it yields the subsurface distribution of the organic material. The S-wave model obtained with surface wave inversion of the vibroseis data of one of the profiles also images the Alum Shale layer very well, with S-wave velocities around 2 km/s. Although individual 1D velocity models were determined for each of the source positions, the subsurface S-wave velocity distribution is very uniform, with a good match between the single models. A novel aspect described here is the application of seismic interferometry to a very small study area and a rather short time interval. Also new is the selective procedure of using only time windows with the best cross-correlation signals to obtain the final interferograms. Due to the small scale of the interferometry, even P-wave signals can be observed in the final cross-correlations. In the laboratory measurements, the seismic body waves were recorded at different pressure and temperature stages. For this purpose, samples from different depths of the Alum Shale were available from one of the scientific boreholes at the study location. The measured velocities vary strongly with changing pressure or temperature. Recordings with wave propagation both parallel and perpendicular to the bedding of the samples reveal a large degree of anisotropy for the P-wave velocity, whereas the S-wave velocity is almost independent of the wave direction. The calculated velocity ratio is also highly anisotropic, with very low values for the perpendicular samples and very high values for the parallel ones. Interestingly, the laboratory velocities of the perpendicular samples are comparable to the velocities of the field experiments, indicating that the field measurements are sensitive to wave propagation in the vertical direction. The velocity ratio is also calculated with the P-wave and S-wave velocity models of the field experiments. 
Again, the Alum Shale can be clearly separated from the adjacent formations because it shows overall very low vP/vS ratios around 1.4. The very low velocity ratio indicates the content of gas in the black shale formation. With the combination of all the different methods described here, a comprehensive interpretation of the seismic response of the black shale layer can be made and the hydrocarbon source rock potential can be estimated.
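The velocity-ratio computation itself is straightforward. The following minimal sketch uses the rounded layer velocities quoted above; the exact field values in the thesis give the reported ratio of about 1.4:

```python
def vp_vs_ratio(vp, vs):
    """Ratio of P-wave to S-wave velocity; unusually low values (well below
    the ~1.7 typical for consolidated rock) can indicate gas content."""
    return vp / vs

# Rounded velocities (km/s) reported for the Alum Shale layer
vp_alum, vs_alum = 3.0, 2.0
ratio = vp_vs_ratio(vp_alum, vs_alum)
print(f"Alum Shale vP/vS = {ratio:.2f}")
```

Applying this cell-by-cell to the 2D vP and vS tomography models is what separates the low-ratio shale from the adjacent limestone and sandstone formations.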
We shall examine the Pedagogical Content Knowledge (PCK) of Computer Science (CS) teachers concerning students’ Computational Thinking (CT) problem solving skills within the context of a CS course in Dutch secondary education and thus obtain an operational definition of CT and ascertain appropriate teaching methodology. Next we shall develop an instrument to assess students’ CT and design a curriculum intervention geared toward teaching and improving students’ CT problem solving skills and competences. As a result, this research will yield an operational definition of CT, knowledge about CT PCK, a CT assessment instrument and teaching materials and accompanying teacher instructions. It shall contribute to CS teacher education, development of CT education and to education in other (STEM) subjects where CT plays a supporting role, both nationally and internationally.
This talk will describe My Digital Life (TU100), a distance learning module that introduces computer science through immediate engagement with ubiquitous computing (ubicomp). It will cover some of the principles and concepts we have adopted for this modern computing introduction: the idea of the ‘informed digital citizen’; engagement through narrative; playful pedagogy; making the power of ubicomp available to novices; and setting technical skills in real contexts. It will also trace how the pedagogy is informed by experiences and research in computer science education.
The Semantic Web provides information contained in the World Wide Web as machine-readable facts. In comparison to a keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents. By clarifying the meaning behind entities, search results are more precise and the semantics simultaneously enable an exploration of semantic relationships. However, unlike keyword searches, a semantic entity-focused search requires that web documents are annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents is time-consuming; in response, automatic annotation services have emerged in recent years. These annotation services take continuous text as input, detect important key terms and named entities and annotate them with semantic entities contained in widely used semantic knowledge bases, such as Freebase or DBpedia. Metadata of video documents require special attention. Semantic analysis approaches for continuous text cannot be applied, because information of a context in video documents originates from multiple sources possessing different reliabilities and characteristics. This thesis presents a semantic analysis approach consisting of a context model and a disambiguation algorithm for video metadata. The context model takes into account the characteristics of video metadata and derives a confidence value for each metadata item. The confidence value represents the level of correctness and ambiguity of the textual information of the metadata item. The lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items derived from the video metadata are analyzed in a specific order from high to low confidence level. Previously analyzed metadata are used as reference points in the context for subsequent disambiguation. 
The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context. The context is created dynamically for each metadata item, taking into account the confidence value and other characteristics. The proposed semantic analysis follows two hypotheses: metadata items of a context should be processed in descending order of their confidence value, and the metadata that pertains to a context should be limited by content-based segmentation boundaries. The evaluation results support the proposed hypotheses and show increased recall and precision for annotated entities, especially for metadata that originates from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for the purpose of semantic exploration of videos.
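The confidence-ordered processing strategy can be sketched as follows. The item texts, confidence values and the `disambiguate` placeholder are hypothetical; a real implementation would rank candidate entities from a knowledge base such as DBpedia by descriptive-text similarity and semantic relatedness to the entities already in the context:

```python
# Hypothetical metadata items with confidence values (0..1)
items = [
    {"text": "Berlin", "confidence": 0.4},
    {"text": "Angela Merkel", "confidence": 0.9},
    {"text": "press conference", "confidence": 0.6},
]

context = []  # previously disambiguated entities serve as reference points

def disambiguate(item, context):
    """Placeholder for candidate ranking against a knowledge base; here it
    just fabricates an entity identifier from the surface text."""
    return f"kb:{item['text'].replace(' ', '_')}"

# Process items from high to low confidence, growing the context as we go,
# so that reliable items anchor the disambiguation of ambiguous ones.
for item in sorted(items, key=lambda x: x["confidence"], reverse=True):
    context.append(disambiguate(item, context))

print(context)
```

The second hypothesis would additionally reset or restrict `context` at content-based segment boundaries of the video.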
Continuous synthesis of pyridocarbazoles and initial photophysical and bioprobe characterization
(2013)
Pyridocarbazoles, when ligated to transition metals, yield high-affinity kinase inhibitors. While batch photocyclizations enable the synthesis of these heterocycles, the non-oxidative Mallory reaction only provides modest yields and difficult-to-purify mixtures. We demonstrate here that a flow-based Mallory cyclization provides superior results and enables observation of a clear isosbestic point. The flow method allowed us to rapidly synthesize ten pyridocarbazoles and, for the first time, to document their interesting photophysical attributes. Preliminary characterization reveals that these molecules might be a new class of fluorescent bioprobe.
African states are often called corrupt indicating that the political system in Africa differs from the one prevalent in the economically advanced democracies. This however does not give us any insight into what makes corruption the ruling norm of African statehood. Thus we must turn to the overly neglected theoretical work on the political economy of Africa in order to determine how the poverty of governance in Africa is firmly anchored both in Africa’s domestic socioeconomic reality, as well as in the region’s role in the international economic order. Instead of focusing on increased monitoring, enforcement and formal democratic procedures, this book integrates economic analysis with political theory in order to arrive at a better understanding of the political-economic roots of corruption in Sub-Saharan Africa.
This cumulative dissertation explored the detection of the natural background of fast neutrons, the so-called cosmic-ray neutron sensing (CRS) approach, to measure field-scale soil moisture in cropped fields. Primary cosmic rays penetrate the upper atmosphere and interact with atmospheric particles. This interaction results in a cascade of high-energy neutrons, which continue traveling through the atmospheric column. Finally, neutrons penetrate the soil surface and a second cascade is produced, consisting of the so-called secondary cosmic-ray neutrons (fast neutrons). Some of the fast neutrons are absorbed by hydrogen (soil moisture); the remaining neutrons scatter back to the atmosphere, where their flux is inversely correlated with the soil moisture content, therefore allowing a non-invasive, indirect measurement of soil moisture. The CRS methodology is mainly evaluated based on a field study carried out on a farmland in Potsdam (Brandenburg, Germany) over three crop seasons with corn, sunflower and winter rye; a bare soil period; and two winter periods. In addition, field monitoring was carried out in the Schaefertal catchment (Harz, Germany) for long-term testing of CRS against ancillary data. At the first experimental site, the CRS method was calibrated and validated using different approaches to soil moisture measurement. In the period with corn, soil moisture at the local scale was measured only near the surface, and in the subsequent periods (sunflower and winter rye) sensors were placed at three depths (5 cm, 20 cm and 40 cm). The direct transfer of CRS calibration parameters between two vegetation periods led to a large overestimation of soil moisture by the CRS. Part of this overestimation was attributed to an underestimation of the CRS observation depth during the corn period (approximately 5–10 cm), which was later recalculated to values between 20 cm and 40 cm in the other crop periods (sunflower and winter rye). 
According to the results from these monitoring periods with different crops, vegetation played an important role in the CRS measurements. Water contained in crop biomass, above and below ground, produces substantial neutron moderation. This effect was accounted for by a simple model for neutron corrections due to vegetation, which followed crop development and reduced the overall CRS soil moisture error for the sunflower and winter rye periods. At the Potsdam farmland, soil hydraulic parameters were also inversely estimated at the field scale, using CRS soil moisture from the sunflower period. A modelling framework coupling HYDRUS-1D and PEST was applied, and the field-scale soil hydraulic properties were subsequently compared against local-scale soil properties (modelling and measurements). Successful results were obtained here despite the large difference in support volume. This simple modelling framework highlights future research directions for using CRS soil moisture to parameterize field-scale models. In the Schaefertal catchment, CRS measurements were verified using precipitation and evapotranspiration data. At monthly resolution, CRS soil water storage was well correlated with these two weather variables. However, the water balance could not be closed due to missing information on other compartments such as groundwater and catchment discharge. The influence of snow on natural neutrons was also evaluated in the catchment. As also observed at the Potsdam farmland, the CRS signal was strongly influenced by snowfall and snow accumulation. A simple strategy to measure snow was presented for the Schaefertal case. 
In conclusion, this dissertation showed that (a) cosmic-ray neutron sensing (CRS) has strong potential to provide feasible measurements of mean soil moisture at the field scale in cropped fields; (b) CRS soil moisture is strongly influenced by other environmental water pools such as vegetation and snow, which should therefore be considered in the analysis; (c) CRS water storage can be used in soil hydrology modelling to determine soil hydraulic parameters; and (d) the CRS approach has strong potential for long-term monitoring of soil moisture and for addressing studies of the water balance.
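The inverse relation between neutron flux and soil moisture that underlies CRS is commonly expressed through the calibration function of Desilets et al. (2010). The sketch below uses the widely cited standard shape parameters; the dry-soil count rate `N0` is a hypothetical value that would in practice come from site calibration (the step whose transfer between crop periods proved problematic above):

```python
def crs_soil_moisture(N, N0, a0=0.0808, a1=0.372, a2=0.115):
    """Gravimetric soil moisture (g/g) from a neutron count rate N, using the
    standard shape function of Desilets et al. (2010); N0 is the count rate
    over dry soil, obtained by site calibration."""
    return a0 / (N / N0 - a1) - a2

N0 = 1000.0  # hypothetical calibrated dry-soil count rate (counts/h)
for N in (900.0, 700.0, 500.0):
    print(f"N = {N:.0f} -> theta = {crs_soil_moisture(N, N0):.3f} g/g")
```

Lower count rates map to wetter soil, and any additional hydrogen pool (crop biomass, snow) suppresses the count rate in the same way, which is why the vegetation and snow corrections discussed above are necessary.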
Background: Recent studies have demonstrated a superior diagnostic accuracy of cardiovascular magnetic resonance (CMR) for the detection of coronary artery disease (CAD). We aimed to determine the comparative cost-effectiveness of CMR versus single-photon emission computed tomography (SPECT).
Methods: Based on Bayes' theorem, a mathematical model was developed to compare the cost-effectiveness and utility of CMR with SPECT in patients with suspected CAD. Invasive coronary angiography served as the standard of reference. Effectiveness was defined as the accurate detection of CAD, and utility as the number of quality-adjusted life-years (QALYs) gained. Model input parameters were derived from the literature, and the cost analysis was conducted from a German health care payer's perspective. Extensive sensitivity analyses were performed.
Results: Reimbursement fees represented only a minor fraction of the total costs incurred by a diagnostic strategy. Increases in the prevalence of CAD were generally associated with improved cost-effectiveness and decreased costs per utility unit (Delta QALY). By comparison, CMR was consistently more cost-effective than SPECT, and showed lower costs per QALY gained. Given a CAD prevalence of 0.50, CMR was associated with total costs of €6,120 for one patient correctly diagnosed as having CAD and with €2,246 per Delta QALY gained versus €7,065 and €2,931 for SPECT, respectively. Above a threshold value of CAD prevalence of 0.60, proceeding directly to invasive angiography was the most cost-effective approach.
Conclusions: In patients with low to intermediate CAD probabilities, CMR is more cost-effective than SPECT. Moreover, lower costs per utility unit indicate a superior clinical utility of CMR.
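The Bayes' theorem step underlying such a diagnostic model can be illustrated with a minimal sketch. The sensitivity and specificity values below are hypothetical placeholders for illustration only, not the figures used in the study.

```python
def post_test_probability(prevalence, sensitivity, specificity):
    """P(CAD | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    return sensitivity * prevalence / p_positive

# Hypothetical test characteristics at the intermediate prevalence of 0.50:
print(round(post_test_probability(0.50, 0.89, 0.80), 3))  # → 0.817
```

A higher pre-test prevalence raises the post-test probability, which is one reason prevalence drives the cost per correct diagnosis in such models.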
We introduce the notion of coupling distances on the space of Lévy measures in order to quantify rates of convergence towards a limiting Lévy jump diffusion in terms of its characteristic triplet, in particular in terms of the tail of the Lévy measure. The main result yields an estimate of the Wasserstein-Kantorovich-Rubinstein distance on path space between two Lévy diffusions in terms of the coupling distances. We aim to apply this to obtain precise rates of convergence for Markov chain approximations and a statistical goodness-of-fit test for low-dimensional conceptual climate models with paleoclimatic data.
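For orientation, the Wasserstein-Kantorovich-Rubinstein distance admits the standard textbook primal and dual forms below; the path-space variant used in the abstract above may differ in details such as the choice of metric.

```latex
W_1(\mu,\nu)
  \;=\; \inf_{\pi \in \Pi(\mu,\nu)} \int d(x,y)\,\pi(\mathrm{d}x,\mathrm{d}y)
  \;=\; \sup_{\operatorname{Lip}(f)\le 1}
        \left( \int f\,\mathrm{d}\mu - \int f\,\mathrm{d}\nu \right),
```

where $\Pi(\mu,\nu)$ denotes the set of couplings of $\mu$ and $\nu$.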
Under standard conditions the cross metathesis of allyl alcohols and methyl acrylate is accompanied by the formation of ketones, resulting from uncontrolled and undesired double bond isomerization. By conducting the CM in the presence of phenol, the catalyst loading and the reaction time required for quantitative conversion can be reduced, and isomerization can be suppressed. On the other hand, consecutive isomerization can be deliberately promoted by evaporating excess methyl acrylate after completing cross metathesis and by adding a base or silane as chemical triggers.
Crowded field spectroscopy and the search for intermediate-mass black holes in globular clusters
(2013)
Globular clusters are dense and massive star clusters that are an integral part of any major galaxy. Careful studies of their stars (a single cluster may contain several million of them) have revealed that the ages of many globular clusters are comparable to the age of the Universe. These remarkable ages make them valuable probes for the exploration of structure formation in the early universe or the assembly of our own galaxy, the Milky Way. A topic of current research is the question whether globular clusters harbour massive black holes in their centres. These black holes would bridge the gap from stellar-mass black holes, which represent the final stage in the evolution of massive stars, to the supermassive ones that reside in the centres of galaxies. For this reason, they are referred to as intermediate-mass black holes. The most reliable method to detect and to weigh a black hole is to study the motion of stars inside its sphere of influence. The measurement of Doppler shifts via spectroscopy allows one to carry out such dynamical studies. However, spectroscopic observations in dense stellar fields such as Galactic globular clusters are challenging. As a consequence of diffraction processes in the atmosphere and the finite resolution of a telescope, observed stars have a finite width characterized by the point spread function (PSF); hence they appear blended in crowded stellar fields. Classical spectroscopy does not preserve any spatial information, so it is impossible to separate the spectra of blended stars and to measure their velocities. Yet methods have been developed to perform imaging spectroscopy; one of them is integral field spectroscopy. In the course of this work, the first systematic study of the potential of integral field spectroscopy for the analysis of dense stellar fields is carried out.
To this aim, a method is developed to reconstruct the PSF from the observed data and to use this information to extract the stellar spectra. Based on dedicated simulations, predictions are made about the number of stellar spectra that can be extracted from a given data set and about the quality of those spectra. Furthermore, the influence of uncertainties in the recovered PSF on the extracted spectra is quantified. The results clearly show that, compared to traditional approaches, this method makes a significantly larger number of stars accessible to spectroscopic analysis. This systematic study goes hand in hand with the development of a software package that automatizes the individual steps of the data analysis. It is applied to data for three Galactic globular clusters, M3, M13, and M92. The data were observed with the PMAS integral field spectrograph at the Calar Alto observatory with the aim of constraining the presence of intermediate-mass black holes in the centres of the clusters. The application of the new analysis method yields samples of about 80 stars per cluster. These are by far the largest spectroscopic samples obtained so far in the centre of any of the three clusters. In the further analysis, Jeans models are calculated for each cluster that predict the velocity dispersion based on an assumed mass distribution inside the cluster. The comparison with the observed stellar velocities shows that in none of the three clusters is a massive black hole required to explain the observed kinematics. Instead, the observations rule out any black hole in M13 with a mass higher than 13000 solar masses at the 99.7% level. For the other two clusters, this limit lies at significantly lower masses, namely 2500 solar masses in M3 and 2000 solar masses in M92. In M92, it is possible to lower this limit even further by a combined analysis of the extracted stars and the unresolved stellar component.
This component consists of the numerous stars in the cluster that appear unresolved in the integral field data. The final limit of 1300 solar masses is the lowest limit obtained so far for a massive globular cluster.
Calcium phosphate nanofibers with a diameter of only a few nanometers and a cotton-ball-like aggregate morphology have been reported several times in the literature. Although fiber formation seems reproducible in a variety of conditions, the crystal structure and chemical composition of the fibers have been elusive. Using scanning transmission electron microscopy, low dose electron (nano)diffraction, energy-dispersive X-ray spectroscopy, and energy-filtered transmission electron microscopy, we have assigned crystal structures and chemical compositions to the fibers. Moreover, we demonstrate that the mineralization process yields true polymer/calcium phosphate hybrid materials where the block copolymer template is closely associated with the calcium phosphate.
Deepening understanding
(2013)
1. Key concepts
2. What students should have done
3. What students did
4. Deepening understanding
5. General description of deepening understanding
6. Why is deepening understanding an important stage?
7. How does deepening understanding occur in the lessons and some examples
8. Possible difficulties
9. Conclusion
Data integration aims to combine data from different sources and to provide users with a unified view of these data. This task is as challenging as it is valuable. In this thesis we propose algorithms for dependency discovery that provide the information necessary for data integration. We focus on inclusion dependencies (INDs) in general and on a special form named conditional inclusion dependencies (CINDs): (i) INDs enable the discovery of structure in a given schema. (ii) INDs and CINDs support the discovery of cross-references or links between schemas. An IND “A in B” simply states that all values of attribute A are included in the set of values of attribute B. We propose an algorithm that discovers all inclusion dependencies in a relational data source. The challenge of this task lies in the complexity of testing all attribute pairs and, further, of comparing all of each attribute pair's values. The complexity of existing approaches depends on the number of attribute pairs, while ours depends only on the number of attributes. Thus, our algorithm makes it possible to profile entirely unknown data sources with large schemas by discovering all INDs. Further, we provide an approach to extract foreign keys from the identified INDs. We extend our IND discovery algorithm to also find three special types of INDs: (i) composite INDs, such as “AB in CD”, (ii) approximate INDs, which allow a certain number of values of A to be missing from B, and (iii) prefix and suffix INDs, which represent special cross-references between schemas. Conditional inclusion dependencies are inclusion dependencies with a limited scope defined by conditions over several attributes. Only the matching part of the instance must adhere to the dependency. We generalize the definition of CINDs, distinguishing covering and completeness conditions, and define quality measures for conditions. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds.
The challenge for this task is twofold: (i) Which (and how many) attributes should be used for the conditions? (ii) Which attribute values should be chosen for the conditions? Previous approaches rely on pre-selected condition attributes or can only discover conditions that apply at quality thresholds of 100%. Our approaches were motivated by two application domains: data integration in the life sciences and link discovery for linked open data. We show the efficiency and the benefits of our approaches for use cases in these domains.
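The core idea behind unary IND discovery can be sketched as follows: each attribute's value set is materialized once, so the expensive work scales with the number of attributes rather than with repeated pairwise value comparisons. This is an illustration of the concept only, not the thesis's algorithm; the function and table names are invented for the example.

```python
def discover_inds(tables):
    """tables: {table: {attribute: iterable of values}} -> list of unary INDs.
    Returns pairs (lhs, rhs) meaning 'lhs in rhs'."""
    # Build each attribute's value set exactly once.
    value_sets = {
        (t, a): set(vals)
        for t, attrs in tables.items()
        for a, vals in attrs.items()
    }
    inds = []
    for lhs, lvals in value_sets.items():
        for rhs, rvals in value_sets.items():
            if lhs != rhs and lvals <= rvals:  # subset test: 'A in B'
                inds.append((lhs, rhs))
    return inds

tables = {
    "orders": {"customer_id": [1, 2, 2, 3]},
    "customers": {"id": [1, 2, 3, 4]},
}
print(discover_inds(tables))
# [(('orders', 'customer_id'), ('customers', 'id'))]
```

A discovered IND such as `orders.customer_id in customers.id` is exactly the kind of evidence from which foreign key candidates can then be extracted.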
The sharply rising level of atmospheric carbon dioxide resulting from anthropogenic emissions is one of the greatest environmental concerns facing our civilization today. Metal-organic frameworks (MOFs) are a new class of materials constructed from metal-containing nodes bonded to organic bridging ligands. MOFs could serve as an ideal platform for the development of next-generation CO2 capture materials owing to their large capacity for the adsorption of gases and their structural and chemical tunability. The ability to rationally select the framework components is expected to allow the affinity of the internal pore surface toward CO2 to be precisely controlled, facilitating materials properties that are optimized for the specific type of CO2 capture to be performed (post-combustion capture, pre-combustion capture, or oxy-fuel combustion) and potentially even for the specific power plant in which the capture system is to be installed. For this reason, significant effort has been made in recent years to improve the gas separation performance of MOFs, and some studies evaluating the prospects of deploying these materials in real-world CO2 capture systems have begun to emerge. We have developed six new MOFs, denoted IFPs (IFP-5, -6, -7, -8, -9, -10; IFP = Imidazolate Framework Potsdam), and two hydrogen-bonded molecular building blocks (MBBs, named 1 and 2 for the Zn- and Co-based materials, respectively), which have been synthesized, characterized and applied for gas storage. The IFP structures possess 1D hexagonal channels. The metal centre and the substituent groups at the C2 position of the linker protrude into the open channels and determine their accessible diameter. Interestingly, the channel diameters (range: 0.3 to 5.2 Å) of the IFP structures are tuned by the metal centre (Zn, Co and Cd) and by the substituent at the C2 position of the imidazolate linker. Moreover, the hydrogen-bonded MBBs 1 and 2 are formed by in situ functionalization of a ligand under solvothermal conditions.
Two different types of channels are observed for 1 and 2. Both materials contain solvent-accessible void space; the solvent can easily be removed under high vacuum. The porous frameworks maintain their crystalline integrity even without solvent molecules. N2, H2, CO2 and CH4 gas sorption isotherms were measured. The gas uptake capacities are comparable with those of other frameworks, and are reduced when the channel diameter is narrow. For example, the channel diameter of IFP-5 (3.8 Å) is slightly smaller than that of IFP-1 (4.2 Å); hence, its gas uptake capacity and Brunauer-Emmett-Teller (BET) surface area are slightly lower than those of IFP-1. The selectivity depends not only on the size of the gas components (kinetic diameter: CO2 3.3 Å, N2 3.6 Å and CH4 3.8 Å) but also on the polarizability of the surface and of the gas components. IFP-5 and -6 have potential applications for the separation of CO2 and CH4 from N2-containing gas mixtures and from gas mixtures containing both CO2 and CH4. The gas sorption isotherms of IFP-7, -8, -9 and -10 exhibited hysteretic behavior due to flexible alkoxy (e.g., methoxy and ethoxy) substituents. This phenomenon is a gate effect, which is rarely observed in microporous MOFs. IFP-7 (Zn-centred) has a flexible methoxy substituent; this is the first example in which a flexible methoxy substituent shows gate-opening behavior in a MOF. Owing to the methoxy groups in the hexagonal channels, IFP-7 acts as a molecular gate for N2 gas, and the polar methoxy groups and channel walls give rise to a wide hysteretic isotherm during gas uptake. The estimated BET surface area for 1 is 471 m2 g-1 and the Langmuir surface area is 570 m2 g-1. This surface area is slightly higher than those of azolate-based hydrogen-bonded supramolecular assemblies and comparable to or higher than those of some hydrogen-bonded porous organic molecules.
Developing critical thinking
(2013)
Developing lessons
(2013)
1. Developing lesson plans and choosing strategies
2. The aims of the lesson plans in general
3. Strategies as a means to achieve the aims of the lesson plans
4. Evaluating the quality of lesson plans
5. Difficulties during lessons and adaptations afterwards
6. Student teachers’ overall feeling about their work
7. Using the strategies in future classes
8. Conclusion
Background: Adaptive behavioural strategies promoting co-occurrence of competing species are known to result from a sympatric evolutionary past. Strategies should be different for indirect resource competition (exploitation, e.g., foraging and avoidance behaviour) than for direct interspecific interference (e.g., aggression, vigilance, and nest guarding). We studied the effects of resource competition and nest predation in sympatric small mammal species using semi-fossorial voles and shrews, which prey on vole offspring during their sensitive nestling phase. Experiments were conducted in caged outdoor enclosures. Focal common vole mothers (Microtus arvalis) were either caged with a greater white-toothed shrew (Crocidura russula) as a potential nest predator, with an herbivorous field vole (Microtus agrestis) as a heterospecific resource competitor, or with a conspecific resource competitor.
Results: We studied behavioural adaptations of vole mothers during pregnancy, parturition, and early lactation, specifically modifications of the burrow architecture and activity at burrow entrances. Further, we measured pre- and postpartum faecal corticosterone metabolites (FCMs) of mothers to test for elevated stress hormone levels. Only in the presence of the nest predator were prepartum FCMs elevated, but we found no loss of vole nestlings and no differences in nestling body weight in the presence of the nest predator or the heterospecific resource competitor. Although the presence of both the shrew and the field vole induced prepartum modifications to the burrow architecture, only nest predators caused an increase in vigilance time at burrow entrances during the sensitive nestling phase.
Conclusion: Voles displayed an adequate behavioural response for both resource competitors and nest predators. They modified burrow architecture to improve nest guarding and increased their vigilance at burrow entrances to enhance offspring survival chances. Our study revealed differential behavioural adaptations to resource competitors and nest predators.
We consider systems of Euler-Lagrange equations with two degrees of freedom and with a Lagrangian that is quadratic in velocities. For this class of equations the generic case of the equivalence problem is solved with respect to point transformations. Using Lie's infinitesimal method we construct a basis of differential invariants and invariant differentiation operators for such systems. We describe certain types of Lagrangian systems in terms of their invariants. The results are illustrated by several examples.
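For concreteness, a Lagrangian quadratic in velocities with two degrees of freedom has the standard general form below (summation over repeated indices, i, j = 1, 2); this is the textbook form of the class, not notation taken from the paper itself.

```latex
L(x,\dot{x}) \;=\; \tfrac{1}{2}\, a_{ij}(x)\,\dot{x}^i \dot{x}^j
  \;+\; b_i(x)\,\dot{x}^i \;+\; c(x),
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot{x}^i}
  \;-\; \frac{\partial L}{\partial x^i} \;=\; 0,
\quad i = 1, 2,
```

where the second display gives the associated Euler-Lagrange equations.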
Background: DNA fragments carrying internal recognition sites for the restriction endonucleases intended for cloning into a target plasmid pose a challenge for conventional cloning.
Results: A method for directional insertion of DNA fragments into plasmid vectors has been developed. The target sequence is amplified from a template DNA sample by PCR using two oligonucleotides each containing a single deoxyinosine base at the third position from the 5' end. Treatment of such PCR products with endonuclease V generates 3' protruding ends suitable for ligation with vector fragments created by conventional restriction endonuclease reactions.
Conclusions: The developed approach generates terminal cohesive ends without the use of Type II restriction endonucleases, and is thus independent of the DNA sequence. Due to PCR amplification, minimal amounts of template DNA are required. Using the robust Taq enzyme or a proofreading Pfu DNA polymerase mutant, the method is applicable to a broad range of insert sequences. Appropriate primer design enables direct incorporation of terminal DNA sequence modifications such as tag addition, insertions, deletions and mutations into the cloning strategy. Further, the restriction sites of the target plasmid can be either retained or removed.
This thesis gives formal definitions of discourse-givenness, coreference and reference, and reports on experiments with computational models of discourse-givenness of noun phrases for English and German. The definitions are based on Bach's (1987) work on reference, Kibble and van Deemter's (2000) work on coreference, and Kamp and Reyle's Discourse Representation Theory (1993). For the experiments, the following corpora with coreference annotation were used: MUC-7, OntoNotes and ARRAU for English, and TueBa-D/Z for German. The classification algorithms cover J48 decision trees, the rule-based learner Ripper, and linear support vector machines. New features are suggested, representing the noun phrase's specificity as well as its context, which lead to a significant improvement in classification quality.
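A classic baseline for this task treats a noun phrase as "given" if it is a pronoun or if its head noun was mentioned earlier in the discourse. The sketch below implements only that heuristic with invented example data; the thesis itself trains classifiers (J48, Ripper, linear SVMs) over far richer specificity and context features.

```python
# Minimal prior-mention baseline for discourse-givenness classification.
PRONOUNS = {"he", "she", "it", "they", "him", "her", "them"}

def classify_givenness(np_heads):
    """np_heads: head lemmas of noun phrases in discourse order.
    Returns a parallel list of 'given' / 'new' labels."""
    seen, labels = set(), []
    for head in np_heads:
        labels.append("given" if head in PRONOUNS or head in seen else "new")
        seen.add(head)
    return labels

print(classify_givenness(["dog", "cat", "dog", "it"]))
# ['new', 'new', 'given', 'given']
```

Such a baseline misses, for example, bridging anaphora and first-mention definites, which is precisely where learned features can help.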
In many biological and environmental applications spatially resolved sensing of molecular oxygen is desirable. A powerful tool for distributed measurements is optical time domain reflectometry (OTDR) which is often used in the field of telecommunications. We combine this technique with a novel optical oxygen sensor dye, triangular-[4] phenylene (TP), immobilized in a polymer matrix. The TP luminescence decay time is 86 ns. The short decay time of the sensor dye is suitable to achieve a spatial resolution of some meters. In this paper we present the development and characterization of a reflectometer in the UV range of the electromagnetic spectrum as well as optical oxygen sensing with different fiber arrangements.
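The link between the sensor dye's decay time and the achievable spatial resolution can be checked with the standard OTDR two-way-travel relation Δz = c·τ/(2n). The fibre refractive index below is an assumed typical value, not one taken from the paper.

```python
# Back-of-the-envelope spatial resolution from the luminescence decay time.
C = 299_792_458.0   # speed of light in vacuum, m/s
tau = 86e-9         # TP luminescence decay time, s (from the abstract)
n = 1.46            # assumed fibre core refractive index

dz = C * tau / (2 * n)  # two-way travel: light passes the fibre twice
print(f"{dz:.1f} m")    # → 8.8 m, consistent with "some meters"
```

A longer decay time would smear the backscattered response over a longer fibre section, which is why short-lived dyes are needed for metre-scale resolution.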
In this paper, doubling in Russian Sign Language and Sign Language of the Netherlands is discussed. In both sign languages different constituents (including verbs, nouns, adjectives, adverbs, and whole clauses) can be doubled. It is shown that doubling in both languages has common functions and exhibits a similar structure, despite some differences. On this basis, a unified pragmatic explanation for many doubling phenomena on both the discourse and the clause-internal levels is provided, namely that the main function of doubling both in RSL and NGT is foregrounding of the doubled information.
This article expands our current knowledge about ministerial selection in coalition governments and analyses why ministerial candidates succeed in acquiring a cabinet position after general elections. It argues that political parties bargain over potential office-holders during government-formation processes, selecting future cabinet ministers from an emerging 'bargaining pool'. The article draws upon a new dataset comprising all ministrable candidates discussed by political parties during eight government-formation processes in Germany between 1983 and 2009. The conditional logit regression analysis reveals that temporal dynamics, such as the day a candidate enters the pool, have a significant effect on his or her success in achieving a cabinet position. Other determinants of ministerial selection discussed in the existing literature, such as party and parliamentary expertise, are less relevant for achieving ministerial office. The article concludes that scholarship on ministerial selection requires a stronger emphasis on its endogenous nature in government formation as well as on the relevance of temporal dynamics in such processes.
Given a large set of records in a database and a query record, similarity search aims to find all records sufficiently similar to the query record. To solve this problem, two main aspects need to be considered: First, to perform effective search, the set of relevant records is defined using a similarity measure. Second, an efficient access method is to be found that performs only a few database accesses and comparisons using the similarity measure. This thesis solves both aspects with an emphasis on the latter. In the first part of this thesis, a frequency-aware similarity measure is introduced. Compared record pairs are partitioned according to frequencies of attribute values. For each partition, a different similarity measure is created: machine learning techniques combine a set of base similarity measures into an overall similarity measure. After that, a similarity index for string attributes is proposed, the State Set Index (SSI), which is based on a trie (prefix tree) interpreted as a nondeterministic finite automaton. For processing range queries, the notion of query plans is introduced in this thesis to describe which similarity indexes to access and which thresholds to apply. The query result should be as complete as possible under some cost threshold. Two query planning variants are introduced: (1) Static planning selects a plan at compile time that is used for all queries. (2) Query-specific planning selects a different plan for each query. For answering top-k queries, the Bulk Sorted Access Algorithm (BSA) is introduced, which retrieves large chunks of records from the similarity indexes using fixed thresholds, and which focuses its efforts on records that are ranked high in more than one attribute and are thus promising candidates. The described components form a complete similarity search system.
Based on prototypical implementations, this thesis shows comparative evaluation results for all proposed approaches on different real-world data sets, one of which is a large person data set from a German credit rating agency.
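The filter-then-verify idea behind such string similarity indexes can be illustrated with a much simpler filter: strings whose lengths differ from the query by more than the threshold cannot be within edit distance k, so the expensive distance computation runs on fewer records. This sketch with invented example records only mimics the general principle; the State Set Index itself is a far more selective trie-based automaton.

```python
def edit_distance(a, b):
    """Levenshtein distance via the standard dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def range_query(records, query, k):
    # Cheap length filter prunes candidates before the expensive verify step.
    candidates = [r for r in records if abs(len(r) - len(query)) <= k]
    return [r for r in candidates if edit_distance(r, query) <= k]

records = ["mueller", "miller", "muller", "schmidt"]
print(range_query(records, "mueller", 1))  # → ['mueller', 'muller']
```

Replacing the length filter with an index that enumerates only states reachable within k errors is what turns this from a linear scan into an efficient access method.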
The genetic code is degenerate; thus, protein evolution does not uniquely determine the coding sequence. One of the puzzles in evolutionary genetics is therefore to uncover evolutionary driving forces that result in specific codon choice. In many bacteria, the first 5-10 codons of protein-coding genes are often codons that are less frequently used in the rest of the genome, an effect that has been argued to arise from selection for slowed early elongation to reduce ribosome traffic jams. However, genome analysis across many species has demonstrated that the region shows reduced mRNA folding consistent with pressure for efficient translation initiation. This raises the possibility that unusual codon usage is a side effect of selection for reduced mRNA structure. Here we discriminate between these two competing hypotheses, and show that in bacteria selection favours codons that reduce mRNA folding around the translation start, regardless of whether these codons are frequent or rare. Experiments confirm that primarily mRNA structure, and not codon usage, at the beginning of genes determines the translation rate.
This thesis presents novel ideas and research findings for the Web of Data – a global data space spanning many so-called Linked Open Data sources. Linked Open Data adheres to a set of simple principles to allow easy access and reuse for data published on the Web. Linked Open Data is by now an established concept and many (mostly academic) publishers adopted the principles, building a powerful web of structured knowledge available to everybody. However, so far, Linked Open Data does not yet play a significant role among common web technologies that currently facilitate a high-standard Web experience. In this work, we thoroughly discuss the state of the art for Linked Open Data and highlight several shortcomings – some of them we tackle in the main part of this work. First, we propose a novel type of data source meta-information, namely the topics of a dataset. This information could be published with dataset descriptions and support a variety of use cases, such as data source exploration and selection. For the topic retrieval, we present an approach coined Annotated Pattern Percolation (APP), which we evaluate with respect to topics extracted from Wikipedia portals. Second, we contribute to entity linking research by presenting an optimization model for joint entity linking, showing its hardness, and proposing three heuristics implemented in the LINked Data Alignment (LINDA) system. Our first solution can exploit multi-core machines, whereas the second and third approach are designed to run in a distributed shared-nothing environment. We discuss and evaluate the properties of our approaches, leading to recommendations which algorithm to use in a specific scenario. The distributed algorithms are among the first of their kind, i.e., approaches for joint entity linking in a distributed fashion.
Also, we illustrate that we can tackle the entity linking problem on the very large scale with data comprising more than 100 millions of entity representations from very many sources. Finally, we approach a sub-problem of entity linking, namely the alignment of concepts. We again target a method that looks at the data in its entirety and does not neglect existing relations. Also, this concept alignment method shall execute very fast to serve as a preprocessing for further computations. Our approach, called Holistic Concept Matching (HCM), achieves the required speed through grouping the input by comparing so-called knowledge representations. Within the groups, we perform complex similarity computations, relation conclusions, and detect semantic contradictions. The quality of our result is again evaluated on a large and heterogeneous dataset from the real Web. In summary, this work contributes a set of techniques for enhancing the current state of the Web of Data. All approaches have been tested on large and heterogeneous real-world input.
The aim of our article is to collect and present information about contemporary programming environments that are suitable for primary education. We studied the ways they implement (or do not implement) some programming concepts, the ways programs are represented and built in order to support young and novice programmers, as well as their suitability to allow different forms of sharing the results of pupils’ work. We present not only a short description of each considered environment and the taxonomy in the form of a table, but also our understanding and opinions on how and why the environments implement the same concepts and ideas in different ways and which concepts and ideas seem to be important to the creators of such environments.
Public debate about energy relations between the EU and Russia is distorted. These distortions present considerable obstacles to the development of true partnership. At the core of the conflict is a struggle for resource rents between energy producing, energy consuming and transit countries. Supposedly secondary aspects, however, are also of great importance. They comprise geopolitics, market access, economic development and state sovereignty. The European Union, having engaged in energy market liberalisation, faces a widening gap between declining domestic resources and continuously growing energy demand. Diverse interests inside the EU prevent the definition of a coherent and respected energy policy. Russia, for its part, is no longer willing to subsidise its neighbouring economies by cheap energy exports. The Russian government engages in assertive policies pursuing Russian interests. In so far, it opts for a different globalisation approach, refusing the role of mere energy exporter. In view of the intensifying struggle for global resources, Russia, with its large energy potential, appears to be a very favourable option for European energy supplies, if not the best one. However, several outcomes of the strategic game between the two partners can be imagined. Engaging in non-cooperative strategies will in the end leave all stakeholders worse off. The European Union should therefore concentrate on securing its partnership with Russia instead of damaging it. Stable cooperation would require accepting that the partner may pursue his own goals, which might differ from one's own interests. The question is, how can a sustainable compromise be found? This thesis finds that a mix of continued dialogue, a tit-for-tat approach bolstered by an international institutional framework, and increased integration efforts appears to be a preferable solution.
We are interested in modeling the Darwinian evolution of a population described by two levels of biological parameters: individuals characterized by a heritable phenotypic trait subject to mutation and natural selection, and cells in these individuals influencing their ability to consume resources and to reproduce. Our models are rooted in the microscopic description of a random (discrete) population of individuals characterized by one or several adaptive traits and cells characterized by their type. The population is modeled as a stochastic point process whose generator captures the probabilistic dynamics over continuous time of birth, mutation and death for individuals and birth and death for cells. The interaction between individuals (resp. between cells) is described by a competition between individual traits (resp. between cell types). We are looking for tractable large population approximations. By combining various scalings on population size, birth and death rates and mutation step, the single microscopic model is shown to lead to contrasting nonlinear macroscopic limits of different nature: deterministic approximations, in the form of ordinary, integro- or partial differential equations, or probabilistic ones, like stochastic partial differential equations or superprocesses.
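A minimal individual-based simulation of such continuous-time birth-and-death dynamics can be sketched with the Gillespie algorithm. The sketch below has a single trait-free individual level with logistic competition and illustrative parameter values; the models in the abstract add traits, mutation and a cell level on top of this skeleton.

```python
import random

def simulate(n0, b, d, c, t_max, rng):
    """Gillespie simulation: each individual gives birth at rate b and
    dies at rate d + c*(N-1), the competition term coupling individuals.
    Returns the population size at time t_max (or 0 on extinction)."""
    t, n = 0.0, n0
    while t < t_max and n > 0:
        birth_rate = b * n
        death_rate = (d + c * (n - 1)) * n
        total = birth_rate + death_rate
        t += rng.expovariate(total)          # exponential waiting time
        if rng.random() < birth_rate / total:
            n += 1
        else:
            n -= 1
    return n

rng = random.Random(42)
final_n = simulate(n0=20, b=2.0, d=1.0, c=0.01, t_max=50.0, rng=rng)
print(final_n)  # fluctuates around the equilibrium (b - d)/c + 1 = 101
```

Letting the population size grow while rescaling the rates is exactly the kind of scaling under which such microscopic models converge to the deterministic or probabilistic macroscopic limits mentioned above.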
Developing rich Web applications can be a complex job, especially when it comes to mobile device support. Web-based environments such as Lively Webwerkstatt can help developers implement such applications by making the development process more direct and interactive. Furthermore, the process of developing software is collaborative, which creates the need for the development environment to offer collaboration facilities. This report describes extensions of the web-based development environment Lively Webwerkstatt that allow it to be used in a mobile environment. The extensions cover collaboration mechanisms and user interface adaptations, as well as event processing and performance measuring on mobile devices.
There are two common approaches to implement a virtual machine (VM) for a dynamic object-oriented language. On the one hand, it can be implemented in a C-like language for best performance and maximum control over the resulting executable. On the other hand, it can be implemented in a language such as Java that allows for higher-level abstractions. These abstractions, such as proper object-oriented modularization, automatic memory management, or interfaces, are missing in C-like languages but they can simplify the implementation of prevalent but complex concepts in VMs, such as garbage collectors (GCs) or just-in-time compilers (JITs). Yet, the implementation of a dynamic object-oriented language in Java eventually results in two VMs on top of each other (double stack), which impedes performance. For statically typed languages, the Maxine VM solves this problem; it is written in Java but can be executed without a Java virtual machine (JVM). However, it is currently not possible to execute dynamic object-oriented languages in Maxine. This work presents an approach to bringing object models and execution models of dynamic object-oriented languages to the Maxine VM, and the application of this approach to Squeak/Smalltalk. The representation of objects in, and the execution of, dynamic object-oriented languages pose certain challenges to the Maxine VM, which lacks the variation points necessary to enable an effortless and straightforward implementation of such languages' execution models. The implementation of Squeak/Smalltalk in Maxine serves as a feasibility study to unveil these missing variation points.
When we read a text, we obtain information at different levels of representation from abstract symbols. A reader’s ultimate aim is the extraction of the meaning of the words and the text. The research of eye movements in reading covers a broad range of psychological systems, ranging from low-level perceptual and motor processes to high-level cognition. Reading in skilled readers proceeds highly automatically, but is at the same time a complex phenomenon of interacting subprocesses. The study of eye movements during reading offers the possibility to investigate cognition via behavioral measures during the exercise of an everyday task. The process of reading is not limited to the directly fixated (or foveal) word but also extends to surrounding (or parafoveal) words, particularly the word to the right of the gaze position. This process may be unconscious, but parafoveal information is necessary for efficient reading. There is an ongoing debate on whether processing of the upcoming word encompasses word meaning (or semantics) or only superficial features. To increase the knowledge about how the meaning of one word helps processing another word, seven experiments were conducted. In these studies, words were exchanged during reading. The degree of relatedness between the word to the right of the currently fixated one and the word subsequently fixated was experimentally manipulated. Furthermore, the time course of the parafoveal extraction of meaning was investigated with two different approaches, an experimental one and a statistical one. As a major finding, fixation times were consistently lower when a semantically related word was presented compared to an unrelated word. Introducing an experimental technique that allows controlling the duration for which words are available, the time course of processing and integrating meaning was evaluated. Results indicated both facilitation and inhibition due to relatedness between the meanings of words.
In a more natural reading situation, the effectiveness of the processing of parafoveal words was sometimes time-dependent and substantially increased with shorter distances between the gaze position and the word. Findings are discussed with respect to theories of eye-movement control. In summary, the results are more compatible with models of distributed word processing. The discussions moreover extend to language differences and technical issues of reading research.
Family
(2013)
Filming illegals
(2013)
A detailed description of the characteristics of antimicrobial peptides (AMPs) is in high demand, since resistance against traditional antibiotics is an emerging problem in medicine. AMPs are part of the innate immune system in every organism, and they are very efficient in the protection against bacteria, viruses, fungi and even cancer cells. Their advantage is that their target is the cell membrane, in contrast to antibiotics, which disturb the metabolism of the respective cell type. This allows AMPs to be more active and faster-acting. The lack of an efficient therapy for some cancer types and the development of resistance against existing antitumor agents make AMPs promising in cancer therapy, besides being an alternative to traditional antibiotics. The aim of this work was the physical-chemical characterization of two fragments of LL-37, a human antimicrobial peptide from the cathelicidin family. The fragments LL-32 and LL-20 exhibited contrasting behavior in biological experiments concerning their activity against bacterial cells, human cells and human cancer cells. LL-32 had an even higher activity than LL-37, while LL-20 had almost no effect. The interaction of the two fragments with model membranes was systematically studied in this work to understand their mode of action. Planar lipid films were mainly applied as model systems, in combination with IR spectroscopy and X-ray scattering methods. Circular dichroism spectroscopy in bulk systems completed the results. In the first approach, the structure of the peptides was determined in aqueous solution and compared to the structure of the peptides at the air/water interface. In bulk, both peptides are in an unstructured conformation. Adsorbed and confined at the air/water interface, the peptides differ drastically in their surface activity as well as in their secondary structure. While LL-32 transforms into an α-helix lying flat at the water surface, LL-20 stays partly unstructured.
This is in good agreement with the high antimicrobial activity of LL-32. In the second approach, experiments with lipid monolayers as biomimetic models for the cell membrane were performed. It could be shown that the peptides fluidize condensed monolayers of negatively charged DPPG, which can be related to the thinning of a bacterial cell membrane. An interaction of the peptides with zwitterionic PCs, as models for mammalian cells, was not clearly observed, even though LL-32 is haemolytic. In the third approach, the lipid monolayers were further adapted to the composition of human erythrocyte membranes by incorporating sphingomyelin (SM) into the PC monolayers. Physical-chemical properties of the lipid films were determined and the influence of the peptides on them was studied. It could be shown that the interaction of the more active LL-32 is strongly increased for heterogeneous lipid films containing both gel and fluid phases, while the interaction of LL-20 with the monolayers was unaffected. The results indicate an interaction of LL-32 with the membrane in a detergent-like way. Additionally, the interaction of the peptides with cancer cells was modelled by incorporating some negatively charged lipids into the PC/SM monolayers, but the increased charge had no effect on the interaction of LL-32. It was concluded that the high anti-cancer activity of the peptide originates from the changed fluidity of the cell membrane rather than from the increased surface charge. Furthermore, similarities to the physical-chemical properties of melittin, an AMP from bee venom, were demonstrated.
In this article, it will be argued that the concept of functional layering – an extension of Hopper’s (1991) concept of layering – can be fruitfully applied to understand the mechanisms behind the sometimes large and messy looking synchronic picture of diverse meanings which one and the same construction can fulfill at a particular point in time. The concept will be used to account for the meaning spectrum of the present-day English progressive, which, it will be argued, no monosemic approach to date can account for. Taking a look at the diachrony of the construction will help to reveal that the various “exceptions” found in the use of the progressive can be understood as reflections of different stages in its development. Older, less grammaticalized or less well-defined usage patterns thus often survive in certain restricted niches next to the newer, more grammaticalized or more clear-cut functions, representing different diachronic layers. In addition to this diachronic motivation for synchronic meaning variety, the article will also address the crucial question of how a present-day hearer of a progressive form is able to decode the specific meaning intended by the speaker based on contextual clues. The article ends with some suggestions for further applications of the concept of functional layering.
Introduction: We examined patterns of genetic divergence in 26 Mediterranean populations of the semi-terrestrial beachflea Orchestia montagui using mitochondrial (cytochrome oxidase subunit I), microsatellite (eight loci) and allozymic data. The species typically forms large populations within heaps of dead seagrass leaves stranded on beaches at the waterfront. We adopted a hierarchical geographic sampling to unravel population structure in a species living at the sea-land transition and, hence, likely subjected to dramatically contrasting forces.
Results: Mitochondrial DNA showed historical phylogeographic breaks among the Adriatic, the Ionian and the remaining basins (Tyrrhenian, Western and Eastern Mediterranean Sea), likely caused by the geological and climatic changes of the Pleistocene. Microsatellites (and, to a lesser extent, allozymes) detected a further subdivision between and within the Western Mediterranean and the Tyrrhenian Sea due to present-day processes. A pattern of isolation by distance was not detected in any of the analyzed data sets.
Conclusions: We conclude that the population structure of O. montagui results from the interplay of two contrasting forces acting on the species' population genetic structure. On the one hand, the species' semi-terrestrial life style would tend to determine the onset of local differences. On the other hand, these differences are partially counter-balanced by the passive movement of migrants between sites, rafting on heaps of dead seagrass leaves carried by sea surface currents. Approximate Bayesian Computation supports dispersal at sea as prevalent over terrestrial regionalism.
Grammatica Grandonica
(2013)
In May 2010, Johann Ernst Hanxleden’s Grammatica Grandonica was rediscovered in Montecompatri (Lazio, Rome). Although historiographers attached much weight to this work, one of the oldest Western grammars of Sanskrit, the precious manuscript had been lost for several decades. The first aim of the present digital publication is to offer a photographic reproduction of the manuscript. This facsimile is accompanied by a double edition: a facing diplomatic edition with the Sanskrit in Malayāḷam script, followed by a transliterated established text.
The Arctic tundra, covering approx. 5.5 % of the Earth’s land surface, is one of the last ecosystems remaining close to its untouched condition. Remote sensing is able to provide information on the structure and function of Arctic ecosystems at regular time intervals and over large spatial scales. However, almost all natural surfaces exhibit individual anisotropic reflectance behavior, which can be described by the bidirectional reflectance distribution function (BRDF). This effect can cause significant changes in the measured surface reflectance depending on solar illumination and sensor viewing geometries. The aim of this thesis is the hyperspectral and spectro-directional reflectance characterization of important Arctic tundra vegetation communities at representative Siberian and Alaskan tundra sites, as a basis for the extraction of vegetation parameters and the normalization of BRDF effects in off-nadir and multi-temporal remote sensing data. Moreover, in preparation for the upcoming German EnMAP (Environmental Mapping and Analysis Program) satellite mission, the understanding of BRDF effects in Arctic tundra is essential for the retrieval of high-quality, consistent and therefore comparable datasets. The research in this doctoral thesis is based on field spectroscopic and field spectro-goniometric investigations of representative Siberian and Alaskan measurement grids. The first objective of this thesis was the development of a lightweight, transportable, and easily managed field spectro-goniometer system which nevertheless provides reliable spectro-directional data. I developed the Manual Transportable Instrument platform for ground-based Spectro-directional observations (ManTIS).
The outcomes of the field spectro-radiometric measurements at the Low Arctic study sites along important environmental gradients (regional climate, soil pH, toposequence, and soil moisture) show that the different plant communities can be distinguished by their nadir-view reflectance spectra. The results reveal, in particular, possibilities for separating the different tundra vegetation communities in the visible (VIS) blue and red wavelength regions. Additionally, the near-infrared (NIR) shoulder and the NIR reflectance plateau, despite their relatively low values due to the low structure of tundra vegetation, are still valuable information sources and can separate communities according to their biomass and vegetation structure. In general, all tundra plant communities show: (i) low maximum NIR reflectance; (ii) a weak or nonexistent visible green reflectance peak in the VIS spectrum; (iii) a narrow “red-edge” region between the red and NIR wavelength regions; and (iv) no distinct NIR reflectance plateau. These common nadir-view reflectance characteristics are essential for understanding the variability of BRDF effects in Arctic tundra. None of the analyzed tundra communities showed an even approximately isotropic reflectance behavior. In general, tundra vegetation communities: (i) usually show the highest BRDF effects in the solar principal plane; (ii) usually show the reflectance maximum in the backward viewing directions and the reflectance minimum in the nadir to forward viewing directions; (iii) usually have a higher degree of reflectance anisotropy in the VIS wavelength region than in the NIR wavelength region; and (iv) show a more bowl-shaped reflectance distribution in longer wavelength bands (>700 nm).
The results of the analysis of the influence of high sun zenith angles on the reflectance anisotropy show that, with increasing sun zenith angles, the reflectance anisotropy changes to azimuthally symmetrical, bowl-shaped reflectance distributions with the lowest reflectance values in the nadir view position. The spectro-directional analyses also show that remote sensing products such as the NDVI or relative absorption depth products are strongly influenced by BRDF effects, and that the anisotropic characteristics of the remote sensing products can differ significantly from the observed BRDF effects in the original reflectance data. However, the results further show that the NDVI can minimize view-angle effects owing to the contrary spectro-directional effects in the red and NIR bands. For the researched tundra plant communities, the overall difference between off-nadir NDVI values and the nadir value increases with increasing sensor viewing angles, but on average never exceeds 10 %. In conclusion, this study shows that changes in the illumination-target-viewing geometry directly alter the reflectance spectra of Arctic tundra communities according to their object-specific BRDFs. Since the different tundra communities show only small, but nonetheless significant, differences in surface reflectance, it is important to include spectro-directional reflectance characteristics in the development of algorithms for remote sensing products.
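The partial cancellation of view-angle effects in the NDVI discussed above follows directly from its ratio form; a minimal sketch (the reflectance values are illustrative, not measured tundra data):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectances."""
    return (nir - red) / (nir + red)

# A purely multiplicative BRDF effect acting equally on both bands cancels:
nadir = ndvi(0.30, 0.05)
off_nadir = ndvi(0.30 * 1.2, 0.05 * 1.2)  # same anisotropy factor in both bands
# nadir == off_nadir here; residual view-angle dependence of the NDVI arises
# only from the band-specific (contrary) anisotropy in the red and NIR regions.
```

This is why the NDVI suppresses, but does not eliminate, the BRDF effects: the red and NIR anisotropies of tundra canopies differ, so the common factor never cancels completely.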
Growing out of the crisis
(2013)
Greece’s currently planned institutional reforms will help to get the country going, but only with limited economic growth. With an economy based primarily on tourism, trade, and agriculture, Greece lacks an established competitive industry and an innovation-friendly environment; the result is a low export ratio, given the small size of the country and its long-standing EU membership. Instead, Greece exports only its talent, with low returns. To become prosperous, the country must better capitalize on its Eurozone membership and add innovative sectors to its economic structure. Given Greece’s hidden assets, such as the attractiveness of the country, a small number of strong research centers, and an impressive diaspora in research, finance and business, we envision a Greek “Silicon Valley” and propose a ten-point policy plan to achieve that goal.
Kotzo shel yod by Y. L. Gordon (1830–1892) – one of the prominent intellectuals of the Jewish Enlightenment period – is a well-known Hebrew poem. This poem is characterized by a daring, sharp criticism of the traditional Jewish institutions, which the author felt required a critical shake-up. Gordon’s literary works were inspired by the Jewish Ashkenazi world. This unique and pioneering literary work was translated into Judeo-Spanish (Ladino). The aim of this article is to present the Sephardic version of Gordon’s poem. The article will attempt to examine the motives behind the translation of this work into Ladino, the reception of the translated work by its readership, and the challenges faced by the anonymous translator who sought to make this work accessible to the Ladino-reading public, in the clear knowledge that this version was quite far removed from the Ashkenazi original from which it sprang.
The supercapacitor is one of the most important energy storage devices, as its construction addresses many of the drawbacks of batteries; the low energy density of current systems, however, is a major issue. In this doctoral dissertation, with a view to attaining supercapacitor systems with energy densities comparable to those of batteries, new heteroatom-containing carbons in the form of particles and three-dimensional films were investigated. A nitrogen-containing material, acrodam, was chosen as the carbon precursor due to its inexpensiveness, high carbonization yield, oligomerizability, etc. The carbon particles were prepared from acrodam together with caesium acetate as a meltable flux agent, and exhibited excellent properties in hydroquinone-loaded sulphuric acid electrolyte, with high energy densities (up to 133.0 Wh kg⁻¹) and sufficient cycle stabilities. These properties are already comparable to those of batteries. In addition, conductive three-dimensional carbon films were fabricated from acrodam oligomer as the precursor by the inexpensive spin-coating method. The films were found to be homogeneous, flat, and void- and crack-free, and high conductivities (up to 334 S cm⁻¹) could be obtained at a carbonization temperature of 1000 °C. Furthermore, a porous three-dimensional carbon film could be formed using an organic template at the first attempt. This finding demonstrates the film’s potential for various applications such as supercapacitor electrodes; the essential absence of contact resistance within the network should contribute to effective electron transport within the electrode. The progress made in this dissertation opens a new way to further enhancing the energy density of supercapacitors, as well as to other applications exceeding current capabilities.
Initiation and perpetuation of inflammatory bowel diseases (IBD) may result from an exaggerated mucosal immune response to the luminal microbiota in a susceptible host. We proposed that this may be caused either 1) by an abnormal microbial composition or 2) by weakening of the protective mucus layer due to excessive mucus degradation, which may give luminal antigens easy access to the host mucosa, triggering inflammation. We tested whether the probiotic Enterococcus faecium NCIMB 10415 (NCIMB) is capable of reducing chronic gut inflammation by changing the existing gut microbiota composition, and aimed to identify mechanisms involved in possible beneficial effects of the probiotic. To identify health-promoting mechanisms of the strain, we used interleukin (IL)-10-deficient mice that spontaneously develop gut inflammation and fed these mice a diet containing NCIMB (10⁶ cells g⁻¹) for 3, 8 and 24 weeks, respectively. Control mice were fed an identically composed diet but without the probiotic strain. No clear-cut differences between the animals were observed in pro-inflammatory cytokine gene expression or in intestinal microbiota composition after probiotic supplementation. However, we observed a low abundance of the mucin-degrading bacterium Akkermansia muciniphila in the mice that were fed NCIMB for 8 weeks. These low cell numbers were associated with significantly lower interferon gamma (IFN-γ) and IFN-γ-inducible protein (IP-10) mRNA levels as compared to the NCIMB-treated mice that were killed after 3 and 24 weeks of intervention. In conclusion, NCIMB was not capable of reducing gut inflammation in the IL-10⁻/⁻ mouse model. To further identify the exact role of A. muciniphila and uncover a possible interaction between this bacterium, NCIMB and the host in relation to inflammation, we performed in vitro studies using HT-29 colon cancer cells. The HT-29 cells were treated with bacterial conditioned media obtained by growing either A. muciniphila (AM-CM) or NCIMB (NCIMB-CM) or both together (COMB-CM) in Dulbecco’s Modified Eagle Medium (DMEM) for 2 h at 37 °C, followed by bacterial cell removal. HT-29 cells treated with COMB-CM displayed reduced cell viability after 18 h (p<0.01), and no viable cells were detected after 24 h of treatment, in contrast to the other groups or heated COMB-CM. Detection of activated caspase-3 in COMB-CM-treated groups indicated that death of the HT-29 cells was brought about by apoptosis. It was concluded that either NCIMB or A. muciniphila produces, during their concomitant presence, a soluble and heat-sensitive factor that influences cell viability in an in vitro system. We currently hypothesize that this factor is a protein, which has not yet been identified. Based on the potential effect of A. muciniphila on inflammation (in vivo) and cell viability (in vitro) in the presence of NCIMB, we investigated how the presence of A. muciniphila affects the severity of an intestinal Salmonella enterica Typhimurium (STm)-induced gut inflammation, using gnotobiotic C3H mice with a background microbiota of eight bacterial species (SIHUMI, referred to as simplified human intestinal microbiota). Presence of A. muciniphila in STm-infected SIHUMI (SIHUMI-AS) mice caused significantly increased histopathology scores and elevated mRNA levels of IFN-γ, IP-10, tumor necrosis factor alpha (TNF-α), IL-12, IL-17 and IL-6 in cecal and colonic tissue. The number of mucin-filled goblet cells was 2- to 3-fold lower in cecal tissue of SIHUMI-AS mice compared to SIHUMI mice associated with STm (SIHUMI-S) or A. muciniphila (SIHUMI-A) or SIHUMI mice. Reduced goblet cell numbers significantly correlated with increased IFN-γ (r = -0.86, P<0.001) in all infected mice. In addition, loss of cecal mucin sulphation was observed in SIHUMI-AS mice. The concomitant presence of A. muciniphila and STm resulted in a drastic change in the microbiota composition of the SIHUMI consortium. The proportion of Bacteroides thetaiotaomicron, which made up 80–90% in SIHUMI, SIHUMI-A and SIHUMI-S mice, was completely displaced by STm in SIHUMI-AS mice, where STm contributed 94% of total bacteria. These results suggest that A. muciniphila exacerbates STm-induced intestinal inflammation through its ability to disturb host mucus homeostasis. In conclusion, abnormal microbiota composition together with excessive mucus degradation contributes to severe intestinal inflammation in a susceptible host.
Background: The linear noise approximation (LNA) is commonly used to predict how noise is regulated and exploited at the cellular level. These predictions are exact for reaction networks composed exclusively of first-order reactions, or for networks involving bimolecular reactions and large numbers of molecules. It is, however, well known that gene regulation involves bimolecular interactions with molecule numbers as small as a single copy of a particular gene. It is therefore questionable how reliable the LNA predictions are for these systems.
Results: We implement in the software package intrinsic Noise Analyzer (iNA), a system size expansion based method which calculates the mean concentrations and the variances of the fluctuations to an order of accuracy higher than the LNA. We then use iNA to explore the parametric dependence of the Fano factors and of the coefficients of variation of the mRNA and protein fluctuations in models of genetic networks involving nonlinear protein degradation, post-transcriptional, post-translational and negative feedback regulation. We find that the LNA can significantly underestimate the amplitude and period of noise-induced oscillations in genetic oscillators. We also identify cases where the LNA predicts that noise levels can be optimized by tuning a bimolecular rate constant whereas our method shows that no such regulation is possible. All our results are confirmed by stochastic simulations.
Conclusion: The software iNA allows the investigation of parameter regimes where the LNA fares well and where it does not. We have shown that the parametric dependence of the coefficients of variation and Fano factors for common gene regulatory networks is better described by including terms of higher order than the LNA in the system size expansion. This analysis is considerably faster than stochastic simulations, which require extensive ensemble averaging to obtain statistically meaningful results. Hence iNA is well suited for performing computationally efficient and quantitative studies of intrinsic noise in gene regulatory networks.
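For intuition about the quantities iNA reports, the Fano factor (variance/mean) and coefficient of variation (standard deviation/mean) can be estimated from a Gillespie simulation of the simplest gene expression model (a hedged sketch with illustrative rate constants, not iNA's own method). For this purely first-order network the stationary copy-number distribution is Poisson, so the Fano factor is 1 and the LNA is exact; deviations appear once bimolecular steps enter:

```python
import random

def simulate_mrna(k=5.0, gamma=1.0, t_max=500.0, seed=2):
    """Gillespie simulation of constitutive expression:
    DNA --k--> mRNA, mRNA --gamma--> 0.
    Returns (copy number, dwell time) pairs for time-weighted statistics."""
    rng = random.Random(seed)
    t, n, dwell = 0.0, 0, []
    while t < t_max:
        total = k + gamma * n          # total propensity
        dt = rng.expovariate(total)
        dwell.append((n, dt))
        t += dt
        if rng.random() < k / total:   # synthesis event
            n += 1
        else:                          # degradation event
            n -= 1
    return dwell

def fano_and_cv(dwell):
    """Time-weighted mean/variance -> Fano factor and coefficient of variation."""
    total_time = sum(dt for _, dt in dwell)
    mean = sum(n * dt for n, dt in dwell) / total_time
    var = sum((n - mean) ** 2 * dt for n, dt in dwell) / total_time
    return var / mean, var ** 0.5 / mean
```

For Poisson statistics at mean k/γ = 5 the coefficient of variation is 1/√5 ≈ 0.45. Nonlinear degradation or feedback breaks this exactness, which is precisely where the higher-order system size expansion terms computed by iNA become important.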
HPI Future SOC Lab
(2013)
The “HPI Future SOC Lab” is a cooperation of the Hasso-Plattner-Institut (HPI) and industrial partners. Its mission is to enable and promote exchange and interaction between the research community and the industrial partners. The HPI Future SOC Lab provides researchers with free-of-charge access to a complete infrastructure of state-of-the-art hardware and software. This infrastructure includes components which might be too expensive for an ordinary research environment, such as servers with up to 64 cores. The offerings address researchers particularly from, but not limited to, the areas of computer science and business information systems. Main areas of research include cloud computing, parallelization, and in-memory technologies. This technical report presents the results of research projects executed in 2012. Selected projects presented their results on June 18th and November 26th, 2012 at the Future SOC Lab Day events.
HPI Future SOC Lab
(2013)
Together with industrial partners, Hasso-Plattner-Institut (HPI) is currently establishing an “HPI Future SOC Lab,” which will provide a complete infrastructure for research on on-demand systems. The lab utilizes the latest multi-/many-core hardware and supports its practical implementation and testing as well as its further development. The necessary components for such a highly ambitious project are provided by renowned companies: Fujitsu and Hewlett Packard provide their latest 4- and 8-way servers with 1–2 TB RAM, SAP makes available its latest Business byDesign (ByD) system in its most complete version, EMC² provides high-performance storage systems, and VMware offers virtualization solutions. The lab will operate on the basis of real data from large enterprises. The HPI Future SOC Lab, which will be open for use by interested researchers from other universities as well, will provide an opportunity to study real-life complex systems and follow new ideas all the way to their practical implementation and testing. This technical report presents the results of research projects executed in 2011. Selected projects presented their results on June 15th and October 26th, 2011 at the Future SOC Lab Day events.