The paper presents a novel approach to explaining word order variation in the early Germanic languages. Initial observations about verb placement as a device marking types of rhetorical relations made on data from Old High German (cf. Hinterhölzl & Petrova 2005) are now reconsidered on a larger scale and compared with evidence from other early Germanic languages. The paper claims that the identification of information-structural domains in a sentence is best achieved by taking into account the interaction between the pragmatic features of discourse referents and properties of discourse organization.
The author's recently published monograph on Alexander von Humboldt[1] describes the multiple images of this great cultural icon. The book is a metabiographical study that shows how from the middle of the nineteenth century to the present day Humboldt has served as a nucleus of crystallisation for a variety of successive socio-political ideologies, each producing its own distinctive representation of him. The historiographical implications of this biographical diversity are profound and support current attempts to understand historical scholarship in terms of memory cultures.
We consider the problem of testing whether the density of a multivariate random variable can be expressed by a prespecified copula function and the marginal densities. The proposed test procedure is based on the asymptotic normality of the properly standardized integrated squared distance between a multivariate kernel density estimator and an estimator of its expectation under the hypothesis. The test of independence is a special case of this approach.
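The distance at the heart of this procedure can be illustrated for the independence special case. The sketch below is a minimal illustration, not the paper's actual test construction: the sample, the evaluation grid, and the default bandwidths are all hypothetical choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical bivariate sample; under H0 the joint density factorises
# into the product of its marginals (the independence special case).
x = rng.normal(size=500)
y = rng.normal(size=500)

kde_joint = gaussian_kde(np.vstack([x, y]))       # multivariate kernel estimate
kde_x, kde_y = gaussian_kde(x), gaussian_kde(y)   # marginal estimates

# Integrated squared distance between the joint KDE and the
# hypothesised (product) density, approximated by a Riemann sum.
grid = np.linspace(-3.0, 3.0, 60)
gx, gy = np.meshgrid(grid, grid)
pts = np.vstack([gx.ravel(), gy.ravel()])
diff = kde_joint(pts) - kde_x(gx.ravel()) * kde_y(gy.ravel())
cell = (grid[1] - grid[0]) ** 2
isd = float(np.sum(diff**2) * cell)
```

The test itself would standardize this distance by its estimated mean and variance and compare it to a normal quantile; for an independent sample the raw distance stays close to zero.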
G protein-coupled receptor (GPCR) genes are large gene families in every animal, sometimes making up 1-2% of the animal's genome. Of all insect GPCRs, the neurohormone (neuropeptide, protein hormone, biogenic amine) GPCRs are especially important, because they, together with their ligands, occupy a high hierarchical position in the physiology of insects and steer crucial processes such as development, reproduction, and behavior. In this paper, we give a review of our current knowledge on Drosophila melanogaster GPCRs and use this information to annotate the neurohormone GPCR genes present in the recently sequenced genome of the honey bee Apis mellifera. We found 35 neuropeptide receptor genes in the honey bee (44 in Drosophila) and two genes coding for leucine-rich repeat-containing protein hormone GPCRs (4 in Drosophila). In addition, the honey bee has 19 biogenic amine receptor genes (21 in Drosophila). The larger numbers of neurohormone receptors in Drosophila are probably due to gene duplications that occurred during recent evolution of the fly. Our analyses also yielded the likely ligands for 40 of the 56 honey bee neurohormone GPCRs identified in this study. In addition, we made some interesting observations on neurohormone GPCR evolution and on the evolution and co-evolution of their ligands. For neuropeptide and protein hormone GPCRs, there appears to be a general co-evolution between receptors and their ligands. This is in contrast to biogenic amine GPCRs, where evolutionarily unrelated GPCRs often bind to the same biogenic amine, suggesting frequent ligand exchanges ("ligand hops") during GPCR evolution. (c) 2006 Elsevier Ltd. All rights reserved.
E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses the problem of immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. This report introduces the Tele-Lab IT-Security architecture, which allows students not only to learn IT security principles but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases accessibility to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configuration are used to build lightweight security laboratories on a hosting computer. Reliability and availability of laboratory platforms are covered by the virtual machine management framework. This management framework provides the monitoring and administration services necessary to detect and recover from critical failures of virtual machines at run time. Considering the risk that virtual machines can be misused for compromising production networks, we present security management solutions to prevent misuse of laboratory resources through security isolation at the system and network levels.
This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not meant to replace conventional laboratory teaching but to add practical features to e-learning. This report demonstrates that hands-on security laboratories can be implemented on the Internet reliably, securely, and economically.
This thesis discusses challenges in IT security education, points out a gap between e-learning and practical education, and presents work to fill that gap. E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses particular problems, such as immobility and high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. In this thesis, we introduce the Tele-Lab IT-Security architecture, which allows students not only to learn IT security principles but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases accessibility to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configuration are used to build lightweight security laboratories on a hosting computer. Reliability and availability of laboratory platforms are covered by a virtual machine management framework, which provides the monitoring and administration services necessary to detect and recover from critical failures of virtual machines at run time.
Considering the risk that virtual machines can be misused for compromising production networks, we present a security management solution to prevent the misuse of laboratory resources through security isolation at the system and network levels. This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not meant to replace conventional laboratory teaching but to add practical features to e-learning. This thesis demonstrates that hands-on security laboratories can be implemented on the Internet reliably, securely, and economically.
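The detect-and-recover behaviour of such a management framework can be sketched as a small state machine. This is only an illustrative toy: the class, the VM names, and the recovery policy are assumptions for the sketch, not the actual Tele-Lab implementation.

```python
class VMMonitor:
    """Toy sketch of a detect-and-recover loop for lab virtual machines.

    The names and the restart policy are illustrative assumptions; a real
    monitor would query and restart VMs through a hypervisor API.
    """

    def __init__(self, vms):
        self.state = {vm: "running" for vm in vms}
        self.recovered = []

    def report_failure(self, vm):
        self.state[vm] = "failed"

    def sweep(self):
        # Detect critical failures and restart the affected machines.
        for vm, status in self.state.items():
            if status == "failed":
                self.state[vm] = "running"  # stand-in for a hypervisor restart call
                self.recovered.append(vm)

monitor = VMMonitor(["lab-vm-01", "lab-vm-02"])
monitor.report_failure("lab-vm-02")
monitor.sweep()
```

Running such a sweep periodically gives the availability property described above: a failed laboratory VM is returned to service without manual intervention.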
In Allefeld & Kurths [2004], we introduced an approach to multivariate phase synchronization analysis in the form of a Synchronization Cluster Analysis (SCA). A statistical model of a synchronization cluster was described, and abbreviated instructions on how to apply this model to empirical data were given, while an implementation of the corresponding algorithm was (and is) available from the authors. In this letter, the complete details of how the data analysis algorithm is derived from the model are filled in.
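As background, a multivariate phase synchronization analysis starts from pairwise phase relationships between channels. The sketch below computes a matrix of phase-locking values for synthetic signals; it is a generic illustration of that starting point, not the SCA algorithm whose derivation is the subject of the letter.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 2000)

# Two coupled oscillators at 3 Hz plus an independent one at 7.3 Hz.
s1 = np.sin(2 * np.pi * 3.0 * t) + 0.1 * rng.normal(size=t.size)
s2 = np.sin(2 * np.pi * 3.0 * t + 0.5) + 0.1 * rng.normal(size=t.size)
s3 = np.sin(2 * np.pi * 7.3 * t) + 0.1 * rng.normal(size=t.size)

# Instantaneous phases via the analytic signal (Hilbert transform).
phases = np.angle(hilbert(np.vstack([s1, s2, s3])))

def plv(pi, pj):
    """Pairwise phase-locking value |<exp(i*dphi)>|, between 0 and 1."""
    return float(np.abs(np.mean(np.exp(1j * (pi - pj)))))

R = np.array([[plv(a, b) for b in phases] for a in phases])
```

The synchronized pair shows a phase-locking value near one, the independent channel near zero; a cluster analysis then works on this matrix rather than on individual pairs.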
How does a shared lexicon arise in a population of agents with differing lexicons, and how can this shared lexicon be maintained over multiple generations? In order to gain some insight into these questions, we present an ALife model in which the lexicon dynamics of populations with and without metacommunicative interaction (MCI) capabilities are compared. We ran a series of experiments on multi-generational populations whose initial state involved agents possessing distinct lexicons. These experiments reveal clear differences between the lexicon dynamics of populations that acquire words solely by introspection and those that learn using MCI or a mixed strategy of introspection and MCI. The lexicon diverges at a faster rate in an introspective population, eventually collapsing to a single form associated with all meanings. This contrasts sharply with MCI-capable populations, in which a lexicon is maintained and every meaning is associated with a unique word. We also investigated the effect of enlarging the meaning space and showed that it speeds up lexicon divergence in all populations, irrespective of their acquisition method.
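The contrast between the two acquisition modes can be caricatured in a few lines. This is a deliberately minimal toy, not the paper's actual ALife model: the meaning set, episode count, and "guess the meaning" rule for introspection are all invented for the sketch.

```python
import random

random.seed(0)
MEANINGS = list(range(5))

def transmit(teacher, use_mci, n_episodes=200):
    """One generation of word learning from `teacher` (a meaning -> word map).

    A deliberately minimal toy, not the paper's actual ALife model.
    """
    learner = {}
    for _ in range(n_episodes):
        m = random.choice(MEANINGS)          # meaning the teacher intends
        word = teacher[m]
        if use_mci:
            learner[m] = word                # MCI feedback reveals the meaning
        else:
            guess = random.choice(MEANINGS)  # introspection: meaning is guessed
            learner[guess] = word
    return learner

teacher = {m: f"w{m}" for m in MEANINGS}

mci = transmit(teacher, use_mci=True)        # the mapping survives intact

intro = dict(teacher)
for _ in range(100):                         # iterated introspective learning
    intro = transmit(intro, use_mci=False)
# Distinct word forms can only be lost, never regained: the lexicon collapses.
```

Even this caricature reproduces the qualitative result: feedback about intended meanings preserves a one-word-per-meaning lexicon, while introspective guessing lets word forms drift across meanings and disappear over generations.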
The ultimate aim of this study is to better understand the relevance of weak electricity in the adaptive radiation of the African mormyrid fish. The chosen model taxon, the genus Campylomormyrus, exhibits a wide diversity of electric organ discharge (EOD) waveform types. Their EOD is age, sex, and species specific and is an important character for discriminating among species that are otherwise cryptic. After having established a complementary set of molecular markers, I examined the radiation of Campylomormyrus by a combined approach of molecular data (sequence data from the mitochondrial cytochrome b and the nuclear S7 ribosomal protein gene, as well as 18 microsatellite loci, especially developed for the genus Campylomormyrus), observation of ontogeny and diversification of EOD waveform, and morphometric analysis of relevant morphological traits. I built up the first convincing phylogenetic hypothesis for the genus Campylomormyrus. Taking advantage of microsatellite data, the identified phylogenetic clades proved to be reproductively isolated biological species. This way I detected at least six species occurring in sympatry near Brazzaville/Kinshasa (Congo Basin). By combining molecular data and EOD analyses, I could show that there are three cryptic species, characterised by their own adult EOD types, hidden under a common juvenile EOD form. In addition, I confirmed that adult male EOD is species-specific and is more different among closely related species than among more distantly related ones. This result and the observation that the EOD changes with maturity suggest its function as a reproductive isolation mechanism. As a result of my morphometric shape analysis, I could assign species types to the identified reproductively isolated groups to produce a sound taxonomy of the group. Besides this, I could also identify morphological traits relevant for the divergences between the identified species. 
Among them, the variations I found in the shape of the trunk-like snout suggest the presence of different trophic specializations; this trait might therefore have been involved in the ecological radiation of the group. In conclusion, I provided a convincing scenario of an adaptive radiation of weakly electric fish triggered by sexual selection via assortative mating based on differences in EOD characteristics, but driven by divergent selection on morphological traits correlated with feeding ecology.
The separation of natural and anthropogenically caused climatic changes is an important task of contemporary climate research. For this purpose, detailed knowledge of the natural variability of the climate during warm stages is a necessary prerequisite. Besides model simulations and historical documents, this knowledge is mostly derived from analyses of so-called climatic proxy data, such as tree rings or sediment and ice cores. In order to interpret such sources of palaeoclimatic information appropriately, suitable approaches to statistical modelling as well as methods of time series analysis are necessary, which are applicable to short, noisy, and non-stationary uni- and multivariate data sets. Correlations between different climatic proxy data within one or more climatological archives contain significant information about climatic change on longer time scales. Based on an appropriate statistical decomposition of such multivariate time series, one may estimate dimensions in terms of the number of significant, linearly independent components of the considered data set. In the presented work, a corresponding approach is introduced, critically discussed, and extended with respect to the analysis of palaeoclimatic time series. Temporal variations of the resulting measures allow one to derive information about climatic changes. For an example of trace element abundances and grain-size distributions obtained near Cape Roberts (East Antarctica), it is shown that the variability of the dimensions of the investigated data sets clearly correlates with the Oligocene/Miocene transition about 24 million years before present as well as with regional deglaciation events. Grain-size distributions in sediments give information about the predominance of different transportation and deposition mechanisms. Finite mixture models may be used to approximate the corresponding distribution functions appropriately.
In order to give a complete description of the statistical uncertainty of the parameter estimates in such models, the concept of asymptotic uncertainty distributions is introduced. The relationship with the mutual component overlap, as well as with the information missing due to grouping and truncation of the measured data, is discussed for a particular geological example. An analysis of a sequence of grain-size distributions obtained in Lake Baikal reveals certain problems accompanying the application of finite mixture models, which cause an extended climatological interpretation of the results to fail. As an appropriate alternative, a linear principal component analysis is used to decompose the data set into suitable fractions whose temporal variability correlates well with the variations of the average solar insolation on millennial to multi-millennial time scales. The abundance of coarse-grained material is obviously related to the annual snow cover, whereas a significant fraction of fine-grained sediments is likely transported from the Taklamakan desert via dust storms in the spring season.
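The linear principal component decomposition used in the final step can be sketched on synthetic data. The two "end-member" size distributions and the mixing weights below are invented for illustration; they are not the Lake Baikal data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical grain-size spectra: 200 samples over 40 size bins,
# each a mixture of a fine and a coarse end-member plus noise.
bins = np.linspace(0.0, 1.0, 40)
fine = np.exp(-((bins - 0.25) ** 2) / 0.01)
coarse = np.exp(-((bins - 0.70) ** 2) / 0.02)
w = rng.uniform(0.0, 1.0, size=(200, 1))          # mixing proportions
X = w * fine + (1.0 - w) * coarse + 0.01 * rng.normal(size=(200, 40))

# Linear PCA via SVD of the centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance fraction per component
scores = Xc @ Vt[0]               # series of the leading component
```

One component dominates here because a single mixing weight generates the data; in a sediment record it is the temporal variability of such component scores that would be compared with, e.g., insolation curves.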
The biogenic amine serotonin (5-HT) plays a key role in the regulation and modulation of many physiological and behavioural processes in both vertebrates and invertebrates. These functions are mediated through the binding of serotonin to its receptors, of which 13 subtypes have been characterized in vertebrates. We have isolated a cDNA from the honeybee Apis mellifera (Am5-ht7) sharing high similarity to members of the 5-HT7 receptor family. Expression of the Am5-HT7 receptor in HEK293 cells results in an increase in basal cAMP levels, suggesting that Am5-HT7 is expressed as a constitutively active receptor. Serotonin application to Am5-ht7-transfected cells elevates cyclic adenosine 3',5'-monophosphate (cAMP) levels in a dose-dependent manner (EC50 = 1.1-1.8 nM). The Am5-HT7 receptor is also activated by 5-carboxamidotryptamine, whereas methiothepin acts as an inverse agonist. Receptor expression has been investigated by RT-PCR, in situ hybridization, and western blotting experiments. Receptor mRNA is expressed in the perikarya of various brain neuropils, including intrinsic mushroom body neurons, and in peripheral organs. This study marks the first comprehensive characterization of a serotonin receptor in the honeybee and should facilitate further analysis of the role(s) of the receptor in mediating the various central and peripheral effects of 5-HT.
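An EC50 such as the 1.1-1.8 nM reported above is typically read off a fitted sigmoidal dose-response curve. The sketch below fits a four-parameter logistic to invented, noise-free response values, not the paper's measurements; all numbers are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(c, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** hill)

# Hypothetical cAMP responses at serotonin concentrations in nM.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = logistic4(conc, 1.0, 10.0, 1.5, 1.0)      # noise-free for clarity

params, _ = curve_fit(logistic4, conc, resp,
                      p0=[0.5, 8.0, 1.0, 1.0],
                      bounds=(0.0, np.inf))
ec50 = float(params[2])   # concentration of half-maximal response
```

With real assay data one would fit the same curve to replicate measurements and report the EC50 with a confidence interval.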
Biogenic amines are important messenger substances in the central nervous system and in peripheral organs of vertebrates and invertebrates. The honeybee, Apis mellifera, is excellently suited for uncovering the functions of biogenic amines in behaviour, because it has an extensive behavioural repertoire and a number of biogenic amine receptors have been characterised in this insect. In the honeybee, the biogenic amines dopamine, octopamine, serotonin and tyramine modulate neuronal functions in various ways. Dopamine and serotonin are present in high concentrations in the bee brain, whereas octopamine and tyramine are less abundant. Octopamine is a key molecule for the control of honeybee behaviour. It generally has an arousing effect and leads to higher sensitivity for sensory inputs, better learning performance and increased foraging behaviour. Tyramine has been suggested to act antagonistically to octopamine, but only a few experimental data are available for this amine. Dopamine and serotonin often have antagonistic or inhibitory effects compared to octopamine. Biogenic amines bind to membrane receptors that primarily belong to the large gene family of GTP-binding (G) protein-coupled receptors. Receptor activation leads to transient changes in the concentrations of intracellular second messengers such as cAMP, IP3 and/or Ca2+. Although several biogenic amine receptors from the honeybee have recently been cloned and characterised, many genes still remain to be identified. The availability of the completely sequenced genome of Apis mellifera will contribute substantially to closing this gap. In this review, we discuss the present knowledge on how biogenic amines and their receptor-mediated cellular responses modulate different behaviours of honeybees, including learning processes and division of labour.
The material reported in this paper is part of a set of experiments in which the role of Information Structure in the L2 processing of words is tested. Pitch and duration in four sets of experimental material in German and English are measured and analyzed in this paper. The well-known finding that accent boosts duration and pitch is confirmed. Syntactic and lexical means of marking focus, however, do not give the duration or the pitch of a word an extra boost.
Holmberg (1997, 1999) assumes that Holmberg's generalisation (HG) is derivational, prohibiting Object Shift (OS) across an intervening non-adverbial element at any point in the derivation. Counterexamples to this hypothesis are given in Fox & Pesetsky (2005) which show that remnant VP-topicalisations are possible in Scandinavian as long as the VP-internal order relations are maintained. Extending the empirical basis concerning remnant VP-topicalisations, we argue that HG and the restrictions on object stranding result from the same, more general condition on order preservation. Considering this condition to be violable and to interact with various constraints on movement in an Optimality-theoretic fashion, we suggest an account for various asymmetries in the interaction between remnant VP-topicalisations and both OS and other movement operations (especially subject raising) as to their order preserving characteristics and stranding abilities.
Content:
1. Objectives
2. Sociohistorical Background
 2.1. The Cornish
 2.2. The Welsh
 2.3. The Bretons
3. Characteristics of the Brythonic Naming System
 3.1. Type 1 Names: Patronymic Lineage
 3.2. Type 2 Names: Geographic Origin or Place of Residence
 3.3. Type 3 Names: Occupational Activities (Generally Linked to Peasantry)
 3.4. Type 4 Names: Physical Characteristics, Moral Flaws
 3.5. Type 5 Names: Epithets Relating to Character, Titles of Nobility, etc.
 3.6. Epithets Containing References to Victory, War, Warriors, Weapons
 3.7. Epithets Containing References to Courage, Strength, Impetuousness and War-like Animals
 3.8. Epithets Containing References to Honorific Titles, Noble Lineage, Social Status and Aristocratic Values
4. Summary
This thesis was devoted to the study of the coupled system composed of the El Niño/Southern Oscillation and the Annual Cycle. More precisely, the work focused on two main problems:
1. How to separate the two oscillations within a tractable model, in order to understand the behaviour of the whole system.
2. How to model the system so as to achieve a better understanding of the interaction, as well as to predict future states of the system.
We focused our efforts on the sea surface temperature equations, considering that atmospheric effects were secondary to the ocean dynamics. The results may be summarised as follows:
1. Linear methods are not suitable for characterising the dimensionality of the sea surface temperature in the tropical Pacific Ocean, and therefore do not by themselves help to separate the oscillations. Instead, nonlinear methods of dimensionality reduction prove better at defining a lower limit for the dimensionality of the system and at explaining the statistical results in a more physical way [1]. In particular, Isomap, a nonlinear modification of multidimensional scaling, provides a physically appealing method of decomposing the data, as it substitutes an approximation of the geodesic distances on the manifold for the Euclidean distances. We expect that this method could be successfully applied to other oscillatory extended systems and, in particular, to meteorological systems.
2. A three-dimensional dynamical system could be modeled, using a backfitting algorithm, to describe the dynamics of the sea surface temperature in the tropical Pacific Ocean.
We observed that, although few data points were available, we could predict the future behaviour of the coupled ENSO-Annual Cycle system for lead times of up to six months, although the constructed system presented several drawbacks: few data points to feed into the backfitting algorithm, an untrained model, a lack of forcing with external data, and the simplification of using a closed system. Nevertheless, ensemble prediction techniques showed that the prediction skills of the three-dimensional time series were as good as those found in much more complex models. This suggests that the climatological system in the tropics is mainly explained by ocean dynamics, while the atmosphere plays a secondary role in the physics of the process. Relevant predictions for short lead times can be made using a low-dimensional system, despite its simplicity. The analysis of the SST data suggests that the nonlinear interaction between the oscillations is small, and that noise plays a secondary role in the fundamental dynamics of the oscillations [2]. A global view of the work shows a general procedure for modeling climatological systems: first, find a suitable method of either linear or nonlinear dimensionality reduction; then, extract low-dimensional time series with the chosen method; finally, fit a low-dimensional model using a backfitting algorithm in order to predict future states of the system.
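The Isomap idea invoked above — replacing Euclidean by approximate geodesic distances before classical multidimensional scaling — can be sketched in a few lines. This is a minimal numpy-only illustration on a synthetic spiral, not the thesis's implementation; the neighbourhood size and the test manifold are assumptions.

```python
import numpy as np

def isomap_1d(X, k=6):
    """Minimal Isomap: kNN graph -> geodesic distances -> classical MDS.

    Returns the leading embedding coordinate (illustrative sketch only).
    """
    n = len(X)
    # Pairwise Euclidean distances between all points
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Keep only k-nearest-neighbour edges; all other distances start at infinity
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[i, nbrs]   # keep the graph symmetric
    # Floyd-Warshall: shortest graph paths approximate geodesic distances
    for m in range(n):
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    # Classical MDS on the squared geodesic distances (double centering)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    return vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))

# Hypothetical data: a 1-D 'oscillation' parameter wound into a 2-D spiral
t = np.linspace(0.0, 3 * np.pi, 120)
X = np.column_stack([(1 + t) * np.cos(t), (1 + t) * np.sin(t)])
coord = isomap_1d(X)
```

Classical MDS on the geodesic distances recovers the arc-length parameter of the spiral up to sign, which is why the leading coordinate correlates strongly with the generating parameter — exactly the property that makes the decomposition "physically appealing" for oscillatory data.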
Since 1971, the Freudenthal Institute has developed an approach to mathematics education named Realistic Mathematics Education (RME). The philosophy of RME is based on Hans Freudenthal’s concept of ‘mathematics as a human activity’. Prof. Hans Freudenthal (1905-1990), a mathematician and educator, believed that ‘ready-made mathematics’ should not be taught in school. Instead, he urged that students be offered ‘realistic situations’ from which they can rediscover mathematics, moving from informal to formal mathematics. Although mathematics education in Vietnam has some achievements, it still faces several challenges. Recently, the reform of teaching methods has become an urgent task in Vietnam. It appears that Vietnamese mathematics education lacks the necessary theoretical frameworks. At first sight, the philosophy of RME is suitable for the orientation of the teaching method reform in Vietnam. However, the potential of RME for mathematics education, as well as the feasibility of applying RME to teaching mathematics, is still questionable in Vietnam. The primary aim of this dissertation is to investigate the possibilities of applying RME to teaching and learning mathematics in Vietnam and to answer the question “how could RME enrich Vietnamese mathematics education?”. This research emphasizes the teaching of geometry in Vietnamese middle schools.
More specifically, the dissertation implements the following research tasks:
• Analyzing the characteristics of Vietnamese mathematics education in the ‘reformed’ period (from the early 1980s to the early 2000s) and at present;
• Surveying 152 middle school teachers from several Vietnamese provinces and cities about their views on Vietnamese mathematics education;
• Analyzing RME, including Freudenthal’s viewpoints on RME and the characteristics of RME;
• Discussing how to design RME-based lessons and how to apply these lessons to teaching and learning in Vietnam;
• Testing RME-based lessons in a Vietnamese middle school;
• Analyzing the feedback from the students’ worksheets and the teachers’ reports, including the potential of RME-based lessons for Vietnamese middle schools and the difficulties the teachers and their students encountered with these lessons;
• Discussing proposals for applying RME-based lessons to teaching and learning mathematics in Vietnam, including suggestions for teachers who will apply these lessons to their teaching and the design of courses for in-service and trainee teachers.
This research reveals that although teachers and students may encounter some obstacles while teaching and learning with RME-based lessons, RME could become a promising approach for mathematics education and could be applied effectively to teaching and learning mathematics in Vietnamese schools.
Advances in biotechnologies rapidly increase the number of molecules of a cell that can be observed simultaneously. This includes expression levels of thousands or tens of thousands of genes as well as concentration levels of metabolites or proteins. Such profile data, observed at different times or under different experimental conditions (e.g., heat or dry stress), show how the biological experiment is reflected on the molecular level. This information is helpful for understanding molecular behaviour and for identifying molecules or combinations of molecules that characterise a specific biological condition (e.g., a disease). This work shows the potential of component extraction algorithms to identify the major factors which influenced the observed data. These can be expected experimental factors such as time or temperature, as well as unexpected factors such as technical artefacts or even unknown biological behaviour. Extracting components means reducing the very high-dimensional data to a small set of new variables termed components. Each component is a combination of all original variables. The classical approach for this purpose is principal component analysis (PCA). It is shown that, in contrast to PCA, which maximises the variance only, modern approaches such as independent component analysis (ICA) are more suitable for analysing molecular data. The independence condition between ICA components fits more naturally our assumption of individual (independent) factors that influence the data. This higher potential of ICA is demonstrated by a crossing experiment with the model plant Arabidopsis thaliana (thale cress). The experimental factors could be well identified and, in addition, ICA could even detect a technical artefact. However, in continuous observations such as time-course experiments, the data show, in general, a nonlinear distribution. To analyse such nonlinear data, a nonlinear extension of PCA is used.
This nonlinear PCA (NLPCA) is based on a neural network algorithm. The algorithm is adapted to be applicable to incomplete molecular data sets and thus also provides the ability to estimate the missing data. The potential of nonlinear PCA to identify nonlinear factors is demonstrated by a cold stress experiment with Arabidopsis thaliana. The results of component analysis can be used to build a molecular network model. Since it includes functional dependencies, it is termed a functional network. Applied to the cold stress data, it is shown that functional networks are appropriate for visualising biological processes and thereby revealing molecular dynamics.
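The contrast drawn here between variance-maximising PCA and independence-seeking ICA can be illustrated with a minimal FastICA sketch (tanh nonlinearity, symmetric decorrelation) that unmixes two synthetic sources. The mixing matrix and signals are invented for illustration and are unrelated to the actual expression data.

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Symmetric FastICA with a tanh nonlinearity (illustrative sketch).

    X: (n_signals, n_samples) mixed observations. Returns estimated sources.
    """
    n, m = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: rotate/scale the data so its covariance is the identity
    cov = X @ X.T / m
    vals, vecs = np.linalg.eigh(cov)
    K = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = K @ X
    W = np.random.default_rng(seed).normal(size=(n, n))
    for _ in range(n_iter):
        # Fixed-point update maximising non-Gaussianity of W @ Z
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / m - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation keeps the unmixing rows orthonormal
        u, _, vt = np.linalg.svd(W_new)
        W = u @ vt
    return W @ Z

t = np.linspace(0, 8 * np.pi, 2000)
S = np.vstack([np.sin(t),                      # smooth oscillation
               np.sign(np.sin(2.3 * t + 1))])  # square wave
A = np.array([[1.0, 0.6], [0.4, 1.0]])         # hypothetical mixing matrix
X = A @ S
S_hat = fast_ica(X)
```

PCA applied to `X` would return decorrelated but still mixed directions of maximal variance, whereas the ICA estimate recovers each independent source up to sign and permutation — the property the abstract exploits for separating experimental factors.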
We analyze the asymptotic behavior in the limit epsilon to zero for a wide class of difference operators H_epsilon = T_epsilon + V_epsilon with an underlying multi-well potential. They act on the square-summable functions on the lattice (epsilon Z)^d. We begin by showing the validity of a harmonic approximation and construct WKB solutions at the wells. We then construct a Finslerian distance d induced by H and show that short integral curves are geodesics and that d gives the rate for the exponential decay of Dirichlet eigenfunctions. In terms of this distance, we give sharp estimates for the interaction between the wells and construct the interaction matrix.
Workshop of the Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung (Interdisciplinary Centre for Pattern Dynamics and Applied Remote Sensing), 9-10 February 2006
Content:
1. Perfect to Preterite?
2. A Past Grammaticalisation Path for Be after V-ing
 2.1. Perfect Grams and Sources
 2.2. Perfect Distinctions and Perfect-Preterite Evolution
3. Semantic History of Past-Time Be After V-ing
 3.1. Perfect Uses, 1670-1800
 3.2. Perfect Uses, 1801-2000
4. Temporal Adverbials and Uses of Be After V-ing, 1701-2000
 4.1. Hodiernal Uses
 4.2. Preterite Uses
 4.3. How Far Is It after Coming?
5. Conclusion
Biochemical and physiological studies of Arabidopsis thaliana Diacylglycerol Kinase 7 (AtDGK7)
(2006)
A family of diacylglycerol kinases (DGKs) phosphorylates the substrate diacylglycerol (DAG) to generate phosphatidic acid (PA). Both molecules, DAG and PA, are involved in signal transduction pathways. In the model plant Arabidopsis thaliana, seven candidate genes (named AtDGK1 to AtDGK7) code for putative DGK isoforms. Here I report the molecular cloning and characterization of AtDGK7; biochemical, molecular and physiological experiments on AtDGK7 and its corresponding enzyme are analyzed. Genevestigator data indicate that the AtDGK7 gene is expressed in seedlings and adult Arabidopsis plants, especially in flowers. The AtDGK7 gene encodes the smallest functional DGK predicted in higher plants, but it also has an alternative coding sequence containing an extended AtDGK7 open reading frame, confirmed by PCR and submitted to the GenBank database (accession number DQ350135). The new cDNA has an extension of 439 nucleotides coding for 118 additional amino acids. The former AtDGK7 enzyme has a predicted molecular mass of ~41 kDa, and its activity is affected by pH and detergents. The DGK inhibitor R59022 also affects AtDGK7 activity, although at higher concentrations (IC50 ~380 µM). The AtDGK7 enzyme shows a Michaelis-Menten-type saturation curve for 1,2-DOG; the calculated Km and Vmax were 36 µM 1,2-DOG and 0.18 pmol PA min-1 (mg protein)-1, respectively, under the assay conditions. The former AtDGK7 protein is able to phosphorylate different DAG analogs that are typically found in plants. The newly deduced AtDGK7 protein harbors the catalytic domain DGKc and the accessory domain DGKa, instead of the truncated version found in the former AtDGK7 protein (Gomez-Merino et al., 2005).
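The reported kinetic constants can be illustrated with a small sketch: given noise-free rates generated from the Michaelis-Menten equation with the reported Km (36 µM) and Vmax (0.18 pmol PA min-1 per mg protein), a Lineweaver-Burk linearisation recovers both constants exactly. The substrate series below is hypothetical; the dissertation's assays of course fit measured, noisy rates.

```python
import numpy as np

# Km and Vmax as reported for AtDGK7 (36 µM; 0.18 pmol PA min^-1 mg^-1)
Km_true, Vmax_true = 36.0, 0.18

# Hypothetical substrate series (µM 1,2-DOG) and noise-free reaction rates
S = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
v = Vmax_true * S / (Km_true + S)

# Lineweaver-Burk linearisation: 1/v = (Km/Vmax) * (1/S) + 1/Vmax
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_est = 1.0 / intercept
Km_est = slope * Vmax_est
```

With real, noisy data a direct nonlinear least-squares fit of the hyperbola is usually preferred over the double-reciprocal plot, which amplifies errors at low substrate concentrations.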
We study elliptic boundary value problems in a wedge with additional edge conditions of trace and potential type. We compute the (difference of the) number of such conditions in terms of the Fredholm index of the principal edge symbol. The task will be reduced to the case of special opening angles, together with a homotopy argument.
brandial06 was the tenth in a series of workshops that aims to bring together researchers working on the semantics and pragmatics of dialogues in fields such as artificial intelligence, formal semantics and pragmatics, computational linguistics, philosophy, and psychology. This volume collects all presented papers and posters and gives abstracts of the invited talks.
Forum: EU Diplomacy in the Year 2020
In view of the importance of charge storage in polymer electrets for electromechanical transducer applications, the aim of this work is to contribute to the understanding of the charge-retention mechanisms. Furthermore, we will try to explain how the long-term storage of charge carriers in polymeric electrets works and to identify the probable trap sites. Charge trapping and de-trapping processes were investigated in order to obtain evidence of the trap sites in polymeric electrets. The charge de-trapping behavior of two particular polymer electrets was studied by means of thermal and optical techniques. In order to obtain evidence of trapping or de-trapping, charge and dipole profiles in the thickness direction were also monitored. In this work, the study was performed on polyethylene terephthalate (PETP) and on cyclic-olefin copolymers (COCs). PETP is a photo-electret and contains a net dipole moment that is located in the carbonyl group (C = O). The electret behavior of PETP arises from both the dipole orientation and the charge storage. In contrast to PETP, COCs are not photo-electrets and do not exhibit a net dipole moment. The electret behavior of COCs arises from the storage of charges only. COC samples were doped with dyes in order to probe their internal electric field. COCs show shallow charge traps at 0.6 and 0.11 eV, characteristic of thermally activated processes. In addition, deep charge traps are present at 4 eV, characteristic of optically stimulated processes. PETP films exhibit a photo-current transient with a maximum that depends on the temperature, with an activation energy of 0.106 eV. The pair thermalization length (rc) calculated from this activation energy for the photo-carrier generation in PETP was estimated to be approx. 4.5 nm. The generated photo-charge carriers can recombine, interact with the trapped charge, escape through the electrodes or occupy an empty trap.
PETP possesses a small quasi-static pyroelectric coefficient (QPC): ~0.6 nC/(m²K) for unpoled samples, ~60 nC/(m²K) for poled samples and ~60 nC/(m²K) for unpoled samples under an electric bias (E ~10 V/µm). When stored charges generate an internal electric field of approx. 10 V/µm, they are able to induce a QPC comparable to that of the oriented dipoles. Moreover, we observe charge-dipole interaction. Since the raw data of the QPC-experiments on PETP samples is noisy, a numerical Fourier-filtering procedure was applied. Simulations show that the data analysis is reliable when the noise level is up to 3 times larger than the calculated pyroelectric current for the QPC. PETP films revealed shallow traps at approx. 0.36 eV during thermally-stimulated current measurements. These energy traps are associated with molecular dipole relaxations (C = O). On the other hand, photo-activated measurements yield deep charge traps at 4.1 and 5.2 eV. The observed wavelengths belong to the transitions in PETP that are analogous to the π - π* benzene transitions. The observed charge de-trapping selectivity in the photocharge decay indicates that the charge detrapping is from a direct photon-charge interaction. Additionally, the charge de-trapping can be facilitated by photo-exciton generation and the interaction of the photo-excitons with trapped charge carriers. These results indicate that the benzene rings (C6H4) and the dipolar groups (C = O) can stabilize and share an extra charge carrier in a chemical resonance. In this way, this charge could be de-trapped in connection with the photo-transitions of the benzene ring and with the dipole relaxations. The thermally-activated charge release shows a difference in the trap depth to its optical counterpart. This difference indicates that the trap levels depend on the de-trapping process and on the chemical nature of the trap site. That is, the processes of charge detrapping from shallow traps are related to secondary forces. 
The processes of charge de-trapping from deep traps are related to primary forces. Furthermore, the presence of deep trap levels causes the stability of the charge for long periods of time.
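The numerical Fourier-filtering step applied to the noisy QPC data can be sketched generically: a weak sinusoidal current of known frequency is recovered from noise roughly three times its amplitude by zeroing all spectral bins outside a narrow band. The frequencies, amplitudes and bandwidth below are illustrative assumptions, not the experimental values.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n = 1000.0, 5.0, 4000             # sampling rate (Hz), signal freq, samples
t = np.arange(n) / fs
signal = 0.2 * np.sin(2 * np.pi * f0 * t)       # weak periodic response
noisy = signal + 0.6 * rng.normal(size=n)       # noise ~3x the signal amplitude

# Fourier filtering: keep only a narrow band around the known frequency
spec = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(n, 1 / fs)
spec[np.abs(freqs - f0) > 0.5] = 0.0            # zero everything outside +-0.5 Hz
filtered = np.fft.irfft(spec, n)

# Amplitude of the recovered sinusoid (std of a sine is amplitude/sqrt(2))
amplitude = np.sqrt(2.0) * filtered.std()
```

Because the noise power is spread over all frequency bins while the periodic signal is concentrated in a few, the band-pass step recovers the amplitude accurately even at a noise level well above the signal — consistent with the simulation result quoted in the abstract.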
When top sports performers fail or “choke” under pressure, everyone asks: why? Research has identified a number of conditions (e.g. an audience) that elicit choking and a number of factors (e.g. trait anxiety) that moderate the pressure-performance relation. Furthermore, mediating processes have been investigated. For example, explicit monitoring theories link performance failure under psychological stress to an increase in attention paid to a skill and its step-by-step execution (Beilock & Carr, 2001). Many studies have provided support for these ideas. However, so far only overt performance measures have been investigated, which do not allow more thorough analyses of processes or performance strategies. A theoretical framework has also been missing that could (a) explain the effects of explicit monitoring on skill execution and (b) make predictions as to what is being monitored during execution. Consequently, in this study the nodalpoint hypothesis of motor control (Hossner & Ehrlenspiel, 2006) was used to predict movement changes on three levels of analysis at certain “nodalpoints” within the movement sequence. Performance in two different laboratory tasks was assessed with respect to overt performance (the observable result, for example accuracy on the target), covert performance (description of movement execution, for example the acceleration of body segments) and task exploitation (the utilization of task properties such as covariation). A fake competition (see Beilock & Carr, 2002) was used to induce pressure. In study 1, a ball bouncing task in a virtual-reality set-up was chosen. Previous studies (de Rugy, Wei, Müller, & Sternad, 2003) have shown that learners are usually able to “passively” exploit the dynamical stability of the system. According to explicit monitoring theories, choking should be expected either if the task itself evokes “active control” (Experiment 1) or if learners are provided with explicit instructions (Experiment 2).
In both experiments, participants first went through a practice phase on day 1. On day 2, following the Baseline Test, participants were divided into a High-Stress or a No-Stress Group for the final Performance Test. The High-Stress Group entered a fake competition. Overt performance was measured by the Absolute Error (AE) of ball amplitudes from target height; covert performance was measured by the Period Modulation between successive hits, and task exploitation was measured by the Acceleration (AC) at ball-racket impact and the Covariation (COV) of impact parameters. To evoke active control in Exp. 1 (N=20), perturbations to the ball flight were introduced. In Exp. 2 (N=39), half of the participants received explicit skill-focused instructions during learning. For overt performance, results generally show an interaction between Stress Group and Test, with better performance (i.e. lower AE) for the High-Stress Group in the final Performance Test. This effect is also independent of the Instructions that participants had received during learning (Exp. 2). Similar effects were found for COV but not for AC. In study 2, a visuomotor tracking task was used in which participants had to pursue a target cross moving on an invisible curve. This curve consisted of 3 segments of 6 turning points sequentially ordered around the x-axis. Participants learned two short movement sequences which were then concatenated to form a single sequence. It was expected that under pressure, this sequence would “fall apart” at the point of concatenation. Overt performance was assessed by the Root Mean Square Error between target and pursuit cross as well as the Absolute Error at the turning points; covert performance was measured by the Latency from target to pursuit turning, and task exploitation was measured by the temporal covariation between successive intervals between turning points.
Experiment 3 (intraindividual variation) as well as Experiment 4 (interindividual variation) showed performance enhancement in the pressure situation on the overt level, with matching results on the covert and task-exploitation levels. Thus, contrary to previous studies, no choking under pressure was found in any of the experiments. This may be interpreted as a failure of the experimental manipulation, but it certainly also highlights important characteristics of the task: choking should occur in tasks in which performers do not have the time to use action- or thought-control strategies, that are more relevant to their “self”, and that are discrete in nature.
A key problem for models of dialogue is to explain the mechanisms involved in generating and responding to clarification requests. We report a 'Maze task' experiment that investigates the effect of 'spoof' clarification requests on the development of semantic co-ordination. The results provide evidence of both local and global semantic co-ordination phenomena that are not captured by existing dialogue co-ordination models.
This study introduces a method for multiparallel analysis of small organic compounds in the unicellular green alga Chlamydomonas reinhardtii, one of the premier model organisms in cell biology. The comprehensive study of the changes of metabolite composition, or metabolomics, in response to environmental, genetic or developmental signals is an important complement of other functional genomic techniques in the effort to develop an understanding of how genes, proteins and metabolites are all integrated into a seamless and dynamic network to sustain cellular functions. The sample preparation protocol was optimized to quickly inactivate enzymatic activity, achieve maximum extraction capacity and process large sample quantities. As a result of the rapid sampling, extraction and analysis by gas chromatography coupled to time-of-flight mass spectrometry (GC-TOF), more than 800 analytes from a single sample can be measured, of which over 100 could be positively identified. As part of the analysis of GC-TOF raw data, aliquot ratio analysis to systematically remove artifact signals and tools for the use of principal component analysis (PCA) on metabolomic datasets are proposed. Cells subjected to nitrogen (N), phosphorus (P), sulfur (S) or iron (Fe) depleted growth conditions develop highly distinctive metabolite profiles, with metabolites implicated in many different processes being affected in their concentration during adaptation to nutrient deprivation. Metabolite profiling allowed characterization of both specific and general responses to nutrient deprivation at the metabolite level. Modulation of the substrates for N-assimilation and the oxidative pentose phosphate pathway indicated a priority for maintaining the capability for immediate activation of N assimilation even under conditions of decreased metabolic activity and arrested growth, while the rise in 4-hydroxyproline in S deprived cells could be related to enhanced degradation of proteins of the cell wall.
The adaptation to sulfur deficiency was analyzed with greater temporal resolution, and responses of wild-type cells were compared with mutant cells deficient in SAC1, an important regulator of the sulfur deficiency response. Whereas concurrent metabolite depletion and accumulation occurs during adaptation to S deprivation in wild-type cells, the sac1 mutant strain is characterized by a massive incapability to sustain many processes that normally lead to transient or permanent accumulation of the levels of certain metabolites or to recovery of metabolite levels after initial down-regulation. For most of the steps in arginine biosynthesis in Chlamydomonas, mutants have been isolated that are deficient in the respective enzyme activities. Three strains deficient in the activities of N-acetylglutamate-5-phosphate reductase (arg1), N2-acetylornithine-aminotransferase (arg9), and argininosuccinate lyase (arg2), respectively, were analyzed with regard to activation of endogenous arginine biosynthesis after withdrawal of externally supplied arginine. Enzymatic blocks in the arginine biosynthetic pathway could be characterized by precursor accumulation, such as the accumulation of argininosuccinate in arg2 cells, and by depletion of intermediates occurring downstream of the enzymatic block, e.g. N2-acetylornithine, ornithine, and argininosuccinate depletion in arg9 cells. The unexpected finding of substantial levels of the arginine pathway intermediates N-acetylornithine, citrulline, and argininosuccinate downstream of the enzymatic block in arg1 cells provided an explanation for the residual growth capacity of these cells in the absence of external arginine sources.
The presence of these compounds, together with the unusual accumulation of N-Acetylglutamate, the first intermediate that commits the glutamate backbone to ornithine and arginine biosynthesis, in arg1 cells suggests that alternative pathways, possibly involving the activity of ornithine aminotransferase, may be active when the default reaction sequence to produce ornithine via acetylation of glutamate is disabled.
Uncertainty about the sensitivity of the climate system to changes in the Earth’s radiative balance constitutes a primary source of uncertainty for climate projections. Given the continuous increase in atmospheric greenhouse gas concentrations, constraining the uncertainty range of this sensitivity is of vital importance. A common measure for expressing this key characteristic of climate models is the climate sensitivity, defined as the simulated change in global-mean equilibrium temperature resulting from a doubling of the atmospheric CO2 concentration. The broad range of climate sensitivity estimates (1.5-4.5°C, as given in the last Assessment Report of the Intergovernmental Panel on Climate Change, 2001), inferred from comprehensive climate models, illustrates that the strength of simulated feedback mechanisms varies strongly among different models. The central goal of this thesis is to constrain uncertainty in climate sensitivity. For this objective we first generate a large ensemble of model simulations, covering different feedback strengths, and then require their consistency with present-day observational data and proxy-data from the Last Glacial Maximum (LGM). Our analyses are based on an ensemble of fully coupled simulations that were realized with a climate model of intermediate complexity (CLIMBER-2). These model versions cover a broad range of climate sensitivities, ranging from 1.3 to 5.5°C, and have been generated by simultaneously perturbing a set of 11 model parameters. The analysis of the simulated model feedbacks reveals that the spread in climate sensitivity results from different realizations of the feedback strengths in water vapour, clouds, lapse rate and albedo. The calculated spread in the sum of all feedbacks spans almost the entire plausible range inferred from a sampling of more complex models.
We show that the requirement for consistency between simulated pre-industrial climate and a set of seven global-mean data constraints represents a comparatively weak test for model sensitivity (the data constrain climate sensitivity to 1.3-4.9°C). Analyses of the simulated latitudinal profile and of the seasonal cycle suggest that additional present-day data constraints, based on these characteristics, do not further constrain uncertainty in climate sensitivity. The novel approach presented in this thesis consists in systematically combining a large set of LGM simulations with data information from reconstructed regional glacial cooling. Irrespective of uncertainties in model parameters and feedback strengths, the set of our model versions reveals a close link between the simulated warming due to a doubling of CO2, and the cooling obtained for the LGM. Based on this close relationship between past and future temperature evolution, we define a method (based on linear regression) that allows us to estimate robust 5-95% quantiles for climate sensitivity. We thus constrain the range of climate sensitivity to 1.3-3.5°C using proxy-data from the LGM at low and high latitudes. Uncertainties in glacial radiative forcing enlarge this estimate to 1.2-4.3°C, whereas the assumption of large structural uncertainties may increase the upper limit by an additional degree. Using proxy-based data constraints for tropical and Antarctic cooling we show that very different absolute temperature changes in high and low latitudes all yield very similar estimates of climate sensitivity. On the whole, this thesis highlights that LGM proxy-data information can offer an effective means of constraining the uncertainty range in climate sensitivity and thus underlines the potential of paleo-climatic data to reduce uncertainty in future climate projections.
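The regression-based constraint described above can be sketched with synthetic numbers: an ensemble exhibiting a roughly linear relation between climate sensitivity and simulated LGM cooling is regressed, and a hypothetical proxy-based cooling estimate is converted into a central sensitivity estimate with 5-95% quantiles derived from the regression residuals. All numbers are invented; the thesis's treatment of forcing and structural uncertainty is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ensemble: climate sensitivity (K) vs simulated LGM cooling (K),
# an approximately linear relation plus model-to-model scatter
sens = rng.uniform(1.3, 5.5, 200)
cooling = 1.2 * sens + 0.8 + rng.normal(0.0, 0.3, 200)

# Regress sensitivity on cooling across the ensemble
slope, intercept = np.polyfit(cooling, sens, 1)
resid = sens - (slope * cooling + intercept)

# Read off the sensitivity implied by a proxy-based cooling estimate,
# with 5-95% quantiles taken from the regression residuals
proxy_cooling = 4.0                       # hypothetical reconstructed cooling
central = slope * proxy_cooling + intercept
q05, q95 = central + np.percentile(resid, [5, 95])
```

The key point mirrored here is that the tightness of the resulting interval depends on how strongly past (LGM) and future (2xCO2) temperature responses are correlated across the ensemble, and on the uncertainty of the proxy-based cooling itself.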
An account is presented of the focus properties, common ground effect and dialogue behaviour of the accented German discourse marker "doch" and the accented sentence negation "nicht". It is argued that "doch" and "nicht" evoke as a focus alternative the logical complement of the proposition expressed by the sentence in which they occur, and that an analysis in terms of contrastive focus accounts for their effect on the common ground and their function in dialogue.
A new method is used in an eye-tracking pilot experiment which shows that it is possible to detect differences in common ground associated with the use of minimally different types of indefinite anaphora. Following Richardson and Dale (2005), cross recurrence quantification analysis (CRQA) was used to show that the tandem eye movements of two Swedish-speaking interlocutors are slightly more coupled when they are using fully anaphoric indefinite expressions than when they are using less anaphoric indefinites. This shows the potential of CRQA to detect even subtle processing differences in ongoing discourse.
The terrestrial biosphere has a considerable impact on the global carbon cycle. In particular, ecosystems help to offset anthropogenic fossil fuel emissions and hence decelerate the rise of the atmospheric CO₂ concentration. However, the future net sink strength of an ecosystem will heavily depend on the response of the individual processes to a changing climate. Understanding the makeup of these processes and their interaction with the environment is therefore of major importance for developing long-term climate mitigation strategies. Mathematical models are used to predict the fate of carbon in the soil-plant-atmosphere system under changing environmental conditions. However, the underlying processes giving rise to the net carbon balance of an ecosystem are complex and not entirely understood at the canopy level. Therefore, carbon exchange models are characterised by considerable uncertainty, rendering model-based predictions of the future prone to error. Observations of the carbon exchange at the canopy scale can help to identify the dominant processes and hence contribute to reducing the uncertainty associated with model-based predictions. For this reason, a global network of measurement sites has been established that provides long-term observations of the CO₂ exchange between a canopy and the atmosphere along with micrometeorological conditions. These time series, however, suffer from observation uncertainty that, if not characterised, limits their use in ecosystem studies. The general objective of this work is to develop a modelling methodology that synthesises physical process understanding with the information content of canopy-scale data in an attempt to overcome the limitations of both carbon exchange models and observations. Similar hybrid modelling approaches have been successfully applied for signal extraction from noisy time series in environmental engineering.
Here, simple process descriptions are used to identify relationships between the carbon exchange and environmental drivers from noisy data. The functional form of these relationships is not prescribed a priori but rather determined directly from the data, ensuring that the model complexity is commensurate with the observations. This data-led analysis therefore results in the identification of the processes dominating carbon exchange at the ecosystem scale as reflected in the data. The description of these processes may then lead to robust carbon exchange models that contribute to a faithful prediction of the ecosystem carbon balance. This work presents a number of studies that make use of the developed data-led modelling approach for the analysis and interpretation of net canopy CO₂ flux observations. Given the limited knowledge about the underlying real system, the evaluation of the derived models with synthetic canopy exchange data is introduced as a standard procedure prior to any application to real data. The derived data-led models prove successful in several different applications. First, the data-based nature of the presented methods makes them particularly useful for replacing missing data in the observed time series. The resulting interpolated CO₂ flux observation series can then be analysed with dynamic modelling techniques, or integrated to coarser temporal resolution for further use, e.g. in model evaluation exercises. However, the noise component in these observations interferes with deterministic flux integration, in particular when long time periods are considered. Therefore, a method to characterise the uncertainties in the flux observations using a semi-parametric stochastic model is introduced in a second study. As a result, an (uncertain) estimate of the annual net carbon exchange of the observed ecosystem can be inferred directly from a statistically consistent integration of the noisy data.
For the forest measurement sites analysed, the relative uncertainty of the annual sum did not exceed 11 percent, highlighting the value of the data. Based on the same models, a disaggregation of the net CO₂ flux into carbon assimilation and respiration is presented in a third study, which allows for the estimation of annual ecosystem carbon uptake and release. These two components can then be further analysed for their separate responses to environmental conditions. Finally, a fourth study demonstrates how the results from data-led analyses can be turned into a simple parametric model that is able to predict the carbon exchange of forest ecosystems. Given the global network of measurement sites available, the derived model can now be tested for generality and transferability to other biomes. In summary, this work highlights the potential of the presented data-led methodologies to identify and describe dominant carbon exchange processes at the canopy level, contributing to a better understanding of ecosystem functioning.
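The idea of a statistically consistent integration of a noisy flux series can be illustrated with a simple Monte Carlo sketch; the flux model, noise level and the assumption of independent Gaussian noise below are hypothetical simplifications, not the semi-parametric stochastic model actually used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical half-hourly NEE series for one year (arbitrary carbon units
# per half hour); negative values denote net uptake.
n = 17520  # 365 days x 48 half-hours
true_flux = -0.02 + 0.015 * np.cos(2 * np.pi * np.arange(n) / 48.0)
noise_sd = 0.01
observed = true_flux + rng.normal(0.0, noise_sd, n)

# Monte Carlo integration: perturb the observations with fresh noise draws,
# integrate each realisation, and read off 5-95% quantiles of the annual sum.
draws = observed[None, :] + rng.normal(0.0, noise_sd, (200, n))
annual_sums = draws.sum(axis=1)
q05, q95 = np.quantile(annual_sums, [0.05, 0.95])
```

Because each half-hour carries independent noise here, the relative uncertainty of the annual sum shrinks with the square root of the number of observations; correlated noise, as treated in the thesis, would widen the interval.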
This diploma thesis deals with the process of political and administrative decentralisation in the Kingdom of Lesotho. Although decentralisation in itself does not automatically lead to development, it has become an integral part of reform processes in many developing countries. Governments and international donors consider efficient decentralised political and administrative structures essential elements of “good governance” and a prerequisite for structural poverty alleviation. This paper seeks to analyse how the given decentralisation strategy and its implementation affect different features of good governance in the case of Lesotho. The results of the analysis confirm that the decentralisation process significantly improved political participation of the local population. However, the second objective of enhancing efficiency through decentralisation was not achieved. On the contrary, efficiency considerations played no role in the institutional design of the newly created local authorities or in the civil service recruitment policy. Additionally, the mechanisms created for political participation generate considerable costs. It is therefore impossible to judge the contribution of decentralisation to good governance unambiguously, as different subtargets of good governance are affected in opposite ways. Consequently, the adequacy of good governance as a guiding concept for decentralisation policies can be questioned. The assessment of the success of decentralisation policies requires a normative framework that takes into account the relation between participation and efficiency. Despite the partly reduced administrative efficiency, the author’s overall impression of the decentralisation process in Lesotho is positive. The establishment of democratically legitimised and participatory local governments justifies certain additional expenditure.
However, some mistakes in the design and implementation of the decentralisation strategy could have been avoided.
Fluvial systems are one of the major features shaping a landscape. They adjust to the prevailing tectonic and climatic setting and are therefore very sensitive markers of changes in these systems. If their response to tectonic and climatic forcing is quantified and the climatic signal is excluded, it is possible to derive a local deformation history. Here, we investigate fluvial terraces and erosional surfaces in the southern Chilean forearc to assess the long-term geomorphic, and hence tectonic, evolution. Remote sensing and field studies of the Nahuelbuta Range show that the long-term deformation of the Chilean forearc is manifested in breaks in topography, sequences of differentially uplifted marine, alluvial and strath terraces, as well as tectonically modified river courses and drainage basins. We used SRTM-90 data as basic elevation information for extracting and delineating drainage networks. We calculated hypsometric curves as an indicator of basin uplift, stream-length gradient indices to identify stream segments with anomalous slopes, and longitudinal river profiles as well as DS-plots to identify knickpoints and other anomalies. In addition, we investigated topography with elevation-slope graphs, profiles, and DEMs to reveal erosional surfaces. During the first field trip we measured palaeoflow directions, performed pebble counts and sampled the fluvial terraces in order to apply cosmogenic nuclide dating (<sup>10</sup>Be, <sup>26</sup>Al) as well as provenance analyses. Our preliminary analysis of the Coastal Cordillera indicates a clear segmentation between the northern and southern parts of the Nahuelbuta Range. The Lanalhue Fault, a NW-SE striking fault zone oblique to the plate boundary, defines the segment boundary. Furthermore, we find a complex drainage reorganisation, including a drainage reversal and a wind gap on the divide between the Tirúa and Pellahuén basins east of the town of Tirúa.
The coastal basins lost most of their Andean sediment supply areas that existed in Tertiary and in part during early Pleistocene time. Between the Bío-Bío and Imperial rivers, no Andean river is currently able to traverse the Coastal Cordillera, suggesting ongoing Quaternary uplift of the entire range. From the spatial distribution of geomorphic surfaces in this region, two uplift signals may be derived: (1) a long-term differential uplift process, active since the Miocene and possibly caused by underplating of subducted trench sediments; (2) a younger, local uplift affecting only the northern part of the Nahuelbuta Range that may be caused by the interaction of the forearc with the subduction of the Mocha Fracture Zone at the latitude of the Arauco peninsula. Our approach thus contributes to deciphering the characteristics of forearc development at active convergent margins using long-term geomorphic indicators. Furthermore, we expect our ongoing assessment to constrain repeatedly active zones of deformation. <hr> Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, Workshop, 9-10 February 2006
The main claim of this paper is that the minimalist framework and optimality theory adopt more or less the same architecture of grammar: both assume that a generator defines a set S of potentially well-formed expressions that can be generated on the basis of a given input, and that there is an evaluator that selects the expressions from S that are actually grammatical in a given language L. The paper therefore proposes a model of grammar in which the strengths of the two frameworks are combined: more specifically, it is argued that the computational system of human language CHL from MP creates a set S of potentially well-formed expressions, and that these are subsequently evaluated in an optimality theoretic fashion.
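The proposed architecture, a generator producing a candidate set S followed by an optimality-theoretic evaluator, can be caricatured in a few lines of code; the toy input, candidates and constraints below are invented for illustration and have no linguistic status.

```python
# Toy sketch of the shared MP/OT architecture: a generator yields a candidate
# set S (in MP terms, outputs of the computational system CHL), and an OT-style
# evaluator selects the candidate with the lexicographically smallest
# violation profile under a language-particular constraint ranking.

def generator(inp):
    # Hypothetical candidate set for an input form.
    return [inp, inp + "-a", inp[::-1]]

# Constraints return violation counts; the ranking is simply the list order.
def no_coda(inp, cand):
    # One violation if the candidate ends in a non-vowel.
    return 1 if cand and cand[-1] not in "aeiou" else 0

def faithfulness(inp, cand):
    # Count segment mismatches plus any length difference.
    return sum(1 for a, b in zip(inp, cand) if a != b) + abs(len(inp) - len(cand))

ranking = [no_coda, faithfulness]

def evaluate(inp):
    candidates = generator(inp)
    profiles = {c: tuple(con(inp, c) for con in ranking) for c in candidates}
    return min(candidates, key=profiles.get)

winner = evaluate("tak")  # epenthesis wins over deletion under this ranking
```

Tuple comparison gives the strict domination typical of OT: a single violation of a higher-ranked constraint outweighs any number of lower-ranked violations.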
The primary objective of this work was to develop a laser source for fundamental investigations in the field of laser-materials interactions. In particular, it is intended to facilitate the study of the influence of the temporal energy distribution, such as the interaction between adjacent pulses, on ablation processes. The aim was therefore to design a laser with a highly flexible and easily controllable temporal energy distribution. The laser meeting these demands is an SBS-laser with optional active mode-locking. The nonlinear reflectivity of the SBS-mirror leads to passive Q-switching and emits ns-pulse bursts with µs spacing. The pulse train parameters, such as pulse duration, pulse spacing, pulse energy and number of pulses within a burst, can be individually adjusted by tuning the pump parameters and the starting conditions of the laser. Another feature of the SBS-reflection is phase conjugation, which leads to an excellent beam quality thanks to the compensation of phase distortions. Transverse fundamental mode operation and a beam quality better than 1.4 times the diffraction limit can be maintained for average output powers of up to 10 W. In addition to the dynamics on the ns-timescale described above, a defined splitting of each ns-pulse into a train of ps-pulses can be achieved by additional active mode-locking. This twofold temporal focussing of the intensity leads to single pulse energies of up to 2 mJ at pulse durations of approximately 400 ps, corresponding to a pulse peak power of 5 MW. While the pulse duration is of the same order of magnitude as those of other passively Q-switched lasers with simultaneous mode-locking, the pulse energy and pulse peak power exceed the values of these systems found in the literature by an order of magnitude. To the best of my knowledge, the laser presented here is the first implementation of a self-starting mode-locked SBS-laser oscillator.
In order to gain a better understanding and control of the transient output of the laser, two complementary numerical models were developed. The first is based on laser rate equations which are solved for each laser mode individually, while the mode-locking dynamics are calculated from the resulting transient spectrum. The rate equations consider the mean photon densities in the resonator; the propagation of the light inside the resonator is therefore not properly represented. The second model, in contrast, introduces a spatial resolution of the resonator, so the propagation inside the resonator can be considered more accurately. Consequently, a mismatch between the loss modulation frequency and the resonator round trip time can be taken into account. This model calculates all dynamics in the time domain, so spectral influences such as the Stokes shift have to be neglected. Both models achieve an excellent reproduction of the ns-dynamics generated by the SBS-Q-switch. Separately, each model fails to reproduce all aspects of the ps-dynamics of the SBS-laser in detail. This can be attributed to the complexity of the numerous physical processes involved in this system. Thanks to their complementary nature, however, they provide a very useful tool for investigating the various influences on the dynamics of the mode-locked SBS-laser individually. These aspects can eventually be recomposed to give a complete picture of the mechanisms which govern the output dynamics. Among the aspects under scrutiny were, in particular, the quality of the start resonator, which determines the starting condition for the SBS-Q-switch, the modulation depth of the AOM, and the phonon lifetime as well as the Brillouin frequency of the SBS-medium.
The numerical simulations and the experiments have opened several doors for further investigation and promise potential for further improvement of the experimental results. The results of the simulations, in combination with the experimental results which determined the starting conditions for the simulations, leave no doubt that the bandwidth generation can primarily be attributed to the SBS Stokes shift during the buildup of the Q-switch pulse. For each resonator round trip, bandwidth is generated by shifting a part of the revolving light in frequency. The magnitude of the frequency shift corresponds to the Brillouin frequency, which is a constant of the SBS material and amounts to 240 MHz in the case of SF6. The modulation of the AOM merely provides an exchange of population between spectrally adjacent modes and therefore diminishes a modulation in the spectrum. By using a material with a Brillouin frequency in the GHz range, the bandwidth generation can be considerably accelerated, thereby shortening the pulse duration. It was also demonstrated that yet another nonlinear effect of the SBS can be exploited: if the phonon lifetime is short compared to the resonator round trip time, we obtain a modulation in the SBS reflectivity that supports the modulation of the AOM. The application of external optical feedback by a conventional mirror turns out to be an alternative to the AOM for synchronizing the longitudinal resonator modes. The interesting feature of this system is that, although highly complex in its physical processes and temporal output dynamics, it is very simple and inexpensive from a technical point of view. No expensive modulators and no control electronics are necessary. Finally, the numerical models constitute a powerful tool for the investigation of the emission dynamics of complex laser systems on arbitrary timescales and can also display the spectral evolution of the laser output.
In particular it could be demonstrated that differences in the results of the complementary models vanish for systems of lesser complexity.
In this work, approaches to the development of new detection systems for the Analytical Ultracentrifuge (AUC) were explored. Unlike its counterparts in chromatographic fractionation techniques, a multidetection system for the AUC has not yet been implemented to its full extent despite its potential benefit. In this study, we tried to couple existing fundamental spectroscopic and scattering techniques that are used in day-to-day science as tools for extracting analyte information. Trials were performed to adapt Raman spectroscopy, light scattering and UV/Vis spectroscopy (with the possibility of working with the whole range of wavelengths) to the AUC. Raman spectroscopy and light scattering were concluded to be possible detection systems for the AUC, while the development of a fast fibre-optics-based multiwavelength detector was completed. The multiwavelength detector generated data matching the literature and reference measurement data, and collected data faster than the commercial instrument. With the generation of data in 3-D space by the UV/Vis detection system, the user can select the wavelength for the evaluation of experimental results, as the data set contains information from the whole UV/Vis wavelength range. The advantage of fast data generation was exemplified by the evaluation of data for a mixture of three colloids. These data were in conformity with measurement results from normal radial experiments and showed no significant diffusion broadening. We thus conclude that with our multiwavelength detector, meaningful data in 3-D space can be collected at a much faster speed of data generation.
On the basis of the Dynamic Syntax framework, this paper argues that the production pressures in dialogue determining alignment effects and given-versus-new informational effects also drive the shift from case-rich free word order systems without clitic pronouns to systems with clitic pronouns in rigid relative order. The paper introduces the assumptions of Dynamic Syntax, in particular the building up of interpretation through structural underspecification and update, sketches the attendant account of production with close coordination of parsing and production strategies, and shows how what was at the Latin stage a purely pragmatic, production-driven decision about linear ordering becomes encoded in the clitics of the Medieval Spanish system, which then, through successive steps of routinization, yields the modern systems with immediately pre-verbal fixed clitic templates.
Do institutions matter?
(2006)
Contents
1 Introduction
2 Institutions and Institutional Change
2.1 Institutions and Theoretical Concepts in Economics
2.2 Path Dependence
2.3 Inconsistency of Institutional Development
2.4 Determinants of Effectiveness
2.5 Efficiency of New Institutions
3 What is “Competition Policy”?
4 The Competition Policy in Russia as an Institution
4.1 Establishment of the Competition Policy as an Institution
4.2 Market Structure and Competition Policy
4.3 Measures of Competition Policy
4.3.1 Prohibition of Competition-Restrictive Agreements or Concerted Actions
4.3.2 Abuse of Dominance
4.3.3 Merger Control
4.3.4 Competition-Restrictive Actions of Administrative Bodies
4.4 Violations of the Competition Law
4.5 Problems of the Russian Competition Policy
5 Which Mistakes Has Russia Made in Implementing the Competition Policy?
6 Is a Lacking Effectiveness of Transplanted Institutions Inevitable?
7 Concluding Remarks
On a manifold with edge we construct a specific class of (edge-degenerate) elliptic differential operators. The ellipticity refers to the principal symbolic structure σ = (σ_ψ, σ_∧) of the edge calculus, consisting of the interior and edge symbol, denoted by σ_ψ and σ_∧, respectively. For our choice of weights the ellipticity will not require additional edge conditions of trace or potential type, and the operators will induce isomorphisms between the respective edge spaces.
We consider quasicomplexes of Boutet de Monvel operators in Sobolev spaces on a smooth compact manifold with boundary. To each quasicomplex we associate two complexes of symbols. One complex is defined on the cotangent bundle of the manifold and the other on that of the boundary. The quasicomplex is elliptic if these symbol complexes are exact away from the zero sections. We prove that elliptic quasicomplexes are Fredholm. As a consequence of this result we deduce that a compatibility complex for an overdetermined elliptic boundary problem operator is also Fredholm. Moreover, we introduce the Euler characteristic for elliptic quasicomplexes of Boutet de Monvel operators.
By quasicomplexes are usually meant perturbations of complexes small in some sense. Of interest are not only perturbations within the category of complexes but also those going beyond this category. A sequence perturbed in this way is no longer a complex, and so it bears no cohomology. We show how to introduce Euler characteristic for small perturbations of Fredholm complexes. The paper is to appear in Funct. Anal. and its Appl., 2006.
We present a formal analysis of iconic coverbal gesture. Our model describes the incomplete meaning of gesture that is derivable from its form, and the pragmatic reasoning that yields a more specific interpretation. Our formalism builds on established models of discourse interpretation to capture key insights from the descriptive literature on gesture: synchronous speech and gesture express a single thought, but while the form of an iconic gesture is an important clue to its interpretation, the content of gesture can be resolved only by linking it to its context.
We present a new analysis of illocutionary forces in dialogue. We analyze them as complex conversational moves involving two dimensions: what Speaker commits herself to and what she calls on Addressee to perform. We start from the analysis of speech acts such as confirmation requests or whimperatives, and extend the analysis to seemingly simple speech acts, such as statements and queries. Then, we show how to integrate our proposal in the framework of the Grammar for Conversation (Ginzburg, to app.), which is adequate for modelling agents' information states and how they get updated.
Decisions for the conservation of biodiversity and the sustainable management of natural resources typically relate to large scales, i.e. the landscape level. However, understanding and predicting the effects of land use and climate change on scales relevant for decision-making requires the inclusion of both large-scale vegetation dynamics and small-scale processes, such as soil-plant interactions. Integrating the results of multiple BIOTA subprojects enabled us to include the necessary data from soil science, botany, socio-economics and remote sensing in a high-resolution, process-based and spatially explicit model. Using an example from a sustainably used research farm and a communally used and degraded farming area in semiarid southern Namibia, we show the power of simulation models as a tool for integrating processes across disciplines and scales.
Sucrose synthase (Susy) is a key enzyme of sucrose metabolism, catalysing the reversible conversion of sucrose and UDP to UDP-glucose and fructose. Therefore, its activity, localization and function have been studied in various plant species. It has been shown that Susy can play a role in supplying energy to companion cells for phloem loading (Fu and Park, 1995), provides substrates for starch synthesis (Zrenner et al., 1995), and supplies UDP-glucose for cell wall synthesis (Haigler et al., 2001). Analysis of the Arabidopsis genome identifies six Susy isoforms. The expression of these isoforms was investigated using promoter-reporter gene constructs (GUS) and real-time RT-PCR. Although these isoforms are closely related at the protein level, they have radically different spatial and temporal patterns of expression in the plant, with no two isoforms showing the same distribution. More than one isoform is expressed in all organs examined. Some of them show high but specific expression in particular organs or developmental stages, whilst others are constantly expressed throughout the whole plant and across various stages of development. The in planta functions of the six Susy isoforms were explored through analysis of T-DNA insertion mutants and RNAi lines. Plants lacking expression of individual isoforms show no differences in growth and development, and are not significantly different from wild-type plants in soluble sugar, starch and cellulose contents under all growth conditions investigated. Analysis of a T-DNA insertion mutant lacking the Sus3 isoform, which is exclusively expressed in stomatal cells, revealed only a minor influence on guard cell osmoregulation and/or bioenergetics. Although none of the sucrose synthases appears to be essential for normal growth under our standard growth conditions, they may be necessary for growth under stress conditions. Different isoforms of sucrose synthase respond differently to various abiotic stresses.
It has been shown that oxygen deprivation up-regulates Sus1 and Sus4 and increases total Susy activity. However, the analysis of plants with reduced expression of both Sus1 and Sus4 revealed no obvious effects on plant performance under oxygen deprivation. Low temperature up-regulates Sus1 expression, but the loss of this isoform has no effect on the freezing tolerance of non-acclimated and cold-acclimated plants. These data provide a comprehensive overview of the expression of this gene family, which supports some of the previously reported roles for Susy and indicates the involvement of specific isoforms in metabolism and/or signalling.
Since 2002, keywords like service-oriented engineering, service-oriented computing, and service-oriented architecture have been widely used in research, education, and enterprises. These and related terms are often misunderstood or used incorrectly. To correct these misunderstandings, a deeper knowledge of the concepts and their historical background, together with an overview of service-oriented architectures, is needed; this paper provides it.
In this paper we present an approach to recover the dynamics from the recurrences of a system and then generate (multivariate) twin surrogate (TS) trajectories. In contrast to other approaches, such as linear-like surrogates, this technique produces surrogates which correspond to an independent copy of the underlying system, i.e. they induce a trajectory of the underlying system visiting the attractor in a different way. We show that these surrogates are well suited to testing for complex synchronization, which makes it possible to systematically assess the reliability of synchronization analyses. We then apply the TS to study binocular fixational movements and find strong indications that the fixational movements of the left and right eye are phase synchronized. This result indicates that there might be only one centre in the brain that produces the fixational movements in both eyes, or a close link between two centres.
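Phase synchronization between two signals is commonly quantified by the mean phase coherence of their Hilbert phases; the sketch below illustrates this index on synthetic signals and omits the twin-surrogate significance test that is the paper's actual contribution.

```python
import numpy as np
from scipy.signal import hilbert

def phase_coherence(x, y):
    """Mean phase coherence: 1 for perfectly locked phases, near 0 for none."""
    phi_x = np.angle(hilbert(x))  # instantaneous phase via analytic signal
    phi_y = np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * (phi_x - phi_y)))))

# Synthetic "left eye" / "right eye" signals: same frequency, constant
# phase offset, small independent noise (purely illustrative data).
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 2000)
left = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.normal(size=t.size)
right = np.sin(2 * np.pi * 1.5 * t + 0.3) + 0.1 * rng.normal(size=t.size)

r = phase_coherence(left, right)  # close to 1 for phase-locked signals
```

In a surrogate test, the same index would be recomputed between one signal and many surrogate copies of the other to obtain a null distribution.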
Goal-oriented dialog as a collaborative subordinated activity involving collective acceptance
(2006)
Modeling dialog as a collaborative activity consists notably in specifying the content of the Conversational Common Ground and the kind of social mental state involved. In previous work (Saget, 2006), we claimed that Collective Acceptance is the proper social attitude for modeling Conversational Common Ground in the particular case of goal-oriented dialog. Here we provide a formalization of Collective Acceptance, present elements for integrating this attitude into a rational model of dialog, and finally provide a model of referential acts as part of a collaborative activity. The particular case of reference has been chosen to exemplify our claims.
Contents:
1. Preverbal Composition in Old Irish and Old English
2. The Shape of the Modern Irish Verbal Lexeme
3. Particle Verbs in Irish and English
3.1. Definitions: Phrasal Verb or Prepositional Verb?
3.2. Examples
3.3. Obvious Similarities
3.4. Irish English Peculiarities
4. The Abolition of Verbal Composition in Irish and English – Parallels and Differences in Historical Syntax
5. Conclusions
Germination rates and germination fractions of seeds can be predicted well by the hydrothermal time (HTT) model. Its four parameters (hydrothermal time, minimum soil temperature, minimum soil moisture, and variation of minimum soil moisture), however, must be determined by lengthy germination experiments at combinations of several levels of soil temperature and moisture. For some applications of the HTT model it is more important to have approximate estimates for many species than exact values for only a few species. We suggest that minimum temperature and the variation of minimum moisture can be estimated from literature data and expert knowledge. This allows us to derive hydrothermal time and minimum moisture from existing data from germination experiments with one level of temperature and moisture. We applied our approach to a germination experiment comparing germination fractions of wild annual species along an aridity gradient in Israel. Using this simplified approach we estimated hydrothermal time and minimum moisture for 36 species. Comparison with exact data for three species shows that ours is a simple but effective method for obtaining parameters for the HTT model. Hydrothermal time and minimum moisture supposedly indicate climate-related germination strategies. We tested whether these two parameters varied with the climate at the site where the seeds had been collected. We found no consistent variation with climate across species, suggesting that variation is more strongly controlled by site-specific factors.
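In the standard HTT model, a seed fraction g germinates once the accumulated hydrothermal time (ψ − ψ_b(g))(T − T_b)·t reaches the hydrothermal time constant θ_HT, with the base water potential ψ_b(g) normally distributed across the seed population. A minimal sketch, with purely illustrative parameter values (not those estimated in the study):

```python
from statistics import NormalDist

# Illustrative HTT parameters (invented for this sketch):
theta_ht = 60.0    # hydrothermal time constant (MPa * degC * days)
t_base = 3.0       # minimum (base) temperature (degC)
psi_b50 = -1.0     # median base water potential (MPa)
sigma_psi_b = 0.3  # variation of base water potential (MPa)

def days_to_germination(temp, psi, fraction=0.5):
    """Predicted time for the given seed fraction to germinate at
    constant soil temperature `temp` (degC) and water potential `psi` (MPa)."""
    # Base water potential of this fraction, from the normal distribution.
    psi_b = psi_b50 + sigma_psi_b * NormalDist().inv_cdf(fraction)
    if temp <= t_base or psi <= psi_b:
        return float("inf")  # below a threshold: no germination predicted
    return theta_ht / ((temp - t_base) * (psi - psi_b))

t50 = days_to_germination(temp=18.0, psi=-0.2)  # median germination time
```

Later-germinating fractions have higher ψ_b, so they need more time at the same conditions, which is how the model links germination rate and fraction.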
Our work goes in two directions. First, we transfer definitions, concepts and results of the theory of hyperidentities and solid varieties from the total to the partial case. (1) We prove that the operators χ^A_RNF and χ^E_RNF are only monotone and additive, and we show that the sets of all fixed points of these operators are characterized by only three instead of four equivalent conditions in the case of closure operators. (2) We prove that V is n-SF-solid iff clone^SF(V) is free with respect to itself, freely generated by the independent set {[f_i(x_1, ..., x_n)]Id^SF_n(V) | i ∈ I}. (3) We prove that if V is n-fluid and ~V|P(V) = ~V-iso|P(V), then V is k-unsolid for k ≥ n (where P(V) is the set of all V-proper hypersubstitutions of type τ). (4) We prove that a strong M-hyperquasi-equational theory is characterized by four equivalent conditions. The second direction of our work follows ideas which are specific to the partial case. (1) We characterize all minimal partial clones which are strongly solidifyable. (2) We define the operator χ^A_Ph, where Ph is a monoid of regular partial hypersubstitutions. Using this concept, we define the notion of a PHyp_R(τ)-solid strong regular variety of partial algebras and prove that a PHyp_R(τ)-solid strong regular variety satisfies four equivalent conditions.
In two experiments, many annotators marked antecedents for discourse deixis as unconstrained regions of text. The experiments show that annotators do converge on the identity of these text regions, though much of what they do can be captured by a simple model. Demonstrative pronouns are more likely than definite descriptions to be marked with discourse antecedents. We suggest that our methodology is suitable for the systematic study of discourse deixis.
Improvement of a fluorescence immunoassay with a compact diode-pumped solid state laser at 315 nm
(2006)
We demonstrate the improvement of fluorescence immunoassay (FIA) diagnostics by deploying a newly developed compact diode-pumped solid-state (DPSS) laser with emission at 315 nm. The laser is based on the quasi-three-level transition in Nd:YAG at 946 nm. Pulsed operation is realized either by an active Q-switch using an electro-optical device or by introducing a Cr<SUP>4+</SUP>:YAG saturable absorber as a passive Q-switch element. By extra-cavity second harmonic generation in different nonlinear crystal media we obtained blue light at 473 nm. Subsequent mixing of the fundamental and the second harmonic in a β-barium-borate crystal provided pulsed emission at 315 nm with up to 20 μJ maximum pulse energy and 17 ns pulse duration. Substituting the DPSS laser for a nitrogen laser in an FIA diagnostics system yielded a considerable improvement of the detection limit. Despite significantly lower pulse energies (7 μJ for the DPSS laser versus 150 μJ for the nitrogen laser), in preliminary investigations the limit of detection was reduced by a factor of three for a typical FIA.
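The wavelength chain reported above (946 nm → 473 nm → 315 nm) follows directly from energy conservation in the two nonlinear steps; a quick check, using nothing beyond the stated wavelengths:

```python
def second_harmonic(lam_nm):
    """Second harmonic generation halves the wavelength (doubles the frequency)."""
    return lam_nm / 2.0

def sum_frequency(lam1_nm, lam2_nm):
    """Sum-frequency mixing: 1/lam3 = 1/lam1 + 1/lam2 (photon energies add)."""
    return 1.0 / (1.0 / lam1_nm + 1.0 / lam2_nm)

fundamental = 946.0                    # Nd:YAG quasi-three-level line, nm
blue = second_harmonic(fundamental)    # 473 nm
uv = sum_frequency(fundamental, blue)  # 946/3 nm, i.e. ~315.3 nm
```

Mixing the fundamental with its own second harmonic always yields one third of the fundamental wavelength, which is why the UV output lands at 946/3 ≈ 315 nm.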
The goal of a Brain-Computer Interface (BCI) is the development of a unidirectional interface between a human and a computer that allows a device to be controlled via brain signals alone. While the BCI systems of almost all other groups require the user to be trained over several weeks or even months, the group of Prof. Dr. Klaus-Robert Müller in Berlin and Potsdam, to which I belong, was one of the first research groups in this field to use machine learning techniques on a large scale. The adaptivity of the processing system to the individual brain patterns of the subject confers huge advantages on the user. BCI research is therefore considered a hot topic in machine learning and computer science. It requires interdisciplinary cooperation with disparate fields such as neuroscience, since the largest progress will be made only by combining machine learning and signal processing techniques grounded in neurophysiological knowledge. In this work I deal in particular with my part of this project, which lies mainly in the area of computer science. I have considered the following three main points: <b>Establishing a performance measure based on information theory:</b> I have critically examined the assumptions behind Shannon's information transfer rate as applied in a BCI context. By establishing suitable coding strategies I was able to show that this theoretical measure approximates quite well what is practically achievable. <b>Transfer and development of suitable signal processing and machine learning techniques:</b> One substantial component of my work was to develop several machine learning and signal processing algorithms to improve the efficiency of a BCI. Based on the neurophysiological knowledge that several independent EEG features can be observed for some mental states, I have developed a method for combining different, possibly independent, features which improved performance. 
In some cases the combination algorithm outperforms the best single feature by more than 50 %. Furthermore, through the development of suitable algorithms I have addressed, both theoretically and practically, the question of the optimal number of classes for a BCI. It transpired that with the BCI performances reported so far, three or four different mental states are optimal. As a further extension I have combined ideas from signal processing with those of machine learning, since a high gain can be achieved if the temporal filtering, i.e., the choice of frequency bands, is automatically adapted to each subject individually. <b>Implementation of the Berlin Brain-Computer Interface and realization of suitable experiments:</b> Finally, a further substantial component of my work was to realize an online BCI system which includes the developed methods but is also flexible enough to allow the simple realization of new algorithms and ideas. So far, bitrates of up to 40 bits per minute have been achieved with this system by completely untrained users, which, compared to the results of other groups, is highly successful.
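The information-theoretic performance measure discussed above is commonly computed per trial from the number of classes N and the classification accuracy P (the Wolpaw-style bitrate derived from Shannon's channel capacity). The accuracies below are hypothetical, chosen only to illustrate why a moderate class count can be optimal when accuracy drops as classes are added.

```python
import math

def bits_per_trial(n_classes, accuracy):
    """Shannon-derived bitrate per selection for an N-class BCI, assuming
    errors are distributed symmetrically over the N-1 wrong classes."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)
    return math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))

# Hypothetical per-subject accuracies decreasing with class count:
rates = {n: bits_per_trial(n, p)
         for n, p in [(2, 0.95), (3, 0.88), (4, 0.80), (5, 0.70)]}
```

With these illustrative numbers the bitrate peaks at three to four classes rather than two or five, mirroring the abstract's conclusion; with real per-subject accuracies the optimum shifts accordingly.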
The aim of this study was to provide deeper insights into passerine phylogenetic relationships using new molecular markers. The monophyly of the largest avian order, Passeriformes (~59% of all living birds), and its division into the suborders suboscines and oscines are well established. Phylogenetic relationships within the group, however, have been extremely puzzling, as most of the evolutionary lineages originated through rapid radiation. Numerous studies have hypothesised conflicting passerine phylogenies and have repeatedly stimulated further research with new markers. In the present study, I used three different approaches to contribute to the ongoing phylogenetic debate on Passeriformes. I investigated the recently introduced gene ZENK for its phylogenetic utility in passerine systematics, in combination with and in comparison to three already established nuclear markers. My phylogenetic analyses of a comprehensive data set yielded highly resolved, consistent and strongly supported trees. I was able to show the high utility of ZENK for elucidating phylogenetic relationships within Passeriformes. For the second and third approaches, I used chicken repeat 1 (CR1) retrotransposons as phylogenetic markers. I presented two specific CR1 insertions as apomorphic characters whose presence/absence pattern significantly contributed to the resolution of a particular phylogenetic uncertainty, namely the position of the rockfowl species Picathartes spp. in the passerine tree. Based on my results, I suggest a closer relationship of these birds to crows, ravens, jays, and allies. For the third approach, I showed that CR1 sequences contain phylogenetic signal and investigated their applicability in more detail. In this context, I screened for CR1 elements in different passerine birds, used sequences of several loci to construct phylogenetic trees, and evaluated their reliability. I was able to corroborate existing hypotheses and provide strong evidence for some new ones, e.g. 
I suggest a revision of the taxa Corvidae and Corvinae, as vireos are more closely related to crows, ravens, and allies. The subdivision of the Passerida into three superfamilies, Sylvioidea, Passeroidea, and Muscicapoidea, was strongly supported. I found evidence for a split within Sylvioidea into two clades, one consisting of tits and the other comprising warblers, bulbuls, laughingthrushes, whitethroats, and allies. Whereas Passeridae appear to be paraphyletic, the monophyly of weavers and estrildid finches as a separate clade was strongly supported. The sister-taxon relationship of dippers and the thrushes/flycatchers/chats assemblage was corroborated, and I suggest a closer relationship of waxwings and kinglets to wrens, tree-creepers, and nuthatches.
We consider a system of infinitely many hard balls in R<sup>d</sup> undergoing Brownian motions and subject to a smooth pair potential. It is modelled by an infinite-dimensional stochastic differential equation with a local time term. We prove that the set of all equilibrium measures, i.e. solutions of a detailed balance equation, coincides with the set of canonical Gibbs measures associated with the hard-core potential added to the smooth interaction potential.
The formation of colloids by the controlled reduction, nucleation, and growth of inorganic precursor salts in different media has been investigated for more than a century. Recently, the preparation of ultrafine particles has received much attention since they offer highly promising and novel options for a wide range of technical applications (nanotechnology, electro-optical devices, pharmaceutics, etc.). The interest derives from the well-known fact that the properties of advanced materials depend critically on the microstructure of the sample. Control of the size, size distribution and morphology of the individual grains or crystallites is of the utmost importance in order to obtain the desired material characteristics. Several methods can be employed for the synthesis of nanoparticles. On the one hand, the reduction can occur in dilute aqueous or alcoholic solutions. On the other hand, the reduction process can be realized in a template phase, e.g. in well-defined microemulsion droplets. However, the stability of the nanoparticles formed depends mainly on their surface charge and can be influenced by added protective components. Quite different types of polymers, including polyelectrolytes and amphiphilic block copolymers, can be used as protecting agents. The reduction and stabilization of metal colloids in aqueous solution by adding self-synthesized hydrophobically modified polyelectrolytes were studied in detail. The polymers used are hydrophobically modified derivatives of poly(sodium acrylate) and of maleamic acid copolymers as well as the commercially available branched poly(ethyleneimine). The first notable result is that the polyelectrolytes used can act alone as both reducing and stabilizing agent in the preparation of gold nanoparticles. The investigation then focused on the influence of the hydrophobic substitution of the polymer backbone on the reduction and stabilization processes. 
First of all, the polymers were added at room temperature and the reduction process was investigated over a longer period (up to 8 days). In comparison, the reduction process proceeded faster at higher temperature, i.e. 100°C. In both cases metal nanoparticles of colloidal dimensions can be produced. However, the size and shape of the individual nanoparticles depend mainly on the polymer added and the temperature procedure used. In a second part, the influence of the aforementioned polyelectrolytes on the phase behaviour, as well as on the properties of the inverse micellar region (L2 phase), of quaternary systems consisting of a surfactant, toluene-pentanol (1:1) and water was investigated. The majority of the present work was carried out with the anionic surfactant sodium dodecylsulfate (SDS) and the cationic surfactant cetyltrimethylammonium bromide (CTAB), since they can interact with the oppositely charged polyelectrolytes and the microemulsions formed using these surfactants present a large water-in-oil region. Subsequently, the polymer-modified microemulsions were used as new templates for the synthesis of inorganic particles of very small size, ranging from metals to complex crystallites. The water droplets can indeed act as nanoreactors for the nucleation and growth of the particles, and the added polymer can influence the droplet size, the droplet-droplet interactions, as well as the stability of the surfactant film through the formation of polymer-surfactant complexes. A further advantage of the polymer-modified microemulsions is the possibility of stabilizing the initially formed nanoparticles via polymer adsorption (steric and/or electrostatic stabilization). Thus, the polyelectrolyte-modified nanoparticles formed can be redispersed without flocculation after solvent evaporation.
Soils contain a large amount of carbon (C) that is a critical regulator of the global C budget. Even small changes in the processes governing soil C cycling have the potential to release considerable amounts of CO2, a greenhouse gas (GHG), adding radiative forcing to the atmosphere and hence contributing to climate change. Increased temperatures will probably create a feedback, causing soils to release more GHGs. Furthermore, changes in the soil C balance affect soil fertility and soil quality, potentially degrading soils and reducing their function as an important resource. Consequently, the assessment of soil C dynamics under present, recent past and future environmental conditions is not only of scientific interest; it requires an integrated consideration of the main factors and processes governing soil C dynamics. To perform this assessment an eco-hydrological modelling tool was used and extended by a process-based description of coupled soil carbon and nitrogen turnover. The extended model aims at delivering sound information on soil C storage changes, alongside changes in water quality, quantity and vegetation growth, under global change impacts in meso- to macro-scale river basins, demonstrated here for a Central European river basin (the Elbe). As a result this study: ▪ Provides information on the joint effects of land use (land cover and land management) and climate changes on the cropland soil C balance in the Elbe river basin (Central Europe), both at present and in the future. ▪ Evaluates which processes, and at what level of process detail, have to be considered to perform an integrated simulation of soil C dynamics at the meso- to macro-scale, and demonstrates the model's capability to simulate these processes compared to observations. ▪ Proposes a process description relating soil C pools and turnover properties to readily measurable quantities. 
This reduces the number of model parameters, enhances the comparability of model results to observations, and delivers the same performance in simulating long-term soil C dynamics as other models. ▪ Presents an extensive assessment of the parameter and input data uncertainty and its importance, both temporally and spatially, for modelling soil C dynamics. For the basin-scale assessments it is estimated that croplands in the Elbe basin currently act as a net source of carbon (a net annual C flux of 11 g C m<sup>-2</sup> yr<sup>-1</sup>, or 1.57 × 10<sup>6</sup> tons CO2 yr<sup>-1</sup> over the entire cropland area on average), although this depends strongly on the amount of harvest by-products remaining on the field. Anticipated future climate change, and observed climate change in the basin, already accelerate soil C loss and increase the source strength (an additional 3.2 g C m<sup>-2</sup> yr<sup>-1</sup>, or 0.48 × 10<sup>6</sup> tons CO2 yr<sup>-1</sup> over the entire cropland area). But anticipated changes of agro-economic conditions, translating into altered crop share distributions, have stronger effects on soil C storage than climate change. Depending on the use of land expected to fall out of agricultural production (~30 % of the cropland area as "surplus" land), the basin either loses considerable soil C and the net annual C flux to the atmosphere increases (surplus land used as black fallow), or converts to a net sink of C (sequestering 0.44 × 10<sup>6</sup> tons CO2 yr<sup>-1</sup> under extensified use as ley-arable), or reacts with a decrease in source strength when bioenergy crops are grown. Bioenergy crops additionally offer a considerable potential for fossil fuel substitution (~37 PJ yr<sup>-1</sup>; 1 PJ = 10<sup>15</sup> J), whereas the basin-wide use of harvest by-products for energy generation must be viewed critically, although it offers an annual energy potential of approximately 125 PJ. Harvest by-products play a central role in soil C reproduction, and between 50 and 80 % should remain on the fields in order to maintain soil quality and fertility. 
The established modelling tool allows quantification of climate, land use and major land management impacts on the soil C balance. What is new is that the SOM turnover description is embedded in an eco-hydrological river basin model, allowing an integrated consideration of water quantity, water quality, vegetation growth, agricultural productivity and soil carbon changes under different environmental conditions. The methodology and assessment presented here demonstrate the potential for an integrated assessment of soil C dynamics alongside other ecosystem services under global change impacts, and provide information on the potential of soils for climate change mitigation (soil C sequestration) and on their fertility status.
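The per-area and basin-wide numbers quoted above are linked by a simple unit conversion (C to CO2 via the molar-mass ratio 44/12, then scaling by cropland area). The area value below is back-calculated from the quoted figures for illustration; it is not taken from the study itself.

```python
C_TO_CO2 = 44.0 / 12.0                 # molar-mass ratio CO2 : C

def basin_flux_tons_co2(flux_g_c_per_m2, area_m2):
    """Convert an areal C flux (g C per m^2 per yr) to basin-wide tons CO2 per yr."""
    grams_co2 = flux_g_c_per_m2 * C_TO_CO2 * area_m2
    return grams_co2 / 1e6             # grams -> metric tons

# Cropland area implied by 11 g C m^-2 yr^-1 summing to ~1.57e6 t CO2 yr^-1
# (back-calculated: ~3.9e10 m^2, i.e. roughly 39,000 km^2):
area = 1.57e12 / (11.0 * C_TO_CO2)
total = basin_flux_tons_co2(11.0, area)
```

The same conversion reproduces the second pair of figures: an extra 3.2 g C m<sup>-2</sup> yr<sup>-1</sup> over the same area gives roughly the quoted 0.48 × 10<sup>6</sup> tons CO2 yr<sup>-1</sup>.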
Integration of digital elevation models and satellite images to investigate geological processes.
(2006)
In order to better understand the geological boundary conditions for ongoing or past surface processes, geologists face two important questions: 1) How can we gain additional knowledge about geological processes by analyzing digital elevation models (DEM) and satellite images? and 2) Do these efforts present a viable approach to more efficient research? Here, we present case studies at a variety of scales and levels of resolution to illustrate how remote sensing techniques can substantially complement and enhance classical geological approaches. Commonly, satellite- and DEM-based studies are used as a first step in assessing areas of geologic interest. While in the past the analysis of satellite imagery (e.g. Landsat TM) and aerial photographs was carried out to characterize regional geologic characteristics, particularly structure and lithology, geologists have increasingly ventured into a process-oriented approach. This entails assessing structures and geomorphic features with a concept that includes active tectonics or tectonic activity on time scales relevant to humans. In addition, these efforts involve analyzing and quantifying the processes acting at the surface by integrating different remote sensing and topographic data (e.g. SRTM-DEM, SSM/I, GPS, Landsat 7 ETM, Aster, Ikonos…). A combined structural and geomorphic study in the hyperarid Atacama desert demonstrates the use of satellite and digital elevation data for assessing geological structures formed by long-term (millions of years) feedback mechanisms between erosion and crustal bending (Zeilinger et al., 2005). The medium-term change of landscapes, over hundreds of thousands to millions of years in a more humid setting, is shown in an example from southern Chile. 
Based on an analysis of rivers and watersheds combined with landscape parameterization using digital elevation models, the geomorphic evolution and change in drainage pattern in the coastal Cordillera can be quantified and put into the context of seismotectonic segmentation of a tectonically active region. This has far-reaching implications for earthquake rupture scenarios and hazard mitigation (K. Rehak, see poster at the IMAF Workshop). Two examples illustrate short-term processes on decadal, centennial and millennial time scales. One study uses orogen-scale precipitation gradients derived from remotely sensed passive microwave data (Bookhagen et al., 2005a). It demonstrates how debris flows were triggered as a response of slopes to abnormally strong rainfall in the interior parts of the Himalaya during intensified monsoons. The area of the orogen that receives high amounts of precipitation during intensified monsoons also contains numerous landslide deposits of up to 1 km<sup>3</sup> volume that were generated during intensified monsoon phases at about 27 and 9 ka (Bookhagen et al., 2005b). Another project, in the Swiss Alps, compared sets of aerial photographs recorded in different years. By calculating high-resolution surfaces, the mass transport in a landslide could be reconstructed (M. Schwab, Universität Bern). All these examples, although representing only a short and limited selection of projects using remote sensing data in geology, share the common goal of quantifying geological processes. With increasing data resolution and new sensors, future projects will enable us to recognize even more patterns and/or structures indicative of geological processes in tectonically active areas. This is crucial for the analysis of natural hazards like earthquakes, tsunamis and landslides, as well as hazards related to climatic variability. 
The integration of remotely sensed data at different spatial and temporal scales with field observations is becoming increasingly important. Many presently highly populated and increasingly utilized regions are subject to significant environmental pressure and often constitute areas of concentrated economic value. Combined remote sensing and ground-truthing in these regions is particularly important, as geologic, seismicity and hydrologic data may be limited here due to the recency of infrastructural development. Monitoring ongoing processes and evaluating the remotely sensed data in terms of the recurrence of events will greatly enhance our ability to assess and mitigate natural hazards. <hr> Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, workshop of 9-10 February 2006
Ultrathin, semi-permeable membranes are not only essential in natural systems (membranes of cells or organelles) but are also important for applications (separation, filtering) in miniaturized devices. Membranes integrated as diffusion barriers or filters in micron-scale devices need to fulfill requirements equivalent to those of the natural systems, in particular mechanical stability and functionality (e.g. permeability), while being only tens of nm in thickness to allow fast diffusion times. Promising candidates for such membranes are polyelectrolyte multilayers, which have been found to be mechanically stable and variable in functionality. In this thesis two concepts for integrating such membranes into larger-scale structures were developed. The first is based on the directed adhesion of polyelectrolyte hollow microcapsules. As a result, arrays of capsules were created. These can be useful for combinatorial chemistry or sensing. This concept was expanded to couple encapsulated living cells to the surface. The second concept is the transfer of flat freestanding multilayer membranes to structured surfaces. We have developed a method that allows us to couple mm<sup>2</sup> areas of defect-free film, with thicknesses down to 50 nm, to structured surfaces while avoiding crumpling of the membrane. We could again use this technique to produce arrays of micron size. The freestanding membrane is a diffusion barrier for high-molecular-weight molecules, while small molecules can pass through, which allows us to sense solution properties. We have also shown that osmotic pressure leads to membrane deflection, which could be described quantitatively.
Interdisciplinary studies on information structure : ISIS ; Working papers of the SFB 632 - Vol. 5
(2006)
In this paper we compare the behaviour of adverbs of frequency (de Swart 1993) like usually with the behaviour of adverbs of quantity like for the most part in sentences that contain plural definites. We show that sentences containing the former type of Q-adverb evidence that Quantificational Variability Effects (Berman 1991) come about as an indirect effect of quantification over situations: in order for quantificational variability readings to arise, these sentences have to obey two newly observed constraints that clearly set them apart from sentences containing corresponding quantificational DPs, and that can plausibly be explained under the assumption that quantification over (the atomic parts of) complex situations is involved. Concerning sentences with the latter type of Q-adverb, on the other hand, such evidence is lacking: with respect to the constraints just mentioned, they behave like sentences that contain corresponding quantificational DPs. We take this as evidence that Q-adverbs like for the most part do not quantify over the atomic parts of sum eventualities in the cases under discussion (as claimed by Nakanishi and Romero (2004)), but rather over the atomic parts of the respective sum individuals.
The layer-by-layer (LBL) assembly of polyelectrolytes has been extensively studied for the preparation of ultrathin films due to the versatility of the build-up process. The control of the permeability of these layers is particularly important, as there are potential drug delivery applications. Multilayered polyelectrolyte microcapsules are also of great interest due to their possible use as microcontainers. This work presents two methods that can be used as employable drug delivery systems, both of which can encapsulate an active molecule and tune the release properties of the active species. Poly(N-isopropylacrylamide) (PNIPAM) is known to be a thermo-sensitive polymer with a Lower Critical Solution Temperature (LCST) around 32°C; above this temperature PNIPAM is insoluble in water and collapses. It is also known that the LCST decreases with the addition of salt. This work shows Differential Scanning Calorimetry (DSC) and Confocal Laser Scanning Microscopy (CLSM) evidence that the LCST of PNIPAM can be tuned with salt type and concentration. Microcapsules were used to encapsulate this thermo-sensitive polymer, resulting in a reversible and tunable stimuli-responsive system. The encapsulation of the PNIPAM inside the capsules was proven with Raman spectroscopy, DSC (bulk LCST measurements), AFM (thickness change), SEM (morphology change) and CLSM (in situ LCST measurement inside the capsules). The exploitation of the capsules as microcontainers is advantageous not only because of the protection the capsules give to the active molecules, but also because it facilitates easier transport. The second system investigated demonstrates the ability to reduce the permeability of polyelectrolyte multilayer films by the addition of charged wax particles. The incorporation of this hydrophobic coating leads to a reduced water sensitivity, particularly after heating, which melts the wax, forming a barrier layer. 
This conclusion was proven with neutron reflectivity by showing the decreased presence of D2O in planar polyelectrolyte films after annealing, which creates a barrier layer. The permeability of capsules could also be decreased by the addition of a wax layer. This was proven by the increase in recovery time measured in Fluorescence Recovery After Photobleaching (FRAP) experiments. In general, two advanced methods potentially suitable for drug delivery systems have been proposed. In both cases, if biocompatible elements are used to fabricate the capsule wall, these systems provide a stable method of encapsulating active molecules. Stable encapsulation, coupled with the ability to tune the wall thickness, provides the ability to control the release profile of the molecule of interest.
Irish standard English
(2006)
Near-infrared (NIR) absorption spectroscopy with tunable diode lasers allows the simultaneous detection of the three most important isotopologues of carbon dioxide (<SUP>12</SUP>CO<SUB>2</SUB>, <SUP>13</SUP>CO<SUB>2</SUB>, <SUP>12</SUP>C<SUP>18</SUP>O<SUP>16</SUP>O) and carbon monoxide (<SUP>12</SUP>CO, <SUP>13</SUP>CO, <SUP>12</SUP>C<SUP>18</SUP>O). The flexible and compact fiber-optic tunable diode laser absorption spectrometer (TDLAS) allows selective measurements of CO<SUB>2</SUB> and CO with high isotopic resolution without sample preparation since there is no interference with water vapour. For each species, linear calibration plots with a dynamic range of four orders of magnitude and detection limits (LOD) in the range of a few ppm were obtained utilizing wavelength modulation spectroscopy (WMS) with balanced detection in a Herriott-type multipass cell. The high performance of the apparatus is illustrated by fill-evacuation-refill cycles.
Förster Resonance Energy Transfer (FRET) plays an important role for biochemical applications such as DNA sequencing, intracellular protein-protein interactions, molecular binding studies, in vitro diagnostics and many others. For qualitative and quantitative analysis, FRET systems are usually assembled through molecular recognition of biomolecules conjugated with donor and acceptor luminophores. Lanthanide (Ln) complexes, as well as semiconductor quantum dot nanocrystals (QD), possess unique photophysical properties that make them especially suitable for applied FRET. In this work the possibility of using QD as very efficient FRET acceptors in combination with Ln complexes as donors in biochemical systems is demonstrated. The necessary theoretical and practical background of FRET, Ln complexes, QD and the applied biochemical models is outlined. In addition, scientific as well as commercial applications are presented. FRET can be used to measure structural changes or dynamics at distances ranging from approximately 1 to 10 nm. The very strong and well characterized binding process between streptavidin (Strep) and biotin (Biot) is used as a biomolecular model system. A FRET system is established by Strep conjugation with the Ln complexes and QD biotinylation. Three Ln complexes (one with Tb3+ and two with Eu3+ as central ion) are used as FRET donors. Besides the QD two further acceptors, the luminescent crosslinked protein allophycocyanin (APC) and a commercial fluorescence dye (DY633), are investigated for direct comparison. FRET is demonstrated for all donor-acceptor pairs by acceptor emission sensitization and a more than 1000-fold increase of the luminescence decay time in the case of QD reaching the hundred microsecond regime. Detailed photophysical characterization of donors and acceptors permits analysis of the bioconjugates and calculation of the FRET parameters. 
Extremely large Förster radii of more than 100 Å are achieved for QD as acceptors, considerably larger than for APC and DY633 (ca. 80 and 60 Å). Special attention is paid to interactions with different additives in aqueous solutions, namely borate buffer, bovine serum albumin (BSA), sodium azide and potassium fluoride (KF). A more than 10-fold decrease of the limit of detection (LOD), compared to the extensively characterized and frequently used donor-acceptor pair of europium tris(bipyridine) (Eu-TBP) and APC, is demonstrated for the FRET system consisting of the Tb complex and QD. A sub-picomolar LOD for QD is achieved with this system in azide-free borate buffer (pH 8.3) containing 2 % BSA and 0.5 M KF. In order to transfer the Strep-Biot model system to a real-life in vitro diagnostic application, two kinds of immunoassays are investigated using human chorionic gonadotropin (HCG) as analyte. HCG itself, as well as two monoclonal anti-HCG mouse IgG (immunoglobulin G) antibodies, are labeled with the Tb complex and QD, respectively. Although no sufficient evidence of FRET was found for a sandwich assay, FRET becomes obvious in a direct HCG-IgG assay, showing the feasibility of using the Ln-QD donor-acceptor pair as a highly sensitive analytical tool for in vitro diagnostics.
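The practical meaning of the large Förster radii quoted above can be seen from the standard FRET efficiency relation E = 1 / (1 + (r/R0)^6); the donor-acceptor distance used below is illustrative.

```python
def fret_efficiency(r, r0):
    """Energy-transfer efficiency for donor-acceptor distance r and
    Förster radius r0 (same length unit for both, here Angstrom)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# At a hypothetical 80 Angstrom separation, a 100 Angstrom Förster radius
# (the Ln-QD pair) still transfers most of the excitation energy, while a
# 60 Angstrom radius (an organic dye acceptor) transfers only a small fraction:
e_qd = fret_efficiency(80.0, 100.0)
e_dye = fret_efficiency(80.0, 60.0)
```

The sixth-power distance dependence is what makes the difference so stark: at r = R0 the efficiency is exactly 50 %, and it falls off steeply beyond that.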
In this work the first observation of a new type of liquid crystal is presented: ionic self-assembly (ISA) liquid crystals, formed by introducing oppositely charged ions between different low-molecular tectonic units. As practically all conventional liquid crystals consist of a rigid core and alkyl chains, attention is focused on the simplest case, in which oppositely charged ions are placed between a rigid core and alkyl tails. The aim of this work is to investigate and understand the liquid crystalline and alignment properties of these materials. It was found that ionic interactions within the complexes play the main role. The presence of these interactions restricts the transition to the isotropic phase. In addition, these interactions hold the system together (like a network), allowing crystallization into a single domain from the aligned LC state. Alignment of these simple ISA complexes was spontaneous on a glass substrate. In order to show their potential for application, perylenediimide- and azobenzene-containing ISA complexes were investigated for correlations between phase behavior and alignment properties. The best macroscopic alignment of perylenediimide-based ISA complexes was obtained by the zone-casting method. In the aligned films the columns of the complex align perpendicular to the phase-transition front. The obtained anisotropy (DR = 18) is thermally stable. The investigated photosensitive (azobenzene-based) ISA complexes form columnar LC phases. It was demonstrated that photo-alignment of such complexes is very effective (DR = 50 was obtained). It was shown that photo-reorientation in the photosensitive ISA complexes is a cooperative process. The size of the domains has a direct influence on the efficiency of the photo-reorientation process; in the case of small domains the photo-alignment is most effective. 
Under irradiation with linearly polarized light, the domains reorient in the plane of the film, leading to macroscopic alignment of the columns parallel to the light polarization and to the merging of small domains into larger ones. Finally, additional distinguishing properties of the ISA liquid crystalline complexes should be noted: (I) the complexes do not dissolve in water but dissolve readily in organic solvents; (II) the complexes have good film-forming properties when cast or spin-coated from organic solvents; (III) alignment of the complexes depends on their structure and on the secondary interactions between the tectonic units.
We prove the existence of a class of local-in-time solutions, including static solutions, of the Einstein-Euler system. This result is the relativistic generalisation of a similar result for the Euler-Poisson system obtained by Gamblin [8]. As in his case, the initial data for the density do not have compact support but fall off at infinity in an appropriate manner. An essential tool in our approach is the construction and use of weighted Sobolev spaces of fractional order. Moreover, these new spaces allow us to improve the regularity conditions for solutions of evolution equations. The details of this construction, the properties of these spaces and results on elliptic and hyperbolic equations will be presented in a forthcoming article.
We describe an experiment to gather original data on geometrical aspects of pointing. In particular, we focus on the concept of the pointing cone, a geometrical model of a pointing gesture's extension. In our setting we employed methodological and technical procedures of a new type to integrate data from annotations as well as from tracker recordings, combining exact information on position and orientation with raters' classifications. Our first results seem to challenge classical linguistic and philosophical theories of demonstration in that they suggest separating pointing from reference.
The rigorous development, application and validation of distributed hydrological models requires evaluating data in a spatially distributed way. In particular, spatial model predictions, such as the distribution of soil moisture, runoff-generating areas, nutrient-contributing areas or erosion rates, are to be assessed against spatially distributed observations. Model inputs, such as the distribution of modelling units derived by GIS and remote sensing analyses, should also be evaluated against ground-based observations of landscape characteristics. So far, however, quantitative methods of spatial field comparison have rarely been used in hydrology. In this paper, we present algorithms that allow the comparison of observed and simulated spatial hydrological data. The methods can be applied to binary and categorical data on regular grids. They comprise cell-by-cell algorithms, cell-neighbourhood approaches that account for fuzziness of location, and multi-scale algorithms that evaluate the similarity of spatial fields with changing resolution. All methods provide a quantitative measure of the similarity of two maps. The comparison methods are applied in two mountainous catchments in southern Germany (Brugga, 40 km²) and Austria (Löhnersbach, 16 km²). As an example of binary hydrological data, the distribution of saturated areas is analyzed in both catchments. For categorical data, vegetation zones that are associated with different runoff generation mechanisms are analyzed in the Löhnersbach. Mapped spatial patterns are compared to simulated patterns from terrain index calculations and from satellite image analysis. We discuss how particular features of visual similarity between the spatial fields are captured by the quantitative measures, leading to recommendations on suitable algorithms in the context of evaluating distributed hydrological models.
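Two of the measure families named above — strict cell-by-cell agreement and a cell-neighbourhood score that tolerates small locational errors — can be sketched as follows. This is a minimal illustration with hypothetical function names, not the paper's actual algorithms:

```python
# Sketch of two similarity measures for binary patterns on a regular grid:
# (1) the fraction of cells where the maps agree exactly, and
# (2) a fuzzier variant where a cell counts as matched if the observed
#     value occurs anywhere within a given search radius in the simulation,
#     accounting for fuzziness of location.

def cellwise_agreement(obs, sim):
    """Fraction of grid cells where observed and simulated maps agree."""
    cells = [o == s for row_o, row_s in zip(obs, sim) for o, s in zip(row_o, row_s)]
    return sum(cells) / len(cells)

def neighbourhood_agreement(obs, sim, radius=1):
    """Cell matches if its observed value occurs within `radius` cells in sim."""
    ny, nx = len(obs), len(obs[0])
    hits = 0
    for i in range(ny):
        for j in range(nx):
            window = [sim[a][b]
                      for a in range(max(0, i - radius), min(ny, i + radius + 1))
                      for b in range(max(0, j - radius), min(nx, j + radius + 1))]
            hits += obs[i][j] in window
    return hits / (ny * nx)

# Toy 3x3 example: a simulated saturated-area pattern shifted by one cell.
obs = [[1, 1, 0],
       [0, 1, 0],
       [0, 0, 0]]
sim = [[1, 0, 0],
       [0, 0, 1],
       [0, 0, 0]]
print(cellwise_agreement(obs, sim))       # strict score penalizes the shift
print(neighbourhood_agreement(obs, sim))  # fuzzy score forgives it
```

A pattern that is merely displaced by one cell scores poorly cell-by-cell but well under the neighbourhood measure, which is the motivation for location-fuzzy comparison in distributed model evaluation.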
One type of internal diachronic change that has been extensively studied for spoken languages is grammaticalization, whereby lexical elements develop into free or bound grammatical elements. Based on a wealth of spoken languages, a large number of prototypical grammaticalization pathways have been identified. Moreover, it has been shown that desemanticization, decategorialization, and phonetic erosion are typical characteristics of grammaticalization processes. Not surprisingly, grammaticalization is also responsible for diachronic change in sign languages. Drawing on data from a fair number of sign languages, we show that grammaticalization in visual-gestural languages – as far as the development from lexical to grammatical elements is concerned – follows the same developmental pathways as in spoken languages. That is, the proposed pathways are modality-independent. Besides these intriguing parallels, however, sign languages have the possibility of developing grammatical markers from manual and non-manual co-speech gestures. We discuss various instances of grammaticalized gestures and also briefly address the issue of the modality-specificity of this phenomenon.
We analyze anaphoric phenomena in the context of building an input understanding component for a conversational system for tutoring mathematics. In this paper, we report the results of an analysis of two corpora of dialogs on mathematical theorem proving. We exemplify the anaphoric phenomena, identify factors relevant to anaphora resolution in our domain, and outline extensions to the input interpretation component to support it.
Claiming that cross-speaker "but" can signal correction in dialogue, we start by describing the types of corrections "but" can communicate, focusing on the Speech Act (SA) communicated in the previous turn and on the ways in which "but" can correct what is communicated. We ask whether "but" corrects the proposition, the direct SA or the discourse relation communicated in the previous turn, and also briefly address other relations signalled by cross-turn "but". After presenting a typology of the situations "but" can correct, we show how these corrections can be modelled in the Information State model of dialogue, motivating this work by showing how it can potentially be used to avoid misunderstandings. We conclude by showing how the model presented here updates beliefs in the Information State representation of the dialogue and can be used to facilitate response deliberation.
To characterise the habitat preferences of the ring ouzel (Turdus torquatus) and the blackbird (T. merula) in Switzerland, we adopt species distribution modelling and predict the species' spatial distributions. We model on two different scales to analyse to what extent downscaling leads to a different set of predictors best describing the realised habitat. While the models on the macroscale (grid of one square kilometre) cover the entire country, we select a set of smaller plots for modelling on the territory scale. Whereas ring ouzels occur only at altitudes above 1,000 m a.s.l., blackbirds occur from the lowlands up to the timber line. The altitudinal range overlap of the two species is up to 400 m. Although both species coexist on the macroscale, a direct niche overlap on the territory scale is rare. Small-scale differences in vegetation cover and structure seem to play a dominant role in habitat selection. On the macroscale, however, we observe a high dependency on climatic variables, mainly representing the altitudinal range and the related forest structure preferred by the two species. Applying the models to climate change scenarios, we predict a decline of suitable habitat for the ring ouzel with a simultaneous median altitudinal shift of +440 m by 2070. In contrast, the blackbird is predicted to benefit from higher temperatures and to expand its range to higher elevations.
This thesis studies strong, completely charged polyelectrolyte brushes. Extensive molecular dynamics simulations are performed on different polyelectrolyte brush systems using local compute servers and massively parallel supercomputers. The full Coulomb interaction of charged monomers, counterions, and salt ions is treated explicitly. The polymer chains are anchored by one of their ends to an uncharged planar surface and are treated under good solvent conditions. Monovalent salt ions (1:1 type) are modelled in the same way as counterions. The studies concentrate on three different brush systems at constant temperature and moderate Coulomb interaction strength (Bjerrum length equal to bond length). The first system consists of a single polyelectrolyte brush anchored to a plane with varying grafting density. Results show that the chains are extended up to about 2/3 of their contour length. The brush thickness grows slightly with increasing anchoring density. This slight dependence of the brush height on grafting density is in contrast to the well-known scaling result for the osmotic brush regime; the simulation result has therefore stimulated further development of theory as well as new experimental investigations on polyelectrolyte brushes. The observation can be understood on a semi-quantitative level using a simple scaling model that incorporates excluded volume effects in a free-volume formulation, in which an effective cross section, from which counterions are excluded, is assigned to the polymer chain. The resulting regime is called the nonlinear osmotic brush regime. Recently this regime was also observed in experiments. The second system studied consists of polyelectrolyte brushes with added salt in the nonlinear osmotic regime. Varying the salt is an important way to tune the structure and properties of polyelectrolytes. Further motivation comes from a theoretical scaling prediction by Pincus for the salt dependence of the brush thickness.
In the high-salt limit (salt concentration much larger than counterion concentration) the brush height is predicted to decrease with increasing external salt concentration, but with a relatively weak power law with exponent -1/3. There is some experimental and theoretical work that confirms this prediction, but other results are in contradiction. In such a situation simulations can be used to validate the theoretical prediction. The simulation results show that the brush thickness decreases with added salt, in quite good agreement with the scaling prediction by Pincus. The relation between the buffer concentration and the effective ionic strength inside the brush at varying salt concentration is of interest from both the theoretical and the experimental point of view. The simulation results show that the mobile ions (counterions as well as salt ions) are distributed inhomogeneously inside and outside the brush. To explain the relation between the internal ion concentration and the buffer concentration, a Donnan equilibrium approach is employed. Modifying the Donnan approach by taking into account the self-volume of the polyelectrolyte chains as indicated above, the simulation results can be explained using the same effective cross section for the polymer chains. The extended Donnan equilibrium relation represents an interesting theoretical prediction that should be checked against experimental data. The third system consists of two interacting polyelectrolyte brushes grafted to two parallel surfaces. The interactions between brushes are important, for instance, for the stabilization of dispersions against flocculation. In the simulations the pressure is evaluated as a function of the separation D between the two grafting planes. The pressure shows different regimes with decreasing separation, in qualitative agreement with experimental data.
At relatively weak compression the pressure obtained in the simulation follows the 1/D power law predicted by scaling theory. Beyond that, the present study supplies new insight into the interaction between polyelectrolyte brushes.
The salivary glands of the blowfly were injected with luminescent oxygen-sensitive microbeads, and the changes in oxygen content within individual gland tubules during hormone-induced secretory activity were quantified. The measurements are based on an upgraded phase-modulation technique, in which the phase shift of the sensor phosphorescence is determined independently of concentration and background signals. We show that the combination of a lock-in amplifier with a fluorescence microscope results in a convenient setup for measuring oxygen concentrations within living animal tissues at the cellular level.
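The principle behind phase-modulation oxygen sensing can be sketched with the standard single-exponential relations: the phase shift φ at modulation frequency f yields the luminescence lifetime via tan(φ) = 2πfτ, and oxygen quenching of τ follows a Stern-Volmer relation τ0/τ = 1 + Ksv·[O2]. The following is a generic illustration of these textbook relations under assumed example values, not the paper's actual calibration:

```python
# Sketch of the lifetime readout underlying phase-modulation sensing,
# assuming a single-exponential phosphorescence decay.
import math

def lifetime_from_phase(phi_rad, f_hz):
    """Lifetime tau from the phase shift (radians) at modulation frequency f."""
    return math.tan(phi_rad) / (2.0 * math.pi * f_hz)

def oxygen_from_lifetime(tau, tau0, ksv):
    """Oxygen level from the Stern-Volmer relation tau0/tau = 1 + Ksv*[O2]."""
    return (tau0 / tau - 1.0) / ksv

# Example with assumed values: a 45 degree phase shift measured at
# 5 kHz modulation frequency (tan(45 deg) = 1).
tau = lifetime_from_phase(math.radians(45.0), 5e3)
print(f"tau = {tau * 1e6:.1f} us")
```

Because the phase shift depends only on the lifetime, this readout is insensitive to bead concentration and background intensity, which is the key advantage stated in the abstract.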
We analyze the notions of monotonicity and complete monotonicity for Markov Chains in continuous-time, taking values in a finite partially ordered set. Similarly to what happens in discrete-time, the two notions are not equivalent. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete-time.
The paper presents an in-depth study of focus marking in Gùrùntùm, a West Chadic language spoken in Bauchi Province of Northern Nigeria. Focus in Gùrùntùm is marked morphologically by means of a focus marker a, which typically precedes the focus constituent. Even though the morphological focus-marking system of Gùrùntùm allows for many fine-grained distinctions in information structure (IS) in principle, the language is not entirely free of focus ambiguities, which arise as the result of conflicting IS and syntactic requirements that govern the placement of focus markers. We show that morphological focus marking with a applies across different types of focus, such as new-information, contrastive, selective and corrective focus, and that a does not have a second function as a perfectivity marker, as is assumed in the literature. In contrast, we show at the end of the paper that a can also function as a foregrounding device at the level of discourse structure.
Demonstratives, and in particular gestures that "only" accompany speech, are not a big issue in current theories of grammar. If we deal with gestures, fixing their function is one big problem; the other is how to integrate the representations originating from different channels and, ultimately, how to determine their composite meanings. The growing interest in multi-modal settings, computer simulations, human-machine interfaces and VR applications increases the need for theories of multimodal structures and events. In our workshop contribution we focus on the integration of multimodal contents and investigate different approaches dealing with this problem, such as Johnston et al. (1997) and Johnston (1998), Johnston and Bangalore (2000), Chierchia (1995), Asher (2005), and Rieser (2005).
Polyelectrolyte microcapsules containing stimuli-responsive polymers have potential applications in the fields of sensors or actuators, stimulable microcontainers and controlled drug delivery. Such capsules were prepared, with the focus on pH-sensitivity and carbohydrate-sensing. First, pH-responsive polyelectrolyte capsules were produced by means of electrostatic layer-by-layer assembly of oppositely charged weak polyelectrolytes onto colloidal templates that were subsequently removed. The capsules were composed of poly(allylamine hydrochloride) (PAH) and poly(methacrylic acid) (PMA) or poly(4-vinylpyridine) (P4VP) and PMA and varied considerably in their hydrophobicity and the influence of secondary interactions. These polymers were assembled onto CaCO3 and SiO2 particles with diameters of ~ 5 µm, and a new method for the removal of the silica template under mild conditions was proposed. The pH-dependent stability of PAH/PMA and P4VP/PMA capsules was studied by confocal laser scanning microscopy (CLSM). They were stable over a wide pH-range and exhibited a pronounced swelling at the edges of stability, which was attributed to uncompensated positive or negative charges within the multilayers. The swollen state could be stabilized when the electrostatic repulsion was counteracted by hydrogen-bonding, hydrophobic interactions or polymeric entanglement. This stabilization made it possible to reversibly swell and shrink the capsules by tuning the pH of the solution. The pH-dependent ionization degree of PMA was used to modulate the binding of calcium ions. In addition to the pH-sensitivity, the stability and the swelling degree of these capsules at a given pH could be modified, when the ionic strength of the medium was altered. The reversible swelling was accompanied by reversible permeability changes for low and high molecular weight substances. 
The permeability for glucose was evaluated by studying the time dependence of the buckling of the capsule walls in glucose solutions, and the reversible permeability modulation was used for the encapsulation of polymeric material. A theoretical model taking into account an osmotic expanding force and an elastic restoring force was proposed to explain the pH-dependent size changes of weak polyelectrolyte capsules. Second, sugar-sensitive multilayers were assembled using the reversible covalent ester formation between the polysaccharide mannan and phenylboronic acid moieties grafted onto poly(acrylic acid) (PAA). The resulting multilayer films were sensitive to several carbohydrates, showing the highest sensitivity to fructose. The response to carbohydrates resulted from the competitive binding of low molecular weight sugars and mannan to the boronic acid groups within the film, and was observed as a fast dissolution of the multilayers when they were brought into contact with a sugar-containing solution above a critical concentration. It was also possible to prepare carbohydrate-sensitive multilayer capsules, and their sugar-dependent stability was investigated by following the release of encapsulated rhodamine-labeled bovine serum albumin (TRITC-BSA).
Natural law
(2006)
This work concentrates on the requirements of the computational system of human language (HL), developing the idea that Natural Law (NL) applies to universal syntactic principles. The systems of efficient growth provide for the continuation of motion and maximal distance between the elements. The condition of maximization accounts for the properties of syntactic trees: binary branching, labeling, and the EPP. NL justifies the basic principle of organization in Merge: it provides a functional explanation of phase formation and thematic domains. In Optimality Theory (OT), it accounts for the selection of a particular word order in languages. A comprehensive and definitive understanding of the principles underlying the Minimalist Program (MP) will eventually lead to a more advanced design of OT.
Two examples of our biophotonic research utilizing nanoparticles are presented, namely laser-based fluoroimmuno analysis and in-vivo optical oxygen monitoring. Results of the work include significantly enhanced sensitivity of a homogeneous fluorescence immunoassay and markedly improved spatial resolution of oxygen gradients in root nodules of a legume species.
Nonaqueous synthesis of metal oxide nanoparticles and their assembly into mesoporous materials
(2006)
This thesis mainly consists of two parts: the synthesis of several kinds of technologically interesting crystalline metal oxide nanoparticles via a nonaqueous sol-gel process, and the formation of mesoporous metal oxides using some of these nanoparticles as building blocks via the evaporation-induced self-assembly (EISA) technique. In the first part, the experimental procedures and characterization results of the successful syntheses of crystalline tin oxide and tin-doped indium oxide (ITO) nanoparticles are reported. The SnO2 nanoparticles exhibit a monodisperse particle size (3.5 nm on average), high crystallinity and particularly high dispersibility in THF, which makes them an ideal particulate precursor for the formation of mesoporous SnO2. The ITO nanoparticles possess uniform particle morphology, a narrow particle size distribution (5-10 nm), high crystallinity as well as high electrical conductivity. The synthesis approaches and characterization of various mesoporous metal oxides, including TiO2, SnO2, a mixture of CeO2 and TiO2, and a mixture of BaTiO3 and SnO2, are reported in the second part of this thesis. Mesoporous TiO2 and SnO2 are presented as highlights of this part. Mesoporous TiO2 was produced both as films and as bulk material. In the case of mesoporous SnO2, the study focused on the high order of the porous structure. All these mesoporous metal oxides show high crystallinity, high surface area and rather monodisperse pore sizes, which demonstrates the validity of the EISA process and the use of preformed crystalline nanoparticles as nanobuilding blocks (NBBs) to produce mesoporous metal oxides.
The properties of a series of well-defined new surfactant oligomers (dimers to tetramers) were examined. From a molecular point of view, these oligomeric surfactants consist of simple monomeric cationic surfactant fragments coupled via the hydrophilic ammonium chloride head groups by spacer groups (differing in nature and length). Properties of these cationic surfactant oligomers in aqueous solution, such as solubility, micellization and surface activity, micellar size and aggregation number, were discussed with respect to the two new molecular variables introduced, i.e. the degree of oligomerization and the spacer group, in order to establish structure-property relationships. Increasing the degree of oligomerization results in a pronounced decrease of the critical micellization concentration (CMC). Both reduced spacer length and increased spacer hydrophobicity lead to a decrease of the CMC, but to a lesser extent. For these particular compounds, the micelles formed are relatively small and their aggregation number decreases with increasing degree of oligomerization, increasing spacer length and steric hindrance. In addition, pseudo-phase diagrams were established for the dimeric surfactants in more complex systems, namely inverse microemulsions, demonstrating again the important influence of the spacer group on the surfactant behaviour. Furthermore, the influence of additives on the property profile of the dimeric compounds was examined, in order to see whether the solution properties can be improved while using less material. Strong synergistic effects were observed upon adding particular organic salts (e.g. sodium salicylate, sodium vinyl benzoate, etc.) to the surfactant dimers in stoichiometric amounts. For such mixtures, the critical aggregation concentration is strongly shifted to lower concentrations, the effect being more pronounced for the dimers than for the analogous monomers. A sharp decrease of the surface tension can also be attained.
Many of the organic anions produce viscoelastic solutions when added to the relatively short-chain dimers in aqueous solution, as evidenced by rheological measurements. This behaviour reflects the formation of entangled wormlike micelles due to strong interactions of the anions with the cationic surfactants, which decrease the curvature of the micellar aggregates. The associative behaviour is found to be enhanced by dimerization. For a given counterion, the spacer group may also induce a stronger viscosifying effect, depending on its length and hydrophobicity. Oppositely charged surfactants were also combined with the cationic dimers. First, some mixtures with the conventional anionic surfactant SDS revealed vesicular aggregates in solution. In view of these catanionic mixtures, a novel anionic dimeric surfactant based on EDTA was also synthesized and studied. The synthesis route is relatively simple and the compound exhibits particularly appealing properties such as low CMC and σCMC values, good solubilization capacity for hydrophobic probes and high tolerance to hard water. Notably, mixtures with particular cationic dimers gave rise to viscous solutions, reflecting micellar growth.
This article presents an analysis of German nicht...sondern... (contrastive not...but...) which departs from the commonly held view that this construction should be explained by appeal to its alleged corrective function. It will be demonstrated that in nicht A sondern B (not A but B), A and B just behave like stand-alone unmarked answers to a common question Q, and that this property of sondern is presuppositional in character. It is shown that from this general observation many interesting properties of nicht...sondern... follow, among them distributional differences between German 'sondern' and German 'aber' (contrastive but, concessive but), intonational requirements and exhaustivity effects. sondern's presupposition is furthermore argued to be the result of the conventionalization of conversational implicatures.
We give a construction of an eigenstate for a non-critical level of the Hamiltonian function, and investigate the contribution of Morse critical points to the spectral decomposition. We compare the rigorous result with the series obtained by a perturbation theory. As an example the relation to the spectral asymptotics is discussed.
In the present work, phenomena in the ionosphere are studied that are connected with earthquakes (16 events) having a depth of less than 50 km and a magnitude M larger than 4. Night-time Es-spread effects are analysed using data of the vertical sounding station Petropavlovsk-Kamchatsky (φ=53.0°, λ=158.7°) from May 2004 until August 2004, registered every 15 minutes. It is found that the maximum distance of the earthquake from the sounding station at which pre-seismic phenomena are still observable depends on the magnitude of the earthquake. Further, it is shown that 1-2 days before the earthquakes, in the pre-midnight hours, the occurrence of Es-spread increases. The reliability of this increase amounts to 0.95.