For the first time, stabilizer-free vinylidene fluoride (VDF) polymerizations were carried out in homogeneous phase with supercritical CO₂. Polymerizations were carried out at 140 °C and 1500 bar and were initiated with di-tert-butyl peroxide (DTBP). In-line FT-NIR (Fourier-transform near-infrared) spectroscopy showed that complete monomer conversion may be obtained. Molecular weights were determined via size-exclusion chromatography (SEC) and polymer end groups were analyzed by ¹H-NMR spectroscopy. The number-average molecular weights were below 10⁴ g·mol⁻¹ and polydispersities ranged from 3.1 to 5.7, depending on DTBP and VDF concentration. To allow for isothermal reactions, high CO₂ contents ranging from 61 to 83 wt.% were used. The high-temperature, high-pressure conditions were required for homogeneous-phase polymerization. These conditions did not alter the amount of defects in the VDF chaining. Scanning electron microscopy (SEM) indicated that regular stack-type particles were obtained upon expansion of the homogeneous polymerization mixture. To reduce the required amount of initiator, further VDF polymerizations using chain transfer agents (CTAs) to control molecular weights were carried out in homogeneous phase with supercritical carbon dioxide (scCO₂) at 120 °C and 1500 bar. Using perfluorinated hexyl iodide as CTA, polymers of low polydispersity were obtained, decreasing from 1.5 to 1.2 at the highest iodide concentration of 0.25 mol·L⁻¹. Electrospray ionization mass spectrometry (ESI-MS) indicates the absence of initiator-derived end groups, supporting the livingness of the system, which is based on the labile C–I bond. However, due to the weakness of the C–I bond, perfluorinated hexyl iodide also contributes to initiation. To allow for kinetic analyses of VDF polymerizations, the CTA should not contribute to initiation. Therefore, additional CTAs were applied: BrCCl₃, C₆F₁₃Br and C₆F₁₃H. It was found that C₆F₁₃H does not contribute to initiation.
At 120 °C and 1500 bar, a coupled rate coefficient of kp/kt⁰·⁵ ≈ 0.64 (L·mol⁻¹·s⁻¹)⁰·⁵ was derived. The chain transfer constant (CT) at 120 °C was determined to be 8·10⁻¹, 9·10⁻² and 2·10⁻⁴ for C₆F₁₃I, C₆F₁₃Br and C₆F₁₃H, respectively. These CT values correlate with the bond energy of the C–X bond. Moreover, the labile C–I bond allows for functionalization of the polymer to triazole end groups via click reactions. After substitution of the iodide end group by an azide group, 1,3-dipolar cycloadditions with alkynes yield polymers with 1,2,3-triazole end groups. Using symmetrical alkynes, the reactions may be carried out in the absence of any catalyst. This end-functionalized poly(vinylidene fluoride) (PVDF) has a higher thermal stability than conventional PVDF. PVDF samples from homogeneous-phase polymerizations in supercritical CO₂ and subsequent expansion to ambient conditions were analyzed with respect to polymer end groups, crystallinity, type of polymorph and morphology. Upon expansion, the polymer was obtained as a white powder. Scanning electron microscopy (SEM) showed that DTBP-derived polymer end groups led to stack-type particles, whereas sponge- or rose-type particles were obtained in the case of CTA fragments as end groups. Fourier-transform infrared spectroscopy and wide-angle X-ray diffraction indicated that the type of polymorph (α or β crystal phase) was significantly affected by the type of end group. The content of β-phase material, which is responsible for the piezoelectricity of PVDF, is highest for polymer with DTBP-derived end groups. In addition, the crystallinity of the material, as determined via differential scanning calorimetry, is affected by the end groups and polymer molecular weights. For example, crystallinity ranges from around 26 % for DTBP-derived end groups to a maximum of 62 % for end groups originating from perfluorinated hexyl iodide for polymers with Mn ≈ 2200 g·mol⁻¹.
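Chain transfer constants of the kind quoted above are conventionally defined and extracted via the Mayo relation; a standard textbook form (not necessarily the exact evaluation procedure of this work) reads:

```latex
C_\mathrm{T} = \frac{k_\mathrm{tr}}{k_\mathrm{p}},
\qquad
\frac{1}{\overline{DP}_n} \;=\; \frac{1}{\overline{DP}_{n,0}} \;+\; C_\mathrm{T}\,\frac{[\mathrm{CTA}]}{[\mathrm{M}]},
```

where $\overline{DP}_n$ is the number-average degree of polymerization in the presence of the CTA, $\overline{DP}_{n,0}$ that in its absence, and $[\mathrm{CTA}]/[\mathrm{M}]$ the ratio of CTA to monomer concentration; plotting $1/\overline{DP}_n$ against $[\mathrm{CTA}]/[\mathrm{M}]$ yields $C_\mathrm{T}$ as the slope.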
Expansion of the homogeneous polymerization mixture results in particle formation by a non-optimized RESS (Rapid Expansion from Supercritical Solution) process. Thus, it was tested how polymer end groups affect the particle size distribution obtained from the RESS process under controlled conditions (T = 50 °C and P = 200 bar). In all RESS experiments, small primary PVDF particles with diameters below 100 nm were produced without the use of liquid solvents, surfactants, or other additives. A strong correlation of particle size and particle size distribution with the polymer end groups and the molecular weight of the original material was observed. The smallest particles were found for RESS of PVDF with Mn ≈ 4000 g·mol⁻¹ and PFHI (C₆F₁₃I)-derived end groups.
In biological cells, the long-range intracellular traffic is powered by molecular motors which transport various cargos along microtubule filaments. The microtubules possess an intrinsic direction, having a 'plus' and a 'minus' end. Some molecular motors such as cytoplasmic dynein walk to the minus end, while others such as conventional kinesin walk to the plus end. Cells typically have an isopolar microtubule network. This is most pronounced in neuronal axons or fungal hyphae. In these long and thin tubular protrusions, the microtubules are arranged parallel to the tube axis with the minus ends pointing to the cell body and the plus ends pointing to the tip. In such a tubular compartment, transport by only one motor type leads to 'motor traffic jams'. Kinesin-driven cargos accumulate at the tip, while dynein-driven cargos accumulate near the cell body. We identify the relevant length scales and characterize the jamming behaviour in these tube geometries by using both Monte Carlo simulations and analytical calculations. A possible solution to this jamming problem is to transport cargos with a team of plus and a team of minus motors simultaneously, so that they can travel bidirectionally, as observed in cells. The presumably simplest mechanism for such bidirectional transport is provided by a 'tug-of-war' between the two motor teams which is governed by mechanical motor interactions only. We develop a stochastic tug-of-war model and study it with numerical and analytical calculations. We find a surprisingly complex cooperative motility behaviour. We compare our results to the available experimental data, which we reproduce qualitatively and quantitatively.
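The 'motor traffic jam' in a tube can be illustrated with a minimal Monte Carlo sketch: a single-species totally asymmetric simple exclusion process (TASEP) with open boundaries, a standard simplification of the unidirectional-transport situation described above. The thesis model (microtubule tracks, motor teams, tube geometry) is richer; the lattice size and rates below are purely illustrative.

```python
import numpy as np

def tasep(L=100, alpha=0.2, beta=0.1, steps=1_000_000, seed=0):
    """Random-sequential-update TASEP on an open 1D lattice.

    Cargos enter at site 0 with rate alpha, hop one site to the right
    when the target site is empty, and are removed from site L-1 with
    rate beta.  A small beta mimics slow cargo removal at the tube tip.
    Returns the time-averaged occupation density of each site.
    """
    rng = np.random.default_rng(seed)
    sites = rng.integers(-1, L, size=steps)  # -1 encodes an entry attempt
    coins = rng.random(steps)
    occ = np.zeros(L, dtype=bool)
    density = np.zeros(L)
    n_samples = 0
    for t in range(steps):
        i = sites[t]
        if i == -1:                          # injection at the left end
            if not occ[0] and coins[t] < alpha:
                occ[0] = True
        elif i == L - 1:                     # extraction at the tip
            if occ[i] and coins[t] < beta:
                occ[i] = False
        elif occ[i] and not occ[i + 1]:      # bulk hop to the right
            occ[i], occ[i + 1] = False, True
        if t > steps // 2:                   # sample after relaxation
            density += occ
            n_samples += 1
    return density / n_samples

rho = tasep()
```

With beta < alpha and beta < 1/2 the system sits in the high-density ('jammed') phase: the stationary density is close to 1 − beta along most of the tube and drops only in a thin boundary layer at the entrance, the lattice analogue of cargos piling up at the tip.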
This PhD thesis presents the spatio-temporal distribution of terrestrial carbon fluxes for the period 1982 to 2002, simulated by a combination of the process-based dynamic global vegetation model LPJ and a 21-year time series of global AVHRR fPAR data (fPAR: fraction of photosynthetically active radiation). Assimilation of the satellite data into the model allows improved simulations of carbon fluxes on global as well as regional scales. As it is based on observed data and includes agricultural regions, the model combined with satellite data produces more realistic carbon fluxes of net primary production (NPP), soil respiration, carbon released by fire and the net land-atmosphere flux than the potential vegetation model. It also produces a good fit to the interannual variability of the CO2 growth rate. Compared to the original model, the satellite-constrained model produces generally smaller carbon fluxes than the purely climate-based stand-alone simulation of potential natural vegetation, now comparing better to literature estimates. The lower net fluxes result from a combination of several effects: reduction in vegetation cover, consideration of human influence and agricultural areas, an improved seasonality, and changes in vegetation distribution and species composition. This study presents a way to assess terrestrial carbon fluxes and elucidates the processes contributing to the interannual variability of terrestrial carbon exchange. Process-based terrestrial modelling and satellite-observed vegetation data are successfully combined to improve estimates of vegetation carbon fluxes and stocks. As net ecosystem exchange is the most interesting and most sensitive factor in carbon cycle modelling, and highly uncertain, the presented results contribute complementary knowledge that supports the understanding of the terrestrial carbon budget.
Stellar winds play an important role in the evolution of massive stars and their cosmic environment. Multiple lines of evidence, coming from spectroscopy, polarimetry, variability, stellar ejecta, and hydrodynamic modeling, suggest that stellar winds are non-stationary and inhomogeneous. This is referred to as 'wind clumping'. The urgent need to understand this phenomenon is boosted by its far-reaching implications. Most importantly, all techniques to derive empirical mass-loss rates are more or less corrupted by wind clumping. Consequently, mass-loss rates are extremely uncertain, and within their range of uncertainty completely different scenarios for the evolution of massive stars are obtained. Settling these questions for Galactic OB, LBV and Wolf-Rayet stars is a prerequisite for understanding stellar clusters and galaxies, or for predicting the properties of first-generation stars. In order to develop a consistent picture and understanding of clumped stellar winds, an international workshop on 'Clumping in Hot Star Winds' was held in Potsdam, Germany, from 18 to 22 June 2007. About 60 participants, comprising almost all leading experts in the field, gathered for one week of extensive exchange and discussion. The Scientific Organizing Committee (SOC) included John Brown (Glasgow), Joseph Cassinelli (Madison), Paul Crowther (Sheffield), Alex Fullerton (Baltimore), Wolf-Rainer Hamann (Potsdam, chair), Anthony Moffat (Montreal), Stan Owocki (Newark), and Joachim Puls (Munich). These proceedings contain the invited and contributed talks presented at the workshop and document the extensive discussions.
Today about 24 million people worldwide suffer from dementia; Alzheimer's disease accounts for approximately 50-60% of all dementia cases. As the prevalence of dementia grows with increasing age, Alzheimer's disease becomes more and more of an issue for society as the proportion of elderly people increases from year to year. It is well established that the amino acid glutamate - quantitatively the most important neurotransmitter in the central nervous system (CNS) - may reach toxic concentrations if not cleared from the synaptic cleft into which it is released during the transmission of action potentials. In Alzheimer's disease there is strong evidence for a generally impaired glutamate uptake system, which in turn is thought to result in toxic levels of the amino acid with the potential to kill off neurons. The excitatory amino acid transporter 1 (EAAT1) belongs to the family of Na+-dependent glutamate transporters and, together with EAAT2, accounts for most of the glutamate uptake in the CNS. In this project a new splice variant of EAAT1 skipping exon 3 was detected in human brain samples and subsequently called EAAT1Δ3, this being the second splice variant found after the recent detection of EAAT1Δ9. A method was developed to quantify the transcripts of EAAT1 wt, EAAT1Δ3 and EAAT1Δ9 by means of real-time PCR. Samples were taken from different brain areas of a set of control and AD cases. The areas chosen for examination are affected differently in Alzheimer's disease; this was used as an internal control in this project, to determine whether any observed effect is specific to AD-affected areas or is seen generally in all areas examined. The results of this project show that EAAT1Δ3 is transcribed in very low copy numbers, making up a proportion of 0.15% of EAAT1 wt, whereas EAAT1Δ9 is transcribed in a considerably larger proportion of 26.6% of EAAT1 wt.
It was moreover found that all EAAT1 variants are transcribed at significantly lower rates (P < 0.0001) in AD cases, supporting the theory that EAAT1 protein expression is reduced to a point where the glutamate uptake normally mediated by this transporter is impaired. This in turn is thought to result in toxic levels of glutamate, accounting for the neuronal loss in the disease. No area-dependent effects were found, suggesting that the reduction of EAAT1 transcription is rather the result of an underlying general mechanism present in AD. Further research will have to be done to assess the degree of EAAT1 expression in AD and whether those future findings match the results of this project.
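Relative transcript levels of this kind are commonly obtained from real-time PCR threshold cycles (Ct) by assuming exponential amplification. The following sketch uses the standard ΔCt approach; the Ct values are hypothetical, chosen only so that the ratios land near the reported proportions, and are not data from this project.

```python
def relative_abundance(ct_variant, ct_reference, efficiency=2.0):
    """Relative transcript abundance from real-time PCR threshold cycles.

    Assumes the template is multiplied by `efficiency` each cycle
    (2.0 = perfect doubling); a later Ct means exponentially less
    starting template.  Returns the variant level as a fraction of
    the reference (wild-type) level.
    """
    return efficiency ** (ct_reference - ct_variant)

# Hypothetical Ct values, picked so the ratios are close to the
# reported proportions of ~0.15% (EAAT1Δ3) and ~26.6% (EAAT1Δ9):
wt_ct = 22.0
d3_fraction = relative_abundance(wt_ct + 9.38, wt_ct)  # ~0.0015
d9_fraction = relative_abundance(wt_ct + 1.91, wt_ct)  # ~0.266
```

In practice the ΔΔCt variant of this calculation additionally normalizes each sample against a housekeeping gene before comparing groups.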
The Arctic plays a key role in Earth's climate system, as global warming is predicted to be most pronounced at high latitudes and because one third of the global carbon pool is stored in ecosystems of the northern latitudes. In order to improve our understanding of the present and future carbon dynamics in climate-sensitive permafrost ecosystems, the present study concentrates on investigations of the microbial controls of methane fluxes, on the activity and structure of the involved microbial communities, and on their response to changing environmental conditions. For this purpose an integrated research strategy was applied, which connects trace gas flux measurements to the soil ecological characterisation of permafrost habitats and to molecular ecological analyses of microbial populations. Furthermore, methanogenic archaea isolated from Siberian permafrost have been used as potential keystone organisms for studying and assessing life under extreme living conditions. Long-term studies on methane fluxes have been carried out since 1998. These studies revealed considerable seasonal and spatial variations of methane emissions for the different landscape units, ranging from 0 to 362 mg m⁻² d⁻¹. For the overall balance of methane emissions from the entire delta, the first land cover classification based on Landsat images was performed and applied for an upscaling of the methane flux data sets. The regionally weighted mean daily methane emission of the Lena Delta (10 mg m⁻² d⁻¹) is only one fifth of the values calculated for other Arctic tundra environments. The calculated annual methane emission of the Lena Delta amounts to about 0.03 Tg. The low methane emission rates obtained in this study result from the high-resolution remote sensing data basis used, which provides a more realistic estimate of the real methane emissions on a regional scale. Soil temperature and near-surface atmospheric turbulence were identified as the driving parameters of methane emissions.
A flux model based on these variables, corresponding to the continuous processes of microbial methane production and oxidation and of gas diffusion through soil and plants, explained the variations of the methane budget reasonably well. The results show that the Lena Delta contributes significantly to the global methane balance because of its extensive wetland areas. The microbiological investigations showed that permafrost soils are colonized by high numbers of microorganisms; the total biomass is comparable to that of temperate soil ecosystems. The activities of methanogens and methanotrophs differed significantly in their rates and distribution patterns, both along the vertical profiles and between the investigated soils. The methane production rates varied between 0.3 and 38.9 nmol h⁻¹ g⁻¹, while the methane oxidation rates ranged from 0.2 to 7.0 nmol h⁻¹ g⁻¹. Phylogenetic analyses of the methanogenic communities revealed a distinct diversity of methanogens affiliated with the Methanomicrobiaceae, Methanosarcinaceae and Methanosaetaceae, which partly form four specific permafrost clusters. The results demonstrate the close relationship between methane fluxes and the fundamental microbiological processes in permafrost soils. The microorganisms not only survive in their extreme habitat but can also be metabolically active under in situ conditions. It was shown that a slight increase in temperature can lead to a substantial increase in methanogenic activity within perennially frozen deposits. In the case of permafrost degradation, this would lead to an extensive expansion of methane release from these deposits, with subsequent impacts on the total methane budget. Further studies on the stress response of methanogenic archaea, especially Methanosarcina SMA-21, isolated from Siberian permafrost, revealed an unexpected resistance of the microorganisms against unfavourable living conditions. A better adaptation to environmental stress was observed at 4 °C compared to 28 °C.
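The regional upscaling step, an area-weighted mean of per-class fluxes over the land cover classification, can be sketched as follows. The class names, areas, fluxes, and the 100-day emission season are illustrative assumptions for the sketch, not the actual Lena Delta classification values.

```python
# Area-weighted upscaling of per-class methane fluxes to a regional
# mean, analogous to the Landsat-based land cover upscaling described
# above.  All numbers below are illustrative placeholders.
classes = {
    # name: (area in km^2, mean flux in mg CH4 m^-2 d^-1)
    "wet polygonal tundra": (8_000, 30.0),
    "dry tundra":           (15_000, 2.0),
    "water bodies":         (6_000, 5.0),
}

total_area_km2 = sum(area for area, _ in classes.values())
regional_flux = sum(area * flux for area, flux in classes.values()) / total_area_km2

# Annual emission in Tg CH4: flux (mg m^-2 d^-1) x area (m^2) x
# emission-season length (d), converted from mg to Tg (1 Tg = 10^15 mg).
# The 100-day thaw season is an assumed value for illustration.
season_days = 100
annual_tg = regional_flux * total_area_km2 * 1e6 * season_days / 1e15
```

With these placeholder numbers the regional mean comes out near 10 mg m⁻² d⁻¹ and the seasonal total near 0.03 Tg, the same order of magnitude as the values reported above, but that agreement is incidental to the illustration.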
For the first time it could be demonstrated that methanogenic archaea from terrestrial permafrost even survived simulated Martian conditions. The results show that permafrost methanogens are more resistant than methanogens from non-permafrost environments under Mars-like climate conditions. Microorganisms comparable to methanogens from terrestrial permafrost can be seen as one of the most likely candidates for life on Mars due to their physiological potential and metabolic specificity.
As sessile organisms unable to adjust their location to changing environmental conditions, plants have evolved homeostatic networks that maintain transition metal levels in a very narrow concentration range in order to avoid either deficiency or toxicity. Hence, plants possess a broad repertoire of mechanisms for the cellular uptake, compartmentation and efflux of transition metal ions, as well as for their chelation. A small number of plants are hypertolerant to one or a few specific transition metals. Some metal-tolerant plants are also able to hyperaccumulate metal ions. The Brassicaceae family member Arabidopsis halleri ssp. halleri (L.) O'Kane and Al-Shehbaz is a hyperaccumulator of zinc (Zn), and it is closely related to the non-hypertolerant and non-hyperaccumulating model plant Arabidopsis thaliana (L.) Heynhold. This close relationship renders A. halleri a promising emerging model plant for the comparative investigation of the molecular mechanisms behind hypertolerance and hyperaccumulation. Among several potential candidate genes that are probably involved in mediating the zinc-hypertolerant and zinc-hyperaccumulating trait is AhHMA3. The AhHMA3 gene is highly similar to AtHMA3 (AGI number: At4g30120) in A. thaliana, and its encoded protein belongs to the P-type IB ATPase family of integral membrane transporter proteins that transport transition metals. In contrast to the low AtHMA3 transcript levels in A. thaliana, the gene was found to be constitutively highly expressed across different Zn treatments in A. halleri, especially in shoots. In this study, the cloning and characterisation of the HMA3 gene and its promoter from Arabidopsis halleri and Arabidopsis thaliana is described. Heterologously expressed AhHMA3 mediated enhanced tolerance to Zn, and to a much lesser degree to cadmium (Cd) but not to cobalt (Co), in metal-sensitive mutant strains of budding yeast. It is demonstrated that the genome of A.
halleri contains at least four copies of AhHMA3, AhHMA3-1 to AhHMA3-4. Copy-specific real-time RT-PCR indicated that an AhHMA3-1-related gene copy, and not a gene copy similar to AhHMA3-2 or AhHMA3-4, is the source of the constitutively high transcript level in A. halleri. In accordance with the enhanced AtHMA3 mRNA transcript level in A. thaliana roots, an AtHMA3 promoter-GUS construct mediated GUS activity predominantly in the vascular tissues of roots and not in shoots. However, the observed AhHMA3-1 and AhHMA3-2 promoter-mediated GUS activity in A. thaliana or A. halleri plants did not reflect the constitutively high expression of AhHMA3 in shoots of A. halleri. It is suggested that other factors, e.g. characteristic sequence inserts within the first intron of AhHMA3-1, might enable a constitutively high expression. Moreover, the unknown promoter of the AhHMA3-3 gene copy could be the source of the constitutively high AhHMA3 transcript levels in A. halleri; in that case, the AhHMA3-3 sequence is predicted to be highly homologous to AhHMA3-1. The lack of solid localisation data for the AhHMA3 protein prevents a clear functional assignment. The data provided suggest several possible functions of the AhHMA3 protein: like AtHMA2 and AtHMA4, it might be localised to the plasma membrane and could contribute to the efficient translocation of Zn from root to shoot and/or to the cell-to-cell distribution of Zn in the shoot. If localised to the vacuolar membrane, a role in maintaining a low cytoplasmic zinc concentration by vacuolar zinc sequestration is possible. In addition, AhHMA3 might be involved in the delivery of zinc ions to trichomes and mesophyll leaf cells, which are major zinc storage sites in A. halleri.
The aim of this work was the generation of carbon materials with high surface area exhibiting a hierarchical pore system in the macro- and mesorange (macropores are pores with diameters > 50 nm, mesopores between 2 and 50 nm). Such a pore system facilitates transport through the material and enhances the interaction with the carbon matrix. To this end, new strategies were developed for the synthesis of novel carbon materials with designed porosity that are particularly useful for the storage of energy. Besides the porosity, it is the graphene structure itself that determines the properties of a carbon material. Non-graphitic carbon materials usually exhibit a rather large degree of disorder with many defects in the graphene structure, and thus exhibit inherent microporosity (d < 2 nm). These pores act as traps and oppose reversible interaction with the carbon matrix; furthermore, they reduce the stability and conductivity of the carbon material, which is undesirable for the proposed applications. As one part of this work, the graphene structures of different non-graphitic carbon materials were studied in detail using a novel wide-angle X-ray scattering model that provided precise information about the nature of the carbon building units (graphene stacks). Different carbon precursors were evaluated regarding their potential use for the syntheses shown in this work; mesophase pitch proved advantageous when a less disordered carbon microstructure is desired. Using mesophase pitch as carbon precursor, two templating strategies were developed based on the nanocasting approach. The synthesized (monolithic) materials combined for the first time the advantages of a hierarchical interconnected pore system in the macro- and mesorange with the advantages of mesophase pitch as carbon precursor. In the first case, hierarchical macro-/mesoporous carbon monoliths were synthesized by replication of hard (silica) templates.
Thus, a suitable synthesis procedure was developed that allowed the infiltration of the template with the poorly soluble carbon precursor. In the second case, hierarchical macro-/mesoporous carbon materials were synthesized by a novel soft-templating technique that takes advantage of the phase separation (spinodal decomposition) between mesophase pitch and polystyrene. This synthesis also allowed the generation of monolithic samples and the incorporation of functional nanoparticles into the material. The synthesized materials showed excellent properties as an anode material in lithium batteries and as a support material for supercapacitors.
A numerical bifurcation analysis of the electrically driven plane sheet pinch is presented. The electrical conductivity varies across the sheet so as to allow instability of the quiescent basic state at some critical Hartmann number. The most unstable perturbation is the two-dimensional tearing mode. Restricting the whole problem to two spatial dimensions, this mode is followed up to a time-asymptotic steady state, which proves to be sensitive to three-dimensional perturbations even close to the point where the primary instability sets in. A comprehensive three-dimensional stability analysis of the two-dimensional steady tearing-mode state is performed by varying the parameters of the sheet pinch. The instability with respect to three-dimensional perturbations is suppressed by a sufficiently strong magnetic field in the invariant direction of the equilibrium. For a special choice of the system parameters, the unstably perturbed state is followed up in its nonlinear evolution and is found to approach a three-dimensional steady state.
We investigate numerically the appearance of heteroclinic behavior in a three-dimensional, buoyancy-driven fluid layer with stress-free top and bottom boundaries, a square horizontal periodicity with a small aspect ratio, and rotation at low to moderate rates about a vertical axis. The Prandtl number is 6.8. If the rotation is not too slow, the skewed-varicose instability leads from stationary rolls to a stationary mixed-mode solution, which in turn loses stability to a heteroclinic cycle formed by unstable roll states and connections between them. The unstable eigenvectors of these roll states are also of the skewed-varicose or mixed-mode type, and in some parameter regions skewed-varicose-like shearing oscillations as well as square patterns are involved in the cycle. Weak noise, which is always present, leads to irregular horizontal translations of the convection pattern and makes the dynamics chaotic, which is verified by calculating Lyapunov exponents. In the nonrotating case, the primary rolls lose stability, depending on the aspect ratio, to traveling waves or a stationary square pattern. We also study the symmetries of the solutions at the intermittent fixed points in the heteroclinic cycle.
Our dynamic Sun manifests its activity in different phenomena: from the 11-year cyclic sunspot pattern to unpredictable and violent explosions in the form of solar flares. During flares, a huge amount of the stored magnetic energy is suddenly released, and a substantial part of this energy is carried by energetic electrons, considered to be the source of the nonthermal radio and X-ray radiation. One of the most important and still open questions in solar physics is how the electrons are accelerated to high energies within the short time scales observed in the radio emission. Because the acceleration site is also extremely small in spatial extent (compared to the solar radius), electron acceleration is regarded as a local process. The aim of the dissertation is the search for localized wave structures in the solar corona that are able to accelerate electrons, together with the theoretical and numerical description of the conditions and requirements for this process. Two models of electron acceleration in the solar corona are proposed in the dissertation: I. Electron acceleration due to the interaction of solar jets with the background coronal plasma (the jet-plasma interaction). A jet is formed when newly reconnected and highly curved magnetic field lines relax by shooting plasma away from the reconnection site. Such jets, as observed in soft X-rays with the Yohkoh satellite, are spatially and temporally associated with beams of nonthermal electrons (in terms of the so-called type III metric radio bursts) propagating through the corona. A model that attempts to explain these observational facts is developed here. Initially, the interaction of such jets with the background plasma leads to an (ion-acoustic) instability associated with electrostatic fluctuations growing in time for a certain range of initial jet velocities.
During this process, any test electron that happens to feel this electrostatic wave field is drawn to co-move with the wave, gaining energy from it. When the jet speed is greater or lower than that required by the instability range, such wave excitation cannot be sustained and the process of electron energization (acceleration and/or heating) ceases. Hence, the electrons can propagate further in the corona and be detected as a type III radio burst, for example. II. Electron acceleration due to attached whistler waves in the upstream region of coronal shocks (the electron-whistler-shock interaction). Coronal shocks are also able to accelerate electrons, as observed in terms of the so-called type II metric radio bursts (the radio signature of a shock wave in the corona). From in-situ observations in space, e.g., at shocks related to co-rotating interaction regions, it is known that nonthermal electrons are produced preferentially at shocks with attached whistler wave packets in their upstream regions. Motivated by these observations, and assuming that the physical processes at shocks are the same in the corona as in the interplanetary medium, a new model of electron acceleration at coronal shocks is presented in the dissertation, in which the electrons are accelerated by their interaction with such whistlers. The protons flowing in toward the shock are reflected there while nearly conserving their magnetic moment, so that they gain substantial velocity in the case of a quasi-perpendicular shock geometry, i.e., when the angle between the shock normal and the upstream magnetic field is in the range 50-80 degrees. The so-accelerated protons are able to excite whistler waves in a certain frequency range in the upstream region. When these whistlers (comprising the localized wave structure in this case) are formed, only the incoming electrons are able to interact resonantly with them.
However, only a part of these electrons fulfill the electron-whistler wave resonance condition. Due to this resonant interaction, the electrons are accelerated in the electric and magnetic wave fields within just several whistler periods. While gaining energy from the whistler wave field, the electrons reach the shock front and, subsequently, a major part of them is reflected back into the upstream region, since the shock, accompanied by a jump of the magnetic field, acts as a magnetic mirror. Co-moving with the whistlers now, the reflected electrons are out of resonance and hence can propagate undisturbed into the far upstream region, where they are detected in terms of type II metric radio bursts. In summary, in both cases the kinetic energy of protons is transferred to electrons by the action of localized wave structures, i.e., at jets outflowing from the magnetic reconnection site and at shock waves in the corona.
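The resonance condition referred to above has, in its standard first-order cyclotron form for electrons and parallel-propagating whistlers (nonrelativistic limit; the exact form used in the dissertation may differ), the shape:

```latex
\omega - k_\parallel v_\parallel = \Omega_{ce},
```

where $\omega$ and $k_\parallel$ are the whistler frequency and parallel wavenumber, $v_\parallel$ the electron velocity along the magnetic field, and $\Omega_{ce}$ the electron gyrofrequency. Since $\omega < \Omega_{ce}$ for whistlers, only electrons with a suitable $v_\parallel$ directed against the wave propagation satisfy the condition, which is why only a part of the incoming electron population is energized.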
In this thesis we mainly generalize two theorems from Mackaay-Picken and Picken (2002, 2004). In the first paper, Mackaay and Picken show that there is a bijective correspondence between Deligne 2-classes $\xi \in \check{H}^2(M,\mathcal{D}^2)$ and holonomy maps from the second thin-homotopy group $\pi_2^2(M)$ to $U(1)$. In the second one, a generalization of this theorem to manifolds with boundaries is given: Picken shows that there is a bijection between Deligne 2-cocycles and a certain variant of 2-dimensional topological quantum field theories. In this thesis we show that these two theorems hold in every dimension. We consider first the holonomy case, and by using simplicial methods we prove that the group of smooth Deligne $d$-classes is isomorphic to the group of smooth holonomy maps from the $d$-th thin-homotopy group $\pi_d^d(M)$ to $U(1)$, if $M$ is $(d-1)$-connected. We contrast this with a result of Gajer (1999). Gajer showed that Deligne $d$-classes can be reconstructed from a different class of holonomy maps, which include not only holonomies along spheres, but also along general $d$-manifolds in $M$. This approach does not require the manifold $M$ to be $(d-1)$-connected. We show that in the case of flat Deligne $d$-classes our result differs from Gajer's if $M$ is not $(d-1)$-connected, but only $(d-2)$-connected. Stiefel manifolds have this property, and if one applies our theorem to these and compares the result with that of Gajer's theorem, it is revealed that our theorem reconstructs too many Deligne classes. This means that our reconstruction theorem cannot do without the extra assumption on the manifold $M$: our reconstruction needs less information about the holonomy of $d$-manifolds in $M$, at the price of assuming $M$ to be $(d-1)$-connected.
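In symbols, the generalized reconstruction theorem states (writing $\mathrm{Hom}^\infty$ for the group of smooth holonomy maps; this shorthand is ours, not notation from the thesis):

```latex
\check{H}^d(M,\mathcal{D}^d) \;\cong\; \mathrm{Hom}^\infty\!\left(\pi_d^d(M),\,U(1)\right)
\qquad \text{for } M \ (d-1)\text{-connected},
```

which for $d = 2$ recovers the Mackaay-Picken correspondence between Deligne 2-classes and holonomy maps on $\pi_2^2(M)$.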
We go on to show that the second theorem can also be generalized: By introducing the concept of Picken-type topological quantum field theory in arbitrary dimensions, we can show that every Deligne $d$-cocycle induces such a $d$-dimensional field theory with two special properties, namely thin-invariance and smoothness. We show that any $d$-dimensional topological quantum field theory with these two properties gives rise to a Deligne $d$-cocycle and verify that this construction is both surjective and injective, i.e., the two groups are isomorphic.
The Voyager 2 Photopolarimeter experiment has yielded the highest-resolution data of Saturn's rings, exhibiting a wide variety of features. The B-ring region between 105000 km and 110000 km distance from Saturn has been investigated. It has a high matter density and contains no significant features visible by eye. Analysis with statistical methods has led us to the detection of two significant events. These features are correlated with the inner 3:2 resonances of the F-ring shepherd satellites Pandora and Prometheus, and may be evidence of large ring particles caught in the corotation resonances.
It is desirable to reduce the potential threats that result from the variability of nature, such as droughts or heat waves that lead to food shortages, or, at the other extreme, floods that lead to severe damage. To prevent such catastrophic events, it is necessary to understand, and to be capable of characterising, nature's variability. Typically one aims to describe the underlying dynamics of geophysical records with differential equations. There are, however, situations where this does not support the objectives, or is not feasible, e.g., when little is known about the system, or it is too complex for the model parameters to be identified. In such situations it is beneficial to regard certain influences as random, and describe them with stochastic processes. In this thesis I focus on such a description with linear stochastic processes of the FARIMA type and concentrate on the detection of long-range dependence. Long-range dependent processes show an algebraic (i.e. slow) decay of the autocorrelation function. Detection of the latter is important with respect to, e.g., trend tests and uncertainty analysis. Aiming to provide a reliable and powerful strategy for the detection of long-range dependence, I suggest a way of addressing the problem which differs somewhat from standard approaches. Commonly used methods are based either on investigating the asymptotic behaviour (e.g., log-periodogram regression), or on finding a suitable, potentially long-range dependent model (e.g., FARIMA[p,d,q]) and testing the fractional difference parameter d for compatibility with zero. Here, I suggest rephrasing the problem as a model selection task, i.e., comparing the most suitable long-range dependent and the most suitable short-range dependent model.
Approaching the task this way requires a) a suitable class of long-range and short-range dependent models along with suitable means for parameter estimation and b) a reliable model selection strategy, capable of discriminating also between non-nested models. The flexible FARIMA model class together with the Whittle estimator fulfils the first requirement. Standard model selection strategies, e.g., the likelihood-ratio test, are frequently not powerful enough for a comparison of non-nested models. Thus, I suggest extending this strategy with a simulation-based model selection approach suitable for such a direct comparison. The approach follows the procedure of a statistical test, with the likelihood ratio as the test statistic. Its distribution is obtained via simulations using the two models under consideration. For two simple models and different parameter values, I investigate the reliability of p-value and power estimates obtained from the simulated distributions. The result turned out to depend on the model parameters. However, in many cases the estimates allow an adequate model selection to be established. An important feature of this approach is that it immediately reveals the ability or inability to discriminate between the two models under consideration. Two applications, a trend detection problem in temperature records and an uncertainty analysis for flood return level estimation, accentuate the importance of having reliable methods at hand for the detection of long-range dependence. In the case of trend detection, falsely concluding long-range dependence implies an underestimation of a trend and possibly delays measures needed to counteract it. Ignoring long-range dependence, although present, leads to an underestimation of confidence intervals and thus to an unjustified belief in safety, as is the case for the return level uncertainty analysis.
A reliable detection of long-range dependence is thus highly relevant in practical applications. Examples related to extreme value analysis are not limited to hydrological applications. The increased uncertainty of return level estimates is a potential problem for all records from autocorrelated processes; an interesting example in this respect is the assessment of the maximum strength of wind gusts, which is important for designing wind turbines. The detection of long-range dependence is also a relevant problem in the exploration of financial market volatility. By rephrasing the detection problem as a model selection task and suggesting refined methods for model comparison, this thesis contributes to the discussion on and development of methods for the detection of long-range dependence.
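The simulation-based likelihood-ratio procedure summarized in the abstract above can be sketched in a few lines. The following Python toy is not the thesis implementation (which compares FARIMA models fitted with the Whittle estimator); as a stand-in, white noise plays the null model and an AR(1) fit the alternative (these two happen to be nested, unlike the general case), and all function names and the rough Gaussian likelihoods are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_white(x):
    # Gaussian white-noise fit: variance estimate and profiled log-likelihood.
    s2 = x.var()
    ll = -0.5 * len(x) * (np.log(2 * np.pi * s2) + 1)
    return (s2,), ll

def fit_ar1(x):
    # Rough conditional AR(1) fit via the lag-1 sample autocorrelation.
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]
    resid = x[1:] - phi * x[:-1]
    s2 = resid.var()
    ll = -0.5 * len(resid) * (np.log(2 * np.pi * s2) + 1)
    return (phi, s2), ll

def simulate_ar1(phi, s2, n, rng):
    x = np.zeros(n)
    e = rng.normal(0.0, np.sqrt(s2), n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def lr_pvalue(x, n_sim=200):
    # Observed likelihood ratio of the two candidate fits ...
    _, ll0 = fit_white(x)
    _, ll1 = fit_ar1(x)
    lr_obs = 2 * (ll1 - ll0)
    # ... compared against its distribution simulated under the fitted null.
    s2 = x.var()
    lr_sim = []
    for _ in range(n_sim):
        y = rng.normal(0.0, np.sqrt(s2), len(x))
        lr_sim.append(2 * (fit_ar1(y)[1] - fit_white(y)[1]))
    return float(np.mean(np.array(lr_sim) >= lr_obs))

x = simulate_ar1(0.6, 1.0, 500, rng)
p = lr_pvalue(x)  # small p: the short-range null is rejected
```

The key point of the approach carries over to the non-nested setting: the null distribution of the likelihood ratio is generated by simulating from the fitted null model and refitting both candidates, rather than relying on an asymptotic chi-squared law.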
In the modern industrialized countries, several hundred thousand people die every year of sudden cardiac death. The individual risk of sudden cardiac death cannot be defined precisely by commonly available, non-invasive diagnostic tools like Holter monitoring, highly amplified ECG and traditional linear analysis of heart rate variability (HRV). Therefore, we apply some rather unconventional methods of nonlinear dynamics to analyse the HRV. In particular, some complexity measures based on symbolic dynamics, as well as a new measure, the renormalized entropy, detect abnormalities in the HRV of several patients who had been classified in the low-risk group by traditional methods. A combination of these complexity measures with the parameters in the frequency domain seems to be a promising way to obtain a more precise definition of the individual risk. These findings have to be validated in a representative number of patients.
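To illustrate how a symbolic-dynamics complexity measure can be computed from an RR-interval series, here is a minimal Python sketch; the three-symbol alphabet, the 2% threshold, the word length and the toy series are arbitrary illustrative choices, not the encoding used in the study above:

```python
import math
from collections import Counter

def symbolize(rr, eps=0.02):
    """Map successive relative RR-interval changes to symbols:
    0 = roughly unchanged, 1 = increase, 2 = decrease."""
    syms = []
    for a, b in zip(rr, rr[1:]):
        d = (b - a) / a
        syms.append(0 if abs(d) < eps else (1 if d > 0 else 2))
    return syms

def word_entropy(syms, k=3):
    """Shannon entropy (bits) of overlapping length-k symbol words."""
    words = [tuple(syms[i:i + k]) for i in range(len(syms) - k + 1)]
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy RR series (seconds): a rigid alternation vs. an irregular rhythm.
regular = [0.8, 0.82] * 50
irregular = [0.8, 0.95, 0.78, 1.02, 0.85, 0.7, 0.99, 0.81] * 13
```

A rigid, periodic rhythm concentrates the word distribution on a few patterns (low entropy), while a variable rhythm spreads it over many words (higher entropy); the complexity measures of the study refine this basic idea.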
We have used techniques of nonlinear dynamics to compare a specific model for the reversals of the Earth's magnetic field with the observational data. Although this model is rather simple, it shows no essential difference from the data in terms of well-known characteristics such as the correlation function and the probability distribution. Applying methods of symbolic dynamics, however, we have found that the considered model is not able to describe the dynamical properties of the observed process. These significant differences are expressed by algorithmic complexity and Rényi information.
The innovation of information technology has changed many aspects of our life. In the health care field, we can obtain, manage and communicate high-quality, large volumetric image data with computer-integrated devices to support medical care. In this dissertation I propose several promising methods that could assist physicians in processing, observing and communicating such image data. They fall under my three research aspects: telemedicine integration, medical image visualization and image segmentation, and they are demonstrated in demo software that I developed. One of my research points focuses on medical information storage standards in telemedicine, in particular DICOM, the predominant standard for the storage and communication of medical images. I propose a novel 3D image data storage method, which is lacking in the current DICOM standard. I also created a mechanism to make use of non-standard or private DICOM files. In this thesis I present several rendering techniques for medical image visualization that offer different display modes, both 2D and 3D, for example cutting through the data volume at arbitrary angles, rendering the surface shell of the data, and rendering the semi-transparent volume of the data. A hybrid segmentation approach, designed for semi-automated segmentation of radiological images such as CT and MRI, is proposed in this thesis to extract the organ or region of interest from the image. This approach takes advantage of both region-based and boundary-based methods. The hybrid approach consists of three steps: the first step obtains a coarse segmentation by fuzzy affinity and generates a homogeneity operator; the second step divides the image by a Voronoi diagram and reclassifies the regions with the operator to refine the segmentation from the previous step; the third step handles vague boundaries with a level-set model.
Topics for future research are mentioned at the end, including a new supplement to the DICOM standard for the storage of segmentation information, visualization of multimodal image information, and extension of the segmentation approach to higher dimensions.
First studies of electron transfer in [N]phenylenes were performed in bimolecular quenching reactions of angular [3]- and triangular [4]phenylene with various electron acceptors. The relation between the quenching rate constants kq and the free energy change of the electron transfer (ΔG0CS) could be described by the Rehm-Weller equation. From the experimental results, a reorganization energy λ of 0.7 eV was derived. Intramolecular electron transfer reactions were studied in an [N]phenylene bichromophore and a corresponding reference compound. Fluorescence lifetime and quantum yield of the bichromophore display a characteristic dependence on the solvent polarity, whereas the corresponding values of the reference compound remain constant. From the results, a nearly isoergonic ΔG0CS can be determined. As the triplet quantum yield is nearly independent of the polarity, charge recombination leads to the population of the triplet state.
Contents: Chapter 1. Introduction 1 Information Structure 2 Grammatical Correlates of Information Structure 3 Structure of the Questionnaire 4 Experimental Tasks 5 Technicalities 6 Archiving 7 Acknowledgments Chapter 2. General Questions 1 General Information 2 Phonology 3 Morphology and Syntax Chapter 3. Experimental tasks 1 Changes (Given/New in Intransitives and Transitives) 2 Giving (Given/New in Ditransitives) 3 Visibility (Given/New, Animacy and Type/Token Reference) 4 Locations (Given/New in Locative Expressions) 5 Sequences (Given/New/Contrast in Transitives) 6 Dynamic Localization (Given/New in Dynamic Loc. Descriptions) 7 Birthday Party (Weight and Discourse Status) 8 Static Localization (Macro-Planning and Given/New in Locatives) 9 Guiding (Presentational Utterances) 10 Event Cards (All New) 11 Anima (Focus types and Animacy) 12 Contrast (Contrast in pairing events) 13 Animal Game (Broad/Narrow Focus in NP) 14 Properties (Focus on Property and Possessor) 15 Eventives (Thetic and Categorical Utterances) 16 Tell a Story (Contrast in Text) 17 Focus Cards (Selective, Restrictive, Additive, Rejective Focus) 18 Who does What (Answers to Multiple Constituent Questions) 19 Fairy Tale (Topic and Focus in Coherent Discourse) 20 Map Task (Contrastive and Selective Focus in Spontaneous Dialogue) 21 Drama (Contrastive Focus in Argumentation) 22 Events in Places (Spatial, Temporal and Complex Topics) 23 Path Descriptions (Topic Change in Narrative) 24 Groups (Partial Topic) 25 Connections (Bridging Topic) 26 Indirect (Implicational Topic) 27 Surprises (Subject-Topic Interrelation) 28 Doing (Action Given, Action Topic) 29 Influences (Question Priming) Chapter 4. Translation tasks 1 Basic Intonational Properties 2 Focus Translation 3 Topic Translation 4 Quantifiers Chapter 5. Information structure summary survey 1 Preliminaries 2 Syntax 3 Morphology 4 Prosody 5 Summary: Information structure Chapter 6. 
Performance of Experimental Tasks in the Field 1 Field sessions 2 Field Session Metadata 3 Informants’ Agreement
Contents: 1. Introduction 2. Migration and Assimilation – Theoretical Approaches 2.1 Meaning and Definition of the Terms Migration and Migrant 2.2 Milton M. Gordon – Sub Processes of Assimilation 2.3 Hartmut Esser - Acculturation, Integration, and Assimilation 2.4 The Concept of Integration and Assimilation 2.5 Straight–line Assimilation and its Implications 2.6 Segmented Assimilation and its Implications 3. Social Inequality and Welfare – Theoretical Approaches 3.1 Dimensions of Inequality 3.2 Welfare Regimes and Social Inequality 3.3 Migration, Assimilation and Inequality 4. Research Design 4.1 Research Question and General Proceeding 4.2 Sample and Data Base 4.3 Operationalisation and Indicators 5. Migration, Welfare and Inequality in Three European Countries 6. Empirical Results 6.1 Performance of Migrants Compared With Natives 6.2 Different Trajectories of Assimilation 6.3 Trajectories of Segmented Assimilation and their Determinants 6.4 Policies, Attitudes and Assimilation – An Aggregate Analysis 6.5 Summary – What Determines the Performance of Migrants? 7. Discussion of Empirical Results in Terms of Theoretical Approaches 7.1 The Situation of Migrants in Three European Countries 7.2 Assessment of the Trajectories of Assimilation 8. Conclusion – Future Prospects of Migration in Europe
One type of internal diachronic change that has been extensively studied for spoken languages is grammaticalization, whereby lexical elements develop into free or bound grammatical elements. Based on a wealth of spoken languages, a large number of prototypical grammaticalization pathways have been identified. Moreover, it has been shown that desemanticization, decategorialization, and phonetic erosion are typical characteristics of grammaticalization processes. Not surprisingly, grammaticalization is also responsible for diachronic change in sign languages. Drawing on data from a fair number of sign languages, we show that grammaticalization in visual-gestural languages – as far as the development from lexical to grammatical element is concerned – follows the same developmental pathways as in spoken languages. That is, the proposed pathways are modality-independent. Besides these intriguing parallels, however, sign languages have the possibility of developing grammatical markers from manual and non-manual co-speech gestures. We discuss various instances of grammaticalized gestures and also briefly address the issue of the modality-specificity of this phenomenon.
Public pensions in the U.S.
(2005)
Contents: The Public Old Age Insurance of the U.S. -Historical overview -Technical details -Individual equity and social adequacy The Economic Problem of Old Age -Risks and economic security -Old age, retirement, and individual precaution -Insurance markets, market failures, and social insurance -Options for public pension systems The Problems of Social Security -The financial balance of OASDI -Causes of the long-run problems -Rates of return -Conclusion - The case for Social Security reform Proposed Remedies -Full, partial, or no privatization? -The President's Commission to Strengthen Social Security -Kotlikoff's Personal Security System -The Diamond-Orszag Three-Part plan
Many European countries have experienced a significant increase in unemployment in recent years. This paper reviews several theoretical models that try to explain this phenomenon. Predominantly, these models claim a link between the poor performance of European labor markets and the high level of market regulation. Commonly referred to as the Eurosclerosis debate, prominent approaches consider insider-outsider relationships, search models, and the influence of hiring and firing costs on equilibrium employment. The paper presents empirical evidence for each model and studies the relevance of the identified rigidities as a determinant of high unemployment in Europe. Furthermore, a case study analyzes the unemployment problem in Germany and critically discusses new reform efforts. In particular, this section analyzes whether the recently enacted Hartz reforms can induce higher employment.
Revisiting public investment
(2004)
The consumption equivalence method is the theoretical basis of public cost-benefit analysis. Consumption equivalence public capital prices are explicitly introduced in order to account sufficiently for the opportunity cost of public expenditure. This can resolve the dispute about the social rate of discount within public cost-benefit analysis, which arose from a criterion resembling the capital value formula, known as Lind's approach. The social rate of discount is liberated from opportunity cost considerations, and the discounting away of effects on future welfare vanishes. The corresponding question whether one should accept a positive value of the pure rate of social time preference is an old issue. Its current state between the prescriptive and descriptive view can also be interpreted as a consequence of the oversimplification of standard cost-benefit analysis. But apart from an economic self-process, the pure rate of social time preference is also defined as a business-as-usual value of social distance discounting. Hence, a political choice has to be made about this rate, which is free in principle.
An exhaustive and disjoint decomposition of social choice situations is derived in a general set-theoretical framework, using the new tool of the Lifted Pareto relation on the power set of social states, which represents a pre-choice comparison of choice option sets. The main result is a classification of social choice situations comprising three types of social choice problems. First, we usually observe the common incompleteness of the Pareto relation. Second, a kind of non-compactness problem of a choice set of social states can be generated. Finally, both can be combined. The first problem root can be regarded as a natural everyday dilemma of social choice theory, whereas the second may rather be due to modeling-technique implications. The distinction is made at a very general set-theoretical level. Hence, the derived classification of social choice situations is applicable to almost every relevant economic model.
The concepts of food deficit, hunger, undernourishment and food security are discussed. Axioms and indices for the assessment of nutrition of individuals and groups are suggested. Furthermore a measure for food aid donor performance is developed and applied to a sample of bilateral and multilateral donors providing food aid for African countries.
The papers in this volume were presented at the workshop 'Heterogeneity in Linguistic Databases', which took place on July 9, 2004 at the University of Potsdam. The workshop was organized by project D1: 'Linguistic Database for Information Structure: Annotation and Retrieval', a member project of the SFB 632, a collaborative research center entitled 'Information Structure: the Linguistic Means for Structuring Utterances, Sentences and Texts'. The workshop brought together both developers and users of linguistic databases from a number of research projects which work on an empirical basis, all of which have to cope with different sorts of heterogeneity: primary linguistic data and annotated information may be heterogeneous, as well as the data structures representing them. The first four papers (by Wagner, Schmidt, Lüdeling, and Witt) address aspects of heterogeneous data from the point of view of database developers; the remaining three papers (by Meyer, Smith, and Teich/Fankhauser) focus on data exploitation by the users.
Interdisciplinary studies on information structure : ISIS ; working papers of the SFB 632. - Vol. 1
(2004)
Contents: A1: Phonology and syntax of focussing and topicalisation: Gisbert Fanselow: Cyclic Phonology–Syntax-Interaction: Movement to First Position in German Caroline Féry and Laura Herbst: German Sentence Accent Revisited Shinichiro Ishihara: Prosody by Phase: Evidence from Focus Intonation–Wh-scope Correspondence in Japanese A2: Quantification and information structure: Cornelia Endriss and Stefan Hinterwimmer: The Influence of Tense in Adverbial Quantification A3: Rhetorical Structure in Spoken Language: Modeling of Global Prosodic Parameters: Ekaterina Jasinskaja, Jörg Mayer and David Schlangen: Discourse Structure and Information Structure: Interfaces and Prosodic Realization B2: Focussing in African Tchadic languages: Katharina Hartmann and Malte Zimmermann: Focus Strategies in Chadic: The Case of Tangale Revisited D1: Linguistic database for information structure: Annotation and retrieval: Stefanie Dipper, Michael Götze, Manfred Stede and Tillmann Wegst: ANNIS: A Linguistic Database for Exploring Information Structure
Collisions of black holes and neutron stars, called mixed binaries in the following, are interesting for at least two reasons. Firstly, they are expected to emit a large amount of energy as gravitational waves, which could be measured by new detectors. The form of those waves is expected to carry information about the internal structure of such systems. Secondly, collisions of such objects are the prime suspects for short gamma-ray bursts. The exact mechanism of the energy emission is so far unknown. In the past, the Newtonian theory of gravitation and modifications of it were often used for numerical simulations of collisions of mixed binary systems. However, near such objects the gravitational forces are so strong that the use of General Relativity is necessary for accurate predictions. There are many problems in general relativistic simulations. However, systems of two neutron stars and systems of two black holes have been studied extensively in the past, and many of those problems have been solved. One of the remaining problems has been the use of hydrodynamics at excision boundaries. Inside excision regions, no evolution is carried out. Such regions are often used inside black holes to circumvent instabilities of the numerical methods near the singularity. Methods to handle hydrodynamics at such boundaries are described and tested in this work. One important test and the first application of those methods has been the simulation of a neutron star collapsing to a black hole. The success of these simulations, and in particular the performance of the excision methods, was an important step towards simulations of mixed binaries. Initial data are necessary for every numerical simulation. However, the creation of such initial data for general relativistic situations is in general very complicated.
In this work it is shown how to obtain initial data for mixed binary systems using an already existing method for initial data of two black holes. These initial data have been used for evolutions of such systems, and the problems encountered are discussed in this work. One of the problems is instabilities due to the different methods used, which could be solved by dissipation of appropriate strength. Another problem is the expected drift of the black hole towards the neutron star. It is shown that this can be solved by using special gauge conditions, which prevent the black hole from moving on the computational grid. The methods and simulations shown in this work are only the starting point for a much more detailed study of mixed binary systems. Better methods, models and simulations with higher resolution and even better gauge conditions will be the focus of future work. It is expected that such detailed studies can give information about the emitted gravitational waves, which is important in view of the newly built gravitational wave detectors. In addition, these simulations could give insight into the processes responsible for short gamma-ray bursts.
The advent of large-scale and high-throughput technologies has recently caused a shift in focus in contemporary biology from decades of reductionism towards a more systemic view. Alongside the availability of genome sequences, the exploration of organisms utilizing such approaches should give rise to a more comprehensive understanding of complex systems. Domestication and intensive breeding of crop plants have led to a parallel narrowing of their genetic basis. The potential to improve crops by conventional breeding using elite cultivars is therefore rather limited, and molecular technologies such as marker-assisted selection (MAS) are currently being exploited to re-introduce allelic variance from wild species. Molecular breeding strategies have to date mostly focused on the introduction of yield- or resistance-related traits. However, given that medical research has highlighted the importance of crop compositional quality in the human diet, this research field is rapidly becoming more important. The chemical composition of biological tissues can be efficiently assessed by metabolite profiling techniques, which allow the multivariate detection of metabolites in a given biological sample. Here, a GC/MS metabolite profiling approach has been applied to investigate natural variation of tomatoes with respect to the chemical composition of their fruits. The establishment of a mass spectral and retention index (MSRI) library was a prerequisite for this work in order to establish a framework for the identification of metabolites from a complex mixture. As mass spectral and retention index information is highly important for the metabolomics community, this library was made publicly available. Metabolite profiling of tomato wild species revealed large differences in chemical composition, especially of amino and organic acids, as well as in sugar composition and secondary metabolites. Intriguingly, the analysis of a set of S. pennellii introgression lines (IL) identified 889 quantitative trait loci of compositional quality and 326 yield-associated traits. These traits are characterized by increases/decreases not only of single metabolites but also of entire metabolic pathways, thus highlighting the potential of this approach in uncovering novel aspects of metabolic regulation. Finally, the biosynthetic pathway of the phenylalanine-derived fruit volatiles phenylethanol and phenylacetaldehyde was elucidated via a combination of metabolic profiling of natural variation, stable isotope tracer experiments and reverse genetic experimentation.
Fluvial systems are one of the major features shaping a landscape. They adjust to the prevailing tectonic and climatic setting and are therefore very sensitive markers of changes in these systems. If their response to tectonic and climatic forcing is quantified and the climatic signal is excluded, it is possible to derive a local deformation history. Here, we investigate fluvial terraces and erosional surfaces in the southern Chilean forearc to assess the long-term geomorphic, and hence tectonic, evolution. Remote sensing and field studies of the Nahuelbuta Range show that the long-term deformation of the Chilean forearc is manifested by breaks in topography, sequences of differentially uplifted marine, alluvial and strath terraces, as well as tectonically modified river courses and drainage basins. We used SRTM-90 data as the basic elevation information for extracting and delineating drainage networks. We calculated hypsometric curves as an indicator for basin uplift, stream-length gradient indices to identify stream segments with anomalous slopes, and longitudinal river profiles as well as DS-plots to identify knickpoints and other anomalies. In addition, we investigated topography with elevation-slope graphs, profiles, and DEMs to reveal erosional surfaces. During the first field trip we measured palaeoflow directions, performed pebble counting and sampled the fluvial terraces in order to apply cosmogenic nuclide dating (¹⁰Be, ²⁶Al) as well as provenance analyses. Our preliminary analysis of the Coastal Cordillera indicates a clear segmentation between the northern and southern parts of the Nahuelbuta Range. The Lanalhue Fault, a NW-SE striking fault zone oblique to the plate boundary, defines the segment boundary. Furthermore, we find a complex drainage re-organisation, including a drainage reversal and wind gap on the divide between the Tirúa and Pellahuén basins east of the town of Tirúa.
The coastal basins lost most of their Andean sediment supply areas that existed in Tertiary and in part during early Pleistocene time. Between the Bío-Bío and Imperial rivers no Andean river is currently capable of traversing the Coastal Cordillera, suggesting ongoing Quaternary uplift of the entire range. From the spatial distribution of geomorphic surfaces in this region, two uplift signals may be derived: (1) a long-term differential uplift process, active since the Miocene and possibly caused by underplating of subducted trench sediments; (2) a younger, local uplift affecting only the northern part of the Nahuelbuta Range that may be caused by the interaction of the forearc with the subduction of the Mocha Fracture Zone at the latitude of the Arauco peninsula. Our approach thus provides results in our attempt to decipher the characteristics of forearc development at active convergent margins using long-term geomorphic indicators. Furthermore, it is expected that our ongoing assessment will constrain repeatedly active zones of deformation. Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, workshop, 9-10 February 2006
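The stream-length gradient (SL) index used in the study above to flag anomalously steep stream segments has a standard definition, SL = (ΔH/ΔL)·L, with L the distance from the drainage divide to the segment midpoint (Hack, 1973). A minimal Python sketch on a made-up toy profile, not data from this study:

```python
def sl_index(dist, elev):
    """SL index per segment from downstream distance (m) and elevation (m)."""
    out = []
    for i in range(len(dist) - 1):
        dH = elev[i] - elev[i + 1]           # elevation drop over the segment
        dL = dist[i + 1] - dist[i]           # segment length
        L = 0.5 * (dist[i] + dist[i + 1])    # distance to segment midpoint
        out.append(dH / dL * L)
    return out

# Toy longitudinal profile: one steep segment mimics a knickpoint.
dist = [1000, 2000, 3000, 4000, 5000]
elev = [900, 880, 860, 780, 770]
```

Segments crossing a knickpoint, such as the third one in this toy profile, stand out as SL maxima against the background values.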
We consider a system of infinitely many hard balls in R^d undergoing Brownian motions and subjected to a smooth pair potential. It is modeled by an infinite-dimensional stochastic differential equation with a local time term. We prove that the set of all equilibrium measures, solutions of a detailed balance equation, coincides with the set of canonical Gibbs measures associated with the hard-core potential added to the smooth interaction potential.
In this paper, we consider families of time Markov fields (or reciprocal classes) which have the same bridges as a Brownian diffusion. We characterize each class as the set of solutions of an integration by parts formula on the space of continuous paths C([0,1]; R^d). Our techniques provide a characterization of gradient diffusions by a duality formula and, in the case of reversibility, a generalization of a result of Kolmogorov.
We develop a cluster expansion in space-time for an infinite-dimensional system of interacting diffusions where the drift term of each diffusion depends on the whole past of the trajectory; these interacting diffusions arise when considering the Langevin dynamics of a ferromagnetic system submitted to a disordered external magnetic field.
The authors analyse different Gibbsian properties of interactive Brownian diffusions X indexed by the d-dimensional lattice. In the first part of the paper, these processes are characterized as Gibbs states on path spaces. In the second part of the paper, they study the Gibbsian character on R^{Z^d} of the law at time t of the infinite-dimensional diffusion X(t), when the initial law is Gibbsian. AMS Classifications: 60G15, 60G60, 60H10, 60J60
We prove in this paper an existence result for infinite-dimensional stationary interactive Brownian diffusions. The interaction is supposed to be small in the norm ||.||∞ but otherwise is very general, being possibly non-regular and non-Markovian. Our method consists in using the characterization of such diffusions as space-time Gibbs fields so that we construct them by space-time cluster expansions in the small coupling parameter.
Adsorption layers of soluble surfactants enable and govern a variety of phenomena in surface and colloid science, such as foams. The ability of a surfactant solution to form wet foam lamellae is governed by the surface dilatational rheology. Only systems having a non-vanishing imaginary part of their surface dilatational modulus, E, are able to form wet foams. The aim of this thesis is to illuminate the dissipative processes that give rise to the imaginary part of the modulus. Two controversial models are discussed in the literature. The reorientation model assumes that the surfactants adsorb in two distinct states, differing in their orientation. This model is able to describe the frequency dependence of the modulus E; however, it assumes reorientation dynamics on the millisecond time scale. In order to assess this model, we designed an SHG pump-probe experiment that addresses the orientation dynamics. The results reveal that the orientation dynamics occur on the picosecond time scale, in strong contradiction with the two-state model. The second model regards the interface as an interphase: the adsorption layer consists of a topmost monolayer and an adjacent sublayer, and the dissipative process is due to the molecular exchange between both layers. The assessment of this model required the design of an experiment that discriminates between the surface compositional term and the sublayer contribution. Such an experiment has been successfully designed, and results on elastic and viscoelastic surfactants provided evidence for the correctness of the model. Because of its inherent surface specificity, surface SHG is a powerful analytical tool that can be used to gain information on the molecular dynamics and reorganization of soluble surfactants; it is a central element of both experiments. However, these experiments impose several structural requirements on the model system. During the course of this thesis, a proper model system has been identified and characterized.
The combination of several linear and nonlinear optical techniques allowed for a detailed picture of the interfacial architecture of these surfactants.