The first goal of the present work addresses the need for different rationing methods of the Global Change and Financial Transition (GFT) working group at the Potsdam Institute for Climate Impact Research (PIK): I provide a toolbox which contains a variety of rationing methods to be applied to micro-economic disequilibrium models of the lagom model family. This toolbox consists of well-known rationing methods and of rationing methods provided specifically for lagom. To ensure easy application, the toolbox is constructed in a modular fashion. The second goal of the present work is to present a micro-economic labour market where heterogeneous labour suppliers experience consecutive job opportunities and need to decide whether to apply for employment. The labour suppliers are heterogeneous with respect to their qualifications and their beliefs about the application behaviour of their competitors. They learn simultaneously, in Bayesian fashion, about their individual perceived probability of obtaining employment conditional on application (PPE) by observing each other's application behaviour over a cycle of job opportunities.
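One of the best-known rationing methods such a toolbox would contain, proportional rationing, can be sketched as follows (Python; the function name and interface are illustrative, not the actual lagom toolbox API): when aggregate demand exceeds supply, every agent's demand is scaled by the same factor.

```python
def proportional_rationing(demands, supply):
    """Scale all demands by a common factor so the total never exceeds supply.

    Illustrative sketch of proportional rationing; not the lagom interface.
    """
    total = sum(demands)
    if total <= supply:          # no rationing needed
        return list(demands)
    factor = supply / total      # uniform rationing coefficient
    return [d * factor for d in demands]

# Example: aggregate demand of 60 exceeds supply of 30, so everyone gets half.
print(proportional_rationing([10, 20, 30], 30))  # [5.0, 10.0, 15.0]
```

A modular toolbox would expose alternative rules (e.g. priority or queue-based rationing) behind the same interface, so a model can swap rationing methods without changing the surrounding code.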
This dissertation contains theoretical investigations on the morphology and statistical mechanics of vesicles. The shapes of homogeneous fluid vesicles and of inhomogeneous vesicles with fluid and solid membrane domains are calculated. The influence of thermal fluctuations is investigated. The obtained results are valid on mesoscopic length scales and are based on a geometrical membrane model, in which the vesicle membrane is described as either a static or a thermally fluctuating surface. The thesis consists of three parts. In the first part, homogeneous vesicles are considered. The focus in this part is on the thermally induced morphological transition between vesicles with prolate and oblate shape. With the help of Monte Carlo simulations, the free energy profile of these vesicles is determined. It can be shown that the shape transformation between prolate and oblate vesicles proceeds continuously and is not hampered by a free energy barrier. The second and third parts deal with inhomogeneous vesicles which contain intramembrane domains. These investigations are motivated by experimental results on domain formation in single- or multicomponent vesicles, where phase separation occurs and different membrane phases coexist. The resulting domains differ with regard to their membrane structure (solid, fluid). The membrane structure has a distinct effect on the form of the domain and the morphology of the vesicle. In the second part, vesicles with coexisting solid and fluid membrane domains are studied, while the third part addresses vesicles with coexisting fluid domains. The equilibrium morphology of vesicles with simple and complex domain forms, derived through minimisation of the membrane energy, is determined as a function of material parameters. The results are summarised in morphology diagrams. These diagrams show previously unknown morphological transitions between vesicles with different domain shapes.
The impact of thermal fluctuations on the vesicle and the form of the domains is investigated by means of Monte Carlo simulations.
The need to develop sustainable resource management strategies for semi-arid and arid rangelands is acute, as non-adapted grazing strategies lead to irreversible environmental problems such as desertification and the associated loss of economic support to society. In such vulnerable ecosystems, successful implementation of sustainable management strategies depends on a well-founded understanding of the processes at different scales that underlie the complex system dynamics. There is ample evidence that, in contrast to traditional sectoral approaches, only interdisciplinary research can resolve problems in conservation and natural resource management. In this thesis I combined a range of modeling approaches that integrate different disciplines and spatial scales in order to contribute to basic guidelines for sustainable management of semi-arid and arid rangelands. Since water availability and livestock management are seen as the most potent determinants of the dynamics of semi-arid and arid ecosystems, I focused on (i) the interaction of ecological and hydrological processes and (ii) the effect of farming strategies. First, I developed a grid-based, small-scale model simulating vegetation dynamics and interlinked hydrological processes. The simulation results suggest that ecohydrological interactions gain importance in rangelands with ascending slope, where vegetation cover serves to obstruct run-off and decreases evaporation from the soil. Disturbances like overgrazing influence these positive feedback mechanisms by affecting vegetation cover and composition. In the second part, I present a modeling approach that has the power to transfer and integrate ecological information from the small-scale vegetation model to the landscape scale, which is most relevant for the conservation of biodiversity and sustainable management of natural resources. I combined techniques of stochastic modeling with remotely sensed data and GIS to investigate to which extent spatial interactions, like the movement of surface water by run-off in water-limited environments, affect ecosystem functioning at the landscape scale. My simulation experiments show that overgrazing decreases the number of vegetation patches that act as hydrological sinks, and run-off increases. The results of both simulation models imply that different vegetation types should not only be regarded as providers of forage production but also as regulators of ecosystem functioning. Vegetation patches with good cover of perennial vegetation are capable of catching and conserving surface run-off from degraded surrounding areas. Therefore, downstream run-off out of the simulated system is prevented and efficient use of water resources is guaranteed at all times. This consequence also applies to commercial rotational grazing strategies for semi-arid and arid rangelands with ascending slope, where non-degraded paddocks act as hydrological sinks. Finally, with the help of an integrated ecological-economic modeling approach, I analyzed the relevance of farmers' ecological knowledge for the long-term functioning of semi-arid and arid grazing systems under current and future climatic conditions. The modeling approach consists of an ecological and an economic module and combines relevant processes on either level. Again, vegetation dynamics and forage productivity are derived by the small-scale vegetation model. I showed that sustainable management of semi-arid and arid rangelands relies strongly on the farmers' knowledge of how the ecosystem works. Furthermore, my simulation results indicate that the projected lower annual rainfall due to climate change, in combination with non-adapted grazing strategies, adds an additional layer of risk to these ecosystems that are already prone to land degradation. All simulation models focus on the most essential factors and ignore specific details. Therefore, even though all simulation models are parameterized for a specific dwarf shrub savanna in arid southern Namibia, the conclusions drawn are applicable to semi-arid and arid rangelands in general.
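The run-off feedback described above can be illustrated with a toy one-dimensional version of such a grid-based model (Python; all parameter values are illustrative and not taken from the thesis): each cell infiltrates water in proportion to its vegetation cover and passes the surplus downslope.

```python
def runoff_downslope(cover, rain=10.0, max_infiltration=12.0):
    """Route water down a 1-D slope: each cell infiltrates a share of the
    arriving water limited by its vegetation cover (0..1); the rest runs
    off to the next cell downslope. Returns per-cell infiltrated water.

    Toy sketch with illustrative parameters, not the thesis model.
    """
    infiltrated = []
    runon = 0.0
    for c in cover:                             # cells ordered top to bottom
        water = rain + runon
        soak = min(water, max_infiltration * c)  # cover limits uptake
        infiltrated.append(soak)
        runon = water - soak                     # surplus continues downslope
    return infiltrated

# A vegetated cell below a bare (overgrazed) one catches its run-off.
print(runoff_downslope([0.0, 1.0]))  # [0.0, 12.0]
```

With a bare cell upslope of a vegetated one, the vegetated cell acts as a hydrological sink, which is the patch-level mechanism behind the landscape-scale results summarised above.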
The aim of this work was the generation of carbon materials with high surface area, exhibiting a hierarchical pore system in the macro- and mesorange. Such a pore system facilitates transport through the material and enhances the interaction with the carbon matrix (macropores are pores with diameters > 50 nm, mesopores between 2 and 50 nm). To this end, new strategies for the synthesis of novel carbon materials with designed porosity were developed that are in particular useful for the storage of energy. Besides the porosity, it is the graphene structure itself that determines the properties of a carbon material. Non-graphitic carbon materials usually exhibit a quite large degree of disorder with many defects in the graphene structure, and thus exhibit inherent microporosity (d < 2 nm). These pores act as traps and oppose reversible interaction with the carbon matrix. Furthermore, they reduce the stability and conductivity of the carbon material, which is undesirable for the proposed applications. As one part of this work, the graphene structures of different non-graphitic carbon materials were studied in detail using a novel wide-angle X-ray scattering model that allowed precise information to be obtained about the nature of the carbon building units (graphene stacks). Different carbon precursors were evaluated regarding their potential use for the syntheses shown in this work, among which mesophase pitch proved to be advantageous when a less disordered carbon microstructure is desired. By using mesophase pitch as carbon precursor, two templating strategies were developed using the nanocasting approach. The synthesized (monolithic) materials combined for the first time the advantages of a hierarchical interconnected pore system in the macro- and mesorange with the advantages of mesophase pitch as carbon precursor. In the first case, hierarchical macro-/mesoporous carbon monoliths were synthesized by replication of hard (silica) templates. Thus, a suitable synthesis procedure was developed that allowed the infiltration of the template with the hardly soluble carbon precursor. In the second case, hierarchical macro-/mesoporous carbon materials were synthesized by a novel soft-templating technique, taking advantage of the phase separation (spinodal decomposition) between mesophase pitch and polystyrene. The synthesis also allowed the generation of monolithic samples and the incorporation of functional nanoparticles into the material. The synthesized materials showed excellent properties as anode material in lithium batteries and as support material for supercapacitors.
In the present dissertation we study problems related to synchronization phenomena in the presence of noise, which unavoidably appears in real systems. One part of the work investigates the use of delayed feedback to control properties of diverse chaotic and stochastic dynamical systems, with emphasis on those determining predisposition to synchronization. The other part deals with a constructive role of noise, i.e. its ability to synchronize identical self-sustained oscillators. First, we demonstrate that the coherence of a noisy or chaotic self-sustained oscillator can be efficiently controlled by delayed feedback. We develop the analytical theory of this effect, considering noisy systems in the Gaussian approximation. Possible applications of the effect for synchronization control are also discussed. Second, we consider synchrony of limit-cycle systems (in other words, self-sustained oscillators) driven by identical noise. For weak noise and smooth systems we prove the purely synchronizing effect of noise. For slightly different oscillators and/or slightly non-identical driving, synchrony becomes imperfect, and this subject is also studied. Then we show numerically that moderate noise can lead to desynchronization of some systems under certain circumstances. For neurons the latter effect means “antireliability” (the “reliability” property of neurons is considered important from the viewpoint of information transmission), and we extend our investigation to neural oscillators, which are not always limit-cycle ones. Third, we develop a weakly nonlinear theory of the Kuramoto transition (a transition to collective synchrony) in an ensemble of globally coupled oscillators in the presence of additional time-delayed coupling terms. We show that a linear delayed feedback not only controls the transition point but effectively changes the nonlinear terms near the transition.
A purely nonlinear delayed coupling does not affect the transition point, but can reduce or enhance the amplitude of collective oscillations.
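The purely synchronizing effect of identical noise on identical oscillators can be checked numerically with a minimal sketch (Python; a single phase-variable oscillator with Euler-Maruyama stepping and illustrative parameters, not the systems analysed in the dissertation): two copies started at different phases and driven by the same noise realization converge.

```python
import math
import random

def phase_difference_after(n_steps=200000, dt=0.001, eps=1.0, seed=7):
    """Two identical phase oscillators, dphi = omega*dt + eps*sin(phi)*dW,
    driven by the SAME noise increments; returns their final phase
    difference (wrapped to [0, pi]). Illustrative toy model."""
    random.seed(seed)
    omega = 1.0
    phi1, phi2 = 0.0, 2.0                        # distinct initial phases
    for _ in range(n_steps):
        dW = random.gauss(0.0, math.sqrt(dt))    # common noise increment
        phi1 += omega * dt + eps * math.sin(phi1) * dW
        phi2 += omega * dt + eps * math.sin(phi2) * dW
    return abs((phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi)

print(phase_difference_after())  # nearly zero: identical noise synchronizes
```

Setting a different noise seed per oscillator (non-identical driving) destroys this convergence, which mirrors the distinction between identical and slightly non-identical driving made above.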
The predictability problem
(2007)
We try to determine whether it is possible to approximate the subjective Cloze predictability measure with two types of objective measures, semantic and word n-gram measures, based on the statistical properties of text corpora. The semantic measures are constructed either by querying Internet search engines or by applying Latent Semantic Analysis, while the word n-gram measures solely depend on the results of Internet search engines. We also analyse the role of Cloze predictability in the SWIFT eye movement model, and evaluate whether other parameters might be able to take the place of predictability. Our results suggest that a computational model that generates predictability values not only needs measures that can determine the relatedness of a word to its context; the presence of measures that assert unrelatedness is just as important. Even though we only have similarity measures, however, we predict that SWIFT should perform just as well when we replace Cloze predictability with our measures.
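The word n-gram idea can be sketched with plain corpus counts (Python; a toy corpus stands in for the search-engine hit counts used in the study): the predictability of a word is approximated by its conditional bigram probability given the preceding word.

```python
from collections import Counter

def bigram_predictability(corpus_tokens, prev_word, word):
    """Estimate P(word | prev_word) from raw bigram/unigram counts.

    Stand-in for the search-engine counts used in the study."""
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    unigrams = Counter(corpus_tokens[:-1])   # words that have a successor
    if unigrams[prev_word] == 0:
        return 0.0
    return bigrams[(prev_word, word)] / unigrams[prev_word]

corpus = "the cat sat on the mat the cat ran".split()
print(bigram_predictability(corpus, "the", "cat"))  # 2 of 3 'the' -> 'cat'
```

A measure of this kind captures relatedness only; as argued above, a complement that asserts *un*relatedness would be needed to fully replace Cloze predictability.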
This work introduces novel internal and external memory algorithms for computing voxel skeletons of massive voxel objects with complex network-like architecture and for converting these voxel skeletons to piecewise linear geometry, that is triangle meshes and piecewise straight lines. The presented techniques help to tackle the challenge of visualizing and analyzing 3d images of increasing size and complexity, which are becoming more and more important in, for example, biological and medical research. Section 2.3.1 contributes to the theoretical foundations of thinning algorithms with a discussion of homotopic thinning in the grid cell model. The grid cell model explicitly represents a cell complex built of faces, edges, and vertices shared between voxels. Characterizing the pairs of cells to be deleted is much simpler than previous characterizations of simple voxels. The grid cell model resolves topologically unclear voxel configurations at junctions and locked voxel configurations causing, for example, interior voxels in sets of non-simple voxels. A general conclusion is that the grid cell model is superior to indecomposable voxels for algorithms that need detailed control of topology. Section 2.3.2 introduces a noise-insensitive measure based on the geodesic distance along the boundary to compute two-dimensional skeletons. The measure is able to retain thin object structures if they are geometrically important while ignoring noise on the object's boundary. No other known measure combines these properties. The measure is also used to guide erosion in a thinning process from the boundary towards lines centered within plate-like structures. Geodesic distance based quantities seem to be well suited to robustly identify one- and two-dimensional skeletons. Chapter 6 applies the method to visualization of bone micro-architecture.
Chapter 3 describes a novel geometry generation scheme for representing voxel skeletons, which retracts voxel skeletons to piecewise linear geometry per dual cube. The generated triangle meshes and graphs provide a link to geometry processing and efficient rendering of voxel skeletons. The scheme creates non-closed surfaces with boundaries, which contain fewer triangles than a representation of voxel skeletons using closed surfaces like small cubes or iso-surfaces. A conclusion is that thinking specifically about voxel skeleton configurations instead of generic voxel configurations helps to deal with the topological implications. The geometry generation is one foundation of the applications presented in Chapter 6. Chapter 5 presents a novel external memory algorithm for distance ordered homotopic thinning. The presented method extends known algorithms for computing chamfer distance transformations and thinning to execute I/O-efficiently when input is larger than the available main memory. The applied block-wise decomposition schemes are quite simple. Yet it was necessary to carefully analyze effects of block boundaries to devise globally correct external memory variants of known algorithms. In general, doing so is superior to naive block-wise processing ignoring boundary effects. Chapter 6 applies the algorithms in a novel method based on confocal microscopy for quantitative study of micro-vascular networks in the field of microcirculation.
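The chamfer distance transformation extended in Chapter 5 can be sketched in its basic, in-memory form (Python; the 3-4 weights are the classic choice, and the I/O-efficient block-wise decomposition is omitted here):

```python
def chamfer_distance(mask):
    """Two-pass 3-4 chamfer distance to the nearest background pixel.

    mask: 2-D list of 0 (background) / 1 (object). In-memory sketch only;
    the external-memory variant would process the image block-wise."""
    INF = 10**9
    h, w = len(mask), len(mask[0])
    d = [[0 if mask[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # Forward pass: propagate from top-left (weights: 3 axial, 4 diagonal).
    for y in range(h):
        for x in range(w):
            for dy, dx, cost in ((0, -1, 3), (-1, 0, 3), (-1, -1, 4), (-1, 1, 4)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + cost)
    # Backward pass: propagate from bottom-right.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for dy, dx, cost in ((0, 1, 3), (1, 0, 3), (1, 1, 4), (1, -1, 4)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + cost)
    return d

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(chamfer_distance(img))  # every object pixel ends at distance 3
```

The block-boundary analysis mentioned above is exactly about making these two raster passes correct when the image is split into tiles that are processed one at a time.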
The statistical analysis of the variations of the daily-mean frequency of the maximum ionospheric electron density foF2 is performed in connection with the occurrence of (more than 60) earthquakes with magnitudes M > 6.0, depths h < 80 km and distances from the vertical sounding station R < 1000 km. For the study, data of the Tokyo sounding station are used, which were registered every hour in the years 1957-1990. It is shown that, on average, foF2 decreases before the earthquakes. One day before the shock the decrease amounts to about 5 %. The statistical reliability of this phenomenon is found to be better than 0.95. Further, the variations of the occurrence probability of the turbulization of the F-layer (spread F) are investigated for (more than 260) earthquakes with M > 5.5, h < 80 km, R < 1000 km. For the analysis, data of the Japanese station Akita from 1969-1990 are used, which were obtained every hour. It is found that before the earthquakes the occurrence probability of spread F decreases. In the week before the event, the decrease has values of more than 10 %. The statistical reliability of this phenomenon is also larger than 0.95. In examining the seismo-ionospheric effects, only periods of time with weak heliogeomagnetic disturbances are considered, i.e. the Wolf number is less than 100 and the index ΣKp is smaller than 30.
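The averaging procedure described above amounts to a superposed epoch analysis, which can be sketched as follows (Python; synthetic daily values and event days stand in for the Tokyo data):

```python
def superposed_epoch(series, event_days, lag):
    """Mean value of `series` exactly `lag` days before each event
    (superposed epoch analysis); events too close to the start are skipped.

    Sketch with synthetic data, not the actual foF2 processing chain."""
    samples = [series[d - lag] for d in event_days if d - lag >= 0]
    return sum(samples) / len(samples)

# Synthetic example: baseline 100 with a 5 % dip one day before each event.
foF2 = [100.0] * 30
events = [10, 20]
for d in events:
    foF2[d - 1] = 95.0

print(superposed_epoch(foF2, events, lag=1))  # 95.0  (the pre-event dip)
print(superposed_epoch(foF2, events, lag=2))  # 100.0 (baseline)
```

In the actual study, the significance of such a dip would additionally be tested against the day-to-day variability of the series, which is where the 0.95 reliability figure comes from.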
In the present work, phenomena in the ionosphere are studied which are connected with earthquakes (16 events) having a depth of less than 50 km and a magnitude M larger than 4. Night-time Es-spread effects are analysed using data of the vertical sounding station Petropavlovsk-Kamchatsky (φ=53.0°, λ=158.7°) from May 2004 until August 2004, registered every 15 minutes. It is found that the maximum distance of the earthquake from the sounding station at which pre-seismic phenomena are still observable depends on the magnitude of the earthquake. Further, it is shown that 1-2 days before the earthquakes, in the pre-midnight hours, the appearance of Es-spread increases. The reliability of this increase amounts to 0.95.
A model of the generation of pulses of local electric fields with characteristic time scales of 1–10 minutes is considered for atmospheric conditions above fracture regions of earthquakes. In the model, it is proposed that aerosols, an increased ionization velocity and upstreaming air flows occur under night-time conditions. The pulses of local electric fields cause corresponding pulses of infrared emissions. However, infrared emissions with time scales of 1–10 minutes have not yet been observed experimentally. The authors suggest that the considered non-stationary field and radiation effects might be a new type of earthquake indicator and call for special earth-based and satellite observations of the night-time atmosphere in seismoactive fracture regions.
We numerically investigate nonlinear asymmetric square patterns in a horizontal convection layer with up-down reflection symmetry. As a novel feature we find the patterns to appear via the skewed varicose instability of rolls. The time-independent nonlinear state is generated by two unstable checkerboard (symmetric square) patterns and their nonlinear interaction. As the buoyancy forces increase, the interacting modes give rise to bifurcations leading to a periodic alternation between a nonequilateral hexagonal pattern and the square pattern, or to different kinds of standing oscillations.
In this paper an analysis of the excitation conditions of mirror waves which propagate parallel to an external magnetic field is performed. Analytical expressions are found for the dispersion relations of the waves under different plasma conditions. These relations may be used in the future to develop the nonlinear theory of mirror waves. In comparison with former analytical works, this study takes into account the influence of the magnetic field and finite temperatures of the ions parallel to the magnetic field. The results are applied to the Earth's magnetosheath.
Based on recent solar models, the excitation of ion-acoustic turbulence in the weakly collisional, fully and partially ionized regions of the solar atmosphere is investigated. Within the framework of hydrodynamics, conditions are found under which the heating of the plasma by ion-acoustic-type waves is more effective than Joule heating. Taking into account wave and Joule heating effects, a nonlinear differential equation is derived which describes the evolution of nonlinear ion-acoustic waves in the collisional plasma.
A numerical MHD model is developed to investigate the acceleration and heating of both thermal and auroral plasma. This is done for magnetospheric flux tubes in which intensive field-aligned currents flow. The geometry of each of these tubes is given by the empirical Tsyganenko model of the magnetospheric field. The parameters of the background plasma outside the flux tube as well as the strength of the electric field of magnetospheric convection are prescribed. Performing the numerical calculations, the distributions of the plasma densities, velocities, temperatures, parallel electric field and current, and of the coefficients of thermal conductivity are obtained in a self-consistent way. It is found that EIC turbulence develops effectively in the thermal plasma. The parallel electric field develops under the action of the anomalous resistivity. This electric field accelerates both the thermal and the auroral plasma. The thermal turbulent plasma is also subjected to intensive heating. The increase of the plasma of the Earth's ionosphere. Besides, studying the growth and dispersion properties of oblique ion cyclotron waves excited in a drifting magnetized plasma, it is shown that under non-stationary conditions such waves may reveal the properties of bursts of polarized transverse electromagnetic waves at frequencies near the proton gyrofrequency.
This paper deals with the Mie scattering kernels for multi-spectral data. The kernels may be represented in the form of power series. Furthermore, the singular-value spectrum and the degree of ill-posedness are numerically approximated as functions of the refractive index of the particles. A special hybrid regularization technique allows us to determine the particle distributions of different types via inversion.
Contents:
1 Introduction
2 Experiment
3 Data
4 Symbolic dynamics
4.1 Symbolic dynamics as a tool for data analysis
4.2 2-symbols coding
4.3 3-symbols coding
5 Measures of complexity
5.1 Word statistics
5.2 Shannon entropy
6 Testing for stationarity
6.1 Stationarity
6.2 Time series of cycle durations
6.3 Chi-square test
7 Control parameters in the production of rhythms
8 Analysis of relative phases
9 Discussion
10 Outlook
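Items 5.1 and 5.2 above, word statistics and Shannon entropy over a symbolic coding, can be sketched as follows (Python; a toy 2-symbol sequence stands in for the experimentally derived coding):

```python
import math
from collections import Counter

def word_entropy(symbols, word_length):
    """Shannon entropy (bits) of the distribution of overlapping words
    of the given length in a symbol sequence. Toy sketch of the word
    statistics / entropy measures named in the contents."""
    words = [tuple(symbols[i:i + word_length])
             for i in range(len(symbols) - word_length + 1)]
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A strictly alternating sequence has two equally likely length-2 words
# ('01' and '10'), hence exactly 1 bit of word entropy.
print(word_entropy("01" * 8 + "0", 2))  # 1.0
```

Larger word lengths probe longer-range structure; a constant sequence gives zero entropy, and a fully random binary sequence approaches `word_length` bits.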
A numerical bifurcation analysis of the electrically driven plane sheet pinch is presented. The electrical conductivity varies across the sheet such as to allow instability of the quiescent basic state at some critical Hartmann number. The most unstable perturbation is the two-dimensional tearing mode. Restricting the whole problem to two spatial dimensions, this mode is followed up to a time-asymptotic steady state, which proves to be sensitive to three-dimensional perturbations even close to the point where the primary instability sets in. A comprehensive three-dimensional stability analysis of the two-dimensional steady tearing-mode state is performed by varying parameters of the sheet pinch. The instability with respect to three-dimensional perturbations is suppressed by a sufficiently strong magnetic field in the invariant direction of the equilibrium. For a special choice of the system parameters, the unstably perturbed state is followed up in its nonlinear evolution and is found to approach a three-dimensional steady state.
We investigate numerically the appearance of heteroclinic behavior in a three-dimensional, buoyancy-driven fluid layer with stress-free top and bottom boundaries, a square horizontal periodicity with a small aspect ratio, and rotation at low to moderate rates about a vertical axis. The Prandtl number is 6.8. If the rotation is not too slow, the skewed-varicose instability leads from stationary rolls to a stationary mixed-mode solution, which in turn loses stability to a heteroclinic cycle formed by unstable roll states and connections between them. The unstable eigenvectors of these roll states are also of the skewed-varicose or mixed-mode type, and in some parameter regions skewed-varicose-like shearing oscillations as well as square patterns are involved in the cycle. Ever-present weak noise leads to irregular horizontal translations of the convection pattern and makes the dynamics chaotic, which is verified by calculating Lyapunov exponents. In the nonrotating case, the primary rolls lose stability, depending on the aspect ratio, to traveling waves or a stationary square pattern. We also study the symmetries of the solutions at the intermittent fixed points in the heteroclinic cycle.
The dynamics of tail-like current sheets under the influence of small-scale plasma turbulence
(1999)
A 2D magnetohydrodynamic model of current-sheet dynamics caused by anomalous electrical resistivity as a result of small-scale plasma turbulence is proposed. The anomalous resistivity is assumed to be proportional to the square of the gradient of the magnetic pressure, as may be valid, for instance, in the case of lower-hybrid-drift turbulence. The initial resistivity pulse is prescribed. Then the temporal and spatial evolution of the magnetic and electric fields, plasma density, pressure, convection and resistivity are considered. The motion of the induced electric field is discussed as an indicator of the plasma disturbances. The results, obtained using much improved numerical methods, show a magnetic field evolution with X-line formation and plasma acceleration. Besides, in the current sheet, three types of magnetohydrodynamic waves occur: fast magnetoacoustic waves of compression and rarefaction as well as slow magnetoacoustic waves.
Sulphur, a macronutrient essential for plant growth, is among the most versatile elements in living organisms. Unfortunately, little is known about the regulation of sulphate uptake and assimilation in plants. Identification of sulphate signalling processes will make it possible to control sulphate acquisition and assimilation and may prove useful in the future to improve sulphur-use efficiency in agriculture. Many of the genes involved in sulphate metabolism are regulated at the transcriptional level by the products of other genes, called transcription factors (TFs). Several published experiments revealed TF genes that respond to sulphate deprivation, but none of these has so far been characterized functionally. Thus, we aimed at identifying and characterising transcription factors that control sulphate metabolism in the model plant Arabidopsis thaliana. To achieve that goal we postulated that factors regulating Arabidopsis responses to inorganic sulphate deficiency change their transcriptional levels under sulphur-limited conditions. By comparing TF transcript profiles from plants grown on different sulphate regimes, we identified TF genes that may specifically induce or repress changes in the expression of genes that allow plants to adapt to changes in sulphate availability. Candidate genes obtained from this screening were tested by reverse genetics approaches. Transgenic plants constitutively overproducing selected TF genes and mutant plants lacking functional selected TF genes (knock-out) were used. By comparing metabolite and transcript profiles from transgenic and wild-type plants, we aimed at confirming the role of selected AP2 TF candidate genes in plant adaptation to sulphur unavailability.
After preliminary characterisation of the WRKY24 and MYB93 TF genes, we postulate that these factors are involved in a complex multifactorial regulatory network, in which WRKY24 and MYB93 would act as superior factors regulating other transcription factors directly involved in the regulation of S-metabolism genes. Results obtained for plants overproducing the TOE1 and TOE2 TF genes suggest that these factors may be involved in a mechanism that promotes the synthesis of an essential amino acid, methionine, over the synthesis of another amino acid, cysteine. Thus, the TOE1 and TOE2 genes might be part of the transcriptional regulation of methionine synthesis. Approaches creating genetically manipulated plants may produce plant phenotypes of immediate biotechnological interest, such as plants with an increased content of sulphate or of sulphur-containing amino acids, or plants better adapted to sulphate unavailability.
Our dynamic Sun manifests its activity in different phenomena: from the 11-year cyclic sunspot pattern to the unpredictable and violent explosions in the case of solar flares. During flares, a huge amount of the stored magnetic energy is suddenly released, and a substantial part of this energy is carried by energetic electrons, considered to be the source of the nonthermal radio and X-ray radiation. One of the most important and still open questions in solar physics is how the electrons are accelerated up to high energies within the short time scales observed in the radio emission. Because the acceleration site is extremely small in spatial extent as well (compared to the solar radius), the electron acceleration is regarded as a local process. The aim of the dissertation is the search for localized wave structures in the solar corona that are able to accelerate electrons, together with the theoretical and numerical description of the conditions and requirements for this process. Two models of electron acceleration in the solar corona are proposed in the dissertation: I. Electron acceleration due to the interaction of a solar jet with the background coronal plasma (the jet-plasma interaction). A jet is formed when the newly reconnected and highly curved magnetic field lines relax by shooting plasma away from the reconnection site. Such jets, as observed in soft X-rays with the Yohkoh satellite, are spatially and temporally associated with beams of nonthermal electrons (in terms of the so-called type III metric radio bursts) propagating through the corona. A model that attempts to explain these observational facts is developed here. Initially, the interaction of such jets with the background plasma leads to an (ion-acoustic) instability associated with electrostatic fluctuations growing in time for a certain range of the jet's initial velocity.
During this process, any test electron that happens to experience this electrostatic wave field is drawn to co-move with the wave, gaining energy from it. When the jet speed is greater or lower than that required by the instability range, such wave excitation cannot be sustained and the process of electron energization (acceleration and/or heating) ceases. Hence, the electrons can propagate further in the corona and be detected as a type III radio burst, for example. II. Electron acceleration due to attached whistler waves in the upstream region of coronal shocks (the electron-whistler-shock interaction). Coronal shocks are also able to accelerate electrons, as evidenced by the so-called type II metric radio bursts (the radio signature of a shock wave in the corona). From in-situ observations in space, e.g., at shocks related to co-rotating interaction regions, it is known that nonthermal electrons are produced preferably at shocks with attached whistler wave packets in their upstream regions. Motivated by these observations, and assuming that the physical processes at shocks are the same in the corona as in the interplanetary medium, a new model of electron acceleration at coronal shocks is presented in the dissertation, in which the electrons are accelerated by their interaction with such whistlers. The protons inflowing toward the shock are reflected there, nearly conserving their magnetic moment, so that they gain a substantial velocity in the case of a quasi-perpendicular shock geometry, i.e., the angle between the shock normal and the upstream magnetic field is in the range 50-80 degrees. The so-accelerated protons are able to excite whistler waves in a certain frequency range in the upstream region. When these whistlers (comprising the localized wave structure in this case) are formed, only the incoming electrons are now able to interact resonantly with them.
But only a part of these electrons fulfill the electron--whistler wave resonance condition. Due to this resonant interaction with the whistlers, the electrons are accelerated in the electric and magnetic wave field within just a few whistler periods. While gaining energy from the whistler wave field, the electrons reach the shock front and, subsequently, a major part of them are reflected back into the upstream region, since the shock, accompanied by a jump of the magnetic field, acts as a magnetic mirror. Co-moving with the whistlers now, the reflected electrons are out of resonance and hence can propagate undisturbed into the far upstream region, where they are detected in terms of type II metric radio bursts. In summary, in both cases -- at jets outflowing from the magnetic reconnection site and at shock waves in the corona -- the kinetic energy of protons is transferred to the electrons by the action of localized wave structures.
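For reference, the electron--whistler resonance condition invoked above is conventionally the Doppler-shifted cyclotron resonance (the standard plasma-physics form; the dissertation may employ a more specialized variant):

```latex
\omega - k_\parallel v_\parallel = \frac{n\,\Omega_e}{\gamma},
\qquad n = 0, \pm 1, \pm 2, \ldots
```

Here \(\omega\) and \(k_\parallel\) are the whistler frequency and parallel wavenumber, \(v_\parallel\) the electron velocity along the magnetic field, \(\Omega_e\) the electron gyrofrequency and \(\gamma\) the Lorentz factor. Only electrons whose parallel velocity satisfies this condition for some harmonic number \(n\) can exchange energy efficiently with the wave, which is why only a part of the incoming electrons are accelerated.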
The central melanin-concentrating hormone (MCH) system has been intensively studied for its involvement in the regulation of feeding behaviour and body weight. The importance of the neuropeptide MCH in the control of energy balance has been underlined by MCH knock-out and melanin-concentrating hormone receptor subtype 1 (MCHR-1) knock-out animals. The anorectic and anti-obesity effects of selective MCHR-1 antagonists have confirmed the notion that pharmacological blockade of MCHR-1 is a potential therapeutic approach for obesity. The first aim of this work is to study the neurochemical “equipment” of MCHR-1-immunoreactive neurons by double-labelling immunohistochemistry within the rat hypothalamus. Of special interest is the neuroanatomical identification of other hypothalamic neuropeptides that are co-distributed with MCHR-1. A second part of this study deals with the examination of neuronal activation patterns after pharmacological or physiological, feeding-related stimuli and was introduced to further the understanding of central regulatory mechanisms of the MCH system. In the first part of this work, I wanted to neurochemically characterize MCHR-1-immunoreactive neurons in the rat hypothalamus for colocalisation with neuropeptides of interest. Therefore I performed an immunohistochemical colocalisation study using a specific antibody against MCHR-1 in combination with antibodies against hypothalamic neuropeptides. I showed that MCHR-1 immunoreactivity (IR) was co-localised with orexin A in the lateral hypothalamus, and with adrenocorticotropic hormone and neuropeptide Y in the arcuate nucleus. Additionally, MCHR-1 IR was co-localised with the neuropeptides vasopressin and oxytocin in magnocellular neurons of the supraoptic and paraventricular hypothalamic nuclei, and with corticotrophin-releasing hormone in the parvocellular division of the paraventricular hypothalamic nucleus. 
Moreover, for the first time MCHR-1 immunoreactivity was found in both the adenohypophyseal and the neurohypophyseal part of the rat pituitary. These results provide the neurochemical basis for previously described potential physiological actions of MCH at its target receptor. In particular, the MCHR-1 may be involved not only in food intake regulation, but also in other physiological actions such as fluid regulation, reproduction and the stress response, possibly through the neuropeptides examined here. Central activation patterns induced by pharmacological or physiological stimulation can be mapped using c-Fos immunohistochemistry. In the first experimental design, central administration (icv) of MCH into the rat brain resulted in an acute and significant increase of food and water intake, but this treatment did not induce a specific c-Fos induction pattern in hypothalamic nuclei. In contrast, sub-chronic application of an MCHR-1 antagonist promoted a significant decrease in food and water intake during an eight-day treatment period. A qualitative analysis of c-Fos immunohistochemistry of sections derived from MCHR-1 antagonist-treated animals showed a specific neuronal activation in the paraventricular nucleus, the supraoptic nucleus and the dorsomedial hypothalamus. These results could be substantiated by quantitative evaluation with an automated, software-supported analysis of the c-Fos signal. Additionally, I examined the activation pattern of rats in a restricted feeding schedule (RFS) to identify pathways involved in hunger and satiety. Animals were trained for 9 days to feed during a three-hour period. On the last day, food-restricted (FR) animals were again allowed to feed for the three hours, while food-deprived (FD) animals did not receive food. Mapping of neuronal activation showed a clear difference between starved (FD) and satiated (FR) rats. 
FD animals showed significant induction of c-Fos in forebrain regions, several hypothalamic nuclei and the amygdaloid thalamus, whereas FR animals showed induction in the supraoptic nucleus and the paraventricular nucleus of the hypothalamus, and in the nucleus of the solitary tract. In the lateral hypothalamus of FD rats, c-Fos IR showed strong colocalisation with orexin A, but no co-staining for MCH immunoreactivity. However, a large number of c-Fos IR neurons within activated regions of FD and FR animals were co-localised with MCHR-1 within selected regions. To conclude, the experimental set-up of scheduled feeding can be used to induce a specific hunger or satiety activation pattern within the rat brain. My results show a differential activation of MCH neurons by hunger signals and, furthermore, demonstrate that MCHR-1-expressing neurons may be essential parts of the downstream processing of physiological feeding/hunger stimuli. In the final part of my work, the relevance of the studies presented here is discussed with respect to the possible introduction of MCHR-1 antagonists as drug candidates for the treatment of obesity.
Microsaccades are an important component of the small eye movements that constitute fixation, the basis of visual perception. The specific function of microsaccades has been a long-standing research problem. Only recently, conclusive evidence emerged, showing that microsaccades aid both visual perception and oculomotor control. The main goal of this thesis was to improve our understanding of the implementation of microsaccade generation within the circuitry of saccade control, an unsolved issue in oculomotor research. We make a case for a model according to which microsaccades and saccades result from mutually dependent motor plans, competing for expression. The model consists of an activation field, coding for fixation at its center and for saccades at peripheral locations; saccade amplitude increases with eccentricity. Activity during fixation spreads to slightly peripheral locations in the field and, thus, may result in the generation of microsaccades. Inhibition of remote and excitation of neighbouring locations govern the dynamics of the field, resulting in a strong competition between fixation and saccade generation. We propose that this common-field model of microsaccade and saccade generation finds a neurophysiological counterpart in the motor map of the superior colliculus (SC), a key brainstem structure involved in the generation of saccades. In a series of five behavioral experiments, we tested implications of the model. Predictions were derived concerning (1) the behavior of microsaccades in a given task (microsaccade rate, amplitude, and direction), (2) the interactions of microsaccades and subsequent saccades, and (3) the relationship between microsaccadic behavior and neurophysiological processes at the level of the SC. The results yielded strong support for the model at all three levels of analysis, suggesting that microsaccade statistics are indicative of the state of the fixation-related part of the SC motor map.
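The common-field dynamics described above (fixation input at the field centre, excitation of neighbouring locations, inhibition of remote ones) can be sketched numerically. The following is a minimal, hypothetical illustration of such an activation field; the kernel shape, the sigmoidal output function and all parameter values are assumptions chosen for demonstration, not the model's actual settings.

```python
import numpy as np

def simulate_field(n=101, steps=500, dt=0.1, seed=0):
    """Toy 1-D activation field: fixation-related input at the centre,
    local excitation and remote inhibition (Mexican-hat kernel).
    All parameters are illustrative assumptions."""
    x = np.linspace(-1.0, 1.0, n)                 # field position; 0 = fixation
    dx = x[1] - x[0]
    d = x[:, None] - x[None, :]
    # Excite neighbouring sites, inhibit remote ones:
    kernel = 1.5 * np.exp(-d**2 / 0.02) - 0.8 * np.exp(-d**2 / 0.5)
    stim = np.exp(-x**2 / 0.01)                   # fixation-related input
    u = np.zeros(n)                               # field activation
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        rate = 1.0 / (1.0 + np.exp(-5.0 * u))     # sigmoidal output rate
        du = -u + stim + (kernel @ rate) * dx + rng.normal(0.0, 0.05, n)
        u += dt * du
    return x, u

x, u = simulate_field()
# Activity is strongest at the centre (fixation) and spreads to slightly
# peripheral field sites; in the model, threshold crossings at such sites
# correspond to microsaccades, while remote (large-saccade) sites compete
# with fixation through the inhibitory part of the kernel.
```

The competitive dynamics (fixation vs. saccade plans) arise entirely from the interaction kernel, which is the design choice the model shares with dynamic-field accounts of the SC motor map.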
The development of rural areas concerning food security, sustainability and social-economic stability is a key issue for the globalized community. Regarding the current state of climatic change, especially semi-arid regions influenced by monsoon or El Niño are prone to extreme weather events. Droughts, flooding, erosion, degradation of soils and water quality, and desertification are some of the common impacts. The state of the art in hydrologic environmental modeling generally operates under a reductionist paradigm (Sivapalan 2005). Even though an enormous quantity of process-oriented models exists, we fail to adequately reproduce complexly interacting processes at their effective scale in the space-time continuum, as they are described through deterministic small-scale process theories (e.g. Beven 2002). Moreover, large numbers of parameters - with partly doubtful physical expression - and input data are needed. In contrast, most soft information about patterns and organizing principles cannot be employed (Seibert and McDonnell 2002). For an analysis of possible strategies towards integrated hydrologic modeling as decision support on the one hand, and towards sustainable land use development on the other, the 512 km2 catchment of the Mod river in Jhabua, Madhya Pradesh, India has been chosen. It is characterized by a setting of problems common to peripheral rural semi-arid human-eco-systems, with intensive agriculture, deforestation, droughts and general hardship for the people. Scarce data and missing gauges add to the requirements of data acquisition and process description. The study at hand presents a methodical framework to combine field-scale data analysis and remote sensing for the setup of a database favouring plausibility over strict data accuracy. The catena-based hydrologic model WASA (Güntner 2002) employs this database. It is expanded by a routine for crop development simulation after the de Wit approach (e.g. in Bouman et al. 1996). 
For its application as a decision support system, an agent-based land use algorithm is developed which decides about the cropping on the basis of site specifications and certain constraints (like maximum profit or best local adaptation). The new model is employed to analyze (some) land use strategies. Not anticipated and a priori defined scenarios, but the interactions within the system account for the realization of the model. This study points out possible approaches to enhance the situation in the catchment. It also addresses central questions on ways towards adequate integrated hydrological modeling on the catchment scale under ungauged conditions, and on how to overcome current paradigms.
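The agent's cropping decision under a maximum-profit constraint can be illustrated with a toy rule. The crop attributes, the water constraint and the function name below are hypothetical, chosen only to show the shape of such a decision algorithm, not the actual WASA extension.

```python
def choose_crop(site, crops):
    """Hypothetical agent decision rule: among the crops feasible
    at a site (here, a simple water constraint), pick the one that
    maximizes expected profit. Attributes are illustrative only."""
    feasible = [c for c in crops if c["water_need"] <= site["water"]]
    if not feasible:
        return None           # fallow: no crop satisfies the constraints
    return max(feasible, key=lambda c: c["profit"])

crops = [
    {"name": "wheat",   "water_need": 450, "profit": 120},
    {"name": "cotton",  "water_need": 700, "profit": 300},
    {"name": "sorghum", "water_need": 300, "profit": 80},
]
site = {"water": 500}
# Cotton is infeasible at this site (too water-demanding),
# so the agent picks the most profitable feasible crop.
best = choose_crop(site, crops)
```

Replacing the `key` function (e.g. by a "best local adaptation" score) would implement the alternative constraint mentioned above without changing the algorithm's structure.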
A water quality model for shallow river-lake systems and its application in river basin management
(2007)
This work documents the development and application of a new model for simulating mass transport and turnover in rivers and shallow lakes. The simulation tool called 'TRAM' is intended to complement mesoscale eco-hydrological catchment models in studies on river basin management. TRAM aims at describing the water quality of individual water bodies, using problem- and scale-adequate approaches for representing their hydrological and ecological characteristics. The need for such flexible water quality analysis and prediction tools is expected to further increase during the implementation of the European Water Framework Directive (WFD) as well as in the context of climate change research. The developed simulation tool consists of a transport and a reaction module with the latter being highly flexible with respect to the description of turnover processes in the aquatic environment. Therefore, simulation approaches of different complexity can easily be tested and model formulations can be chosen in consideration of the problem at hand, knowledge of process functioning, and data availability. Consequently, TRAM is suitable for both heavily simplified engineering applications as well as scientific ecosystem studies involving a large number of state variables, interactions, and boundary conditions. TRAM can easily be linked to catchment models off-line and it requires the use of external hydrodynamic simulation software. Parametrization of the model and visualization of simulation results are facilitated by the use of geographical information systems as well as specific pre- and post-processors. TRAM has been developed within the research project 'Management Options for the Havel River Basin' funded by the German Ministry of Education and Research. The project focused on the analysis of different options for reducing the nutrient load of surface waters. 
It was intended to support the implementation of the WFD in the lowland catchment of the Havel River located in North-East Germany. Within the above-mentioned study TRAM was applied with two goals in mind. In a first step, the model was used for identifying the magnitude as well as spatial and temporal patterns of nitrogen retention and sediment phosphorus release in a 100 km stretch of the highly eutrophic Lower Havel River. From the system analysis, strongly simplified conceptual approaches for modeling N-retention and P-remobilization in the studied river-lake system were obtained. In a second step, the impact of reduced external nutrient loading on the nitrogen and phosphorus concentrations of the Havel River was simulated (scenario analysis) taking into account internal retention/release. The boundary conditions for the scenario analysis such as runoff and nutrient emissions from river basins were computed by project partners using the catchment models SWIM and ArcEGMO-Urban. Based on the output of TRAM, the considered options of emission control could finally be evaluated using a site-specific assessment scale which is compatible with the requirements of the WFD. Uncertainties in the model predictions were also examined. According to simulation results, the target of the WFD -- with respect to total phosphorus concentrations in the Lower Havel River -- could be achieved in the medium-term, if the full potential for reducing point and non-point emissions was tapped. Furthermore, model results suggest that internal phosphorus loading will ease off noticeably until 2015 due to a declining pool of sedimentary mobile phosphate. Mass balance calculations revealed that the lakes of the Lower Havel River are an important nitrogen sink. This natural retention effect contributes significantly to the efforts aimed at reducing the river's nitrogen load. 
If a sustainable improvement of the river system's water quality is to be achieved, enhanced measures to further reduce the immissions of both phosphorus and nitrogen are required.
There is already strong evidence that temperate lakes have been highly vulnerable to human-induced climate warming during the last century. Hitherto, climate impact studies have mainly focussed on the impacts of the recent long-term warming in winter and spring, and little is known about the influence of climate warming on temperate lakes in summer. In the present thesis, I studied several aspects that may be strongly involved in determining the response of a lake to climate warming in summer. Thereby I have focussed on climate-induced impacts on the thermal characteristics and on the phenology and abundance of summer plankton in a shallow polymictic lake (Müggelsee, Germany). First, the influence of climate warming on the phenology and abundance of the lake plankton was investigated across seasons. Fast-growing spring phytoplankton and zooplankton (Daphnia) advanced largely synchronously, whereas long-term changes in the phenology of slow-growing summer zooplankton were clearly species-specific and not synchronised. The phenology and/or abundance of several summer copepod species changed according to their individual thermal requirements at decisive developmental stages such as emergence from diapause in spring. The study emphasises that not only the degree of warming, but also its timing within the annual cycle is of great ecological importance. To analyse the impact of climate change on the thermal characteristics of the lake, I examined the long-term development of the daily epilimnetic temperature extrema during summer. The study demonstrated for the first time for lakes that the daily epilimnetic minima (during nighttime) have increased more rapidly than the daily epilimnetic maxima (during daytime), resulting in a distinct decrease in the daily epilimnetic temperature range. This day-night asymmetry in epilimnetic temperature was likely caused by an increased nighttime emission of long-wave radiation from the atmosphere. 
This underlines that not only increases in air temperature, but also changes in other meteorological variables such as wind speed, relative humidity and cloud cover may play an important role in determining lake temperature under further climate change. Furthermore, a short-term analysis of the mixing regime of the polymictic lake was conducted to examine the frequency and duration of stratification events and their impacts on dissolved oxygen, dissolved nutrients and summer phytoplankton. Even during the longest stratification events (heatwaves in 2003 and 2006), the thermal characteristics of the lake differed from those typically found in shallow dimictic lakes, which exhibit a continuous stratification during summer. In particular, hypolimnetic temperatures were higher, favouring the depletion of oxygen and the accumulation of dissolved nutrients in the hypolimnion. Since thermal stratification will very likely be amplified in the future, I conclude that polymictic lakes will be very vulnerable to alterations in the thermal regime under projections of further climate change during summer. Finally, a long-term case study on the long- and short-term changes in the development of the planktonic larvae of the freshwater mussel Dreissena polymorpha was performed to analyse the impacts of simultaneous changes in the thermal and trophic regimes of the lake. Both climate warming and the decrease in external nutrient load were important in determining the abundance of the pelagic larvae, affecting different features of the life history of this species throughout the warm season. The long-term increase in the abundance and length of larvae was related to the decrease in external nutrient loading and the change in phytoplankton composition. 
However, the recent heatwaves in 2003 and 2006 have offset this positive effect on larval abundance, due to unfavourably low oxygen concentrations that resulted from extremely long stratification events, mimicking the effects of nutrient enrichment. Climate warming may thus induce counteracting effects in productive shallow lakes that have undergone lake restoration through a decrease in external nutrient loading. I conclude that not only the nature of climate change -- the timing of climate warming throughout the seasons and the occurrence of climatic extremes such as heatwaves -- but also site-specific lake conditions such as the thermal mixing regime and the trophic state are crucial factors governing the impacts of climate warming on internal lake processes during summer. Consequently, further climate impact research on lake functioning should focus on how the different lake types respond to the complex environmental forcing in summer, to allow for a comprehensive understanding of human-induced environmental changes in lakes.
The solar tachocline is a thin transition layer between the uniformly rotating solar radiative zone and the solar convection zone, which has a mainly latitudinal differential rotation profile. This layer has a thickness of less than 0.05 solar radii and is subject to extreme radial as well as latitudinal shear. Helioseismological estimates place this layer at roughly 0.7 solar radii. The tachocline mostly resides in the sub-adiabatic, non-turbulent radiative interior, except for a small overlap with the convection zone at the top. Many proposed dynamo mechanisms involve strong toroidal magnetic fields in this transition region. The exact mechanism behind the formation of such a thin layer is still disputed. A very plausible mechanism is one involving a weak, relic poloidal magnetic field trapped inside the radiative zone, which is responsible for expelling differential rotation outwards. This was first proposed by Rüdiger and Kitchatinov (1997). The present work develops this idea with numerical simulations including additional effects like meridional circulation. It is shown that a relic field of 1 Gauss or smaller would be sufficient to explain the observed thickness of the tachocline. The stability of the solar tachocline is addressed as the next part of the problem. It is shown that the tachocline is stable up to a differential rotation of 52% in the absence of magnetic fields. This is a new finding compared to earlier two-dimensional models, which estimated the solar differential rotation (about 28%) to be marginally stable or even unstable. The changed stability limit is attributed to the changed stability criterion of the 3-dimensional model, which also involves radial gradients of the angular velocity. In the presence of toroidal magnetic field belts, the lowest non-axisymmetric mode is shown to be the most unstable one for the radiative part of the tachocline. It is estimated that the tachocline would become unstable for toroidal fields exceeding about 100 Gauss. 
With both formation and stability questions satisfactorily addressed, this work presents the most comprehensive analysis of the physical processes in the solar tachocline to date.
Our Solar system contains a large amount of dust, carrying valuable information about our close cosmic environment. If created in a planet's system, the particles stay predominantly in its vicinity and can form extended dust envelopes, tori or rings around it. A fascinating example of these complexes are the Saturnian rings, containing a wide range of particle sizes, from house-sized objects in the main rings down to micron-sized grains constituting the E ring. Other examples are ring systems in general, containing a large fraction of dust, or the putative dust tori surrounding the planet Mars. The dynamical 'life' of such circumplanetary dust populations is the main subject of our study. In this thesis a general model of creation, dynamics and 'death' of circumplanetary dust is developed. Endogenic and exogenic processes creating dust at atmosphereless bodies are presented. Then, we describe the main forces influencing the particle dynamics and study dynamical responses induced by stochastic fluctuations. In order to estimate the properties of the steady-state population of the considered dust complex, the grain mean lifetime is determined as the result of a balance of dust creation, 'life' and loss mechanisms. The latter strongly depends on the surrounding environment, the particle properties and its dynamical history. The presented model can be readily applied to study any circumplanetary dust complex. As an example we study the dynamics of two dust populations in the Solar system. First we explore the dynamics of particles ejected from the Martian moon Deimos by impacts of micrometeoroids, which should form a putative torus along the orbit of the moon. The long-term influence of the indirect component of radiation pressure, the Poynting-Robertson drag, gives rise to a significant change of the torus geometry. 
Furthermore, the action of radiation pressure on rotating non-spherical dust particles results in stochastic dispersion of an initially confined ensemble of particles, which causes a decrease of the particle number density and of the corresponding optical depth of the torus. Second, we investigate the dust dynamics in the vicinity of the Saturnian moon Enceladus. During three flybys of Enceladus by the Cassini spacecraft, the on-board dust detector registered a micron-sized dust population around the moon. Surprisingly, the peak of the measured impact rate occurred 1 minute before the closest approach of the spacecraft to the moon. This asymmetry of the measured rate can be associated with locally enhanced dust production near Enceladus' south pole. Other Cassini instruments also detected evidence of geophysical activity in the south polar region of the moon: high surface temperature and extended plumes of gas and dust leaving the surface. Comparison of our results with these in situ measurements reveals that the south polar ejecta may provide the dominant source of particles sustaining Saturn's E ring.
Development and application of novel genetic transformation technologies in maize (Zea mays L.)
(2007)
Plant genetic engineering approaches are of pivotal importance to both basic and applied research. However, the rapid commercialization of genetically engineered crops, especially maize, raises several ecological and environmental concerns, largely related to transgene flow via pollination. In most crops, the plastid genome is inherited uniparentally in a maternal manner. Consequently, a trait introduced into the plastid genome would not be transferred to the sexually compatible relatives of the crop via pollination. Thus, besides its several other advantages, plastid transformation provides transgene containment and is therefore an environmentally friendly approach for the genetic engineering of crop plants. Reliable in vitro regeneration systems allowing repeated rounds of regeneration are of utmost importance for the development of plastid transformation technologies in higher plants. While being the world's major food crops, cereals are among the plants most difficult to handle in tissue culture, which severely limits genetic engineering approaches. In maize, immature zygotic embryos provide the predominantly used material for establishing regeneration-competent cell or callus cultures for genetic transformation experiments. The procedures involved are demanding, laborious and time-consuming, and depend on greenhouse facilities. In one part of this work, a novel tissue culture and plant regeneration system was developed that uses maize leaf tissue and thus is independent of zygotic embryos and greenhouse facilities. Also, protocols were established for (i) the efficient induction of regeneration-competent callus from maize leaves in the dark, (ii) inducing highly regenerable callus in the light, and (iii) the use of leaf-derived callus for the generation of stably transformed maize plants. Furthermore, several selection methods were tested for developing a plastid transformation system in maize. However, stable plastid-transformed maize plants could not yet be recovered. 
Possible explanations as well as suggestions for future attempts towards developing plastid transformation in maize are discussed. Nevertheless, these results represent a first essential step towards developing chloroplast transformation technology for maize, a method that requires multiple rounds of plant regeneration and selection to obtain genetically stable transgenic plants. In order to apply the newly developed transformation system to metabolic engineering of carotenoid biosynthesis, the daffodil phytoene synthase (PSY) gene was integrated into the maize genome. The results illustrate that expression of a recombinant PSY significantly increases carotenoid levels in leaves. The beta-carotene (pro-vitamin A) amounts in leaves of transgenic plants were increased by ~21% in comparison to the wild-type. These results represent evidence that maize has significant potential to accumulate higher amounts of carotenoids, especially beta-carotene, through transgenic expression of phytoene synthases. Finally, progress was made towards developing transformation technologies in Peperomia (Piperaceae) by establishing an efficient leaf-based regeneration system. Also, factors determining plastid size and number in Peperomia, whose species display great interspecific variation in chloroplast size and number per cell, were investigated. The results suggest that organelle size and number are regulated in a tissue-specific manner rather than in dependency on the plastid type. Investigating plastid morphology in Peperomia species with giant chloroplasts, plasmatic connections between chloroplasts (stromules) were observed under the light microscope, in the absence of tissue fixation or GFP overexpression, demonstrating the relevance of these structures in vivo. 
Furthermore, bacteria-like microorganisms were discovered within Peperomia cells, suggesting that this genus provides an interesting model not only for studying plastid biology but also for investigating plant-microbe interactions.
In nature one commonly finds interacting complex oscillators which, through their coupling scheme, form small and large networks, e.g. neural networks. Surprisingly, the oscillators can synchronize while still preserving their complex behavior. Synchronization is a fundamental phenomenon in coupled nonlinear oscillators. Synchronization can appear at different levels, that is, under different constraints. The constraint can be on the trajectory amplitude, requiring the amplitudes of both oscillators to be equal and giving rise to complete synchronization. Conversely, the constraint can be on a function of the trajectory, e.g. the phase, giving rise to phase synchronization (PS). In this case, one requires the phase difference between both oscillators to remain finite for all times, while the trajectory amplitudes may be uncorrelated. The study of PS has shown its relevance to important technological problems, e.g. communication, collective behavior in neural networks, pattern formation, Parkinson's disease, epilepsy, as well as behavioral activities. It has been reported that PS mediates processes of information transmission and collective behavior in neural and active networks, and communication processes in the human brain. In this work, we have pursued a general way to analyze the onset of PS in small and large networks. Firstly, we have analyzed many phase coordinates for compact attractors. We have shown that for a broad class of attractors the PS phenomenon is invariant under the phase definition. Our method makes it possible to establish the existence of phase synchronization in coupled chaotic oscillators without having to measure the phase. This is done by observing the oscillators at special times and analyzing whether this set of points is localized. 
We have shown that this approach is fruitful for analyzing the onset of phase synchronization in chaotic attractors whose phases are not well defined, as well as in networks of non-identical spiking/bursting neurons connected by chemical synapses. Moreover, we have also related synchronization and information transmission through these conditional observations. In particular, we have found that clusters may appear inside a network. These can be used to transmit more than one piece of information, which provides multi-processing of information. Furthermore, these clusters provide multichannel communication, that is, one can integrate a large number of neurons into a single communication system, and information can arrive simultaneously at different places in the network.
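The defining criterion of PS -- a phase difference that stays bounded for all times even though the amplitudes may be uncorrelated -- can be illustrated with two detuned phase oscillators. This is a minimal Kuramoto-type sketch, not the conditional-observation method developed in the thesis; all frequencies and coupling values are illustrative.

```python
import numpy as np

def phase_difference(coupling, omega1=1.0, omega2=1.05, steps=20000, dt=0.01):
    """Integrate two coupled phase oscillators,
        dphi_i/dt = omega_i + K * sin(phi_j - phi_i),
    and return the phase difference phi1 - phi2 over time."""
    phi1 = phi2 = 0.0
    diffs = np.empty(steps)
    for t in range(steps):
        d1 = omega1 + coupling * np.sin(phi2 - phi1)
        d2 = omega2 + coupling * np.sin(phi1 - phi2)
        phi1 += dt * d1
        phi2 += dt * d2
        diffs[t] = phi1 - phi2
    return diffs

# For this pair the phases lock when 2K >= |omega1 - omega2|, i.e. K >= 0.025.
locked = phase_difference(coupling=0.1)    # PS: difference stays bounded
drifting = phase_difference(coupling=0.0)  # no PS: difference grows linearly
```

The bounded-versus-drifting behaviour of `diffs` is exactly the PS criterion stated above; for chaotic oscillators without a well-defined phase, the thesis replaces this explicit phase measurement by localized conditional observations.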
Biochemical and physiological studies of Arabidopsis thaliana Diacylglycerol Kinase 7 (AtDGK7)
(2006)
A family of diacylglycerol kinases (DGKs) phosphorylates the substrate diacylglycerol (DAG) to generate phosphatidic acid (PA). Both molecules, DAG and PA, are involved in signal transduction pathways. In the model plant Arabidopsis thaliana, seven candidate genes (named AtDGK1 to AtDGK7) code for putative DGK isoforms. Here I report the molecular cloning and characterization of AtDGK7. Biochemical, molecular and physiological experiments on AtDGK7 and its corresponding enzyme are analyzed. Expression data from Genevestigator indicate that the AtDGK7 gene is expressed in seedlings and adult Arabidopsis plants, especially in flowers. The AtDGK7 gene encodes the smallest functional DGK predicted in higher plants; in addition, it has an alternative coding sequence containing an extended AtDGK7 open reading frame, confirmed by PCR and submitted to the GenBank database (under the accession number DQ350135). The new cDNA has an extension of 439 nucleotides coding for 118 additional amino acids. The former AtDGK7 enzyme has a predicted molecular mass of ~41 kDa and its activity is affected by pH and detergents. The DGK inhibitor R59022 also affects AtDGK7 activity, although at higher concentrations (i.e. IC50 ~380 µM). The AtDGK7 enzyme also shows a Michaelis-Menten-type saturation curve for 1,2-DOG. The calculated Km and Vmax were 36 µM 1,2-DOG and 0.18 pmol PA min-1 mg of protein-1, respectively, under the assay conditions. The former AtDGK7 protein is able to phosphorylate different DAG analogs that are typically found in plants. The newly deduced AtDGK7 protein harbors the catalytic domain DGKc and the accessory domain DGKa, instead of the truncated one found in the former AtDGK7 protein (Gomez-Merino et al., 2005).
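The reported kinetic constants let the saturation curve be reproduced directly from the Michaelis-Menten equation v = Vmax·S / (Km + S); the short sketch below uses only the values stated above (the function name is mine).

```python
def dgk7_rate(s, vmax=0.18, km=36.0):
    """Michaelis-Menten rate for AtDGK7 using the reported constants:
    Km = 36 uM 1,2-DOG, Vmax = 0.18 pmol PA min^-1 (mg protein)^-1.
    's' is the 1,2-DOG concentration in uM."""
    return vmax * s / (km + s)

# By definition, the rate is half-maximal at s = Km:
half_max = dgk7_rate(36.0)   # 0.09 pmol PA min^-1 (mg protein)^-1
```

At saturating substrate the rate approaches Vmax, so the curve flattens above roughly 10×Km, which is the behaviour summarized by the "Michaelis-Menten-type saturation curve" above.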
To characterise the habitat preferences of the ring ouzel (Turdus torquatus) and the blackbird (T. merula) in Switzerland, we adopt species distribution modelling and predict the species' spatial distribution. We model on two different scales to analyse to what extent downscaling leads to a different set of predictors that best describe the realised habitat. While the models on the macroscale (grid of one square kilometre) cover the entire country, we select a set of smaller plots for modelling on the territory scale. Whereas ring ouzels occur only at altitudes above 1,000 m a.s.l., blackbirds occur from the lowlands up to the timber line. The altitudinal range overlap of the two species is up to 400 m. Although both species coexist on the macroscale, a direct niche overlap on the territory scale is rare. Small-scale differences in vegetation cover and structure seem to play a dominant role in habitat selection. On the macroscale, however, we observe a high dependency on climatic variables, mainly representing the altitudinal range and the related forest structure preferred by the two species. Applying the models to climate change scenarios, we predict a decline of suitable habitat for the ring ouzel with a simultaneous median altitudinal shift of +440 m until 2070. In contrast, the blackbird is predicted to benefit from higher temperatures and to expand its range to higher elevations.
Do institutions matter?
(2006)
Contents
1 Introduction
2 Institutions and Institutional Change
2.1 Institutions and Theoretical Concepts in Economics
2.2 Path Dependence
2.3 Inconsistency of Institutional Development
2.4 Determinants of Effectiveness
2.5 Efficiency of New Institutions
3 What is "Competition Policy"?
4 Competition Policy in Russia as an Institution
4.1 Establishment of Competition Policy as an Institution
4.2 Market Structure and Competition Policy
4.3 Measures of Competition Policy
4.3.1 Prohibition of Competition-Restrictive Agreements or Concerted Actions
4.3.2 Abuse of Dominance
4.3.3 Merger Control
4.3.4 Competition-Restrictive Actions of Administrative Bodies
4.4 Violations of the Competition Law
4.5 Problems of Russian Competition Policy
5 Which Mistakes Has Russia Made in Implementing Competition Policy?
6 Is a Lack of Effectiveness of Transplanted Institutions Inevitable?
7 Concluding Remarks
A casual look at regional unemployment rates reveals that there are vast differences, which cannot be explained by different institutional settings. Our paper attempts to trace these differences in the labor market performance back to the regions' specialization in products that are more or less advanced in their product cycle. The model we develop shows how individual profit and utility maximization endogenously yields higher employment levels in the beginning. In later phases, however, employment decreases in the presence of process innovation. Our model suggests that the only way to escape from this vicious circle is to specialize in products that are at the beginning of their "economic life". The model is based on an interaction of demand and supply side forces.
Textbook wisdom says that competition yields lower prices and higher consumer surplus than monopoly. We show in two versions of a simple location-product differentiation model, with and without endogenous choice of products, that these two results have to be qualified. In both models, more than half of the reasonable parameter values lead to higher prices under duopoly than under monopoly. If the product characteristics are exogenous to the firms, consumers may even be better off with monopoly on average.
This paper analyses the structural change in Russia during the transition from a planned to a market economy. With regard to the famous three-sector hypothesis, broad economic sectors were formed as required by this theory. Computing their shares of GNP at market prices using input-output tables, and adjusting the results for distortions generated as side effects of tax avoidance practices, yields findings that clearly reject claims that Russia is on the road to a post-industrial service economy. Instead, at least until 2001, a tendency of "primarisation" could be observed, which places Russia closer to less-developed countries.
Instability in competition
(2005)
In this paper we show that Puu (2002) does not provide a stable solution to the location game, according to his own definition of stability. If the usual two-stage game is considered, where in the first stage a location is chosen once and forever, and in the second stage prices are determined, the equilibrium proves stable for a sizeable interval of parameters, however. Even though this procedure is most common in analyzing Hotelling's location problem, it is not satisfying because it exhibits an inconsistent informational structure. The search for a better concept of stability is imperative.
This volume presents annotation guidelines that have been developed in the context of the SFB 632, a collaborative research center entitled "Information Structure: the Linguistic Means for Structuring Utterances, Sentences and Texts". An important result of the SFB 632 are the SFB corpora from more than 20 typologically different languages, which have been annotated according to the guidelines presented here. The ultimate target of the data and its annotations is to support the study of Information Structure. Information Structure involves all levels of grammar and, hence, the present guidelines cover relevant aspects of all these levels: - Phonology - Morphology - Syntax - Semantics - Information Structure These levels are dealt with in individual chapters, containing tagset declarations with obligatory and optional tags, detailed annotation instructions, and illustrative examples. The volume also presents an evaluation of inter-annotator agreement of Syntax and Information Structural annotation.
Table of contents
1 Introduction
2 The concept of sustainability
2.1 Ecological sustainability
2.2 Social sustainability
2.3 Economic sustainability
2.4 The sustainability strategy of the German government
3 Effects of energy use on the environment
4 Requirements of the SSGG for energy policy
4.1 Ecological implications of the SSGG
4.2 Social and economic requirements of the SSGG
5 The German Renewable Energies Act
5.1 Objectives
5.2 Design and mechanisms
5.3 Feed-in tariffs
6 Does the EEG meet the sustainability requirements of the SSGG?
6.1 Management rules
6.2 Social sustainability
6.3 Economic sustainability
6.4 Development tendencies
7 Possible amendments for more sustainability
7.1 Changing the promotional system
7.2 A European regulation
Social segregation in cities takes place where different household groups exist and when, according to Schelling, their location choice either minimizes the number of differing households in their neighborhood or maximizes the share of their own group. In this contribution, an evolutionary simulation based on a monocentric city model with externalities among households is used to discuss the spatial segregation patterns of four groups. The resulting complex spatial patterns can be shown as graphic animations. They can serve as the initial situation for analysing the effects of rent control on segregation.
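The Schelling-type location choice described above can be illustrated with a minimal grid simulation. This is a generic textbook sketch under our own assumptions (two agent types, a tolerance threshold, relocation to random empty cells), not the monocentric model with externalities used in the paper:

```python
import random

# Minimal Schelling-style step: an agent relocates to a random empty cell
# whenever the share of like-typed neighbors in its 8-cell neighborhood
# (with wrap-around borders) falls below the tolerance threshold.
def step(grid, n, tol=0.5):
    empty = [(i, j) for i in range(n) for j in range(n) if grid[i][j] == 0]
    moved = 0
    for i in range(n):
        for j in range(n):
            a = grid[i][j]
            if a == 0:
                continue
            nbrs = [grid[(i + di) % n][(j + dj) % n]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)]
            occupied = [x for x in nbrs if x != 0]
            if occupied and sum(x == a for x in occupied) / len(occupied) < tol and empty:
                # unhappy agent moves to a random empty cell
                ni, nj = empty.pop(random.randrange(len(empty)))
                grid[ni][nj] = a
                grid[i][j] = 0
                empty.append((i, j))
                moved += 1
    return moved
```

Iterating `step` until no agent moves typically produces the segregated clusters Schelling predicted, even for mild tolerance thresholds.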
Optimal spatial patterns of two, three and four segregated household groups in a monocentric city
(2004)
Usually, in monocentric city models, the spatial patterns of segregated household groups are assumed to be ring-shaped, whereas in the early 1930s Hoyt showed that wedge-shaped areas empirically predominate. This contribution presents a monocentric city model with different household groups generating positive externalities within the groups. First, border length is established as a criterion of optimality. Secondly, it is shown that mixed patterns of concentric and wedge-shaped areas represent multiple equilibria if more than two groups of households are considered. The welfare-optimal segregated pattern depends on the relative purchasing power of the different household groups.
Usually, in monocentric city models, the spatial patterns of segregated ethnic groups are assumed to be ring-shaped, whereas in the 1930s Hoyt showed that empirically wedge-shaped areas predominate. In contrast to Rose-Ackerman's discussion of the influence, within a ring-shaped pattern, of the aversion that different households have in the context of racism, Yinger showed that, depending on the population mix, a wedge-shaped pattern may arise if it is border length which causes the spatial pattern. In this contribution, a simulation based on a monocentric city model with two or more different household groups is used to derive spatial patterns. Wedge-shaped segregation is shown to be the result of positive externalities among similar households. Differences between households only lead to ring-shaped patterns if the effect of a city center on spatial structure dominates neighborhood effects. If more than two groups of households are considered, mixed patterns of concentric and wedge-shaped areas arise.
Economy vs. history
(2004)
The aim of this study is to examine in which cases economic forces or historical singularities prevail in the determination of the long-run distribution of firms. We develop a relatively general model of heterogenous firms' location choice in discrete space. The main force towards an agglomerated structure is the reduction of transaction costs for consumers if firms are located closely, whilst competition and transport costs work towards a more disperse structure. We then assess the importance of the initial conditions by simulating and comparing the resulting distribution of firms for identical economic parameters but varying initial settings. If the equilibrium distributions of firms are similar we conclude that economic forces have prevailed, while differences in the resulting distributions indicate that 'history' is more important. The (dis)similarity of distributions of firms is calculated by means of a measure, which exhibits a number of desirable features.
In this thesis we mainly generalize two theorems from Mackaay-Picken and Picken (2002, 2004). In the first paper, Mackaay and Picken show that there is a bijective correspondence between Deligne 2-classes $\xi \in \check{H}^2(M,\mathcal{D}^2)$ and holonomy maps from the second thin-homotopy group $\pi_2^2(M)$ to $U(1)$. In the second one, a generalization of this theorem to manifolds with boundaries is given: Picken shows that there is a bijection between Deligne 2-cocycles and a certain variant of 2-dimensional topological quantum field theories. In this thesis we show that these two theorems hold in every dimension. We consider first the holonomy case, and by using simplicial methods we can prove that the group of smooth Deligne $d$-classes is isomorphic to the group of smooth holonomy maps from the $d^{th}$ thin-homotopy group $\pi_d^d(M)$ to $U(1)$, if $M$ is $(d-1)$-connected. We contrast this with a result of Gajer (1999). Gajer showed that Deligne $d$-classes can be reconstructed by a different class of holonomy maps, which include holonomies not only along spheres but also along general $d$-manifolds in $M$. This approach does not require the manifold $M$ to be $(d-1)$-connected. We show that in the case of flat Deligne $d$-classes our result differs from Gajer's if $M$ is not $(d-1)$-connected but only $(d-2)$-connected. Stiefel manifolds have this property, and if one applies our theorem to these and compares the result with that of Gajer's theorem, it is revealed that our theorem reconstructs too many Deligne classes. This means that our reconstruction theorem cannot do without the extra assumption on the manifold $M$: our reconstruction needs less information about the holonomy of $d$-manifolds in $M$, at the price of assuming $M$ to be $(d-1)$-connected.
We continue by showing that the second theorem can also be generalized: by introducing the concept of a Picken-type topological quantum field theory in arbitrary dimensions, we can show that every Deligne $d$-cocycle induces such a $d$-dimensional field theory with two special properties, namely thin-invariance and smoothness. We show that any $d$-dimensional topological quantum field theory with these two properties gives rise to a Deligne $d$-cocycle and verify that this construction is both surjective and injective, that is, both groups are isomorphic.
In recent years, many studies have discussed the effects of trade policy (tariffs, subsidies, etc.) in international trade. The results are manifold. Some authors show that trade policy has negative effects on welfare; some spatial economists demonstrate that trade policy can have positive effects on welfare. This paper considers, in a spatial economic model, the effects of trade policy pursued by both countries participating in international trade. It can be shown that trade policy of both trade partners (tariffs of one country and export subsidies of the other country) can improve world welfare in comparison with free trade.
Table of contents
1 Introduction
2 Ecological regulation and cost effectiveness
2.1 Climate policy
2.2 Promotion of renewable energies
3 Ecological regulation and security of supply
3.1 Climate policy
3.2 Promotion of renewable energies
4 The German Renewable Energies Act (EEG)
4.1 Objectives
4.2 Design and mechanisms
5 The European emissions trading system (EETS)
5.1 Objectives
5.2 Framework
6 The EEG and the EETS: trade-off between ecological objectives and cost effectiveness, innovation and security of supply?
6.1 EEG
6.2 EETS
6.3 Comparison between the approaches of the EEG and the EETS
7 Conclusions and outlook
(De)regulatory interventions frequently have unintended cross-market effects, which may or may not be desirable. We assess the effects of three policies on aggregate variables, in particular real income, from a theoretical perspective. Our results suggest that instruments acting upon wages have only a weak impact on real income, whereas the distribution of income is affected strongly. In contrast, a policy that enhances product market competition fosters real income, but also impacts strongly on union wages and the distribution of income.
The existing theoretical literature fails to explain the differences between the pay of workers who are covered by union agreements and those who are not. This study aims at closing this gap with a single general-equilibrium approach that integrates a dual labor market and a two-sector product market. Our results suggest that the so-called 'union wage gap' is largely determined by the degree of centralization of the bargains and, to a somewhat lesser extent, by the expenditure share of the unionized sector's goods.
Different habitat models were created for the White Stork (Ciconia ciconia) in the region of the former German province of East Prussia (corresponding approximately to the current Russian oblast Kaliningrad and the Polish voivodship Warmia-Masuria). Different historical data sets describing the occurrence of the White Stork in the 1930s, as well as selected variables for the description of landscape and habitat, were employed. The processing and modeling of the applied data sets was done with a geographical information system (ArcGIS) and a statistical modeling approach from the disciplines of machine learning and data mining (TreeNet by Salford Systems Ltd.). Applying historical habitat descriptors as well as data on the occurrence of the White Stork, models on two different scales were created: (i) a point-scale model applying a raster with a cell size of 1 km2 and (ii) an administrative-district-scale model based on the organization of the former province of East Prussia. The evaluation of the created models shows that the occurrence of White Stork nesting grounds in former East Prussia is for the most part defined by the variables 'forest', 'settlement area', 'pasture land' and 'proximity to coastline'. From this set of variables it can be assumed that pastures and meadows, as well as the proximity of human settlements, provide the White Stork with a good food supply and nesting opportunities; these can be seen as crucial factors in the nest-site choice of White Storks in East Prussia. Dense forest areas appear to be unsuited as nesting grounds for White Storks. The high influence of the variable 'coastline' is most likely explained by the specific landscape composition of East Prussia parallel to the coastline and is to be seen as a proximal factor for explaining the distribution of breeding White Storks. In a second step, predictions for the period of 1981 to 1993 could be made applying both scales of the models created in this study.
In doing so, a decline of potential nesting habitat was predicted on the point scale. In contrast, the predicted White Stork occurrence increases when applying the model on the administrative district scale. The difference between the two predictions lies in the application of different scales (density versus suitability as breeding ground) and partly dissimilar explanatory variables. More studies are needed to investigate this phenomenon. The model predictions for the period 1981 to 1993 could be compared to the available inventories of that period. This comparison shows that the figures predicted here were higher than the figures established by the census, which means that the models created here indicate the capacity of the habitat (the potential niche) rather than the realized population. Other factors affecting population size, e.g. breeding success or mortality, have to be investigated further. The methods presented here, applying historical data, demonstrate a feasible approach to generating habitat models and to assessing the effects of land-use changes on the White Stork. The models are the first of their kind and could be improved by means of further data on habitat structure and more exact, spatially explicit information on the location of White Stork nesting sites. In a further step, a habitat model for the present time should be created. This would allow a more precise assessment of the effects of changes in land use and relevant environmental conditions on the White Stork in the region of former East Prussia, e.g. in the light of coming landscape changes brought by the European Union (EU).
It is shown that the α effect of mean-field magnetohydrodynamics, which consists in the generation of a mean electromotive force along the mean magnetic field by the turbulently fluctuating parts of velocity and magnetic field, is equivalent to the simultaneous generation of both turbulent and mean-field magnetic helicities, the generation rates being equal in magnitude and opposite in sign. In the particular case of statistically stationary and homogeneous fluctuations this implies that the α effect can increase the energy in the mean magnetic field only under the condition that magnetic helicity is also accumulated there.
We report on bifurcation studies for the incompressible Navier-Stokes equations in two space dimensions with periodic boundary conditions and an external forcing of the Kolmogorov type. Fourier representations of velocity and pressure have been used to approximate the original partial differential equations by a finite-dimensional system of ordinary differential equations, which then has been studied by means of bifurcation-analysis techniques. A special route into chaos observed for increasing Reynolds number or strength of the imposed forcing is described. It includes several steady states, traveling waves, modulated traveling waves, periodic and torus solutions, as well as a period-doubling cascade for a torus solution. Lyapunov exponents and Kaplan-Yorke dimensions have been calculated to characterize the chaotic branch. While studying the dynamics of the system in Fourier space, we also have transformed solutions to real space and examined the relation between the different bifurcations in Fourier space and topological changes of the streamline portrait. In particular, the time-dependent solutions, such as, e.g., traveling waves, torus, and chaotic solutions, have been characterized by the associated fluid-particle motion (Lagrangian dynamics).
Projection methods based on wavelet functions combine optimal convergence rates with algorithmic efficiency. The proofs in this paper utilize the approximation properties of wavelets and results from the general theory of regularization methods. Moreover, adaptive strategies can be incorporated still leading to optimal convergence rates for the resulting algorithms. The so-called wavelet-vaguelette decompositions enable the realization of especially fast algorithms for certain operators.
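The mechanism behind wavelet-based regularization can be sketched with a one-level Haar decomposition and soft-thresholding of the detail coefficients. This is a generic textbook illustration of wavelet shrinkage, not the wavelet-vaguelette algorithm of the paper:

```python
# One-level Haar analysis/synthesis plus soft-thresholding of the details.
# Thresholding shrinks small (noise-dominated) detail coefficients while
# the coarse approximation is kept untouched.
SQRT2 = 2 ** 0.5

def haar_forward(x):
    """Split x (even length) into approximation and detail coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / SQRT2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / SQRT2 for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    """Exact inverse of haar_forward."""
    x = []
    for ai, di in zip(a, d):
        x.extend([(ai + di) / SQRT2, (ai - di) / SQRT2])
    return x

def soft_threshold(coeffs, t):
    """Shrink each coefficient towards zero by t."""
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0) for c in coeffs]

def denoise(x, t):
    a, d = haar_forward(x)
    return haar_inverse(a, soft_threshold(d, t))
```

With threshold zero the transform round-trips exactly; increasing the threshold progressively removes fine-scale detail, which is the essence of the regularizing effect.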
The bifurcation behaviour of the 3D magnetohydrodynamic equations has been studied for external forcings of varying degree of helicity. With increasing strength of the forcing a primary non-magnetic steady state loses stability to a magnetic periodic state if the helicity exceeds a threshold value and to different non-magnetic states otherwise.
The present paper is related to the problem of approximating the exact solution to the magnetohydrodynamic (MHD) equations. The behaviour of a viscous, incompressible and resistive fluid is examined over a long period of time. Contents:
1 The magnetohydrodynamic equations
2 Notations and precise functional setting of the problem
3 Existence, uniqueness and regularity results
4 Statement and proof of the main theorem
5 The approximate inertial manifold
6 Summary
We demonstrate the occurrence of regimes with singular continuous (fractal) Fourier spectra in autonomous dissipative dynamical systems. The particular example is an ODE system at the accumulation points of bifurcation sequences associated with the creation of complicated homoclinic orbits. Two different mechanisms responsible for the appearance of such spectra are proposed. In the first case, when the geometry of the attractor is symbolically represented by the Thue-Morse sequence, both the continuous-time process and its discrete Poincaré map have singular power spectra. The other mechanism is due to the logarithmic divergence of the first return times near the saddle point; here the Poincaré map possesses a discrete spectrum, while the continuous-time process displays a singular one. A method is presented for computing the multifractal characteristics of singular continuous spectra with the help of the usual Fourier analysis technique.
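The Thue-Morse construction mentioned above is easy to reproduce. The following sketch (our illustration, not code from the paper) generates the sequence and evaluates the power spectrum of its ±1 version by a direct DFT; in the infinite-length limit this spectrum is known to be singular continuous:

```python
import cmath

def thue_morse(n):
    """t_k = parity of the number of 1-bits in the binary expansion of k."""
    return [bin(k).count("1") % 2 for k in range(n)]

def power_spectrum(bits):
    """Normalized power spectrum |DFT|^2 / n of the +/-1 encoded sequence."""
    s = [1.0 if b else -1.0 for b in bits]
    n = len(s)
    return [abs(sum(s[k] * cmath.exp(-2j * cmath.pi * f * k / n)
                    for k in range(n))) ** 2 / n
            for f in range(n)]

print(thue_morse(8))  # [0, 1, 1, 0, 1, 0, 0, 1]
```

Because any finite prefix of length 2^m is balanced between 0s and 1s, the spectrum has no DC peak; the self-similar substitution rule is what produces the fractal structure of the spectrum at finer and finer frequency scales.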
An increasing number of applications requires user interfaces that facilitate the handling of large geodata sets. Using virtual 3D city models, complex geospatial information can be communicated visually in an intuitive way. Therefore, real-time visualization of virtual 3D city models represents a key functionality for interactive exploration, presentation, analysis, and manipulation of geospatial data. This thesis concentrates on the development and implementation of concepts and techniques for real-time city model visualization. It discusses rendering algorithms as well as complementary modeling concepts and interaction techniques. Particularly, the work introduces a new real-time rendering technique to handle city models of high complexity concerning texture size and number of textures. Such models are difficult to handle by current technology, primarily due to two problems:
- Limited texture memory: The amount of simultaneously usable texture data is limited by the memory of the graphics hardware.
- Limited number of textures: Using several thousand different textures simultaneously causes significant performance problems due to texture switch operations during rendering.
The multiresolution texture atlases approach, introduced in this thesis, overcomes both problems. During rendering, it permanently maintains a small set of textures that are sufficient for the current view and the screen resolution available. The efficiency of multiresolution texture atlases is evaluated in performance tests. To summarize, the results demonstrate that the following goals have been achieved:
- Real-time rendering becomes possible for 3D scenes whose amount of texture data exceeds the main memory capacity.
- Overhead due to texture switches is kept permanently low, so that the number of different textures has no significant effect on the rendering frame rate.
Furthermore, this thesis introduces two new approaches for real-time city model visualization that use textures as core visualization elements:
- An approach for visualization of thematic information.
- An approach for illustrative visualization of 3D city models.
Both techniques demonstrate that multiresolution texture atlases provide a basic functionality for the development of new applications and systems in the domain of city model visualization.
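The core idea of keeping only view-adequate texture resolutions in memory can be sketched as follows. This is a hypothetical simplification of the general mip-level selection principle; the function names and parameters are ours, not the thesis implementation:

```python
import math

def required_mip_level(texture_size, screen_size):
    """Coarsest mip level (0 = full resolution; each level halves the side
    length) whose resolution still matches the on-screen footprint of the
    textured surface, in pixels."""
    if screen_size >= texture_size:
        return 0
    return int(math.floor(math.log2(texture_size / screen_size)))

def resident_texels(texture_size, screen_size):
    """Texels that must stay resident in graphics memory for that level."""
    level = required_mip_level(texture_size, screen_size)
    side = texture_size >> level
    return side * side
```

For a 1024-texel texture seen at 128 pixels on screen, level 3 suffices, so only 128x128 texels need to be resident instead of the full 1024x1024, which is why the working set stays small regardless of total texture data.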
Our analysis is concerned with the impact of a regionalisation of unemployment insurance (UI) on workers' preferences, on firms' profits, and on efficiency. The existence and the extent of UI are endogenously derived by maximising an objective function of the state. Three different types of regionalisation are considered, which differ with respect to the area the UI objective function is related to, and with respect to the policy variable used to maximise it. It comes to light that workers are always in favour of central UI, while it depends on the type of regionalisation whether firms are better off with regional or with central UI. The same somewhat surprising result applies for efficiency.
We examine the effects of regionalising the budget of unemployment insurance (UI) on wages, employment, and on UI parameters, which, for their part, determine the agents' preferences concerning such a reform. A numerical example shows that, under reasonable assumptions, the intuition that the reform would enhance efficiency and improve the economic situation of agents from the low-unemployment region to the disadvantage of agents from the high-unemployment region is not valid in general.
This study examines how the size of trade unions relative to the labor force impacts on the desirability of different organizational forms of self-financing unemployment insurance (UI) for workers, firms, and with reference to an efficiency criterion. For this purpose, we numerically compare the outcome of a model with a uniform payroll tax to that of a model where workers pay taxes according to their systematic risk of unemployment. Our results highlight the importance of the bargaining structure for the assessment of a particular UI scheme. Most importantly, it depends on the size of the unions whether efficiency favors a uniform or a differentiated UI scheme.
The Voyager 2 photopolarimeter experiment has yielded the highest-resolution data of Saturn's rings, exhibiting a wide variety of features. The B-ring region between 105,000 km and 110,000 km distance from Saturn has been investigated. It has a high matter density and contains no significant features visible by eye. Analysis with statistical methods has led us to the detection of two significant events. These features are correlated with the inner 3:2 resonances of the F-ring shepherd satellites Pandora and Prometheus, and may be evidence of large ring particles caught in the corotation resonances.
We report on bifurcation studies for the incompressible magnetohydrodynamic equations in three space dimensions with periodic boundary conditions and a temporally constant external forcing. Fourier representations of velocity, pressure and magnetic field have been used to transform the original partial differential equations into systems of ordinary differential equations (ODE), to which then special numerical methods for the qualitative analysis of systems of ODE have been applied, supplemented by the simulative calculation of solutions for selected initial conditions. In a part of the calculations, in order to reduce the number of modes to be retained, the concept of approximate inertial manifolds has been applied. For varying (increasing from zero) strength of the imposed forcing, or varying Reynolds number, respectively, time-asymptotic states, notably stable stationary solutions, have been traced. A primary non-magnetic steady state loses, in a Hopf bifurcation, stability to a periodic state with a non-vanishing magnetic field, showing the appearance of a generic dynamo effect. From then on the magnetic field is present for all values of the forcing. The Hopf bifurcation is followed by further, symmetry-breaking, bifurcations, leading finally to chaos. We pay particular attention to kinetic and magnetic helicities. The dynamo effect is observed only if the forcing is chosen such that a mean kinetic helicity is generated; otherwise the magnetic field diffuses away, and the time-asymptotic states are non-magnetic, in accordance with traditional kinematic dynamo theory.
Since 1971, the Freudenthal Institute has developed an approach to mathematics education named Realistic Mathematics Education (RME). The philosophy of RME is based on Hans Freudenthal's concept of 'mathematics as a human activity'. Prof. Hans Freudenthal (1905-1990), a mathematician and educator, believed that 'ready-made mathematics' should not be taught in school. By contrast, he urged that students should be offered 'realistic situations' so that they can rediscover mathematics, progressing from informal to formal mathematics. Although mathematics education in Vietnam has some achievements, it still encounters several challenges. Recently, the reform of teaching methods has become an urgent task in Vietnam. It appears that Vietnamese mathematics education lacks the necessary theoretical frameworks. At first sight, the philosophy of RME is suitable for the orientation of the teaching method reform in Vietnam. However, the potential of RME for mathematics education, as well as the feasibility of applying RME to teaching mathematics, is still questionable in Vietnam. The primary aim of this dissertation is to research the possibilities of applying RME to teaching and learning mathematics in Vietnam and to answer the question "how could RME enrich Vietnamese mathematics education?". This research will emphasize teaching geometry in Vietnamese middle schools.
More specifically, the dissertation will implement the following research tasks:
• Analyzing the characteristics of Vietnamese mathematics education in the 'reformed' period (from the early 1980s to the early 2000s) and at present;
• Implementing a survey of the ideas of 152 middle school teachers from several Vietnamese provinces and cities about Vietnamese mathematics education;
• Analyzing RME, including Freudenthal's viewpoints for RME and the characteristics of RME;
• Discussing how to design RME-based lessons and how to apply these lessons to teaching and learning in Vietnam;
• Experimenting with RME-based lessons in a Vietnamese middle school;
• Analyzing the feedback from the students' worksheets and the teachers' reports, including the potential of RME-based lessons for Vietnamese middle schools and the difficulties the teachers and their students encountered with RME-based lessons;
• Discussing proposals for applying RME-based lessons to teaching and learning mathematics in Vietnam, including making suggestions for teachers who will apply these lessons to their teaching and designing courses for in-service teachers and teachers in training.
This research reveals that although teachers and students may encounter some obstacles while teaching and learning with RME-based lessons, RME could become a potential approach for mathematics education and could be effectively applied to teaching and learning mathematics in Vietnamese schools.
Mafic magmatism in the Eastern Cordillera and Putumayo Basin, Colombia : causes and consequences
(2007)
The Eastern Cordillera of Colombia is mainly composed of sedimentary rocks deposited since early Mesozoic times. Magmatic rocks are scarce; they are represented only by a few locally restricted occurrences of dykes and sills of mafic composition, presumably emplaced in the Cretaceous, and of volcanic rocks of Neogene age. This work focuses on the study of the Cretaceous magmatism with the intention of understanding the processes causing the genesis of these rocks and their significance in the regional tectonic setting of the Northern Andes. The magmatic rocks cut the Cretaceous sedimentary succession of black shales and marlstones that crops out in both flanks of the Eastern Cordillera. The studied rocks were classified as gabbros (Cáceres, Pacho, Rodrigoque), tonalites (Cáceres, La Corona), diorites and syenodiorites (La Corona), pyroxene-hornblende gabbros (Pacho), and pyroxene-hornblendites (Pajarito). The gabbroic samples are mainly composed of plagioclase, clinopyroxene, and/or green to brown hornblende, whereas the tonalitic rocks are mainly composed of plagioclase and quartz. The samples are highly variable in crystal size, from fine- to coarse-grained. Accessory minerals such as biotite, titanite and zircon are present. Some samples are characterized by moderate to strong alteration and show the presence of epidote, actinolite and chlorite. Major and trace element compositions of the rocks, as well as of the rock-forming minerals, show significant differences in the geochemical and petrological characteristics of the different localities, suggesting that this magmatism does not result from a single melting process. The wide compositional spectrum of trace elements in the intrusions is characteristic of different degrees of mantle melting and enrichment of incompatible elements. MORB- and OIB-like compositions suggest at least two different sources of magma, with tholeiitic and alkaline affinity, respectively.
Evidence of slab-derived fluids can be recognized in the western part of the basin, reflected in higher Ba/Nb and Sr/P ratios and also in the radiogenic Sr isotope ratios, possibly a consequence of metasomatism in the mantle due to processes related to a previously subducted slab. The trace element patterns indicate an extensional setting in the Cretaceous basin producing a continental rift, with continental crust being stretched until oceanic crust was generated in the last stages of this extension. Electron microprobe analyses (EMPA) of the major elements and synchrotron radiation micro-X-ray fluorescence (μ-SRXRF) analyses of the trace element composition of the early crystallized minerals of the intrusions (clinopyroxenes and amphiboles) reflect the same dual character found in the bulk-rock analyses. Despite the observed alteration of the rocks, the mineral compositions show evidence of an enriched and a relatively depleted magma source. Moreover, the trace element concentrations of clinopyroxenes and amphiboles, normalized to the whole rock, nearly follow the pattern predicted by published partition coefficients, suggesting that the alteration did not change the original trace element compositions of the investigated minerals. Sr-Nd-Pb isotope data reveal a large isotopic variation but still suggest an initial origin of the magmas in the mantle. Samples have moderately to highly radiogenic 143Nd/144Nd compositions and high 87Sr/86Sr ratios and follow a trend towards enriched mantle compositions, like the local South American Paleozoic crust. The melts experienced variable degrees of contamination by sediments, crust, and seawater. The age-corrected Pb isotope ratios fall into two separate groups of samples. This suggests that the chemical composition of the mantle below the Northern Andes has been modified by interaction with other components, resulting in a heterogeneous combination of materials of diverse origins.
Although previous K/Ar dating had shown that the magmatism took place in the Cretaceous, the large analytical errors and the altered nature of the dated minerals precluded reliable interpretation. In the present work, 40Ar/39Ar dating was carried out. The results show a prolonged history of magmatism during the Cretaceous over more than 60 Ma, from ~136 to ~74 Ma (Hauterivian to Campanian). Pre-Cretaceous rifting phases occurred in the Triassic-Jurassic in the western part of the basin and in the Paleozoic in the eastern part. These earlier rifting phases were decisive mechanisms controlling the localization and composition of the Cretaceous magmatism. Therefore, it is the structural position and not the age of the intrusions which preconditions the kind of magmatism and the degree of melting. The age differences are a consequence of the segmentation of the basin into several sub-basins whose stretching, thermal evolution and subsidence rates evolved independently. The first hypothesis formulated at the beginning of this investigation was that the Cretaceous gabbroic intrusions identified in northern Ecuador could be correlated with the intrusions described in the Eastern Cordillera. The mafic occurrences should mark the most strongly subsiding parts of the large Cretaceous basin in northern South America. For this reason, the gabbroic intrusions cutting the Cretaceous succession in the Putumayo Basin, southern Colombia, were investigated. The results of these studies were quite unexpected. The petrologic and geochemical character of the magmatic rocks indicates subduction-related magmatism. K/Ar dating of amphibole yields a Late Miocene to Pliocene age (6.1 ± 0.7 Ma) for the igneous event in the basin. Although there is no correlation between this magmatic event and the Cretaceous magmatism, the data obtained have significant tectonic and economic implications.
The emplacement of the Neogene gabbroic rocks coincides with the late Miocene/Pliocene Andean orogenic uplift as well as with a significant pulse of hydrocarbon generation and expulsion.
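The age-corrected isotope ratios mentioned above rest on a standard radiogenic-ingrowth correction. As a hedged illustration, the sketch below applies the textbook back-calculation for the Rb-Sr system with the conventional 87Rb decay constant; the sample values are invented, not taken from the thesis.

```python
# Standard age correction for initial Sr isotope ratios (Rb-Sr system):
# (87Sr/86Sr)_initial = (87Sr/86Sr)_measured - (87Rb/86Sr) * (exp(lambda*t) - 1)
import math

LAMBDA_RB87 = 1.42e-11  # conventional 87Rb decay constant, 1/yr

def initial_sr_ratio(sr_measured, rb_sr, age_ma):
    """Back-calculate the initial 87Sr/86Sr at crystallization time."""
    t_years = age_ma * 1e6  # Ma -> years
    return sr_measured - rb_sr * (math.exp(LAMBDA_RB87 * t_years) - 1.0)

# Invented values for a ~136 Ma (Hauterivian) sample:
print(round(initial_sr_ratio(0.7055, 0.25, 136.0), 5))  # -> 0.70502
```

The analogous corrections for the U-Th-Pb system use the same ingrowth logic with the respective parent/daughter ratios and decay constants.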
It is desirable to reduce the potential threats that result from the variability of nature, such as droughts or heat waves that lead to food shortage, or, at the other extreme, floods that lead to severe damage. To prevent such catastrophic events, it is necessary to understand, and to be capable of characterising, nature's variability. Typically one aims to describe the underlying dynamics of geophysical records with differential equations. There are, however, situations where this does not support the objectives, or is not feasible, e.g., when little is known about the system, or it is too complex for the model parameters to be identified. In such situations it is beneficial to regard certain influences as random, and describe them with stochastic processes. In this thesis I focus on such a description with linear stochastic processes of the FARIMA type and concentrate on the detection of long-range dependence. Long-range dependent processes show an algebraic (i.e. slow) decay of the autocorrelation function. Detecting this decay is important with respect to, e.g., trend tests and uncertainty analysis. Aiming to provide a reliable and powerful strategy for the detection of long-range dependence, I suggest a way of addressing the problem which is somewhat different from standard approaches. Commonly used methods are based either on investigating the asymptotic behaviour (e.g., log-periodogram regression), or on finding a suitable potentially long-range dependent model (e.g., FARIMA[p,d,q]) and testing the fractional difference parameter d for compatibility with zero. Here, I suggest rephrasing the problem as a model selection task, i.e. comparing the most suitable long-range dependent and the most suitable short-range dependent model.
Approaching the task this way requires a) a suitable class of long-range and short-range dependent models, along with suitable means for parameter estimation, and b) a reliable model selection strategy, capable of discriminating also between non-nested models. With the flexible FARIMA model class together with the Whittle estimator, the first requirement is fulfilled. Standard model selection strategies, e.g., the likelihood-ratio test, are frequently not powerful enough for a comparison of non-nested models. Thus, I suggest extending this strategy with a simulation-based model selection approach suitable for such a direct comparison. The approach follows the procedure of a statistical test, with the likelihood ratio as the test statistic. Its distribution is obtained via simulations using the two models under consideration. For two simple models and different parameter values, I investigate the reliability of p-value and power estimates obtained from the simulated distributions. The result turned out to depend on the model parameters. However, in many cases the estimates allow an adequate model selection to be established. An important feature of this approach is that it immediately reveals the ability or inability to discriminate between the two models under consideration. Two applications, a trend detection problem in temperature records and an uncertainty analysis for flood return level estimation, accentuate the importance of having reliable methods at hand for the detection of long-range dependence. In the case of trend detection, falsely concluding long-range dependence implies an underestimation of a trend and possibly leads to a delay of measures needed to counteract it. Ignoring long-range dependence, although present, leads to an underestimation of confidence intervals and thus to an unjustified belief in safety, as is the case for the return level uncertainty analysis.
A reliable detection of long-range dependence is thus highly relevant in practical applications. Examples related to extreme value analysis are not limited to hydrological applications. The increased uncertainty of return level estimates is a potential problem for all records from autocorrelated processes; an interesting example in this respect is the assessment of the maximum strength of wind gusts, which is important for designing wind turbines. The detection of long-range dependence is also a relevant problem in the exploration of financial market volatility. By rephrasing the detection problem as a model selection task and suggesting refined methods for model comparison, this thesis contributes to the discussion on and development of methods for the detection of long-range dependence.
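The simulation-based model selection procedure described above can be sketched with toy Gaussian models, white noise versus AR(1), standing in for the short- and long-range dependent FARIMA fits of the thesis; all function names and parameter choices below are illustrative assumptions, not the author's implementation.

```python
# Parametric-bootstrap model selection: simulate under the simpler model,
# refit both candidates, and locate the observed likelihood ratio in the
# simulated null distribution (toy models instead of FARIMA + Whittle).
import math, random

def loglik_white(x):
    # Gaussian white-noise log-likelihood with the MLE variance plugged in
    n = len(x)
    s2 = sum(v * v for v in x) / n
    return -0.5 * n * (math.log(2 * math.pi * s2) + 1)

def loglik_ar1(x):
    # Conditional Gaussian AR(1) log-likelihood with LS-estimated phi
    num = sum(x[i] * x[i - 1] for i in range(1, len(x)))
    den = sum(v * v for v in x[:-1]) or 1.0
    phi = num / den
    resid = [x[i] - phi * x[i - 1] for i in range(1, len(x))]
    return loglik_white(resid)

def lr_stat(x):
    return 2 * (loglik_ar1(x) - loglik_white(x))

def simulated_p_value(x, n_sim=200, seed=1):
    """Fraction of simulated (white-noise) likelihood ratios at least as
    extreme as the observed one."""
    rng = random.Random(seed)
    observed = lr_stat(x)
    null = [lr_stat([rng.gauss(0, 1) for _ in range(len(x))])
            for _ in range(n_sim)]
    return sum(s >= observed for s in null) / n_sim

rng = random.Random(0)
ar = [0.0]
for _ in range(500):
    ar.append(0.6 * ar[-1] + rng.gauss(0, 1))
print(simulated_p_value(ar[1:]) < 0.05)  # strong AR(1) signal -> small p-value
```

The same machinery carries over to non-nested candidates, which is the point of the thesis: the simulated distribution replaces the chi-squared asymptotics that are unavailable there.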
In modern industrialized countries, several hundred thousand people die every year from sudden cardiac death. The individual risk of sudden cardiac death cannot be defined precisely by commonly available, non-invasive diagnostic tools like Holter monitoring, highly amplified ECG and traditional linear analysis of heart rate variability (HRV). Therefore, we apply some rather unconventional methods of nonlinear dynamics to analyse the HRV. In particular, some complexity measures based on symbolic dynamics, as well as a new measure, the renormalized entropy, detect abnormalities in the HRV of several patients who had been classified into the low-risk group by traditional methods. A combination of these complexity measures with parameters in the frequency domain seems to be a promising way to obtain a more precise definition of the individual risk. These findings have to be validated on a representative number of patients.
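A symbolic-dynamics complexity measure of the general kind mentioned above can be sketched as follows; the binary increase/decrease alphabet and the word length are illustrative choices, not necessarily those used in the study.

```python
# Symbolic-dynamics sketch: map an RR-interval series to a symbol sequence
# and score its complexity by the Shannon entropy of overlapping words.
import math

def symbolize(rr):
    """1 where the RR interval increases, 0 otherwise (toy binary alphabet)."""
    return [1 if b > a else 0 for a, b in zip(rr, rr[1:])]

def word_entropy(symbols, word_len=3):
    """Shannon entropy (bits) of the distribution of overlapping words."""
    counts = {}
    for i in range(len(symbols) - word_len + 1):
        w = tuple(symbols[i:i + word_len])
        counts[w] = counts.get(w, 0) + 1
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# A strictly alternating series uses only two of the eight possible 3-words,
# so its entropy stays near 1 bit instead of the maximal 3 bits:
rr = [800, 810] * 20
print(word_entropy(symbolize(rr)) < 1.01)
```

Pathologically regular heart rhythms concentrate the word distribution on few patterns and thus score low; healthy variability spreads it out.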
We have used techniques of nonlinear dynamics to compare a special model for the reversals of the Earth's magnetic field with the observational data. Although this model is rather simple, it shows no essential difference from the data in terms of well-known characteristics such as the correlation function and the probability distribution. Applying methods of symbolic dynamics, we have found that the considered model is not able to describe the dynamical properties of the observed process. These significant differences are expressed by algorithmic complexity and Renyi information.
Two deterministic processes leading to roughening interfaces are considered. It is shown that the dynamics of linear perturbations of turbulent regimes in coupled map lattices is governed by a discrete version of the Kardar-Parisi-Zhang equation. The asymptotic scaling behavior of the perturbation field is investigated in the case of large lattices. Secondly, the dynamics of an order-disorder interface is modelled with a simple two-dimensional coupled map lattice possessing a turbulent and a laminar state. It is demonstrated that in some range of parameters the spreading of the turbulent state is accompanied by kinetic roughening of the interface.
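Kinetic roughening of the Kardar-Parisi-Zhang type can be illustrated with ballistic deposition, a standard lattice model in the KPZ universality class; it stands in here for the coupled map lattices of the abstract, and all parameters are illustrative.

```python
# Ballistic deposition on a periodic 1D substrate: each particle falls on a
# random column and sticks at the first contact, so the interface width
# (RMS height fluctuation) grows with deposition time.
import random

def ballistic_deposition(width=200, n_particles=20000, seed=2):
    rng = random.Random(seed)
    h = [0] * width
    widths = []
    for t in range(n_particles):
        i = rng.randrange(width)
        left = h[(i - 1) % width]
        right = h[(i + 1) % width]
        h[i] = max(h[i] + 1, left, right)  # stick to column top or a neighbour
        if t in (1000, n_particles - 1):
            mean = sum(h) / width
            widths.append((sum((x - mean) ** 2 for x in h) / width) ** 0.5)
    return widths

w_early, w_late = ballistic_deposition()
print(w_late > w_early)  # the interface roughens as deposition proceeds
```

In the growth regime the width follows the KPZ scaling law w ~ t^(1/3) in one dimension before saturating at a system-size-dependent value.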
Strange nonchaotic attractors typically appear in quasiperiodically driven nonlinear systems. Two methods for their characterization are proposed. The first is based on a bifurcation analysis of the systems resulting from periodic approximations of the quasiperiodic forcing. Secondly, we propose to characterize their strangeness by calculating a phase sensitivity exponent, which measures the sensitivity with respect to changes of the phase of the external force. It is shown that phase sensitivity appears if there is a non-zero probability for positive local Lyapunov exponents to occur.
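The phase sensitivity computation can be sketched on the classic quasiperiodically forced GOPY map, a textbook example of a system with a strange nonchaotic attractor; it stands in for the systems of the abstract, and the parameter values are illustrative.

```python
# Phase sensitivity for the GOPY map
#   x_{n+1} = 2*sigma*tanh(x_n)*cos(2*pi*theta_n),  theta_{n+1} = theta_n + omega (mod 1).
# The derivative dx/dtheta is propagated along the trajectory by the chain rule;
# unbounded growth of its maximum signals a strange (nonsmooth) attractor.
import math

def phase_sensitivity(sigma, n_steps=20000, x0=0.5, theta0=0.0):
    omega = (math.sqrt(5) - 1) / 2  # golden-mean frequency ratio
    x, theta, dx_dtheta = x0, theta0, 0.0
    s_max = 0.0
    for _ in range(n_steps):
        c = math.cos(2 * math.pi * theta)
        s = math.sin(2 * math.pi * theta)
        sech2 = 1.0 - math.tanh(x) ** 2
        # chain rule on 2*sigma*tanh(x)*cos(2*pi*theta) with theta-derivative 1
        dx_dtheta = 2 * sigma * (sech2 * c * dx_dtheta
                                 - 2 * math.pi * math.tanh(x) * s)
        x = 2 * sigma * math.tanh(x) * c
        theta = (theta + omega) % 1.0
        s_max = max(s_max, abs(dx_dtheta))
    return s_max

# For sigma > 1 the attractor is strange nonchaotic and the sensitivity keeps
# growing; for small sigma the smooth attractor keeps it bounded and small:
print(phase_sensitivity(1.5) > phase_sensitivity(0.5))
```

Tracking the growth of this maximum with trajectory length yields the phase sensitivity exponent referred to in the abstract.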
We have studied bifurcation phenomena for the incompressible Navier-Stokes equations in two space dimensions with periodic boundary conditions. Fourier representations of velocity and pressure have been used to transform the original partial differential equations into systems of ordinary differential equations (ODE), to which numerical methods for the qualitative analysis of systems of ODE have then been applied, supplemented by the simulative calculation of solutions for selected initial conditions. Invariant sets, notably steady states, have been traced for varying Reynolds number or strength of the imposed forcing, respectively. A complete bifurcation sequence leading to chaos is described in detail, including the calculation of the Lyapunov exponents that characterize the resulting chaotic branch in the bifurcation diagram.
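The largest Lyapunov exponent of such a truncated ODE system can be estimated by repeatedly renormalizing a perturbed companion trajectory. As a hedged stand-in for the Fourier-truncated Navier-Stokes ODEs of the study, the sketch below uses the Lorenz system, itself a famous low-order Galerkin truncation of a convection problem; step sizes and lengths are illustrative.

```python
# Two-trajectory estimate of the largest Lyapunov exponent: integrate a
# reference and a slightly perturbed orbit, log the separation growth each
# step, and renormalize the separation back to its initial size.
import math

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(v, dt):
    def add(a, b, s):
        return tuple(ai + s * bi for ai, bi in zip(a, b))
    k1 = lorenz(v)
    k2 = lorenz(add(v, k1, dt / 2))
    k3 = lorenz(add(v, k2, dt / 2))
    k4 = lorenz(add(v, k3, dt))
    return tuple(v[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

def largest_lyapunov(n_steps=40000, dt=0.005, d0=1e-8):
    a = (1.0, 1.0, 20.0)
    b = (1.0 + d0, 1.0, 20.0)
    total = 0.0
    for _ in range(n_steps):
        a, b = rk4_step(a, dt), rk4_step(b, dt)
        d = math.dist(a, b)
        total += math.log(d / d0)
        # rescale the separation to d0 along its current direction
        b = tuple(ai + d0 * (bi - ai) / d for ai, bi in zip(a, b))
    return total / (n_steps * dt)

lam = largest_lyapunov()
print(0.6 < lam < 1.2)  # the literature value for Lorenz is about 0.9
```

For a full spectrum, as needed to characterize a chaotic branch, one evolves a set of perturbation vectors and re-orthonormalizes them (QR/Gram-Schmidt) instead of a single companion orbit.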
During the last decades, global environmental change has caused a dramatic loss of habitats and species. In Central Europe, open habitats are particularly affected. The main objective of this thesis was to experimentally test the suitability of wild megaherbivore grazing as a conservation tool to manage open habitats. We studied the effect of wild ungulates over three years in a 160 ha game preserve in NE Germany in three successional stages: (i) Corynephorus canescens-dominated grassland, (ii) ruderal tall forb vegetation dominated by Tanacetum vulgare and (iii) Pinus sylvestris pioneer forest. Our results demonstrate that wild megaherbivores considerably affected species composition and delayed successional pathways in open habitats. Grazing effects differed considerably between successional stages: species richness was higher in grazed ruderal and pioneer forest plots, but not in the Corynephorus sites. Species composition changed significantly in the Corynephorus and ruderal sites. Grazed ruderal sites had turned into sites with very short vegetation dominated by Agrostis spp. and the moss Brachythecium albicans; most species did not flower. Woody plant cover was significantly affected only in the pioneer forest sites. Young pine trees were severely damaged and tree height was considerably reduced, leading to a “Pinus-macchie” appearance. Ecological patterns and processes are known to vary with spatial scale. Since grazing by megaherbivores has a strong spatial component, the appropriate scale for monitoring grazing success may differ largely among and within different systems. Thus, the second aim of this thesis was to test whether grazing effects are consistent over different spatial scales, and to give recommendations for appropriate monitoring scales. For this purpose, we studied grazing effects on plant community structure using multi-scale plots that included three nested spatial scales (0.25 m2, 4 m2, and 40 m2).
Over all vegetation types, the scale of observation directly affected grazing effects on woody plant cover and on floristic similarity, but not on the proportion of open soil and species richness. Grazing effects manifested at small scales regarding floristic similarity in pioneer forest and ruderal sites and regarding species richness in ruderal sites. The direction of scale-effects on similarity differed between vegetation types: Grazing effects on floristic similarity in the Corynephorus sites were significantly higher at the medium and large scale, while in the pioneer forest sites they were significantly higher at the smallest scale. Disturbances initiate vegetation changes by creating gaps and affecting colonization and extinction rates. The third intention of the thesis was to investigate the effect of small-scale disturbances on the species-level. In a sowing experiment, we studied early establishment probabilities of Corynephorus canescens, a key species of open sandy habitats. Applying two different regimes of mechanical ground disturbance (disturbed and undisturbed) in the three successional stages mentioned above, we focused on the interactive effects of small-scale disturbances, successional stage and year-to-year variation. Disturbance led to higher emergence in a humid and to lower emergence in a very dry year. Apparently, when soil moisture was sufficient, the main factor limiting C. canescens establishment was competition, while in the dry year water became the limiting factor. Survival rates were not affected by disturbance. In humid years, C. canescens emerged in higher numbers in open successional stages while in the dry year, emergence rates were higher in late stages, suggesting an important role of late successional stages for the persistence of C. canescens. 
We conclude that wild ungulate grazing is a useful tool to slow down succession and to preserve a species-rich, open landscape, because it not only creates disturbances, thereby supporting early successional stages, but also efficiently controls woody plant cover. However, wild ungulate grazing considerably changed the overall appearance of the landscape. Additional measures like shifting exclosures might be necessary to allow vulnerable species to flower and reproduce. We further conclude that studying grazing impacts on a range of scales is crucial, since different parameters are affected at different spatial scales. Larger scales are suitable for assessing grazing impact on structural parameters like the proportion of open soil or woody plant cover, whereas species richness and floristic similarity are affected at smaller scales. Our results further indicate that the optimal strategy for promoting C. canescens is to apply disturbances just before seed dispersal and not during dry years. Further, at the landscape scale, facilitation by late successional species may be an important mechanism for the persistence of protected pioneer species.
Aim The aim of the present study was to examine young female volleyballers’ body build, physical abilities, technical skills and psychophysiological properties in relation to their performance at competitions. The sample consisted of 46 female volleyballers aged 13-16 years. 49 basic anthropometric measurements were taken and 65 proportions and body composition characteristics were calculated. 9 physical ability tests, 9 volleyball technical skills tests and 21 psychophysiological tests were carried out. Game performance was recorded with the computer program Game, which registered the performance of technical elements for each player and calculated an index of proficiency for each girl and each element. The first control group consisted of 74 female volleyballers aged 13–15 years, on whom reduced anthropometry was performed and 28 games were recorded. The second control group consisted of 586 ordinary schoolgirls aged 13–16 years, on whom full anthropometry was performed. Results In order to systematize all anthropometric characteristics, we first studied the anthropometric structure of the body as a whole. It turned out to be a characteristic system in which all variables are significantly correlated with one another and in which the leading characteristics are height and weight. We therefore based the classification on the mean height and weight of the whole sample and formed a 5-class SD classification. There are three classes of concordance between height and weight: small height – small weight, medium height – medium weight, big height – big weight. The other two classes are classes of disconcordance between height and weight: pycnomorphs and leptomorphs. We showed that a gradual increase in height and weight brought about a statistically significant increase in length, breadth and depth measurements, circumferences, bone thicknesses and skinfolds.
There were also systematic changes in indices and body composition characteristics. Pycnomorphs and leptomorphs also showed differences specific to their body types in body measurements and body composition. The results of all tests were submitted to basic statistical analysis, and correlations were calculated between all tests (volleyball technical skills, psychophysiological abilities, physical abilities), all basic anthropometric variables (n = 49) and all proportions and body composition characteristics (n = 65). All anthropometric measurements and test results were also correlated with the index of proficiency for all elements of the game. The best linear regression models were calculated for predicting proficiency in different elements of the game. Body build characteristics and all kinds of tests contributed to predicting proficiency in the game; anthropometric and psychophysiological models were the most essential for predicting performance in attack, block and feint. The studied complex of body build characteristics and test results determines the players’ proficiency at competitions, is an important tool for monitoring a player’s individual development, enables the selection of volleyballers from among schoolgirls, and represents the whole body constitutional model of a young female volleyballer. Outlook Our outlook for the future is to continue recording all Estonian championship games with the computer program Game, to continue anthropometric measurement and psychophysiological testing of players at competitions, and to compile a national register for assessing the development of individual players and teams.
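The 5-class SD classification described above can be sketched as follows; the cut-off at ±0.5 SD is an assumption for illustration (the study does not state its exact thresholds), as are the sample means, SDs and test values.

```python
# Height-weight SD classification: three concordant classes (small-small,
# medium-medium, big-big) plus two disconcordant ones (pycnomorph: heavy for
# height; leptomorph: light for height). Thresholds of +/-0.5 SD are assumed.
def sd_class(height, weight, h_mean, h_sd, w_mean, w_sd):
    def band(z):
        return -1 if z < -0.5 else (1 if z > 0.5 else 0)
    bh = band((height - h_mean) / h_sd)
    bw = band((weight - w_mean) / w_sd)
    if bh == bw:
        return {-1: "small-small", 0: "medium-medium", 1: "big-big"}[bh]
    return "pycnomorph" if bw > bh else "leptomorph"

# A short but heavy girl (invented reference values for 13-16-year-olds):
print(sd_class(160, 62, h_mean=168, h_sd=6, w_mean=56, w_sd=7))  # -> pycnomorph
```
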
Numerous recent publications on the psychological meaning of “if” have proposed a probabilistic interpretation of conditional sentences. According to the proponents of probabilistic approaches, sentences like “If the weather is nice, I will be at the beach tomorrow” (or “If p, then q” in the abstract version) express a high probability of the consequent (being at the beach), given the antecedent (nice weather). When people evaluate conditional sentences, they presumably do so by deriving the conditional probability P(q|p) using a procedure called the Ramsey test. This view contradicts the hitherto dominant Mental Model Theory (MMT; Johnson-Laird, 1983), which proposes that conditional sentences refer to possibilities in the world that are represented in the form of mental models. Whereas probabilistic approaches have gained a lot of momentum in explaining the interpretation of conditionals, there is still no conclusive probabilistic account of conditional reasoning. This thesis investigates the potential of a comprehensive probabilistic account of conditionals that covers the interpretation of conditionals as well as the conclusions drawn from these conditionals when they are used as premises in an inference task. The first empirical chapter of this thesis, Chapter 2, presents a further investigation of the interpretation of conditionals. A plain version of the Ramsey test as proposed by Evans and Over (2004) was tested against a similarity-sensitive version of the Ramsey test (Oberauer, 2006) in two experiments using variants of the probabilistic truth table task (Experiments 2.1 and 2.2). When it comes to deciding whether an instance is relevant for the evaluation of a conditional, similarity seems to play a minor role. Once the decision about relevance is made, believability judgments of the conditional seem to be unaffected by the similarity manipulation, and judgments are based on the frequency of instances, in the way predicted by the plain Ramsey test.
In Chapter 3, the contradicting predictions that the probabilistic approaches to conditional reasoning of Verschueren et al. (2005), Evans and Over (2004) and Oaksford and Chater (2001) make are tested against each other. Results from the probabilistic truth table task, modified for inference tasks, support the account of Oaksford and Chater (Experiment 3.1). A learning version of the task and a design with everyday conditionals yielded results unpredicted by any of the theories (Experiments 3.2-3.4). Based on these results, a new probabilistic 2-stage model of conditional reasoning is proposed. To preclude claims that the use of the probabilistic truth table task (or variants thereof) favors judgments reflecting conditional probabilities, Chapter 4 combines methodologies used by proponents of the MMT with the probabilistic truth table task. In three experiments (4.1-4.3) it was shown, for believability judgments of the conditional and inferences drawn from it, that causal information about counterexamples prevails only when no frequencies of exceptional cases are present. Experiment 4.4 extends these findings to everyday conditionals. A probabilistic estimation process based on frequency information is used to explain the results on all tasks. The findings conform to a probabilistic approach to conditionals and moreover constitute an explanatory challenge for the MMT. In conclusion, given all the evidence gathered in this dissertation, it seems justified to draw the picture of a comprehensive probabilistic view on conditionals quite optimistically. Probability estimates not only explain the believability people assign to a conditional sentence, they also explain to what extent people are willing to draw conclusions from those sentences.
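The plain Ramsey-test reading of a conditional discussed above amounts to a simple frequency computation: the believability of "If p then q" is the proportion of q-cases among the p-cases, with not-p cases treated as irrelevant. A minimal sketch, with invented case counts:

```python
# Believability of "If p then q" as the conditional probability P(q|p),
# estimated from case frequencies as in a probabilistic truth table task.
def ramsey_believability(n_pq, n_p_not_q, n_not_p_q, n_not_p_not_q):
    """P(q|p); not-p cases do not enter the computation at all."""
    relevant = n_pq + n_p_not_q
    if relevant == 0:
        return None  # undefined without any p-cases
    return n_pq / relevant

# "If the card is red (p), it shows an even number (q)", invented counts:
print(ramsey_believability(n_pq=12, n_p_not_q=4,
                           n_not_p_q=30, n_not_p_not_q=10))  # -> 0.75
```

Note that the two not-p counts could take any values without changing the result; this irrelevance of false-antecedent cases is exactly what distinguishes the conditional-probability reading from a material-implication reading.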
Sucrose synthase (Susy) is a key enzyme of sucrose metabolism, catalysing the reversible conversion of sucrose and UDP to UDP-glucose and fructose. Therefore, its activity, localization and function have been studied in various plant species. It has been shown that Susy can play a role in supplying energy to companion cells for phloem loading (Fu and Park, 1995), provides substrates for starch synthesis (Zrenner et al., 1995), and supplies UDP-glucose for cell wall synthesis (Haigler et al., 2001). Analysis of the Arabidopsis genome identified six Susy isoforms. The expression of these isoforms was investigated using promoter-reporter gene constructs (GUS) and real-time RT-PCR. Although these isoforms are closely related at the protein level, they have radically different spatial and temporal patterns of expression in the plant, with no two isoforms showing the same distribution. More than one isoform is expressed in all organs examined. Some of them show high but specific expression in particular organs or developmental stages, whilst others are constantly expressed throughout the whole plant and across various stages of development. The in planta functions of the six Susy isoforms were explored through analysis of T-DNA insertion mutants and RNAi lines. Plants lacking expression of individual isoforms show no differences in growth and development, and are not significantly different from wild type plants in soluble sugar, starch and cellulose contents under all growth conditions investigated. Loss of the Sus3 isoform, which is exclusively expressed in guard cells, had only a minor influence on guard cell osmoregulation and/or bioenergetics. Although none of the sucrose synthases appear to be essential for normal growth under our standard growth conditions, they may be necessary for growth under stress conditions. Different isoforms of sucrose synthase respond differently to various abiotic stresses.
It has been shown that oxygen deprivation up-regulates Sus1 and Sus4 and increases total Susy activity. However, the analysis of plants with reduced expression of both Sus1 and Sus4 revealed no obvious effects on plant performance under oxygen deprivation. Low temperature up-regulates Sus1 expression, but the loss of this isoform has no effect on the freezing tolerance of non-acclimated and cold-acclimated plants. These data provide a comprehensive overview of the expression of this gene family, which supports some of the previously reported roles for Susy and indicates the involvement of specific isoforms in metabolism and/or signalling.
Nowadays, colloidal rods can be synthesized in large amounts. The rods are typically cylindrical, and their lengths range from several nanometers to a few micrometers. In solution, systems of colloidal rodlike molecules or aggregates can form liquid-crystalline phases with long-range orientational and spatial order. In the present work, we investigate structure formation and fractionation in systems of rodlike colloids with the help of Monte Carlo simulations in the NPT ensemble. Repulsive interactions can successfully be mimicked by the hard rod model, which has been studied extensively in the past. In many cases, however, attractive interactions like van der Waals or depletion forces cannot be neglected. In the first part of this work, the phase behavior of monodisperse attractive rods is characterized for different interaction strengths. Phase diagrams as a function of rod length and pressure are presented. Most systems of synthesized mesoscopic rods have a polydisperse length distribution as a consequence of the longitudinal growth process of the rods. For many technical and research applications, a rather small polydispersity is desired in order to have well defined material properties. The polydispersity can be reduced by a spatial demixing (fractionation) of long and short rods. Fractionation and structure formation are studied in a tridisperse and a polydisperse bulk suspension of rods. We observe that the resulting structures depend distinctly on the interaction strength. The fractionation in the system is strongly enhanced with increasing interaction strength. Suspensions are typically confined in a container. We therefore also examine the influence of adjacent substrates in systems of tridisperse and polydisperse rod suspensions. Three different substrate types are studied in detail: a planar wall, a corrugated substrate, and a substrate with rectangular cavities. We analyze the fluid structure close to the substrate and substrate-controlled fractionation.
The spatial arrangement of long and short rods in front of the substrate depends sensitively on the substrate structure and the pressure. Rods with a predefined length are segregated at substrates with rectangular cavities.
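The NPT Monte Carlo machinery used here can be sketched in one dimension, where hard rods form the exactly solvable Tonks gas. The move structure (particle displacements plus volume moves accepted with the exp(-beta*P*dV + N*ln(V_new/V_old)) weight) mirrors what a 3D rod simulation uses; everything else, including all parameter values, is a simplification for illustration.

```python
# Toy NPT Monte Carlo for 1D hard rods (Tonks gas) with periodic boundaries.
import math, random

def overlap(xs, box, rod_len):
    s = sorted(xs)
    gaps = [s[i + 1] - s[i] for i in range(len(s) - 1)] + [s[0] + box - s[-1]]
    return any(g < rod_len for g in gaps)  # centers closer than one rod length

def npt_mc(n=20, rod_len=1.0, pressure=2.0, beta=1.0, sweeps=2000, seed=3):
    rng = random.Random(seed)
    box = 3.0 * n * rod_len                    # start dilute
    xs = [i * box / n for i in range(n)]
    vols = []
    for _ in range(sweeps):
        for _ in range(n):                     # single-particle displacements
            i = rng.randrange(n)
            old = xs[i]
            xs[i] = (xs[i] + rng.uniform(-0.3, 0.3)) % box
            if overlap(xs, box, rod_len):
                xs[i] = old                    # reject: hard-core violation
        box_new = box + rng.uniform(-1.0, 1.0)  # volume move: rescale box
        if box_new > n * rod_len:
            trial = [x * box_new / box for x in xs]
            arg = -beta * pressure * (box_new - box) + n * math.log(box_new / box)
            if (not overlap(trial, box_new, rod_len)
                    and rng.random() < math.exp(min(0.0, arg))):
                xs, box = trial, box_new
        vols.append(box)
    return sum(vols[len(vols) // 2:]) / (len(vols) // 2)  # discard burn-in

# Exact Tonks equation of state: <L>/N ~ rod_len + 1/(beta*P) = 1.5 here
print(0.9 < npt_mc() / 20 / 1.5 < 1.1)
```

In 2D/3D the volume move rescales all coordinates the same way, but the overlap test between sphero-cylinders becomes the expensive part, and attractive interactions enter through an additional Boltzmann factor in both move types.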
In this thesis the interplay between hydrodynamic transport and specific adhesion is theoretically investigated. An important biological motivation for this work is the rolling adhesion of white blood cells as experimentally investigated in flow chambers. There, specific adhesion is mediated by weak bonds between complementary molecular building blocks which are either located on the cell surface (receptors) or attached to the bottom plate of the flow chamber (ligands). The model system under consideration is a hard sphere covered with receptors moving above a planar ligand-bearing wall. The motion of the sphere is influenced by a simple shear flow, deterministic forces, and Brownian motion. An algorithm is presented that numerically simulates this motion as well as the formation and rupture of bonds between receptors and ligands. The algorithm spatially resolves receptors and ligands. This opens up the perspective of applying the results also to flow chamber experiments with patterned substrates based on modern nanotechnological developments. In the first part, the influence of the flow rate, as well as of the number and geometry of receptors and ligands, on the probability of initial binding is studied. This is done by determining the mean time that elapses until the first encounter between a receptor and a ligand occurs. It turns out that besides the number of receptors, especially the height by which the receptors are elevated above the surface of the sphere plays an important role. These findings are in good agreement with observations of actual biological systems like white blood cells or malaria-infected red blood cells. Then, the influence on the motion of the sphere of bonds which have formed between receptors and ligands, but easily rupture in response to force, is studied. It is demonstrated that different states of motion, for example rolling, can be distinguished.
The appearance of these states as a function of important model parameters is then systematically investigated. Furthermore, it is shown which bond property increases the ability of cells to roll stably over a large range of applied flow rates. Finally, the model is applied to another biological process, the transport of spherical cargo particles by molecular motors. In analogy to the systems described so far, molecular motors can be considered as bonds that are able to move actively. In this part of the thesis, the mean distance over which the cargo particles are transported is determined.
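The force-dependent bond rupture underlying such models is commonly described by the Bell (slip-bond) law, in which the off-rate grows exponentially with the applied load. A minimal stochastic sketch (rate constants and forces in arbitrary units, not from the thesis):

```python
# Bell model for slip bonds: k_off(F) = k0 * exp(F / F_detach), so the mean
# bond lifetime 1/k_off drops exponentially under load. Rupture times are
# drawn from the corresponding exponential distribution.
import math, random

def mean_lifetime(force, k0=1.0, f_detach=1.0, n_samples=5000, seed=7):
    """Sample-average rupture time of a single bond at constant force."""
    rng = random.Random(seed)
    k_off = k0 * math.exp(force / f_detach)
    return sum(rng.expovariate(k_off) for _ in range(n_samples)) / n_samples

# Shear load shortens the bond lifetime:
print(mean_lifetime(0.0) > mean_lifetime(2.0))
```

In a rolling-adhesion simulation this off-rate is evaluated per bond and per time step from the instantaneous bond force, alongside an on-rate for receptor-ligand encounters.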
The present thesis deals with the mental representation of numbers in space. Generally it is assumed that numbers are mentally represented on a mental number line along which they are ordered in a continuous and analogical manner. Dehaene, Bossini and Giraux (1993) found that the mental number line is spatially oriented from left to right. Using a parity-judgment task they observed faster left-hand responses for smaller numbers and faster right-hand responses for larger numbers. This effect has been labelled the Spatial Numerical Association of Response Codes (SNARC) effect. The first study of the present thesis deals with the question whether the spatial orientation of the mental number line derives from the writing system participants are adapted to. According to a strong ontogenetic interpretation, the SNARC effect should only obtain for effectors closely related to the comprehension and production of written language (hands and eyes). We asked participants to indicate the parity status of digits by pressing a pedal with their left or right foot. In contrast to the strong ontogenetic view, we observed a pedal SNARC effect which did not differ from the manual SNARC effect. In the second study we evaluated whether the SNARC effect reflects an association of numbers and extracorporal space or an association of numbers and hands. To do so, we varied the spatial arrangement of the response buttons (vertical vs. horizontal) and the instruction (hand-related vs. button-related). For vertically arranged buttons and a button-related instruction we found a button-related SNARC effect. In contrast, for a hand-related instruction we obtained a hand-related SNARC effect. For horizontally arranged buttons and a hand-related instruction, however, we found a button-related SNARC effect. The results of the first two studies were interpreted in terms of a weak ontogenetic view. In the third study we aimed to examine the functional locus of the SNARC effect.
We used the psychological refractory period paradigm. In the first experiment participants first indicated the pitch of a tone and then the parity status of a digit (locus-of-slack paradigm). In a second experiment the order of stimulus presentation, and thus of the tasks, was changed (effect-propagation paradigm). The results led us to conclude that the SNARC effect arises while the response is centrally selected. In our fourth study we tested for an association of numbers and time. We asked participants to compare two serially presented digits. Participants were faster to compare ascending digit pairs (e.g., 2-3) than descending pairs (e.g., 3-2). The pattern of our results was interpreted in terms of forward associations (“1-2-3”) as formed by our ubiquitous cognitive routines of counting objects or events.
The properties of a series of well-defined new surfactant oligomers (dimers to tetramers) were examined. From a molecular point of view, these oligomeric surfactants consist of simple monomeric cationic surfactant fragments coupled via the hydrophilic ammonium chloride head groups by spacer groups (different in nature and length). Properties of these cationic surfactant oligomers in aqueous solution, such as solubility, micellization and surface activity, micellar size and aggregation number, were discussed with respect to the two new molecular variables introduced, i.e. degree of oligomerization and spacer group, in order to establish structure–property relationships. Thus, increasing the degree of oligomerization results in a pronounced decrease of the critical micellization concentration (CMC). Both reduced spacer length and increased spacer hydrophobicity lead to a decrease of the CMC, but to a lesser extent. For these particular compounds, the formed micelles are relatively small and their aggregation number decreases with increasing degree of oligomerization, increasing spacer length and steric hindrance. In addition, pseudo-phase diagrams were established for the dimeric surfactants in more complex systems, namely inverse microemulsions, demonstrating again the important influence of the spacer group on the surfactant behaviour. Furthermore, the influence of additives on the property profile of the dimeric compounds was examined, in order to see whether the solution properties can be improved while using less material. Strong synergistic effects were observed upon adding special organic salts (e.g. sodium salicylate, sodium vinyl benzoate, etc.) to the surfactant dimers in stoichiometric amounts. For such mixtures, the critical aggregation concentration is strongly shifted to lower concentrations, the effect being more pronounced for dimers than for analogous monomers. A sharp decrease of the surface tension can also be attained.
Many of the organic anions produce viscoelastic solutions when added to the relatively short-chain dimers in aqueous solution, as evidenced by rheological measurements. This behaviour reflects the formation of entangled wormlike micelles due to strong interactions of the anions with the cationic surfactants, decreasing the curvature of the micellar aggregates. It is found that the associative behaviour is enhanced by dimerization. For a given counterion, the spacer group may also induce a stronger viscosifying effect depending on its length and hydrophobicity. Oppositely charged surfactants were also combined with the cationic dimers. First, some mixtures with the conventional anionic surfactant SDS revealed vesicular aggregates in solution. In view of these catanionic mixtures, a novel anionic dimeric surfactant based on EDTA was also synthesized and studied. The synthesis route is relatively simple and the compound exhibits particularly appealing properties such as low CMC and σCMC values, good solubilization capacity for hydrophobic probes and high tolerance to hard water. Notably, mixtures with particular cationic dimers gave rise to viscous solutions, reflecting micelle growth.
The innovation of information techniques has changed many aspects of our life. In the health care field, we can obtain, manage and communicate high-quality, large volumetric image data with computer-integrated devices to support medical care. In this dissertation I propose several promising methods that could assist physicians in processing, observing and communicating image data. They fall into my three research areas: telemedicine integration, medical image visualization and image segmentation, and they are demonstrated by the demo software that I developed. One focus of my research is medical information storage standards in telemedicine, in particular DICOM, which is the predominant standard for the storage and communication of medical images. I propose a novel 3D image data storage method, which is lacking in the current DICOM standard. I also created a mechanism to make use of non-standard or private DICOM files. In this thesis I present several rendering techniques for medical image visualization that offer different display modes, both 2D and 3D: for example, cutting through the data volume at an arbitrary angle, rendering the surface shell of the data, and rendering the semi-transparent volume of the data. A hybrid segmentation approach, designed for the semi-automated segmentation of radiological images such as CT and MRI, is proposed in this thesis to extract organs or regions of interest from the image. This approach takes advantage of both region-based and boundary-based methods. The hybrid approach consists of three steps: the first step obtains a coarse segmentation by fuzzy affinity and generates a homogeneity operator; the second step divides the image by a Voronoi diagram and reclassifies the regions with the operator to refine the segmentation from the previous step; the third step handles vague boundaries with a level-set model.
Topics for future research are mentioned at the end, including a new supplement to the DICOM standard for the storage of segmentation information, visualization of multimodal image information, and extension of the segmentation approach to higher dimensions.
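The three-step hybrid segmentation outlined above can be caricatured with a toy one-dimensional example. This is a sketch under strong simplifications, not the thesis implementation: the fuzzy affinity is reduced to a Gaussian similarity, the Voronoi diagram to a nearest-seed partition of pixel positions, the level-set step is omitted, and all seed intensities and positions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D "image": two homogeneous tissue regions with noise.
image = np.concatenate([np.full(50, 40.0), np.full(50, 160.0)])
image += rng.normal(scale=5.0, size=image.size)

# Step 1: coarse segmentation by fuzzy affinity to two seed intensities.
def affinity(values, seed_mean, sigma=20.0):
    """Gaussian fuzzy affinity of pixel values to a seed intensity."""
    return np.exp(-0.5 * ((values - seed_mean) / sigma) ** 2)

seed_means = [40.0, 160.0]              # illustrative seed intensities
coarse = np.argmax(np.stack([affinity(image, m) for m in seed_means]), axis=0)

# Step 2: partition the pixels into Voronoi cells around seed *positions*,
# then reclassify each cell by the majority coarse label it contains.
seed_pos = np.array([25, 75])           # illustrative seed positions
cells = np.argmin(np.abs(np.arange(image.size)[:, None] - seed_pos), axis=1)
refined = np.empty_like(coarse)
for c in range(len(seed_pos)):
    refined[cells == c] = int(coarse[cells == c].mean() > 0.5)

# (Step 3, the level-set refinement of vague boundaries, is omitted here.)
truth = np.repeat([0, 1], 50)
print("agreement:", float((refined == truth).mean()))
```

In a real 2-D or 3-D image the Voronoi partition and the homogeneity operator are considerably richer, but the division of labour between the steps is the same.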
Förster Resonance Energy Transfer (FRET) plays an important role for biochemical applications such as DNA sequencing, intracellular protein-protein interactions, molecular binding studies, in vitro diagnostics and many others. For qualitative and quantitative analysis, FRET systems are usually assembled through molecular recognition of biomolecules conjugated with donor and acceptor luminophores. Lanthanide (Ln) complexes, as well as semiconductor quantum dot nanocrystals (QD), possess unique photophysical properties that make them especially suitable for applied FRET. In this work the possibility of using QD as very efficient FRET acceptors in combination with Ln complexes as donors in biochemical systems is demonstrated. The necessary theoretical and practical background of FRET, Ln complexes, QD and the applied biochemical models is outlined. In addition, scientific as well as commercial applications are presented. FRET can be used to measure structural changes or dynamics at distances ranging from approximately 1 to 10 nm. The very strong and well-characterized binding process between streptavidin (Strep) and biotin (Biot) is used as a biomolecular model system. A FRET system is established by Strep conjugation with the Ln complexes and QD biotinylation. Three Ln complexes (one with Tb3+ and two with Eu3+ as the central ion) are used as FRET donors. Besides the QD, two further acceptors, the luminescent crosslinked protein allophycocyanin (APC) and a commercial fluorescence dye (DY633), are investigated for direct comparison. FRET is demonstrated for all donor-acceptor pairs by acceptor emission sensitization and, in the case of QD, by a more than 1000-fold increase of the luminescence decay time, reaching the hundred-microsecond regime. Detailed photophysical characterization of donors and acceptors permits analysis of the bioconjugates and calculation of the FRET parameters.
Extremely large Förster radii of more than 100 Å are achieved for QD as acceptors, considerably larger than for APC and DY633 (ca. 80 and 60 Å). Special attention is paid to interactions with different additives in aqueous solutions, namely borate buffer, bovine serum albumin (BSA), sodium azide and potassium fluoride (KF). A more than 10-fold decrease of the limit of detection (LOD) compared to the extensively characterized and frequently used donor-acceptor pair of Europium tris(bipyridine) (Eu-TBP) and APC is demonstrated for the FRET system consisting of the Tb complex and QD. A sub-picomolar LOD for QD is achieved with this system in azide-free borate buffer (pH 8.3) containing 2 % BSA and 0.5 M KF. In order to transfer the Strep-Biot model system to a real-life in vitro diagnostic application, two kinds of immunoassays are investigated, using human chorionic gonadotropin (HCG) as the analyte. HCG itself, as well as two monoclonal anti-HCG mouse-IgG (immunoglobulin G) antibodies, are labeled with the Tb complex and QD, respectively. Although no sufficient evidence for FRET can be found for a sandwich assay, FRET becomes obvious in a direct HCG-IgG assay, showing the feasibility of using the Ln-QD donor-acceptor pair as a highly sensitive analytical tool for in vitro diagnostics.
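For orientation, the Förster radii quoted above translate into distance-dependent transfer efficiencies via the standard Förster relation E = R0⁶/(R0⁶ + r⁶); this is general FRET theory, not anything specific to this work. A minimal sketch with the radii from the abstract and an illustrative (not measured) donor–acceptor separation:

```python
def fret_efficiency(r, r0):
    """Standard Foerster relation: E = R0^6 / (R0^6 + r^6), distances in Angstrom."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# Foerster radii from the abstract: ~100 A (QD), ~80 A (APC), ~60 A (DY633).
radii = {"QD": 100.0, "APC": 80.0, "DY633": 60.0}
for acceptor, r0 in radii.items():
    # 80 A is an illustrative donor-acceptor separation, not a measured one.
    print(acceptor, round(fret_efficiency(80.0, r0), 3))
```

At r = R0 the efficiency is exactly 0.5, which is why a large Förster radius directly extends the usable distance range of a donor–acceptor pair.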
The separation of natural and anthropogenically caused climatic changes is an important task of contemporary climate research. For this purpose, a detailed knowledge of the natural variability of the climate during warm stages is a necessary prerequisite. Besides model simulations and historical documents, this knowledge is mostly derived from analyses of so-called climatic proxy data like tree rings or sediment as well as ice cores. In order to be able to appropriately interpret such sources of palaeoclimatic information, suitable approaches of statistical modelling as well as methods of time series analysis are necessary, which are applicable to short, noisy, and non-stationary uni- and multivariate data sets. Correlations between different climatic proxy data within one or more climatological archives contain significant information about climatic change on longer time scales. Based on an appropriate statistical decomposition of such multivariate time series, one may estimate dimensions in terms of the number of significant, linearly independent components of the considered data set. In the presented work, a corresponding approach is introduced, critically discussed, and extended with respect to the analysis of palaeoclimatic time series. Temporal variations of the resulting measures allow one to derive information about climatic changes. For an example of trace element abundances and grain-size distributions obtained near Cape Roberts (Eastern Antarctica), it is shown that the variability of the dimensions of the investigated data sets clearly correlates with the Oligocene/Miocene transition about 24 million years before present as well as with regional deglaciation events. Grain-size distributions in sediments give information about the predominance of different transportation and deposition mechanisms. Finite mixture models may be used to approximate the corresponding distribution functions appropriately.
In order to give a complete description of the statistical uncertainty of the parameter estimates in such models, the concept of asymptotic uncertainty distributions is introduced. The relationship with the mutual component overlap as well as with the information missing due to grouping and truncation of the measured data is discussed for a particular geological example. An analysis of a sequence of grain-size distributions obtained in Lake Baikal reveals that there are certain problems accompanying the application of finite mixture models, which cause an extended climatological interpretation of the results to fail. As an appropriate alternative, a linear principal component analysis is used to decompose the data set into suitable fractions whose temporal variability correlates well with the variations of the average solar insolation on millennial to multi-millennial time scales. The abundance of coarse-grained material is obviously related to the annual snow cover, whereas a significant fraction of fine-grained sediments is likely transported from the Taklamakan desert via dust storms in the spring season.
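The linear principal component analysis used above to decompose a grain-size record can be sketched in a few lines of generic code. The data below are a synthetic stand-in (a noisy two-end-member mixture), not the Lake Baikal measurements, and all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 "samples" of a 16-bin grain-size distribution,
# generated as a mixture of two fixed end-member profiles plus noise.
bins = np.arange(16)
coarse = np.exp(-0.5 * ((bins - 4.0) / 2.0) ** 2)   # coarse-grained end member
fine = np.exp(-0.5 * ((bins - 11.0) / 2.0) ** 2)    # fine-grained end member
weights = rng.uniform(0.0, 1.0, size=200)
data = np.outer(weights, coarse) + np.outer(1.0 - weights, fine)
data += rng.normal(scale=0.01, size=data.shape)

# PCA via SVD of the mean-centred data matrix.
centred = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

# With two end members, the first component dominates the variance.
print("first component explains", round(float(explained[0]), 3))
```

The fraction of variance captured by each component is exactly the kind of dimension estimate discussed above: a record dominated by one mixing process collapses onto very few significant components.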
The terrestrial biosphere has a considerable impact on the global carbon cycle. In particular, ecosystems help to offset anthropogenically induced fossil fuel emissions and hence decelerate the rise of the atmospheric CO₂ concentration. However, the future net sink strength of an ecosystem will heavily depend on the response of the individual processes to a changing climate. Understanding the makeup of these processes and their interaction with the environment is, therefore, of major importance for developing long-term climate mitigation strategies. Mathematical models are used to predict the fate of carbon in the soil-plant-atmosphere system under changing environmental conditions. However, the underlying processes giving rise to the net carbon balance of an ecosystem are complex and not entirely understood at the canopy level. Therefore, carbon exchange models are characterised by considerable uncertainty, rendering model-based predictions of the future prone to error. Observations of the carbon exchange at the canopy scale can help identify the dominant processes and hence contribute to reducing the uncertainty associated with model-based predictions. For this reason, a global network of measurement sites has been established that provides long-term observations of the CO₂ exchange between a canopy and the atmosphere along with micrometeorological conditions. These time series, however, suffer from observation uncertainty that, if not characterised, limits their use in ecosystem studies. The general objective of this work is to develop a modelling methodology that synthesises physical process understanding with the information content of canopy scale data, as an attempt to overcome the limitations in both carbon exchange models and observations. Similar hybrid modelling approaches have been successfully applied for signal extraction out of noisy time series in environmental engineering.
Here, simple process descriptions are used to identify relationships between the carbon exchange and environmental drivers from noisy data. The functional forms of these relationships are not prescribed a priori but rather determined directly from the data, ensuring that the model complexity is commensurate with the observations. This data-led analysis therefore results in the identification of the processes dominating carbon exchange at the ecosystem scale as reflected in the data. The description of these processes may then lead to robust carbon exchange models that contribute to a faithful prediction of the ecosystem carbon balance. This work presents a number of studies that make use of the developed data-led modelling approach for the analysis and interpretation of net canopy CO₂ flux observations. Given the limited knowledge about the underlying real system, the evaluation of the derived models with synthetic canopy exchange data is introduced as a standard procedure prior to any application to real data. The derived data-led models prove successful in several different applications. First, the data-based nature of the presented methods makes them particularly useful for replacing missing data in the observed time series. The resulting interpolated CO₂ flux observation series can then be analysed with dynamic modelling techniques, or integrated to series of coarser temporal resolution for further use, e.g. in model evaluation exercises. However, the noise component in these observations interferes with deterministic flux integration, in particular when long time periods are considered. Therefore, a method to characterise the uncertainties in the flux observations, based on a semi-parametric stochastic model, is introduced in a second study. As a result, an (uncertain) estimate of the annual net carbon exchange of the observed ecosystem can be inferred directly from a statistically consistent integration of the noisy data.
For the forest measurement sites analysed, the relative uncertainty of the annual sum did not exceed 11 percent, highlighting the value of the data. Based on the same models, a disaggregation of the net CO₂ flux into carbon assimilation and respiration is presented in a third study, which allows for the estimation of annual ecosystem carbon uptake and release. These two components can then be further analysed for their separate responses to environmental conditions. Finally, a fourth study demonstrates how the results from data-led analyses can be turned into a simple parametric model that is able to predict the carbon exchange of forest ecosystems. Given the available global network of measurement sites, the derived model can now be tested for generality and transferability to other biomes. In summary, this work particularly highlights the potential of the presented data-led methodologies to identify and describe dominant carbon exchange processes at the canopy level, contributing to a better understanding of ecosystem functioning.
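The gap-filling and uncertainty-aware integration described above can be illustrated with a generic look-up-table scheme that bins fluxes by an environmental driver. This is a deliberately simplified stand-in for the data-led models of the thesis, and every variable name and number in it is invented for the toy example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy half-hourly "NEE" record driven by temperature, with noise and gaps.
n = 1000
temp = 10.0 + 8.0 * np.sin(np.linspace(0.0, 20.0, n))
true_flux = -0.5 * temp                     # simple linear response (illustrative)
flux = true_flux + rng.normal(scale=1.0, size=n)
flux[rng.random(n) < 0.2] = np.nan          # ~20 % missing values

# Gap-fill each missing value with the mean flux of its temperature bin.
edges = np.linspace(temp.min(), temp.max(), 11)
bin_idx = np.clip(np.digitize(temp, edges) - 1, 0, 9)
filled = flux.copy()
for b in range(10):
    in_bin = bin_idx == b
    bin_mean = np.nanmean(flux[in_bin])
    gaps = in_bin & np.isnan(flux)
    filled[gaps] = bin_mean

# Integrate to a total, with a crude standard error from the residual noise.
total = filled.sum()
resid_sd = np.nanstd(flux - true_flux)
total_se = resid_sd * np.sqrt(n)
print("total:", round(float(total)), "+/-", round(float(total_se)))
```

The semi-parametric stochastic model of the thesis propagates the noise through the integration far more carefully; the point here is only that a filled, statistically characterised series is what makes annual sums with error bars possible at all.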
Polyelectrolyte microcapsules containing stimuli-responsive polymers have potential applications in the fields of sensors or actuators, stimulable microcontainers and controlled drug delivery. Such capsules were prepared, with the focus on pH-sensitivity and carbohydrate-sensing. First, pH-responsive polyelectrolyte capsules were produced by means of electrostatic layer-by-layer assembly of oppositely charged weak polyelectrolytes onto colloidal templates that were subsequently removed. The capsules were composed of poly(allylamine hydrochloride) (PAH) and poly(methacrylic acid) (PMA) or poly(4-vinylpyridine) (P4VP) and PMA and varied considerably in their hydrophobicity and the influence of secondary interactions. These polymers were assembled onto CaCO3 and SiO2 particles with diameters of ~ 5 µm, and a new method for the removal of the silica template under mild conditions was proposed. The pH-dependent stability of PAH/PMA and P4VP/PMA capsules was studied by confocal laser scanning microscopy (CLSM). They were stable over a wide pH-range and exhibited a pronounced swelling at the edges of stability, which was attributed to uncompensated positive or negative charges within the multilayers. The swollen state could be stabilized when the electrostatic repulsion was counteracted by hydrogen-bonding, hydrophobic interactions or polymeric entanglement. This stabilization made it possible to reversibly swell and shrink the capsules by tuning the pH of the solution. The pH-dependent ionization degree of PMA was used to modulate the binding of calcium ions. In addition to the pH-sensitivity, the stability and the swelling degree of these capsules at a given pH could be modified, when the ionic strength of the medium was altered. The reversible swelling was accompanied by reversible permeability changes for low and high molecular weight substances. 
The permeability for glucose was evaluated by studying the time-dependence of the buckling of the capsule walls in glucose solutions, and the reversible permeability modulation was used for the encapsulation of polymeric material. A theoretical model, taking into account an osmotic expanding force and an elastic restoring force, was proposed to explain the pH-dependent size changes of weak polyelectrolyte capsules. Second, sugar-sensitive multilayers were assembled using the reversible covalent ester formation between the polysaccharide mannan and phenylboronic acid moieties that were grafted onto poly(acrylic acid) (PAA). The resulting multilayer films were sensitive to several carbohydrates, showing the highest sensitivity to fructose. The response to carbohydrates resulted from the competitive binding of small molecular weight sugars and mannan to the boronic acid groups within the film, and was observed as a fast dissolution of the multilayers when they were brought into contact with a sugar-containing solution above a critical concentration. It was also possible to prepare carbohydrate-sensitive multilayer capsules, and their sugar-dependent stability was investigated by following the release of encapsulated rhodamine-labeled bovine serum albumin (TRITC-BSA).
First studies of electron transfer in [N]phenylenes were performed in bimolecular quenching reactions of angular [3]- and triangular [4]phenylene with various electron acceptors. The relation between the quenching rate constants kq and the free energy change of the electron transfer (ΔG0CS) could be described by the Rehm-Weller equation. From the experimental results, a reorganization energy λ of 0.7 eV was derived. Intramolecular electron transfer reactions were studied in an [N]phenylene bichromophore and a corresponding reference compound. Fluorescence lifetime and quantum yield of the bichromophore display a characteristic dependence on the solvent polarity, whereas the corresponding values of the reference compound remain constant. From these results, a nearly isoenergetic ΔG0CS can be determined. As the triplet quantum yield is nearly independent of the polarity, charge recombination leads to population of the triplet state.
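For reference, one common textbook form of the Rehm–Weller relation invoked above is the following; this is the general literature form, not a formula taken from the thesis, with kd the diffusion-limited rate constant, Kd the encounter equilibrium constant, k⁰ the pre-exponential factor, and the intrinsic barrier identified with λ/4:

```latex
k_q \;=\; \frac{k_d}{\,1 + \dfrac{k_d}{K_d\,k^{0}}
      \left[\exp\!\left(\dfrac{\Delta G^{\ddagger}}{RT}\right)
          + \exp\!\left(\dfrac{\Delta G^{0}_{CS}}{RT}\right)\right]}\,,
\qquad
\Delta G^{\ddagger} \;=\; \frac{\Delta G^{0}_{CS}}{2}
  + \sqrt{\left(\frac{\Delta G^{0}_{CS}}{2}\right)^{2}
        + \left(\frac{\lambda}{4}\right)^{2}}
```

For strongly exergonic transfer the bracketed terms vanish and kq plateaus at the diffusion limit, which is the behaviour a fitted λ of 0.7 eV describes.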
Contents: Chapter 1. Introduction 1 Information Structure 2 Grammatical Correlates of Information Structure 3 Structure of the Questionnaire 4 Experimental Tasks 5 Technicalities 6 Archiving 7 Acknowledgments Chapter 2. General Questions 1 General Information 2 Phonology 3 Morphology and Syntax Chapter 3. Experimental tasks 1 Changes (Given/New in Intransitives and Transitives) 2 Giving (Given/New in Ditransitives) 3 Visibility (Given/New, Animacy and Type/Token Reference) 4 Locations (Given/New in Locative Expressions) 5 Sequences (Given/New/Contrast in Transitives) 6 Dynamic Localization (Given/New in Dynamic Loc. Descriptions) 7 Birthday Party (Weight and Discourse Status) 8 Static Localization (Macro-Planning and Given/New in Locatives) 9 Guiding (Presentational Utterances) 10 Event Cards (All New) 11 Anima (Focus types and Animacy) 12 Contrast (Contrast in pairing events) 13 Animal Game (Broad/Narrow Focus in NP) 14 Properties (Focus on Property and Possessor) 15 Eventives (Thetic and Categorical Utterances) 16 Tell a Story (Contrast in Text) 17 Focus Cards (Selective, Restrictive, Additive, Rejective Focus) 18 Who does What (Answers to Multiple Constituent Questions) 19 Fairy Tale (Topic and Focus in Coherent Discourse) 20 Map Task (Contrastive and Selective Focus in Spontaneous Dialogue) 21 Drama (Contrastive Focus in Argumentation) 22 Events in Places (Spatial, Temporal and Complex Topics) 23 Path Descriptions (Topic Change in Narrative) 24 Groups (Partial Topic) 25 Connections (Bridging Topic) 26 Indirect (Implicational Topic) 27 Surprises (Subject-Topic Interrelation) 28 Doing (Action Given, Action Topic) 29 Influences (Question Priming) Chapter 4. Translation tasks 1 Basic Intonational Properties 2 Focus Translation 3 Topic Translation 4 Quantifiers Chapter 5. Information structure summary survey 1 Preliminaries 2 Syntax 3 Morphology 4 Prosody 5 Summary: Information structure Chapter 6. 
Performance of Experimental Tasks in the Field 1 Field sessions 2 Field Session Metadata 3 Informants’ Agreement
When top sports performers fail or “choke” under pressure, everyone asks: why? Research has identified a number of conditions (e.g. an audience) that elicit choking and a number of factors (e.g. trait anxiety) that moderate the pressure–performance relation. Furthermore, mediating processes have been investigated. For example, explicit monitoring theories link performance failure under psychological stress to an increase in attention paid to a skill and its step-by-step execution (Beilock & Carr, 2001). Many studies have provided support for these ideas. However, so far only overt performance measures have been investigated, which do not allow more thorough analyses of processes or performance strategies. A theoretical framework has also been missing that could (a) explain the effects of explicit monitoring on skill execution and (b) make predictions as to what is being monitored during execution. Consequently, in this study the nodal-point hypothesis of motor control (Hossner & Ehrlenspiel, 2006) was used to predict movement changes on three levels of analysis at certain “nodal points” within the movement sequence. Performance in two different laboratory tasks was assessed with respect to overt performance (the observable result, for example accuracy at the target), covert performance (description of movement execution, for example the acceleration of body segments) and task exploitation (the utilization of task properties such as covariation). A fake competition (see Beilock & Carr, 2002) was used to invoke pressure. In study 1 a ball bouncing task in a virtual-reality set-up was chosen. Previous studies (de Rugy, Wei, Müller, & Sternad, 2003) have shown that learners are usually able to “passively” exploit the dynamical stability of the system. According to explicit monitoring theories, choking should be expected either if the task itself evokes “active control” (Experiment 1) or if learners are provided with explicit instructions (Experiment 2).
In both experiments, participants first went through a practice phase on day 1. On day 2, following the Baseline Test, participants were divided into a High-Stress or No-Stress Group for the final Performance Test. The High-Stress Group entered a fake competition. Overt performance was measured by the Absolute Error (AE) of ball amplitudes from target height; covert performance was measured by the Period Modulation between successive hits; and task exploitation was measured by the Acceleration (AC) at ball-racket impact and the Covariation (COV) of impact parameters. To evoke active control in Exp. 1 (N=20), perturbations of the ball flight were introduced. In Exp. 2 (N=39) half of the participants received explicit skill-focused instructions during learning. For overt performance, results generally show an interaction between Stress Group and Test, with better performance (i.e. lower AE) for the High-Stress Group in the final Performance Test. This effect is also independent of the instructions that participants had received during learning (Exp. 2). Similar effects were found for COV but not for AC. In study 2 a visuomotor tracking task was used in which participants had to pursue a target cross that moved along an invisible curve. This curve consisted of 3 segments with 6 turning points sequentially ordered around the x-axis. Participants learned two short movement sequences which were then concatenated to form a single sequence. It was expected that under pressure this sequence should “fall apart” at the point of concatenation. Overt performance was assessed by the Root Mean Square Error between target and pursuit cross as well as the Absolute Error at the turning points; covert performance was measured by the Latency from target to pursuit turning; and task exploitation was measured by the temporal covariation between successive intervals between turning points.
Experiment 3 (intraindividual variation) as well as Experiment 4 (interindividual variation) show performance enhancement in the pressure situation on the overt level, with matching results on the covert and task-exploitation levels. Thus, contrary to previous studies, no choking under pressure was found in any of the experiments. This may be interpreted as a failure of the experimental manipulation, but it certainly also highlights important characteristics of the task. Choking should occur in tasks in which performers do not have the time to use action or thought control strategies, in tasks that are more relevant to their “self”, and in tasks that are discrete in nature.
Contents: Introduction. Experimental Techniques: The LIF demonstrator unit - The mobile LIF spectrometer OPTIMOS - Investigated petroleum products and soil samples. Results and Discussion: Photophysical properties of the petroleum products - LIF spectroscopic investigations of oil-spiked samples - LIF spectroscopic investigations of real-world soils. Conclusions
The fluorescence properties and the fluorescence quenching by Tb3+ of substituted benzoic acids were investigated in solution at different pH. The substituted benzoic acids were used as simple model compounds for chromophores present in humic substances (HS). It is shown that the fluorescence properties of the model compounds resemble the fluorescence of HS quite well. A major factor determining the fluorescence of the model compounds are proton transfer reactions in the electronically excited state. It is intriguing that the fluorescence of the model compounds was hardly quenched by Tb3+, while the HS fluorescence was decreased very effectively. From our results we concluded that proton transfer reactions as well as conformational reorientation processes play an important role in the fluorescence of HS. The luminescence of bound Tb3+ was sensitized by an energy transfer step upon excitation of the model compounds and of HS, respectively. For HS the observed sensitization was dependent on its origin, indicating differences 1) in the connection between chromophores and binding sites and 2) in the energy levels of the chromophore triplet states. Hence, the observed sensitization of the Tb3+ luminescence could be useful to characterize structural differences of HS in solution. Interlanthanide energy transfer between Tb3+ and Nd3+ was used to determine the average distance R between both ions using the well-known formalism of luminescence resonance energy transfer. R was dependent on the origin of the HS, reflecting the difference in structure. The value of Rmin seemed to be a unique feature of the HS. It was further found that R also changed upon variation of the pH. This demonstrates that the measurement of interlanthanide energy transfer can be used as a direct method to monitor conformational changes in HS.
The Andean orogen is the most outstanding example of mountain building caused by the subduction of oceanic below continental lithosphere. The Andes formed by the subduction of the Nazca and Antarctic oceanic plates under the South American continent over at least ~200 million years. Tectonic and climatic conditions vary markedly along this north-south–oriented plate boundary, which thus represents an ideal natural laboratory for studying tectonic and climatic segmentation processes and their possible feedbacks. Most of the seismic energy on Earth is released by earthquakes in subduction zones, like the giant 1960 Mw 9.5 event in south-central Chile. However, the mechanisms segmenting surface deformation during and between these giant events have remained poorly understood. The Andean margin is a key area for studying seismotectonic processes because of its along-strike variability under similar plate kinematic boundary conditions. Active deformation has been widely studied in the central part of the Andes, but the south-central sector of the orogen has received less research attention. This study focuses on tectonics at the Neogene and late Quaternary time scales in the Main Cordillera and coastal forearc of the south-central Andes. For both domains I document the existence of previously unrecognized active faults and present estimates of deformation rates and fault kinematics. Furthermore, these data are correlated to address fundamental mountain building processes like strain partitioning and large-scale segmentation. In the Main Cordillera domain and at the Neogene timescale, I integrate structural and stratigraphic field observations with published isotopic ages to propose four main phases of coupled styles of tectonics and distribution of volcanism and magmatism. These phases can be related to the geometry and kinematics of plate convergence.
At the late Pleistocene timescale, I integrate field observations with lake seismic and bathymetric profiles from the Lago Laja region, located near the Andean drainage divide. These data reveal Holocene extensional faults, which define the Lago Laja fault system. This fault system has no significant strike-slip component, contrasting with the Liquiñe-Ofqui dextral intra-arc system to the south, where Holocene strike-slip markers are ubiquitous. This contrast in structural style along the arc is coincident with a marked change in along-strike fault geometries in the forearc, across the Arauco Peninsula. On this basis, I propose that a net gradient in the degree of partitioning of oblique subduction occurs across the Arauco transition zone. To the north, the margin-parallel component of oblique convergence is distributed in a wide zone of diffuse deformation, while to the south it is partitioned along an intra-arc, margin-parallel strike-slip fault zone. In the coastal forearc domain and at the Neogene timescale, I integrate structural and stratigraphic data from field observations, industry reflection-seismic profiles and boreholes to emphasize the influence of climate-driven filling of the trench on the mechanics and kinematics of the margin. I show that forearc basins in the 34-45°S segment record Eocene to early Pliocene extension and subsidence followed by ongoing uplift and contraction since the late Pliocene. I interpret the first stage as caused by tectonic erosion due to high plate convergence rates and reduced trench fill. The subsequent stage, in turn, is related to accretion caused by low convergence rates and the rapid increase in trench fill after the onset of Patagonian glaciations and climate-driven exhumation at ~6-5 Ma.
On the late Quaternary timescale, I integrate off-shore seismic profiles with the distribution of deformed marine terraces from Isla Santa María, dated by the radiocarbon method, to show that inverted reverse faulting controls the coastal geomorphology and segmentation of surface deformation. There, a cluster of microearthquakes illuminates one of these reverse faults, which presumably reaches the plate interface. Furthermore, I use Charles Darwin's accounts of coseismic uplift during the 1835 M>8 earthquake to propose that this active reverse fault has been mechanically coupled to the megathrust. This has important implications for the assessment of seismic hazards in this and other similar regions. These results underscore the need to study plate-boundary deformation processes at various temporal and spatial scales and to integrate geomorphologic, structural, stratigraphic, and geophysical data sets in order to understand the present distribution and causes of tectonic segmentation.
Answer Set Programming (ASP) emerged in the late 1990s as a new logic programming paradigm, having its roots in nonmonotonic reasoning, deductive databases, and logic programming with negation as failure. The basic idea of ASP is to represent a computational problem as a logic program whose answer sets correspond to solutions, and then to use an answer set solver for finding answer sets of the program. ASP is particularly suited for solving NP-complete search problems. Among these, we find applications to product configuration, diagnosis, and graph-theoretical problems, e.g. finding Hamiltonian cycles. Along several lines of ASP research, many extensions of the basic formalism have been proposed. The most intensively studied is the modelling of preferences in ASP, which constitute a natural and effective way of selecting preferred solutions among the many solutions of a problem. For example, preferences have been successfully used for timetabling, auctioning, and product configuration. In this thesis, we concentrate on preferences within answer set programming. Among several formalisms and semantics for preference handling in ASP, we concentrate on ordered logic programs with the underlying D-, W-, and B-semantics. In this setting, preferences are defined among rules of a logic program. They select preferred answer sets among (standard) answer sets of the underlying logic program. Up to now, those preferred answer sets have been computed either via a compilation method or by meta-interpretation. Hence, the question arises whether and how preferences can be integrated into an existing ASP solver. To solve this question, we develop an operational graph-based framework for the computation of answer sets of logic programs. Then, we integrate preferences into this operational approach. We empirically observe that our integrative approach in most cases performs better than the compilation method or meta-interpretation.
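The core notion above, a program whose answer sets (stable models) are the problem's solutions, can be made concrete with a tiny brute-force checker for propositional normal programs. This is a didactic sketch of the standard stable-model definition via the reduct, not the graph-based framework of the thesis nor the algorithm of any real solver:

```python
from itertools import chain, combinations

# A rule is (head, positive_body, negative_body); a program is a list of rules.
# M is a stable model (answer set) of P iff M equals the least model of the
# reduct P^M: drop every rule whose negative body intersects M, then delete
# the remaining negative literals.

def least_model(positive_rules):
    """Least model of a negation-free program, by naive fixpoint iteration."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, body in positive_rules:
            if body <= m and head not in m:
                m.add(head)
                changed = True
    return m

def stable_models(program, atoms):
    """Enumerate all stable models by testing every subset of atoms."""
    models = []
    subsets = chain.from_iterable(
        combinations(atoms, k) for k in range(len(atoms) + 1))
    for cand in map(set, subsets):
        reduct = [(h, set(pos))
                  for h, pos, neg in program if not (set(neg) & cand)]
        if least_model(reduct) == cand:
            models.append(cand)
    return models

# p :- not q.   q :- not p.   (two answer sets: {p} and {q})
prog = [("p", [], ["q"]), ("q", [], ["p"])]
print(stable_models(prog, ["p", "q"]))  # [{'p'}, {'q'}]
```

The exponential subset enumeration is only for illustration; actual solvers search far more cleverly, which is precisely why integrating preference handling into a solver, as the thesis does, is attractive compared with compilation or meta-interpretation.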
Another research issue in ASP is optimization methods that remove redundancies, similar to those found in database query optimizers. For these purposes, the rather recently suggested notion of strong equivalence for ASP can be used. If a program is strongly equivalent to a subprogram of itself, then one can always use the subprogram instead of the original program, a technique which serves as an effective optimization method. Up to now, strong equivalence has not been considered for logic programs with preferences. In this thesis, we tackle this issue and generalize the notion of strong equivalence to ordered logic programs. We give necessary and sufficient conditions for the strong equivalence of two ordered logic programs. Furthermore, we provide program transformations for ordered logic programs and show to what extent preferences can be simplified. Finally, we present two new applications for preferences within answer set programming. First, we define new procedures for group decision making, which we apply to the problem of scheduling a group meeting. As a second new application, we reconstruct within ASP a linguistic problem appearing in German dialects. Regarding linguistic studies, there is an ongoing debate about how unique the rule systems of language are in human cognition. The reconstruction of grammatical regularities with tools from computer science has consequences for this debate: if grammars can be modelled this way, then they share core properties with other non-linguistic rule systems.
Our work goes in two directions. First, we transfer definitions, concepts, and results of the theory of hyperidentities and solid varieties from the total to the partial case. (1) We prove that the operators χ^A_RNF and χ^E_RNF are only monotone and additive, and we show that the sets of all fixed points of these operators are characterized by only three, instead of four, conditions that are equivalent in the case of closure operators. (2) We prove that V is n-SF-solid iff clone^SF V is free with respect to itself, freely generated by the independent set {[f_i(x_1, ..., x_n)] Id^SF_n V | i ∈ I}. (3) We prove that if V is n-fluid and ~_V |_P(V) = ~_V^(-iso) |_P(V), then V is k-unsolid for k ≥ n (where P(V) is the set of all V-proper hypersubstitutions of type τ). (4) We prove that a strong M-hyperquasi-equational theory is characterized by four equivalent conditions. The second direction of our work follows ideas that are specific to the partial case. (1) We characterize all minimal partial clones which are strongly solidifyable. (2) We define the operator χ^A_Ph, where Ph is a monoid of regular partial hypersubstitutions. Using this concept, we define the notion of a PHyp_R(τ)-solid strong regular variety of partial algebras and prove that a PHyp_R(τ)-solid strong regular variety satisfies four equivalent conditions.
This paper investigates the formation of the ownership structure and the corporate governance system of Ukraine as a country in transition. Numerous studies consider that privatization results in the establishment of a proprietors' motivation mechanism. On the other hand, it causes ownership concentration in the hands of a few shareholders and managers. The goal of economic reform in transition and, largely, its pace, is measured by the degree to which shareholders participate in short- and long-term corporate value creation. Shareholder access to such created value depends on the ability of corporate "insiders", especially executives and management, to claim a disproportionate share of corporate value (the "insider effect"). An econometric analysis of the correlation between privatization and macroeconomic factors assesses the effectiveness of economic reform in Ukrainian regions.
This paper applies common methods to estimate unbiased coefficients for the return to schooling in Germany for the year 2004. Based on the simple Mincer-type wage equation, the return to schooling is around 9.5% per year. There is no sheepskin effect. As expected, the return in the private sector is higher than in the public sector. Females have a higher return than males, but there are no differences between East and West Germans. An Instrumental Variables and a 3-Stage-Least-Squares approach give very high returns. For correcting the sample selection, the Heckman Two Step Procedure and the Heckman Maximum Likelihood Approach are used. For both methods, the coefficients are very similar, but higher than without correction.
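The Mincer-type wage equation behind such estimates regresses log wages on years of schooling and a quadratic in experience; the schooling coefficient is the (approximate) percentage wage gain per additional year. A minimal sketch on synthetic data, where the true coefficient is set to 0.095 purely for illustration (the 9.5% figure above is the paper's estimate, not an output of this code):

```python
import numpy as np

# Mincer equation: log(wage) = b0 + b1*schooling + b2*exper + b3*exper^2 + e
# b1 is the return to schooling; OLS recovers it from the synthetic sample.

rng = np.random.default_rng(0)
n = 5000
school = rng.integers(8, 19, n).astype(float)      # years of schooling
exper = rng.uniform(0, 40, n)                      # years of experience
log_wage = (1.5 + 0.095 * school                   # true return: 9.5%/year
            + 0.03 * exper - 0.0005 * exper ** 2
            + rng.normal(0.0, 0.3, n))             # idiosyncratic wage noise

X = np.column_stack([np.ones(n), school, exper, exper ** 2])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
print(f"estimated return to schooling: {beta[1]:.3f}")  # close to 0.095
```

Note this sketch ignores exactly the complications the paper addresses: ability bias (motivating the IV and 3SLS estimates) and selection into employment (motivating the Heckman corrections).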
This paper presents in the first section a methodological introduction concerning statistics of consumer prices in Georgia. The second section gives a general idea of the development of consumer prices from January 1994 to September 1999. A detailed regional analysis is added in section 3. The fourth section analyses the development of consumer prices for the eight main groups included in the total CPI. Section 5 compares the changes in the Georgian CPI with the movements of foreign exchange rates of the Georgian Lari. The paper ends with a summary, including a short outlook for the coming years.
The attractiveness of foreign direct investment in Russia and Ukraine: a statistical analysis
(1999)
This paper comparatively examines the potential for foreign investment and the real inflows to Russia and Ukraine. The analysis shows that initially both countries enjoyed significant comparative advantages in attracting foreign capital. Since the foundation of the independent states in 1992, their attractiveness has diverged dramatically. This difference is clearly explained by the Russian government's determination to reform the economy earlier than the Ukrainian government did. The transition to a market economy is closely connected with the development of a favorable investment climate in both countries. It includes the foundation of a stable system of property rights and a conducive legal environment.
The formation of colloids by the controlled reduction, nucleation, and growth of inorganic precursor salts in different media has been investigated for more than a century. Recently, the preparation of ultrafine particles has received much attention since they offer highly promising and novel options for a wide range of technical applications (nanotechnology, electrooptical devices, pharmaceutics, etc.). The interest derives from the well-known fact that the properties of advanced materials depend critically on the microstructure of the sample. Control of size, size distribution, and morphology of the individual grains or crystallites is of the utmost importance in order to obtain the desired material characteristics. Several methods can be employed for the synthesis of nanoparticles. On the one hand, the reduction can occur in dilute aqueous or alcoholic solutions. On the other hand, the reduction process can be realized in a template phase, e.g. in well-defined microemulsion droplets. However, the stability of the nanoparticles formed depends mainly on their surface charge and can be influenced by added protective components. Quite different types of polymers, including polyelectrolytes and amphiphilic block copolymers, can for instance be used as protecting agents. The reduction and stabilization of metal colloids in aqueous solution by adding self-synthesized hydrophobically modified polyelectrolytes were studied in much more detail. The polymers used are hydrophobically modified derivatives of poly(sodium acrylate) and of maleamic acid copolymers as well as the commercially available branched poly(ethyleneimine). The first notable result is that the polyelectrolytes used can act alone as both reducing and stabilizing agents for the preparation of gold nanoparticles. The investigation was then focused on the influence of the hydrophobic substitution of the polymer backbone on the reduction and stabilization processes.
First of all, the polymers were added at room temperature and the reduction process was investigated over a longer time period (up to 8 days). In comparison, the reduction process was realized faster at higher temperature, i.e. 100°C. In both cases metal nanoparticles of colloidal dimensions can be produced. However, the size and shape of the individual nanoparticles depend mainly on the polymer added and the temperature procedure used. In a second part, the influence of the aforementioned polyelectrolytes on the phase behaviour as well as on the properties of the inverse micellar region (L2 phase) of quaternary systems consisting of a surfactant, toluene-pentanol (1:1), and water was investigated. Most of the present work was carried out with the anionic surfactant sodium dodecylsulfate (SDS) and the cationic surfactant cetyltrimethylammonium bromide (CTAB), since they can interact with the oppositely charged polyelectrolytes and the microemulsions formed using these surfactants present a large water-in-oil region. Subsequently, the polymer-modified microemulsions were used as new templates for the synthesis of inorganic particles, ranging from metals to complex crystallites, of very small size. The water droplets can indeed act as nanoreactors for the nucleation and growth of the particles, and the added polymer can influence the droplet size, the droplet-droplet interactions, as well as the stability of the surfactant film by the formation of polymer-surfactant complexes. One further advantage of the polymer-modified microemulsions is the possibility to stabilize the initially formed nanoparticles via polymer adsorption (steric and/or electrostatic stabilization). Thus, the polyelectrolyte-modified nanoparticles formed can be redispersed without flocculation after solvent evaporation.
This issue of Linguistics in Potsdam contains a number of papers that grew out of the workshop Descriptive and Empirical Adequacy in Linguistics held in Berlin on December 17-19, 2005. One of the goals of this meeting was to bring together scholars working in various frameworks (with emphasis on the Minimalist Program and Optimality Theory) and to discuss matters concerning descriptive and empirical adequacy. Another explicit goal was to discuss the question whether Minimalism and Optimality Theory should be considered incompatible and, hence, competing theories, or whether the two frameworks should rather be considered complementary in certain respects (see http://let.uvt.nl/deal05/call.html for the call for papers). Five of the seven papers in this volume directly grew out of the oral presentations given at the workshop. Although Vieri Samek-Lodovici's paper was not part of the workshop, it can also be considered a result of the workshop since it pulls together some of his many comments during the discussion time. The paper by Eva Engels and Sten Vikner discusses a phenomenon that has received much interest from both minimalist and optimality-theoretic syntax in recent years, Scandinavian object shift. The paper may serve as a practical example of a claim that is repeatedly made in this volume: minimalist and OT analyses, even where they might be competing, can fruitfully inform each other in a constructive manner, leading to a deeper understanding of syntactic phenomena.
The limited capacity of working memory forces people to update its contents continuously. Two aspects of the updating process were investigated in the present experimental series. The first series concerned the question of whether it is possible to update several representations in parallel. Similar results were obtained for the updating of object features as well as for the updating of whole objects: participants were able to update representations in parallel. The second experimental series addressed the question of whether working memory representations that are replaced during updating disappear immediately or interfere with the new representations. Evidence for the persistence of old representations was found under working memory conditions and under conditions exceeding working memory capacity. These results contradict the hypothesis that working memory contents are protected from proactive interference by long-term memory contents.