What can we learn from climate data? : Methods for fluctuation, time/scale and phase analysis
(2006)
Since Galileo Galilei invented the first thermometer, researchers have tried to understand the complex dynamics of ocean and atmosphere by means of scientific methods. They observe nature and formulate theories about the climate system. For some decades now, powerful computers have been capable of simulating the past and future evolution of climate. Time series analysis tries to link the observed data to the computer models: using statistical methods, one estimates characteristic properties of the underlying climatological processes that in turn can enter the models. The quality of an estimation is evaluated by means of error bars and significance testing. On the one hand, such a test should be capable of detecting interesting features, i.e. be sensitive. On the other hand, it should be robust and sort out false positive results, i.e. be specific. This thesis mainly aims to contribute to methodological questions of time series analysis, with a focus on sensitivity and specificity, and to apply the investigated methods to recent climatological problems. First, the inference of long-range correlations by means of Detrended Fluctuation Analysis (DFA) is studied. It is argued that power-law scaling of the fluctuation function, and thus long memory, may not be assumed a priori but has to be established. This requires investigating the local slopes of the fluctuation function. The variability characteristic of stochastic processes is accounted for by calculating empirical confidence regions. The comparison of a long-memory with a short-memory model shows that the inference of long-range correlations from a finite amount of data by means of DFA is not specific: a local slope larger than $\alpha=0.5$ at large scales does not necessarily imply long memory. Also, a finite scaling of the autocorrelation function is shifted to larger scales in the fluctuation function. It turns out that long-range correlations cannot be concluded unambiguously from the DFA results for the Prague temperature data set. In the second part of the thesis, an equivalence class of nonstationary Gaussian stochastic processes is defined in the wavelet domain. These processes are characterized by means of wavelet multipliers and exhibit well-defined time-dependent spectral properties; they allow one to generate realizations of any nonstationary Gaussian process. The dependency of the realizations on the wavelets used for the generation is studied, and bias and variance of the wavelet sample spectrum are calculated. To overcome the difficulties of multiple testing, an areawise significance test is developed and compared to the conventional pointwise test in terms of sensitivity and specificity. Applications to climatological and hydrological questions are presented. In the last part, the coupling between El Niño/Southern Oscillation (ENSO) and the Indian Monsoon on inter-annual time scales is studied by means of the Hilbert transform and a curvature-defined phase. This method allows one to investigate the relation of two oscillating systems with respect to their phases, independently of their amplitudes. The performance of the technique is evaluated using a toy model.
From the data, distinct epochs are identified, especially two intervals of phase coherence, 1886-1908 and 1964-1980, confirming earlier findings from a new point of view. A significance test of high specificity corroborates these results. In addition, so far unknown periods of coupling that are invisible to linear methods are detected. These findings suggest that the decreasing correlation during the last decades might be partly inherent to the ENSO/Monsoon system. Finally, a possible interpretation of how volcanic radiative forcing could cause the coupling is outlined.
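The following minimal sketch illustrates the DFA procedure and the local-slope check discussed above (a generic NumPy implementation with made-up parameters, not the code used in the thesis):

```python
import numpy as np

def dfa_fluctuation(x, scales, order=1):
    """Detrended Fluctuation Analysis: RMS fluctuation F(s) for each window size s."""
    profile = np.cumsum(x - np.mean(x))            # integrated (profile) series
    F = []
    for s in scales:
        n_seg = len(profile) // s
        var = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local detrending
            var.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(var)))
    return np.asarray(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(2 ** 14)                    # short-memory test signal
scales = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa_fluctuation(x, scales)
# Local slopes alpha(s): long memory requires a plateau, not a single global fit.
alpha = np.diff(np.log(F)) / np.diff(np.log(scales))
print(alpha)
```

For a short-memory test signal the local slopes approach $\alpha=0.5$ only asymptotically, which is why inspecting them, rather than fitting one global power law, is essential for the specificity question raised above.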
The limited capacity of working memory forces people to update its contents continuously. Two aspects of the updating process were investigated in the present experimental series. The first series concerned the question whether it is possible to update several representations in parallel. Similar results were obtained for the updating of object features and for the updating of whole objects: participants were able to update representations in parallel. The second experimental series addressed the question whether working memory representations that are replaced during updating disappear immediately or interfere with the new representations. Evidence for the persistence of the old representations was found under working memory conditions as well as under conditions exceeding working memory capacity. These results contradict the hypothesis that working memory contents are protected from proactive interference by long-term memory contents.
When Galactic microlensing events of stars are observed, one usually measures a symmetric light curve corresponding to a single lens, or an asymmetric light curve, often with caustic crossings, in the case of a binary lens system. In principle, the fraction of binary stars in a certain separation range can be estimated from the number of measured microlensing events. However, a binary system may produce a light curve that can be fitted well as a single-lens light curve, in particular if the data sampling is poor and the error bars are large. We investigate what fraction of microlensing events produced by binary stars at different separations may be well fitted by, and hence misinterpreted as, single-lens events under various observational conditions. We find that this fraction strongly depends on the separation of the binary components, reaching its minimum between 0.6 and 1.0 Einstein radii, where it is still of the order of 5%. The Einstein radius corresponds to a few A.U. for typical Galactic microlensing scenarios. The rate of misinterpretation is higher for short microlensing events lasting up to a few months and for events with smaller maximum amplification. For fixed separation it increases for binaries with more extreme mass ratios. The problem of degeneracy between binary-lens and binary-source solutions of photometric light curves was studied on simulated data and on data observed by the PLANET collaboration. The fitting code BISCO, built around the PIKAIA genetic optimization routine, was written to fit binary-source microlensing light curves observed at different sites in the I, R and V photometric bands. Tests on simulated microlensing light curves show that BISCO is successful in finding the solution of a binary-source event in a very wide parameter space. A flux-ratio method is suggested in this work for breaking the degeneracy between binary-lens and binary-source photometric light curves. Models show that only a few additional data points in the photometric V band, together with a full light curve in the I band, will enable breaking the degeneracy. Very good data quality and dense data sampling, combined with accurate binary-lens and binary-source modeling, yielded the discovery of the lowest-mass planet found outside the Solar System so far, OGLE-2005-BLG-390Lb, with only 5.5 Earth masses. This was the first observed microlensing event in which the degeneracy between a planetary binary-lens and an extreme flux-ratio binary-source model has been successfully broken. For the events OGLE-2003-BLG-222 and OGLE-2004-BLG-347, the degeneracy was encountered despite very dense data sampling. From light-curve modeling and stellar evolution theory, there was a slight preference for explaining OGLE-2003-BLG-222 as a binary-source event and OGLE-2004-BLG-347 as a binary-lens event. However, without spectra, this degeneracy cannot be fully broken. No planet has been found around a white dwarf so far, though it is believed that Jovian planets should survive the late stages of stellar evolution and that white dwarfs will retain planetary systems in wide orbits. We want to perform high-precision astrometric observations of nearby white dwarfs in wide binary systems with red dwarfs in order to find planets around white dwarfs.
We selected a sample of observing targets (WD-RD binary systems, not published yet) that may possibly host planets around the WD component, and modeled the synthetic astrometric orbits that could be observed for these targets with existing and future astrometric facilities. The modeling was performed for astrometric accuracies of 0.01, 0.1 and 1.0 mas, separations between WD and planet of 3 and 5 A.U., a binary-system separation of 30 A.U., planet masses of 10 Earth masses and 1 and 10 Jupiter masses, WD masses of 0.5 and 1.0 solar masses, and distances to the system of 10, 20 and 30 pc. It was found that the PRIMA facility at the VLTI, once it is operating, will be able to detect planets down to 1 Jupiter mass around white dwarfs by measuring the astrometric wobble of the WD due to a planetary companion. We show for the simulated observations that it is possible to model the orbits and recover the parameters describing the potential planetary systems.
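For reference, the symmetric single-lens light curve against which these binary events are tested is the standard point-lens (Paczyński) form (textbook microlensing relations, not specific to this thesis):

\[
A(u) = \frac{u^2 + 2}{u\sqrt{u^2 + 4}}, \qquad u(t) = \sqrt{u_0^2 + \left(\frac{t - t_0}{t_E}\right)^2},
\]

where $u$ is the lens-source separation in Einstein radii, $u_0$ the impact parameter, $t_0$ the time of closest approach and $t_E$ the Einstein-radius crossing time. A binary lens or binary source distorts this curve; when the distortion is below the photometric noise, the misinterpretation discussed above occurs.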
The present thesis deals with the mental representation of numbers in space. It is generally assumed that numbers are mentally represented on a mental number line along which they are ordered in a continuous and analogue manner. Dehaene, Bossini and Giraux (1993) found that the mental number line is spatially oriented from left to right. Using a parity-judgment task they observed faster left-hand responses for smaller numbers and faster right-hand responses for larger numbers. This effect has been labelled the Spatial Numerical Association of Response Codes (SNARC) effect. The first study of the present thesis deals with the question whether the spatial orientation of the mental number line derives from the writing system participants are accustomed to. According to a strong ontogenetic interpretation, the SNARC effect should be observed only for effectors closely related to the comprehension and production of written language (hands and eyes). We asked participants to indicate the parity status of digits by pressing a pedal with their left or right foot. In contrast to the strong ontogenetic view, we observed a pedal SNARC effect which did not differ from the manual SNARC effect. In the second study we evaluated whether the SNARC effect reflects an association of numbers and extracorporal space or an association of numbers and hands. To do so we varied the spatial arrangement of the response buttons (vertical vs. horizontal) and the instruction (hand-related vs. button-related). For vertically arranged buttons and a button-related instruction we found a button-related SNARC effect. In contrast, for a hand-related instruction we obtained a hand-related SNARC effect. For horizontally arranged buttons and a hand-related instruction, however, we found a button-related SNARC effect. The results of the first two studies were interpreted in terms of a weak ontogenetic view. In the third study we aimed to examine the functional locus of the SNARC effect, using the psychological refractory period paradigm. In the first experiment participants first indicated the pitch of a tone and then the parity status of a digit (locus-of-slack paradigm). In a second experiment the order of stimulus presentation, and thus of the tasks, was reversed (effect-propagation paradigm). The results led us to conclude that the SNARC effect arises during central response selection. In our fourth study we tested for an association of numbers and time. We asked participants to compare two serially presented digits. Participants were faster at comparing ascending digit pairs (e.g., 2-3) than descending pairs (e.g., 3-2). This pattern of results was interpreted in terms of forward associations (“1-2-3”) formed by the ubiquitous cognitive routine of counting objects or events.
The intracontinental endorheic Aral Sea, remote from oceanic influences, represents an excellent sedimentary archive in Central Asia that can be used for high-resolution palaeoclimate studies. We performed palynological, microfacies and geochemical analyses on sediment cores retrieved from Chernyshov Bay, in the NW part of the modern Large Aral Sea. The most complete sedimentary sequence, whose total length is 11 m, covers approximately the past 2000 years of the late Holocene. High-resolution palynological analyses, conducted on both dinoflagellate cyst assemblages and pollen grains, revealed prominent environmental changes in the Aral Sea and in the catchment area. The diversity and the distribution of dinoflagellate cysts within the assemblages characterize the sequence of salinity and lake-level changes during the past 2000 years. Owing to the strong dependence of the Aral Sea hydrology on inputs from its tributaries, the lake levels are ultimately linked to fluctuations in meltwater discharges during spring. As the amplitude of glacial meltwater inputs is largely controlled by temperature variations in the Tien Shan and Pamir Mountains during the melting season, salinity and lake-level changes of the Aral Sea reflect temperature fluctuations in the high catchment area during the past 2000 years. Dinoflagellate cyst assemblages document lake lowstands and hypersaline conditions during ca. 0–425 AD, 920–1230 AD, 1500 AD, 1600–1650 AD, 1800 AD and since the 1960s, whereas oligosaline conditions and higher lake levels prevailed during the intervening periods. In addition, reworked dinoflagellate cysts from Palaeogene and Neogene deposits proved to be a valuable proxy for extreme sheet-wash events, when precipitation is enhanced over the Aral Sea Basin, as during 1230–1450 AD. We propose that the recorded environmental changes are related primarily to climate, but may have been amplified during extreme conditions by human-controlled irrigation activities or military conflicts. Additionally, salinity levels and variations in solar activity show striking similarities over the past millennium, as during 1000–1300 AD, 1450–1550 AD and 1600–1700 AD, when low lake levels match well with increased solar activity, suggesting that an increase in the net radiative forcing reinforced past regressions of the Aral Sea. Furthermore, we used pollen analyses to quantify changes in moisture conditions in the Aral Sea Basin. High-resolution reconstructions of precipitation (mean annual) and temperature (mean annual, coldest versus warmest month) were performed using the “probability mutual climatic spheres” method, providing the sequence of climate change for the past 2000 years in western Central Asia. Cold and arid conditions prevailed during ca. 0–400 AD, 900–1150 AD and 1500–1650 AD, with the extension of xeric vegetation dominated by steppe elements. Conversely, warmer and less arid conditions occurred during ca. 400–900 AD and 1150–1450 AD, when the steppe vegetation was enriched in plants requiring moister conditions. Changes in the precipitation pattern over the Aral Sea Basin are shown to be predominantly controlled by the Eastern Mediterranean (EM) cyclonic system, which provides humidity to the Middle East and western Central Asia during winter and early spring. As the EM is significantly regulated by pressure modulations of the North Atlantic Oscillation (NAO) when that system is in a negative phase, a relationship between humidity over western Central Asia and the NAO is proposed.
In addition, the laminated sediments record shifts in sedimentary processes during the late Holocene that reflect pronounced changes in taphonomic dynamics. In Central Asia, the frequency of dust storms, which occur during spring when the continent is heating up, is mostly controlled by the intensity and the position of the Siberian High (SH) pressure system. Using the titanium (Ti) content of the laminated sediments as a proxy for aeolian detrital inputs, changes in wind dynamics over Central Asia are documented for the past 1500 years, offering the longest reconstruction of SH variability to date. Based on high Ti content, stronger wind dynamics are reported for 450–700 AD, 1210–1265 AD, 1350–1750 AD and 1800–1975 AD, indicating a stronger SH during spring. In contrast, lower Ti content during 1750–1800 AD and 1980–1985 AD reflects a diminished influence of the SH and a reduced atmospheric circulation. During 1180–1210 AD and 1265–1310 AD, considerably weakened atmospheric circulation is evidenced. As a whole, although climate dynamics controlled the environmental changes and ultimately modulated the western Central Asian climate system, it is likely that changes in solar activity also had an impact by influencing, to some extent, the Aral Sea’s hydrological balance as well as regional temperature patterns in the past. The appendix of the thesis is provided via the HTML document as a ZIP download.
With the increasing number of applications in Internet and mobile environments, distributed software systems are required to be more powerful and flexible, especially in terms of dynamism and security. This dissertation describes my work concerning three aspects: dynamic reconfiguration of component software, security control on middleware applications, and dynamic composition of web services. Firstly, I proposed a technology named Routing Based Workflow (RBW) to model the execution and management of collaborative components and to realize temporary binding for component instances. Temporary binding means that component instances are temporarily loaded into a created execution environment to execute their functions, and then released back to their repository after execution. Temporary binding makes it possible to create an idle execution environment for all collaborative components, on which change operations can be carried out immediately. Changes to the execution environment result in a new collaboration of all involved components and also greatly simplify the classical issues arising from dynamic changes, such as consistency preservation. To demonstrate the feasibility of RBW, I created a dynamic secure middleware system, the Smart Data Server Version 3.0 (SDS3). In SDS3, an open-source implementation of CORBA is adopted and modified as the communication infrastructure, and three security components managed by RBW are created to enhance the security of access to the deployed applications. SDS3 offers multi-level security control on its applications, from strategy control to application-specific detail control. Thanks to the management by RBW, the strategy control of SDS3 applications can be changed dynamically by reorganizing the collaboration of the three security components. In addition, I created the Dynamic Services Composer (DSC) based on the Apache open-source projects Axis and WSIF. In DSC, RBW is employed to model the interaction and collaboration of web services and to enable dynamic changes to the flow structure of web services. Finally, overall performance tests were made to evaluate the efficiency of the developed RBW and SDS3. The results demonstrate that the temporary binding of component instances has only a slight impact on the execution efficiency of components, and that the blackout time arising from dynamic changes can be greatly reduced.
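A minimal sketch of the temporary-binding idea described above (hypothetical Python class and method names chosen for illustration; the thesis realizes this in SDS3 on top of CORBA):

```python
class ComponentRepository:
    """Holds idle component instances; illustrative stand-in for RBW's repository."""
    def __init__(self):
        self._instances = {}

    def register(self, name, instance):
        self._instances[name] = instance

    def acquire(self, name):
        return self._instances[name]          # ownership stays with the repository


class ExecutionEnvironment:
    """Components are bound only for the duration of one routed execution."""
    def __init__(self, repository, routing):
        self.repository = repository
        self.routing = routing                # ordered component names, editable while idle

    def run(self, request):
        for name in self.routing:             # temporary binding: load, execute, release
            component = self.repository.acquire(name)
            request = component.process(request)
        return request                        # after the run the environment is idle again
```

Because components are bound to the environment only while a single routed execution is in progress, the routing can be edited on an idle environment, which is what keeps reconfiguration free of the usual consistency problems.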
The subject of this work is the possibility of synchronizing nonlinear systems via correlated noise and via automatic control. The thesis is divided into two parts. The first part is motivated by field studies on feral sheep populations on two islands of the St. Kilda archipelago, which revealed strong correlations due to environmental noise. For a linear system the population correlation equals the noise correlation (Moran effect). However, there exists no systematic examination of the properties of nonlinear maps under the influence of correlated noise. Therefore, in the first part of this thesis the noise-induced correlation of logistic maps is systematically examined. For small noise intensities it can be shown analytically that the correlation of quadratic maps in the fixed-point regime is always smaller than or equal to the noise correlation. In the period-2 regime a Markov model qualitatively explains the main dynamical characteristics. Furthermore, two different mechanisms are introduced which lead to a correlation of the systems higher than the environmental correlation. The new effect of "correlation resonance" is described, i.e. the correlation exhibits a maximum as a function of the noise intensity. In the second part of the thesis an automatic control method is presented which synchronizes different systems in a robust way. This method is inspired by phase-locked loops and is based on a feedback loop with a differential control scheme, which makes it possible to change the phases of the controlled systems. The effectiveness of the approach is demonstrated for the controlled phase synchronization of regular oscillators and food-web models.
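A minimal numerical sketch of the setting studied in the first part (illustrative parameter values, not those of the thesis): two logistic maps driven by correlated noise, with the resulting state correlation compared to the noise correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
r, c, sigma, n = 3.8, 0.5, 0.01, 20000       # map parameter, noise correlation, noise strength

# Correlated noise: xi_i = sqrt(c)*common + sqrt(1-c)*private  =>  corr(xi_1, xi_2) = c
common = rng.standard_normal(n)
xi = [np.sqrt(c) * common + np.sqrt(1 - c) * rng.standard_normal(n) for _ in range(2)]

x = np.full(2, 0.3)
traj = np.empty((n, 2))
for t in range(n):
    noise = sigma * np.array([xi[0][t], xi[1][t]])
    x = np.clip(r * x * (1 - x) + noise, 0.0, 1.0)   # two noisy logistic maps
    traj[t] = x

print("noise correlation:", c)
print("state correlation:", np.corrcoef(traj[1000:, 0], traj[1000:, 1])[0, 1])
```

Scanning `sigma` in such a setup is the kind of experiment in which a correlation maximum as a function of noise intensity ("correlation resonance") can be searched for.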
I perform and analyse the first ever calculations of rotating stellar iron core collapse in 3+1 general relativity that start out from presupernova models obtained in stellar evolutionary calculations and include a microphysical finite-temperature nuclear equation of state, an approximate scheme for electron capture during collapse, and neutrino pressure effects. Based on the results of these calculations, I obtain the most realistic estimates to date for the gravitational wave signal from collapse, bounce and the early postbounce phase of core collapse supernovae. I supplement my 3+1 GR hydrodynamic simulations with 2D Newtonian neutrino radiation-hydrodynamic supernova calculations focussing on (1) the late postbounce gravitational wave emission owing to convective overturn, anisotropic neutrino emission and protoneutron star pulsations, and (2) the gravitational wave signature of the accretion-induced collapse of white dwarfs to neutron stars.
Day length is one of several parameters controlling flowering time in many plant species. Day length is perceived in the leaves, but how the floral signal is transduced via the phloem to the shoot apex to induce flowering remains to be elucidated. This study aimed at the identification of new candidates involved in the induction of flowering by employing three plant species, Arabidopsis thaliana, Sinapis alba and Brassica napus, in combination with transcript profiling by Affymetrix chip hybridization, metabolite profiling by gas chromatography–mass spectrometry, and targeted protein analysis using antibodies. All analyses were performed on tissue-specific samples and focused on phloem sap or phloem exudates. To find common transcript and metabolite candidates potentially associated with the floral transition, two independent induction systems in Arabidopsis were used: a photoextension system, whereby plants received fourteen additional hours of light, and a parallel dexamethasone-inducible system centered on the induction of the known flowering gene CONSTANS (CO). Identification of signals preceding the CO cascade was possible using the light-extension regime, while downstream events dependent on CO activation were compared in both systems. Altogether, a number of interesting transcript and metabolite candidates were identified in both systems, with some degree of overlap. Sinapis alba was used to investigate the universality of the floral signals between species. Comparisons of metabolite data revealed a few common candidates that may prove interesting for further studies. In addition, a targeted approach was carried out to investigate the presence of the Flowering Locus T (FT) protein during different stages of flower development using an antibody. Interesting changes in the sizes of antigens from rape phloem were seen and appeared consistent in Arabidopsis and, to a lesser extent, in Sinapis. Overall, the broad surveying approaches for transcripts and metabolites used in this study revealed several new potential candidates involved in the induction and/or regulation of flowering. As for the protein work, additional experiments will be needed to reveal the link between FT and floral induction as well as its role in maintaining the floral state in the abovementioned plant species.
Semiclassical asymptotics for the scattering amplitude in the presence of focal points at infinity
(2006)
We consider scattering in $\mathbb{R}^n$, $n\ge 2$, described by the Schrödinger operator $P(h)=-h^2\Delta+V$, where $V$ is a short-range potential. With the aid of Maslov theory, we give a geometrical formula for the semiclassical asymptotics as $h\to 0$ of the scattering amplitude $f(\omega_-,\omega_+;\lambda,h)$ ($\omega_+\neq\omega_-$) which remains valid in the presence of focal points at infinity (caustics). Crucial for this analysis are precise estimates on the asymptotics of the classical phase trajectories and the relationship between caustics in Euclidean phase space and caustics at infinity.
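Schematically, a Maslov-theory formula of this kind expresses the amplitude as a sum over the classical trajectories connecting the incoming direction $\omega_-$ with the outgoing direction $\omega_+$ at energy $\lambda$ (a generic stationary-phase form for orientation, not the precise formula of this work):

\[
f(\omega_-,\omega_+;\lambda,h) \;\sim\; \sum_{j} |D_j|^{1/2}\, e^{\,i S_j/h \,-\, i\pi\mu_j/2}, \qquad h\to 0,
\]

where $S_j$ is the classical action along the $j$-th trajectory, $D_j$ a Jacobian factor measuring the spreading of nearby trajectories, and $\mu_j$ a Maslov index counting the focal points passed; focal points at infinity enter precisely through these indices.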
Earthquakes form by sudden brittle failure of rock, mostly as shear ruptures along a rupture plane. Besides this, mechanisms other than pure shearing have been observed for some earthquakes, mainly in volcanic areas. Possible explanations include complex rupture geometries and tensile earthquakes. Tensile earthquakes occur by the opening or closure of cracks during rupturing. They are likely to be connected in many cases with fluids that cause pressure changes in the pore space of rocks, leading to earthquake triggering. Tensile components have been reported for swarm earthquakes in West Bohemia in 2000. The aim and subject of this work is the assessment and accurate determination of such tensile components for earthquakes in anisotropic media. Currently used standard techniques for the retrieval of earthquake source mechanisms assume isotropic rock properties. By means of moment tensors, equivalent forces acting at the source are used to explain the radiated wavefield. However, seismic anisotropy, i.e. the directional dependence of elastic properties, has been observed in the Earth's crust and mantle, for example in West Bohemia. In comparison to isotropy, anisotropy causes modifications in wave amplitudes and shear-wave splitting. In this work, the effects of seismic anisotropy on true or apparent tensile source components of earthquakes are investigated. In addition, earthquake source parameters are determined with anisotropy taken into account. It is shown that moment tensors and radiation patterns due to shear sources in anisotropic media may be similar to those of tensile sources in isotropic media. Conversely, similarities between tensile earthquakes in anisotropic rocks and shear sources in isotropic media may exist. As a consequence, the interpretation of tensile source components is ambiguous. The effects due to anisotropy depend on the orientation of the earthquake source and the degree of anisotropy. The moment of an earthquake is also influenced by anisotropy. The orientation of fault planes can be reliably determined even if isotropy instead of anisotropy is assumed, provided the spectra of the compressional waves are used. Greater difficulties may arise when the spectra of split shear waves are additionally included; the retrieved moment tensors then show systematic artefacts. Observed tensile source components determined for events in West Bohemia in 1997 can only partly be attributed to the effects of moderate anisotropy. Furthermore, moment tensors determined earlier for earthquakes induced at the German Continental Deep Drilling Program (KTB), Bavaria, were reinterpreted under the assumption of anisotropic rock properties near the borehole. The events can be consistently identified as shear sources, although their moment tensors comprise tensile components that are considered to be apparent. These results emphasise the necessity of considering anisotropy in order to determine tensile source parameters uniquely. Therefore, a new inversion algorithm has been developed, tested, and successfully applied to 112 earthquakes that occurred during the most recent intense swarm episode in West Bohemia in 2000 at the German-Czech border. Their source mechanisms have been retrieved using isotropic and anisotropic velocity models. The determined local magnitudes are in the range between 1.6 and 3.2. The fault-plane solutions are similar to each other and characterised by left-lateral faulting on steeply dipping, roughly north-south oriented rupture planes. Their dip angles decrease above a depth of about 8.4 km.
Tensile source components indicating positive volume changes are found for more than 60% of the considered earthquakes. Their size depends on source time and location. They are significant at the beginning of the swarm and at depths below 8.4 km, but decrease in importance later in the course of the swarm. The determined principal stress axes include P axes striking northeast and T axes striking southeast. They resemble those found earlier in Central Europe. However, a depth dependence of the plunge is observed. The plunge angles of the P axes decrease gradually from 50° towards shallow angles with increasing depth. In contrast, the plunge angles of the T axes change rapidly from about 8° above a depth of 8.4 km to 21° below this depth. In this thesis, spatial and temporal variations in tensile source components and stress conditions are reported for the first time for the swarm earthquakes in West Bohemia in 2000. They persist when anisotropy is assumed and can be explained by the intrusion of fluids into the opening cracks during tensile faulting.
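For orientation, the standard moment-tensor description underlying this ambiguity reads as follows (textbook source representation, not the inversion formulas of the thesis). For a dislocation source with slip vector $\bar{u}$, fault normal $n$ and fault area $A$,

\[
M_{pq} = A\, c_{pqkl}\, \bar{u}_k\, n_l ,
\]

which in an isotropic medium reduces to $M_{pq} = A\mu(\bar{u}_p n_q + \bar{u}_q n_p)$ for pure shear slip ($\bar{u}\perp n$) and to $M_{pq} = A\bar{u}(\lambda\delta_{pq} + 2\mu\, n_p n_q)$ for pure tensile opening ($\bar{u}\parallel n$). In an anisotropic medium the stiffness tensor $c_{pqkl}$ has up to 21 independent constants, so even a pure shear dislocation can acquire isotropic and CLVD moment-tensor components that mimic tensile faulting, which is exactly the ambiguity investigated above.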
An increasing number of applications requires user interfaces that facilitate the handling of large geodata sets. Using virtual 3D city models, complex geospatial information can be communicated visually in an intuitive way. Therefore, real-time visualization of virtual 3D city models represents a key functionality for the interactive exploration, presentation, analysis, and manipulation of geospatial data. This thesis concentrates on the development and implementation of concepts and techniques for real-time city model visualization. It discusses rendering algorithms as well as complementary modeling concepts and interaction techniques. In particular, the work introduces a new real-time rendering technique to handle city models of high complexity with respect to texture size and number of textures. Such models are difficult to handle with current technology, primarily due to two problems:
- Limited texture memory: The amount of simultaneously usable texture data is limited by the memory of the graphics hardware.
- Limited number of textures: Using several thousand different textures simultaneously causes significant performance problems due to texture switch operations during rendering.
The multiresolution texture atlases approach introduced in this thesis overcomes both problems. During rendering, it permanently maintains a small set of textures that is sufficient for the current view and the available screen resolution. The efficiency of multiresolution texture atlases is evaluated in performance tests. To summarize, the results demonstrate that the following goals have been achieved:
- Real-time rendering becomes possible for 3D scenes whose amount of texture data exceeds the main memory capacity.
- The overhead due to texture switches is kept permanently low, so that the number of different textures has no significant effect on the rendering frame rate.
Furthermore, this thesis introduces two new approaches for real-time city model visualization that use textures as core visualization elements:
- an approach for the visualization of thematic information, and
- an approach for the illustrative visualization of 3D city models.
Both techniques demonstrate that multiresolution texture atlases provide a basic functionality for the development of new applications and systems in the domain of city model visualization.
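A small sketch of the view-dependent resolution budget at the heart of such an approach (a simplified, hypothetical Python illustration; the thesis implements this on graphics hardware with texture atlases): each texture needs to be kept resident only at the mipmap resolution that its projected screen footprint requires.

```python
import math

def required_mip_level(texture_size, projected_pixels):
    """Coarsest mipmap level whose resolution still matches the projected screen footprint."""
    max_level = int(math.log2(texture_size))        # coarsest level is 1x1 texels
    if projected_pixels <= 0:
        return max_level                            # texture currently invisible
    level = int(math.log2(texture_size / projected_pixels))
    return max(0, min(level, max_level))

# A 1024^2 facade texture covering only ~80 pixels on screen can be held at level 3
# (i.e. 128^2 texels) in the atlas instead of at full resolution.
print(required_mip_level(1024, 80))
```

Keeping only such reduced versions in a shared atlas bounds the total resident texture data by the screen resolution and avoids per-object texture switches, matching the two goals listed above.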
The first goal of the present work addresses the need of the Global Change and Financial Transition (GFT) working group at the Potsdam Institute for Climate Impact Research (PIK) for different rationing methods: I provide a toolbox which contains a variety of rationing methods to be applied to micro-economic disequilibrium models of the lagom model family. This toolbox consists of well-known rationing methods and of rationing methods provided specifically for lagom. To ensure easy application, the toolbox is constructed in a modular fashion. The second goal of the present work is to present a micro-economic labour market in which heterogeneous labour suppliers experience consecutive job opportunities and need to decide whether to apply for employment. The labour suppliers are heterogeneous with respect to their qualifications and their beliefs about the application behaviour of their competitors. They learn simultaneously, in Bayesian fashion, about their individual perceived probability of obtaining employment conditional on application (PPE) by observing each other's application behaviour over a cycle of job opportunities.
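A minimal sketch of this kind of Bayesian belief updating (a generic Beta-Bernoulli illustration with made-up outcomes, not the lagom labour-market model itself): a supplier keeps a Beta distribution over the PPE and updates it after each observed application outcome.

```python
# Generic Beta-Bernoulli updating of a perceived probability of employment (PPE).
# Illustrative only: in the model, many heterogeneous suppliers learn simultaneously.

def update(alpha, beta, hired):
    """Beta posterior after observing one application outcome."""
    return (alpha + 1.0, beta) if hired else (alpha, beta + 1.0)

alpha, beta = 1.0, 1.0                                 # uniform prior over the PPE
for hired in [True, False, False, True, True]:         # outcomes over a cycle of job opportunities
    alpha, beta = update(alpha, beta, hired)
    print("posterior mean PPE: %.3f" % (alpha / (alpha + beta)))
```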
The advent of large-scale and high-throughput technologies has recently caused a shift in focus in contemporary biology from decades of reductionism towards a more systemic view. Alongside the availability of genome sequences, the exploration of organisms utilizing such approaches should give rise to a more comprehensive understanding of complex systems. Domestication and intensive breeding of crop plants have led to a parallel narrowing of their genetic basis. The potential to improve crops by conventional breeding using elite cultivars is therefore rather limited, and molecular technologies such as marker-assisted selection (MAS) are currently being exploited to re-introduce allelic variance from wild species. Molecular breeding strategies have to date mostly focused on the introduction of yield- or resistance-related traits. However, given that medical research has highlighted the importance of crop compositional quality in the human diet, this research field is rapidly becoming more important. The chemical composition of biological tissues can be efficiently assessed by metabolite profiling techniques, which allow the multivariate detection of metabolites in a given biological sample. Here, a GC/MS metabolite profiling approach has been applied to investigate the natural variation of tomatoes with respect to the chemical composition of their fruits. The establishment of a mass spectral and retention index (MSRI) library was a prerequisite for this work in order to provide a framework for the identification of metabolites from a complex mixture. As mass spectral and retention index information is highly important for the metabolomics community, this library was made publicly available. Metabolite profiling of tomato wild species revealed large differences in chemical composition, especially of amino and organic acids, as well as in the sugar composition and secondary metabolites. Intriguingly, the analysis of a set of S. pennellii introgression lines (IL) identified 889 quantitative trait loci of compositional quality and 326 yield-associated traits. These traits are characterized by increases/decreases not only of single metabolites but also of entire metabolic pathways, highlighting the potential of this approach in uncovering novel aspects of metabolic regulation. Finally, the biosynthetic pathway of the phenylalanine-derived fruit volatiles phenylethanol and phenylacetaldehyde was elucidated via a combination of metabolite profiling of natural variation, stable isotope tracer experiments and reverse genetic experimentation.
Stars are born in turbulent molecular clouds that fragment and collapse under the influence of their own gravity, forming clusters of a hundred or more stars. The star formation process is controlled by the interplay between supersonic turbulence and gravity. In this work, the properties of stellar clusters created in numerical simulations of gravoturbulent fragmentation are compared to those from observations. This includes the analysis of the properties of individual protostars as well as statistical properties of the entire cluster. It is demonstrated that protostellar mass accretion is a highly dynamical and time-variant process. The peak accretion rate is reached shortly after the formation of the protostellar core. It is about one order of magnitude higher than the constant accretion rate predicted by the collapse of a classical singular isothermal sphere, in agreement with the observations. For a more reasonable comparison, the model accretion rates are converted to the observables bolometric temperature, bolometric luminosity, and envelope mass. The accretion rates from the simulations are used as input for an evolutionary scheme. The resulting distribution in the Tbol-Lbol-Menv parameter space is then compared to observational data by means of a 3D Kolmogorov-Smirnov test. The highest probability found that the model tracks and the observational data points are drawn from the same population is 70%. The ratios of objects belonging to different evolutionary classes in observed star-forming clusters are compared to the temporal evolution of the gravoturbulent models in order to estimate the evolutionary stage of a cluster. While it is difficult to estimate absolute ages, the relative numbers of young stars reveal the evolutionary status of a cluster with respect to other clusters. The resulting sequence shows Serpens as the youngest and IC 348 as the most evolved of the investigated clusters. Finally, the structures of young star clusters are investigated by applying different statistical methods, such as the normalised mean correlation length and the minimum spanning tree technique, and by a newly defined measure for the cluster elongation. The clustering parameters of the model clusters correspond in many cases well to those of observed clusters. The temporal evolution of the clustering parameters shows that the star cluster builds up from several subclusters and evolves towards a more centrally concentrated cluster, while the cluster expands more slowly than new stars are formed.
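A short sketch of the two cluster-structure measures named above (a generic SciPy implementation on mock positions, not the analysis code of the thesis): the mean pairwise separation, which underlies the mean correlation length, and the mean edge length of the minimum spanning tree.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
positions = rng.random((200, 2))              # mock projected star positions

separations = pdist(positions)                # all pairwise distances
mean_separation = separations.mean()          # basis of the mean correlation length

mst = minimum_spanning_tree(squareform(separations))
mean_mst_edge = mst.data.mean()               # the N-1 edges of the minimum spanning tree

print("mean pairwise separation:", mean_separation)
print("mean MST edge length:    ", mean_mst_edge)
```

Normalizing both quantities by the cluster radius makes them comparable between model and observed clusters of different sizes, which is how such parameters can trace the evolution from subclustered towards centrally concentrated configurations.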
Ontogeny of leptin signalling in the rat hypothalamus: Evidence for selective leptin insensitivity
(2006)
Since their discovery in 1610 by Galileo Galilei, Saturn's rings have continued to fascinate both experts and amateurs. Countless icy grains in almost Keplerian orbits reveal a wealth of structures such as ringlets, voids and gaps, wakes and waves, and many more. Grains are found to increase in size with increasing radial distance from Saturn. Recently discovered "propeller" structures in the Cassini spacecraft data provide evidence for the existence of embedded moonlets. In the wake of these findings, the discussion about the origin and evolution of planetary rings, and about growth processes in tidal environments, has resumed. In this thesis, a contact model for binary adhesive, viscoelastic collisions is developed that accounts for agglomeration as well as restitution. Collisional outcomes are crucially determined by the impact speed and the masses of the collision partners; the model yields a maximal impact velocity at which agglomeration still occurs. Based on the latter, a self-consistent kinetic concept is proposed. The model considers all possible collisional outcomes, namely coagulation, restitution, and fragmentation. Emphasizing the evolution of the mass spectrum and concentrating further on coagulation alone, a coagulation equation including a restricted sticking probability is derived. The otherwise phenomenological Smoluchowski equation is thereby reproduced from basic principles and constitutes a limiting case of the derived coagulation equation. The relevance of adhesion to force-free granular gases and to those under the influence of Keplerian shear is analyzed qualitatively and quantitatively. Capture probability, agglomerate stability, and the evolution of the mass spectrum are investigated in the context of adhesive interactions. A size-dependent radial limit distance from the central planet is obtained, refining the Roche criterion. Furthermore, the capture probability in the presence of adhesion generally differs from the case of pure gravitational capture. In contrast to a Smoluchowski-type evolution of the mass spectrum, numerical simulations of the obtained coagulation equation revealed that a transition from smaller grains to larger bodies cannot occur via a collisional cascade alone. For the parameters used in this study, effective growth ceases at an average size of the order of centimeters.
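For reference, the classical Smoluchowski coagulation equation recovered here as a limiting case has the standard form (generic textbook notation, not the extended equation derived in the thesis):

\[
\frac{\partial n_k}{\partial t} \;=\; \frac{1}{2}\sum_{i+j=k} K_{ij}\, n_i\, n_j \;-\; n_k \sum_{j\ge 1} K_{kj}\, n_j ,
\]

where $n_k$ is the number density of agglomerates of mass $k$ and $K_{ij}$ the collision kernel. A restricted sticking probability of the kind derived above would enter as a velocity-dependent factor in $K_{ij}$, suppressing coagulation for impacts faster than the maximal agglomeration velocity.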
Uncertainties are pervasive in Earth system modelling. This is not just due to a lack of knowledge about physical processes but has its seeds in intrinsic, i.e. inevitable and irreducible, uncertainties concerning the process of modelling as well. Therefore, it is indispensable to quantify uncertainty in order to determine which results are robust under this inherent uncertainty. The central goal of this thesis is to explore how uncertainties map onto the properties of interest, such as the phase space topology and the qualitative dynamics of the system. We address several types of uncertainty and apply methods of dynamical systems theory to a trendsetting field of climate research, the Indian monsoon. For the systematic analysis of the different facets of uncertainty, a box model of the Indian monsoon is investigated, which shows a saddle-node bifurcation with respect to those parameters that influence the heat budget of the system; the bifurcation goes along with a regime shift from a wet to a dry summer monsoon. As some of these parameters are crucially influenced by anthropogenic perturbations, the question is, first, whether the occurrence of this bifurcation is robust against uncertainties in the parameters and in the number of considered processes and, second, whether the bifurcation can be reached under climate change. The results indicate, for example, that the bifurcation point is robust against all considered parameter uncertainties, while reaching the critical point under climate change seems rather improbable. A novel method is applied for analysing the occurrence and the position of the bifurcation point in the monsoon model under parameter uncertainties. This method combines two standard approaches: a bifurcation analysis and multi-parameter ensemble simulations. As a model-independent and therefore universal procedure, it allows investigating the uncertainty of a bifurcation in a high-dimensional parameter space in many other models. With the monsoon model, the uncertainty about the external influence of the El Niño/Southern Oscillation (ENSO) is determined. There is evidence that ENSO influences the variability of the Indian monsoon, but the underlying physical mechanism is discussed controversially. As a contribution to this debate, three different hypotheses of how ENSO and the Indian summer monsoon are linked are tested. In this thesis, the coupling through the trade winds is identified as the key mechanism linking these two major climate constituents. On the basis of this physical mechanism, the observed monsoon rainfall data can be reproduced to a great extent. Moreover, the mechanism can be identified in two general circulation models (GCMs) for the present-day situation and for future projections under climate change. Furthermore, uncertainties in the process of coupling models are investigated, with the focus on a comparison of forced dynamics as opposed to fully coupled dynamics. The former describes a particular type of coupling in which the dynamics of one sub-module is substituted by data. Intrinsic uncertainties and constraints are identified that prevent the consistency of a forced model with its fully coupled counterpart. Qualitative discrepancies between the two modelling approaches are highlighted, which lead to an overestimation of predictability and produce artificial predictability in the forced system.
The results suggest that bistability and intermittent predictability, when found in a forced model set-up, should always be cross-validated with alternative coupling designs before being taken for granted. All in all, this thesis contributes to the fundamental issue of dealing with the uncertainties that the climate modelling community is confronted with. Although some uncertainties can be included in the interpretation of the model results, intrinsic uncertainties were identified that are inevitable within a certain modelling paradigm and are provoked by the specific modelling approach.
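A schematic sketch of the combined method (a toy one-dimensional model with a saddle-node bifurcation, not the monsoon box model; all numbers are made up): each ensemble member draws its uncertain parameter, a bifurcation analysis locates the critical value of the control parameter, and the ensemble yields the spread of the bifurcation point.

```python
import numpy as np

rng = np.random.default_rng(3)

def critical_a(b, a_grid):
    """Toy model dx/dt = (a - b) - x**2: equilibria x = +-sqrt(a - b) exist for a >= b,
    so the saddle-node sits at a = b; the scan mimics a numerical bifurcation analysis."""
    for a in a_grid:
        if a - b >= 0.0:
            return a
    return np.nan

a_grid = np.linspace(-1.0, 1.0, 4001)               # scanned control parameter
ensemble_b = rng.normal(0.0, 0.1, 500)              # ensemble of the uncertain parameter

a_crit = np.array([critical_a(b, a_grid) for b in ensemble_b])
print("bifurcation point: mean %.3f, spread %.3f" % (np.nanmean(a_crit), np.nanstd(a_crit)))
```

A robustness statement like the one above corresponds to a small spread of the ensemble of bifurcation points relative to the range the control parameter can realistically reach.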
The properties of a series of well-defined new surfactant oligomers (dimers to tetramers) were examined. From a molecular point of view, these oligomeric surfactants consist of simple monomeric cationic surfactant fragments coupled via the hydrophilic ammonium chloride head groups by spacer groups (different in nature and length). Properties of these cationic surfactant oligomers in aqueous solution, such as solubility, micellization and surface activity, micellar size and aggregation number, were discussed with respect to the two new molecular variables introduced, i.e. the degree of oligomerization and the spacer group, in order to establish structure–property relationships. Thus, increasing the degree of oligomerization results in a pronounced decrease of the critical micellization concentration (CMC). Both reduced spacer length and increased spacer hydrophobicity lead to a decrease of the CMC, but to a lesser extent. For these particular compounds, the micelles formed are relatively small, and their aggregation number decreases with increasing degree of oligomerization, increasing spacer length and steric hindrance. In addition, pseudo-phase diagrams were established for the dimeric surfactants in more complex systems, namely inverse microemulsions, demonstrating again the important influence of the spacer group on the surfactant behaviour. Furthermore, the influence of additives on the property profile of the dimeric compounds was examined, in order to see whether the solution properties can be improved while using less material. Strong synergistic effects were observed upon adding special organic salts (e.g. sodium salicylate, sodium vinyl benzoate, etc.) to the surfactant dimers in stoichiometric amounts. For such mixtures, the critical aggregation concentration is strongly shifted to lower concentrations, the effect being more pronounced for dimers than for the analogous monomers. A sharp decrease of the surface tension can also be attained. Many of the organic anions produce viscoelastic solutions when added to the relatively short-chain dimers in aqueous solution, as evidenced by rheological measurements. This behaviour reflects the formation of entangled wormlike micelles due to strong interactions of the anions with the cationic surfactants, which decrease the curvature of the micellar aggregates. It is found that the associative behaviour is enhanced by dimerization. For a given counterion, the spacer group may also induce a stronger viscosifying effect depending on its length and hydrophobicity. In addition, oppositely charged surfactants were combined with the cationic dimers. First, mixtures with the conventional anionic surfactant SDS revealed vesicular aggregates in solution. Also, in view of these catanionic mixtures, a novel anionic dimeric surfactant based on EDTA was synthesized and studied. The synthesis route is relatively simple, and the compound exhibits particularly appealing properties such as low CMC and σCMC values, good solubilization capacity for hydrophobic probes and high tolerance to hard water. Notably, mixtures with particular cationic dimers gave rise to viscous solutions, reflecting micellar growth.
Nonaqueous synthesis of metal oxide nanoparticles and their assembly into mesoporous materials
(2006)
This thesis mainly consists of two parts: the synthesis of several kinds of technologically interesting crystalline metal oxide nanoparticles via a nonaqueous sol-gel process, and the formation of mesoporous metal oxides using some of these nanoparticles as building blocks via the evaporation-induced self-assembly (EISA) technique. In the first part, the experimental procedures and characterization results of the successful syntheses of crystalline tin oxide and tin-doped indium oxide (ITO) nanoparticles are reported. The SnO2 nanoparticles exhibit a monodisperse particle size (3.5 nm on average), high crystallinity and particularly high dispersibility in THF, which makes them an ideal particulate precursor for the formation of mesoporous SnO2. The ITO nanoparticles possess uniform particle morphology, a narrow particle size distribution (5-10 nm), high crystallinity as well as high electrical conductivity. The synthesis approaches and characterization of various mesoporous metal oxides, including TiO2, SnO2, a mixture of CeO2 and TiO2, and a mixture of BaTiO3 and SnO2, are reported in the second part of this thesis. Mesoporous TiO2 and SnO2 are presented as the highlights of this part. Mesoporous TiO2 was produced in the form of both films and bulk material. In the case of mesoporous SnO2, the study focused on the high order of the porous structure. All these mesoporous metal oxides show high crystallinity, high surface area and rather monodisperse pore sizes, which demonstrates the validity of the EISA process and the use of preformed crystalline nanoparticles as nanobuilding blocks (NBBs) to produce mesoporous metal oxides.
Polyelectrolyte microcapsules containing stimuli-responsive polymers have potential applications in the fields of sensors and actuators, stimulable microcontainers and controlled drug delivery. Such capsules were prepared with a focus on pH-sensitivity and carbohydrate sensing. First, pH-responsive polyelectrolyte capsules were produced by means of electrostatic layer-by-layer assembly of oppositely charged weak polyelectrolytes onto colloidal templates that were subsequently removed. The capsules were composed of poly(allylamine hydrochloride) (PAH) and poly(methacrylic acid) (PMA) or poly(4-vinylpyridine) (P4VP) and PMA and varied considerably in their hydrophobicity and the influence of secondary interactions. These polymers were assembled onto CaCO3 and SiO2 particles with diameters of ~5 µm, and a new method for the removal of the silica template under mild conditions was proposed. The pH-dependent stability of PAH/PMA and P4VP/PMA capsules was studied by confocal laser scanning microscopy (CLSM). The capsules were stable over a wide pH range and exhibited a pronounced swelling at the edges of this range, which was attributed to uncompensated positive or negative charges within the multilayers. The swollen state could be stabilized when the electrostatic repulsion was counteracted by hydrogen bonding, hydrophobic interactions or polymeric entanglement. This stabilization made it possible to reversibly swell and shrink the capsules by tuning the pH of the solution. The pH-dependent ionization degree of PMA was used to modulate the binding of calcium ions. In addition to the pH-sensitivity, the stability and the swelling degree of these capsules at a given pH could be modified by altering the ionic strength of the medium. The reversible swelling was accompanied by reversible permeability changes for low and high molecular weight substances. The permeability for glucose was evaluated by studying the time dependence of the buckling of the capsule walls in glucose solutions, and the reversible permeability modulation was used for the encapsulation of polymeric material. A theoretical model, taking into account an osmotic expanding force and an elastic restoring force, was proposed to explain the pH-dependent size changes of weak polyelectrolyte capsules. Second, sugar-sensitive multilayers were assembled using the reversible covalent ester formation between the polysaccharide mannan and phenylboronic acid moieties grafted onto poly(acrylic acid) (PAA). The resulting multilayer films were sensitive to several carbohydrates, showing the highest sensitivity to fructose. The response to carbohydrates resulted from the competitive binding of low molecular weight sugars and mannan to the boronic acid groups within the film, and was observed as a fast dissolution of the multilayers when they were brought into contact with a sugar-containing solution above a critical concentration. It was also possible to prepare carbohydrate-sensitive multilayer capsules, and their sugar-dependent stability was investigated by following the release of encapsulated rhodamine-labeled bovine serum albumin (TRITC-BSA).
This thesis studies strong, completely charged polyelectrolyte brushes. Extensive molecular dynamics simulations are performed on different polyelectrolyte brush systems using local compute servers and massively parallel supercomputers. The full Coulomb interaction of charged monomers, counterions, and salt ions is treated explicitly. The polymer chains are anchored by one of their ends to a uncharged planar surface. The chains are treated under good solvent conditions. Monovalent salt ions (1:1 type) are modelled same as counterions. The studies concentrate on three different brush systems at constant temperature and moderate Coulomb interaction strength (Bjerrum length equal to bond length): The first system consists of a single polyelectrolyte brush anchored with varying grafting density to a plane. Results show that chains are extended up to about 2/3 of their contour length. The brush thickness slightly grows with increasing anchoring density. This slight dependence of the brush height on grafting density is in contrast to the well known scaling result for the osmotic brush regime. That is why the result obtained by simulations has stimulated further development of theory as well as new experimental investigations on polyelectrolyte brushes. This observation can be understood on a semi-quantitative level using a simple scaling model that incorporates excluded volume effects in a free-volume formulation where an effective cross section is assigned to the polymer chain from where couterions are excluded. The resulting regime is called nonlinear osmotic brush regime. Recently this regime was also obtained in experiments. The second system studied consists of polyelectrolyte brushes with added salt in the nonlinear osmotic regime. Varying salt is an important parameter to tune the structure and properties of polyelectrolytes. Further motivation is due to a theoretical scaling prediction by Pincus for the salt dependence of brush thickness. In the high salt limit (salt concentration much larger than counterion concentration) the brush height is predicted to decrease with increasing external salt, but with a relatively weak power law showing an exponent -1/3. There is some experimental and theoretical work that confirms this prediction, but there are other results that are in contradiction. In such a situation simulations are performed to validate the theoretical prediction. The simulation result shows that brush thickness decreases with added salt, and indeed is in quite good agreement with the scaling prediction by Pincus. The relation between buffer concentration and the effective ion strength inside the brush at varying salt concentration is of interest both from theoretical as well as experimental point of view. The simulation result shows that mobile ions (counterions as well as salt) distribute nonhomogeneously inside and outside of the brush. To explain the relation between the internal ion concentration with the buffer concentration a Donnan equilibrium approach is employed. Modifying the Donnan approach by taking into account the self-volume of polyelectrolyte chains as indicated above, the simulation result can be explained using the same effective cross section for the polymer chains. The extended Donnan equilibrium relation represents a interesting theoretical prediction that should be checked by experimental data. The third system consist of two interacting polyelectrolyte brushes that are grafted to two parallel surfaces. 
The interactions between brushes are important, for instance, for the stabilization of dispersions against flocculation. In the simulations, the pressure is evaluated as a function of the separation D between the two grafting planes. The pressure shows different regimes with decreasing separation, in qualitative agreement with experimental data. At relatively weak compression, the pressure obtained in the simulations agrees with the 1/D power law predicted by scaling theory. Beyond that, the present study supplies new insight into the interaction between polyelectrolyte brushes.
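For orientation, the two theoretical relations invoked above can be written schematically (standard scaling forms with prefactors omitted, not the thesis's exact expressions). The Pincus salted-brush law and the ideal Donnan equilibrium read

$ H \propto N \left( \frac{\sigma f^2}{c_s} \right)^{1/3}, \qquad c_+^{in} \, c_-^{in} = c_s^2 \quad \text{with} \quad c_+^{in} = c_-^{in} + f c_m, $

where $H$ is the brush height, $N$ the chain length, $f$ the charge fraction, $\sigma$ the grafting density, $c_s$ the external salt concentration and $c_m$ the monomer concentration inside the brush. The $-1/3$ exponent in $c_s$ is the prediction tested in the simulations; the extended Donnan relation discussed above additionally replaces the brush volume by the volume not excluded by the chains' effective cross section.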
Metabolism of the dietary lignan secoisolariciresinol diglucoside by human intestinal bacteria
(2006)
In this work, the first observation of a new type of liquid crystal is presented: ionic self-assembly (ISA) liquid crystals, formed by introducing oppositely charged ions between different low molecular weight tectonic units. As practically all conventional liquid crystals consist of a rigid core and alkyl chains, attention is focused on the simplest case, in which oppositely charged ions are placed between a rigid core and alkyl tails. The aim of this work is to investigate and understand the liquid crystalline and alignment properties of these materials. It was found that ionic interactions within the complexes play the main role. The presence of these interactions restricts the transition to the isotropic phase. In addition, these interactions hold the system together (like a network), allowing crystallization into a single domain from the aligned LC state. The alignment of these simple ISA complexes was spontaneous on a glass substrate. In order to demonstrate their application potential, perylenediimide- and azobenzene-containing ISA complexes were investigated for correlations between phase behavior and alignment properties. The best macroscopic alignment of perylenediimide-based ISA complexes was obtained by the zone-casting method. In the aligned films, the columns of the complex align perpendicular to the phase-transition front. The obtained anisotropy (DR = 18) is thermally stable. The investigated photosensitive (azobenzene-based) ISA complexes form columnar LC phases. It was demonstrated that photo-alignment of such complexes is very effective (DR = 50 has been obtained) and that photo-reorientation in the photosensitive ISA complexes is a cooperative process. The size of the domains has a direct influence on the efficiency of the photo-reorientation process: photo-alignment is most effective for small domains. Under irradiation with linearly polarized light, domains reorient in the plane of the film, leading to macroscopic alignment of the columns parallel to the light polarization and to the merging of small domains into larger ones. Finally, additional distinguishing properties of the ISA liquid crystalline complexes should be noted: (I) the complexes do not dissolve in water but dissolve readily in organic solvents; (II) the complexes have good film-forming properties when cast or spin-coated from organic solvents; (III) the alignment of the complexes depends on their structure and on secondary interactions between the tectonic units.
Förster Resonance Energy Transfer (FRET) plays an important role in biochemical applications such as DNA sequencing, intracellular protein-protein interactions, molecular binding studies, in vitro diagnostics and many others. For qualitative and quantitative analysis, FRET systems are usually assembled through molecular recognition of biomolecules conjugated with donor and acceptor luminophores. Lanthanide (Ln) complexes, as well as semiconductor quantum dot nanocrystals (QDs), possess unique photophysical properties that make them especially suitable for applied FRET. In this work, the possibility of using QDs as very efficient FRET acceptors in combination with Ln complexes as donors in biochemical systems is demonstrated. The necessary theoretical and practical background of FRET, Ln complexes, QDs and the applied biochemical models is outlined. In addition, scientific as well as commercial applications are presented. FRET can be used to measure structural changes or dynamics at distances ranging from approximately 1 to 10 nm. The very strong and well characterized binding process between streptavidin (Strep) and biotin (Biot) is used as a biomolecular model system. A FRET system is established by Strep conjugation with the Ln complexes and QD biotinylation. Three Ln complexes (one with Tb3+ and two with Eu3+ as central ion) are used as FRET donors. Besides the QDs, two further acceptors, the luminescent crosslinked protein allophycocyanin (APC) and a commercial fluorescence dye (DY633), are investigated for direct comparison. FRET is demonstrated for all donor-acceptor pairs by acceptor emission sensitization and, in the case of QDs, a more than 1000-fold increase of the luminescence decay time, reaching the hundred-microsecond regime. Detailed photophysical characterization of donors and acceptors permits analysis of the bioconjugates and calculation of the FRET parameters. Extremely large Förster radii of more than 100 Å are achieved for QDs as acceptors, considerably larger than for APC and DY633 (ca. 80 and 60 Å). Special attention is paid to interactions with different additives in aqueous solutions, namely borate buffer, bovine serum albumin (BSA), sodium azide and potassium fluoride (KF). A more than 10-fold decrease in the limit of detection (LOD), compared to the extensively characterized and frequently used donor-acceptor pair of europium tris(bipyridine) (Eu-TBP) and APC, is demonstrated for the FRET system consisting of the Tb complex and QDs. A sub-picomolar LOD for QDs is achieved with this system in azide-free borate buffer (pH 8.3) containing 2 % BSA and 0.5 M KF. In order to transfer the Strep-Biot model system to a real-life in vitro diagnostic application, two kinds of immunoassays are investigated using human chorionic gonadotropin (HCG) as analyte. HCG itself, as well as two monoclonal anti-HCG mouse-IgG (immunoglobulin G) antibodies, are labeled with the Tb complex and QDs, respectively. Although no sufficient evidence for FRET can be found for a sandwich assay, FRET becomes obvious in a direct HCG-IgG assay, showing the feasibility of using the Ln-QD donor-acceptor pair as a highly sensitive analytical tool for in vitro diagnostics.
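For context, the quantities discussed above are connected by the standard Förster relations (textbook forms, not specific to this thesis): the transfer efficiency $E$ for a donor-acceptor pair at distance $r$ and the Förster radius $R_0$ obey

$ E = \frac{R_0^6}{R_0^6 + r^6}, \qquad R_0^6 \propto \frac{\kappa^2 \, \Phi_D \, J(\lambda)}{n^4}, $

with $\kappa^2$ the orientation factor, $\Phi_D$ the donor quantum yield, $J(\lambda)$ the donor-acceptor spectral overlap integral and $n$ the refractive index. The very large overlap of the long-lived Ln-complex emission with the strong QD absorption is what pushes $R_0$ beyond 100 Å here.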
The layer-by-layer assembly (LBL) of polyelectrolytes has been extensively studied for the preparation of ultrathin films due to the versatility of the build-up process. The control of the permeability of these layers is particularly important, as there are potential drug delivery applications. Multilayered polyelectrolyte microcapsules are also of great interest due to their possible use as microcontainers. This work presents two methods that can be used as employable drug delivery systems, both of which can encapsulate an active molecule and tune the release properties of the active species. Poly(N-isopropylacrylamide) (PNIPAM) is a thermo-sensitive polymer with a Lower Critical Solution Temperature (LCST) around 32 °C; above this temperature PNIPAM is insoluble in water and collapses. It is also known that the LCST decreases with the addition of salt. This work presents Differential Scanning Calorimetry (DSC) and Confocal Laser Scanning Microscopy (CLSM) evidence that the LCST of PNIPAM can be tuned with salt type and concentration. Microcapsules were used to encapsulate this thermo-sensitive polymer, resulting in a reversible and tunable stimuli-responsive system. The encapsulation of PNIPAM inside the capsules was proven with Raman spectroscopy, DSC (bulk LCST measurements), AFM (thickness change), SEM (morphology change) and CLSM (in situ LCST measurement inside the capsules). The exploitation of the capsules as microcontainers is advantageous not only because of the protection the capsules give to the active molecules, but also because it facilitates easier transport. The second system investigated demonstrates the ability to reduce the permeability of polyelectrolyte multilayer films by the addition of charged wax particles. The incorporation of this hydrophobic coating leads to a reduced water sensitivity, particularly after heating, which melts the wax and forms a barrier layer. This conclusion was proven with neutron reflectivity, which showed a decreased presence of D2O in planar polyelectrolyte films after annealing. The permeability of capsules could also be decreased by the addition of a wax layer, as shown by the increase in recovery time measured in Fluorescence Recovery After Photobleaching (FRAP) experiments. In general, two advanced methods potentially suitable for drug delivery systems have been proposed. In both cases, if biocompatible elements are used to fabricate the capsule wall, these systems provide a stable method of encapsulating active molecules. Stable encapsulation, coupled with the ability to tune the wall thickness, makes it possible to control the release profile of the molecule of interest.
Ultrathin, semi-permeable membranes are not only essential in natural systems (membranes of cells or organelles) but are also important for applications (separation, filtering) in miniaturized devices. Membranes integrated as diffusion barriers or filters in micron-scale devices need to fulfill requirements equivalent to those of the natural systems, in particular mechanical stability and functionality (e.g. permeability), while being only tens of nm in thickness to allow fast diffusion times. Promising candidates for such membranes are polyelectrolyte multilayers, which were found to be mechanically stable and variable in functionality. In this thesis, two concepts to integrate such membranes into larger-scale structures were developed. The first is based on the directed adhesion of hollow polyelectrolyte microcapsules. As a result, arrays of capsules were created, which can be useful for combinatorial chemistry or sensing. This concept was extended to couple encapsulated living cells to the surface. The second concept is the transfer of flat, freestanding multilayer membranes to structured surfaces. We have developed a method that allows us to couple mm²-sized areas of defect-free film, with thicknesses down to 50 nm, to structured surfaces while avoiding crumpling of the membrane. We could again use this technique to produce micron-sized arrays. The freestanding membrane is a diffusion barrier for high molecular weight molecules, while small molecules can pass through it, which allows us to sense solution properties. We have also shown that osmotic pressures lead to membrane deflection, which could be described quantitatively.
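As a rough sketch of how such a deflection can be described quantitatively (standard thin-plate theory for a clamped circular membrane patch of radius $a$ under a small uniform pressure difference $p$; an illustration under stated assumptions, not necessarily the model used in the thesis):

$ w_0 = \frac{p a^4}{64 D}, \qquad D = \frac{E h^3}{12 (1 - \nu^2)}, $

where $w_0$ is the center deflection, $h$ the film thickness, $E$ Young's modulus and $\nu$ Poisson's ratio. Measuring $w_0(p)$ thus gives access to the elastic constants of a 50 nm multilayer; for larger deflections ($w_0 \gtrsim h$) stretching dominates and $w_0$ grows only like $p^{1/3}$.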
Soils contain a large amount of carbon (C), which is a critical regulator of the global C budget. Even small changes in the processes governing soil C cycling have the potential to release considerable amounts of CO2, a greenhouse gas (GHG), adding additional radiative forcing to the atmosphere and hence contributing to climate change. Increased temperatures will probably create a feedback, causing soils to release more GHGs. Furthermore, changes in the soil C balance affect soil fertility and soil quality, potentially degrading soils and reducing their function as an important resource. Consequently, the assessment of soil C dynamics under present, recent past and future environmental conditions is not only of scientific interest; it requires an integrated consideration of the main factors and processes governing soil C dynamics. To perform this assessment, an eco-hydrological modelling tool was used and extended by a process-based description of coupled soil carbon and nitrogen turnover. The extended model aims at delivering sound information on changes in soil C storage, in addition to changes in water quality, water quantity and vegetation growth, under global change impacts in meso- to macro-scale river basins, demonstrated here for a Central European river basin (the Elbe). As a result, this study: ▪ Provides information on the joint effects of land use (land cover and land management) and climate changes on the cropland soil C balance in the Elbe river basin (Central Europe), both at present and in the future. ▪ Evaluates which processes, and at what level of process detail, have to be considered to perform an integrated simulation of soil C dynamics at the meso- to macro-scale, and demonstrates the model's capability to simulate these processes in comparison with observations. ▪ Proposes a process description relating soil C pools and turnover properties to readily measurable quantities. This reduces the number of model parameters, enhances the comparability of model results with observations, and delivers the same performance in simulating long-term soil C dynamics as other models. ▪ Presents an extensive assessment of the parameter and input data uncertainty and its temporal and spatial importance for modelling soil C dynamics. For the basin-scale assessments, it is estimated that croplands in the Elbe basin currently act as a net source of carbon (net annual C flux of 11 g C m-2 yr-1, or 1.57 × 10^6 tons CO2 yr-1 for the entire cropland area on average), although this depends strongly on the amount of harvest by-products remaining on the field. Anticipated future climate change, and the climate change already observed in the basin, accelerates soil C loss and increases the source strength (an additional 3.2 g C m-2 yr-1, or 0.48 × 10^6 tons CO2 yr-1 for the entire cropland area). However, anticipated changes in agro-economic conditions, translating into altered crop share distributions, have stronger effects on soil C storage than climate change. Depending on the future use of land expected to fall out of agricultural use (~30 % of the cropland area as "surplus" land), the basin either loses considerable soil C and the net annual C flux to the atmosphere increases (surplus used as black fallow), converts to a net sink of C (sequestering 0.44 × 10^6 tons CO2 yr-1 under extensified use as ley-arable), or reacts with a decrease in source strength when bioenergy crops are used.
Bioenergy crops additionally offer a considerable potential for fossil fuel substitution (~37 PJ (10^15 J) per year), whereas the basin-wide use of harvest by-products for energy generation has to be viewed critically, although it offers an annual energy potential of approximately 125 PJ. Harvest by-products play a central role in soil C reproduction, and between 50 and 80 % of them should remain on the fields in order to maintain soil quality and fertility. The established modelling tool allows climate, land use and major land management impacts on the soil C balance to be quantified. A novel aspect is that the SOM turnover description is embedded in an eco-hydrological river basin model, allowing an integrated consideration of water quantity, water quality, vegetation growth, agricultural productivity and soil carbon changes under different environmental conditions. The methodology and assessment presented here demonstrate the potential for an integrated assessment of soil C dynamics alongside other ecosystem services under global change impacts, and provide information on the potential of soils for climate change mitigation (soil C sequestration) and on their soil fertility status.
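As a quick consistency check of the figures quoted above (a sketch in which the cropland area is back-calculated from the two reported numbers, not taken from the thesis):

    # Convert the per-area soil C flux into the reported basin-wide CO2 total.
    C_TO_CO2 = 44.0 / 12.0            # molar mass ratio CO2 : C

    flux_c = 11.0                     # g C m^-2 yr^-1 (reported mean)
    total_co2_t = 1.57e6              # t CO2 yr^-1 (reported cropland total)

    # Implied cropland area (hypothetical back-calculation):
    area_m2 = total_co2_t * 1e6 / (flux_c * C_TO_CO2)
    print(f"implied cropland area: {area_m2 / 1e6:,.0f} km^2")
    # -> roughly 39,000 km^2, a plausible cropland share of the Elbe basin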
The formation of colloids by the controlled reduction, nucleation, and growth of inorganic precursor salts in different media has been investigated for more than a century. Recently, the preparation of ultrafine particles has received much attention, since they offer highly promising and novel options for a wide range of technical applications (nanotechnology, electro-optical devices, pharmaceutics, etc.). The interest derives from the well-known fact that the properties of advanced materials depend critically on the microstructure of the sample. Control of the size, size distribution and morphology of the individual grains or crystallites is of the utmost importance in order to obtain the desired material characteristics. Several methods can be employed for the synthesis of nanoparticles. On the one hand, the reduction can occur in dilute aqueous or alcoholic solutions. On the other hand, the reduction process can be realized in a template phase, e.g. in well-defined microemulsion droplets. However, the stability of the nanoparticles formed depends mainly on their surface charge, and it can be influenced by added protective components. Quite different types of polymers, including polyelectrolytes and amphiphilic block copolymers, can be used as protecting agents. The reduction and stabilization of metal colloids in aqueous solution by adding self-synthesized, hydrophobically modified polyelectrolytes were studied in much more detail. The polymers used are hydrophobically modified derivatives of poly(sodium acrylate) and of maleamic acid copolymers, as well as the commercially available branched poly(ethyleneimine). The first notable result is that the polyelectrolytes used can act alone as both reducing and stabilizing agents for the preparation of gold nanoparticles. The investigation then focused on the influence of the hydrophobic substitution of the polymer backbone on the reduction and stabilization processes. First, the polymers were added at room temperature and the reduction process was followed over a longer time period (up to 8 days). In comparison, the reduction proceeded faster at higher temperature, i.e. 100 °C. In both cases, metal nanoparticles of colloidal dimensions can be produced. However, the size and shape of the individual nanoparticles depend mainly on the polymer added and the temperature procedure used. In a second part, the influence of the aforementioned polyelectrolytes on the phase behaviour and on the properties of the inverse micellar region (L2 phase) of quaternary systems consisting of a surfactant, toluene-pentanol (1:1) and water was investigated. The majority of the present work was carried out with the anionic surfactant sodium dodecylsulfate (SDS) and the cationic surfactant cetyltrimethylammonium bromide (CTAB), since they can interact with the oppositely charged polyelectrolytes and the microemulsions formed with these surfactants present a large water-in-oil region. Subsequently, the polymer-modified microemulsions were used as new templates for the synthesis of inorganic particles of very small size, ranging from metals to complex crystallites. The water droplets can indeed act as nanoreactors for the nucleation and growth of the particles, and the added polymer can influence the droplet size, the droplet-droplet interactions, as well as the stability of the surfactant film through the formation of polymer-surfactant complexes.
A further advantage of the polymer-modified microemulsions is the possibility to stabilize the primarily formed nanoparticles via polymer adsorption (steric and/or electrostatic stabilization). Thus, the polyelectrolyte-modified nanoparticles can be redispersed without flocculation after solvent evaporation.
The aim of this study was to provide deeper insights into passerine phylogenetic relationships using new molecular markers. The monophyly of the largest avian order, Passeriformes (~59% of all living birds), and its division into the suborders suboscines and oscines are well established. Phylogenetic relationships within the group have been extremely puzzling, as most of the evolutionary lineages originated through rapid radiation. Numerous studies have hypothesised conflicting passerine phylogenies and have repeatedly stimulated further research with new markers. In the present study, I used three different approaches to contribute to the ongoing phylogenetic debate on Passeriformes. I investigated the recently introduced gene ZENK for its phylogenetic utility in passerine systematics, in combination and comparison with three already established nuclear markers. My phylogenetic analyses of a comprehensive data set yielded highly resolved, consistent and strongly supported trees. I was able to show the high utility of ZENK for elucidating phylogenetic relationships within Passeriformes. For the second and third approaches, I used chicken repeat 1 (CR1) retrotransposons as phylogenetic markers. I presented two specific CR1 insertions as apomorphic characters, whose presence/absence pattern contributed significantly to the resolution of a particular phylogenetic uncertainty, namely the position of the rockfowl species Picathartes spp. in the passerine tree. Based on my results, I suggest a closer relationship of these birds to crows, ravens, jays, and allies. For the third approach, I showed that CR1 sequences contain phylogenetic signal and investigated their applicability in more detail. In this context, I screened for CR1 elements in different passerine birds, used sequences of several loci to construct phylogenetic trees, and evaluated their reliability. I was able to corroborate existing hypotheses and provide strong evidence for some new ones; e.g., I suggest a revision of the taxa Corvidae and Corvinae, as vireos are more closely related to crows, ravens, and allies. The subdivision of the Passerida into three superfamilies, Sylvioidea, Passeroidea, and Muscicapoidea, was strongly supported. I found evidence for a split within Sylvioidea into two clades, one consisting of tits and the other comprising warblers, bulbuls, laughingthrushes, whitethroats, and allies. Whereas the Passeridae appear to be paraphyletic, the monophyly of weavers and estrildid finches as a separate clade was strongly supported. The sister-taxon relationship of the dippers and the thrushes/flycatcher/chat assemblage was corroborated, and I suggest a closer relationship of waxwings and kinglets to wrens, tree-creepers, and nuthatches.
The goal of a Brain-Computer Interface (BCI) is the development of a unidirectional interface between a human and a computer that allows control of a device via brain signals alone. While the BCI systems of almost all other groups require the user to be trained over several weeks or even months, the group of Prof. Dr. Klaus-Robert Müller in Berlin and Potsdam, to which I belong, was one of the first research groups in this field to use machine learning techniques on a large scale. The adaptivity of the processing system to the individual brain patterns of the subject confers huge advantages on the user. Thus BCI research is considered a hot topic in machine learning and computer science. It requires interdisciplinary cooperation between disparate fields such as neuroscience, since only by combining machine learning and signal processing techniques based on neurophysiological knowledge will the largest progress be made. In this work I deal in particular with my part of this project, which lies mainly in the area of computer science. I have considered the following three main points: Establishing a performance measure based on information theory: I have critically examined the assumptions of Shannon's information transfer rate for application in a BCI context. By establishing suitable coding strategies, I was able to show that this theoretical measure approximates quite well what is practically achievable. Transfer and development of suitable signal processing and machine learning techniques: One substantial component of my work was to develop several machine learning and signal processing algorithms to improve the efficiency of a BCI. Based on the neurophysiological knowledge that several independent EEG features can be observed for some mental states, I have developed a method for combining different and possibly independent features, which improved performance. In some cases the combination algorithm outperforms the best single performance by more than 50 %. Furthermore, through the development of suitable algorithms, I have theoretically and practically addressed the question of the optimal number of classes for a BCI. It transpired that, with the BCI performances reported so far, three or four different mental states are optimal. As another extension, I have combined ideas from signal processing with those of machine learning, since a high gain can be achieved if the temporal filtering, i.e., the choice of frequency bands, is automatically adapted to each subject individually. Implementation of the Berlin Brain-Computer Interface and realization of suitable experiments: Finally, a further substantial component of my work was to realize an online BCI system which includes the developed methods but is also flexible enough to allow the simple realization of new algorithms and ideas. So far, bitrates of up to 40 bits per minute have been achieved with this system by completely untrained users, which, compared to the results of other groups, is highly successful.
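To illustrate the information-theoretic measure referred to above, here is a minimal sketch of the commonly used Shannon/Wolpaw-style transfer rate (the accuracies and pace are my example values, not figures from the thesis). Scanning over the number of classes shows why a small class count can be optimal once accuracy drops as classes are added:

    import math

    def bits_per_trial(n: int, p: float) -> float:
        """Information per selection for n classes at accuracy p, assuming
        errors are spread uniformly over the n - 1 wrong classes (Wolpaw)."""
        if p >= 1.0:
            return math.log2(n)
        return (math.log2(n)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))

    trials_per_min = 20   # assumed selection pace, purely illustrative
    # Hypothetical accuracies that decrease as classes are added:
    for n, p in [(2, 0.95), (3, 0.88), (4, 0.80), (5, 0.70)]:
        print(n, "classes:",
              round(bits_per_trial(n, p) * trials_per_min, 1), "bits/min")

With these example numbers the rate peaks at three to four classes, consistent with the finding stated above.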
Our work goes in two directions. First, we transfer definitions, concepts and results of the theory of hyperidentities and solid varieties from the total to the partial case. (1) We prove that the operators $\chi^A_{RNF}$ and $\chi^E_{RNF}$ are only monotone and additive, and we show that the sets of all fixed points of these operators are characterized by only three, instead of four, equivalent conditions for the case of closure operators. (2) We prove that $V$ is $n$-SF-solid iff $\mathrm{clone}^{SF} V$ is free with respect to itself, freely generated by the independent set $\{[f_i(x_1, \dots, x_n)]_{\mathrm{Id}^{SF}_n V} \mid i \in I\}$. (3) We prove that if $V$ is $n$-fluid and $\sim_V|_{P(V)} = \sim_V^{-\mathrm{iso}}|_{P(V)}$, then $V$ is $k$-unsolid for $k \ge n$ (where $P(V)$ is the set of all $V$-proper hypersubstitutions of type $\tau$). (4) We prove that a strong $M$-hyperquasi-equational theory is characterized by four equivalent conditions. The second direction of our work is to follow ideas which are typical for the partial case. (1) We characterize all minimal partial clones which are strongly solidifyable. (2) We define the operator $\chi^A_{Ph}$, where $Ph$ is a monoid of regular partial hypersubstitutions. Using this concept, we define the notion of a $P\mathrm{Hyp}_R(\tau)$-solid strong regular variety of partial algebras, and we prove that a $P\mathrm{Hyp}_R(\tau)$-solid strong regular variety satisfies four equivalent conditions.
Sucrose synthase (Susy) is a key enzyme of sucrose metabolism, catalysing the reversible conversion of sucrose and UDP to UDP-glucose and fructose. Therefore, its activity, localization and function have been studied in various plant species. It has been shown that Susy can play a role in supplying energy in companion cells for phloem loading (Fu and Park, 1995), provides substrates for starch synthesis (Zrenner et al., 1995), and supplies UDP-glucose for cell wall synthesis (Haigler et al., 2001). Analysis of the Arabidopsis genome identified six Susy isoforms. The expression of these isoforms was investigated using promoter-reporter gene constructs (GUS) and real-time RT-PCR. Although these isoforms are closely related at the protein level, they have radically different spatial and temporal patterns of expression in the plant, with no two isoforms showing the same distribution. More than one isoform is expressed in all organs examined. Some of them show high but specific expression in particular organs or developmental stages, whilst others are expressed constantly throughout the whole plant and across various stages of development. The in planta functions of the six Susy isoforms were explored through analysis of T-DNA insertion mutants and RNAi lines. Plants lacking expression of individual isoforms show no differences in growth and development, and are not significantly different from wild-type plants in soluble sugar, starch and cellulose contents under all growth conditions investigated. Analysis of a T-DNA insertion mutant lacking the Sus3 isoform, which is exclusively expressed in stomatal guard cells, revealed only a minor influence on guard cell osmoregulation and/or bioenergetics. Although none of the sucrose synthases appears to be essential for normal growth under our standard growth conditions, they may be necessary for growth under stress conditions. Different isoforms of sucrose synthase respond differently to various abiotic stresses. It has been shown that oxygen deprivation up-regulates Sus1 and Sus4 and increases total Susy activity. However, the analysis of plants with reduced expression of both Sus1 and Sus4 revealed no obvious effects on plant performance under oxygen deprivation. Low temperature up-regulates Sus1 expression, but the loss of this isoform has no effect on the freezing tolerance of non-acclimated and cold-acclimated plants. These data provide a comprehensive overview of the expression of this gene family, which supports some of the previously reported roles for Susy and indicates the involvement of specific isoforms in metabolism and/or signalling.
In this work, approaches for the development of new detection systems for the Analytical Ultracentrifuge (AUC) were explored. Unlike its counterparts in chromatographic fractionation techniques, a multidetection system for the AUC has not yet been implemented to its full extent, despite its potential benefit. In this study we tried to couple existing fundamental spectroscopic and scattering techniques, used in day-to-day science, to the AUC as tools for extracting analyte information. Trials were performed to adapt Raman spectroscopy, light scattering and UV/Vis spectroscopy (with the possibility to work with the whole range of wavelengths) to the AUC. It was concluded that Raman spectroscopy and light scattering are possible detection systems for the AUC, while the development of a fast, fiber-optics-based multiwavelength detector was completed. The multiwavelength detector demonstrated data generation matching literature and reference measurement data, with faster data collection than the commercial instrument. It became obvious that, with the generation of data in 3-D space by the UV/Vis detection system, the user can select the wavelength for the evaluation of experimental results, as the data set contains information over the whole UV/Vis wavelength range. The advantage of fast data generation was exemplified by the evaluation of data for a mixture of three colloids. These data were in conformity with measurement results from normal radial experiments and showed no significant diffusion broadening. It was thus concluded that, with the designed multiwavelength detector, meaningful data in 3-D space can be collected at a much higher speed of data generation.
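To make the "3-D data space" concrete, here is a sketch of how such a data set might be organized and sliced (array names and shapes are hypothetical, purely illustrative):

    import numpy as np

    # Hypothetical AUC multiwavelength data cube: absorbance indexed by
    # (scan time, radial position, wavelength)
    n_scans, n_radii, n_wavelengths = 50, 400, 300
    data = np.zeros((n_scans, n_radii, n_wavelengths))   # placeholder
    wavelengths = np.linspace(250.0, 549.0, n_wavelengths)  # nm

    # Evaluating at one wavelength (e.g. 280 nm) reduces the cube to the
    # classical radial scan series a single-wavelength instrument records:
    i = np.argmin(np.abs(wavelengths - 280.0))
    radial_scans_280nm = data[:, :, i]   # shape: (n_scans, n_radii)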
The primary objective of this work was to develop a laser source for fundamental investigations in the field of laser-materials interactions. In particular, it is intended to facilitate the study of the influence of the temporal energy distribution, such as the interaction between adjacent pulses, on ablation processes. The aim was therefore to design a laser with a highly flexible and easily controllable temporal energy distribution. The laser meeting these demands is an SBS-laser with optional active mode-locking. The nonlinear reflectivity of the SBS-mirror leads to passive Q-switching and produces ns-pulse bursts with µs spacing. The pulse train parameters, such as pulse duration, pulse spacing, pulse energy and number of pulses within a burst, can be individually adjusted by tuning the pump parameters and the starting conditions of the laser. Another feature of the SBS-reflection is phase conjugation, which leads to an excellent beam quality thanks to the compensation of phase distortions. Transverse fundamental mode operation and a beam quality better than 1.4 times the diffraction limit can be maintained for average output powers of up to 10 W. In addition to the dynamics on the ns-timescale described above, a defined splitting of each ns-pulse into a train of ps-pulses can be achieved by additional active mode-locking. This twofold temporal focussing of the intensity leads to single pulse energies of up to 2 mJ at pulse durations of approximately 400 ps, which corresponds to a pulse peak power of 5 MW. While the pulse duration is of the same order of magnitude as that of other passively Q-switched lasers with simultaneous mode-locking, the pulse energy and pulse peak power exceed the values of such systems found in the literature by an order of magnitude. To the best of my knowledge, the laser presented here is the first implementation of a self-starting mode-locked SBS-laser oscillator. In order to gain a better understanding and control of the transient output of the laser, two complementary numerical models were developed. The first is based on laser rate equations which are solved for each laser mode individually, while the mode-locking dynamics are calculated from the resulting transient spectrum. The rate equations consider the mean photon densities in the resonator; therefore, the propagation of the light inside the resonator is not properly represented. The second model, in contrast, introduces a spatial resolution of the resonator, and hence the propagation inside the resonator can be considered more accurately. Consequently, a mismatch between the loss modulation frequency and the resonator round-trip time can be captured. This model calculates all dynamics in the time domain, and therefore spectral influences such as the Stokes shift have to be neglected. Both models achieve an excellent reproduction of the ns-dynamics generated by the SBS-Q-switch. Separately, each model fails to reproduce all aspects of the ps-dynamics of the SBS-laser in detail. This can be attributed to the complexity of the numerous physical processes involved in this system. Thanks to their complementary nature, however, they provide a very useful tool for investigating the various influences on the dynamics of the mode-locked SBS-laser individually. These aspects can eventually be recomposed to give a complete picture of the mechanisms which govern the output dynamics.
Among the aspects under scrutiny were, in particular, the quality of the start resonator, which determines the starting condition for the SBS-Q-switch, the modulation depth of the AOM, and the phonon lifetime as well as the Brillouin frequency of the SBS medium. The numerical simulations and the experiments have opened several doors inviting further investigations and promising potential for further improvement of the experimental results. The results of the simulations, in combination with the experimental results which determined the starting conditions for the simulations, leave no doubt that the bandwidth generation can primarily be attributed to the SBS Stokes shift during the buildup of the Q-switch pulse. In each resonator round trip, bandwidth is generated by shifting part of the circulating light in frequency. The magnitude of the frequency shift corresponds to the Brillouin frequency, which is a constant of the SBS material and amounts to 240 MHz in the case of SF6. The modulation of the AOM merely provides an exchange of population between spectrally adjacent modes and therefore diminishes a modulation in the spectrum. By use of a material with a Brillouin frequency in the GHz range, the bandwidth generation can be considerably accelerated, thereby shortening the pulse duration. It was also demonstrated that yet another nonlinear effect of the SBS can be exploited: if the phonon lifetime is short compared to the resonator round-trip time, a modulation in the SBS reflectivity is obtained that supports the modulation of the AOM. The application of external optical feedback by a conventional mirror turns out to be an alternative to the AOM for synchronizing the longitudinal resonator modes. The interesting feature of this system is that it is, although highly complex in its physical processes and temporal output dynamics, very simple and inexpensive from a technical point of view. No expensive modulators and no control electronics are necessary. Finally, the numerical models constitute a powerful tool for the investigation of the emission dynamics of complex laser systems on arbitrary timescales and can also display the spectral evolution of the laser output. In particular, it could be demonstrated that differences in the results of the complementary models vanish for systems of lesser complexity.
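A minimal sketch of the kind of rate-equation model mentioned above (generic passive Q-switching with the SBS mirror treated as a saturable reflector; all parameter values are illustrative and this is not the thesis's actual multimode model):

    from scipy.integrate import solve_ivp

    # Dimensionless passively Q-switched laser: the SBS mirror's
    # reflectivity rises with intracavity intensity, so the cavity loss
    # drops as the pulse builds up. Time is measured in round trips.
    def rhs(t, y):
        phi, n = y                         # photon density, gain
        loss = 0.05 + 0.6 / (1.0 + phi)    # linear + nonlinear SBS loss
        dphi = phi * (n - loss)            # net amplification per round trip
        dn = 1e-3 * (1.0 - n) - n * phi    # slow pumping, gain saturation
        return [dphi, dn]

    sol = solve_ivp(rhs, (0.0, 20000.0), [1e-6, 0.0], max_step=1.0)
    # sol.y[0] (the photon density) exhibits a train of short, well
    # separated pulses: passive Q-switching by the intensity-dependent
    # SBS reflectivity, i.e. the ns-burst behaviour described above.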
The terrestrial biosphere has a considerable impact on the global carbon cycle. In particular, ecosystems contribute to offsetting anthropogenic fossil fuel emissions and hence decelerate the rise of the atmospheric CO₂ concentration. However, the future net sink strength of an ecosystem will depend heavily on the response of the individual processes to a changing climate. Understanding the makeup of these processes and their interaction with the environment is therefore of major importance for developing long-term climate mitigation strategies. Mathematical models are used to predict the fate of carbon in the soil-plant-atmosphere system under changing environmental conditions. However, the underlying processes giving rise to the net carbon balance of an ecosystem are complex and not entirely understood at the canopy level. Therefore, carbon exchange models are characterised by considerable uncertainty, rendering model-based predictions of the future prone to error. Observations of the carbon exchange at the canopy scale can help to identify the dominant processes and hence contribute to reducing the uncertainty associated with model-based predictions. For this reason, a global network of measurement sites has been established that provides long-term observations of the CO₂ exchange between a canopy and the atmosphere along with micrometeorological conditions. These time series, however, suffer from observation uncertainty that, if not characterised, limits their use in ecosystem studies. The general objective of this work is to develop a modelling methodology that synthesises physical process understanding with the information content of canopy-scale data, as an attempt to overcome the limitations in both carbon exchange models and observations. Similar hybrid modelling approaches have been applied successfully for signal extraction from noisy time series in environmental engineering. Here, simple process descriptions are used to identify relationships between the carbon exchange and environmental drivers from noisy data. The functional forms of these relationships are not prescribed a priori but rather determined directly from the data, ensuring that the model complexity is commensurate with the observations. This data-led analysis therefore results in the identification of the processes dominating carbon exchange at the ecosystem scale as reflected in the data. The description of these processes may then lead to robust carbon exchange models that contribute to a faithful prediction of the ecosystem carbon balance. This work presents a number of studies that make use of the developed data-led modelling approach for the analysis and interpretation of net canopy CO₂ flux observations. Given the limited knowledge about the underlying real system, the evaluation of the derived models with synthetic canopy exchange data is introduced as a standard procedure prior to any application to real data. The derived data-led models prove successful in several different applications. First, the data-based nature of the presented methods makes them particularly useful for replacing missing data in the observed time series. The resulting interpolated CO₂ flux observation series can then be analysed with dynamic modelling techniques, or integrated to coarser temporal resolution for further use, e.g., in model evaluation exercises. However, the noise component in these observations interferes with deterministic flux integration, in particular when long time periods are considered.
Therefore, a method to characterise the uncertainties in the flux observations using a semi-parametric stochastic model is introduced in a second study. As a result, an (uncertain) estimate of the annual net carbon exchange of the observed ecosystem can be inferred directly from a statistically consistent integration of the noisy data. For the forest measurement sites analysed, the relative uncertainty of the annual sum did not exceed 11 percent, highlighting the value of the data. Based on the same models, a disaggregation of the net CO₂ flux into carbon assimilation and respiration is presented in a third study, which allows for the estimation of annual ecosystem carbon uptake and release. These two components can then be analysed further for their separate responses to environmental conditions. Finally, a fourth study demonstrates how the results from data-led analyses can be turned into a simple parametric model that is able to predict the carbon exchange of forest ecosystems. Given the global network of measurements available, the derived model can now be tested for generality and transferability to other biomes. In summary, this work particularly highlights the potential of the presented data-led methodologies to identify and describe dominant carbon exchange processes at the canopy level, contributing to a better understanding of ecosystem functioning.
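For illustration only, the classical parametric route to gap-filling and flux partitioning (a rectangular-hyperbola light response for uptake plus a Q10 temperature response for respiration) can be sketched as follows; the semi-parametric stochastic models developed in the thesis are deliberately more flexible than this textbook form:

    from scipy.optimize import curve_fit

    def nee_model(X, alpha, f_max, r10, q10):
        """Net ecosystem CO2 exchange: light-driven uptake plus
        temperature-driven respiration (sign convention: uptake < 0)."""
        par, temp = X
        gpp = alpha * par * f_max / (alpha * par + f_max)  # light response
        reco = r10 * q10 ** ((temp - 10.0) / 10.0)         # Q10 respiration
        return -gpp + reco

    # With X = (PAR, air temperature) and y = measured NEE from gap-free
    # periods, the parameters would be fitted as, e.g.:
    # popt, _ = curve_fit(nee_model, X, y, p0=[0.05, 20.0, 2.0, 2.0])
    # Gaps are then filled with nee_model at the gap's PAR and temperature,
    # and integrating the filled series yields the annual carbon balance.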
Uncertainty about the sensitivity of the climate system to changes in the Earth's radiative balance constitutes a primary source of uncertainty for climate projections. Given the continuous increase in atmospheric greenhouse gas concentrations, constraining this uncertainty range is of vital importance. A common measure for expressing this key characteristic of climate models is the climate sensitivity, defined as the simulated change in global-mean equilibrium temperature resulting from a doubling of the atmospheric CO2 concentration. The broad range of climate sensitivity estimates (1.5-4.5°C, as given in the last Assessment Report of the Intergovernmental Panel on Climate Change, 2001) inferred from comprehensive climate models illustrates that the strength of the simulated feedback mechanisms varies strongly among different models. The central goal of this thesis is to constrain the uncertainty in climate sensitivity. For this objective, we first generate a large ensemble of model simulations covering different feedback strengths and then test their consistency with present-day observational data and proxy data from the Last Glacial Maximum (LGM). Our analyses are based on an ensemble of fully coupled simulations that were realized with a climate model of intermediate complexity (CLIMBER-2). These model versions cover a broad range of climate sensitivities, from 1.3 to 5.5°C, and were generated by simultaneously perturbing a set of 11 model parameters. The analysis of the simulated model feedbacks reveals that the spread in climate sensitivity results from different realizations of the feedback strengths in water vapour, clouds, lapse rate and albedo. The calculated spread in the sum of all feedbacks spans almost the entire plausible range inferred from a sampling of more complex models. We show that the requirement of consistency between the simulated pre-industrial climate and a set of seven global-mean data constraints represents a comparatively weak test for model sensitivity (the data constrain climate sensitivity to 1.3-4.9°C). Analyses of the simulated latitudinal profile and of the seasonal cycle suggest that additional present-day data constraints based on these characteristics do not further constrain the uncertainty in climate sensitivity. The novel approach presented in this thesis consists in systematically combining a large set of LGM simulations with data information from reconstructed regional glacial cooling. Irrespective of uncertainties in model parameters and feedback strengths, the set of model versions reveals a close link between the simulated warming due to a doubling of CO2 and the cooling obtained for the LGM. Based on this close relationship between past and future temperature evolution, we define a method (based on linear regression) that allows us to estimate robust 5-95% quantiles for climate sensitivity. We thus constrain the range of climate sensitivity to 1.3-3.5°C using proxy data from the LGM at low and high latitudes. Uncertainties in glacial radiative forcing enlarge this estimate to 1.2-4.3°C, whereas the assumption of large structural uncertainties may increase the upper limit by an additional degree. Using proxy-based data constraints for tropical and Antarctic cooling, we show that very different absolute temperature changes in high and low latitudes all yield very similar estimates of climate sensitivity.
On the whole, this thesis highlights that LGM proxy-data information can offer an effective means of constraining the uncertainty range in climate sensitivity and thus underlines the potential of paleo-climatic data to reduce uncertainty in future climate projections.
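A schematic of the regression step described above (my reconstruction of the general idea; all arrays are placeholders): each ensemble member contributes one point (LGM cooling, climate sensitivity), and inserting proxy-reconstructed cooling with its uncertainty into the fitted relation yields a constrained sensitivity range.

    import numpy as np

    def constrain_sensitivity(lgm_cooling, sensitivity, proxy_samples):
        """Map proxy-reconstructed LGM cooling onto climate sensitivity via
        a linear fit across ensemble members (schematic)."""
        slope, intercept = np.polyfit(lgm_cooling, sensitivity, 1)
        mapped = slope * np.asarray(proxy_samples) + intercept
        return np.percentile(mapped, [5, 95])   # 5-95% quantile range

    # lgm_cooling, sensitivity: one value per CLIMBER-2 ensemble member;
    # proxy_samples: Monte Carlo draws of the reconstructed regional
    # cooling, including proxy and forcing uncertainty.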
This study introduces a method for the multiparallel analysis of small organic compounds in the unicellular green alga Chlamydomonas reinhardtii, one of the premier model organisms in cell biology. The comprehensive study of changes in metabolite composition, or metabolomics, in response to environmental, genetic or developmental signals is an important complement to other functional genomic techniques in the effort to understand how genes, proteins and metabolites are all integrated into a seamless and dynamic network to sustain cellular functions. The sample preparation protocol was optimized to quickly inactivate enzymatic activity, achieve maximum extraction capacity and process large sample quantities. As a result of the rapid sampling, extraction and analysis by gas chromatography coupled to time-of-flight mass spectrometry (GC-TOF), more than 800 analytes can be measured from a single sample, of which over 100 could be positively identified. As part of the analysis of GC-TOF raw data, aliquot ratio analysis for the systematic removal of artifact signals and tools for the use of principal component analysis (PCA) on metabolomic datasets are proposed. Cells subjected to nitrogen (N), phosphorus (P), sulfur (S) or iron (Fe) depleted growth conditions develop highly distinctive metabolite profiles, with metabolites implicated in many different processes being affected in their concentration during adaptation to nutrient deprivation. Metabolite profiling allowed the characterization of both specific and general responses to nutrient deprivation at the metabolite level. Modulation of the substrates for N-assimilation and the oxidative pentose phosphate pathway indicated a priority for maintaining the capability for immediate activation of N-assimilation even under conditions of decreased metabolic activity and arrested growth, while the rise in 4-hydroxyproline in S-deprived cells could be related to enhanced degradation of cell wall proteins. The adaptation to sulfur deficiency was analyzed with greater temporal resolution, and the responses of wild-type cells were compared with those of mutant cells deficient in SAC1, an important regulator of the sulfur deficiency response. Whereas concurrent metabolite depletion and accumulation occur during adaptation to S deprivation in wild-type cells, the sac1 mutant strain is characterized by a pronounced inability to sustain many processes that normally lead to transient or permanent accumulation of certain metabolites or to the recovery of metabolite levels after initial down-regulation. For most of the steps in arginine biosynthesis in Chlamydomonas, mutants have been isolated that are deficient in the respective enzyme activities. Three strains deficient in the activities of N-acetylglutamate-5-phosphate reductase (arg1), N2-acetylornithine aminotransferase (arg9), and argininosuccinate lyase (arg2), respectively, were analyzed with regard to the activation of endogenous arginine biosynthesis after withdrawal of externally supplied arginine. Enzymatic blocks in the arginine biosynthetic pathway could be characterized by precursor accumulation, such as the accumulation of argininosuccinate in arg2 cells, and by depletion of intermediates downstream of the enzymatic block, e.g. the depletion of N2-acetylornithine, ornithine, and argininosuccinate in arg9 cells.
The unexpected finding of substantial levels of the arginine pathway intermediates N-acetylornithine, citrulline, and argininosuccinate downstream of the enzymatic block in arg1 cells provided an explanation for the residual growth capacity of these cells in the absence of external arginine sources. The presence of these compounds in arg1 cells, together with the unusual accumulation of N-acetylglutamate, the first intermediate that commits the glutamate backbone to ornithine and arginine biosynthesis, suggests that alternative pathways, possibly involving the activity of ornithine aminotransferase, may be active when the default reaction sequence producing ornithine via acetylation of glutamate is disabled.
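As a sketch of the kind of multivariate analysis used for such profiles (standard PCA via scikit-learn on a samples × metabolites matrix; the file and array names are placeholders, and this is not the thesis's exact pipeline):

    import numpy as np
    from sklearn.decomposition import PCA

    # X: metabolite data matrix, rows = samples (e.g. -N, -P, -S, -Fe
    # time points), columns = metabolite levels.
    X = np.loadtxt("metabolite_matrix.txt")          # placeholder input
    X = (X - X.mean(axis=0)) / X.std(axis=0)         # autoscale per metabolite

    pca = PCA(n_components=3)
    scores = pca.fit_transform(X)                    # sample coordinates
    print(pca.explained_variance_ratio_)             # variance per component
    # Samples from the same nutrient regime should cluster together in the
    # score plot if deprivation dominates the metabolic variance.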
When top sports performers fail or "choke" under pressure, everyone asks: why? Research has identified a number of conditions (e.g. an audience) that elicit choking, as well as factors (e.g. trait anxiety) that moderate the pressure-performance relation. Furthermore, mediating processes have been investigated. For example, explicit monitoring theories link performance failure under psychological stress to an increase in attention paid to a skill and its step-by-step execution (Beilock & Carr, 2001). Many studies have provided support for these ideas. However, so far only overt performance measures have been investigated, which do not allow more thorough analyses of processes or performance strategies. A theoretical framework has also been missing that could (a) explain the effects of explicit monitoring on skill execution and (b) make predictions as to what is being monitored during execution. Consequently, in this study the nodalpoint hypothesis of motor control (Hossner & Ehrlenspiel, 2006) was used to predict movement changes on three levels of analysis at certain "nodalpoints" within the movement sequence. Performance in two different laboratory tasks was assessed with respect to overt performance (the observable result, for example accuracy in the target), covert performance (description of movement execution, for example the acceleration of body segments) and task exploitation (the utilization of task properties such as covariation). A fake competition (see Beilock & Carr, 2002) was used to invoke pressure. In study 1, a ball bouncing task in a virtual-reality set-up was chosen. Previous studies (de Rugy, Wei, Müller, & Sternad, 2003) have shown that learners are usually able to "passively" exploit the dynamical stability of the system. According to explicit monitoring theories, choking should be expected either if the task itself evokes "active control" (Experiment 1) or if learners are provided with explicit instructions (Experiment 2). In both experiments, participants first went through a practice phase on day 1. On day 2, following the Baseline Test, participants were divided into a High-Stress or No-Stress Group for the final Performance Test. The High-Stress Group entered a fake competition. Overt performance was measured by the Absolute Error (AE) of ball amplitudes from target height; covert performance was measured by the Period Modulation between successive hits; and task exploitation was measured by the Acceleration (AC) at ball-racket impact and the Covariation (COV) of impact parameters. To evoke active control in Exp. 1 (N=20), perturbations of the ball flight were introduced. In Exp. 2 (N=39), half of the participants received explicit skill-focused instructions during learning. For overt performance, the results generally show an interaction between Stress Group and Test, with better performance (i.e. lower AE) for the High-Stress Group in the final Performance Test. This effect is independent of the instructions that participants had received during learning (Exp. 2). Similar effects were found for COV but not for AC. In study 2, a visuomotor tracking task was used in which participants had to pursue a target cross moving on an invisible curve. This curve consisted of 3 segments of 6 turning points sequentially ordered around the x-axis. Participants learned two short movement sequences which were then concatenated to form a single sequence. It was expected that under pressure this sequence would "fall apart" at the point of concatenation.
Overt performance was assessed by the Root Mean Square Error between the target and pursuit cross as well as by the Absolute Error at the turning points; covert performance was measured by the latency from target to pursuit turning; and task exploitation was measured by the temporal covariation between successive intervals between turning points. Experiment 3 (intraindividual variation) as well as Experiment 4 (interindividual variation) show performance enhancement in the pressure situation on the overt level, with matching results on the covert and task exploitation levels. Thus, contrary to previous studies, no choking under pressure was found in any of the experiments. This may be interpreted as a failure of the experimental manipulation, but it certainly also highlights important characteristics of the task. Choking should occur in tasks in which performers do not have the time to use action or thought control strategies, that are more relevant to their "self", and that are discrete in nature.
In view of the importance of charge storage in polymer electrets for electromechanical transducer applications, the aim of this work is to contribute to the understanding of the charge-retention mechanisms. Furthermore, we try to explain how the long-term storage of charge carriers in polymeric electrets works and to identify the probable trap sites. Charge trapping and de-trapping processes were investigated in order to obtain evidence of the trap sites in polymeric electrets. The charge de-trapping behavior of two particular polymer electrets was studied by means of thermal and optical techniques. In order to obtain evidence of trapping or de-trapping, charge and dipole profiles in the thickness direction were also monitored. The study was performed on polyethylene terephthalate (PETP) and on cyclic-olefin copolymers (COCs). PETP is a photo-electret and contains a net dipole moment located in the carbonyl group (C = O); its electret behavior arises from both dipole orientation and charge storage. In contrast to PETP, COCs are not photo-electrets and do not exhibit a net dipole moment; their electret behavior arises from charge storage only. COC samples were doped with dyes in order to probe their internal electric field. COCs show shallow charge traps at 0.6 and 0.11 eV, characteristic of thermally activated processes. In addition, deep charge traps are present at 4 eV, characteristic of optically stimulated processes. PETP films exhibit a photo-current transient with a maximum that depends on the temperature, with an activation energy of 0.106 eV. The pair thermalization length (rc) calculated from this activation energy for photo-carrier generation in PETP was estimated to be approx. 4.5 nm. The generated photo-charge carriers can recombine, interact with the trapped charge, escape through the electrodes or occupy an empty trap. PETP possesses a small quasi-static pyroelectric coefficient (QPC): ~0.6 nC/(m²K) for unpoled samples, ~60 nC/(m²K) for poled samples and ~60 nC/(m²K) for unpoled samples under an electric bias (E ~10 V/µm). When stored charges generate an internal electric field of approx. 10 V/µm, they are able to induce a QPC comparable to that of the oriented dipoles. Moreover, we observe charge-dipole interaction. Since the raw data of the QPC experiments on PETP samples are noisy, a numerical Fourier-filtering procedure was applied. Simulations show that the data analysis is reliable for noise levels up to 3 times larger than the calculated pyroelectric current for the QPC. PETP films revealed shallow traps at approx. 0.36 eV in thermally stimulated current measurements. These energy traps are associated with molecular dipole relaxations (C = O). On the other hand, photo-activated measurements yield deep charge traps at 4.1 and 5.2 eV. The observed wavelengths belong to transitions in PETP that are analogous to the π - π* benzene transitions. The charge de-trapping selectivity observed in the photo-charge decay indicates that the charge de-trapping results from a direct photon-charge interaction. Additionally, the charge de-trapping can be facilitated by photo-exciton generation and the interaction of the photo-excitons with trapped charge carriers. These results indicate that the benzene rings (C6H4) and the dipolar groups (C = O) can stabilize and share an extra charge carrier in a chemical resonance.
In this way, this charge could be de-trapped in connection with the photo-transitions of the benzene ring and with the dipole relaxations. The thermally activated charge release shows a trap depth different from that of its optical counterpart. This difference indicates that the trap levels depend on the de-trapping process and on the chemical nature of the trap site: charge de-trapping from shallow traps is related to secondary forces, whereas charge de-trapping from deep traps is related to primary forces. Furthermore, the presence of deep trap levels accounts for the stability of the stored charge over long periods of time.
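As a consistency check on the thermalization length quoted above (my estimate, not a calculation from the thesis), one can equate the Coulomb binding energy of the photo-generated electron-hole pair with the measured activation energy and solve for the Onsager-type radius. Assuming a relative permittivity of about 3 for PETP, this gives $r_c = e^2/(4\pi\varepsilon_0\varepsilon_r E_a) \approx 1.44\ \text{eV nm}/(3 \times 0.106\ \text{eV}) \approx 4.5\ \text{nm}$, in agreement with the value reported above.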
Biochemical and physiological studies of Arabidopsis thaliana Diacylglycerol Kinase 7 (AtDGK7)
(2006)
A family of diacylglycerol kinases (DGKs) phosphorylates the substrate diacylglycerol (DAG) to generate phosphatidic acid (PA). Both molecules, DAG and PA, are involved in signal transduction pathways. In the model plant Arabidopsis thaliana, seven candidate genes (named AtDGK1 to AtDGK7) code for putative DGK isoforms. Here I report the molecular cloning and characterization of AtDGK7. Biochemical, molecular, and physiological properties of AtDGK7 and its corresponding enzyme are analyzed. According to Genevestigator data, the AtDGK7 gene is expressed in seedlings and adult Arabidopsis plants, especially in flowers. The AtDGK7 gene encodes the smallest functional DGK predicted in higher plants; in addition, it has an alternative coding sequence containing an extended AtDGK7 open reading frame, which was confirmed by PCR and submitted to the GenBank database (accession number DQ350135). The new cDNA has an extension of 439 nucleotides coding for 118 additional amino acids. The former AtDGK7 enzyme has a predicted molecular mass of ~41 kDa, and its activity is affected by pH and detergents. The DGK inhibitor R59022 also affects AtDGK7 activity, although only at high concentrations (IC50 ~380 µM). The AtDGK7 enzyme shows a Michaelis-Menten-type saturation curve for 1,2-DOG; under the assay conditions, the calculated Km and Vmax were 36 µM 1,2-DOG and 0.18 pmol PA min^-1 (mg protein)^-1, respectively. The former AtDGK7 protein is able to phosphorylate different DAG analogs that are typically found in plants. The newly deduced AtDGK7 protein harbors the catalytic domain DGKc and the accessory domain DGKa, instead of the truncated version found in the former AtDGK7 protein (Gomez-Merino et al., 2005).
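A minimal numerical sketch of the saturation kinetics reported above, using the published constants (the function and the sanity check are mine, not from the thesis):

    # Michaelis-Menten saturation curve for AtDGK7 with the reported constants
    KM = 36.0     # Km in µM 1,2-DOG
    VMAX = 0.18   # Vmax in pmol PA min^-1 (mg protein)^-1

    def velocity(s):
        """Reaction velocity at substrate concentration s (in µM)."""
        return VMAX * s / (KM + s)

    # at s = Km the rate is half-maximal, as Michaelis-Menten kinetics require
    assert abs(velocity(36.0) - VMAX / 2) < 1e-12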
Biochemical and cellular characterization of filamin binding proteins in cross striated muscle
(2006)
We analyze the asymptotic behavior in the limit $\varepsilon \to 0$ for a wide class of difference operators $H_\varepsilon = T_\varepsilon + V_\varepsilon$ with an underlying multi-well potential. They act on the square-summable functions on the lattice $(\varepsilon \mathbb{Z})^d$. We start by showing the validity of a harmonic approximation and construct WKB solutions at the wells. We then construct a Finslerian distance $d$ induced by $H$ and show that short integral curves are geodesics and that $d$ gives the rate for the exponential decay of Dirichlet eigenfunctions. In terms of this distance, we give sharp estimates for the interaction between the wells and construct the interaction matrix.
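Schematically (in my notation, not a verbatim statement from the thesis), a decay estimate of this kind has the familiar Agmon form: for a Dirichlet eigenfunction $u_\varepsilon$ localized at a well $x_0$, one expects $|u_\varepsilon(x)| \le C_\delta \, e^{-(1-\delta)\, d(x_0, x)/\varepsilon}$ for every $\delta > 0$, so that the Finslerian distance $d$ controls the exponential rate.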
Advances in biotechnologies rapidly increase the number of molecules of a cell that can be observed simultaneously. This includes expression levels of thousands or tens of thousands of genes as well as concentration levels of metabolites or proteins. Such profile data, observed at different times or under different experimental conditions (e.g., heat or drought stress), show how the biological experiment is reflected on the molecular level. This information is helpful for understanding molecular behaviour and for identifying molecules, or combinations of molecules, that characterise a specific biological condition (e.g., a disease). This work shows the potential of component-extraction algorithms to identify the major factors that influenced the observed data. These can be expected experimental factors, such as time or temperature, as well as unexpected factors, such as technical artefacts or even unknown biological behaviour. Extracting components means reducing the very high-dimensional data to a small set of new variables termed components, each of which is a combination of all original variables. The classical approach for this purpose is principal component analysis (PCA). It is shown that, in contrast to PCA, which maximises the variance only, modern approaches such as independent component analysis (ICA) are more suitable for analysing molecular data: the independence condition imposed on the ICA components fits our assumption of individual (independent) factors influencing the data more naturally. This higher potential of ICA is demonstrated on a crossing experiment with the model plant Arabidopsis thaliana (thale cress). The experimental factors could be well identified and, in addition, ICA could even detect a technical artefact. However, under continuous observation, such as in time-course experiments, the data are in general nonlinearly distributed. To analyse such nonlinear data, a nonlinear extension of PCA is used. This nonlinear PCA (NLPCA) is based on a neural-network algorithm, which is adapted here to be applicable to incomplete molecular data sets; thus, it also provides the ability to estimate the missing data. The potential of nonlinear PCA to identify nonlinear factors is demonstrated on a cold-stress experiment with Arabidopsis thaliana. The results of the component analysis can be used to build a molecular network model. Since it includes functional dependencies, it is termed a functional network. Applied to the cold-stress data, it is shown that functional networks are appropriate for visualising biological processes and thereby revealing molecular dynamics.
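A minimal sketch of the PCA/ICA contrast drawn above, using scikit-learn on a placeholder expression matrix (shapes, names, and parameters are illustrative, not the thesis data):

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    # placeholder expression matrix: 20 samples x 1000 genes
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 1000))

    # PCA components maximise explained variance and are merely uncorrelated
    pca_scores = PCA(n_components=3).fit_transform(X)

    # ICA components are driven towards statistical independence, matching
    # the assumption of independent factors influencing the data
    ica_scores = FastICA(n_components=3, random_state=0).fit_transform(X)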
Since 1971, the Freudenthal Institute has developed an approach to mathematics education named Realistic Mathematics Education (RME). The philosophy of RME is based on Hans Freudenthal’s concept of ‘mathematics as a human activity’. Prof. Hans Freudenthal (1905-1990), a mathematician and educator, believed that ‘ready-made mathematics’ should not be taught in school; instead, he urged that students be offered ‘realistic situations’ from which they can rediscover mathematics, moving from informal to formal mathematics. Although mathematics education in Vietnam has achieved some successes, it still faces several challenges, and the reform of teaching methods has recently become an urgent task. Vietnamese mathematics education appears to lack the necessary theoretical frameworks. At first sight, the philosophy of RME is suitable for orienting the reform of teaching methods in Vietnam; however, the potential of RME for mathematics education, as well as the feasibility of applying RME to teaching mathematics, is still an open question in Vietnam. The primary aim of this dissertation is to investigate the possibilities of applying RME to teaching and learning mathematics in Vietnam and to answer the question ‘How could RME enrich Vietnamese mathematics education?’. The research emphasizes the teaching of geometry in Vietnamese middle schools. More specifically, the dissertation implements the following research tasks:
• Analyzing the characteristics of Vietnamese mathematics education in the ‘reformed’ period (from the early 1980s to the early 2000s) and at present;
• Conducting a survey of the views of 152 middle school teachers from several Vietnamese provinces and cities on Vietnamese mathematics education;
• Analyzing RME, including Freudenthal’s viewpoints underlying RME and the characteristics of RME;
• Discussing how to design RME-based lessons and how to apply these lessons to teaching and learning in Vietnam;
• Trialling RME-based lessons in a Vietnamese middle school;
• Analyzing the feedback from the students’ worksheets and the teachers’ reports, including the potential of RME-based lessons for Vietnamese middle schools and the difficulties the teachers and their students encountered with these lessons;
• Discussing proposals for applying RME-based lessons to teaching and learning mathematics in Vietnam, including suggestions for teachers who will apply these lessons in their teaching and the design of courses for in-service teachers and teachers in training.
This research reveals that, although teachers and students may encounter some obstacles while teaching and learning with RME-based lessons, RME could become a promising approach for mathematics education and could be applied effectively to teaching and learning mathematics in Vietnamese schools.
This thesis was devoted to the study of the coupled system composed of the El Niño/Southern Oscillation (ENSO) and the annual cycle. More precisely, the work focused on two main problems: 1. how to separate the two oscillations within a tractable model in order to understand the behaviour of the whole system; 2. how to model the system in order to achieve a better understanding of the interaction, as well as to predict future states of the system. We focused our efforts on the sea surface temperature equations, considering atmospheric effects secondary to the ocean dynamics. The results may be summarised as follows: 1. Linear methods are not suitable for characterising the dimensionality of the sea surface temperature field in the tropical Pacific Ocean, and therefore do not by themselves help to separate the oscillations. Nonlinear methods of dimensionality reduction prove better at defining a lower limit for the dimensionality of the system and at explaining the statistical results in a more physical way [1]. In particular Isomap, a nonlinear modification of multidimensional scaling, provides a physically appealing decomposition of the data, as it replaces the Euclidean distances on the manifold with an approximation of the geodesic distances. We expect that this method could be applied successfully to other oscillatory extended systems and, in particular, to meteorological systems. 2. A three-dimensional dynamical system describing the dynamics of the sea surface temperature in the tropical Pacific Ocean could be modelled using a backfitting algorithm. Although few data points were available, we could predict the future behaviour of the coupled ENSO-annual cycle system for lead times of less than six months, even though the constructed system presented several drawbacks: few data points for the backfitting algorithm, an untrained model, a lack of forcing with external data, and the simplification to a closed system. Nevertheless, ensemble prediction techniques showed that the prediction skill of the three-dimensional time series was as good as that of much more complex models. This suggests that the climatological system in the tropics is mainly explained by the ocean dynamics, while the atmosphere plays a secondary role in the physics of the process; relevant predictions for short lead times can be made with a low-dimensional system, despite its simplicity. The analysis of the SST data suggests that the nonlinear interaction between the oscillations is small and that noise plays a secondary role in the fundamental dynamics of the oscillations [2]. Viewed as a whole, the work outlines a general procedure for modelling climatological systems: first, find a suitable method of linear or nonlinear dimensionality reduction; then extract low-dimensional time series from the reduced representation; finally, fit a low-dimensional model with a backfitting algorithm in order to predict future states of the system.
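A minimal sketch of the Isomap step described above, using scikit-learn on placeholder SST snapshots (grid size, neighbourhood, and dimensions are illustrative choices, not those of the thesis):

    import numpy as np
    from sklearn.manifold import Isomap

    # placeholder SST anomaly fields: 500 monthly snapshots on a flattened grid
    rng = np.random.default_rng(1)
    sst = rng.normal(size=(500, 1200))

    # Isomap replaces Euclidean distances with approximate geodesic distances
    # along the data manifold before applying classical multidimensional scaling
    embedding = Isomap(n_neighbors=10, n_components=3).fit_transform(sst)

    # 'embedding' is a low-dimensional time series on which a small dynamical
    # model (e.g., one fitted by backfitting) could then be built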
The separation of natural and anthropogenically caused climatic changes is an important task of contemporary climate research. For this purpose, detailed knowledge of the natural variability of the climate during warm stages is a necessary prerequisite. Besides model simulations and historical documents, this knowledge is mostly derived from analyses of so-called climatic proxy data such as tree rings and sediment or ice cores. In order to interpret such sources of palaeoclimatic information appropriately, suitable approaches to statistical modelling and methods of time series analysis are necessary that are applicable to short, noisy, and non-stationary uni- and multivariate data sets. Correlations between different climatic proxy data within one or more climatological archives contain significant information about climatic change on longer time scales. Based on an appropriate statistical decomposition of such multivariate time series, one may estimate dimensions in terms of the number of significant, linearly independent components of the considered data set. In the presented work, a corresponding approach is introduced, critically discussed, and extended with respect to the analysis of palaeoclimatic time series. Temporal variations of the resulting measures allow one to derive information about climatic changes. For an example of trace-element abundances and grain-size distributions obtained near Cape Roberts (East Antarctica), it is shown that the variability of the dimensions of the investigated data sets clearly correlates with the Oligocene/Miocene transition about 24 million years before present as well as with regional deglaciation events. Grain-size distributions in sediments give information about the predominance of different transport and deposition mechanisms. Finite mixture models may be used to approximate the corresponding distribution functions appropriately. In order to give a complete description of the statistical uncertainty of the parameter estimates in such models, the concept of asymptotic uncertainty distributions is introduced. Its relationship with the mutual component overlap, as well as with the information missing due to grouping and truncation of the measured data, is discussed for a particular geological example. An analysis of a sequence of grain-size distributions obtained in Lake Baikal reveals certain problems accompanying the application of finite mixture models, which cause an extended climatological interpretation of the results to fail. As an appropriate alternative, a linear principal component analysis is used to decompose the data set into suitable fractions whose temporal variability correlates well with the variations of the average solar insolation on millennial to multi-millennial time scales. The abundance of coarse-grained material is obviously related to the annual snow cover, whereas a significant fraction of fine-grained sediments is likely transported from the Taklamakan desert via dust storms in the spring season.
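To illustrate the finite-mixture modelling of grain-size distributions mentioned above (a sketch with simulated data, not the Lake Baikal measurements; all values are placeholders):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # simulated log grain sizes from two overlapping transport populations
    rng = np.random.default_rng(2)
    grain = np.concatenate([rng.normal(2.0, 0.5, 400),   # fine fraction
                            rng.normal(4.0, 0.7, 200)])  # coarse fraction

    # fit a two-component Gaussian mixture to the (log) grain-size data
    gmm = GaussianMixture(n_components=2, random_state=0).fit(grain.reshape(-1, 1))
    print(gmm.means_.ravel(), gmm.weights_)  # component means and proportions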
The ultimate aim of this study is to better understand the relevance of weak electricity in the adaptive radiation of African mormyrid fishes. The chosen model taxon, the genus Campylomormyrus, exhibits a wide diversity of electric organ discharge (EOD) waveform types. The EOD is age-, sex-, and species-specific and is an important character for discriminating among species that are otherwise cryptic. After establishing a complementary set of molecular markers, I examined the radiation of Campylomormyrus through a combined approach of molecular data (sequence data from the mitochondrial cytochrome b gene and the nuclear S7 ribosomal protein gene, as well as 18 microsatellite loci developed specifically for the genus Campylomormyrus), observation of the ontogeny and diversification of the EOD waveform, and morphometric analysis of relevant morphological traits. I established the first convincing phylogenetic hypothesis for the genus Campylomormyrus. Using the microsatellite data, I showed that the identified phylogenetic clades are reproductively isolated biological species; in this way, I detected at least six species occurring in sympatry near Brazzaville/Kinshasa (Congo Basin). By combining molecular data and EOD analyses, I could show that three cryptic species, each characterised by its own adult EOD type, are hidden under a common juvenile EOD form. In addition, I confirmed that the adult male EOD is species-specific and differs more among closely related species than among more distantly related ones. This result, together with the observation that the EOD changes with maturity, suggests its function as a reproductive isolation mechanism. Through morphometric shape analysis, I could assign species types to the identified reproductively isolated groups and thus produce a sound taxonomy of the group. Besides this, I could also identify morphological traits relevant to the divergence between the identified species. Among them, the variation I found in the shape of the trunk-like snout suggests the presence of different trophic specializations; this trait might therefore have been involved in the ecological radiation of the group. In conclusion, I provide a convincing scenario for an adaptive radiation of weakly electric fish triggered by sexual selection via assortative mating based on differences in EOD characteristics, combined with divergent selection on morphological traits correlated with feeding ecology.
This thesis discusses challenges in IT security education, points out a gap between e-learning and practical education, and presents an approach to filling this gap. E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulty delivering hands-on experience because of the lack of proximity to real laboratory environments. Laboratory environments and practical exercises are indispensable instructional tools for IT security education, but security education in conventional computer laboratories poses particular problems, such as immobility and high creation and maintenance costs. Hence, there is a need to transform security laboratories and practical exercises into effective e-learning forms. In this thesis, we introduce the Tele-Lab IT-Security architecture, which allows students not only to learn IT security principles but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used instead of real computers to provide safe user work environments. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases the accessibility of laboratory resources and greatly reduces investment and maintenance costs. Within the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configuration are used to build lightweight security laboratories on a hosting computer. Reliability and availability of the laboratory platforms are covered by a virtual-machine management framework, which provides the monitoring and administration services necessary to detect and recover from critical failures of virtual machines at run time. Considering the risk that virtual machines could be misused to compromise production networks, we present a security-management solution that prevents the misuse of laboratory resources through security isolation at the system and network levels. This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not meant to replace conventional teaching in laboratories but to add practical features to e-learning. This thesis demonstrates the possibility of implementing hands-on security laboratories on the Internet reliably, securely, and economically.
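As a rough illustration of the kind of monitor-and-recover loop such a virtual-machine management framework implies (all names, probes, and recovery actions here are hypothetical placeholders, not Tele-Lab's actual implementation):

    import subprocess
    import time

    LAB_VMS = ["lab-vm-01", "lab-vm-02"]  # hypothetical lab VM hostnames

    def is_healthy(vm):
        """Hypothetical health probe: one ICMP ping to the VM's address."""
        result = subprocess.run(["ping", "-c", "1", vm], capture_output=True)
        return result.returncode == 0

    def recover(vm):
        """Hypothetical recovery action: restore the VM from a clean snapshot."""
        print(f"restoring {vm} from snapshot")  # stand-in for the real command

    while True:
        for vm in LAB_VMS:
            if not is_healthy(vm):
                recover(vm)
        time.sleep(30)  # polling interval in seconds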