Quantum dots (QDs) are common as luminescent markers for imaging in biological applications because their optical properties seem to be inert against the surrounding solvent. This, together with broad and strong absorption bands and intense, sharp, tuneable luminescence bands, makes them interesting candidates for methods utilizing Förster Resonance Energy Transfer (FRET), e.g. for sensitive homogeneous fluoroimmunoassays (FIA). In this work we demonstrate energy transfer from Eu³⁺-trisbipyridine (Eu-TBP) donors to CdSe-ZnS-QD acceptors in solutions with and without serum. The QDs are commercially available CdSe-ZnS core-shell particles emitting at 655 nm (QD655). The FRET system was achieved by binding streptavidin-conjugated donors to biotin-conjugated acceptors. After excitation of Eu-TBP and as a result of the energy transfer, the luminescence of the QD655 acceptors showed lengthened decay times similar to those of the donors. The energy transfer efficiency, calculated from the decay times of the bound and the unbound components, amounted to 37%. The Förster radius, estimated from the absorption and emission bands, was ca. 77 Å. The effective binding ratio, which depends not only on the ratio of binding pairs but also on unspecific binding, was obtained from the concentration dependence of the donor emission. As serum promotes unspecific binding, the overall FRET efficiency of the assay was reduced. We conclude that QDs are well suited as acceptors in FRET when combined with slow-decay donors such as europium. The investigation of the influence of the serum provides guidance towards improving the binding properties of QD assays.
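The quantities reported above are connected by the standard FRET relations; a minimal sketch follows (textbook formulas with placeholder values, not the study's data or code):

```python
# Standard FRET relations (textbook forms); values below are placeholders.

def efficiency_from_lifetimes(tau_da: float, tau_d: float) -> float:
    """E = 1 - tau_DA / tau_D, with donor decay times measured with (tau_DA)
    and without (tau_D) the acceptor present."""
    return 1.0 - tau_da / tau_d

def efficiency_from_distance(r: float, r0: float) -> float:
    """E = 1 / (1 + (r / R0)^6) for donor-acceptor distance r and Foerster radius R0."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# The reported E = 37% and R0 = 77 Angstrom together imply a donor-acceptor
# distance slightly above the Foerster radius:
E, r0 = 0.37, 77.0
r = r0 * ((1.0 - E) / E) ** (1.0 / 6.0)
print(f"r ≈ {r:.0f} Å")  # ≈ 84 Å
```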
In view of the importance of charge storage in polymer electrets for electromechanical transducer applications, the aim of this work is to contribute to the understanding of the charge-retention mechanisms. Furthermore, we try to explain how the long-term storage of charge carriers in polymeric electrets works and to identify the probable trap sites. Charge trapping and de-trapping processes were investigated in order to obtain evidence of the trap sites in polymeric electrets. The charge de-trapping behavior of two particular polymer electrets was studied by means of thermal and optical techniques. In order to obtain evidence of trapping or de-trapping, charge and dipole profiles in the thickness direction were also monitored. In this work, the study was performed on polyethylene terephthalate (PETP) and on cyclic-olefin copolymers (COCs). PETP is a photo-electret and contains a net dipole moment that is located in the carbonyl group (C=O). The electret behavior of PETP arises from both dipole orientation and charge storage. In contrast to PETP, COCs are not photo-electrets and do not exhibit a net dipole moment. The electret behavior of COCs arises from the storage of charges only. COC samples were doped with dyes in order to probe their internal electric field. COCs show shallow charge traps at 0.6 and 0.11 eV, characteristic of thermally activated processes. In addition, deep charge traps are present at 4 eV, characteristic of optically stimulated processes. PETP films exhibit a photo-current transient with a maximum that depends on the temperature, with an activation energy of 0.106 eV. The pair thermalization length (rc) calculated from this activation energy for photo-carrier generation in PETP was estimated to be approx. 4.5 nm. The generated photo-charge carriers can recombine, interact with the trapped charge, escape through the electrodes or occupy an empty trap. PETP possesses a small quasi-static pyroelectric coefficient (QPC): ~0.6 nC/(m²K) for unpoled samples, ~60 nC/(m²K) for poled samples and ~60 nC/(m²K) for unpoled samples under an electric bias (E ~10 V/µm). When stored charges generate an internal electric field of approx. 10 V/µm, they are able to induce a QPC comparable to that of the oriented dipoles. Moreover, we observe charge-dipole interaction. Since the raw data of the QPC experiments on PETP samples are noisy, a numerical Fourier-filtering procedure was applied. Simulations show that the data analysis remains reliable even when the noise level is up to 3 times larger than the calculated pyroelectric current for the QPC. PETP films revealed shallow traps at approx. 0.36 eV during thermally stimulated current measurements. These energy traps are associated with molecular dipole relaxations (C=O). On the other hand, photo-activated measurements yield deep charge traps at 4.1 and 5.2 eV. The observed wavelengths belong to transitions in PETP that are analogous to the π-π* benzene transitions. The observed charge de-trapping selectivity in the photo-charge decay indicates that the charge de-trapping results from a direct photon-charge interaction. Additionally, the charge de-trapping can be facilitated by photo-exciton generation and the interaction of the photo-excitons with trapped charge carriers. These results indicate that the benzene rings (C6H4) and the dipolar groups (C=O) can stabilize and share an extra charge carrier in a chemical resonance.
In this way, this charge could be de-trapped in connection with the photo-transitions of the benzene ring and with the dipole relaxations. The thermally activated charge release shows a trap depth different from that of its optical counterpart. This difference indicates that the trap levels depend on the de-trapping process and on the chemical nature of the trap site. That is, charge de-trapping from shallow traps is related to secondary forces, whereas charge de-trapping from deep traps is related to primary forces. Furthermore, the presence of deep trap levels accounts for the stability of the stored charge over long periods of time.
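As a hedged aside, trap depths such as those quoted above are conventionally interpreted through a thermally activated escape rate of the standard Arrhenius form (generic notation, not necessarily the thesis' parametrization):

```latex
\nu(T) = \nu_0 \, \exp\!\left(-\frac{E_t}{k_B T}\right)
```

where E_t is the trap depth, ν₀ an attempt frequency, and k_B T the thermal energy; deep traps with E_t much larger than k_B T then naturally account for charge stability over long periods.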
What can we learn from climate data? Methods for fluctuation, time/scale and phase analysis
(2006)
Since Galileo Galilei invented the first thermometer, researchers have tried to understand the complex dynamics of ocean and atmosphere by means of scientific methods. They observe nature and formulate theories about the climate system. For some decades now, powerful computers have been capable of simulating the past and future evolution of climate. Time series analysis tries to link the observed data to the computer models: using statistical methods, one estimates characteristic properties of the underlying climatological processes that in turn can enter the models. The quality of an estimation is evaluated by means of error bars and significance testing. On the one hand, such a test should be capable of detecting interesting features, i.e. be sensitive. On the other hand, it should be robust and sort out false positive results, i.e. be specific. This thesis mainly aims to contribute to methodological questions of time series analysis, with a focus on sensitivity and specificity, and to apply the investigated methods to recent climatological problems. First, the inference of long-range correlations by means of Detrended Fluctuation Analysis (DFA) is studied. It is argued that power-law scaling of the fluctuation function, and thus long memory, may not be assumed a priori but has to be established. This requires investigating the local slopes of the fluctuation function. The variability characteristic of stochastic processes is accounted for by calculating empirical confidence regions. The comparison of a long-memory with a short-memory model shows that the inference of long-range correlations from a finite amount of data by means of DFA is not specific. When aiming to infer short memory by means of DFA, a local slope larger than α = 0.5 for large scales does not necessarily imply long memory. Also, a finite scaling of the autocorrelation function is shifted to larger scales in the fluctuation function. It turns out that long-range correlations cannot be concluded unambiguously from the DFA results for the Prague temperature data set. In the second part of the thesis, an equivalence class of nonstationary Gaussian stochastic processes is defined in the wavelet domain. These processes are characterized by means of wavelet multipliers and exhibit well-defined time-dependent spectral properties; they allow one to generate realizations of any nonstationary Gaussian process. The dependency of the realizations on the wavelets used for the generation is studied, and bias and variance of the wavelet sample spectrum are calculated. To overcome the difficulties of multiple testing, an areawise significance test is developed and compared to the conventional pointwise test in terms of sensitivity and specificity. Applications to climatological and hydrological questions are presented. In the last part, the coupling between El Niño/Southern Oscillation (ENSO) and the Indian Monsoon on inter-annual time scales is studied by means of Hilbert transformation and a curvature-defined phase. This method allows one to investigate the relation of two oscillating systems with respect to their phases, independently of their amplitudes. The performance of the technique is evaluated using a toy model.
From the data, distinct epochs are identified, especially two intervals of phase coherence, 1886-1908 and 1964-1980, confirming earlier findings from a new point of view. A significance test of high specificity corroborates these results. Periods of coupling that were previously unknown and are invisible to linear methods are also detected. These findings suggest that the decreasing correlation during the last decades might be partly inherent to the ENSO/Monsoon system. Finally, a possible interpretation of how volcanic radiative forcing could cause the coupling is outlined.
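A minimal sketch of first-order DFA and the local slopes discussed above (a standard textbook implementation, not the thesis' code):

```python
import numpy as np

def dfa(x: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """First-order DFA: fluctuation function F(s) for each window size s."""
    y = np.cumsum(x - np.mean(x))           # profile (integrated series)
    F = np.empty(len(scales))
    for i, s in enumerate(scales):
        n = len(y) // s                      # non-overlapping windows of length s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        ms = []
        for seg in segs:
            coeffs = np.polyfit(t, seg, 1)   # local linear detrending
            ms.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        F[i] = np.sqrt(np.mean(ms))
    return F

# Local slopes alpha(s) = d log F / d log s. As argued in the abstract,
# long-range correlations require an asymptotically constant slope > 0.5,
# which must be established, not assumed.
x = np.random.randn(10_000)                  # white noise: expect alpha ~ 0.5
scales = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa(x, scales)
alpha = np.diff(np.log(F)) / np.diff(np.log(scales))
```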
In this work, approaches for the development of new detection systems for the Analytical Ultracentrifuge (AUC) were explored. Unlike its counterparts among chromatographic fractionation techniques, a multidetection system for the AUC has not yet been implemented to its full extent, despite its potential benefit. In this study we tried to couple to the AUC existing fundamental spectroscopic and scattering techniques that are used in day-to-day science as tools for extracting analyte information. Trials were performed for adapting Raman spectroscopy, light scattering and UV/Vis (with the possibility to work with the whole range of wavelengths) to the AUC. Raman spectroscopy and light scattering were concluded to be possible detection systems for the AUC, while the development of a fast fiber-optics-based multiwavelength detector was completed. The multiwavelength detector demonstrated data generation matching literature and reference measurement data, and faster data collection than that of the commercial instrument. It became obvious that, with the generation of data in 3-D space by the UV/Vis detection system, the user can select the wavelength for the evaluation of experimental results, as the data set contains information across the whole UV/Vis wavelength range. The advantage of fast data generation was exemplified by the evaluation of data for a mixture of three colloids. These data were in conformity with measurement results from normal radial experiments and showed no significant diffusion broadening. We thus conclude that with our multiwavelength detector, meaningful data in 3-D space can be collected at a much higher speed of data generation.
Earthquakes form by sudden brittle failure of rock, mostly as shear ruptures along a rupture plane. Besides this, mechanisms other than pure shearing have been observed for some earthquakes, mainly in volcanic areas. Possible explanations include complex rupture geometries and tensile earthquakes. Tensile earthquakes occur by the opening or closure of cracks during rupturing. They are likely to be connected with fluids that cause pressure changes in the pore space of rocks, leading to earthquake triggering. Tensile components have been reported for swarm earthquakes in West Bohemia in 2000. The aim and subject of this work is the assessment and accurate determination of such tensile components for earthquakes in anisotropic media. Currently used standard techniques for the retrieval of earthquake source mechanisms assume isotropic rock properties. By means of moment tensors, equivalent forces acting at the source are used to explain the radiated wavefield. However, seismic anisotropy, i.e. the directional dependence of elastic properties, has been observed in the Earth's crust and mantle, for example in West Bohemia. Compared to isotropy, anisotropy causes modifications in wave amplitudes and shear-wave splitting. In this work, the effects of seismic anisotropy on true or apparent tensile source components of earthquakes are investigated. In addition, earthquake source parameters are determined considering anisotropy. It is shown that moment tensors and radiation patterns due to shear sources in anisotropic media may be similar to those of tensile sources in isotropic media. Conversely, similarities between tensile earthquakes in anisotropic rocks and shear sources in isotropic media may exist. As a consequence, the interpretation of tensile source components is ambiguous. The effects due to anisotropy depend on the orientation of the earthquake source and the degree of anisotropy. The moment of an earthquake is also influenced by anisotropy. The orientation of fault planes can be reliably determined even if isotropy instead of anisotropy is assumed, provided that the spectra of the compressional waves are used. Greater difficulties may arise when the spectra of split shear waves are additionally included: retrieved moment tensors then show systematic artefacts. Observed tensile source components determined for events in West Bohemia in 1997 can only partly be attributed to the effects of moderate anisotropy. Furthermore, moment tensors determined earlier for earthquakes induced at the German Continental Deep Drilling Program (KTB), Bavaria, were reinterpreted under the assumption of anisotropic rock properties near the borehole. The events can be consistently identified as shear sources, although their moment tensors comprise tensile components that are considered to be apparent. These results emphasise the necessity of considering anisotropy in order to uniquely determine tensile source parameters. Therefore, a new inversion algorithm has been developed, tested, and successfully applied to 112 earthquakes that occurred during the most recent intense swarm episode in West Bohemia in 2000 at the German-Czech border. Their source mechanisms have been retrieved using isotropic and anisotropic velocity models. Determined local magnitudes are in the range between 1.6 and 3.2. Fault-plane solutions are similar to each other and characterised by left-lateral faulting on steeply dipping, roughly north-south oriented rupture planes. Their dip angles decrease above a depth of about 8.4 km.
Tensile source components indicating positive volume changes are found for more than 60% of the considered earthquakes. Their size depends on source time and location. They are significant at the beginning of the swarm and at depths below 8.4 km, but they decrease in importance later in the course of the swarm. Determined principal stress axes include P axes striking northeast and T axes striking southeast. They resemble those found earlier in Central Europe. However, a depth dependence in plunge is observed. Plunge angles of the P axes decrease gradually from 50° towards shallow angles with increasing depth. In contrast, the plunge angles of the T axes change rapidly from about 8° above a depth of 8.4 km to 21° below this depth. This thesis reports spatial and temporal variations in tensile source components and stress conditions for the first time for the swarm earthquakes in West Bohemia in 2000. These variations persist when anisotropy is assumed and can be explained by the intrusion of fluids into the opened cracks during tensile faulting.
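As a hedged aside, the moment-tensor decomposition on which such tensile-versus-shear interpretations are conventionally based takes the standard textbook form (not necessarily the thesis' own notation):

```latex
M = M_{\mathrm{ISO}} + M_{\mathrm{DC}} + M_{\mathrm{CLVD}},
\qquad
M_{\mathrm{ISO}} = \tfrac{1}{3}\,\operatorname{tr}(M)\, I
```

where the isotropic part carries the volume change of a tensile source, the double-couple part the shear faulting, and the CLVD the compensated linear vector dipole; the abstract's central point is that anisotropy can shift apparent weight between these parts.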
The intracontinental endorheic Aral Sea, remote from oceanic influences, represents an excellent sedimentary archive in Central Asia that can be used for high-resolution palaeoclimate studies. We performed palynological, microfacies and geochemical analyses on sediment cores retrieved from Chernyshov Bay, in the NW part of the modern Large Aral Sea. The most complete sedimentary sequence, whose total length is 11 m, covers approximately the past 2000 years of the late Holocene. High-resolution palynological analyses, conducted on both dinoflagellate cyst assemblages and pollen grains, provided evidence of prominent environmental change in the Aral Sea and in the catchment area. The diversity and distribution of dinoflagellate cysts within the assemblages characterized the sequence of salinity and lake-level changes during the past 2000 years. Due to the strong dependence of the Aral Sea hydrology on inputs from its tributaries, the lake levels are ultimately linked to fluctuations in meltwater discharges during spring. As the amplitude of glacial meltwater inputs is largely controlled by temperature variations in the Tien Shan and Pamir Mountains during the melting season, salinity and lake-level changes of the Aral Sea reflect temperature fluctuations in the high catchment area during the past 2000 years. Dinoflagellate cyst assemblages document lake lowstands and hypersaline conditions during ca. 0–425 AD, 920–1230 AD, 1500 AD, 1600–1650 AD, 1800 AD and since the 1960s, whereas oligosaline conditions and higher lake levels prevailed during the intervening periods. In addition, reworked dinoflagellate cysts from Palaeogene and Neogene deposits proved to be a valuable proxy for extreme sheet-wash events, when precipitation was enhanced over the Aral Sea Basin, as during 1230–1450 AD. We propose that the recorded environmental changes are related primarily to climate, but may have been amplified during extreme conditions by human-controlled irrigation activities or military conflicts. Additionally, salinity levels and variations in solar activity show striking similarities over the past millennium, as during 1000–1300 AD, 1450–1550 AD and 1600–1700 AD, when low lake levels match well with an increase in solar activity, suggesting that an increase in the net radiative forcing reinforced past regressions of the Aral Sea. Furthermore, we used pollen analyses to quantify changes in moisture conditions in the Aral Sea Basin. High-resolution reconstructions of precipitation (mean annual) and temperature (mean annual, coldest versus warmest month) are performed using the "probability mutual climatic spheres" method, providing the sequence of climate change for the past 2000 years in western Central Asia. Cold and arid conditions prevailed during ca. 0–400 AD, 900–1150 AD and 1500–1650 AD, with the extension of xeric vegetation dominated by steppe elements. Conversely, warmer and less arid conditions occurred during ca. 400–900 AD and 1150–1450 AD, when steppe vegetation was enriched in plants requiring moister conditions. Change in the precipitation pattern over the Aral Sea Basin is shown to be predominantly controlled by the Eastern Mediterranean (EM) cyclonic system, which provides humidity to the Middle East and western Central Asia during winter and early spring. As the EM is significantly regulated by pressure modulations of the North Atlantic Oscillation (NAO) when that system is in a negative phase, a relationship between humidity over western Central Asia and the NAO is proposed.
Moreover, laminated sediments record shifts in sedimentary processes during the late Holocene that reflect pronounced changes in taphonomic dynamics. In Central Asia, the frequency of dust storms occurring during spring, when the continent is heating up, is mostly controlled by the intensity and position of the Siberian High (SH) pressure system. Using the titanium (Ti) content in laminated sediments as a proxy for aeolian detrital inputs, changes in wind dynamics over Central Asia are documented for the past 1500 years, offering the longest reconstruction of SH variability to date. Based on high Ti content, stronger wind dynamics are reported for 450–700 AD, 1210–1265 AD, 1350–1750 AD and 1800–1975 AD, indicating a stronger SH during spring. In contrast, lower Ti content during 1750–1800 AD and 1980–1985 AD reflects a diminished influence of the SH and reduced atmospheric circulation. During 1180–1210 AD and 1265–1310 AD, considerably weakened atmospheric circulation is evidenced. As a whole, although climate dynamics controlled environmental changes and ultimately modulated changes in western Central Asia's climate system, it is likely that changes in solar activity also had an impact by influencing to some extent the Aral Sea's hydrological balance as well as regional temperature patterns in the past. The appendix of the thesis is provided as a ZIP download via the HTML document.
Advances in biotechnologies rapidly increase the number of molecules of a cell that can be observed simultaneously. This includes expression levels of thousands or tens of thousands of genes as well as concentration levels of metabolites or proteins. Such profile data, observed at different times or under different experimental conditions (e.g., heat or dry stress), show how the biological experiment is reflected on the molecular level. This information is helpful for understanding molecular behaviour and for identifying molecules, or combinations of molecules, that characterise a specific biological condition (e.g., a disease). This work shows the potential of component extraction algorithms to identify the major factors that influence the observed data. These can be expected experimental factors such as time or temperature, as well as unexpected factors such as technical artefacts or even unknown biological behaviour. Extracting components means reducing the very high-dimensional data to a small set of new variables termed components. Each component is a combination of all original variables. The classical approach for this purpose is principal component analysis (PCA). It is shown that, in contrast to PCA, which maximises the variance only, modern approaches such as independent component analysis (ICA) are more suitable for analysing molecular data. The condition of independence between the components of ICA fits more naturally our assumption of individual (independent) factors that influence the data. This higher potential of ICA is demonstrated by a crossing experiment with the model plant Arabidopsis thaliana (thale cress). The experimental factors could be well identified and, in addition, ICA could even detect a technical artefact. However, in continuous observations such as time-course experiments, the data show, in general, a nonlinear distribution. To analyse such nonlinear data, a nonlinear extension of PCA is used. This nonlinear PCA (NLPCA) is based on a neural network algorithm. The algorithm is adapted to be applicable to incomplete molecular data sets; thus, it also provides the ability to estimate the missing data. The potential of nonlinear PCA to identify nonlinear factors is demonstrated by a cold stress experiment with Arabidopsis thaliana. The results of component analysis can be used to build a molecular network model. Since it includes functional dependencies, it is termed a functional network. Applied to the cold stress data, it is shown that functional networks are appropriate for visualising biological processes and thereby revealing molecular dynamics.
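As a minimal illustration of the contrast drawn above, a hedged sketch using scikit-learn (random placeholder data; not the thesis' adapted algorithms, which additionally handle missing values and nonlinearity):

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# X stands for a (samples x molecules) profile matrix; random here.
X = np.random.randn(40, 2000)

pca = PCA(n_components=3)                      # maximises explained variance
scores_pca = pca.fit_transform(X)

ica = FastICA(n_components=3, random_state=0)  # maximises statistical independence
scores_ica = ica.fit_transform(X)

# Each row of scores_* places a sample in component space; ICA components
# are meant to align with individual (independent) influencing factors.
```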
Uncertainty about the sensitivity of the climate system to changes in the Earth's radiative balance constitutes a primary source of uncertainty for climate projections. Given the continuous increase in atmospheric greenhouse gas concentrations, constraining the uncertainty range of this sensitivity is of vital importance. A common measure for expressing this key characteristic of climate models is the climate sensitivity, defined as the simulated change in global-mean equilibrium temperature resulting from a doubling of the atmospheric CO2 concentration. The broad range of climate sensitivity estimates (1.5-4.5°C, as given in the last Assessment Report of the Intergovernmental Panel on Climate Change, 2001), inferred from comprehensive climate models, illustrates that the strength of simulated feedback mechanisms varies strongly among different models. The central goal of this thesis is to constrain uncertainty in climate sensitivity. For this objective we first generate a large ensemble of model simulations covering different feedback strengths, and then test their consistency with present-day observational data and proxy-data from the Last Glacial Maximum (LGM). Our analyses are based on an ensemble of fully coupled simulations that were realized with a climate model of intermediate complexity (CLIMBER-2). These model versions cover a broad range of climate sensitivities, from 1.3 to 5.5°C, and have been generated by simultaneously perturbing a set of 11 model parameters. The analysis of the simulated model feedbacks reveals that the spread in climate sensitivity results from different realizations of the feedback strengths in water vapour, clouds, lapse rate and albedo. The calculated spread in the sum of all feedbacks spans almost the entire plausible range inferred from a sampling of more complex models. We show that the requirement of consistency between simulated pre-industrial climate and a set of seven global-mean data constraints represents a comparatively weak test for model sensitivity (the data constrain climate sensitivity to 1.3-4.9°C). Analyses of the simulated latitudinal profile and of the seasonal cycle suggest that additional present-day data constraints, based on these characteristics, do not further constrain uncertainty in climate sensitivity. The novel approach presented in this thesis consists of systematically combining a large set of LGM simulations with data information from reconstructed regional glacial cooling. Irrespective of uncertainties in model parameters and feedback strengths, the set of our model versions reveals a close link between the simulated warming due to a doubling of CO2 and the cooling obtained for the LGM. Based on this close relationship between past and future temperature evolution, we define a method (based on linear regression) that allows us to estimate robust 5-95% quantiles for climate sensitivity. We thus constrain the range of climate sensitivity to 1.3-3.5°C using proxy-data from the LGM at low and high latitudes. Uncertainties in glacial radiative forcing enlarge this estimate to 1.2-4.3°C, whereas the assumption of large structural uncertainties may increase the upper limit by an additional degree. Using proxy-based data constraints for tropical and Antarctic cooling, we show that very different absolute temperature changes in high and low latitudes all yield very similar estimates of climate sensitivity.
On the whole, this thesis highlights that LGM proxy-data information can offer an effective means of constraining the uncertainty range in climate sensitivity and thus underlines the potential of paleo-climatic data to reduce uncertainty in future climate projections.
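The regression step described above can be sketched in a few lines; everything below is synthetic and only illustrates the idea (ensemble pairs of LGM cooling and 2xCO2 warming, a linear fit, then mapping a proxy-based cooling estimate onto sensitivity):

```python
import numpy as np

# Synthetic placeholder ensemble, not CLIMBER-2 output: each member yields a
# (LGM cooling, climate sensitivity) pair, with an assumed near-linear link.
rng = np.random.default_rng(0)
sensitivity = rng.uniform(1.3, 5.5, 50)                   # K per CO2 doubling
lgm_cooling = 1.1 * sensitivity + rng.normal(0, 0.3, 50)  # K, assumed relation

# Linear regression of sensitivity on simulated LGM cooling.
slope, intercept = np.polyfit(lgm_cooling, sensitivity, 1)

# A hypothetical proxy-based cooling estimate then implies a sensitivity;
# quantiles would follow from propagating proxy and regression uncertainty.
proxy_cooling = 4.0  # K, illustrative
print(f"implied sensitivity ~ {slope * proxy_cooling + intercept:.1f} K")
```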
This thesis discusses challenges in IT security education, points out a gap between e-learning and practical education, and presents work to fill that gap. E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulty delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses particular problems, such as immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. In this thesis, we introduce the Tele-Lab IT-Security architecture, which allows students not only to learn IT security principles but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used instead of real computers to provide safe user work environments. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases accessibility to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of the laboratory platforms are covered by a virtual machine management framework, which provides the necessary monitoring and administration services to detect and recover from critical failures of virtual machines at run time. Considering the risk that virtual machines can be misused to compromise production networks, we present a security management solution that prevents the misuse of laboratory resources by security isolation at the system and network levels. This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is intended not to substitute conventional teaching in laboratories but to add practical features to e-learning. This thesis demonstrates the possibility of implementing hands-on security laboratories on the Internet reliably, securely, and economically.
The layer-by-layer assembly (LBL) of polyelectrolytes has been extensively studied for the preparation of ultrathin films due to the versatility of the build-up process. Control of the permeability of these layers is particularly important, as there are potential drug delivery applications. Multilayered polyelectrolyte microcapsules are also of great interest due to their possible use as microcontainers. This work presents two methods that can be used as employable drug delivery systems, both of which can encapsulate an active molecule and tune the release properties of the active species. Poly(N-isopropylacrylamide) (PNIPAM) is known to be a thermo-sensitive polymer with a Lower Critical Solution Temperature (LCST) around 32 °C; above this temperature PNIPAM is insoluble in water and collapses. It is also known that the LCST decreases with the addition of salt. This work shows, by Differential Scanning Calorimetry (DSC) and Confocal Laser Scanning Microscopy (CLSM), that the LCST of PNIPAM can be tuned with salt type and concentration. Microcapsules were used to encapsulate this thermo-sensitive polymer, resulting in a reversible and tunable stimuli-responsive system. The encapsulation of PNIPAM inside the capsules was proven with Raman spectroscopy, DSC (bulk LCST measurements), AFM (thickness change), SEM (morphology change) and CLSM (in situ LCST measurement inside the capsules). The exploitation of the capsules as microcontainers is advantageous not only because of the protection the capsules give to the active molecules, but also because it facilitates easier transport. The second system investigated demonstrates the ability to reduce the permeability of polyelectrolyte multilayer films by the addition of charged wax particles. The incorporation of this hydrophobic coating leads to a reduced water sensitivity, particularly after heating, which melts the wax and forms a barrier layer. This conclusion was proven with neutron reflectivity by showing the decreased presence of D2O in planar polyelectrolyte films after annealing, which creates a barrier layer. The permeability of capsules could also be decreased by the addition of a wax layer. This was proven by the increase in recovery time measured in Fluorescence Recovery After Photobleaching (FRAP) experiments. In general, two advanced methods, potentially suitable for drug delivery systems, have been proposed. In both cases, if biocompatible elements are used to fabricate the capsule wall, these systems provide a stable method of encapsulating active molecules. Stable encapsulation coupled with the ability to tune the wall thickness provides the ability to control the release profile of the molecule of interest.
With the increasing number of applications in Internet and mobile environments, distributed software systems are required to be more powerful and flexible, especially in terms of dynamism and security. This dissertation describes my work concerning three aspects: dynamic reconfiguration of component software, security control on middleware applications, and dynamic composition of web services. Firstly, I proposed a technology named Routing Based Workflow (RBW) to model the execution and management of collaborative components and to realize temporary binding of component instances. Temporary binding means that component instances are temporarily loaded into a created execution environment to execute their functions, and are then released back to their repository after execution. Temporary binding allows the creation of an idle execution environment for all collaborative components, on which change operations can be carried out immediately. Changes to the execution environment result in a new collaboration of all involved components, and also greatly simplify the classical issues arising from dynamic changes, such as consistency preservation. To demonstrate the feasibility of RBW, I created a dynamic secure middleware system, the Smart Data Server Version 3.0 (SDS3). In SDS3, an open-source implementation of CORBA is adopted and modified as the communication infrastructure, and three secure components, managed by RBW, are created to enhance the security of access to deployed applications. SDS3 offers multi-level security control over its applications, from strategy control to application-specific detail control. Through management by RBW, the strategy control of SDS3 applications can be dynamically changed by reorganizing the collaboration of the three secure components. In addition, I created the Dynamic Services Composer (DSC), based on the Apache open-source projects Apache Axis and WSIF. In DSC, RBW is employed to model the interaction and collaboration of web services and to enable dynamic changes to the flow structure of web services. Finally, overall performance tests were made to evaluate the efficiency of the developed RBW and SDS3. The results demonstrate that temporary binding of component instances has only a slight impact on the execution efficiency of components, and that the blackout time arising from dynamic changes can be greatly reduced in any application.
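A minimal sketch of the temporary-binding idea as described above (all names are illustrative, not SDS3 code): instances live in a repository, are bound to an execution environment only for the duration of a call, and are released afterwards, so the idle pool can be reconfigured between runs.

```python
class Repository:
    """Holds idle component instances; the pool can be reorganized at any time."""
    def __init__(self, components):
        self._pool = dict(components)      # name -> idle component instance

    def checkout(self, name):
        return self._pool.pop(name)        # temporarily bind an instance

    def release(self, name, instance):
        self._pool[name] = instance        # return it to the repository

def execute_workflow(repo, route, request):
    """Run `request` through the components named in `route`, binding each
    instance only for the duration of its call."""
    for name in route:
        comp = repo.checkout(name)
        try:
            request = comp.process(request)
        finally:
            repo.release(name, comp)
    return request

# Illustrative component and usage:
class UpperCase:
    def process(self, request):
        return request.upper()

repo = Repository({"filter": UpperCase()})
print(execute_workflow(repo, ["filter"], "hello"))  # HELLO
```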
The goal of a Brain-Computer Interface (BCI) is the development of a unidirectional interface between a human and a computer that allows control of a device via brain signals only. While the BCI systems of almost all other groups require the user to be trained over several weeks or even months, the group of Prof. Dr. Klaus-Robert Müller in Berlin and Potsdam, to which I belong, was one of the first research groups in this field to use machine learning techniques on a large scale. The adaptivity of the processing system to the individual brain patterns of the subject confers huge advantages on the user. Thus BCI research is considered a hot topic in machine learning and computer science. It requires interdisciplinary cooperation between disparate fields such as neuroscience, since only by combining machine learning and signal processing techniques based on neurophysiological knowledge will the largest progress be made. In this work I deal particularly with my part of this project, which lies mainly in the area of computer science. I have considered the following three main points. Establishing a performance measure based on information theory: I have critically examined the assumptions of Shannon's information transfer rate for application in a BCI context. By establishing suitable coding strategies I was able to show that this theoretical measure approximates quite well what is practically achievable. Transfer and development of suitable signal processing and machine learning techniques: One substantial component of my work was to develop several machine learning and signal processing algorithms to improve the efficiency of a BCI. Based on the neurophysiological knowledge that several independent EEG features can be observed for some mental states, I have developed a method for combining different, and possibly independent, features, which improved performance. In some cases the performance of the combination algorithm outperforms the best single performance by more than 50%. Furthermore, I have theoretically and practically addressed, via the development of suitable algorithms, the question of the optimal number of classes which should be used for a BCI. It transpired that, with the BCI performances reported so far, three or four different mental states are optimal. For another extension I have combined ideas from signal processing with those of machine learning, since a high gain can be achieved if the temporal filtering, i.e., the choice of frequency bands, is automatically adapted to each subject individually. Implementation of the Berlin Brain-Computer Interface and realization of suitable experiments: Finally, a further substantial component of my work was to realize an online BCI system which includes the developed methods but is also flexible enough to allow the simple realization of new algorithms and ideas. So far, bitrates of up to 40 bits per minute have been achieved with this system by completely untrained users, which, compared to the results of other groups, is highly successful.
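For reference, the information-transfer-rate measure commonly used in BCI work (the Wolpaw form of Shannon's rate) can be written down directly; this is the standard formula, which may differ in detail from the variants examined in the thesis:

```python
import math

def itr_bits_per_trial(n_classes: int, p: float) -> float:
    """Wolpaw ITR in bits per decision for n_classes equiprobable targets
    and classification accuracy p (errors spread evenly over other classes)."""
    if p <= 1.0 / n_classes:
        return 0.0  # no information beyond chance level
    return (math.log2(n_classes)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

# e.g. 4 classes at 95% accuracy yield about 1.6 bits per decision:
print(f"{itr_bits_per_trial(4, 0.95):.2f} bits/decision")
```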
We analyze the notions of monotonicity and complete monotonicity for Markov chains in continuous time, taking values in a finite partially ordered set. As in discrete time, the two notions are not equivalent. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete time.
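For orientation, the standard definitions read roughly as follows (hedged; the paper's exact notation may differ): monotonicity asks that the transition probabilities preserve the stochastic order on the poset, while complete monotonicity asks that all transitions be realizable simultaneously by a single order-preserving random mapping.

```latex
x \le y \;\Longrightarrow\; \sum_{z \in U} P_t(x,z) \;\le\; \sum_{z \in U} P_t(y,z)
\qquad \text{for every up-set } U \subseteq S \text{ and every } t \ge 0
```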
The advent of large-scale and high-throughput technologies has recently caused a shift in focus in contemporary biology from decades of reductionism towards a more systemic view. Alongside the availability of genome sequences, the exploration of organisms utilizing such approaches should give rise to a more comprehensive understanding of complex systems. Domestication and intensive breeding of crop plants have led to a parallel narrowing of their genetic basis. The potential to improve crops by conventional breeding using elite cultivars is therefore rather limited, and molecular technologies such as marker-assisted selection (MAS) are currently being exploited to re-introduce allelic variance from wild species. Molecular breeding strategies have mostly focused on the introduction of yield- or resistance-related traits to date. However, given that medical research has highlighted the importance of crop compositional quality in the human diet, this research field is rapidly becoming more important. The chemical composition of biological tissues can be efficiently assessed by metabolite profiling techniques, which allow the multivariate detection of metabolites in a given biological sample. Here, a GC/MS metabolite profiling approach has been applied to investigate the natural variation of tomatoes with respect to the chemical composition of their fruits. The establishment of a mass spectral and retention index (MSRI) library was a prerequisite for this work in order to establish a framework for the identification of metabolites from a complex mixture. As mass spectral and retention index information is highly important for the metabolomics community, this library was made publicly available. Metabolite profiling of tomato wild species revealed large differences in chemical composition, especially of amino and organic acids, as well as in the sugar composition and secondary metabolites. Intriguingly, the analysis of a set of S. pennellii introgression lines (ILs) identified 889 quantitative trait loci of compositional quality and 326 yield-associated traits. These traits are characterized by increases or decreases not only in single metabolites but also in entire metabolic pathways, highlighting the potential of this approach for uncovering novel aspects of metabolic regulation. Finally, the biosynthetic pathway of the phenylalanine-derived fruit volatiles phenylethanol and phenylacetaldehyde was elucidated via a combination of metabolic profiling of natural variation, stable isotope tracer experiments and reverse genetic experimentation.
Stars are born in turbulent molecular clouds that fragment and collapse under the influence of their own gravity, forming a cluster of a hundred or more stars. The star formation process is controlled by the interplay between supersonic turbulence and gravity. In this work, the properties of stellar clusters created by numerical simulations of gravoturbulent fragmentation are compared to those from observations. This includes the analysis of the properties of individual protostars as well as statistical properties of the entire cluster. It is demonstrated that protostellar mass accretion is a highly dynamical and time-variant process. The peak accretion rate is reached shortly after the formation of the protostellar core. It is about one order of magnitude higher than the constant accretion rate predicted by the collapse of a classical singular isothermal sphere, in agreement with observations. For a more reasonable comparison, the model accretion rates are converted to the observables bolometric temperature, bolometric luminosity, and envelope mass. The accretion rates from the simulations are used as input for an evolutionary scheme. The resulting distribution in the Tbol-Lbol-Menv parameter space is then compared to observational data by means of a 3D Kolmogorov-Smirnov test. The highest probability found that the distributions of model tracks and observational data points are drawn from the same population is 70%. The ratios of objects belonging to different evolutionary classes in observed star-forming clusters are compared to the temporal evolution of the gravoturbulent models in order to estimate the evolutionary stage of a cluster. While it is difficult to estimate absolute ages, the relative numbers of young stars reveal the evolutionary status of a cluster with respect to other clusters. The sequence shows Serpens as the youngest and IC 348 as the most evolved of the investigated clusters. Finally, the structures of young star clusters are investigated by applying different statistical methods, such as the normalised mean correlation length and the minimum spanning tree technique, and by a newly defined measure for the cluster elongation. The clustering parameters of the model clusters correspond in many cases well to those of observed ones. The temporal evolution of the clustering parameters shows that the star cluster builds up from several subclusters and evolves into a more centrally concentrated cluster, while the cluster expands more slowly than new stars are formed.
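The two clustering statistics named above can be sketched with standard SciPy tools; the normalisations below follow common practice and may differ from the thesis' exact definitions:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

# Placeholder projected stellar positions (random; real input would be
# simulated or observed cluster member coordinates).
pos = np.random.rand(100, 2)

d = pdist(pos)                          # all pairwise separations
mean_sep = d.mean()                     # mean correlation length (unnormalised)

mst = minimum_spanning_tree(squareform(d))
mean_edge = mst.data.mean()             # mean MST edge length

# Ratios of this kind underlie Q-type measures that separate centrally
# concentrated clusters from hierarchically subclustered ones.
print(mean_edge / mean_sep)
```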
We analyze the asymptotic behavior in the limit ε → 0 for a wide class of difference operators H_ε = T_ε + V_ε with an underlying multi-well potential. They act on the square-summable functions on the lattice (εZ)^d. We start by showing the validity of a harmonic approximation and construct WKB solutions at the wells. Then we construct a Finslerian distance d induced by H and show that short integral curves are geodesics and that d gives the rate for the exponential decay of Dirichlet eigenfunctions. In terms of this distance, we give sharp estimates for the interaction between the wells and construct the interaction matrix.
Ultrathin, semi-permeable membranes are not only essential in natural systems (membranes of cells or organelles) but are also important for applications (separation, filtering) in miniaturized devices. Membranes integrated as diffusion barriers or filters in micron-scale devices need to fulfill requirements equivalent to those of the natural systems, in particular mechanical stability and functionality (e.g. permeability), while being only tens of nm in thickness to allow fast diffusion times. Promising candidates for such membranes are polyelectrolyte multilayers, which have been found to be mechanically stable and variable in functionality. In this thesis, two concepts for integrating such membranes into larger-scale structures were developed. The first is based on the directed adhesion of hollow polyelectrolyte microcapsules. As a result, arrays of capsules were created, which can be useful for combinatorial chemistry or sensing. This concept was extended to couple encapsulated living cells to the surface. The second concept is the transfer of flat freestanding multilayer membranes to structured surfaces. We have developed a method that allows us to couple mm²-sized areas of defect-free film, with thicknesses down to 50 nm, to structured surfaces while avoiding crumpling of the membrane. We could again use this technique to produce micron-sized arrays. The freestanding membrane is a diffusion barrier for high-molecular-weight molecules, while small molecules can pass through; the membrane thus allows us to sense solution properties. We have also shown that osmotic pressures lead to membrane deflection, which could be described quantitatively.
In semi-arid savannas, unsustainable land use can lead to the degradation of entire landscapes, e.g. in the form of shrub encroachment. This leads to habitat loss and is assumed to reduce species diversity. In BIOTA phase 1, we investigated the effects of land use on population dynamics at the farm scale. In phase 2 we scale up to consider the whole regional landscape, consisting of a diverse mosaic of farms with different historic and present land use intensities. This mosaic creates a heterogeneous, dynamic pattern of structural diversity at a large spatial scale. Understanding how the region-wide dynamic land use pattern affects the abundance of animal and plant species requires the integration of processes on large as well as small spatial scales. In our multidisciplinary approach, we integrate information from remote sensing, genetic and ecological field studies, and small-scale process models in a dynamic region-wide simulation tool. Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung workshop, 9-10 February 2006.
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung Workshop, 9-10 February 2006
Decisions for the conservation of biodiversity and the sustainable management of natural resources are typically related to large scales, i.e. the landscape level. However, understanding and predicting the effects of land use and climate change on scales relevant for decision-making requires the inclusion of both large-scale vegetation dynamics and small-scale processes, such as soil-plant interactions. Integrating the results of multiple BIOTA subprojects enabled us to include the necessary data from soil science, botany, socio-economics and remote sensing in a high-resolution, process-based and spatially explicit model. Using an example from a sustainably used research farm and a communally used, degraded farming area in semiarid southern Namibia, we show the power of simulation models as a tool to integrate processes across disciplines and scales.