Quantum dots (QDs) are commonly used as luminescent markers for imaging in biological applications because their optical properties appear largely inert to the surrounding solvent. This, together with broad and strong absorption bands and intense, sharp, tuneable luminescence bands, makes them interesting candidates for methods utilizing Förster Resonance Energy Transfer (FRET), e.g. for sensitive homogeneous fluoroimmunoassays (FIA). In this work we demonstrate energy transfer from Eu³⁺-trisbipyridine (Eu-TBP) donors to CdSe-ZnS-QD acceptors in solutions with and without serum. The QDs are commercially available CdSe-ZnS core-shell particles emitting at 655 nm (QD655). The FRET system was formed by binding the streptavidin-conjugated donors to the biotin-conjugated acceptors. After excitation of Eu-TBP, and as a result of the energy transfer, the luminescence of the QD655 acceptors showed decay times lengthened towards those of the donors. The energy-transfer efficiency, calculated from the decay times of the bound and the unbound components, amounted to 37%. The Förster radius, estimated from the absorption and emission bands, was ca. 77 Å. The effective binding ratio, which depends not only on the ratio of binding pairs but also on unspecific binding, was obtained from the concentration dependence of the donor emission. As serum promotes unspecific binding, the overall FRET efficiency of the assay was reduced. We conclude that QDs are well suited as acceptors in FRET when combined with slow-decay donors such as europium. The investigation of the influence of the serum provides guidance towards improving the binding properties of QD assays.
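The decay-time relations used in the abstract can be sketched in a few lines. This is a minimal illustration of the standard FRET formulas (E = 1 - τ_DA/τ_D and E = 1/(1 + (r/R0)⁶)); the numeric decay times below are placeholders, not the measured values of the study.

```python
# Sketch of the standard FRET relations, assuming single-exponential decays.
# The decay times used here are illustrative placeholders.

def fret_efficiency(tau_donor_acceptor: float, tau_donor: float) -> float:
    """Energy-transfer efficiency from donor decay times with (tau_DA)
    and without (tau_D) a bound acceptor: E = 1 - tau_DA / tau_D."""
    return 1.0 - tau_donor_acceptor / tau_donor

def donor_acceptor_distance(efficiency: float, r0_angstrom: float) -> float:
    """Donor-acceptor distance r from E = 1 / (1 + (r / R0)**6)."""
    return r0_angstrom * ((1.0 / efficiency) - 1.0) ** (1.0 / 6.0)

# A 37% efficiency with R0 = 77 Angstrom (values quoted in the abstract)
# implies a donor-acceptor distance slightly above the Foerster radius.
e = fret_efficiency(tau_donor_acceptor=0.63, tau_donor=1.0)
r = donor_acceptor_distance(e, r0_angstrom=77.0)
```

At E = 50% the formula returns exactly R0, which is a quick sanity check on the implementation.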
In view of the importance of charge storage in polymer electrets for electromechanical transducer applications, the aim of this work is to contribute to the understanding of the charge-retention mechanisms. Furthermore, we try to explain how the long-term storage of charge carriers in polymeric electrets works and to identify the probable trap sites. Charge trapping and de-trapping processes were investigated in order to obtain evidence of the trap sites in polymeric electrets. The charge de-trapping behavior of two particular polymer electrets was studied by means of thermal and optical techniques. In order to obtain evidence of trapping or de-trapping, charge and dipole profiles in the thickness direction were also monitored. The study was performed on polyethylene terephthalate (PETP) and on cyclic-olefin copolymers (COCs). PETP is a photo-electret and carries a net dipole moment that is located in the carbonyl group (C=O). The electret behavior of PETP arises from both dipole orientation and charge storage. In contrast to PETP, COCs are not photo-electrets and do not exhibit a net dipole moment; their electret behavior arises from the storage of charges only. COC samples were doped with dyes in order to probe their internal electric field. COCs show shallow charge traps at 0.6 and 0.11 eV, characteristic of thermally activated processes. In addition, deep charge traps are present at 4 eV, characteristic of optically stimulated processes. PETP films exhibit a photo-current transient whose maximum depends on the temperature, with an activation energy of 0.106 eV. The pair thermalization length (r_c) calculated from this activation energy for photo-carrier generation in PETP was estimated to be approx. 4.5 nm. The generated photo-charge carriers can recombine, interact with the trapped charge, escape through the electrodes or occupy an empty trap.
PETP possesses a small quasi-static pyroelectric coefficient (QPC): ~0.6 nC/(m²K) for unpoled samples, ~60 nC/(m²K) for poled samples and ~60 nC/(m²K) for unpoled samples under an electric bias (E ~10 V/µm). When stored charges generate an internal electric field of approx. 10 V/µm, they are able to induce a QPC comparable to that of the oriented dipoles. Moreover, we observe a charge-dipole interaction. Since the raw data of the QPC experiments on PETP samples are noisy, a numerical Fourier-filtering procedure was applied. Simulations show that the data analysis remains reliable for noise levels up to 3 times larger than the calculated pyroelectric current for the QPC. PETP films revealed shallow traps at approx. 0.36 eV during thermally stimulated current measurements. These trap energies are associated with molecular dipole relaxations (C=O). Photo-activated measurements, on the other hand, yield deep charge traps at 4.1 and 5.2 eV. The observed wavelengths belong to transitions in PETP that are analogous to the π-π* benzene transitions. The observed de-trapping selectivity in the photo-charge decay indicates that the charge de-trapping results from a direct photon-charge interaction. Additionally, the charge de-trapping can be facilitated by photo-exciton generation and the interaction of the photo-excitons with trapped charge carriers. These results indicate that the benzene rings (C6H4) and the dipolar groups (C=O) can stabilize and share an extra charge carrier in a chemical resonance. In this way, this charge could be de-trapped in connection with the photo-transitions of the benzene ring and with the dipole relaxations. The thermally activated charge release shows a trap depth different from that of its optical counterpart. This difference indicates that the trap levels depend on the de-trapping process and on the chemical nature of the trap site. That is, charge de-trapping from shallow traps is related to secondary forces.
Charge de-trapping from deep traps, in turn, is related to primary forces. Furthermore, the presence of deep trap levels accounts for the stability of the stored charge over long periods of time.
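The pair thermalization length quoted above can be estimated by equating the activation energy with the Coulomb binding energy of the charge pair, E_a = e²/(4πε₀ε_r·r_c). The relative permittivity ε_r ≈ 3 used below is an assumed, typical value for PETP, chosen only to illustrate the order of magnitude:

```python
# Sketch: Coulomb capture radius from an activation energy.
# eps_r = 3.0 is an assumed permittivity for illustration, not a thesis value.

COULOMB_EV_NM = 1.43996  # e^2 / (4*pi*eps0), expressed in eV*nm

def thermalization_length_nm(activation_energy_ev: float, eps_r: float) -> float:
    """r_c (nm) such that e^2 / (4*pi*eps0*eps_r*r_c) equals E_a (eV)."""
    return COULOMB_EV_NM / (eps_r * activation_energy_ev)

# With E_a = 0.106 eV (from the abstract) and eps_r = 3.0 (assumed),
# r_c comes out near the quoted value of approx. 4.5 nm.
r_c = thermalization_length_nm(0.106, 3.0)
```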
What can we learn from climate data? Methods for fluctuation, time/scale and phase analysis
(2006)
Since Galileo Galilei invented the first thermometer, researchers have tried to understand the complex dynamics of ocean and atmosphere by means of scientific methods. They observe nature and formulate theories about the climate system. For some decades, powerful computers have been capable of simulating the past and future evolution of climate. Time series analysis tries to link the observed data to the computer models: using statistical methods, one estimates characteristic properties of the underlying climatological processes that in turn can enter the models. The quality of an estimation is evaluated by means of error bars and significance testing. On the one hand, such a test should be capable of detecting interesting features, i.e. be sensitive. On the other hand, it should be robust and sort out false positive results, i.e. be specific. This thesis mainly aims to contribute to methodological questions of time series analysis, with a focus on sensitivity and specificity, and to apply the investigated methods to recent climatological problems. First, the inference of long-range correlations by means of Detrended Fluctuation Analysis (DFA) is studied. It is argued that power-law scaling of the fluctuation function, and thus long memory, may not be assumed a priori but has to be established. This requires investigating the local slopes of the fluctuation function. The variability characteristic of stochastic processes is accounted for by calculating empirical confidence regions. The comparison of a long-memory with a short-memory model shows that the inference of long-range correlations from a finite amount of data by means of DFA is not specific. When aiming to infer short memory by means of DFA, a local slope larger than α = 0.5 at large scales does not necessarily imply long memory. Also, a finite scaling of the autocorrelation function is shifted to larger scales in the fluctuation function.
It turns out that long-range correlations cannot be concluded unambiguously from the DFA results for the Prague temperature data set. In the second part of the thesis, an equivalence class of nonstationary Gaussian stochastic processes is defined in the wavelet domain. These processes are characterized by means of wavelet multipliers and exhibit well-defined time-dependent spectral properties; they allow one to generate realizations of any nonstationary Gaussian process. The dependency of the realizations on the wavelets used for the generation is studied, and bias and variance of the wavelet sample spectrum are calculated. To overcome the difficulties of multiple testing, an areawise significance test is developed and compared to the conventional pointwise test in terms of sensitivity and specificity. Applications to climatological and hydrological questions are presented. In the last part, the coupling between El Niño/Southern Oscillation (ENSO) and the Indian Monsoon on inter-annual time scales is studied by means of Hilbert transformation and a curvature-defined phase. This method allows one to investigate the relation of two oscillating systems with respect to their phases, independently of their amplitudes. The performance of the technique is evaluated using a toy model. From the data, distinct epochs are identified, especially two intervals of phase coherence, 1886-1908 and 1964-1980, confirming earlier findings from a new point of view. A significance test of high specificity corroborates these results. Periods of coupling that are invisible to linear methods and were previously unknown are also detected. These findings suggest that the decreasing correlation during the last decades might be partly inherent to the ENSO/Monsoon system.
Finally, a possible interpretation of how volcanic radiative forcing could cause the coupling is outlined.
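The DFA local-slope argument above can be illustrated with a minimal pure-Python sketch of first-order DFA. This is an illustrative implementation, not the thesis code; the point is that the local slopes of log F(s) vs. log s vary with scale, which a single power-law fit would hide:

```python
# Minimal DFA1 sketch: profile, linear detrending per window, RMS fluctuation,
# and local slopes d(log F)/d(log s) between successive scales.
import math
import random

def _linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def dfa_fluctuation(series, scale):
    """RMS fluctuation F(s) after linear detrending in windows of size s."""
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for v in series:                      # cumulative sum of demeaned series
        acc += v - mean
        profile.append(acc)
    sq_sum, count = 0.0, 0
    for start in range(0, len(profile) - scale + 1, scale):
        window = profile[start:start + scale]
        xs = list(range(scale))
        b, a = _linfit(xs, window)        # local linear trend
        sq_sum += sum((w - (a + b * x)) ** 2 for x, w in zip(xs, window))
        count += scale
    return math.sqrt(sq_sum / count)

def local_slopes(series, scales):
    """Local slopes of log F against log s between successive scales."""
    logs = [(math.log(s), math.log(dfa_fluctuation(series, s))) for s in scales]
    return [(lf2 - lf1) / (ls2 - ls1)
            for (ls1, lf1), (ls2, lf2) in zip(logs, logs[1:])]

# White noise: local slopes scatter around alpha = 0.5 rather than sitting
# exactly on it, which is why slopes must be inspected scale by scale.
random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(4096)]
slopes = local_slopes(noise, [8, 16, 32, 64, 128])
```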
In this work, approaches to the development of new detection systems for the Analytical Ultracentrifuge (AUC) were explored. Unlike in chromatographic fractionation techniques, multidetection has not yet been implemented to its full extent for the AUC, despite its potential benefits. In this study we tried to couple to the AUC fundamental spectroscopic and scattering techniques that are used as everyday tools for extracting analyte information. Trials were performed to adapt Raman spectroscopy, light scattering and UV/Vis spectroscopy (with the possibility of working with the whole range of wavelengths) to the AUC. Raman spectroscopy and light scattering were concluded to be possible detection systems for the AUC, while the development of a fast fibre-optics-based multiwavelength detector was completed. The multiwavelength detector generated data matching literature and reference measurements, and collected them faster than the commercial instrument. With the generation of data in a 3-D space by the UV/Vis detection system, the user can select the wavelength for the evaluation of experimental results, as the data set contains information over the whole UV/Vis wavelength range. The advantage of fast data generation was exemplified by the evaluation of data for a mixture of three colloids. These data were in conformity with measurement results from normal radial experiments and showed no significant diffusion broadening. We thus conclude that with our multiwavelength detector, meaningful data in a 3-D space can be collected at a much higher speed of data generation.
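The after-the-fact wavelength selection described above amounts to slicing a 3-D data set along its wavelength axis. A toy sketch with invented nested-list data (time × radius × wavelength), purely for illustration of the idea:

```python
# Toy sketch: pick the (time x radius) intensity plane closest to a target
# wavelength out of a 3-D multiwavelength data set. Data are invented.

def wavelength_slice(data, wavelengths, target_nm):
    """Extract the (time x radius) plane at the wavelength nearest target_nm."""
    idx = min(range(len(wavelengths)),
              key=lambda i: abs(wavelengths[i] - target_nm))
    return [[radial[idx] for radial in scan] for scan in data]

# data[t][r][w]: 2 time points, 3 radial positions, 3 wavelengths (nm).
wavelengths = [260.0, 280.0, 300.0]
data = [[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]],
        [[0.2, 0.3, 0.4], [0.5, 0.6, 0.7], [0.8, 0.9, 1.0]]]
plane_280 = wavelength_slice(data, wavelengths, 280.0)
# plane_280 -> [[0.2, 0.5, 0.8], [0.3, 0.6, 0.9]]
```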
Earthquakes form by sudden brittle failure of rock, mostly as shear ruptures along a rupture plane. Besides this, mechanisms other than pure shearing have been observed for some earthquakes, mainly in volcanic areas. Possible explanations include complex rupture geometries and tensile earthquakes. Tensile earthquakes occur by opening or closure of cracks during rupturing. They are likely often connected with fluids that cause pressure changes in the pore space of rocks, leading to earthquake triggering. Tensile components have been reported for swarm earthquakes in West Bohemia in 2000. The aim and subject of this work is the assessment and accurate determination of such tensile components for earthquakes in anisotropic media. Currently used standard techniques for the retrieval of earthquake source mechanisms assume isotropic rock properties. By means of moment tensors, equivalent forces acting at the source are used to explain the radiated wavefield. However, seismic anisotropy, i.e. a directional dependence of elastic properties, has been observed in the Earth's crust and mantle, for example in West Bohemia. In comparison to isotropy, anisotropy causes modifications in wave amplitudes and shear-wave splitting. In this work, effects of seismic anisotropy on true or apparent tensile source components of earthquakes are investigated. In addition, earthquake source parameters are determined considering anisotropy. It is shown that moment tensors and radiation patterns due to shear sources in anisotropic media may be similar to those of tensile sources in isotropic media. Conversely, tensile earthquakes in anisotropic rocks may resemble shear sources in isotropic media. As a consequence, the interpretation of tensile source components is ambiguous. The effects due to anisotropy depend on the orientation of the earthquake source and on the degree of anisotropy. The moment of an earthquake is also influenced by anisotropy.
The orientation of fault planes can be reliably determined even if isotropy is assumed instead of anisotropy, provided the spectra of the compressional waves are used. Greater difficulties may arise when the spectra of split shear waves are additionally included: the retrieved moment tensors then show systematic artefacts. Observed tensile source components determined for events in West Bohemia in 1997 can only partly be attributed to the effects of moderate anisotropy. Furthermore, moment tensors determined earlier for earthquakes induced at the German Continental Deep Drilling Program (KTB), Bavaria, were reinterpreted under the assumption of anisotropic rock properties near the borehole. The events can be consistently identified as shear sources, although their moment tensors comprise tensile components that are considered to be apparent. These results emphasise the necessity of considering anisotropy in order to determine tensile source parameters uniquely. Therefore, a new inversion algorithm has been developed, tested, and successfully applied to 112 earthquakes that occurred during the most recent intense swarm episode in West Bohemia in 2000 at the German-Czech border. Their source mechanisms have been retrieved using isotropic and anisotropic velocity models. Determined local magnitudes are in the range between 1.6 and 3.2. Fault-plane solutions are similar to each other and characterised by left-lateral faulting on steeply dipping, roughly North-South oriented rupture planes. Their dip angles decrease above a depth of about 8.4 km. Tensile source components indicating positive volume changes are found for more than 60% of the considered earthquakes. Their size depends on source time and location. They are significant at the beginning of the swarm and at depths below 8.4 km, but they decrease in importance later in the course of the swarm. Determined principal stress axes include P axes striking Northeast and T axes striking Southeast.
They resemble those found earlier in Central Europe. However, a depth dependence in plunge is observed. Plunge angles of the P axes decrease gradually from 50° towards shallow angles with increasing depth. In contrast, the plunge angles of the T axes change rapidly from about 8° above a depth of 8.4 km to 21° below this depth. In this thesis, spatial and temporal variations in tensile source components and stress conditions have been reported for the first time for the swarm earthquakes in West Bohemia in 2000. These variations persist when anisotropy is assumed and can be explained by intrusion of fluids into the opened cracks during tensile faulting.
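The ambiguity between shear and tensile source components discussed above is usually quantified by decomposing the moment tensor's principal values into isotropic (volume-change), double-couple (shear) and CLVD parts. A minimal sketch of one common decomposition convention (several exist, and the thesis's own algorithm is not reproduced here):

```python
# Sketch of a standard moment-tensor decomposition from principal moments,
# following one common (Jost & Herrmann style) convention. Illustrative only.

def decompose(eigs):
    """Return (isotropic, double-couple, CLVD) fractions of a moment tensor
    given its three principal moments."""
    iso = sum(eigs) / 3.0
    dev = sorted((e - iso for e in eigs), key=abs)   # deviatoric part, by |.|
    if dev[2] == 0.0:                                # purely isotropic source
        return 1.0, 0.0, 0.0
    eps = -dev[0] / abs(dev[2])
    m_total = abs(iso) + abs(dev[2])
    f_iso = abs(iso) / m_total
    f_clvd = (1.0 - f_iso) * 2.0 * abs(eps)
    f_dc = (1.0 - f_iso) * (1.0 - 2.0 * abs(eps))
    return f_iso, f_dc, f_clvd

# A pure shear source (principal moments 1, 0, -1) is 100% double-couple;
# a tensile crack in an isotropic medium mixes isotropic and CLVD parts.
shear = decompose([1.0, 0.0, -1.0])
crack = decompose([3.0, 1.0, 1.0])   # e.g. lambda = mu: no double-couple part
```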
The intracontinental endorheic Aral Sea, remote from oceanic influences, represents an excellent sedimentary archive in Central Asia that can be used for high-resolution palaeoclimate studies. We performed palynological, microfacies and geochemical analyses on sediment cores retrieved from Chernyshov Bay, in the NW part of the modern Large Aral Sea. The most complete sedimentary sequence, with a total length of 11 m, covers approximately the past 2000 years of the late Holocene. High-resolution palynological analyses, conducted on both dinoflagellate cyst assemblages and pollen grains, revealed prominent environmental change in the Aral Sea and in the catchment area. The diversity and distribution of dinoflagellate cysts within the assemblages characterize the sequence of salinity and lake-level changes during the past 2000 years. Due to the strong dependence of the Aral Sea hydrology on inputs from its tributaries, the lake levels are ultimately linked to fluctuations in meltwater discharges during spring. As the amplitude of glacial meltwater inputs is largely controlled by temperature variations in the Tien Shan and Pamir Mountains during the melting season, salinity and lake-level changes of the Aral Sea reflect temperature fluctuations in the high catchment area during the past 2000 years. Dinoflagellate cyst assemblages document lake lowstands and hypersaline conditions during ca. 0–425 AD, 920–1230 AD, 1500 AD, 1600–1650 AD, 1800 AD and since the 1960s, whereas oligosaline conditions and higher lake levels prevailed during the intervening periods. In addition, reworked dinoflagellate cysts from Palaeogene and Neogene deposits proved to be a valuable proxy for extreme sheet-wash events, when precipitation was enhanced over the Aral Sea Basin, as during 1230–1450 AD.
We propose that the recorded environmental changes are related primarily to climate, but may have been amplified during extreme conditions by human-controlled irrigation activities or military conflicts. Additionally, salinity levels and variations in solar activity show striking similarities over the past millennium, as during 1000–1300 AD, 1450–1550 AD and 1600–1700 AD, when low lake levels match well with an increase in solar activity, suggesting that an increase in the net radiative forcing reinforced past regressions of the Aral Sea. Furthermore, we used pollen analyses to quantify changes in moisture conditions in the Aral Sea Basin. High-resolution reconstructions of precipitation (mean annual) and temperature (mean annual, coldest versus warmest month) parameters were performed using the “probability mutual climatic spheres” method, providing the sequence of climate change for the past 2000 years in western Central Asia. Cold and arid conditions prevailed during ca. 0–400 AD, 900–1150 AD and 1500–1650 AD, with the extension of xeric vegetation dominated by steppe elements. Conversely, warmer and less arid conditions occurred during ca. 400–900 AD and 1150–1450 AD, when steppe vegetation was enriched in plants requiring moister conditions. Change in the precipitation pattern over the Aral Sea Basin is shown to be predominantly controlled by the Eastern Mediterranean (EM) cyclonic system, which provides humidity to the Middle East and western Central Asia during winter and early spring. As the EM is significantly regulated by pressure modulations of the North Atlantic Oscillation (NAO) when that system is in a negative phase, a relationship between humidity over western Central Asia and the NAO is proposed. In addition, laminated sediments record shifts in sedimentary processes during the late Holocene that reflect pronounced changes in taphonomic dynamics.
In Central Asia, the frequency of dust storms occurring during spring, when the continent is heating up, is mostly controlled by the intensity and position of the Siberian High (SH) pressure system. Using the titanium (Ti) content in laminated sediments as a proxy for aeolian detrital inputs, changes in wind dynamics over Central Asia are documented for the past 1500 years, offering the longest reconstruction of SH variability to date. Based on high Ti content, stronger wind dynamics are reported for 450–700 AD, 1210–1265 AD, 1350–1750 AD and 1800–1975 AD, indicating a stronger SH during spring. In contrast, lower Ti content during 1750–1800 AD and 1980–1985 AD reflects a diminished influence of the SH and a reduced atmospheric circulation. During 1180–1210 AD and 1265–1310 AD, a considerably weakened atmospheric circulation is evidenced. As a whole, although climate dynamics controlled environmental changes and ultimately modulated the western Central Asian climate system, it is likely that changes in solar activity also had an impact by influencing, to some extent, the Aral Sea's hydrological balance as well as regional temperature patterns in the past. The appendix of the thesis is provided as a ZIP download via the HTML document.
Advances in biotechnologies rapidly increase the number of molecules of a cell that can be observed simultaneously. This includes expression levels of thousands or tens of thousands of genes as well as concentration levels of metabolites or proteins. Such profile data, observed at different times or under different experimental conditions (e.g., heat or drought stress), show how the biological experiment is reflected on the molecular level. This information is helpful for understanding the molecular behaviour and for identifying molecules or combinations of molecules that characterise specific biological conditions (e.g., disease). This work shows the potential of component extraction algorithms to identify the major factors which influenced the observed data. These can be expected experimental factors such as time or temperature, as well as unexpected factors such as technical artefacts or even unknown biological behaviour. Extracting components means reducing the very high-dimensional data to a small set of new variables termed components, where each component is a combination of all original variables. The classical approach for that purpose is principal component analysis (PCA). It is shown that, in contrast to PCA, which maximises the variance only, modern approaches such as independent component analysis (ICA) are more suitable for analysing molecular data. The condition of independence between ICA components fits more naturally our assumption of individual (independent) factors which influence the data. This higher potential of ICA is demonstrated by a crossing experiment with the model plant Arabidopsis thaliana (thale cress). The experimental factors could be well identified and, in addition, ICA could even detect a technical artefact. However, in continuous observations such as time-course experiments, the data show, in general, a nonlinear distribution. To analyse such nonlinear data, a nonlinear extension of PCA is used.
This nonlinear PCA (NLPCA) is based on a neural network algorithm. The algorithm is adapted to be applicable to incomplete molecular data sets and thus also provides the ability to estimate the missing data. The potential of nonlinear PCA to identify nonlinear factors is demonstrated by a cold stress experiment on Arabidopsis thaliana. The results of component analysis can be used to build a molecular network model. Since it includes functional dependencies, it is termed a functional network. Applied to the cold stress data, it is shown that functional networks are appropriate for visualising biological processes and thereby revealing molecular dynamics.
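Component extraction in the linear case can be sketched compactly. The example below computes only the first principal component via power iteration on the covariance matrix, i.e. the variance-maximising direction that PCA finds; ICA, which the work above favours, adds an independence criterion on top of this and is omitted here for brevity. Toy data, purely illustrative:

```python
# Sketch: leading principal component of a small (samples x variables) matrix
# via power iteration on the covariance matrix. Toy data for illustration.
import math
import random

def first_component(data, iters=200):
    """Unit leading eigenvector of the covariance matrix of `data`."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1) for j in range(d)]
           for i in range(d)]
    random.seed(0)
    v = [random.random() for _ in range(d)]
    for _ in range(iters):                       # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Toy data in which variable 0 carries nearly all the variance: the leading
# component should load mostly on variable 0.
data = [[x, 0.1 * x + 0.01 * i] for i, x in enumerate([-2.0, -1.0, 0.0, 1.0, 2.0])]
pc1 = first_component(data)
```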
Uncertainty about the sensitivity of the climate system to changes in the Earth's radiative balance constitutes a primary source of uncertainty for climate projections. Given the continuous increase in atmospheric greenhouse gas concentrations, constraining the uncertainty range of this sensitivity is of vital importance. A common measure for expressing this key characteristic of climate models is the climate sensitivity, defined as the simulated change in global-mean equilibrium temperature resulting from a doubling of the atmospheric CO2 concentration. The broad range of climate sensitivity estimates (1.5-4.5°C, as given in the last Assessment Report of the Intergovernmental Panel on Climate Change, 2001), inferred from comprehensive climate models, illustrates that the strength of simulated feedback mechanisms varies strongly among different models. The central goal of this thesis is to constrain uncertainty in climate sensitivity. For this objective we first generate a large ensemble of model simulations covering different feedback strengths, and then test their consistency with present-day observational data and proxy data from the Last Glacial Maximum (LGM). Our analyses are based on an ensemble of fully coupled simulations that were realized with a climate model of intermediate complexity (CLIMBER-2). These model versions cover a broad range of climate sensitivities, from 1.3 to 5.5°C, and have been generated by simultaneously perturbing a set of 11 model parameters. The analysis of the simulated model feedbacks reveals that the spread in climate sensitivity results from different realizations of the feedback strengths in water vapour, clouds, lapse rate and albedo. The calculated spread in the sum of all feedbacks spans almost the entire plausible range inferred from a sampling of more complex models.
We show that the requirement of consistency between the simulated pre-industrial climate and a set of seven global-mean data constraints represents a comparatively weak test of model sensitivity (the data constrain climate sensitivity to 1.3-4.9°C). Analyses of the simulated latitudinal profile and of the seasonal cycle suggest that additional present-day data constraints, based on these characteristics, do not further narrow the uncertainty in climate sensitivity. The novel approach presented in this thesis consists in systematically combining a large set of LGM simulations with data information from reconstructed regional glacial cooling. Irrespective of uncertainties in model parameters and feedback strengths, our set of model versions reveals a close link between the simulated warming due to a doubling of CO2 and the cooling obtained for the LGM. Based on this close relationship between past and future temperature evolution, we define a method (based on linear regression) that allows us to estimate robust 5-95% quantiles for climate sensitivity. We thus constrain the range of climate sensitivity to 1.3-3.5°C using proxy data from the LGM at low and high latitudes. Uncertainties in glacial radiative forcing enlarge this estimate to 1.2-4.3°C, whereas the assumption of large structural uncertainties may increase the upper limit by an additional degree. Using proxy-based data constraints for tropical and Antarctic cooling, we show that very different absolute temperature changes in high and low latitudes all yield very similar estimates of climate sensitivity. On the whole, this thesis highlights that LGM proxy data can offer an effective means of constraining the uncertainty range in climate sensitivity and thus underlines the potential of paleo-climatic data to reduce uncertainty in future climate projections.
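The regression step named above can be sketched as follows: across ensemble members, the simulated 2×CO2 warming is regressed on the simulated LGM cooling, and a proxy-based cooling estimate is then mapped through the fitted line onto a sensitivity estimate. All numbers below are invented for illustration and are not results of the thesis:

```python
# Sketch of the regression idea: climate sensitivity vs. LGM cooling across
# an ensemble. The ensemble values here are hypothetical placeholders.

def linear_fit(x, y):
    """Least-squares slope and intercept of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical ensemble: LGM cooling (degC, positive = colder) against
# climate sensitivity (degC per CO2 doubling) for five model versions.
lgm_cooling = [3.0, 4.0, 5.0, 6.0, 7.0]
sensitivity = [1.5, 2.2, 2.9, 3.6, 4.3]
slope, intercept = linear_fit(lgm_cooling, sensitivity)

# A (hypothetical) proxy-based LGM cooling of 5.5 degC then maps onto a
# central sensitivity estimate via the fitted line.
estimate = slope * 5.5 + intercept
```

In the thesis, uncertainty in the proxy data and in the regression additionally yields 5-95% quantiles rather than a single central value.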
This thesis discusses challenges in IT security education, points out a gap between e-learning and practical education, and presents work to fill this gap. E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity to real laboratory environments. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses particular problems such as immobility and high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. In this thesis, we introduce the Tele-Lab IT-Security architecture, which allows students not only to learn IT security principles but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used instead of real computers to provide safe user work environments. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases the accessibility of laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation and system configuration are used to build lightweight security laboratories on a hosting computer. Reliability and availability of the laboratory platforms are ensured by a virtual machine management framework. This management framework provides the monitoring and administration services necessary to detect and recover from critical failures of virtual machines at run time.
Considering the risk that virtual machines can be misused to compromise production networks, we present a security management solution that prevents the misuse of laboratory resources through security isolation at the system and network levels. This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not meant to substitute for conventional teaching in laboratories but to add practical features to e-learning. This thesis demonstrates the possibility of implementing hands-on security laboratories on the Internet reliably, securely, and economically.
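The monitor-and-recover idea behind the virtual machine management framework can be sketched as a watchdog loop: poll each laboratory VM's health and restart any that fail. All class and method names below are invented stand-ins for illustration; they are not the actual Tele-Lab API:

```python
# Toy sketch of a VM watchdog pass. LabVM is an invented stand-in class,
# not part of any real virtualization framework.

class LabVM:
    """Toy stand-in for a laboratory virtual machine."""
    def __init__(self, name):
        self.name = name
        self.running = True
        self.restarts = 0

    def healthy(self):
        """Health check; a real framework would probe the guest OS."""
        return self.running

    def restart(self):
        """Recover the VM, e.g. by rebooting or restoring a clean snapshot."""
        self.running = True
        self.restarts += 1

def watchdog_pass(vms):
    """One monitoring pass: restart unhealthy VMs, report what was recovered."""
    recovered = []
    for vm in vms:
        if not vm.healthy():
            vm.restart()
            recovered.append(vm.name)
    return recovered

vms = [LabVM("attack-target"), LabVM("student-desktop")]
vms[0].running = False           # simulate a crashed VM
recovered = watchdog_pass(vms)   # only the crashed VM is recovered
```

A production framework would run such passes periodically and combine them with the network isolation described above, so that a recovered VM also returns in a known-clean state.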
The layer-by-layer assembly (LBL) of polyelectrolytes has been extensively studied for the preparation of ultrathin films due to the versatility of the build-up process. Control of the permeability of these layers is particularly important given their potential drug delivery applications. Multilayered polyelectrolyte microcapsules are also of great interest due to their possible use as microcontainers. This work presents two methods that can serve as drug delivery systems, both of which can encapsulate an active molecule and tune the release properties of the active species. Poly(N-isopropylacrylamide) (PNIPAM) is a thermo-sensitive polymer with a Lower Critical Solution Temperature (LCST) around 32 °C; above this temperature PNIPAM is insoluble in water and collapses. It is also known that the LCST decreases with the addition of salt. This work shows, with Differential Scanning Calorimetry (DSC) and Confocal Laser Scanning Microscopy (CLSM) evidence, that the LCST of PNIPAM can be tuned with salt type and concentration. Microcapsules were used to encapsulate this thermo-sensitive polymer, resulting in a reversible and tunable stimuli-responsive system. The encapsulation of PNIPAM inside the capsules was proven with Raman spectroscopy, DSC (bulk LCST measurements), AFM (thickness change), SEM (morphology change) and CLSM (in situ LCST measurement inside the capsules). The exploitation of the capsules as microcontainers is advantageous not only because of the protection the capsules give to the active molecules, but also because it facilitates easier transport. The second system investigated demonstrates the ability to reduce the permeability of polyelectrolyte multilayer films by the addition of charged wax particles. The incorporation of this hydrophobic coating leads to reduced water sensitivity, particularly after heating, which melts the wax and forms a barrier layer.
This was demonstrated with neutron reflectivity, which showed a decreased presence of D2O in planar polyelectrolyte films after annealing. The permeability of capsules could also be decreased by the addition of a wax layer, as shown by the increase in recovery time measured in Fluorescence Recovery After Photobleaching (FRAP) experiments. In general, two advanced methods potentially suitable for drug delivery systems have been proposed. In both cases, if biocompatible elements are used to fabricate the capsule wall, these systems provide a stable method of encapsulating active molecules. Stable encapsulation, coupled with the ability to tune the wall thickness, makes it possible to control the release profile of the molecule of interest.
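The FRAP recovery time used above as a permeability measure can be estimated from the fluorescence trace. A minimal sketch, assuming a single-exponential recovery F(t) = F_∞·(1 − exp(−t/τ)) that is linearised and fitted by least squares; real FRAP analysis typically uses a nonlinear fit, and the data below are synthetic:

```python
# Sketch: recovery time tau from a FRAP trace, assuming single-exponential
# recovery and linearising ln(1 - F/F_inf) = -t/tau. Synthetic data.
import math

def recovery_time(times, intensities, f_inf):
    """tau from the slope of ln(1 - F/F_inf) against t (least squares)."""
    xs, ys = [], []
    for t, f in zip(times, intensities):
        frac = f / f_inf
        if 0.0 < frac < 1.0:              # keep only linearisable points
            xs.append(t)
            ys.append(math.log(1.0 - frac))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope

# Synthetic recovery generated with tau = 5 s; the estimator recovers it.
times = [1.0, 2.0, 4.0, 8.0, 12.0]
trace = [1.0 * (1.0 - math.exp(-t / 5.0)) for t in times]
tau = recovery_time(times, trace, f_inf=1.0)
```

A slower recovery (larger τ) after wax deposition would then indicate, as in the abstract, a less permeable capsule wall.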