Using a questionnaire study, the authors examined the sexual behaviour of students and nursing trainees under the threat of AIDS (n = 593). The findings show that different groups of people, with different attitudes, different knowledge about HIV and AIDS, different sexual behaviour and different degrees of personal concern, must be addressed and guided towards prevention in differentiated ways. Professional proximity to HIV and AIDS has no influence on sexual attitudes and behaviour. A hazardous situation such as a possible HIV infection can only be met through self-regulation. Alongside personal concern, individuals must therefore also understand that they can protect themselves from this danger on their own. Furthermore, this behaviour must fit into one's own life-world and be supported by one's own social environment. Prevention efforts must rely on competence-building, resource-oriented and differentiated measures; approaches based on fear appeals and hostility to pleasure are counterproductive. Restricting prevention to measures centred on the individual is of little effect if societal and structural conditions are ignored. The goal of sex education and AIDS prevention work must therefore be to develop a communication structure for intimacy that is shared by all.
This thesis analyses synchronization phenomena occurring in large ensembles of interacting oscillatory units. In particular, the effects of nonisochronicity (the dependence of frequency on the oscillator's amplitude) on the macroscopic transition to synchronization are studied in detail. The new phenomena found (anomalous synchronization) are investigated within populations of oscillators as well as between ensembles of oscillators.
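The amplitude-frequency dependence at the heart of nonisochronicity can be sketched with mean-field-coupled Stuart-Landau oscillators, the simplest amplitude model in which frequency shifts with amplitude. This is a generic illustration, not the thesis's model: the nonisochronicity parameter `q`, the coupling `K` and the frequency spread are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
omega = rng.normal(0.0, 0.1, N)   # natural frequencies (assumed spread)
q = 2.0                            # nonisochronicity: frequency depends on |z|^2
K = 0.5                            # mean-field coupling strength (assumed)
z = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # random phases, unit amplitude

dt = 0.01
for _ in range(20000):
    mean_field = z.mean()
    # Stuart-Landau oscillator: the (1 + i*q)|z|^2 z term makes the
    # instantaneous frequency depend on the oscillation amplitude
    dz = (1 + 1j * omega) * z - (1 + 1j * q) * np.abs(z)**2 * z + K * (mean_field - z)
    z = z + dt * dz

R = abs(np.exp(1j * np.angle(z)).mean())  # Kuramoto order parameter, 0..1
```

Scanning `R` over `K` for different `q` reveals how nonisochronicity shifts the synchronization transition; with `q = 0` the model reduces to the isochronous case.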
We calculate the additional carbon emissions resulting from the conversion of natural land during urbanisation, and the change in carbon flows of "urbanised" ecosystems, whereby atmospheric carbon is exported to neighbouring territories, from 1980 to 2050 for eight world regions. As a scenario we use combined UN and demographic-model prognoses for regional total and urban population growth. The calculations of urban area dynamics are based on two models: a regression model and the Gamma-model. The urbanised area is subdivided into built-up, "green" (parks, etc.) and informal-settlement (favela) areas. The next step is to calculate the regional and world dynamics of carbon emission and export, and the annual total carbon balance. Both models give similar results with some quantitative differences. In the first model, world annual emissions attain a maximum of 205 MtC/year between 2020 and 2030 and then slowly decrease; the largest contributions come from China and the Asia and Pacific regions. In the second model, world annual emissions increase to 1.25 GtC in 2005 and begin to decrease afterwards. Comparing this emission maximum with the annual emission caused by deforestation, 1.36 GtC per year, shows that the role of urbanised territories (UT) is of comparable magnitude. The world annual export of carbon by UT grows monotonically by a factor of three, from 24 MtC to 66 MtC in the first model and from 249 MtC to 505 MtC in the second; the latter is comparable to the amount of carbon transported by rivers into the ocean (196-537 MtC). Estimating the total balance, we find that urbanisation shifts it towards a "sink" state. Urbanisation slows in the interval 2020-2030, and by 2050 the growth of urbanised areas almost stops; hence the total emission of natural carbon at that stage stabilises at the level of the 1980s (80 MtC per year).
As estimated by the second model, the total balance, almost constant until 2000, then starts to decrease at a nearly constant rate. By the end of the 21st century the total carbon balance will reach zero, when the exchange flows are fully balanced, and may even become negative, when the system begins to take up carbon from the atmosphere, i.e., becomes a "sink".
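The structure of such a balance calculation can be sketched with a toy model: a logistic urban-area scenario, conversion emissions proportional to newly urbanised area, and lateral export proportional to standing urban area. Every number below is invented for illustration; none is a value from the regression or Gamma-model above.

```python
import numpy as np

years = np.arange(1980, 2051)
# logistic urban-area scenario (km^2); capacity, rate and midpoint are assumed
A_max, k, t0 = 3.0e6, 0.06, 2005
area = A_max / (1 + np.exp(-k * (years - t0)))

c_conv = 5e3      # tC emitted per km^2 of newly urbanised land (assumed)
c_export = 20.0   # tC exported per km^2 of standing urban area per year (assumed)

emission = np.gradient(area, years) * c_conv   # tC/year from land conversion
export = area * c_export                       # tC/year lateral carbon export
balance = emission - export                    # > 0: net source, < 0: net sink

peak_year = years[np.argmax(emission)]         # conversion emissions peak
```

Because conversion emissions track the *growth rate* of urban area while export tracks its *extent*, the balance inevitably drifts towards the sink side as urbanisation saturates, which is the qualitative behaviour described above.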
My thesis is concerned with several new noise-induced phenomena in excitable neural models, especially those with FitzHugh-Nagumo dynamics. In these effects, the fluctuations intrinsically present in any complex neural network play a constructive role and improve functionality. I report the occurrence of Vibrational Resonance in excitable systems: both in an excitable electronic circuit and in the FitzHugh-Nagumo model, I show that an optimal amplitude of high-frequency driving enhances the response of an excitable system to a low-frequency signal. Additionally, the influence of additive noise and the interplay between Stochastic and Vibrational Resonance are analyzed. Further, I study systems which combine oscillatory and excitable properties and hence intrinsically possess two internal frequencies. I show that in such a system the effect of Stochastic Resonance can be amplified by an additional high-frequency signal in resonance with the oscillatory frequency; this amplification requires much lower noise intensities than conventional Stochastic Resonance in excitable systems. I also study frequency selectivity in noise-induced subthreshold signal processing in a system with many noise-supported stochastic attractors, and show that the response of the coupled elements at different noise levels can be significantly enhanced or reduced by forcing some elements into resonance with the new frequencies that correspond to appropriate phase relations. Finally, a noise-induced phase transition to excitability is reported in oscillatory media with FitzHugh-Nagumo dynamics. This transition takes place via noise-induced stabilization of a deterministically unstable fixed point of the local dynamics, while the overall phase-space structure of the system is maintained. The joint action of coupling and noise leads to a different type of phase transition and results in a stabilization of the system.
The resulting noise-induced regime is shown to display properties characteristic of excitable media, such as Stochastic Resonance and wave propagation. This effect thus allows the transmission of signals through an otherwise globally oscillating medium. In particular, these theoretical findings suggest a possible mechanism for suppressing undesirable global oscillations in neural networks (which are usually characteristic of abnormal medical conditions such as Parkinson's disease or epilepsy), using the action of noise to restore excitability, which is the normal state of neuronal ensembles.
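The vibrational-resonance setting can be sketched with a single deterministic FitzHugh-Nagumo unit driven by a subthreshold low-frequency signal plus a fast force. All parameter values below (the FHN constants, signal amplitudes and frequencies, and the helper `fhn_spike_count`) are illustrative assumptions, not the values used in the thesis; whether the fast drive elicits spikes depends on its amplitude `B`.

```python
import numpy as np

def fhn_spike_count(B, T=2000.0, dt=0.01):
    """Euler-integrate a FitzHugh-Nagumo unit driven by a subthreshold slow
    signal plus a fast drive of amplitude B; return the number of spikes."""
    a, b, eps = 0.7, 0.8, 0.08            # classic excitable FHN parameters
    A_low, w_low, w_high = 0.15, 0.05, 5.0  # slow signal stays subthreshold alone
    v, w = -1.0, -0.6                     # start near the stable fixed point
    spikes, above, t = 0, False, 0.0
    for _ in range(int(T / dt)):
        drive = A_low * np.sin(w_low * t) + B * np.sin(w_high * t)
        dv = v - v**3 / 3 - w + drive
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        t += dt
        if v > 1.0 and not above:         # threshold crossing = one spike
            spikes += 1
            above = True
        elif v < 0.0:
            above = False
    return spikes

quiet = fhn_spike_count(B=0.0)    # slow signal alone: no spikes expected
driven = fhn_spike_count(B=0.3)   # added fast drive can change the response
```

Sweeping `B` and measuring the response at the slow frequency (rather than raw spike counts) is the usual way to expose the resonance-like maximum.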
The study of microlensed astronomical objects yields information about the size and structure of these objects. In the first part of this thesis, the spectra of three lensed quasars obtained with the Potsdam Multi Aperture Spectrophotometer (PMAS) are examined for signatures of microlensing. Evidence for microlensing was found in the spectra of the quadruple quasar HE 0435-1223 and the double quasar HE 0047-1756, whereas the double quasar UM 673 (Q 0142-100) shows no signs of microlensing. Inverting the light curve of a microlensing caustic-crossing event allows the one-dimensional brightness profile of the lensed source to be reconstructed; this is investigated in the second part of the thesis. The mathematical description of this task leads to a Volterra integral equation of the first kind, whose solution is an ill-posed problem. To solve it, a local regularization method is applied that is better adapted to the causal structure of the Volterra equation than the Tikhonov-Phillips regularization used previously. It turns out that this method permits a better reconstruction of small structures in the source. Furthermore, the applicability of the regularization method to realistic light curves with irregular sampling or larger gaps in the data points is investigated.
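Why a first-kind Volterra equation is ill-posed, and what regularization buys, can be seen in a toy version with the trivial kernel K ≡ 1 (plain integration), where naive inversion is numerical differentiation. The microlensing kernel and the local regularization scheme of the thesis are not reproduced here; this is standard Tikhonov-Phillips on invented data.

```python
import numpy as np

# Toy Volterra equation of the first kind: g(t) = ∫_0^t f(s) ds,
# discretised with the rectangle rule on [0, 1].
n = 100
h = 1.0 / n
t = (np.arange(n) + 1) * h
A = h * np.tril(np.ones((n, n)))            # discretised integration operator
f_true = np.exp(-((t - 0.5) ** 2) / 0.02)   # source profile to recover (assumed)
g = A @ f_true

rng = np.random.default_rng(1)
g_noisy = g + 1e-3 * rng.normal(size=n)     # small observational noise

f_naive = np.linalg.solve(A, g_noisy)       # = differencing: amplifies noise ~1/h
lam = 1e-4                                  # regularisation parameter (assumed)
f_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g_noisy)

err_naive = np.linalg.norm(f_naive - f_true)
err_tik = np.linalg.norm(f_tik - f_true)    # regularised error is far smaller
```

The trade-off illustrated here — damping noise amplification at the price of some smoothing bias — is exactly what makes the choice of regularization scheme matter for resolving small source structures.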
Independent component analysis (ICA) is a tool for statistical data analysis and signal processing that is able to decompose multivariate signals into their underlying source components. Although the classical ICA model is highly useful, there are many real-world applications that require powerful extensions of ICA. This thesis presents new methods that extend the functionality of ICA: (1) reliability and grouping of independent components with noise injection, (2) robust and overcomplete ICA with inlier detection, and (3) nonlinear ICA with kernel methods.
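A compact numpy-only sketch of the classical ICA model that these extensions build on: two synthetic sources, a fixed mixing matrix, whitening, and a symmetric FastICA fixed-point iteration with a tanh nonlinearity. The sources, the mixing matrix and the iteration count are arbitrary illustrative choices, not anything from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(3 * t))            # square-wave source
s2 = ((2 * t) % 2) - 1                 # sawtooth source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])             # assumed mixing matrix
X = A @ S                              # observed mixtures

# whitening: zero mean, identity covariance
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

# symmetric FastICA: w_new = E[x g(w'x)] - E[g'(w'x)] w, with g = tanh
W = rng.normal(size=(2, 2))
for _ in range(200):
    Y = W @ Xw
    G = np.tanh(Y)
    W = G @ Xw.T / Xw.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                         # symmetric decorrelation (W W')^(-1/2) W

Y = W @ Xw                             # recovered sources, up to order/sign/scale
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])  # |corr| with true sources
```

Each recovered component correlates almost perfectly with one of the true sources; the reliability question addressed in the thesis is, roughly, how stable such components remain when the estimation is repeated under injected noise.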
For the recombinant production of proteins for structural and functional analyses, the E. coli expression system is the most widely used, owing to high yields and straightforward processing. However, the expression of eukaryotic proteins in E. coli in particular is often problematic, e.g. when the protein is not folded correctly and is deposited in insoluble inclusion bodies. In some cases it is favourable to analyse deletion constructs of a protein or an individual protein domain instead of the full-length protein. This implies the generation of a set of expression constructs that need to be characterised. In this work, methods to optimise and evaluate the in vitro folding of inclusion body proteins, as well as high-throughput characterisation of expression constructs, were developed. Transferring inclusion body proteins to their native state involves two steps: (a) solubilisation with a chaotropic reagent or a strong ionic detergent and (b) folding of the protein by removal of the chaotrope, accompanied by transfer into an appropriate buffer. The yield of natively folded protein is often substantially reduced by aggregation or misfolding; it may, however, be improved by certain additives to the folding buffer, which need to be identified empirically. In this thesis a screening procedure for folding conditions was developed. To reduce the number of possible combinations of screening additives, empirical observations documented in the literature as well as known properties of certain additives were considered. To decrease the amount of protein and work invested, the screen was miniaturised and automated using a pipetting robot. Twenty rapid-dilution conditions for the denatured protein are tested, plus two conditions for folding proteins with the detergent/cyclodextrin protein folding system of Rozema et al. (1996); 100 µg of protein is used per condition. In addition, eight conditions can be tested for folding of His-tagged proteins (approx.
200 µg) immobilised on metal chelate resins. The screen was successfully applied to fold a human protein, the p22 subunit of dynactin, which is expressed in inclusion bodies in E. coli. For p22 dynactin, as for many proteins, no biological assay was available to assess the success of the folding screen. Protein solubility cannot be used as a stringent criterion because, besides natively folded protein, soluble misfolded species and microaggregates may occur. This work evaluates methods to detect small amounts of natively folded protein after automated folding screening. Before the folding screen with p22 dynactin, two model enzymes, bovine carbonic anhydrase II (CAB) and pig heart mitochondrial malate dehydrogenase, were used for evaluation, and the recovered activity after refolding was correlated with different biophysical methods. 8-Anilino-1-naphthalenesulfonic acid binding experiments gave no useful information when refolding CAB, owing to low sensitivity and because misfolded protein could not be readily distinguished from native protein. Tryptophan fluorescence spectra of refolded CAB were used instead to assess the success of refolding: the shift of the intensity maximum to a shorter wavelength, compared with the denaturant-unfolded protein, as well as the fluorescence intensity, correlated with recovered enzymatic activity. For both model enzymes, analytical hydrophobic interaction chromatography (HIC) was useful for identifying refolded samples that contain active enzyme; compactly folded, active enzyme eluted in a distinct peak in a decreasing ammonium sulfate gradient. The detection limit of analytical HIC was approx. 5 µg. In the case of CAB, tryptophan fluorescence spectroscopy and analytical HIC showed that the two methods in combination can be useful to rule out false positives or false negatives obtained with one method alone. These two methods were also useful for identifying conditions for folding p22 dynactin.
However, tryptophan fluorescence spectroscopy can lead to false positives because in some cases spectra of soluble microaggregates are not well distinguishable from spectra of natively folded protein. In summary, a fast and reliable screening procedure was developed to make inclusion body proteins accessible to structural or functional analyses. In a separate project, 88 different E. coli expression constructs for 17 human protein domains that had been identified by sequence analysis were analysed using high-throughput purification and folding analysis, in order to obtain candidates suitable for structural analysis. After expression in 96 deep-well microplates and automated protein purification, solubly expressed protein domains were directly analysed by 1D ¹H-NMR spectroscopy. Isolated methyl group signals below 0.5 ppm were found to be particularly sensitive and reliable probes for folded protein. In addition, similar to the evaluation of the folding screen, analytical HIC proved to be an efficient tool for identifying constructs that yield compactly folded protein. The two methods, 1D ¹H-NMR spectroscopy and analytical HIC, provided complementary results. Six constructs, representing two domains, could quickly be identified as targets well suited for structural analysis. The structure of one of these domains was recently solved by co-workers; the other structure was published by another group during this project.
Ellipsometric light scattering as a new method for characterizing the interface of colloids
(2004)
Ellipsometric light scattering is introduced as a new, powerful method for characterizing layers around colloidal particles. The theoretical basis of the method is the Mie theory of light scattering. Experimentally, the polarization optics of a null ellipsometer were incorporated into the beam path of a light-scattering setup. As in reflection ellipsometry around the Brewster angle, ellipsometric scattering exhibits an angular range in which the method is sensitive to layers on the particle surface. The suitability of ellipsometric scattering for characterizing layers on particles was demonstrated on several systems. The thickness and refractive index of a thermosensitive layer of poly(N-isopropylacrylamide) on a poly(methyl methacrylate) core were determined; this makes it possible to measure the layer's refractive index, and thus its degree of swelling, experimentally. Furthermore, the influence of the NaCl concentration on the polyelectrolyte shell of poly(methyl methacrylate)-poly(styrene sulfonate) block copolymer particles was investigated; in the example studied here, the polyelectrolyte chains are not in a stretched conformation. Third, the distribution of low-molecular-weight ions around electrostatically stabilized poly(styrene) latex particles in water was examined; the observed layer thicknesses and layer refractive indices turn out to be much larger than expected from classical Poisson-Boltzmann theory. In addition, the birefringence of unilamellar lipid vesicles was determined, and dynamic light scattering measurements were carried out at the intensity minimum of the ellipsometric setup. These reveal a process with a correlation time that is independent of the scattering vector but depends on the wavelength used; the nature of this process could not be fully clarified here.
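The Brewster-angle analogy from reflection ellipsometry can be illustrated numerically: for a bare interface the ellipsometric ratio is ρ = r_p/r_s = tan Ψ · e^{iΔ}, and |r_p| vanishes at the Brewster angle, which is where the method is most sensitive. This is a generic Fresnel calculation for an assumed air/water interface, not the Mie-theory computation for coated spheres used in the thesis.

```python
import numpy as np

n1, n2 = 1.0, 1.33                       # air / water (assumed refractive indices)
theta = np.radians(np.linspace(1, 89, 881))   # angle of incidence, 0.1 deg grid
st = n1 * np.sin(theta) / n2             # Snell's law: sin of refraction angle
ct2 = np.sqrt(1 - st**2)                 # cos of refraction angle (no TIR, n2 > n1)

# Fresnel amplitude reflection coefficients
r_s = (n1 * np.cos(theta) - n2 * ct2) / (n1 * np.cos(theta) + n2 * ct2)
r_p = (n2 * np.cos(theta) - n1 * ct2) / (n2 * np.cos(theta) + n1 * ct2)

rho = r_p / r_s                          # ellipsometric ratio tan(Psi) e^{i Delta}
psi = np.degrees(np.arctan(np.abs(rho)))
brewster = np.degrees(theta[np.argmin(np.abs(r_p))])
# analytic check: Brewster angle = arctan(n2/n1) ≈ 53.06 degrees
```

In ellipsometric *scattering* the role of the Brewster angle is played by an angular range of the Mie scattering pattern where the ratio of the polarized amplitudes is similarly sensitive to a thin surface layer.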
Patterns of global anthropogenic CO₂ emissions: socio-economic determinants and their effects
(2004)
The main socio-economic processes driving increased anthropogenic CO₂ emissions can be described in simplified form by the determinants population, affluence (gross domestic product per capita) and technology (energy and carbon intensity). The influence of these determinants on emission changes is not the same for all countries. Time series of CO₂ emissions from fossil-fuel combustion, population, gross domestic product and primary energy consumption for 121 countries form the basis of the statistical procedure of stepwise information condensation developed here, which aggregates the entire data space into six energy-economic country types. To describe these country types, decomposition analysis is used to quantify the contributions of the population, affluence and technology components to the emission changes. The country types can be viewed, in simplified terms, as representatives of different stages and directions of development. Among other things, they provide a basis for the further development and calibration of regionalized macro-economic models of the socio-economic drivers of increasing anthropogenic CO₂ emissions.
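A decomposition of this kind can be sketched with the Kaya-style identity C = population × (GDP/capita) × (energy/GDP) × (CO₂/energy) and an additive logarithmic-mean (LMDI) decomposition, whose factor contributions sum exactly to the total emission change. The numbers below are made up for illustration; the thesis's actual decomposition method and data are not reproduced.

```python
import numpy as np

def lmdi(c0, c1, factors0, factors1):
    """Additive LMDI-I decomposition of the change c1 - c0 into one
    contribution per multiplicative factor; contributions sum to c1 - c0."""
    L = (c1 - c0) / (np.log(c1) - np.log(c0))   # logarithmic mean weight
    return [L * np.log(f1 / f0) for f0, f1 in zip(factors0, factors1)]

# two years of invented data: population (millions), GDP/capita,
# energy intensity (energy per GDP), carbon intensity (CO2 per energy)
P0, A0, E0, K0 = 80.0, 20.0, 8.0, 0.060
P1, A1, E1, K1 = 82.0, 23.0, 7.5, 0.058
c0 = P0 * A0 * E0 * K0
c1 = P1 * A1 * E1 * K1

contrib = lmdi(c0, c1, (P0, A0, E0, K0), (P1, A1, E1, K1))
# contrib[0..3]: population, affluence, energy-intensity, carbon-intensity
total_check = sum(contrib)   # equals c1 - c0 exactly, up to rounding
```

In this invented example affluence growth dominates the emission increase while falling energy and carbon intensities partly offset it, which is the typical qualitative pattern such decompositions expose per country type.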
In recent years, open source software (OSS) has become increasingly widespread and popular and has established itself in various application domains. The processes that have evolved in the context of OSS development (OSSD) exhibit partly similar properties and structures across different OSS development projects, and the entities involved, such as artefacts, roles or software tools, are also largely comparable. This motivates the idea of developing a generalizable model that abstracts the common development processes in the OSS context into a transferable model. The discipline of software engineering (SE) has also recognized that the OSSD approach differs considerably from classical (proprietary) SE models in several respects, and that these methods therefore merit scientific study in their own right. Although individual aspects of OSS development have been analysed and theories about the underlying development methods formulated in various publications, no comprehensive description of the typical processes of the OSSD methodology exists that is based on an empirical investigation of existing OSS development projects. Since such a description is a prerequisite for further scientific engagement with OSSD processes, this thesis derives a descriptive model of OSSD processes on the basis of comparative case studies and formalizes it using UML modelling elements. The model generalizes the identified processes, process entities and software infrastructures of the OSSD projects studied.
It is based on a purpose-built metamodel that identifies the entities to be analysed and describes the modelling views and elements used for the UML-based description of the development processes. In a further step, the identified model is analysed in more depth to reveal implications and optimization potential. These include, for example, the limited plannability and schedulability of processes, or the observed tendency of OSSD actors to pursue different activities with varying intensity according to subjectively perceived incentives, which leads to the neglect of some processes. Optimization goals addressing these shortcomings are then presented, and an optimization approach for improving the OSSD model is described. This approach comprises extending the identified roles, introducing new processes or extending those already identified, and modifying or extending the artefacts of the generalized OSS development model. The proposed model extensions serve above all to increase quality assurance and to compensate for neglected processes, in order to improve both the software quality and the process quality in the OSSD context. Furthermore, software functionalities are described that extend the identified existing software infrastructure and are intended to enable more holistic software-based support of OSSD processes. Finally, various application scenarios for the methods of the OSS development model, including in commercial SE, are identified, and an implementation approach based on the OSS GENESIS is presented that can be used to implement and support the OSSD model.
A polymer is a large molecule made up of many elementary chemical units joined together by covalent bonds (for example, polyethylene). Polyelectrolytes (PELs) are polymer chains containing a certain amount of ionizable monomers. With their specific properties, PELs are of great importance in molecular and cell biology as well as in technology. Compared to that of neutral polymers, the theory of PELs is less well understood; in particular, this is true for PELs in poor solvents. A poor solvent environment causes an effective attraction between monomers, so for PELs in a poor solvent there is a competition between attraction and repulsion. Strong, or quenched, PELs are completely dissociated at any accessible pH; the position of the charges along the chain is fixed by chemical synthesis. In weak, or annealed, PELs, on the other hand, the dissociation of charges depends on the solution pH. For the first time, the simulation results give direct evidence that in rather poor solvents an annealed PEL indeed undergoes a first-order phase transition when the chemical potential (solution pH) reaches a certain value. The discontinuous transition occurs between a weakly charged compact globular structure and a strongly charged stretched configuration. In not-too-poor solvents, theory predicts that the globule becomes unstable with respect to the formation of pearl necklaces, and the results show that pearl necklaces do indeed exist in annealed PELs. Furthermore, as predicted by theory, the simulation results show that annealed PELs display a sharp transition from a highly charged stretched state to a weakly charged globule at a critical salt concentration.
Recent high-throughput technologies enable the acquisition of a variety of complementary data and imply regulatory networks on the systems biology level. A common approach to the reconstruction of such networks is cluster analysis, which is based on a similarity measure. We use the information-theoretic concept of mutual information, originally defined for discrete data, as a measure of similarity and propose an extension to a commonly applied algorithm for its calculation from continuous biological data. We compare our approach to previously existing algorithms and develop a performance-optimised software package for the application of mutual information to large-scale datasets. Furthermore, we design and implement a web-based service for the analysis of integrated data measured with different technologies. Application to biological data reveals biologically relevant groupings, and the reconstructed signalling networks show agreement with physiological findings.
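The basic move, estimating mutual information for continuous variables by binning, can be sketched in a few lines. This is the plain histogram estimator on synthetic data, not the thesis's extended algorithm; bin count, sample size and the test signals are arbitrary choices.

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Histogram estimate of I(X;Y) in nats for continuous samples:
    I = sum p(x,y) log[ p(x,y) / (p(x) p(y)) ] over occupied bins."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                           # 0 log 0 := 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y_dep = np.sin(x) + 0.1 * rng.normal(size=5000)  # nonlinearly dependent on x
y_ind = rng.normal(size=5000)                     # independent of x

mi_dep = mutual_information(x, y_dep)   # large: strong dependence detected
mi_ind = mutual_information(x, y_ind)   # small positive value (finite-sample bias)
```

Note that `mi_ind` is not exactly zero: the histogram estimator has a positive bias of order (bins − 1)²/(2N), which is one of the practical issues any extension for continuous biological data has to handle.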
Combining the magnetic properties of a given material with the tremendous advantages of colloids can greatly increase the advantages of both systems. This thesis deals with the field of magnetic nanotechnology: the design and characterization of new magnetic colloids with fascinating properties compared with the bulk materials is presented. Ferrofluids are stable dispersions, in water or in organic media, of superparamagnetic nanoparticles which respond to an external magnetic field but lose their magnetization in its absence. In the first part of this thesis, a three-step synthesis for the fabrication of a novel water-based ferrofluid is presented. The encapsulation of high amounts of magnetite into polystyrene particles can be achieved efficiently by a new process including two miniemulsion steps. The ferrofluids consist of novel magnetite-polystyrene nanoparticles dispersed in water, obtained by a three-step process comprising coprecipitation of magnetite, its hydrophobization, surfactant coating to enable redispersion in water, and subsequent encapsulation into polystyrene by miniemulsion polymerization. It is desirable to exploit potential thermodynamic control in the design of nanoparticles, following the concept of "nanoreactors", in which the essential ingredients for the formation of the nanoparticles are present from the beginning. The formulation and application of polymer particles and hybrid particles composed of polymeric and magnetic material is of high interest for biomedical applications: ferrofluids can, for instance, be used in medicine for cancer therapy and magnetic resonance imaging. Superparamagnetic or paramagnetic colloids containing iron or gadolinium are also used as magnetic resonance imaging contrast agents, for example as an important tool in the diagnosis of cancer, since they enhance the relaxation of the water in neighbouring zones.
New nanostructured composites obtained by the thermal decomposition of iron pentacarbonyl in the monomer phase, and the subsequent formation of paramagnetic nanocomposites by miniemulsion polymerization, are discussed in the second part of this thesis. To obtain the confined paramagnetic nanocomposites, a two-step process was used: in the first step, the iron pentacarbonyl was thermally decomposed in the monomer phase using oleic acid as a stabilizer; in the second step, this iron-containing monomer dispersion was subjected to miniemulsion polymerization. The addition of lanthanide complexes to ester-containing monomers such as butyl acrylate and subsequent polymerization, leading to the spontaneous formation of highly organized layered nanocomposites, is presented in the final part of this thesis. By a one-step miniemulsion process, the formation of a lamellar structure within the polymer nanoparticles is achieved. Magnetization and NMR relaxation measurements have shown these new layered nanocomposites to be well suited for application as contrast agents in magnetic resonance imaging.
This thesis presents new approaches to evolutions of binary black hole systems in numerical relativity. We analyze and compare evolutions from various physically motivated initial data sets, in particular presenting the first evolutions of Thin Sandwich data generated by the Meudon group. For the first time, two different quasi-circular-orbit initial data sequences are compared through fully 3D numerical evolutions: Puncture data and Thin Sandwich data (TSD) based on a helical Killing vector ansatz. The two sets are compared in terms of the physical quantities that can be measured from the numerical data and in terms of their evolutionary behavior. The evolutions demonstrate that for the latter, "Meudon" data sets, the black holes do in fact orbit for a longer time before they merge than for Puncture data from the same separation, indicating that they are potentially better estimates of quasi-circular orbit parameters. The merger times resulting from the numerical simulations are consistent with independent post-Newtonian estimates that the final plunge phase of a black hole inspiral should take 60% of an orbit.
Chemical transformations and hydraulic processes in soil and groundwater often lead to an apparent retention of nitrate in lowland catchments. Models are needed to evaluate the interaction of these processes in space and time. The objectives of this study are (i) to develop a specific modelling approach by combining selected modelling tools simulating N-transport and turnover in the soils and groundwater of lowland catchments, and (ii) to study the interactions between catchment properties and nitrogen transport. Special attention was paid to potential N-loads to surface waters. The modelling approach combines various submodels for water flow and solute transport in soil and groundwater: the soil-water and nitrogen model mRISK-N, the groundwater flow model MODFLOW and the solute transport model RT3D. In order to investigate the interactions between N-transport and catchment characteristics, the distribution and availability of reaction partners have to be taken into account. Therefore, a special reaction module was developed, which simulates various chemical processes in groundwater, such as the degradation of organic matter by oxygen, nitrate or sulphate, and pyrite oxidation by oxygen and nitrate. The model approach is applied to different simulation studies, each focusing on specific submodels. All simulation studies are based on field data from the Schaugraben catchment, a Pleistocene catchment of approximately 25 km² close to Osterburg (Altmark) in the north of Saxony-Anhalt.
The following modelling studies have been carried out: i) evaluation of the soil-water- and nitrogen-model based on lysimeter data, ii) modelling of a field scale tracer experiment on nitrate transport and turnover in the groundwater as a first application of the reaction module, iii) evaluation of interactions between hydraulic and chemical aquifer properties in a two-dimensional groundwater transect, iv) modelling of distributed groundwater recharge and soil nitrogen leaching in the study area, to be used as input data for subsequent groundwater simulations, v) study of groundwater nitrate distribution and nitrate breakthrough to the surface water system in the Schaugraben catchment area and a subcatchment, using three-dimensional modelling of reactive groundwater transport. The various model applications prove the model to be capable of simulating interactions between transport, turnover and hydraulic and chemical catchment properties. The distribution of nitrate in the sediment and the resulting loads to surface waters are strongly affected by the amount of reactive substances and by the residence time within the aquifer. In the Schaugraben catchment simulations, it is found that a period of 70 years is needed to raise the average seepage concentrations of nitrate to a level corresponding to the given input situation, if no reactions are considered. Under reactive transport conditions, nitrate concentrations are reduced effectively. Simulation results show that groundwater exfiltration does not contribute considerably to the nitrate pollution of surface waters, as most nitrate entering soils and groundwater is lost by denitrification. Additional sources, such as direct inputs or tile drains have to be taken into account to explain surface water loads. The prognostic value of the models for the study site is limited by uncertainties of input data and estimation of model parameters. 
Nevertheless, the modelling approach is a useful aid for the identification of source and sink areas of nitrate pollution, as well as for the investigation of system response to management measures or land-use changes with scenario simulations. The modelling approach assists in the interpretation of observed data, as it allows local observations to be integrated into a spatial and temporal framework.
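As an illustration of the kind of kinetics such a reaction module couples to the transport codes, the sketch below implements a single explicit time step of first-order denitrification limited by available organic carbon. The rate constant, time step and stoichiometry are placeholder values, not the calibrated parameters of the module described above.

```python
def denitrification_step(no3, organic_c, k=0.05, dt=1.0, stoich=1.0):
    """One explicit time step of a first-order denitrification sketch:
    nitrate is consumed by oxidation of organic carbon.
    k, dt and stoich are placeholder values, not calibrated parameters."""
    # Rate is limited by whichever reactant is scarcer.
    rate = k * min(no3, organic_c / stoich)
    removed = min(no3, rate * dt)
    return no3 - removed, organic_c - stoich * removed

# Example: 10 units nitrate, 100 units organic carbon, one step.
no3_next, oc_next = denitrification_step(10.0, 100.0)
```

In a transport model this step would be applied per grid cell after each advection-dispersion step, which is the usual operator-splitting pattern for reactive transport.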
Nanostructured materials are materials having structural features on the scale of nanometers, i.e. 10^-9 m. These structural features can enhance the natural properties of the materials or induce additional properties, which are useful for present-day as well as future technologies. One way to synthesize nanostructured materials is to use templating techniques. The templating process involves the use of a certain "mould" or "scaffold" to generate the structure. The mould, called the template, can be a single molecule, an assembly of molecules or a larger object, which has its own structure. The product material can be obtained by filling the space around the template with a "precursor", transforming the precursor into the desired material and then removing the template. The precursor can be any chemical moiety that can easily be transformed into the desired material. Alternatively, the desired material is processed into very tiny bricks, or "nano building blocks" (NBB), and the product is obtained by arranging the NBB using a scaffold. We synthesized porous spheres of mixed metal oxides, namely TiO2-M2O3 (titanium dioxide-M oxide, with M = aluminium, gallium or indium) and a cerium oxide-zirconium oxide solid solution. We used porous polymeric beads, of the kind commonly employed for chromatographic purposes, as templates. For the synthesis of TiO2-M2O3 we used metal alkoxides as precursors. The pores of the beads were filled with the precursor, which was then reacted with water to transform it into an amorphous oxide network. The network was crystallized and the template removed by heat treatment at high temperatures. In a similar way we obtained porous spheres of CexZr1-xO2. For this, we synthesized nanoparticles of CexZr1-xO2 and then used them in the templating process to obtain porous CexZr1-xO2 spheres.
Additionally, using the same nanoparticles, we synthesized a nanoporous powder via a self-assembly process between a block-copolymer scaffold and the nanoparticles. The morphological and physico-chemical properties of these materials were studied systematically using various analytical techniques. The TiO2-M2O3 materials were tested for the photocatalytic degradation of 2-chlorophenol, a poisonous pollutant, while the CexZr1-xO2 spheres were tested for the methanol steam reforming reaction to generate hydrogen, a fuel for future-generation power sources such as fuel cells. All the materials showed good catalytic performance.
In solid azobenzene-containing polymers, macroscopic material transport was observed upon irradiation with blue light. To follow the dynamics of grating formation, a grating-writing setup was installed at the synchrotron radiation storage ring. In this work, this made it possible for the first time to investigate the grating formation rate in situ, simultaneously with X-ray and light scattering. Using a specially adapted X-ray scattering theory, very good agreement between theoretical calculations and the measurements was achieved. It could thereby be shown that a density grating develops simultaneously with the surface relief grating. By separating the two scattering contributions, the dynamics of structure formation could be determined. Furthermore, photoelectron spectroscopy was used for the first time to demonstrate the molecular orientation at the surface of a surface relief grating. The driving force of the motion can be attributed to a momentum transfer during isomerization, while the direction of motion is determined by the electric field vector. The theory of grating formation could be improved.
Exploiting the optical anisotropy of thin films is of great importance, above all for display technology, optical data storage and optical security elements. This doctoral thesis deals with theoretical and experimental investigations of three-dimensional anisotropy, in particular with light-induced three-dimensional anisotropy in thin organic polymer films. The insights gained and the methods developed can make valuable contributions to optimization processes, for instance in compensating the viewing-angle dependence of liquid crystal displays. The new method of immersion transmission ellipsometry (ITE) for the investigation of thin films was developed within the scope of this dissertation. In combination with conventional reflection and transmission ellipsometry, this method makes it possible to determine the absolute three-dimensional refractive indices of a biaxial film. For the first time, it was thus possible to determine the three-dimensional refractive index ellipsoid of transparent, thin (150 nm) films with high accuracy (to three decimal places). The ITE method therefore has the potential to be profitably applied to even thinner films. The light-induced generation of three-dimensional anisotropy was investigated in thin films of azobenzene- and cinnamate-containing amorphous and liquid-crystalline homo- and copolymers. For the first time, quantitative investigations were carried out on the change of light-induced three-dimensional anisotropies in thin films of azobenzene- and cinnamate-containing polymers upon annealing above the glass transition temperature. For many of the polymers investigated, the three-dimensional order after irradiation with polarized light and subsequent annealing above the glass transition temperature appeared to depend on the film thickness.
The cause probably lies in the planar initial orientation of the spin-coated thin films, detected with the newly developed ITE method. To determine the profile of tilt gradients in thicker polymer films, a special method based on waveguide mode spectroscopy was developed. The maximum inducible birefringences in liquid-crystalline polymers, determined quantum-chemically, were compared with the experimentally found order.
Between 1990 and 1994, around 1000 properties in the former GDR that had been used by the Soviet Army and the NVA for military exercises were handed over to the federal and state governments. The largest military training areas are located in Brandenburg; some are now integrated into large protected areas, while others are still actively used by the Bundeswehr. Due to military operations, the soils of these training areas are often contaminated with unexploded ordnance, ammunition residues, fuel and lubricant residues, and even chemical warfare agents. However, on almost all properties there are, in addition to these areas contaminated by ammunition and military exercises, also areas of high conservation value; especially in the open-land areas, this value can well coincide with contamination by ordnance. Characteristic of these open areas, which include dwarf-shrub heaths, dry grasslands, desert-like sand plains and other nutrient-poor treeless habitats, are their large extent, their remoteness and their particular history of use and management, i.e. the absence of agriculture, forestry and settlements. These characteristics were the basis for the development of a specially adapted flora and fauna. After military operations ended, large-scale succession (the gradual change in the composition of plant and animal communities) set in over wide areas, in places already transforming these open areas into forest and thus making them disappear. This in turn led to the loss of the animal and plant species bound to these open-land habitats. To preserve, shape and develop these open areas, an interdisciplinary group of natural scientists therefore investigated the effectiveness of various methods and concepts. In this way, measures suited to the respective site conditions could finally be initiated.
Prerequisites for initiating these measures are knowledge of the respective site conditions, i.e. the current state, as well as of the development of the areas, i.e. their dynamics. This allows future land development to be estimated so that measures can be deployed efficiently. Geographic information systems (GIS) play a decisive role in the digital documentation of biotope and land-use types, since they offer the possibility of processing large amounts of spatially and temporally referenced geometric and attribute data. A domain-specific GIS for military training areas was therefore developed and implemented. The tasks comprised the design of the database and the object model as well as domain-specific modelling, analysis and presentation functions. For the integration of thematic data into the GIS database, a metadata catalogue was additionally developed, which is available as an additional GIS tool. The base data for the GIS were obtained from remote sensing data, topographic maps and field mappings. As an instrument for estimating future development, the simulation tool AST4D was developed, in which the (raster) data of the GIS can be used as input for the simulations and the simulation results can in turn be used in the GIS. In addition, the data can be visualized spatially in AST4D. The mathematical construct underlying the tool is a so-called cellular automaton, with which land development can be simulated under various conditions. This made it possible to create different scenarios, i.e. to simulate land development with different (known) input parameters and the resulting different (unknown) end states. Before running one of the three simulation stages available in AST4D, user-specific settings adapted to the respective study area can be defined.
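The cellular-automaton idea behind a tool like AST4D can be illustrated with a toy succession model: each raster cell holds a land-cover state, and open cells adjacent to woodier cells advance one successional stage per synchronous update. The states and the transition rule below are hypothetical illustrations, not AST4D's actual classes or rules.

```python
# Hypothetical land-cover states (not the actual AST4D classes):
OPEN, SHRUB, FOREST = 0, 1, 2

def neighbours(grid, r, c):
    """Return the states of the (up to 8) Moore neighbours of cell (r, c)."""
    rows, cols = len(grid), len(grid[0])
    return [grid[i][j]
            for i in range(max(0, r - 1), min(rows, r + 2))
            for j in range(max(0, c - 1), min(cols, c + 2))
            if (i, j) != (r, c)]

def step(grid):
    """One synchronous update: a cell below FOREST advances one
    successional stage if any neighbour is at a later stage."""
    new = [row[:] for row in grid]
    for r, row in enumerate(grid):
        for c, state in enumerate(row):
            if state < FOREST and any(n > state for n in neighbours(grid, r, c)):
                new[r][c] = state + 1
    return new

def simulate(grid, years):
    for _ in range(years):
        grid = step(grid)
    return grid
```

With these toy rules, a single FOREST cell in the centre of a 3x3 grid of OPEN cells converts the whole grid to FOREST in two steps, while a grid without any woody seed cell stays open, which is the kind of scenario-dependent end state the abstract describes.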
Red cell development in adult humans results in the mean daily production of 2x10^11 erythrocytes. Mature, hemoglobinized and enucleated erythrocytes develop from multipotent hematopoietic stem/progenitor cells through more committed progenitor cell types such as BFU-E and CFU-E. Studies on the molecular mechanisms of erythropoiesis in the human system require sufficient numbers of purified erythroid progenitors at the different stages of erythropoiesis. Primary human erythroid progenitors are difficult to obtain as a homogeneous population in sufficiently high cell numbers. Various culture conditions for the in vitro culture of primary human erythroid progenitors have been described previously; mainly, these cultures resulted in the generation of rather mature stages of Epo-dependent erythroid progenitors. In this study, our efforts were directed towards the isolation and characterization of earlier red cell progenitors that are Epo-independent. To identify such progenitors, CD34+ cells were purified from cord blood and cultured under serum-free conditions in the presence of the growth factors SCF, IL-3 and hyper-IL-6, referred to as SI2 culture conditions. E-cadherin (E-cad) positive progenitors were then isolated by immunomagnetic bead selection. These Epo-independent E-cad+ progenitors were amplified under SI2 conditions to large cell numbers. The E-cad+ progenitors were characterized for surface antigen expression by flow cytometry, for their response to growth factors in proliferation assays, and for their differentiation potential into mature red cells. Additionally, the properties of E-cad+ progenitors were compared to those of two other erythroid progenitors: Epo-dependent progenitors described by Panzenböck et al. (referred to as SCF/Epo progenitors) and CD36+ progenitors described by Freyssinier et al. (Panzenböck et al., 1998; Freyssinier et al., 1999).
Finally, the gene expression profile of E-cad+ progenitors was compared to the profiles of SCF/Epo progenitors and CD36+ progenitors using DNA microarray technology. Based on our studies, we propose that Epo-independent E-cad+ progenitors are early-stage, BFU-E-like progenitors. They respond to Epo, despite having been generated in its absence, and can fully undergo erythroid differentiation. Furthermore, we demonstrate that the growth properties, growth factor response and surface marker expression of E-cad+ progenitors are similar to those of the SCF/Epo progenitors and the CD36+ progenitors. By comparing gene expression profiles, we were also able to demonstrate that the Epo-dependent and Epo-independent red cell progenitors are very similar. Analyzing the molecular differences between E-cad+ and SCF/Epo progenitors revealed several candidate genes, such as galectin-3, cyclin D1, AMHR, PDF and IGFBP4, which are potential regulators involved in red cell development. We also demonstrate that the CD36+ progenitors, isolated by immunomagnetic bead selection, are a heterogeneous progenitor population containing an E-cad+ and an E-cad- subpopulation. Based on their gene expression profile, CD36+ progenitors seem to exhibit both erythroid and megakaryocytic features. These studies led to a refined model of erythroid cell development that should pave the way for further studies on the molecular mechanisms of erythropoiesis.
The Dead Sea Transform (DST) is a prominent shear zone in the Middle East. It separates the Arabian plate from the Sinai microplate and stretches from the Red Sea rift in the south via the Dead Sea to the Taurus-Zagros collision zone in the north. Formed in the Miocene about 17 Ma ago and related to the breakup of the Afro-Arabian continent, the DST accommodates the left-lateral movement between the two plates. The study area is located in the Arava Valley between the Dead Sea and the Red Sea, centered on the Arava Fault (AF), which constitutes the major branch of the transform in this region. A set of seismic experiments comprising controlled sources, linear profiles across the fault and specifically designed receiver arrays reveals the subsurface structure in the vicinity of the AF, and of the fault zone itself, down to about 3-4 km depth. A tomographically determined seismic P velocity model shows a pronounced velocity contrast near the fault, with lower velocities on the western side than east of it. Additionally, S waves from local earthquakes provide an average P-to-S velocity ratio in the study area, and there are indications of variations across the fault. High-resolution tomographic velocity sections and seismic reflection profiles confirm the surface trace of the AF, and the observed features correlate well with fault-related geological observations. Coincident electrical resistivity sections from magnetotelluric measurements across the AF show a conductive layer west of the fault, resistive regions east of it, and a marked contrast near the trace of the AF, which seems to act as an impermeable barrier for fluid flow. The correlation of seismic velocities and electrical resistivities leads to a characterisation of subsurface lithologies based on their physical properties. Whereas the western side of the fault is characterised by a layered structure, the eastern side is rather uniform.
The vertical boundary between the western and eastern units appears to be offset to the east of the AF surface trace. Modelling of fault-zone reflected waves indicates that the boundary between low and high velocities is probably rather sharp but exhibits a rough surface on a length scale of a few hundred metres. This gives rise to scattering of seismic waves at this boundary. The imaging (migration) method used is based on array beamforming and coherency analysis of P-to-P scattered seismic phases. Careful assessment of the resolution ensures reliable imaging results. The low velocities in the west correspond to the young sedimentary fill of the Arava Valley, and the high velocities in the east reflect mainly Precambrian igneous rocks. A 7 km long subvertical scattering zone, offset about 1 km east of the AF surface trace, can be imaged from 1 km to about 4 km depth. This reflector marks the boundary between two lithological blocks juxtaposed most probably by displacement along the DST. The interpretation as a lithological boundary is supported by the combined seismic and magnetotelluric analysis. The boundary may be a strand of the AF that is offset from the current, recently active surface trace. The total slip of the DST may be distributed, spatially and in time, over these two strands and possibly other faults in the area.
The correlations between the chemical structures of 2,5-diphenyl-1,3,4-oxadiazole compounds and the structures of their corresponding vapour-deposited films on Si/SiO2 were systematically investigated with AFM, XSR and IR for the first time. The results show that the film structure depends strongly on the substrate temperature (Ts). For the compounds with an ether bridge group, the film periodicity depends linearly on the length of the aliphatic chain. Films based on these oxadiazoles have an ordered structure over the investigated substrate temperature range, while the amide-bridged compounds form ordered films only at high Ts, due to the formation of intermolecular H-bonds. The tilt angle of most molecules is determined by the pi-pi complexes between the molecules. The intermolecular interaction between head groups leads to a structural transformation during thermal treatment after deposition. All ether-bridged oxadiazoles form films with a bilayer structure, while amide-bridged oxadiazoles form films with a bilayer structure only when the molecule has a head group.
This thesis deals with the encoding and transmission of information through a quantum channel. A quantum channel is a quantum mechanical system whose state is manipulated by a sender and read out by a receiver; the individual state of the channel represents the message. The two topics of the thesis comprise 1) the possibility of compressing a message stored in a quantum channel without loss of information, and 2) the possibility of communicating a message directly from one party to another in a secure manner, that is, such that a third party is not able to eavesdrop on the message without being detected. The main results of the thesis are the following. A general framework for variable-length quantum codes is worked out. These codes are necessary to make lossless compression possible. Due to the quantum nature of the channel, the encoded messages are in general in a superposition of different lengths. It is found to be impossible to compress a quantum message without loss of information if the message is not a priori known to the sender. If the message is known, it is shown that lossless quantum data compression is possible, and a lower bound on the compression rate is derived. Furthermore, an explicit compression scheme is constructed that works for arbitrarily given ensembles of source messages. A quantum cryptographic protocol, the "ping-pong protocol", is presented that realizes the secure direct communication of classical messages through a quantum channel. The security of the protocol against arbitrary eavesdropping attacks is proven for the case of an ideal quantum channel. In contrast to other quantum cryptographic protocols, the ping-pong protocol is deterministic and can thus be used to transmit a random key as well as a composed message. The protocol is perfectly secure for the transmission of a key, and it is quasi-secure for the direct transmission of a message.
The latter means that the probability of successful eavesdropping decreases exponentially with the length of the message.
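The exponential decay admits a simple illustration: if each intercepted bit escapes detection only with some probability p < 1, then reading all n bits undetected succeeds with probability p^n under an independence assumption. The toy calculation below is purely illustrative; the thesis derives the exact protocol-specific bound.

```python
def undetected_eavesdrop_prob(p_per_bit: float, n_bits: int) -> float:
    """Toy model: probability that an eavesdropper reads all n bits
    without ever being detected, assuming each bit is intercepted
    independently and escapes detection with probability p_per_bit.
    (Illustrative assumption, not the protocol's exact bound.)"""
    return p_per_bit ** n_bits

# Even a high per-bit escape probability decays fast with message length:
p_short = undetected_eavesdrop_prob(0.9, 10)    # ~0.35
p_long = undetected_eavesdrop_prob(0.9, 100)    # ~2.7e-5
```

This is why the protocol can be quasi-secure for direct message transmission: any useful message is long enough that undetected full interception becomes negligibly likely.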
We study the effect on the elastic properties of lipid membranes induced by the anchoring of long hydrophilic polymers. Theoretically, two limiting regimes for the membrane spontaneous curvature are expected: i) at low surface polymer concentration (mushroom regime), the spontaneous curvature should scale linearly with the surface density of anchored polymers; ii) at high coverage (brush regime), the dependence should be quadratic. We attempt to test the predictions for the brush regime by monitoring the morphological changes induced in giant vesicles. As long polymers we use fluorescently labeled λ-phage DNA molecules, which are attached to biotinylated lipid vesicles via a biotin-avidin-biotin linkage. By varying the amount of biotinylated lipid in the membrane, we control the surface concentration of the anchors. The amount of DNA anchored to the membrane is quantified by fluorescence measurements. Changes in the elastic properties of the membrane as DNA grafts to it are monitored via analysis of the vesicle fluctuations. The spontaneous curvature of the membrane increases as a function of the surface coverage. At higher grafting concentrations the vesicles bud. The size of the buds can also be used to assess the membrane curvature. The effect on the bending stiffness is a subject of further investigation.
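The two predicted scaling regimes can be summarized as a small piecewise function of the anchor surface density; the prefactors and the crossover (overlap) density below are placeholders, not fitted values from these experiments.

```python
def spontaneous_curvature(sigma, sigma_overlap, a=1.0, b=1.0):
    """Piecewise scaling sketch of the membrane spontaneous curvature
    vs. anchored-polymer surface density sigma, following the two
    limiting regimes quoted in the abstract. The prefactors a, b and
    the crossover density sigma_overlap are placeholder values."""
    if sigma < sigma_overlap:
        return a * sigma        # mushroom regime: linear in sigma
    return b * sigma ** 2       # brush regime: quadratic in sigma
```

The crossover density sigma_overlap corresponds to the coverage at which neighbouring anchored coils begin to overlap, i.e. where the mushroom picture stops applying and the brush picture takes over.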