The aim of this thesis is the quantum dynamical study of two examples of scanning tunneling microscope (STM)-controllable, Si(100)(2x1) surface-mounted switches of atomic and molecular scale. The first example considers the switching of single H atoms between two dangling-bond chemisorption sites on a Si dimer of the Si(100) surface (Grey et al., 1996). The second system examines the conformational switching of single 1,5-cyclooctadiene molecules chemisorbed on the Si(100) surface (Nacci et al., 2008). The temporal dynamics are obtained by propagating the density matrix in time via a corresponding set of equations of motion (EOM). The latter are based on open-system density matrix theory in Lindblad form. First-order perturbation theory is used to evaluate the transition rates between vibrational levels of the system part. In order to account for interactions with the surface phonons, two different dissipative models are used, namely the bilinear harmonic bath model and the Ohmic bath model. Vibrational transitions in the system induced by inelastic electron tunneling (IET) are due to the dipole and the resonance mechanism. A single-surface approach is used to study the influence of dipole scattering and resonance scattering in the below-threshold regime. Further, a second electronic surface was included to study the resonance-induced switching in the above-threshold regime. Static properties of the adsorbate, e.g., potentials and dipole functions, are obtained from quantum chemistry and used within the established quantum dynamical models.
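For reference, the Lindblad form of the open-system equations of motion mentioned in the abstract above is the standard master equation (a generic textbook form; the specific system-bath partitioning and Lindblad operators used in the thesis are not reproduced here):

```latex
\dot{\rho}(t) = -\frac{i}{\hbar}\,\bigl[\hat{H}_{\mathrm{S}},\,\rho(t)\bigr]
  + \sum_{k} \Bigl( \hat{L}_{k}\,\rho(t)\,\hat{L}_{k}^{\dagger}
  - \tfrac{1}{2}\bigl\{ \hat{L}_{k}^{\dagger}\hat{L}_{k},\,\rho(t) \bigr\} \Bigr)
```

Here the Lindblad operators encode the dissipative channels, whose rates are evaluated from first-order perturbation theory as described above.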
In a quasi-experimental longitudinal study with 380 teacher-training students, the intervention program „Gestärkt für den Lehrerberuf“ (“Strengthened for the Teaching Profession”), which combines elements of a self-assessment of profession-relevant competencies with concrete counseling options and a goal-effectiveness training (Dargel, 2006) for the development of individual profession-related competencies, was examined with respect to its effectiveness (reflective competence, teacher self-efficacy, profession-related competencies, experience of strain, resilience) and its process of action (goal commitment, goal attainability, goal effectiveness). In a pre-post-follow-up comparison-group design, an intervention group whose treatment was based on the strengths approach (1), a deficit-oriented intervention group (2), and a combined intervention group in which the strengths approach was supplemented by the deficit approach (3) were compared with an untreated control group and an alternatively treated control group that was trained exclusively in social-communicative competencies. At post-test and follow-up, both the individual professional competencies and the reflective competence of participants in the intervention groups were successfully fostered relative to the untreated control group. Participants in the combined intervention benefited more than participants in the other two intervention groups in the areas of teacher self-efficacy, resilience, and goal effectiveness. Compared with the alternatively treated control group, they also showed a stronger gain in the development of their profession-relevant competencies and in their resilience. The study provides initial evidence that an approach integrating a focus on strengths with a deficit orientation is particularly effective.
In the Western Hemisphere, the piano is one of the most important instruments. Although its evolution has spanned more than three centuries and the most important physical aspects have already been investigated, some parts of the characterization of the piano remain poorly understood. For the pivotal piano soundboard in particular, the effect that ribs mounted on the board exert on sound radiation and propagation is mostly neglected in the literature. The present investigation deals precisely with the sound wave propagation effects that emerge in the presence of an array of equidistantly mounted ribs on a soundboard. Solid-state theory predicts particular eigenmodes and eigenfrequencies for such arrangements, comparable to those of single units in a crystal. Following this 'linear chain model' (LCM), differences in the frequency spectrum are observable as a distinct band structure. The amplitudes of the modes are also changed, owing to differences in the damping factor. These scattering effects were investigated not only for a well-understood conceptional rectangular soundboard (multichord), but also for a genuine piano resonance board manufactured by the piano maker 'C. Bechstein Pianofortefabrik'. To make it possible to distinguish between the characteristic spectra with and without mounted ribs, the typical assembly plan for the Bechstein instrument was specially customized. Spectral similarities and differences between the two boards are found in terms of damping and tone. Furthermore, specially prepared minimally invasive piezoelectric polymer sensors made from polyvinylidene fluoride (PVDF) were used to record solid-state vibrations of the investigated system. The essential calibration and characterization of these polymer sensors was performed by determining the electromechanical conversion, which is represented by the piezoelectric coefficient. To this end, the robust 'sinusoidally varying external force' method was applied, in which a dynamic force perpendicular to the sensor's surface generates mobile charge carriers. Crucial parameters were monitored, with the frequency response function being the most important one for acousticians. Along with conventional condenser microphones, the sound was measured both as solid-state vibration and as airborne waves. On this basis, statements can be made about the emergence, propagation, and overall radiation of the generated modes of the vibrating system. Ultimately, these results acoustically characterize the entire system.
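To illustrate the band structure predicted by the linear chain model, the textbook dispersion relation of a monatomic chain of identical units (mass m, coupling stiffness K, spacing a) may serve as a guide; this is the generic solid-state result, not a formula quoted from the thesis:

```latex
\omega(k) = 2\sqrt{\frac{K}{m}}\,\left|\sin\!\left(\frac{k a}{2}\right)\right|
```

The allowed frequencies form bands, and chains with more than one distinct unit per cell additionally open band gaps; qualitatively, this is the kind of spectral structure sought in the ribbed-soundboard measurements.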
Cellulose is the most abundant biopolymer on earth and the main load-bearing structure in plant cell walls. Cellulose microfibrils are laid down in a tight parallel array, surrounding plant cells like a corset. The orientation of the microfibrils determines the direction of growth by directing turgor pressure to points of expansion (Somerville et al., 2004). Hence, cellulose-deficient mutants usually show cell and organ swelling due to disturbed anisotropic cell expansion (reviewed in Endler and Persson, 2011). How do cellulose microfibrils gain their parallel orientation? First experiments in the 1960s suggested that cortical microtubules aid the cellulose synthases on their way around the cell (Green, 1962; Ledbetter and Porter, 1963). This was confirmed in 2006 through live-cell imaging (Paredez et al., 2006). However, how this guidance is facilitated remained unknown. Through a combinatorial approach, including forward and reverse genetics together with advanced co-expression analysis, we identified pom2 as a cellulose-deficient mutant. Map-based cloning revealed that the gene locus of POM2 corresponded to CELLULOSE SYNTHASE INTERACTING 1 (CSI1). Intriguingly, we had previously found the CSI1 protein to interact with the putative cytosolic part of the primary cellulose synthases in a yeast two-hybrid screen (Gu et al., 2010). Exhaustive cell biological analysis of the POM2/CSI1 protein allowed us to determine its cellular function. Using spinning disc confocal microscopy, we could show that in the absence of POM2/CSI1, cellulose synthase complexes lose their microtubule-dependent trajectories in the plasma membrane. The loss of POM2/CSI1, however, does not influence the microtubule-dependent delivery of cellulose synthases (Bringmann et al., 2012). Consequently, POM2/CSI1 acts as a bridging protein between active cellulose synthases and cortical microtubules. This thesis summarizes three publications of the author regarding the identification of proteins that connect cellulose synthases to the cytoskeleton. This involves the development of bioinformatics tools allowing candidate gene prediction through co-expression studies (Mutwil et al., 2009), the identification of candidate genes through interaction studies (Gu et al., 2010), and the determination of the cellular function of the candidate gene (Bringmann et al., 2012).
MHC genes encode proteins that are responsible for the recognition of foreign antigens and the triggering of a subsequent, adequate immune response of the organism. They thus hold a key position in the immune system of vertebrates. It is believed that the extraordinary genetic diversity of MHC genes is shaped by adaptive selection processes in response to the recurring adaptations of parasites and pathogens. A large number of MHC studies have been performed in a wide range of wildlife species, aiming to understand the role of immune gene diversity in parasite resistance under natural selection conditions. Methodologically, most of this work has, with very few exceptions, focused only on the structural, i.e., sequence, diversity of the regions responsible for antigen binding and presentation. Most of these studies found evidence that MHC gene variation did indeed underlie adaptive processes and that an individual's allelic diversity explains parasite and pathogen resistance to a large extent. Nevertheless, our understanding of the effective mechanisms is incomplete. A neglected but potentially highly relevant component concerns the transcriptional differences of MHC alleles. Indeed, differences in the expression levels of MHC alleles and their potential functional importance have remained unstudied. The idea that transcriptional differences might also play an important role relies on the fact that lower MHC gene expression is tantamount to reduced induction of CD4+ T helper cells and thus to a reduced immune response. Hence, I studied the expression of MHC genes and of immunoregulatory cytokines as additional factors to reveal the functional importance of MHC diversity in two free-ranging rodent species (Delomys sublineatus, Apodemus flavicollis) in association with their gastrointestinal helminths under natural selection conditions. I established the method of relative quantification of mRNA on liver and spleen samples of both species in our laboratory. As no information on the nucleotide sequences of potential reference genes was available for either species, PCR primer systems established in laboratory mice had to be tested and adapted for both non-model organisms. In due course, sets of stable reference genes were found for both species, thus establishing the preconditions for reliable measurements of mRNA levels. For D. sublineatus it could be demonstrated that helminth infection elicits aspects of a typical Th2 immune response. Whereas mRNA levels of the cytokine interleukin Il4 increased with the intensity of infection by strongyle nematodes, neither MHC nor cytokine expression played a significant role in D. sublineatus. For A. flavicollis I found a negative association between the parasitic nematode Heligmosomoides polygyrus and hepatic MHC mRNA levels. As lower MHC expression entails a lower immune response, this could be evidence for an immune evasion strategy of the nematode, as has been suggested for many micro-parasites. This implies that H. polygyrus is capable of actively interfering with MHC transcription. Indeed, this parasite species has long been suspected of being immunosuppressive, e.g., by induction of regulatory T helper cells that respond with increased production of interleukin Il10 and transforming growth factor Tgfb. Both cytokines in turn cause reduced MHC expression. By disabling recognition by the MHC molecule, H. polygyrus might be able to prevent an activation of the immune system. Indeed, I found a strong tendency for animals carrying the allele Apfl-DRB*23 to have an increased infection intensity with H. polygyrus. Furthermore, I found positive and negative associations between specific MHC alleles and other helminth species, as well as typical signs of positive selection acting on the nucleotide sequences of the MHC. The latter was evident from an elevated rate of non-synonymous to synonymous substitutions in the MHC sequences of exon 2, which encodes the functionally important antigen binding sites, whereas the first and third exons of the MHC DRB gene were highly conserved. In conclusion, the studies in this thesis demonstrate that valid procedures to quantify the expression of immune-relevant genes are feasible also in non-model wildlife organisms. In addition to structural MHC diversity, MHC gene expression should also be considered to obtain a more complete picture of host-pathogen coevolutionary selection processes. This is especially true if parasites are able to interfere with systemic MHC expression, in which case the advantageous or disadvantageous effects of allelic binding motifs are attenuated. The studies could not define the role of MHC gene expression in antagonistic coevolution as such, but the results suggest that it depends strongly on the specific parasite species involved.
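For reference, relative mRNA quantification of the kind described in the abstract above is commonly computed with the widely used ΔΔCt approach, normalizing a target gene against stable reference genes (a generic formulation; the exact normalization against the reference gene sets established in the thesis may differ):

```latex
\Delta Ct = Ct_{\text{target}} - Ct_{\text{reference}}, \qquad
\Delta\Delta Ct = \Delta Ct_{\text{sample}} - \Delta Ct_{\text{calibrator}}, \qquad
\text{relative expression} = 2^{-\Delta\Delta Ct}
```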
Starting from the primary sensory areas, processing pathways that serve object recognition run anteriorly through the temporal lobes. The frontmost tip of the temporal lobes in particular, the anterior temporal cortex, is associated with functions of object identification. However, several hypotheses exist as to what kind of objects are processed in this region: the processing of language, of human voices, of semantic information, or of individual concepts. To differentiate between these theories, four event-related fMRI measurements were conducted with young healthy adults. In three experiments the participants heard the voices of famous and unknown persons, and in one of these experiments additionally sounds of animals and musical instruments. In the fourth experiment, drawings of cartoon characters were shown, as well as drawings of animals and of types of fruit and vegetables. The neural activity during the processing of these stimuli, compared to periods without stimulation, was examined using regions of interest that covered nearly the entire temporal lobes and subdivided each of them into twelve areas. In the anterior temporal lobes, clear activation differences depending on the semantic category were found for both auditory and visual stimuli. Individual concepts (human voices and cartoon characters) evoked significantly stronger activation than categorical concepts (animals, musical instruments, fruit and vegetables). Moreover, the signal elicited by the voices of famous persons was considerably stronger than the signal for unknown voices. The data are thus most compatible with the assumption that the anterior temporal lobes process familiar individual concepts. Since the described signal differences between the conditions increased from the transverse temporal gyri anteriorly toward the temporal pole, the results also support the theory of a ventral processing stream that traverses the temporal lobes anteriorly and contributes to object recognition. In accordance with the assumptions of A. R. Damasio's convergence zone theory, the specific function of this rostrally directed processing stream appears to consist in the successive combination of ever more sensorimotor features of objects. Since familiar individual concepts comprise a particularly high number of features, their processing extends further anteriorly than that of unknown or categorical concepts.
Particles in Saturn's main rings range in size from dust grains to kilometer-sized objects. Their size distribution is thought to be the result of competing accretion and fragmentation processes. While growth is naturally limited in tidal environments, frequent collisions among these objects may contribute to both accretion and fragmentation. As ring particles are primarily made of water ice, attractive surface forces such as adhesion could significantly influence these processes, ultimately determining the resulting size distribution. Here, we derive analytic expressions for the specific self-energy Q and the related specific break-up energy Q⋆ of aggregates. These expressions can be used for any aggregate type composed of monomeric constituents. We compare these expressions to numerical experiments in which we create aggregates of various types, including regular packings such as the face-centered cubic (fcc) lattice, Ballistic Particle Cluster Aggregates (BPCA), and modified BPCAs with, e.g., different constituent size distributions. We show that, by accounting for attractive surface forces such as adhesion, a simple approach is able to (a) account for the size dependence of the specific break-up energy reported in the literature, namely the division into “strength” and “gravity” regimes, and (b) estimate the maximum aggregate size in a collisional ensemble to be on the order of a few meters, consistent with the maximum aggregate size of about 10 m observed in Saturn's rings.
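A minimal numerical sketch of the strength/gravity division, assuming the common two-power-law parametrization of the specific break-up energy from the collisional-fragmentation literature; the coefficients and exponents below are illustrative placeholders, not values derived in the thesis:

```python
import numpy as np

# Illustrative coefficients (placeholders, chosen so the minimum of Q*
# falls at a few meters, in line with the abstract's estimate).
Q_S, A = 1e2, 0.4   # strength regime: Q* ~ Q_S * r**(-A), adhesion-dominated
Q_G, B = 5.0, 1.3   # gravity regime:  Q* ~ Q_G * r**(+B), self-gravity-dominated

def q_star(r):
    """Specific break-up energy Q* [J/kg] for an aggregate of radius r [m]."""
    return Q_S * r**(-A) + Q_G * r**B

# The minimum of Q*(r) marks the strength-to-gravity transition,
# where aggregates are weakest: solve dQ*/dr = 0 analytically.
r_weakest = (A * Q_S / (B * Q_G)) ** (1.0 / (A + B))

for r in np.logspace(-3, 3, 7):  # 1 mm .. 1 km
    print(f"r = {r:9.3e} m   Q* = {q_star(r):9.3e} J/kg")
print(f"weakest aggregates near r = {r_weakest:.1f} m")
```

With such a parametrization, aggregates near the Q⋆ minimum are the easiest to disrupt, which is the kind of argument used to bound the maximum aggregate size in a collisional ensemble.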
Growing populations, continued economic development, and limited natural resources are critical factors affecting sustainable development. These factors are particularly pertinent in developing countries, in which large parts of the population live at a subsistence level and options for sustainable development are limited. Therefore, addressing sustainable land use strategies in such contexts requires that decision makers have access to evidence-based impact assessment tools that can help in policy design and implementation. Ex-ante impact assessment is an emerging field positioned at the science-policy interface and is used to assess the potential impacts of policy while also exploring trade-offs between economic, social, and environmental sustainability targets. The objective of this study was to operationalise the impact assessment of land use scenarios in the context of developing countries that are characterised by limited data availability and quality. The Framework for Participatory Impact Assessment (FoPIA) was selected for this study because it allows for the integration of various sustainability dimensions, the handling of complexity, and the incorporation of local stakeholder perceptions. FoPIA, which was originally developed for the European context, was adapted to the conditions of developing countries, and its implementation was demonstrated in five selected case studies. In each case study, different land use options were assessed, including (i) alternative spatial planning policies aimed at the controlled expansion of rural-urban development in the Yogyakarta region (Indonesia), (ii) the expansion of soil and water conservation measures in the Oum Zessar watershed (Tunisia), (iii) the use of land conversion and the afforestation of agricultural areas to reduce soil erosion in Guyuan district (China), (iv) agricultural intensification and the potential for organic agriculture in Bijapur district (India), and (v) land division and privatisation in Narok district (Kenya). The FoPIA method was effectively adapted by dividing the assessment into three conceptual steps: (i) scenario development; (ii) specification of the sustainability context; and (iii) scenario impact assessment. A new methodological approach was developed for communicating alternative land use scenarios to local stakeholders and experts and for identifying recommendations for future land use strategies. Stakeholder and expert knowledge served as the main source of information for the impact assessment and was complemented by available quantitative data. Based on the findings from the five case studies, FoPIA was found to be suitable for implementing the impact assessment at the case study level while ensuring a high level of transparency. FoPIA supports the identification of causal relationships underlying regional land use problems, facilitates communication among stakeholders, and illustrates the effects of alternative decision options with respect to all three dimensions of sustainable development. Overall, FoPIA is an appropriate tool for performing preliminary assessments, but it cannot replace a comprehensive quantitative impact assessment and should, whenever possible, be accompanied by evidence from monitoring data or analytical tools. When using FoPIA for a policy-oriented impact assessment, it is recommended that the process follow an integrated, complementary approach that combines quantitative models, scenario techniques, and participatory methods.
Metals are often used in environments that are conducive to corrosion, which leads to a reduction in their mechanical properties and durability. Coatings are applied to corrosion-prone metals such as aluminum alloys to inhibit the destructive surface process of corrosion in a passive or active way. Standard anticorrosive coatings function as a physical barrier between the material and the corrosive environment and provide passive protection only while intact. In contrast, active protection prevents or slows down corrosion even when the main barrier is damaged. The most effective industrially used active corrosion inhibition for aluminum alloys is provided by chromate conversion coatings. However, their toxicity and worldwide restriction create an urgent need for environmentally friendly corrosion-preventing systems. A promising approach to replacing the toxic chromate coatings is to embed particles containing a nontoxic inhibitor in a passive coating matrix. This work presents the development and optimization of effective anticorrosive coatings for the industrially important aluminum alloy AA2024-T3 using this approach. The protective coatings were prepared by dispersing mesoporous silica containers, loaded with the nontoxic corrosion inhibitor 2-mercaptobenzothiazole, in a passive sol-gel (SiOx/ZrOx) or organic water-based layer. Two types of porous silica containers with different sizes (d ≈ 80 and 700 nm, respectively) were investigated. The studied robust containers exhibit a high surface area (≈ 1000 m² g⁻¹), a narrow pore size distribution (d_pore ≈ 3 nm), and a large pore volume (≈ 1 mL g⁻¹), as determined by N2 sorption measurements. These properties favored the subsequent adsorption and storage of a relatively large amount of inhibitor, as well as its release in response to pH changes induced by the corrosion process. The concentration, position, and size of the embedded containers were varied to ascertain the optimum conditions for overall anticorrosion performance. Attaining high anticorrosion efficiency was found to require a compromise between delivering an optimal amount of corrosion inhibitor and preserving the coating's barrier properties. This study broadens the knowledge of the main factors influencing coating anticorrosion efficiency and assists the development of optimized active anticorrosive coatings doped with inhibitor-loaded containers.
Estimation of the self-similarity exponent has attracted growing interest in recent decades and has become a research subject in various fields and disciplines. Real-world data exhibiting self-similar behavior and/or parametrized by a self-similarity exponent (in particular the Hurst exponent) have been collected in fields ranging from finance and the human sciences to hydrologic and traffic networks. Such a rich class of possible applications obliges researchers to investigate qualitatively new methods for estimating the self-similarity exponent as well as for identifying long-range dependence (or long memory). In this thesis I present a Bayesian estimation of the Hurst exponent. In contrast to previous methods, the Bayesian approach allows the point estimator and confidence intervals to be calculated at the same time, bringing significant advantages in data analysis, as discussed in this thesis. Moreover, it is also applicable to short and unevenly sampled data, thus broadening the range of systems where estimation of the Hurst exponent is possible. Since Gaussian self-similar processes form one of the classes of greatest interest in modeling, this thesis considers realizations of fractional Brownian motion and fractional Gaussian noise. Additionally, applications to real-world data, such as water level records of the Nile River and fixational eye movements, are also discussed.
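A minimal sketch of a Bayesian estimate of the Hurst exponent for fractional Gaussian noise, assuming a flat prior over H, a unit noise scale, and the standard fGn autocovariance; this is a generic illustration of the idea, not the exact estimator developed in the thesis:

```python
import numpy as np

def fgn_covariance(n, H):
    """Autocovariance matrix of fractional Gaussian noise with Hurst exponent H."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1)**(2*H) - 2.0*np.abs(k)**(2*H) + np.abs(k - 1)**(2*H))
    i, j = np.meshgrid(k, k)
    return gamma[np.abs(i - j)]

def log_likelihood(x, H):
    """Gaussian log-likelihood of the record x under an fGn model with exponent H."""
    n = len(x)
    C = fgn_covariance(n, H)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + x @ np.linalg.solve(C, x) + n * np.log(2.0 * np.pi))

def posterior(x, grid=np.linspace(0.05, 0.95, 91)):
    """Flat-prior posterior density of H, normalized on the grid."""
    logp = np.array([log_likelihood(x, H) for H in grid])
    p = np.exp(logp - logp.max())
    p /= p.sum() * (grid[1] - grid[0])
    return grid, p

# Demo on white noise (true H = 0.5): the same posterior yields a point
# estimate (posterior mean) and credible intervals.
x = np.random.default_rng(0).standard_normal(256)
grid, p = posterior(x)
print("posterior mean of H:", (grid * p).sum() * (grid[1] - grid[0]))
```

Because the whole posterior is available, a point estimate and a credible interval come out of the same computation, which is the advantage highlighted in the abstract; the approach also applies to short records, since no asymptotic scaling fit is required.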
Characterization of the novel centrosomal proteins CP148 and CP55 in Dictyostelium discoideum
(2012)
The Dictyostelium centrosome, which lies in the cytosol, consists of a layered core region surrounded by a microtubule-nucleating corona. It is also tightly attached to the nucleus via a specific linkage and is connected through the nuclear membrane to the clustered centromeres. At the G2/M transition, the corona dissociates from the centrosome and the core duplicates, giving rise to two spindle poles. CP55 and CP148 were identified in a proteomic analysis of the centrosome. CP148 is a novel coiled-coil protein of the centrosomal corona. It shows a cell-cycle-dependent presence at and absence from the centrosome, correlating with the dissociation of the corona in prophase and its re-formation in telophase. During telophase, many small GFP-CP148 foci appeared in the cytoplasm of GFP-CP148-expressing cells; these partly fused with one another and migrated to the centrosome. In cells strongly overexpressing GFP-CP148, this resulted in a hypertrophic corona. Knockdown of CP148 by RNAi led to a loss of the corona and a disordered interphase microtubule cytoskeleton, while the formation of the mitotic spindle and of the astral microtubules remained unaffected. This means that the microtubule nucleation complexes are associated with the core via different routes during interphase and mitosis. Furthermore, the knockdown caused a dispersion of the centromeres as well as an altered localization of Sun1 in the nuclear envelope. CP148 thus also plays a role in the centrosome-centromere connection. In summary, CP148 is an essential protein for the formation and organization of the corona, which in turn is required for the centrosome/centromere connection. CP55 was identified as a protein of the core region and remains at the centrosome throughout the cell cycle, where it performs structural tasks, since the majority of GFP-CP55 molecules showed no mobility during interphase. Overexpression of GFP-CP55 led to the formation of supernumerary centrosomes with the usual complement of corona and core marker proteins. CP55 knockout cells were characterized by increased ploidy, a less structured and slightly enlarged corona, and additional cytosolic microtubule-organizing centers. The latter arose in telophase and contained only corona proteins but no core proteins. In CP55 knockout cells, recruitment of the corona organizer CP148 to the spindle pole occurred already in early metaphase instead of, as usual, only in telophase. The knockout cells also showed growth defects, presumably caused by difficulties in centrosome duplication during prophase due to the lack of CP55. Moreover, the knockout cells were unable to utilize phagocytosed material, although the process of phagocytosis itself was not impaired. This defect can be attributed to the dispersed Golgi apparatus observed in the CP55 knockout.
In industrialized economies such as the European countries, unemployment rates are very responsive to the business cycle, and significant shares of the unemployed remain so for more than one year. To fight cyclical and long-term unemployment, countries spend significant shares of their budgets on Active Labor Market Policies (ALMP). To improve the allocation and design of ALMP, it is essential for policy makers to have reliable evidence on the effectiveness of such programs. Although the number of studies has increased during the last decades, policy makers still lack evidence on innovative programs and on specific subgroups of the labor market. Using Germany as a case study, the dissertation aims to contribute by providing new evidence on start-up subsidies, marginal employment, and programs for unemployed youths. The idea behind start-up subsidies is to encourage unemployed individuals to exit unemployment by starting their own business. Compared to traditional ALMP programs, such programs have the advantage that the participant not only escapes unemployment but may also generate additional jobs for other individuals. Considering two distinct start-up subsidy programs, the dissertation adds three substantial aspects to the literature. First, the programs are effective in improving the employment and income situation of participants compared to non-participants in the long run. Second, the analysis of effect heterogeneity reveals that the programs are particularly effective for disadvantaged groups in the labor market, such as low-educated or low-qualified individuals, and in regions with unfavorable economic conditions. Third, the analysis considers the effectiveness of start-up programs for women. Due to stronger preferences for flexible working hours and a limited supply of part-time jobs, unemployed women often face more difficulties integrating into dependent employment. It can be shown that start-up subsidy programs are very promising, as unemployed women become self-employed, which gives them more flexibility to reconcile work and family. Overall, the results suggest that the promotion of self-employment among the unemployed is a sensible strategy to fight unemployment by removing labor market barriers for disadvantaged groups and integrating them sustainably into the labor market. The next chapter of the dissertation considers the impact of marginal employment on labor market outcomes of the unemployed. Unemployed individuals in Germany are allowed to earn additional income during unemployment without suffering a reduction in their unemployment benefits. This additional income is usually earned by taking up so-called marginal employment, that is, employment below a certain income level subject to reduced payroll taxes (also known as a “mini-job”). The dissertation provides an empirical evaluation of the impact of marginal employment on unemployment duration and subsequent job quality. The results suggest that being marginally employed during unemployment has no significant effect on unemployment duration but extends subsequent employment duration. Moreover, it can be shown that taking up marginal employment is particularly effective for the long-term unemployed, leading to higher job-finding probabilities and stronger job stability. Mini-jobs can thus be an effective instrument to help long-term unemployed individuals find (stable) jobs, which is particularly interesting given the persistently high shares of long-term unemployed in European countries. Finally, the dissertation provides an empirical evaluation of the effectiveness of ALMP programs aimed at improving the labor market prospects of unemployed youths. Youths are generally considered a population at risk, as they have weaker search skills and little work experience compared to adults. This results in above-average turnover rates between jobs and unemployment for youths, which are particularly sensitive to economic fluctuations. Therefore, countries spend significant resources on ALMP programs to fight youth unemployment. However, so far little is known about the effectiveness of ALMP for unemployed youths, and for Germany no comprehensive quantitative analysis exists at all. Considering seven different ALMP programs, the results show an overall positive picture with respect to post-treatment employment probabilities for all measures under scrutiny except job creation schemes. With respect to effect heterogeneity, it can be shown that almost all programs particularly improve the labor market prospects of youths with high levels of pre-treatment schooling. Furthermore, youths who are assigned to the most successful employment measures have much better characteristics in terms of their pre-treatment employment chances than non-participants. The program assignment process therefore seems to favor individuals for whom the measures are most beneficial, indicating a lack of ALMP alternatives that could benefit low-educated youths.
This work is concerned with the characterization of certain classes of stochastic processes via duality formulae. In particular, we consider reciprocal processes with jumps, a subject neglected in the literature up to now. In the first part we introduce a new formulation of a characterization of processes with independent increments. This characterization is based on a duality formula satisfied by processes with infinitely divisible increments, in particular Lévy processes, which is well known in Malliavin calculus. We obtain two new methods to prove this duality formula, neither of which is based on the chaos decomposition of the space of square-integrable functionals. One of these methods uses a formula of partial integration that characterizes infinitely divisible random vectors. In this context, our characterization is a generalization of Stein's lemma for Gaussian random variables and Chen's lemma for Poisson random variables. The generality of our approach permits us to derive a characterization of infinitely divisible random measures. The second part of this work focuses on the study of the reciprocal classes of Markov processes with and without jumps and their characterization. We start with a review of existing results concerning the reciprocal classes of Brownian diffusions as solutions of duality formulae. As a new contribution, we show that the duality formula satisfied by elements of the reciprocal class of a Brownian diffusion has a physical interpretation as a stochastic Newton equation of motion. Through this interpretation we are able to connect the results of characterizations via duality formulae with the theory of stochastic mechanics, and through the mathematical approach with stochastic optimal control theory. As an application, we prove an invariance property of the reciprocal class of a Brownian diffusion under time reversal. In the context of pure jump processes we derive the following new results. We describe the reciprocal classes of Markov counting processes, also called unit jump processes, and obtain a characterization of the associated reciprocal class via a duality formula. This formula contains as key terms a stochastic derivative, a compensated stochastic integral, and an invariant of the reciprocal class. Moreover, we present an interpretation of the characterization of a reciprocal class in the context of stochastic optimal control of unit jump processes. As a further application we show that the reciprocal class of a Markov counting process has an invariance property under time reversal. Some of these results extend to the setting of pure jump processes, that is, when we admit different jump sizes. In particular, we show that the reciprocal classes of Markov jump processes can be compared using reciprocal invariants. A characterization of the reciprocal class of compound Poisson processes via a duality formula is possible under the assumption that the jump sizes of the process are incommensurable.
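For orientation, the two classical characterizations that the duality formula generalizes can be stated in their standard textbook forms:

```latex
% Stein's lemma: X ~ N(\mu, \sigma^2) if and only if, for all smooth bounded f,
\mathbb{E}\bigl[(X - \mu)\, f(X)\bigr] = \sigma^{2}\, \mathbb{E}\bigl[f'(X)\bigr]

% Chen's lemma: X ~ Poisson(\lambda) if and only if, for all bounded f,
\mathbb{E}\bigl[X\, f(X)\bigr] = \lambda\, \mathbb{E}\bigl[f(X + 1)\bigr]
```

Both identities equate a multiplication-type operator applied to X with a derivative-type (or difference-type) operator applied to f, which is exactly the shape of the duality formulae studied in the thesis.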
The chemical and physical properties of polymers can influence different cell types in different ways, e.g., with respect to adherence or functionality. The elasticity of a polymer primarily determines the traction forces a cell can develop against its substrate; cell behavior is then regulated via intracellular feedback mechanisms. The surface charge and/or hydrophilicity of a polymer first influences the adsorption of ions, proteins, and other molecules; the interactions with cells are subsequently mediated mainly by the composition, density, and conformation of the adsorbed components. Furthermore, different cell types can display different membrane-associated proteins, sugars, and lipids, so that polymer properties can produce cell-specific effects. Polymers that can regulate specific cell responses are becoming increasingly important for biotechnological applications and for use in regenerative medicine. The isolation and culture of primary keratinocytes is still demanding, and the adequate healing of skin wounds remains an ongoing medical challenge. A polymer that permits preferential adherence of keratinocytes while reducing the attachment of dermal fibroblasts would offer considerable advantages for keratinocyte cell culture and as a wound dressing. To investigate the potentially specific influence of particular polymer properties on primary human keratinocytes and dermal fibroblasts, a cell culture system for the mono- and coculture of both cell types was developed in the present work. The test system was designed as a screening assay to examine the influence of different polymer properties, in several gradations, on the cells. The following parameters were investigated: (1) viability and density of adherent and non-adherent cells, (2) damage to the cell membrane, and (3) selective adherence of keratinocytes in coculture, assessed by specific immunocytochemical staining of keratin 14 and vimentin. For the polymers with variable elasticity, the deposition of extracellular matrix components and the secretion of soluble factors by the cells were additionally examined. Crosslinked poly(n-butyl acrylates) (cPnBA) were used as model polymers for varying elasticity, since their elasticity can be adjusted via the crosslinker content. On the less elastic cPnBA, the ratio of keratinocytes to fibroblasts in coculture was twice as high as on the more elastic cPnBA, so that a slight cell-selective effect can be assumed. Acrylonitrile-based copolymers were used as model polymers for varying surface charge and hydrophilicity, since these properties can be adjusted via the type and molar fraction of the comonomer. By varying the molar fraction of the positively or negatively charged comonomers, 2-aminoethyl methacrylate hydrochloride (AEMA) and N-(3-aminopropyl)methacrylamide hydrochloride (APMA), or the sodium salt of 2-methyl-2-propene-1-sulfonic acid (NaMAS), respectively, the fraction of positive or negative charge in the copolymer was varied. The hydrophilicity of the copolymer was increased by raising the molar fraction of the hydrophilic comonomer N-vinylpyrrolidone (NVP). Increasing the molar fraction of the positively charged comonomer AEMA in the copolymer tended to result in a higher keratinocyte density, while the fibroblast density remained unchanged. Increasing the molar fraction of the positively charged comonomer APMA produced no clear differences in cell density, viability, or selectivity. With the stepwise increase of the molar fraction of the negatively charged comonomer NaMAS, a tendency towards improved keratinocyte adherence was observed, as in the case of AEMA. Increasing the hydrophilicity of the copolymers led to reduced adherence and viability for both keratinocytes and fibroblasts. In the present doctoral thesis, a test procedure was established that permits the investigation of primary human keratinocytes and primary human fibroblasts in monoculture and coculture on different polymers. The results obtained so far show that the adherence and viability of both cell types can be influenced by the targeted modification of different polymer properties. Reducing the elasticity and increasing the molar fraction of charged comonomers led to an increase in keratinocyte adherence. Since the fibroblasts remained unaffected, a slight cell selectivity was observed for some of the polymers investigated, which might be further enhanced by a further increase in stiffness or in the fraction of charged comonomers.
The Indian summer monsoon (ISM) is one of the largest climate systems on earth and affects the livelihood of nearly 40% of the world's population. Despite dedicated efforts, a comprehensive picture of monsoon variability has proved elusive, largely due to the absence of long-term high-resolution records, the spatial inhomogeneity of monsoon precipitation, and the complex forcing mechanisms (solar insolation, internal teleconnections such as the El Niño-Southern Oscillation, tropical-midlatitude interactions). My work aims to improve the understanding of monsoon variability through the generation of long-term high-resolution palaeoclimate data from climatically sensitive regions in the ISM and westerlies domain. To achieve this aim I have (i) identified proxies (sedimentological, geochemical, isotopic, and mineralogical) that are sensitive to environmental changes; (ii) used the identified proxies to generate long-term palaeoclimate data from two climatically sensitive regions, one in the NW Himalayas (the transitional westerlies and ISM domain in the Spiti valley) and one in the core monsoon zone (Lonar lake) in central India; (iii) undertaken a regional overview to generate “snapshots” of selected time slices; and (iv) interpreted the spatial precipitation anomalies in terms of those caused by modern teleconnections. This approach must be considered only a first step towards identifying past teleconnections, as the boundary conditions in the past were significantly different from today and would have affected the precipitation anomalies. As the Spiti valley is located in the tectonically active Himalayan orogen, it was essential to understand the role of regional tectonics in order to make valid interpretations of catchment erosion and detrital influx into the lake. My approach of using integrated structural/morphometric and geomorphic signatures provided clear evidence for active tectonics in this area and demonstrated the suitability of these lacustrine sediments as palaeoseismic archives. The investigations of the lacustrine outcrops in the Spiti valley also provided information on changes in the seasonality of precipitation and on the occurrence of frequent and intense periods (ca. 6.8-6.1 cal ka BP) of detrital influx, indicating extreme hydrological events in the past. Regional comparison for this time slice indicates a possible extended “break-monsoon-like” mode of the monsoon that favors enhanced precipitation over the Tibetan plateau, the Himalayas, and their foothills. My studies of surface sediments from Lonar lake helped to identify environmentally sensitive proxies that could also be used to interpret palaeodata obtained from a ca. 10 m long core raised from the lake in 2008. The core encompasses the entire Holocene and is the first well-dated (by 14C) archive from the core monsoon zone of central India. My identification of authigenic evaporite gaylussite crystals within the core sediments provided evidence of exceptionally dry conditions during 4.7-3.9 and 2.0-0.5 cal ka BP. Additionally, isotopic investigations of these crystals provided information on eutrophication, stratification, and carbon cycling processes in the lake.
Virtual 3D city and landscape models are the main subject investigated in this thesis. They digitally represent urban space and have many applications in different domains, e.g., simulation, cadastral management, and city planning. Visualization is an elementary component of these applications. Photo-realistic visualization with an increasingly high degree of detail leads to fundamental problems for comprehensible visualization: a large number of highly detailed and textured objects within a virtual 3D city model may create visual noise and overload the user with information, and objects are subject to perspective foreshortening and may be occluded or, because they become too small, no longer displayed in a meaningful way. In this thesis we present abstraction techniques that automatically process virtual 3D city and landscape models to derive abstracted representations, which have a reduced degree of detail while preserving essential characteristics. After introducing definitions for model, scale, and multi-scale representations, we discuss the fundamentals of map generalization as well as techniques for 3D generalization. The first presented technique is a cell-based generalization of virtual 3D city models. It creates abstract representations that have a highly reduced level of detail while maintaining essential structures, e.g., the infrastructure network, landmark buildings, and free spaces. The technique automatically partitions the input virtual 3D city model into cells based on the infrastructure network. The individual building models contained in each cell are aggregated to abstracted cell blocks. Using weighted infrastructure elements, cell blocks can be computed on different hierarchical levels, storing the hierarchy relation between the cell blocks. Furthermore, we identify landmark buildings within a cell by comparing the properties of individual buildings with the aggregated properties of the cell. For each block, the identified landmark building models are subtracted using Boolean operations and integrated in a photo-realistic way. Finally, for the interactive 3D visualization, we discuss the creation of the virtual 3D geometry and its appearance styling through colors, labeling, and transparency. We demonstrate the technique with example data sets and additionally discuss applications of generalization lenses and transitions between abstract representations. The second technique is a real-time rendering technique for the geometric enhancement of landmark objects within a virtual 3D city model. Depending on the virtual camera distance, landmark objects are scaled to ensure their visibility within a specific distance interval while deforming their environment. First, in a preprocessing step, a landmark hierarchy is computed; this is then used to derive distance intervals for the interactive rendering. At runtime, a scaling factor is computed from the virtual camera distance and applied to each landmark. The scaling factor is interpolated smoothly at the interval boundaries using cubic Bézier splines. Non-landmark geometry near landmark objects is deformed with respect to a limited number of landmarks. We demonstrate the technique by applying it to a highly detailed virtual 3D city model and to a generalized 3D city model, and we discuss an adaptation of the technique for non-linear projections and mobile devices. The third technique is a real-time rendering technique that creates abstract 3D isocontour visualizations of virtual 3D terrain models. The virtual 3D terrain model is visualized as a layered or stepped relief. The technique works without preprocessing and, as it is implemented using programmable graphics hardware, can be integrated with minimal changes into common terrain rendering techniques. The computation is done in the rendering pipeline for each vertex, each primitive (i.e., triangle), and each fragment. For each vertex, the height is quantized to the nearest isovalue. For each triangle, the configuration of its vertices with respect to the isovalues is determined first; using this configuration, the triangle is then subdivided, and the subdivision forms a partial step geometry aligned with the triangle. For each fragment, the surface appearance is determined, e.g., depending on the surface texture, shading, and height-color mapping. The flexible use of the technique is demonstrated with applications from focus+context visualization, out-of-core terrain rendering, and information visualization. This thesis thus presents components for the creation of abstract representations of virtual 3D city and landscape models. By re-using the visual language of cartography, the techniques enable users to build on their experience with maps when interpreting these representations, while characteristics of 3D geovirtual environments, e.g., continuous scale, interaction, and perspective, are taken into account.
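A minimal sketch of the per-vertex quantization and per-triangle classification steps described above, assuming uniformly spaced isovalues with spacing `step` (shader logic shown in Python for readability; the thesis implements these stages on programmable graphics hardware):

```python
def quantize_height(h: float, step: float) -> float:
    """Snap a terrain height to the nearest isovalue (multiple of `step`)."""
    return round(h / step) * step

def triangle_config(heights, step) -> int:
    """Classify a triangle by the isovalue indices of its three vertices.

    Triangles whose vertices share one index lie inside a single step;
    mixed indices mark triangles that must be subdivided to form the
    partial step geometry.
    """
    indices = [round(h / step) for h in heights]
    return len(set(indices))  # 1 = flat, 2 or 3 = needs subdivision

# Example: a triangle straddling two isovalue bands (step = 10 m)
print(quantize_height(23.7, 10.0))                 # -> 20.0
print(triangle_config((23.7, 27.2, 31.9), 10.0))   # -> 2
```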
Nowadays, model-driven engineering (MDE) promises to ease software development by decreasing the inherent complexity of classical software development. In order to deliver on this promise, MDE increases the level of abstraction and automation through the use of domain-specific models (DSMs) and model operations (e.g., model transformations or code generation). DSMs conform to domain-specific modeling languages (DSMLs), which increase the level of abstraction, and model operations are first-class entities of software development because they increase the level of automation. Nevertheless, MDE has to deal with at least two new dimensions of complexity, which are essentially caused by the increased linguistic and technological heterogeneity. The first dimension of complexity is setting up an MDE environment, an activity comprising the implementation or selection of DSMLs and model operations. Setting up an MDE environment is both time-consuming and error-prone because of the implementation or adaptation of model operations. The second dimension of complexity concerns applying MDE to actual software development. Applying MDE is challenging because a collection of DSMs, which may conform to heterogeneous DSMLs, is required to completely specify a complex software system. A single DSML can only describe a specific aspect of a software system at a certain level of abstraction and from a certain perspective. Additionally, DSMs are usually not independent but have inherent interdependencies, reflecting (partially) similar aspects of a software system at different levels of abstraction or from different perspectives. A subset of these dependencies are applications of various model operations, which are necessary to keep the degree of automation high. This becomes even worse when addressing the first dimension of complexity: due to continuous changes, all kinds of dependencies, including the applications of model operations, must also be managed continuously. This comprises maintaining the existence of these dependencies and the appropriate (re-)application of model operations. The contribution of this thesis is an approach that combines traceability and model management to address the aforementioned challenges of configuring and applying MDE for software development. The approach is a traceability approach because it supports capturing and automatically maintaining dependencies between DSMs, and a model management approach because it supports managing the automated (re-)application of heterogeneous model operations. In addition, it is a comprehensive model management approach: since the decomposition of model operations is encouraged to alleviate the first dimension of complexity, the subsequent composition of model operations is required to counteract their fragmentation. A significant portion of this thesis is concerned with providing a method for the specification of decoupled yet highly cohesive complex compositions of heterogeneous model operations. The approach supports two different kinds of composition: data-flow composition and context composition. Data-flow composition is used to define a network of heterogeneous model operations coupled solely by shared input and output DSMs. Context composition is related to a concept used in declarative model transformation approaches to compose individual model transformation rules (units) at any level of detail. In this thesis, context composition provides the ability to use a collection of dependencies as context for the composition of other dependencies, including model operations. In addition, the actual implementations of the model operations to be composed do not need to address any composition concerns. The approach is realized by means of a formalism called an executable and dynamic hierarchical megamodel, based on the original idea of megamodels. This formalism supports specifying compositions of dependencies (traceability and model operations). On top of this formalism, traceability is realized by means of a localization concept, and model management by means of an execution concept.
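A minimal sketch of a data-flow composition in the sense described above, where model operations are coupled solely by shared input and output models; the class, model names, and toy transformations below are invented for illustration, and the thesis's megamodel formalism is considerably richer:

```python
from typing import Callable, Dict

Model = dict  # a stand-in for a DSM: a named model with arbitrary content

class DataFlowComposition:
    """Network of model operations coupled only by shared input/output models."""

    def __init__(self):
        self.models: Dict[str, Model] = {}
        self.operations = []  # (input names, output name, function) triples

    def add_operation(self, inputs, output, fn: Callable):
        self.operations.append((inputs, output, fn))

    def execute(self):
        """Repeatedly apply operations whose input models are all available."""
        pending = list(self.operations)
        while pending:
            ready = [op for op in pending
                     if all(m in self.models for m in op[0])]
            if not ready:
                names = ", ".join(op[1] for op in pending)
                raise RuntimeError("unsatisfiable dependencies for: " + names)
            for op in ready:
                inputs, output, fn = op
                self.models[output] = fn(*(self.models[m] for m in inputs))
                pending.remove(op)

# Hypothetical usage: two operations sharing the intermediate model "psm"
flow = DataFlowComposition()
flow.models["pim"] = {"entities": ["Order", "Customer"]}
flow.add_operation(["pim"], "psm", lambda pim: {"tables": pim["entities"]})
flow.add_operation(["psm"], "code",
                   lambda psm: {"files": [t + ".java" for t in psm["tables"]]})
flow.execute()
print(flow.models["code"])  # {'files': ['Order.java', 'Customer.java']}
```

Note how neither lambda knows anything about the network it participates in; the coupling lives entirely in the shared model names, mirroring the point that composed operations need not implement any composition concerns.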
The microscopic origin of ultrafast demagnetization, i.e., the quenching of the magnetization of a ferromagnetic metal on a sub-picosecond timescale after laser excitation, is still only incompletely understood, despite a large body of experimental and theoretical work performed since the discovery of the effect more than 15 years ago. Time- and element-resolved x-ray magnetic circular dichroism measurements can provide insight into the microscopic processes behind ultrafast demagnetization as well as its dependence on material properties. Using the BESSY II Femtoslicing facility, a storage-ring-based source of 100 fs short soft x-ray pulses, the ultrafast magnetization dynamics of ferromagnetic NiFe and GdTb alloys as well as of a Au/Ni layered structure were investigated in laser-pump/x-ray-probe experiments. After laser excitation, the constituents of Ni50Fe50 and Ni80Fe20 exhibit distinctly different time constants of demagnetization, leading to decoupled dynamics, despite the strong exchange interaction that couples the Ni and Fe sublattices under equilibrium conditions. Furthermore, the time constants of demagnetization for Ni and Fe differ between Ni50Fe50 and Ni80Fe20, and also from the values for the respective pure elements. These variations are explained by taking into account the magnetic moments of the Ni and Fe sublattices, which are changed from the pure-element values by alloying, as well as the strength of the intersublattice exchange interaction. GdTb exhibits demagnetization in two steps, typical for rare earths. The time constant of the second, slower magnetization decay was previously linked to the strength of spin-lattice coupling in pure Gd and Tb, with the stronger, direct spin-lattice coupling in Tb leading to a faster demagnetization. In GdTb, the demagnetization of Gd follows that of Tb on all timescales. This is due to the opening of an additional channel for the dissipation of spin angular momentum to the lattice, since Gd magnetic moments in the alloy are coupled via indirect exchange interaction to neighboring Tb magnetic moments, which are in turn strongly coupled to the lattice. Time-resolved measurements of the ultrafast demagnetization of a Ni layer buried under a Au cap layer thick enough to absorb nearly all of the incident pump laser light showed a somewhat slower but still sub-picosecond demagnetization of the buried Ni layer in Au/Ni compared to a Ni reference sample. Supported by simulations, I conclude that demagnetization can thus be induced by the transport of hot electrons excited in the Au layer into the Ni layer, without the need for direct interaction between photons and spins.
The present work contains a statistical analysis of the entirety of public enterprises in Germany and of their economic situation. The study is based on a database of roughly 9,000 public enterprises with nearly 500 attributes, essentially corresponding to the items of the annual financial statements and various identification attributes (such as company seat, economic sector, and legal form). The analysis covers the period from 1998 to 2006. The extremely extensive data basis (annual financial statement statistics of public enterprises) is a great temptation for a statistician. The work applies methods of descriptive statistics and of financial statement analysis using balance sheet ratios. Particularly over the last twenty years, the development of the body of public enterprises has been shaped by processes of transformation and accompanied by debates about their performance. The dynamics of the population of public enterprises are evident above all in the diversity of their fields of activity and organizational forms. This work therefore first attempts to take stock of the public enterprise sector. A further objective was to describe the economic situation of public enterprises over the last decade, with their performance placed in the foreground. Measuring the performance of public enterprises solely in terms of business-management efficiency is certainly one-sided and insufficient; however, it is easier to operationalize than economic or social efficiency, as business-management efficiency criteria can readily be derived from the annual financial statements. This also makes a comparison with private enterprises possible within certain limits. The description of the economic situation of public enterprises was structured as an analysis of its individual components (asset, financial, and earnings position). Overall, the analysis of these components underscores the close interconnection between public enterprises and public budgets. The present study is intended to extend research in the field of data-driven statistics, which has often been neglected at universities in recent years in comparison with model-driven statistics.
Taking advantage of ATRP and using functionalized initiators, different functionalities were introduced at both the α and ω chain ends of synthetic polymers. These functionalized polymers could then undergo modular synthetic pathways such as click cycloaddition (copper-catalyzed or copper-free) or amidation to couple synthetic polymers to other synthetic polymers, biomolecules or silica monoliths. Using this general strategy and designing these co/polymers so that they are thermoresponsive, yet bioinert and biocompatible with adjustable cloud point values (as is the case in the present thesis), the whole generated system becomes "smart" and potentially applicable in different fields. The applications considered in the present thesis were polymer post-functionalization (in situ functionalization of micellar aggregates with low and high molecular weight molecules), hydrophilic/hydrophobic tuning, chromatography and bioconjugation (enzyme thermoprecipitation and recovery, improvement of enzyme activity). Different α-functionalized co/polymers containing a cholesterol moiety, an aldehyde, a t-Boc-protected amine, a TMS-protected alkyne or an NHS-activated ester were designed and synthesized in this work.
Structuring process models
(2012)
One can fairly adopt the ideas of Donald E. Knuth to conclude that process modeling is both a science and an art. Process modeling does have an aesthetic sense. Similar to composing an opera or writing a novel, process modeling is carried out by humans who undergo creative practices when engineering a process model. Therefore, the very same process can be modeled in a myriad of ways. Once modeled, processes can be analyzed by employing scientific methods. Usually, process models are formalized as directed graphs, with nodes representing tasks and decisions, and directed arcs describing temporal constraints between the nodes. Common process definition languages, such as Business Process Model and Notation (BPMN) and Event-driven Process Chain (EPC), allow process analysts to define models with arbitrarily complex topologies. The absence of structural constraints supports creativity and productivity, as there is no need to force ideas into a limited set of available structural patterns. Nevertheless, it is often preferable that models follow certain structural rules. A well-known structural property of process models is (well-)structuredness. A process model is (well-)structured if and only if every node with multiple outgoing arcs (a split) has a corresponding node with multiple incoming arcs (a join), and vice versa, such that the set of nodes between the split and the join induces a single-entry-single-exit (SESE) region; otherwise the process model is unstructured. The motivations for well-structured process models are manifold: (i) Well-structured process models are easier to lay out for visual representation, as their formalizations are planar graphs. (ii) Well-structured process models are easier for humans to comprehend. (iii) Well-structured process models tend to have fewer errors than unstructured ones, and it is less probable to introduce new errors when modifying a well-structured process model. (iv) Well-structured process models are better suited for analysis, as many existing formal techniques are applicable only to well-structured process models. (v) Well-structured process models are better suited for efficient execution and optimization, e.g., when discovering independent regions of a process model that can be executed concurrently. Consequently, there are process modeling languages that encourage well-structured modeling, e.g., Business Process Execution Language (BPEL) and ADEPT. However, well-structured process modeling entails some limitations: (i) There exist processes that cannot be formalized as well-structured process models. (ii) There exist processes that, when formalized as well-structured process models, require a considerable duplication of modeling constructs. Rather than expecting well-structured modeling from the start, we advocate the absence of structural constraints when modeling. Afterwards, automated methods can suggest, upon request and whenever possible, alternative formalizations that are "better" structured, preferably well-structured. In this thesis, we study the problem of automatically transforming process models into equivalent well-structured models. The developed transformations are performed under a strong notion of behavioral equivalence which preserves concurrency. The findings are implemented in a tool, which is publicly available.
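To make the SESE criterion concrete, the following minimal Python sketch (an illustration written for this summary, not the tool developed in the thesis; the graph representation and function names are chosen freely) tests whether the fragment between a given split and join is a single-entry-single-exit region of a process graph:

```python
from collections import deque

def reachable(graph, start):
    """Return all nodes reachable from `start` along directed edges."""
    seen, queue = set(), deque([start])
    while queue:
        for succ in graph.get(queue.popleft(), []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

def is_sese_region(graph, split, join):
    """True if the nodes strictly between `split` and `join` form a
    single-entry-single-exit region: the region is entered only via
    the split and left only via the join."""
    # Candidate region: nodes lying on some split -> join path.
    from_split = reachable(graph, split)
    region = {n for n in from_split
              if n != join and join in reachable(graph, n)} - {split}
    # Single entry: no edge from outside (except from the split) into the region.
    for node, succs in graph.items():
        if node != split and node not in region:
            if any(s in region for s in succs):
                return False
    # Single exit: region nodes point only inside the region or to the join.
    return all(s in region or s == join
               for n in region for s in graph.get(n, []))

# A structured XOR block: split "s" with branches "a"/"b" joining in "j".
model = {"start": ["s"], "s": ["a", "b"], "a": ["j"], "b": ["j"], "j": ["end"]}
print(is_sese_region(model, "s", "j"))  # True
```

Removing, say, the arc from "b" to "j" and pointing "b" at "end" instead would make the region unstructured, and the check would return False.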
In the context of cosmological structure formation, sheets, filaments and eventually halos form due to gravitational instabilities. It is noteworthy that, at all times, the majority of the baryons in the universe does not reside in the dense halos but in the filaments and sheets of the intergalactic medium. While at higher redshifts of z > 2 these baryons can be detected via the absorption of light (originating from more distant sources) by neutral hydrogen at temperatures of T ~ 10^4 K (the Lyman-alpha forest), at lower redshifts only about 20 % can be found in this state. The remainder (about 50 to 70 % of the total baryon mass) is unaccounted for by observational means. Numerical simulations predict that these missing baryons could reside in the filaments and sheets of the cosmic web at high temperatures of T = 10^4.5 - 10^7 K, but only at low to intermediate densities, constituting the warm-hot intergalactic medium (WHIM). The high temperatures of the WHIM are caused by the formation of shocks and the subsequent shock-heating of the gas. This results in a high degree of ionization and renders the reliable detection of the WHIM a challenging task. Recent high-resolution hydrodynamical simulations indicate that, at redshifts of z ~ 2, filaments are able to provide very massive galaxies with a significant amount of cool gas at temperatures of T ~ 10^4 K. This could have an important impact on the star formation in those galaxies. It is therefore of principal importance to investigate the particular hydro- and thermodynamical conditions of these large filament structures. Density and temperature profiles, as well as velocity fields, are expected to leave their special imprint on spectroscopic observations. A potential multiphase structure may act as a tracer in observational studies of the WHIM. In the context of cold streams, it is important to explore the processes which regulate the amount of gas transported by the streams. This includes the time evolution of filaments as well as possible quenching mechanisms. In this context, the halo mass range in which cold stream accretion occurs is of particular interest. In order to address these questions, we perform dedicated hydrodynamical simulations of very high resolution and investigate the formation and evolution of prototype structures representing the typical filaments and sheets of the WHIM. We start with a comprehensive study of the one-dimensional collapse of a sinusoidal density perturbation (pancake formation) and examine the influence of radiative cooling, heating due to a UV background, thermal conduction, and the effect of small-scale perturbations given by the cosmological power spectrum. We use a set of simulations parametrized by the wavelength of the initial perturbation L. For L ~ 2 Mpc/h the collapse leads to shock-confined structures. As a result of radiative cooling and of heating due to the UV background, a relatively cold and dense core forms. With increasing L the core becomes denser and more concentrated. Thermal conduction enhances this trend and may lead to an evaporation of the core at very large L ~ 30 Mpc/h. When extending our simulations into three dimensions, instead of a pancake structure we obtain a configuration consisting of well-defined sheets, filaments, and a gaseous halo. For L > 4 Mpc/h filaments form which are fully confined by an accretion shock. As with the one-dimensional pancakes, they exhibit an isothermal core. Thus, our results confirm a multiphase structure which may generate particular spectral tracers. We find that, after its formation, the core becomes shielded against further infall of gas onto the filament, and its mass content decreases with time. In the vicinity of the halo, the filament's core can be attributed to the cold streams found in other studies. We show that the basic structure of these cold streams exists from the very beginning of the collapse process. Furthermore, the cross section of the streams is constricted by the outward-moving accretion shock of the halo. Thermal conduction leads to a complete evaporation of the cold stream for L > 6 Mpc/h. This corresponds to halos with a total mass higher than M_halo = 10^13 M_sun, and predicts that in more massive halos star formation cannot be sustained by cold streams. Far away from the gaseous halo, the temperature gradients in the filament are not sufficiently strong for thermal conduction to be effective.
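For orientation, a one-dimensional pancake setup of this kind starts from a single sinusoidal density mode; in schematic form (symbols chosen here for illustration),

\[ \delta(x) \equiv \frac{\rho(x) - \bar{\rho}}{\bar{\rho}} = A \sin\!\left(\frac{2\pi x}{L}\right), \]

where \bar{\rho} is the mean density, A the initial amplitude, and L the perturbation wavelength that parametrizes the simulation series described above.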
Tectonic and geological processes on Earth often result in structural anisotropy of the subsurface, which can be imaged by various geophysical methods. In order to achieve appropriate and realistic Earth models for interpretation, inversion algorithms have to allow for an anisotropic subsurface. Within the framework of this thesis, I analyzed a magnetotelluric (MT) data set from the Cape Fold Belt in South Africa. This data set exhibited strong indications of crustal anisotropy, e.g. MT phases out of the expected quadrant, which cannot be fitted or interpreted with standard isotropic inversion algorithms. To overcome this obstacle, I have developed a two-dimensional inversion method for reconstructing anisotropic electrical conductivity distributions. The MT inverse problem is in general a non-linear and ill-posed minimization problem with many degrees of freedom: we have to assign an electrical conductivity value to each cell of a large grid approximating the Earth's subsurface; e.g., a grid with 100 x 50 cells results in 5000 unknown model parameters in the isotropic case. In an anisotropic scenario the number of unknowns is sixfold, as the single value of electrical conductivity becomes a symmetric, real-valued tensor, while the number of data remains unchanged. In order to invert successfully for anisotropic conductivities and to overcome the non-uniqueness of the solution of the inverse problem, it is necessary to use appropriate constraints on the class of allowed models. This becomes even more important as MT data is not equally sensitive to all anisotropic parameters. In this thesis, I have developed an algorithm in which the solution of the anisotropic inversion problem is calculated by minimization of a global penalty functional consisting of three terms: the data misfit, the model roughness constraint and the anisotropy constraint. For comparison, in an isotropic approach only the first two terms are minimized. The newly defined anisotropy term is measured by the sum of the squared differences of the principal conductivity values of the model. The basic idea of this constraint is straightforward: if an isotropic model is already adequate to explain the data, there is no need to introduce electrical anisotropy at all. In order to ensure successful inversion, appropriate trade-off parameters, also known as regularization parameters, have to be chosen for the different model constraints. Synthetic tests show that using fixed trade-off parameters usually causes the inversion to end up with either a smooth model with large RMS error or a rough model with small RMS error. Using a relaxation approach on the regularization parameters after each successful inversion iteration results in a smoother inversion model and better convergence, and appears to be a sound way of selecting the trade-off parameters. In general, the proposed inversion method is adequate for resolving the principal conductivities defined in the horizontal plane. If none of the principal directions of the anisotropic structure coincides with the predefined strike direction, only the corresponding effective conductivities – the projections of the principal conductivities onto the model coordinate axes – can be resolved, and the information about the rotation angles is lost. Finally, the MT data from the Cape Fold Belt in South Africa were analyzed. The data exhibit an area (> 10 km) where MT phases above 90 degrees occur. This part of the data cannot be modeled by standard isotropic modeling procedures and hence cannot be properly interpreted. The proposed inversion method, however, could not reproduce the anomalously large phases as desired, because the information about the rotation angles is lost; MT phases outside the first quadrant are usually generated by anisotropic anomalies with oblique anisotropy strike. To meet this challenge, the algorithm needs further development. However, forward modeling studies with the MT data have shown that a highly conductive surface heterogeneity in combination with a mid-crustal electrically anisotropic zone is required to fit the data. According to known geological and tectonic information, the mid-crustal zone is interpreted as a deep aquifer related to the fractured Table Mountain Group rocks in the Cape Fold Belt.
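Schematically (the notation is chosen here for illustration; the three terms are defined in the thesis as described above), the global penalty functional combines the contributions as

\[ \Phi(\mathbf{m}) = \Phi_{d}(\mathbf{m}) + \lambda\,\Phi_{r}(\mathbf{m}) + \gamma \sum_{k} \sum_{i<j} \bigl(\sigma_{k,i} - \sigma_{k,j}\bigr)^{2}, \]

where \Phi_d is the data misfit, \Phi_r the model roughness, \sigma_{k,1}, \sigma_{k,2}, \sigma_{k,3} are the principal conductivities of model cell k, and \lambda, \gamma are the trade-off (regularization) parameters that the relaxation approach adjusts between iterations. For \gamma \to \infty the anisotropy term forces all principal conductivities of a cell together, recovering the isotropic case.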
The dissertation examines the use of performance information by public managers. "Use" is conceptualized as purposeful utilization in order to steer, learn, and improve public services. The main research question is: Why do public managers use performance information? To answer this question, I systematically review the existing literature, identify research gaps and introduce the approach of my dissertation. The first part deals with manager-related variables that might affect performance information use but have thus far been disregarded. The second part models performance data use by applying a theory from social psychology which is based on the assumption that this management behavior is conscious and reasoned. The third part examines the extent to which explanations of performance information use vary if we include other sources of "unsystematic" feedback in our analysis. The empirical results are based on survey data from 2011. I surveyed middle managers from eight selected divisions of all German cities with county status (n=954). To analyze the data, I used factor analysis, multiple regression analysis, and structural equation modeling. My research resulted in four major findings: 1) The use of performance information can be modeled as a reasoned behavior which is determined by the attitude of the managers and of their immediate peers. 2) Regular users of performance data are, surprisingly, not generally inclined to analyze abstract data but rather prefer gathering information through personal interaction. 3) Managers who take ownership of performance information at an early stage in the measurement process are also more likely to use this data when it is reported to them. 4) Performance reports are only one source of information among many; public managers prefer verbal feedback from insiders and feedback from external stakeholders over systematic performance reports. The dissertation explains these findings using a deductive approach and discusses their implications for theory and practice.
Soil conditions under vegetation cover and their spatial and temporal variations from point to catchment scale are crucial for understanding hydrological processes within the vadose zone, for managing irrigation and, consequently, for maximizing yield by precision farming. Soil moisture and soil roughness are the key parameters that characterize the soil status. In order to monitor their spatial and temporal variability on large scales, remote sensing techniques are required. Therefore, the determination of soil parameters under vegetation cover was approached in this thesis by means of (multi-angular) polarimetric SAR acquisitions at a longer wavelength (L-band, lambda = 23 cm). In this thesis, the penetration capabilities of L-band are combined with newly developed (multi-angular) polarimetric decomposition techniques to separate the different scattering contributions which occur in the vegetation and on the ground. Subsequently, the ground components are inverted to estimate the soil characteristics. The novel (multi-angular) polarimetric decomposition techniques for soil parameter retrieval are physically based, computationally inexpensive and can be solved analytically without any a priori knowledge. Therefore, they can be applied directly to agricultural areas without test site calibration. The developed algorithms are validated with fully polarimetric SAR data acquired by the airborne E-SAR sensor of the German Aerospace Center (DLR) for three different study areas in Germany. The achieved results reveal inversion rates of up to 99% for the soil moisture and soil roughness retrieval in agricultural areas. However, in forested areas the inversion rate drops significantly for most of the algorithms, because the applied scattering models at L-band are invalid for inversion in forests. The validation against simultaneously acquired field measurements indicates an estimation accuracy (root mean square error) of 5-10 vol.% for the soil moisture (range of in situ values: 1-46 vol.%) and of 0.37-0.45 cm for the soil roughness (range of in situ values: 0.5-4.0 cm) within the catchment. Hence, a continuous monitoring of soil parameters with the obtained precision, excluding frozen and snow-covered conditions, is possible. Especially future fully polarimetric, space-borne, long-wavelength SAR missions can profit distinctly from the developed polarimetric decomposition techniques for the separation of ground and volume contributions as well as for soil parameter retrieval on large spatial scales.
Modelling of environmental change impacts on water resources and hydrological extremes in Germany
(2012)
Water resources, in terms of quantity and quality, are significantly influenced by environmental changes, especially by climate and land use changes. The main objective of the present study is to project climate change impacts on the seasonal dynamics of water fluxes, spatial changes in water balance components as well as the future flood and low flow conditions in Germany. On the one hand, this study is based on the modeling results of the process-based eco-hydrological model SWIM (Soil and Water Integrated Model) driven by various regional climate scenarios. On the other hand, it is supported by statistical analysis of long-term trends in observed and simulated time series. In addition, this study evaluates the impacts of potential land use changes on water quality in terms of NO3-N load in selected sub-regions of the Elbe basin. In the context of climate change, the actual evapotranspiration is likely to increase in most parts of Germany, while total runoff generation may decrease in southern and eastern regions in the scenario period 2051-2060. Water discharge in all six studied large rivers (Ems, Weser, Saale, Danube, Main and Neckar) would be 8 – 30% lower in summer and autumn compared to the reference period (1961 – 1990), and the strongest decline is expected for the Saale, Danube and Neckar. The 50-year low flow is likely to occur more frequently in western, southern and central Germany after 2061, as suggested by more than 80% of the model runs. The current low flow period (from August to September) may be extended into the late autumn at the end of this century. Higher winter flow is expected in all of these rivers, and the increase is most significant for the Ems (about 18%). No general pattern in the direction of flood changes can be concluded from the results driven by different RCMs, emission scenarios and multiple realizations. Optimal agricultural land use and management are essential for the reduction of nutrient loads and the improvement of water quality. In the Weiße Elster and Unstrut sub-basins (Elbe), an increase of 10% in the winter rape area can result in 12-19% more NO3-N load in rivers. In contrast, another energy crop, maize, has a moderate effect on the water environment. Mineral fertilizers have a much stronger effect on the NO3-N load than organic fertilizers. Cover crops, which play an important role in the reduction of nitrate losses from fields, should be maintained on cropland. The uncertainty in estimating future high flows and, in particular, extreme floods remains high due to different RCM structures, emission scenarios and multiple realizations. In contrast, the projection of low flows under warmer climate conditions appears to be more pronounced and consistent. The largest source of uncertainty related to NO3-N modelling originates from the input data on agricultural management.
One of the major problems for the implementation of water resources planning and management in arid and semi-arid environments is the scarcity of hydrological data and, consequently, of research studies. In this thesis, the hydrology of dryland river systems was analyzed, and a semi-distributed hydrological model and a forecasting approach were developed for flow transmission processes in river systems, with a focus on semi-arid conditions. Three different sources of hydrological data (streamflow series, groundwater level series and multi-temporal satellite data) were combined in order to analyze the channel transmission losses of a large reach of the Jaguaribe River in NE Brazil. A perceptual model of this reach was derived, suggesting that models developed for sub-humid and temperate regions may be more suitable for this reach than classical models developed for arid and semi-arid regions. In summary, it was shown that this river reach is hydraulically connected with the groundwater and shifts from being a losing river during the dry season and the beginning of the rainy season to a losing/gaining (mostly losing) river in the middle and at the end of the rainy season. A new semi-distributed channel transmission losses model was developed, based primarily on the capability of simulating very different dryland environments and on flexible model structures for testing hypotheses about the dominant hydrological processes of rivers. This model was successfully tested on a large reach of the Jaguaribe River in NE Brazil and on a small stream in the Walnut Gulch Experimental Watershed in the SW USA. Hypotheses on the dominant processes of the channel transmission losses (different model structures) in the Jaguaribe River were evaluated, showing that both lateral (stream-)aquifer water fluxes and groundwater flow in the underlying alluvium parallel to the river course are necessary to predict streamflow and channel transmission losses, the former process being more relevant than the latter. This procedure not only reduced model structure uncertainties, but also exposed modelling failures by rejecting model structure hypotheses, namely streamflow without river-aquifer interaction and stream-aquifer flow without groundwater flow parallel to the river course. The application of the model to different dryland environments enabled learning about the model itself from differences in channel reach responses. For example, the parameters related to the unsaturated part of the model, which were active for the small reach in the USA, presented a much greater variation in the sensitivity coefficients than those driving the saturated part of the model, which were active for the large reach in Brazil. Moreover, a nonparametric approach, which dealt with both the deterministic evolution and the inherent fluctuations in river discharge data, was developed based on a qualitative dynamical-system-based criterion, involving a learning process about the structure of the time series instead of a fitting procedure only. This approach, based only on the discharge time series itself, was applied to a headwater catchment in Germany, in which runoff is induced either by convective rainfall during the summer or by snow melt in the spring.
The application showed the following important features:
- the differences between runoff measurements were more suitable than the actual runoff measurements when using regression models;
- the catchment runoff system shifted from a possible dynamical system contaminated with noise to a linear random process as the sampling interval of the discharge time series increased;
- runoff underestimation can be expected for rising limbs and overestimation for falling limbs.
This nonparametric approach was compared with a distributed hydrological model designed for real-time flood forecasting, with both presenting similar results on average. Finally, a benchmark for hydrological research using semi-distributed modelling was proposed, based on the aforementioned analysis, modelling and forecasting of flow transmission processes. The aim of this benchmark was not to describe a blueprint for hydrological modelling design, but rather to propose a scientific method to improve hydrological knowledge using semi-distributed hydrological modelling. Following the application of the proposed benchmark to a case study, the actual state of its hydrological knowledge and its predictive uncertainty can be determined, primarily through rejected hypotheses on the dominant hydrological processes and differences in catchment/variable responses.
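As a generic illustration of such a nonparametric, data-driven forecast (a simple nearest-neighbour scheme on differenced discharge, written for this summary; it is not the thesis' exact criterion, and all parameter values are arbitrary):

```python
import numpy as np

def knn_forecast(q, m=3, k=5):
    """One-step discharge forecast from the k nearest neighbours of the
    current length-m pattern of past increments (a generic nonparametric
    scheme; the values of m and k are illustrative)."""
    dq = np.diff(q)                          # work on differences, not raw runoff
    if len(dq) <= m:
        raise ValueError("series too short for the chosen pattern length")
    # Library of historical patterns and the increment that followed each one.
    patterns = np.array([dq[i:i + m] for i in range(len(dq) - m)])
    followers = dq[m:]
    current = dq[-m:]
    nearest = np.argsort(np.linalg.norm(patterns - current, axis=1))[:k]
    return q[-1] + followers[nearest].mean()  # last level plus predicted increment

rng = np.random.default_rng(0)
q = 10.0 + np.cumsum(rng.normal(0.0, 0.1, 500))  # synthetic discharge series
print(knn_forecast(q))
```

Working on increments rather than levels mirrors the first bullet above: regression on differences avoids the trivially high autocorrelation of the raw runoff series.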
Bad governance causes economic, social, developmental and environmental problems in many developing countries. Developing countries have adopted a number of reforms that have assisted in achieving good governance. The success of governance reform depends on the starting point of each country – what institutional arrangements exist at the outset and who the people implementing reforms within the existing institutional framework are. This dissertation focuses on how formal institutions (laws and regulations) and informal institutions (culture, habit and conception) impact good governance. Three characteristics central to good governance – transparency, participation and accountability – are studied in the research.
A number of key findings emerged: governance in Hanoi and in Berlin represents the two extremes of the scale – while governance in Berlin is almost at the top of the scale, governance in Hanoi is at the bottom. Good governance in Hanoi is still far from achieved. In Berlin, information about public policies, administrative services and public finance is available, reliable and understandable. People do not encounter any problems accessing public information. In Hanoi, however, public information is not easy to access. There are big differences between Hanoi and Berlin in the three forms of participation. While voting in Hanoi to elect local deputies is formal and forced, elections in Berlin are fair and free. The candidates in local elections in Berlin come from different parties, whereas the candidacy of local deputies in Hanoi is thoroughly controlled by the Fatherland Front. Even though the turnout of voters in local deputy elections is close to 90 percent in Hanoi, the legitimacy of both the elections and the process of representation is non-existent because the local deputy candidates are decided by the Communist Party.
The involvement of people in solving local problems is encouraged by the government in Berlin. The different initiatives include the citizens' budget, citizen activities, citizen initiatives, etc. Individual citizens are free to participate either individually or through an association.
Owing to the lack of transparency and participation, the quality of public services in Hanoi is poor. Citizens seldom get their services on time as required by the regulations. Citizens who want to receive public services can bribe officials directly, use the power of relationships, or pay a third person – the mediator ("Cò" in Vietnamese).
In contrast, public service delivery in Berlin follows the customer-oriented principle. The quality of service is high in relation to time and cost. Speed money, bribery and the use of relationships to gain preferential public service do not exist in Berlin.
Using the examples of Berlin and Hanoi, one can clearly see how transparency, participation and accountability are interconnected and influence each other. Without free and fair elections as well as the participation of non-governmental organisations, civil organisations, and the media in political decision-making and public actions, it is hard to hold the Hanoi local government accountable.
The key institutional differences (regulative and cognitive) between Berlin and Hanoi reflect three main principles: rule of law vs. rule by law, pluralism vs. a single-party monopoly in politics, and social market economy vs. market economy with socialist orientation.
In Berlin, the logic of appropriateness and the codes of conduct are respect for laws, respect for individual freedom and ideas, and awareness of community development. People in Berlin take it for granted that public services are delivered to them fairly. Ideas such as using money or relationships to shorten public administrative procedures do not exist in the minds of either public officials or citizens.
In Hanoi, under a weak formal framework of good governance, new values and norms (prosperity, achievement) generated in the economic transition interact with the habits of the centrally planned economy (lying, dependence, passivity) and with traditional values (hierarchy, harmony, family, collectivism), and together they influence the behaviour of those involved.
In Hanoi, "doing the right thing", such as compliance with the law, has not become "the way it is".
The unintended consequence of the deliberate reform actions of the Party is the prevalence of corruption. The socialist orientation seems not to have been achieved as the gap between the rich and the poor has widened.
Good governance is not achievable if citizens and officials are concerned only with their self-interest. State and society depend on each other. Theoretically, to achieve good governance in Hanoi, institutions (formal and informal) capable of creating good citizens, officials and deputies should be developed. Good citizens are good by habit rather than by nature.
The rule of law principle is necessary for the professional performance of local administrations and People’s Councils. When the rule of law is applied consistently, the room for informal institutions to function will be reduced.
Promoting good governance in Hanoi depends on the need and desire of the government and the people themselves to change. Good governance in Berlin can be seen as the result of the efforts of the local government and citizens over a long period of development and continuous adjustment.
Institutional transformation is always a long and complicated process because the change in formal regulations as well as in the way they are implemented may meet strong resistance from the established practice. This study has attempted to point out the weaknesses of the institutions of Hanoi and has identified factors affecting future development towards good governance. But it is not easy to determine how long it will take to change the institutional setting of Hanoi in order to achieve good governance.
The recognition of complex carbohydrates by the tailspike protein of bacteriophage HK620
(2012)
Owing to their structural diversity and their often exposed position on cell surfaces, carbohydrates are important recognition structures. The interactions of proteins with these carbohydrates mediate a specific exchange of information. Protein-carbohydrate interactions and their driving forces are as yet only partially understood, since little structural data is available for proteins in complex with carbohydrates, and mostly for small ones. This doctoral thesis aims to contribute to the understanding of protein-carbohydrate interactions through analyses of structural thermodynamics, in order to eventually enable predictions with reliable algorithms. The tailspike protein (TSP) of bacteriophage HK620 served as a model system for the recognition of complex carbohydrates. This phage specifically recognizes its E. coli host by its surface sugars, the so-called O antigens. The TSPs of the phage bind the O antigen of the lipopolysaccharide (LPS) and additionally exhibit hydrolytic activity towards the polysaccharide (PS). Using isolated oligosaccharides of the antigen (type O18A1), binding to HK620TSP and to several of its variants was analyzed systematically. The binding of complex carbohydrates by HK620TSP is characterized by large interaction surfaces. Single amino acid exchanges in the active site generated variants with a thousandfold increased affinity (KD ~ 100 nM) compared to the wild-type protein (KD ~ 130 μM). A distinctive feature of the system is that binding at room temperature is driven not only enthalpically but also entropically. The favorable entropy contribution originates from the large number of water molecules displaced upon binding of the hexasaccharide. X-ray structure analyses showed analogous protein and carbohydrate conformations for all TSP complexes except for variant D339N, independent of the hexasaccharide affinity. The binding site can be divided into two regions: at the reducing end lies a hydrophobic pocket with small contributions to affinity generation. Access to this pocket can be blocked by a single amino acid exchange (D339N) without a large loss of affinity. In the second region, a binding site for an additional water molecule can be generated by exchanging a glutamate for a glutamine (E372Q). The rotation of a few amino acids upon carbohydrate binding leads to desolvation and to the formation of additional hydrogen bonds, yielding a strong gain in affinity. HK620TSP is not only specific for the O18A1 antigen but also recognizes the type O18A oligosaccharide, which is shortened by one glucose, and hydrolyzes polymeric structures thereof. Binding studies with the O18A pentasaccharide showed that the driving forces of binding are shifted compared with the O18A1 hexasaccharide described above. Owing to the missing side-chain glucose, binding is less strongly entropy-driven than for the O18A1 hexasaccharide (Δ(-TΔS) ~ 10 kJ/mol), while the enthalpy contribution to binding is more favorable (ΔΔH ~ -10 kJ/mol). Overall, these effects compensate each other, so that very similar affinities of the TSP variants for the O18A1 hexasaccharide and the O18A pentasaccharide were measured. Binding of the glucose displaces four water molecules from a hydrophobic pocket, which is strongly favored entropically. In enthalpic terms this, like some of the contacts between the glucose and residues in the pocket, is rather unfavorable. Binding of the glucose into the hydrophobic pocket of HK620TSP thus does not contribute to affinity generation, and it may be surmised that the O18A1 antigen-binding HK620TSP evolved from an O18A antigen-binding TSP. In the third subproject of this dissertation, the infection mechanism of phage HK620 was investigated. It was shown that, analogous to the related phage P22, DNA ejection from HK620 can be induced in vitro by the host's lipopolysaccharide (LPS) alone. The morphology and chain length of the LPS as well as the activity of HK620TSP towards the LPS proved to be essential. DNA ejection could also be induced in vitro by LPS from bacteria of serogroup O18A, which is likewise bound and hydrolyzed by the phage's TSP. These results emphasize the role of TSPs in the recognition of LPS receptors as an important step in infection by the podoviruses HK620 and P22.
Carbohydrate recognition is a ubiquitous principle underlying many fundamental biological processes like fertilization, embryogenesis and viral infections. But how carbohydrate specificity and affinity induce a molecular event is not well understood. One example is bacteriophage P22, which binds and infects three distinct Salmonella enterica (S.) hosts. It recognizes and depolymerizes repetitive carbohydrate structures of the O antigen in its host's outer membrane lipopolysaccharide molecule. This is mediated by tailspikes, mainly β-helical appendages on the short, non-contractile tail apparatus of phage P22 (a podovirus). The O antigen of all three Salmonella enterica hosts is built from tetrasaccharide repeating units consisting of an identical main chain with a distinguishing 3,6-dideoxyhexose substituent that is crucial for P22 tailspike recognition: tyvelose in S. Enteritidis, abequose in S. Typhimurium and paratose in S. Paratyphi. In the first study, the complexes of the P22 tailspike with its hosts' O antigen octasaccharides were characterized. The S. Paratyphi octasaccharide binds less tightly (ΔΔG ≈ 7 kJ/mol) to the tailspike than those of the other two hosts. Crystal structure analysis of the P22 tailspike co-crystallized with S. Paratyphi octasaccharides revealed different interactions than those observed before in tailspike complexes with S. Enteritidis and S. Typhimurium octasaccharides. These different interactions arise from a structural rearrangement in the S. Paratyphi octasaccharide. It results in an unfavorable glycosidic bond Φ/Ψ angle combination that had also been observed when the S. Paratyphi octasaccharide conformation was analyzed in an aprotic environment. Contributions of individual protein surface contacts to binding affinity were analyzed, showing that conserved structural waters mediate specific recognition of all three different Salmonella host O antigens. Although the different O antigen structures show distinct binding behavior on the tailspike surface, all three hosts are recognized and infected by phage P22. Hence, in a second study, binding measurements revealed that multivalent O antigen binds with high avidity to the P22 tailspike. Dissociation rates of the polymer were three times slower than for an octasaccharide fragment, pointing towards high affinity for O antigen polysaccharide. Furthermore, when phage P22 was incubated with lipopolysaccharide aggregates before plating on S. Typhimurium cells, P22 infectivity was significantly reduced. Therefore, in a third study, the role of carbohydrate recognition in the infection process was characterized. It was shown that large S. Typhimurium lipopolysaccharide aggregates trigger DNA release from the phage capsid in vitro. This provides evidence that phage P22 does not use a second receptor on the Salmonella surface for infection. P22 tailspike binding and cleavage activity modulate DNA egress from the phage capsid. DNA release occurred more slowly when the phage possessed mutant tailspikes with less hydrolytic activity, and was not induced if the lipopolysaccharides contained tailspike-shortened O antigen polymers. Furthermore, the onset of DNA release was delayed by tailspikes with reduced binding affinity. The results suggest a model for P22 infection induced by carbohydrate recognition: tailspikes position the phage on Salmonella enterica, and their hydrolytic activity forces a central structural protein of the phage assembly, the plug protein, onto the host's membrane surface. Upon membrane contact, a conformational change has to occur in the assembly to eject DNA and pilot proteins from the phage and establish infection. Earlier studies had investigated DNA ejection in vitro solely for viruses with long, non-contractile tails (siphoviruses) recognizing protein receptors. Podovirus P22 in this work was therefore the first example of a short-tailed phage with an LPS-recognition organelle that can trigger DNA ejection in vitro. However, O antigen-binding and -cleaving tailspikes are widely distributed in the phage biosphere, for example in siphovirus 9NA. Crystal structure analysis of the 9NA tailspike revealed a fold completely similar to that of the P22 tailspike, although the two share only 36 % sequence identity. Moreover, the 9NA tailspike possesses similar enzymatic activity towards S. Typhimurium O antigen, mediated by conserved amino acids, and lipopolysaccharide aggregates likewise trigger a DNA ejection process from siphovirus 9NA. 9NA expelled its DNA 30 times faster than podovirus P22, although the associated conformational change is controlled by a similarly high activation barrier. The difference in DNA ejection velocity mirrors the different tail morphologies and their efficiency in translating a carbohydrate recognition signal into action.
Structural dynamics of photoexcited nanolayered perovskites studied by ultrafast x-ray diffraction
(2012)
This publication-based thesis represents a contribution to the active research field of ultrafast structural dynamics in laser-excited nanostructures. The investigation of such dynamics is mandatory for understanding the various physical processes on microscopic scales in complex materials, which hold great potential for advances in many technological applications. I theoretically and experimentally examine the coherent, incoherent and anharmonic lattice dynamics of epitaxial metal-insulator heterostructures on timescales ranging from femtoseconds up to nanoseconds. To infer information on the transient dynamics in the photoexcited crystal lattices, experimental techniques using ultrashort optical and x-ray pulses are employed. The experimental setups include table-top sources as well as large-scale facilities such as synchrotron sources. At the core of my work lies the development of a linear-chain model to simulate and analyze the photoexcited atomic-scale dynamics. The calculated strain fields are then used to simulate the optical and x-ray response of the considered thin films and multilayers in order to relate the experimental signatures to particular structural processes. In this way, one obtains insight into the rich lattice dynamics exhibiting coherent transport of vibrational energy from local excitations via delocalized phonon modes of the samples. The complex deformations in tailored multilayers are found to give rise to highly nonlinear x-ray diffraction responses due to transient interference effects. The understanding of such effects and the ability to calculate them precisely are exploited for the design of novel ultrafast x-ray optics. In particular, I present several Phonon Bragg Switch concepts to efficiently generate ultrashort x-ray pulses for time-resolved structural investigations. By extending the numerical models to include incoherent phonon propagation and anharmonic lattice potentials, I present a new view on the fundamental research topics of nanoscale thermal transport and anharmonic phonon-phonon interactions, such as nonlinear sound propagation and phonon damping. The former issue is exemplified by the time-resolved heat conduction from thin SrRuO3 films into a SrTiO3 substrate, which proceeds unexpectedly slowly. Furthermore, I discuss various experiments which can be well reproduced by the versatile numerical models and thus evidence strong lattice anharmonicities in the perovskite oxide SrTiO3. The thesis also presents several advances in experimental techniques, such as time-resolved phonon spectroscopy with optical and x-ray photons, as well as concepts for the implementation of x-ray diffraction setups at standard synchrotron beamlines with greatly improved time resolution for investigations of ultrafast structural processes. This work forms the basis for ongoing research topics in complex oxide materials, including electronic correlations and phase transitions related to the elastic, magnetic and polarization degrees of freedom.
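To illustrate the kind of linear-chain model that lies at the core of this work, here is a generic masses-and-springs sketch in Python (the parameters and the excitation profile are illustrative only, not the calibrated model of the thesis):

```python
import numpy as np

def propagate_chain(x0, n_steps, dt, mass=1.0, spring=1.0):
    """Velocity-Verlet propagation of a 1D chain of identical masses coupled
    by harmonic nearest-neighbour springs (free boundaries). `x0` holds the
    initial displacement of each cell from its equilibrium position."""
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)

    def forces(x):
        f = np.zeros_like(x)
        f[:-1] -= spring * (x[:-1] - x[1:])   # pull from the right neighbour
        f[1:]  -= spring * (x[1:] - x[:-1])   # pull from the left neighbour
        return f

    traj, f = [x.copy()], forces(x)
    for _ in range(n_steps):
        x += v * dt + 0.5 * (f / mass) * dt**2
        f_new = forces(x)
        v += 0.5 * ((f + f_new) / mass) * dt
        f = f_new
        traj.append(x.copy())
    return np.array(traj)

# A sudden displacement of the first ten cells (the "film") launches strain
# fronts into the remaining cells (the "substrate").
x0 = np.zeros(200)
x0[:10] = 0.01
traj = propagate_chain(x0, n_steps=2000, dt=0.05)
strain = np.diff(traj, axis=1)   # inter-cell strain vs. time
print(np.abs(strain).max())
```

Strain fields of this type are the intermediate quantity from which the optical and x-ray responses described above can then be computed.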
In the course of this thesis, gold nanoparticle/polyelectrolyte multilayer structures were prepared, characterized, and investigated with respect to their static and ultrafast optical properties. Using the dip-coating or spin-coating layer-by-layer deposition method, gold-nanoparticle layers were embedded in a polyelectrolyte environment with high structural perfection. Typical structures exhibit four repetition units, each consisting of one gold-particle layer and ten double layers of polyelectrolyte (cationic + anionic polyelectrolyte). The structures were characterized by X-ray reflectivity measurements, which reveal Bragg peaks up to the seventh order, evidencing the high stratification of the particle layers. In the same measurements, pronounced Kiessig fringes were observed, which indicate a low global roughness of the samples. Atomic force microscopy (AFM) images verified this low roughness, which results from the high smoothing capabilities of polyelectrolyte layers. This smoothing effect facilitates the fabrication of stratified nanoparticle/polyelectrolyte multilayer structures, as nicely illustrated by a transmission electron microscopy image. The samples' optical properties were investigated by static spectroscopic measurements in the visible and UV range. The measurements revealed a frequency shift of the reflectance and of the plasmon absorption band, depending on the thickness of the polyelectrolyte layers that cover a nanoparticle layer. When the covering layer becomes thicker than the particle interaction range, the absorption spectrum becomes independent of the polymer thickness. However, the reflectance spectrum continues shifting to lower frequencies (even for large thicknesses). The range of plasmon interaction was determined to be on the order of the particle diameter for 10 nm, 20 nm, and 150 nm particles. The transient broadband complex dielectric function of a multilayer structure was determined experimentally by ultrafast pump-probe spectroscopy. This was achieved by simultaneous measurements of the changes in the reflectance and transmittance of the excited sample over a broad spectral range. The changes in the real and imaginary parts of the dielectric function were directly deduced from the measured data by using a recursive formalism based on the Fresnel equations. This method can be applied to a broad range of nanoparticle systems where experimental data on the transient dielectric response are rare. This complete experimental approach serves as a test ground for modeling the dielectric function of a nanoparticle compound structure upon laser excitation.
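For reference, recursive formalisms based on the Fresnel equations are commonly written (standard textbook form at normal incidence; the symbols are chosen here for illustration) as

\[ r_{j} = \frac{\rho_{j} + r_{j+1}\, e^{2i\beta_{j+1}}}{1 + \rho_{j}\, r_{j+1}\, e^{2i\beta_{j+1}}}, \qquad \rho_{j} = \frac{\tilde{n}_{j} - \tilde{n}_{j+1}}{\tilde{n}_{j} + \tilde{n}_{j+1}}, \qquad \beta_{j} = \frac{2\pi}{\lambda}\, \tilde{n}_{j}\, d_{j}, \]

where \tilde{n}_{j} = \sqrt{\varepsilon_{j}} is the complex refractive index of layer j with thickness d_{j}, \rho_{j} is the Fresnel coefficient of the interface between layers j and j+1, and the recursion is evaluated from the substrate upwards. Inverting the measured transient reflectance and transmittance against such a recursion yields the real and imaginary parts of the dielectric function, as described above.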
My non-reductionist reading of Gadamer, according to which a mutually constitutive relation exists between "language" and "experience", makes it possible to reject the charge that Gadamer's philosophy of language leads to relativism, a charge frequently raised against positions in the philosophy of language. According to some thinkers, the philosophers of postmodernity, among whom Gadamer has been counted, carried out a simple inversion of the two poles of the modern relation "language" – "experience": whereas in modernity language was understood as conditioned by experience and as a mere means of expression, more recent philosophy merely inverted this relation by seeing in language the foundation of experience, so that experience appears as an expression of language. The present work engages with this charge of relativism and aims to develop a mutual dependence between language and experience on the basis of Hans-Georg Gadamer's work. To this end, a twofold negative-positive structure of experience and then several phenomenological and transcendental features of experience were first worked out against the historical background of Gadamer's concept of experience. In this way, the constitutive linguisticality of experience became recognizable. In an engagement with the concept of language, on the other hand, its dialogical and world-disclosing character was illustrated, so that its dependence on the experience of the world also became evident.
Current climate warming is affecting arctic regions at a faster rate than the rest of the world. This has profound effects on permafrost that underlies most of the arctic land area. Permafrost thawing can lead to the liberation of considerable amounts of greenhouse gases as well as to significant changes in the geomorphology, hydrology, and ecology of the corresponding landscapes, which may in turn act as a positive feedback to the climate system. Vast areas of the east Siberian lowlands, which are underlain by permafrost of the Yedoma-type Ice Complex, are particularly sensitive to climate warming because of the high ice content of these permafrost deposits. Thermokarst and thermal erosion are two major types of permafrost degradation in periglacial landscapes. The associated landforms are prominent indicators of climate-induced environmental variations on the regional scale. Thermokarst lakes and basins (alasses) as well as thermo-erosional valleys are widely distributed in the coastal lowlands adjacent to the Laptev Sea. This thesis investigates the spatial distribution and morphometric properties of these degradational features to reconstruct their evolutionary conditions during the Holocene and to deduce information on the potential impact of future permafrost degradation under the projected climate warming. The methodological approach is a combination of remote sensing, geoinformation, and field investigations, which integrates analyses on local to regional spatial scales. Thermokarst and thermal erosion have affected the study region to a great extent. In the Ice Complex area of the Lena River Delta, thermokarst basins cover a much larger area than do present thermokarst lakes on Yedoma uplands (20.0 and 2.2 %, respectively), which indicates that the conditions for large-area thermokarst development were more suitable in the past. This is supported by the reconstruction of the development of an individual alas in the Lena River Delta, which reveals a prolonged phase of high thermokarst activity since the Pleistocene/Holocene transition that created a large and deep basin. After the drainage of the primary thermokarst lake during the mid-Holocene, permafrost aggradation and degradation have occurred in parallel and in shorter alternating stages within the alas, resulting in a complex thermokarst landscape. Though more dynamic than during the first phase, late Holocene thermokarst activity in the alas was not capable of degrading large portions of Pleistocene Ice Complex deposits and substantially altering the Yedoma relief. Further thermokarst development in existing alasses is restricted to thin layers of Holocene ice-rich alas sediments, because the Ice Complex deposits underneath the large primary thermokarst lakes have thawed completely and the underlying deposits are ice-poor fluvial sands. Thermokarst processes on undisturbed Yedoma uplands have the highest impact on the alteration of Ice Complex deposits, but will be limited to smaller areal extents in the future because of the reduced availability of large undisturbed upland surfaces with poor drainage. On Kurungnakh Island in the central Lena River Delta, the area of Yedoma uplands available for future thermokarst development amounts to only 33.7 %. The increasing proximity of newly developing thermokarst lakes on Yedoma uplands to existing degradational features and other topographic lows decreases the possibility for thermokarst lakes to reach large sizes before drainage occurs. 
Drainage of thermokarst lakes due to thermal erosion is common in the study region, but thermo-erosional valleys also provide water to thermokarst lakes and alasses. Besides these direct hydrological interactions between thermokarst and thermal erosion on the local scale, an interdependence between both processes exists on the regional scale. A regional analysis of extensive networks of thermo-erosional valleys in three lowland regions of the Laptev Sea, with a total study area of 5,800 km², found that these features are more common in areas with higher slopes and relief gradients, whereas thermokarst development is more pronounced in flat lowlands with lower relief gradients. The combined results of this thesis highlight the need for comprehensive analyses of both thermokarst and thermal erosion in order to assess past and future impacts and feedbacks of the degradation of ice-rich permafrost on the hydrology and climate of a given region.